๐Ÿ“š Beginner

1. Basics of Prompting

Understand how LLMs think. Learn the core components: Instruction, Context, Input Data, and Output Indicator.

How LLMs Think

Large Language Models (LLMs) are prediction engines. They don't "know" things; they predict the next most likely word based on the context you provide. To get the best results, you need to structure your request clearly.

The Core Components

[Instruction] + [Context] + [Input Data] + [Output Indicator]
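
In code, you can treat these four components as separate strings and join them into one prompt. Below is a minimal sketch in Python; the variable names are purely illustrative and not part of any library.

```python
# Minimal sketch: assemble a prompt from the four components.
# The variable names here are illustrative only.

instruction = "Summarize the customer review below."
context = "Context: you are writing for a busy product manager."
input_data = "Review: The battery lasts two days, but the screen scratches easily."
output_indicator = "Return exactly two bullet points."

# Join the pieces with blank lines so each component stays distinct.
prompt = "\n\n".join([instruction, context, input_data, output_indicator])
print(prompt)
```

Keeping each component in its own variable makes it easy to swap one piece, say the output format, without rewriting the whole prompt.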

1. Instruction

Tell the model exactly what to do. Use active verbs.

  • โŒ "Can you maybe write a poem?"
  • โœ… "Write a haiku about coding."
  • โœ… "Summarize," "Translate," "Classify," "Generate."

2. Context

Give the model background information to narrow down the possibilities.

"I am a beginner in Python. Explain loops to me using a cooking analogy."

3. Input Data

The text you want the model to process, for example the article to summarize or the sentence to translate.

4. Output Indicator

Tell the model how you want the answer formatted.

  • "Format the answer as a bulleted list."
  • "Return the result in JSON format."
  • "Limit the response to 50 words."
๐Ÿ”‘ Key Takeaway: The clearer your prompt, the better your results. Always include what you want, the context, and the format.
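
To see the full loop in practice, here is a hedged sketch that sends a four-part prompt to a model. It assumes the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the model name is a placeholder, so substitute whichever model and provider you actually use.

```python
# Sketch only: assumes the OpenAI Python SDK and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Classify the sentiment of the review below as Positive, Negative, or Mixed.\n"  # Instruction
    "Context: the review comes from an online electronics store.\n"                  # Context
    "Review: The battery lasts two days, but the screen scratches easily.\n"         # Input data
    "Answer with a single word."                                                     # Output indicator
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

If the answer comes back in the wrong shape, tighten the output indicator first; it is usually the cheapest fix.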
💡 Pro Tips

  • Be specific with your instructions
  • Use examples when possible
  • Iterate and refine your prompts
