Prompt Engineering Complete Guide

A complete guide to prompting. Why it's structured thinking, the three components of a good prompt, common mistakes, and advanced techniques like chain of thought and few-shot learning.

Ebenezer Don

Prompt engineering has a misleading name. It sounds like a bag of tricks: sprinkle the right keywords, use “think step by step,” and watch the magic happen. In reality, prompting is thinking. The better you structure your thinking, the better your results. This guide covers the fundamentals, common pitfalls, and advanced techniques that actually work.

Why Prompting Is Thinking, Not Trickery

An LLM doesn’t have access to your intentions. It only sees the tokens you send. Your prompt is the entire context it has to work with. When you write a prompt, you’re doing the same thing you do when you brief a colleague: you’re clarifying the problem, the constraints, and the desired outcome. The difference is that your colleague can ask follow-up questions. The model cannot. It has to get it right from what you wrote.

That’s why “prompt engineering” is really structured thinking made explicit. You’re externalizing your reasoning so the model can follow it. The people who get the best results aren’t the ones who know secret incantations; they’re the ones who think clearly and communicate that thinking well.

The Three Components of a Good Prompt

Every effective prompt has three building blocks. You don’t always need all three explicitly, but missing one often leads to vague or off-target output.

1. Context: What the Model Needs to Know

Context answers: What am I working with? Give the model the background it needs. That might be:

  • The domain (e.g., “You’re a senior Python developer reviewing code”)
  • Relevant facts (e.g., “This API returns JSON with fields X, Y, Z”)
  • The current state (e.g., “Here’s the function I’ve written so far”)

Without context, the model guesses. It might assume you’re a beginner when you’re not, or that you’re in a different domain entirely. A little context goes a long way.

Example:

I'm building a REST API for a todo app. Each task has: id, title, completed, createdAt. I'm using Express.js and PostgreSQL.

2. Instruction: What You Want Done

The instruction answers: What should the model do? Be specific. “Improve this” is vague. “Add input validation and return 400 with a clear error message for invalid payloads” is specific.

Good instructions often include:

  • The action (summarize, rewrite, debug, generate)
  • The scope (this paragraph, this function, this list)
  • The format (bullet points, JSON, a table)

Example:

Review the validateEmail function below. List any edge cases it misses, then suggest an improved version that handles them. Return your answer as: 1) A bullet list of edge cases, 2) The improved code.
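To make this concrete, here is a hypothetical validateEmail of the kind such a prompt might target; the function and its regex are illustrative assumptions, not code from a real project, but they show the sort of edge cases a good review prompt should surface:

```javascript
// A naive validator of the kind the prompt above asks the model to review.
// (Illustrative only; the regex is an assumption, not a recommendation.)
function validateEmail(email) {
  return /^\S+@\S+$/.test(email);
}

console.log(validateEmail("user@example.com")); // true  — the happy path
console.log(validateEmail("no-at-sign"));       // false — correctly rejected
console.log(validateEmail("a@b@c"));            // true  — multiple @ slips through
console.log(validateEmail("user@ example"));    // false — space after @ rejected
```

A model given the prompt above should catch cases like the third one, where the loose pattern accepts clearly invalid input.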

3. Constraints: What to Include or Avoid

Constraints answer: What are the guardrails? They prevent the model from drifting into unwanted territory.

Common constraints:

  • Length: “Keep it under 200 words” or “One paragraph only”
  • Tone: “Professional but friendly” or “No jargon”
  • Format: “JSON only, no markdown” or “Use bullet points”
  • Exclusions: “Don’t use external libraries” or “Avoid suggesting deprecated APIs”

Example:

Write a short welcome email for new users. Tone: warm but not cheesy. Length: 3-4 sentences. Do not mention pricing or upsells.

Putting It Together

Here’s a complete prompt that uses all three components:

**Context:** I'm a frontend developer learning React. I have a component that fetches user data and displays it. The API sometimes returns null for the user field.

**Instruction:** Explain why my component might crash when user is null, and show me how to fix it with optional chaining or a guard clause. Include a one-line explanation of which approach you prefer and why.

**Constraints:** Use functional components and hooks only. No class components. Keep the explanation under 100 words.

This prompt gives the model everything it needs: domain (React, frontend), situation (null user), desired output (explanation + fix + preference), and limits (functional only, brief).
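The fix the prompt asks for can be sketched in plain JavaScript; React isn’t needed to see the mechanism, and the names here (user, name, displayName) are placeholders:

```javascript
const user = null; // what the API sometimes returns

// The crash: reading a property of null throws a TypeError.
// console.log(user.name); // TypeError: Cannot read properties of null

// Fix 1: optional chaining short-circuits to undefined,
// and nullish coalescing supplies a fallback.
console.log(user?.name ?? "Guest"); // "Guest"

// Fix 2: a guard clause returns early before any property access.
function displayName(u) {
  if (!u) return "Guest";
  return u.name;
}
console.log(displayName(user));            // "Guest"
console.log(displayName({ name: "Ada" })); // "Ada"
```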

Common Mistakes

Mistake 1: Assuming the Model Reads Your Mind

You know what you want. The model doesn’t. Vague prompts like “make it better” or “fix this” leave too much to interpretation. Always spell out what “better” or “fixed” means in your context.

Mistake 2: Burying the Important Part

Models pay more attention to the beginning and end of prompts (position bias). If your critical instruction is buried in the middle of a long paragraph, it may get less weight. Put the most important instruction early or repeat it at the end.

Mistake 3: Overloading in One Shot

Asking for five different things in one prompt often yields mediocre results on all of them. Break complex tasks into steps. Get one thing right, then build on it.

Mistake 4: Ignoring the Model’s Training

Models have strengths and weaknesses based on their training. They’re generally better at common patterns, clear structure, and well-documented domains. They struggle with very niche topics, precise numbers, and tasks that require real-time or private data. Design your prompts around what the model can realistically do.

Advanced Techniques

Chain of Thought (CoT)

When a task requires reasoning, ask the model to “think step by step” or “show your reasoning before giving the answer.” This encourages the model to generate intermediate steps, which often improves accuracy on math, logic, and multi-step problems.

Example:

Solve this: A store sells apples for $2 each and oranges for $3 each. If I buy 5 apples and 3 oranges, how much do I pay? Show your reasoning step by step, then give the final answer.

CoT works because the intermediate steps become part of the context the model conditions on as it generates; reasoning written out is often more reliable than jumping straight to an answer.
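Written out, the intermediate steps the prompt asks for look like this:

```javascript
// The step-by-step reasoning the CoT prompt should elicit:
const applesCost = 5 * 2;  // 5 apples at $2 each = $10
const orangesCost = 3 * 3; // 3 oranges at $3 each = $9
const total = applesCost + orangesCost;
console.log(total); // 19 — so the final answer is $19
```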

Few-Shot Learning

Give the model 1–3 examples of input-output pairs before asking it to perform on a new case. This teaches the format, style, and level of detail you want without lengthy explanation.

Example:

Convert these sentences to title case:

Input: "the quick brown fox"
Output: "The Quick Brown Fox"

Input: "hello world from python"
Output: "Hello World From Python"

Input: "a tale of two cities"
Output: "A Tale of Two Cities"

The model infers the pattern from the examples. Few-shot is especially useful for consistent formatting, domain-specific phrasing, or when verbal instructions would be cumbersome.
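When calling a model through an API, few-shot examples are commonly packed into a chat-style message list, with the examples as prior user/assistant turns; the role/content shape below follows the common chat-completion convention, and exact field names vary by provider:

```javascript
// Few-shot examples as prior conversation turns.
// (Hypothetical request fragment; field names vary by provider.)
const messages = [
  { role: "user", content: 'Convert to title case: "the quick brown fox"' },
  { role: "assistant", content: '"The Quick Brown Fox"' },
  { role: "user", content: 'Convert to title case: "hello world from python"' },
  { role: "assistant", content: '"Hello World From Python"' },
  // The new case the model should complete using the inferred pattern:
  { role: "user", content: 'Convert to title case: "a tale of two cities"' },
];
console.log(messages.length); // 5
```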

System Prompts (When Available)

Some interfaces let you set a “system” or “instruction” prompt that persists across the conversation. Use it to establish the model’s role, tone, and constraints once, so you don’t have to repeat them in every user message.

Example system prompt:

You are a technical writer. You explain concepts clearly, use examples, and avoid jargon unless necessary. When you use jargon, define it. Keep responses concise but complete. If the user asks for code, include brief comments.

Then your user prompts can be short and task-specific; the system prompt handles the meta-instructions.
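In API terms, this usually means a system message set once at the top of the message list, with short task-specific user messages after it; the request shape below is a hypothetical sketch following the common chat-completion convention, and the model name is a placeholder:

```javascript
// A system prompt set once; later user messages stay short.
// (Hypothetical request body; field names vary by provider.)
const request = {
  model: "example-model", // placeholder model name
  messages: [
    {
      role: "system",
      content:
        "You are a technical writer. Explain clearly, use examples, " +
        "define jargon when you use it, keep responses concise but complete.",
    },
    // Task-specific and brief; the system prompt carries the meta-instructions.
    { role: "user", content: "Explain what a webhook is." },
  ],
};
console.log(request.messages[0].role); // "system"
```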

Why Understanding the Model Makes You Better at Prompting

LLMs predict the next token. They don’t have a plan; they generate one token at a time. When you understand that, you can design prompts that:

  • Reduce ambiguity so the model has fewer plausible wrong paths
  • Provide structure so the model knows what format to follow
  • Chunk complex work so the model isn’t asked to hold too much in one generation
  • Ground the output so the model has something concrete to build on (e.g., “Given this code…”)

The best prompters think like the model: they ask, “What would make the next-token prediction problem easier?” and then write prompts that do exactly that.

Start Simple, Iterate

You don’t need to use every technique in every prompt. Start with context, instruction, and constraints. If the output is off, refine one component at a time. Is the context missing something? Is the instruction unclear? Are the constraints too loose or too tight?

Prompting is iterative. The first version is rarely the best. Treat it like a draft: write, run, observe, refine.


For a deeper treatment of prompting as structured thinking, including frameworks for complex tasks, working with AI agents, and building reliable AI-powered workflows, check out Get Insanely Good at AI.
