If you have ever sat in front of a blank chat window, cursor blinking, wondering whether there is some secret incantation everyone else knows and you do not — welcome. The first year I used ChatGPT I was quietly convinced I was missing a "magic phrase" everyone else had been handed at some sort of secret meeting. It turned out nobody had. There is no magic phrase. There is just a word — "prompt" — that sounds far more mystical than it actually is.
The word "prompt" has taken on a kind of reverence lately. The job title "prompt engineer" made headlines a couple of years back, prompt libraries get sold at real prices, and there is no shortage of confident declarations about "the right way" to speak to AI. If that has left you feeling like the definitional baseline slipped past you while everyone else was busy selling courses, that is completely normal. Most guides skip the plain-English bit entirely and launch straight into formulas, frameworks, and acronyms.
This article is the plain-English bit. No formula, no acronym, no "ten secrets". Just: what is a prompt, what is it not, and why does clearing that up change the quality of everything you produce with AI afterwards. This is the second piece in our Terminology Tamer series — a sibling to the plain-English explainer on LLMs. If you find yourself nodding along to "LLM" and "prompt" without quite being able to define either, you are in the right place.
The One-Sentence Answer
A prompt is the instruction you give an AI tool to produce an output.
That is it. That is the whole definition. You type (or speak, or paste) something into ChatGPT, Claude, or Gemini, and whatever you have typed is the prompt. The response that comes back is the output. The bit in between — the prompt — is the only part you actually control.
If you want a slightly longer version: a prompt is a brief. It is the set of instructions, context, and constraints you hand to an AI tool so it knows what you want it to produce. A good prompt reads a bit like a message to a capable new colleague who has no idea what you do, who the audience is, or what "the usual format" looks like. A bad prompt reads like "write me a thing."
Everything else you have heard about prompts — system messages, few-shot examples, chain-of-thought, temperature, what tech people call "prompt engineering" — is a refinement on top of that one sentence. The refinements matter eventually. The definition matters first.
What A Prompt Is Not (And The Myths That Get In The Way)
This is the part most guides skip, and it is the part that quietly wrecks people's expectations. It is easy to feel like you are the only one struggling here. You're not — the myths around prompts are genuinely pervasive, and most of them contradict each other, which is part of why this feels so confusing.
Let us take them one at a time.
A prompt is not a spell
There is no magic phrase that unlocks dramatically better results. "Act as a world-class expert" does not transform the tool into a world-class expert. "Think step by step" sometimes helps on reasoning tasks and sometimes does nothing at all. Screenshots of viral "mega-prompts" that claim to "10x your output" almost always rely on you already knowing what a good output looks like for your situation — which the prompt cannot supply.
If a phrase helps, it helps because it adds clarity or structure — not because the words themselves are special.
A prompt is not a one-time setting
This one catches people out constantly. A prompt is not something you configure once at the top of the chat and then forget about. Each message you send is a prompt. Every follow-up ("make it shorter", "more formal", "add a bullet on pricing") is another prompt. The conversation is a sequence of prompts and responses, and the output at any point is shaped by everything you have said so far in that chat.
This is also why starting a brand new chat for a new task is usually a good idea — the old prompts in a long conversation can quietly steer the tool in directions you have forgotten about.
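For readers who like seeing the mechanics, here is a minimal sketch of what "the conversation is a sequence of prompts" means in practice. The `ask_model` function below is a hypothetical stand-in for whatever AI service a chat tool calls behind the scenes — the point is only that every message you send is appended to a running list, and the model is shown the whole list each time:

```python
def ask_model(history):
    # Placeholder: a real tool would send `history` to an AI model here
    # and return its reply. We just report how much context it received.
    return f"(reply based on {len(history)} messages so far)"

history = []

for user_text in ["Draft a short intro email.",
                  "Make it shorter.",
                  "More formal, please."]:
    history.append({"role": "user", "content": user_text})   # each one is a prompt
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})  # replies shape later outputs

# By the third follow-up the model is working from six messages,
# not just the latest one -- which is why old instructions linger.
print(len(history))  # 6
```

This is also why "make it shorter" works at all: the tool can only interpret that follow-up because the earlier prompts and replies are still in the list it sees.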
A prompt is not a hidden technical setting
You do not need to "access the prompt" somewhere in settings. You are already writing prompts every time you type something into the chat box. That is the prompt. There is nothing more clever going on behind a curtain — though some tools do add their own instructions on top of yours (what the industry calls a "system message"). That is a detail for another day. For now: the thing you type in the box is the thing.
A prompt is not the same as the output
This sounds obvious, but it is worth saying clearly. The prompt is the input — the instruction. The output is what comes back. When someone says "that prompt gave me an amazing result", they mean the instruction produced a good response. The prompt itself is not the useful artefact; it is the cause of the useful artefact. The distinction matters because it reframes the job: your job is to write a good instruction, not to extract a good response through sheer force of will.
🧠 Spot the Difference: Which of these two prompts will probably produce a result you can actually use?
- A) "Write something about remote work for LinkedIn. Make it engaging."
- B) "I'm an HR director at a mid-sized UK tech company. Draft a 120-word LinkedIn post for my feed about the three most common mistakes managers make when running remote one-to-ones. Tone: direct, slightly wry, no emojis. End with an open question to invite replies."
Answer: B — and not because it is longer. It is because it is briefed. Prompt A could describe a thousand different articles by a thousand different people; Prompt B could really only describe one. That specificity is what a prompt is for. Prompt A is a wish. Prompt B is an instruction. A longer prompt is not automatically better — a clearer one almost always is.
Why The Same Idea Written Three Different Ways Gives Three Different Outputs
Here is where the definition earns its keep. If a prompt is an instruction, then the wording of the instruction shapes the output — in ways that can feel disproportionate until you see them side by side.
Imagine you want help preparing for a difficult conversation with an underperforming team member. Three prompts, same underlying need:
- "Help me prepare for a difficult conversation."
- "Give me talking points for a performance conversation with an underperforming employee."
- "I'm a team lead in a design agency. Tomorrow I need to have a calm, fair conversation with a mid-level designer whose work has slipped over the last two months — missed deadlines, quality dropping. Draft three opening lines I could use, each one direct but not accusatory, and suggest one open question I should ask before offering any feedback."
The first gives you generic advice that could apply to almost any awkward conversation in human history. The second narrows it down but still returns textbook performance-review boilerplate. The third produces something you might actually use on Tuesday morning.
💬 The shorthand we keep coming back to: the AI is not reading your mind; it is reading your prompt. If the prompt is thin, the output is thin. If the prompt is briefed, the output is briefed.
None of this is about being clever. None of it is about memorising a template. It is about recognising that the tool has no idea who you are, what your situation is, or what "good" looks like for you — unless you tell it. Once the definition lands — a prompt is a brief — the quality gap mostly sorts itself out.
The Practical Definition, In Work Terms
If you have ever briefed a freelancer, written a creative brief for a designer, or sent instructions to a new hire, you already know how to write a prompt. You just have not called it that.
A useful working definition for the rest of your career:
A prompt is a brief you write for a very literal, very fast, occasionally overconfident collaborator who has no memory of your work and no idea what you mean by "the usual."
Every word in that definition is doing work.
- Very literal — it takes your words more or less at face value, so ambiguity in the brief produces ambiguity in the output.
- Very fast — you can iterate cheaply, which means a second prompt to refine the first is almost always worth it.
- Occasionally overconfident — it will produce a plausible-sounding answer even when it should say "I don't know", so the brief should specify what to do when unsure.
- Shaky memory of your work — most tools now carry some context across chats via a product-level memory feature, but that memory is partial, lossy, and easy to overestimate. Safer to treat each new chat as day one and paste in the context that matters.
- No idea what "the usual" means — the quiet assumptions you carry around about tone, format, and audience need to be written down.
This is why the same people who write excellent creative briefs for humans tend to write excellent prompts within a week of trying. The skill transfers almost entirely. The vocabulary just looks intimidating from the outside.
Where To Go Next
Now that the definition is clear, the natural next step is learning how to actually write one of these things well. We have a companion piece for exactly that: how to write AI prompts that actually work. It walks through a four-part structure — context, task, format, constraints — that turns the definition into a repeatable habit. Read this one for the what. Read that one for the how.
A few other places to point yourself:
- If you want to see what good prompts look like in practice rather than theory, the AI Tutorium Prompt Library has ready-made examples across common workplace tasks. Treat them as worked examples, not templates to copy verbatim — the point is to see the pattern.
- If other bits of AI terminology are also fuzzy — "token", "context window", "hallucination", "system message" — the essential AI terms glossary keeps the definitions short and plain.
- If you want the full beginner arc rather than picking off articles one at a time, the AI for Beginners learning path sequences everything from "what is a prompt" through to applying AI in your actual work.
- And if you are curious about where this whole topic sits in our wider framework, prompts are part of the Educate pillar of the ICE Method — using AI to build understanding first, then directing it with intent.
The short version of all of it: a prompt is not a trick. It is a brief. Once that clicks, the rest of AI stops feeling like a secret club and starts feeling like a skill you can practise.
Try one this week — Pick one small task you would normally do on autopilot (a reply, a summary, a bullet list for a meeting). Write a two-sentence brief for it — who you are, what you want, any constraints — and paste it straight into your favourite AI tool, or adapt one of the Prompt Library starter templates as a starting point. Notice how the output changes when the instruction does. You've got more than enough to get started — the rest is just practice.