You have learned the basics of AI prompting. You give it context, specify a task, describe the format, add constraints — and you get decent results. Maybe even good ones. But there is a nagging feeling that you are leaving performance on the table. You see people sharing AI outputs that feel sharper, more nuanced, more exactly right than what you are getting. And you suspect there is another gear.
There is. And the good news is that it does not require a computer science degree or months of practice. The five techniques in this article each take under five minutes to learn, work across every major AI tool, and produce noticeably better results the first time you try them. If you are comfortable with the four-part prompt formula, you are ready for these.
We should be honest: we resisted learning these techniques for longer than we should have. The basics were working well enough, and "advanced prompting" sounded like something for AI researchers, not working professionals. We were wrong. The difference between a solid prompt and a great one often comes down to one or two of these techniques — and the payoff is immediate.
Technique 1 — Chain of Thought
Most people ask AI for an answer. Chain of thought asks AI to think through the problem first, then give the answer. The difference is dramatic — especially for anything involving logic, analysis, or multi-step reasoning.
Without chain of thought:
"What pricing strategy should we use for our new SaaS product?"
The AI will give you a generic answer — probably mentioning value-based pricing and competitive analysis. Correct but shallow.
With chain of thought:
"I need to decide on a pricing strategy for a new SaaS product. Before giving your recommendation, think through this step by step: first, consider the main pricing models available (freemium, usage-based, per-seat, tiered). Then evaluate the pros and cons of each for a B2B tool targeting mid-market companies with 50-200 employees. Then consider what signals we should look for in early customer conversations. Finally, give your recommended approach with reasoning."
The AI now walks through each pricing model systematically, weighs trade-offs specific to your market segment, and arrives at a recommendation that shows its working. You can spot where its reasoning is strong and where it needs correction — which is far more useful than a confident-sounding conclusion you cannot evaluate.
The key phrase is simple: "Think through this step by step before answering." You can also say "reason through each step," "show your working," or "walk me through your analysis before concluding." All of these trigger the same behaviour.
Chain of thought works best for:
- Strategic decisions with multiple variables
- Data analysis and interpretation
- Problem diagnosis (debugging code, troubleshooting processes)
- Any question where the reasoning matters as much as the conclusion
Research Callout: Wei et al.'s chain-of-thought prompting research (Google Brain, 2022) showed that adding step-by-step reasoning examples improved accuracy on maths word-problem benchmarks from under 18% to roughly 57% with the same model. A separate study by Kojima et al. found that simply adding "Let's think step by step" roughly quadrupled accuracy on similar tasks. The technique has since been validated across dozens of reasoning benchmarks.
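If you build prompts programmatically, the pattern above is easy to capture in a small helper. This is a minimal sketch, not a standard API: the function name and step structure are illustrative, and it only assembles the prompt text, which you would then send to whichever AI tool you use.

```python
def chain_of_thought_prompt(task: str, steps: list[str]) -> str:
    """Wrap a task in a chain-of-thought scaffold: named reasoning
    steps first, recommendation last."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"{task}\n\n"
        "Before giving your recommendation, think through this step by step:\n"
        f"{numbered}\n"
        "Finally, give your recommended approach with reasoning."
    )

# Rebuilding the SaaS pricing prompt from above:
prompt = chain_of_thought_prompt(
    "I need to decide on a pricing strategy for a new SaaS product.",
    [
        "Consider the main pricing models available (freemium, usage-based, per-seat, tiered).",
        "Evaluate the pros and cons of each for a B2B tool targeting mid-market companies.",
        "Consider what signals to look for in early customer conversations.",
    ],
)
```

The helper keeps the trigger phrase consistent while letting you swap in different tasks and reasoning steps.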
Technique 2 — Few-Shot Examples
When you want the AI to produce output in a specific style, tone, or format, nothing beats showing it examples. Instead of describing what you want (which is surprisingly hard to do precisely), you demonstrate it.
Without examples:
"Write short product descriptions for our e-commerce store. Keep them punchy and benefit-focused."
You will get something reasonable, but the AI is guessing at your definition of "punchy" and "benefit-focused."
With two examples:
"Write product descriptions for our e-commerce store. Match the style of these examples:
Merino Wool Beanie — Keeps your head warm without the itch. Temperature-regulating merino stays comfortable from the morning commute to the evening dog walk. One size. Five colours.
Canvas Weekender Bag — Big enough for a three-day trip. Small enough for overhead bins. Waxed cotton exterior shrugs off rain. Interior pockets keep cables separate from clothes.
Now write descriptions for: 1) Insulated water bottle (500ml, keeps drinks hot 12hrs/cold 24hrs), 2) Laptop sleeve (fits 13-15 inch, recycled materials)"
The AI immediately matches the rhythm, length, sentence structure, and tone. Two examples communicate more about your desired output than five paragraphs of instructions ever could.
The principle: two to three examples are usually enough. One example might be a fluke; two establish a pattern; three confirm it. More than three rarely improves results and simply consumes context window.
Few-shot examples work best for:
- Consistent formatting across multiple outputs (product descriptions, email templates, social posts)
- Matching an existing brand voice or writing style
- Complex data transformations (show the AI three input-output pairs, then give it the next input)
- Any task where "I'll know it when I see it" describes your quality bar
Quick Tip: Keep a small library of your best AI outputs. When you get a result that nails the tone or format, save it. Next time you need similar output, paste it as a few-shot example. Over time, you build a personal prompt toolkit that produces consistently excellent results.
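The "library of your best outputs" idea can be sketched in code as well. This is an assumption-laden illustration, not a prescribed tool: `few_shot_prompt` is a made-up helper that splices saved examples into an instruction, capping them at three per the principle above.

```python
def few_shot_prompt(instruction: str, examples: list[str], new_items: list[str]) -> str:
    """Combine an instruction, up to three saved examples, and the new
    inputs into a single few-shot prompt."""
    shots = examples[:3]  # more than three rarely helps and wastes context
    example_block = "\n\n".join(shots)
    items = "\n".join(f"{i}) {item}" for i, item in enumerate(new_items, start=1))
    return (
        f"{instruction} Match the style of these examples:\n\n"
        f"{example_block}\n\n"
        f"Now write descriptions for:\n{items}"
    )

# Reusing the product-description examples from above:
prompt = few_shot_prompt(
    "Write product descriptions for our e-commerce store.",
    [
        "Merino Wool Beanie — Keeps your head warm without the itch.",
        "Canvas Weekender Bag — Big enough for a three-day trip.",
    ],
    [
        "Insulated water bottle (500ml, keeps drinks hot 12hrs/cold 24hrs)",
        "Laptop sleeve (fits 13-15 inch, recycled materials)",
    ],
)
```

Each time you save a strong output to your library, it becomes a candidate for the `examples` list.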
Technique 3 — Role Stacking
You probably know about giving AI a role: "You are a marketing expert" or "Act as a financial analyst." Role stacking takes this further by assigning multiple, complementary perspectives in a single prompt.
Why does this work? Because real-world problems sit at the intersection of multiple disciplines. A marketing strategy is not just a marketing problem — it involves psychology, data analysis, brand positioning, and budget management. When you stack roles, the AI draws on a broader range of its training data and produces output that is more nuanced and more practical.
Single role:
"You are a content strategist. Write a blog post outline about remote work productivity."
Stacked roles:
"You are a senior content strategist who also understands SEO, behavioural psychology, and B2B sales funnels. Write a blog post outline about remote work productivity that will rank well in search, keep readers engaged through the full article, and naturally lead to our project management tool as a solution — without being salesy."
The stacked version produces an outline that balances discoverability (SEO), reader engagement (psychology), and business outcomes (sales funnel) — three perspectives that would normally require three different people in a room.
Some powerful role combinations:
- Copywriter + UX designer + conversion specialist — for landing page copy
- Data analyst + business strategist + storyteller — for executive presentations
- HR professional + employment lawyer + empathetic manager — for sensitive employee communications
- Technical writer + product marketer + customer support lead — for product documentation
The key is choosing roles that create productive tension. A copywriter wants compelling language; a UX designer wants clarity; a conversion specialist wants action. When the AI has to satisfy all three, the result is better than any single perspective would produce.
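If you generate personas programmatically, the stacked-role phrasing reduces to simple string assembly. A minimal sketch, with an illustrative function name; it reproduces the "who also understands" phrasing used in the example above.

```python
def stack_roles(primary: str, supporting: list[str]) -> str:
    """Phrase a primary role plus complementary perspectives as one persona line."""
    if not supporting:
        return f"You are a {primary}."
    extras = ", ".join(supporting[:-1])
    last = supporting[-1]
    joined = f"{extras}, and {last}" if extras else last
    return f"You are a {primary} who also understands {joined}."

persona = stack_roles(
    "senior content strategist",
    ["SEO", "behavioural psychology", "B2B sales funnels"],
)
# persona == "You are a senior content strategist who also understands
#             SEO, behavioural psychology, and B2B sales funnels."
```

Swapping the `supporting` list lets you test which combination of perspectives produces the most useful tension for a given task.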
Quick Challenge: Pick a task you do regularly and design a three-role stack for it. What perspectives would improve the output if they were all represented?
Answer: There is no single right answer — the value is in identifying which complementary viewpoints your work benefits from. Most professionals find that adding a "user/customer perspective" role alongside their domain expertise improves nearly everything.

Technique 4 — Iterative Refinement Loops
Here is an uncomfortable truth: the best AI outputs almost never come from a single prompt. They come from a conversation — a back-and-forth where each prompt builds on the previous output.
Most people treat AI as a vending machine. Insert prompt, receive output, done. The professionals getting exceptional results treat it as a collaborator. The first prompt generates a rough draft. The second prompt refines it. The third prompt polishes it. Each iteration is faster and easier than the last.
Here is a three-step loop for writing an executive summary:
Step 1 — Generate the raw material:
"I need an executive summary for a board presentation about our AI adoption initiative. Key facts: 6-month pilot, 42 employees participating, 23% average time saving on routine tasks, $180K projected annual savings, three departments involved (marketing, finance, customer support). Write a first draft — about 200 words."
Step 2 — Refine with specific feedback:
"Good foundation. Now revise with these changes: lead with the financial impact rather than the timeline, make the tone more confident (this initiative exceeded expectations), cut any hedging language, and add one sentence about what we recommend for the next phase."
Step 3 — Polish for the audience:
"Almost there. Final pass: our board includes two members who are sceptical about AI investment. Add one sentence that addresses the ROI concern directly. Also, replace 'routine tasks' with something more specific — mention the actual tasks: report generation, data reconciliation, and email triage."
Three prompts. Each takes under 30 seconds to write. The final output is dramatically better than what any single prompt would have produced — because each iteration adds specificity, addresses edge cases, and sharpens the focus.
The iterative approach works because it mirrors how good writing actually happens. Nobody writes a perfect first draft. The AI is no different.
Common refinement directions:
- "Make it shorter / more concise"
- "Adjust the tone — more formal / more conversational / more confident"
- "Add specific examples for point 3"
- "The second paragraph is too generic — make it specific to [industry]"
- "Rewrite the opening to lead with [specific angle]"
Quick Tip: If you are not sure how to refine, ask the AI: "What's the weakest part of what you just wrote, and how would you improve it?" Most models are surprisingly good at self-critique — and their suggestions often point you to the refinement that matters most.
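The loop structure behind iterative refinement can also be automated. In this sketch, `send_to_model` is a stand-in for whichever chat API you actually use (it is not a real library call); the point is the shape of the loop: each refinement instruction is sent along with the full running conversation, so every pass builds on the previous output.

```python
from typing import Callable

def refine(first_draft_prompt: str,
           refinements: list[str],
           send_to_model: Callable[[list[dict]], str]) -> str:
    """Run an initial prompt, then apply each refinement in turn,
    keeping the whole conversation as context."""
    history = [{"role": "user", "content": first_draft_prompt}]
    reply = send_to_model(history)
    for instruction in refinements:
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": instruction})
        reply = send_to_model(history)
    return reply
```

For the executive-summary example above, `refinements` would hold the Step 2 and Step 3 prompts; the manual conversation and the automated loop are the same technique.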
Technique 5 — Constraint Layering
This technique is the most counterintuitive: the more constraints you add, the better the output gets. Beginners think constraints limit creativity. In practice, they focus it.
Constraint layering means progressively adding specific requirements — word count, audience, tone, format, exclusions — to narrow the AI's output space until it produces exactly what you need.
Watch how each layer sharpens the result:
Layer 1 — Basic request:
"Write a LinkedIn post about our company's new remote work policy."
Layer 2 — Add audience:
"Write a LinkedIn post about our company's new remote work policy, targeting mid-career professionals who are considering whether to apply to our company."
Layer 3 — Add tone and format:
"Write a LinkedIn post about our company's new remote work policy, targeting mid-career professionals considering applying. Tone: warm but professional. Format: open with a personal observation, then three short paragraphs, close with a question to drive comments."
Layer 4 — Add constraints and exclusions:
"Write a LinkedIn post about our company's new remote work policy, targeting mid-career professionals considering applying. Tone: warm but professional. Format: open with a personal observation, then three short paragraphs, close with a question. Constraints: under 150 words, do not use the word 'excited' or 'thrilled,' do not include hashtags, mention that we've been remote-first for 3 years."
Each layer eliminates a category of generic output. By layer 4, the AI has almost no room to produce something bland — every sentence has to earn its place within the constraints you have set.
The exclusions are particularly powerful. Telling the AI what not to do prevents the default patterns that make AI output feel, well, AI-generated. "Do not use corporate jargon." "Do not start with a question." "Do not include a list of benefits." These negative constraints often do more work than positive instructions.
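Because each layer is just appended text, constraint layering is trivial to express in code. A minimal sketch with an illustrative helper name; keeping the layers as a list makes it easy to add, remove, or reorder constraints between attempts.

```python
def layer_constraints(base: str, layers: list[str]) -> str:
    """Join a base request with progressively narrower requirement layers."""
    return " ".join([base] + layers)

# Rebuilding the Layer 4 prompt from above, one layer at a time:
prompt = layer_constraints(
    "Write a LinkedIn post about our company's new remote work policy,",
    [
        "targeting mid-career professionals considering applying.",
        "Tone: warm but professional.",
        "Format: open with a personal observation, then three short paragraphs, close with a question.",
        "Constraints: under 150 words, no hashtags, avoid the words 'excited' and 'thrilled'.",
    ],
)
```

Dropping a layer from the list recreates the earlier, more generic versions, which makes it easy to see what each constraint is buying you.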
Putting It All Together
Let us combine three techniques in a single real-world scenario. Suppose you need to write a proposal summary for a consulting engagement.
"You are a management consultant who also understands financial modelling and organisational psychology [role stacking]. I need a one-page proposal summary for a client considering a digital transformation initiative. Before writing, think through the key concerns a CFO would have about this investment, then the operational risks a COO would flag, then the change management challenges an HR director would raise [chain of thought]. Write the summary in under 300 words. Use the structure: one opening paragraph that frames the opportunity, three bullet points addressing each stakeholder's concern, one closing paragraph with a clear next step. Do not use the phrases 'digital transformation journey' or 'leveraging technology.' Tone: confident, direct, no jargon [constraint layering]."
That is one prompt. It takes perhaps 90 seconds to write. The output will be specific, multi-perspective, well-structured, and free of the corporate fluff that makes most proposals forgettable.
You do not need to use all five techniques every time. Most prompts benefit from one or two. The skill is recognising which technique fits the task:
- Complex reasoning? Chain of thought.
- Need consistent style? Few-shot examples.
- Multi-disciplinary problem? Role stacking.
- First draft is close but not right? Iterative refinement.
- Output feels generic? Constraint layering.
Where to Go from Here
These five techniques are the bridge between using AI as a convenience and using it as a genuine competitive advantage. They work with ChatGPT, Claude, Gemini, and every other major AI tool — the principles are universal.
If you want to practise with real prompts that use these techniques, our Prompt Library has dozens of ready-to-use templates organised by task type. Many of them incorporate chain of thought and constraint layering by default — you can see the techniques in action and adapt them to your own needs.
For a deeper exploration of prompt craft, the Prompt Engineering learning path takes you from intermediate to advanced, with worked examples across writing, analysis, coding, and creative tasks. And if you want to experiment interactively, the Prompt Engineering Demo lets you toggle techniques on and off to see how each one changes the output in real time.
The most important thing is to try one technique today. Not all five — just one. Pick the technique that maps to a task you do this week. Use it once. See the difference. Then try the next one.
That is how every skill builds — not by reading about it, but by doing it once and noticing that it works.