The most underrated AI skill isn't writing better prompts. It's knowing when to close the tab.
If you've read one "AI use cases" listicle, you've read thirty. They all blur together: AI for emails, AI for meeting notes, AI for brainstorming, AI for every corner of your working day. What almost none of them cover is the other half of the competence — the tasks where reaching for AI produces a worse outcome than doing the work yourself. And I'd argue that's the half that separates confident AI users from people still quietly underwhelmed by the whole thing.
I've made most of the mistakes on the list below myself. A couple of them more than once. Nobody ever told me "here are the scenarios where ChatGPT or Claude will quietly make your work worse"; I had to find them the hard way. Hopefully this list saves you some of that trial and error.
This piece is the companion to our article on the seven common AI mistakes everyone makes. That one is about how to use these tools better. This one is about when not to use them at all.
The skill nobody teaches — "when-not" as a competence
Most AI writing treats "when to use AI" as the whole skill. Write clearer prompts, pick the right tool, iterate on the output, verify the facts — and you're sorted. Those habits matter. They're covered well in our guide to writing prompts that actually work.
But there's a quieter skill underneath all of that: the judgement to notice when the task in front of you isn't one AI should be touching. Not because the tool has failed, but because the task has a shape AI doesn't fit.
That judgement is the Educate pillar of our ICE Method in practice. Learning how these tools work means learning where they stop working. And in our experience, the professionals who get the most out of AI are not the ones with the longest prompt library — they are the ones who've mapped the edges.
Here are the seven edges we've mapped so far.
Scenario 1 — High-stakes judgement calls with consequences you'd own
The pattern is easy to spot: a decision where, six months later, someone might ask you why. Performance reviews. Hiring calls. Deciding to end a client relationship. Choosing between two job candidates, both qualified, both likeable.
You can absolutely use AI to draft the communication after the decision — a warm email to the unsuccessful candidate, a clean write-up of a review conversation that already happened. That's fine, and often good. What it shouldn't do is lead the decision itself.
Here's why. The weight of a high-stakes judgement comes from you having sat with the specific context — the person, the history, the quiet signals the AI never saw. An AI suggestion that arrives in three seconds without that context has the tone of a decision without being one. And the moment you ship it, it becomes yours to defend.
In our experience, the sequence that works is: decide first, with the full weight of human judgement, using AI for information rather than instruction. Then use AI to polish the comms. Do it the other way round and you've borrowed authority you didn't earn.
Scenario 2 — Work you don't yet understand well enough to evaluate
This is the one I wish someone had told me earlier. If you can't tell whether the output in front of you is good, mediocre, or subtly wrong — you can't use AI for that task yet. You can only use it to pretend you've done it.
The trap is that AI output always looks plausible. Grammatical. Confident. Structured. For a task you don't yet understand, "looks plausible" is exactly the signal that misleads you. A financial model you can't check. A legal clause you can't assess. A medical summary you can't interrogate. The tool hands you a finished-looking artefact and you have no way to tell if it's 90% right or 60% right.
Here's an approach we'd suggest instead. When you're newer to a task, use AI as a tutor before using it as a ghostwriter: ask it to explain the task's underlying structure, walk you through what a good version looks like, and quiz you on the principles. Our Learning Paths are built around exactly this sequence: understand, then produce.
The general rule: AI can accelerate work you already grasp. It can't replace understanding you don't have.
🧠 Quick Challenge: Your manager asks you to write a performance review for a direct report you've worked with for two years. Based on what you've read so far, what's the best approach?
- A) Open Claude, paste the bullet points of their work, and ask for a full review draft
- B) Write your own assessment and decisions first, then use AI to tighten the phrasing and check for unintended tone
- C) Ask AI to "pretend you're their manager" and generate the full review from scratch
Answer: B) Performance reviews are a high-stakes judgement call with consequences you'd own — see Scenario 1. The weight of the review comes from your direct context with the person. AI is useful for polishing language after you've made the calls; it shouldn't generate the assessment itself. Option A buries your judgement under a plausible-sounding draft; Option C fabricates judgement that isn't yours.
Scenario 3 — Genuinely creative work where your distinctive voice is the product
A lot of creative work doesn't need a distinctive voice. A product description. A routine LinkedIn post. A draft of meeting notes. For those, AI is excellent — and nobody's reading them for the author anyway.
The scenarios where AI quietly undermines you are the ones where your voice is the product. A keynote talk. A founder letter. A piece of criticism. A newsletter people subscribe to because they like how you think. An essay you'd want quoted.
The underlying issue is simple. Language models generate text by sampling, word by word, toward the most probable continuation given patterns learned from enormous amounts of training text. The default register is, by design, the most likely phrasing, the most familiar structure, the safest metaphor. Distinctive voice is exactly what you get by not taking the most probable route. The two pull in opposite directions.
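If you're curious what that looks like mechanically, here's a deliberately toy sketch in Python. The words and scores are invented for illustration (real models choose among tens of thousands of tokens), but the shape of the behaviour is the same: at the low sampling temperatures most chat tools default to, the safest word wins almost every time.

```python
import math
import random

# Invented next-word scores for the prompt "The results were ..."
# (illustrative numbers only, not from any real model)
logits = {"significant": 3.0, "promising": 2.7, "mixed": 1.9, "luminous": 0.2}

def sample_next_word(logits, temperature=1.0):
    """Pick one word from a softmax distribution over the scores."""
    scaled = {word: math.exp(score / temperature) for word, score in logits.items()}
    total = sum(scaled.values())
    words = list(scaled)
    weights = [scaled[word] / total for word in words]
    return random.choices(words, weights=weights)[0]

# At a typical low temperature, the probable words dominate and the
# surprising choice ("luminous") barely ever surfaces.
picks = [sample_next_word(logits, temperature=0.7) for _ in range(1000)]
print({word: picks.count(word) for word in logits})
# roughly {'significant': 530, 'promising': 350, 'mixed': 110, 'luminous': 10}
```

That low-probability tail, the route the sampler is built to avoid, is exactly where distinctive phrasing lives.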
You can still use AI in this kind of work — as a sparring partner to pressure-test ideas, as a researcher, as an editor once the draft exists. What tends to flatten the result is handing AI the blank page. The first 30% of creative work — the angle, the framing, the voice — is where your distinctiveness lives. Protect that bit.
Scenario 4 — Anything with confidential, regulated, or client-sensitive data
We won't repeat the full argument here — we've written about this at length in what happens to your data when you use AI tools. The short version: free-tier consumer AI tools are not the right place for client contracts, regulated data, named employee records, credentials, or anything with compliance strings attached.
The nuance that matters: this isn't an argument against using AI with serious work. It's an argument for knowing your tier, your settings, and your sharing rules before you paste. For sensitive work, that usually means an enterprise or team tier with contractual data guarantees, plus a simple anonymisation habit.
If you haven't checked your settings lately, that article is worth five minutes of your time.
Scenario 5 — Tasks where the thinking is the point
Some work produces an artefact — an email, a report, a slide deck. The artefact is the deliverable. AI is great for those.
Other work looks like it produces an artefact but is really producing a change in you. Notes you take while reading a dense paper. A written argument you work through to figure out what you actually think. A summary you write of a strategy document so you understand it well enough to lead from it. The notes are almost beside the point — the thinking is the deliverable, and the thinking only happens if you do the work.
Handing these to AI feels efficient. You get the artefact in thirty seconds instead of an hour. What you don't get is the mental structure that was supposed to form while you wrote. A month later you'll reach for the understanding and notice you never actually built it.
The rule we use: if the value of the task is in the artefact, AI is fair game. If the value is in the learning, close the tab. One useful test — ask yourself what you want to be different about your own head when the task is done. If the answer is "nothing, I just need the document", delegate. If the answer is "I want to actually understand this", don't.
Scenario 6 — Time-sensitive factual claims
Most general-purpose AI tools generate from training data whose cutoff can be months or more out of date, depending on the tool. Even the ones with browsing enabled can return confident summaries of stale pages, slightly wrong numbers, or sources that no longer exist.
For anything that depends on current truth, that's a quiet landmine. Live share prices. This week's news. A compliance framework that changed last month. A product's current pricing. A court ruling from yesterday. The output will sound as confident as the output about last century's history — and you have no easy way to spot the difference.
The fix isn't to avoid AI here; it's to change tools for this specific kind of question. Search-grounded tools (AI that cites current web pages) are better suited. Primary sources — the regulator's own site, the company's own pricing page, the publisher's own article — are better still. And anywhere a wrong number has real consequences, AI's job is to summarise what you've already verified, not to be the source.
One habit that has saved us repeatedly: for any date, price, statistic, or current event, give the AI the source document rather than asking it to recall the detail from memory.
Scenario 7 — Social and emotional work
Here's the one I feel most strongly about. A condolence message. A difficult apology to a friend you've hurt. Hard feedback to someone you care about. A note to a colleague going through something awful.
The thinking you do while writing these — the pause, the reaching for the right specific detail, the second-guessing, the care — is not overhead. It is the work. It's the part the other person can feel, even though nothing visible on the page marks it. A message that reads as though you sat with it hits differently from a message that reads as though you didn't. People can tell. Often they can't articulate why.
An AI draft of a condolence note tends to read as exactly what it is: a generically competent message that skipped the thinking. It's not that the words are wrong. It's that the care is missing, and the person on the receiving end notices. Even if they never say so.
You can absolutely use AI around the edges here — to check whether a phrasing lands the way you intended, to catch a tone that reads colder on the page than it did in your head. What you shouldn't do is ask it to write the message. That part is yours. It's supposed to cost you something.
The one question we ask before reaching for AI now
Over time, the seven scenarios above collapsed into a single diagnostic we ask ourselves before opening a chat window. It's this:
What, specifically, do I want AI to do that I couldn't do as well or as quickly myself?
If the answer is clear — "draft a first version of this routine email so I can edit rather than write from a blank page", "explain this unfamiliar concept at three levels of depth", "turn these bullets into a tidy summary", "suggest ten angles for this piece so I can pick one" — brilliant. Off you go.
If the answer is vague, or if it starts to sound like "think for me on a thing I should be thinking about myself", that's the signal to close the tab. The task is one of the seven. You already know what to do.
The point of learning these tools well isn't to use them for everything. It's to use them where they genuinely add, and to keep the rest — the judgement, the voice, the thinking, the care — as yours. That distinction is what the Educate pillar is really about. Not more AI. Better-placed AI, with the human in charge of what goes where.
You've got this — and the fact that you're thinking about where AI fits rather than whether to let it take over is already most of the work.
Ready to build the judgement layer? Our Learning Paths walk through when to reach for AI and when to do the work yourself — across writing, decision-making, and daily professional work. Start with the Core Mindset path if you're newer to this, or go straight to Workplace AI Ethics if you want the deeper version of what we covered here.