There is a quiet anxiety running through a lot of experienced professionals right now. Not the dramatic "robots will take my job" version — that one is easy to dismiss. The subtler version goes something like this: If AI can produce a plausible-sounding answer in seconds, what exactly am I still bringing to the table? It is a fair question. And the answer is more reassuring than you might expect — but only once you understand what has actually changed.
The Part of Your Work That AI Has Already Commoditised
For most of professional history, a significant portion of your value came from two things: access to relevant information, and the time and capacity to process it. A consultant who had read the right reports, a lawyer who had memorised the relevant case law, a strategist who had the bandwidth to research the competitive landscape — these people had an edge simply because information, and the capacity to process it, were scarce.
AI has largely ended that scarcity. Ask almost any AI tool to summarise a field, generate an analysis, produce a first draft, or outline a strategy — and you will get something coherent and often surprisingly thorough within seconds. The cost of producing a plausible answer has collapsed.
This is disorienting if you have built your professional identity around knowing things and producing outputs. I will not pretend otherwise. But the disorientation points toward something real — a shift in where value now lives.
What Remains, and Why It Cannot Be Generated
Here is what AI cannot produce: judgement about what the information means in this particular situation, for these particular people, with these particular constraints.
Judgement is not information retrieval. It is not even pattern matching in the simple sense. It is the accumulated residue of having been wrong before — of having recommended an approach that looked correct on paper and watched it fail in practice. Of having worked with enough different teams to know that "culture issues" can mean fifty different things. Of having enough domain scar tissue to spot when a conclusion is technically sound but contextually absurd.
A junior analyst and a senior partner can read the same AI-generated market analysis. Both receive identical text. What separates them is not what they read — it is what they notice. The senior partner sees that the growth projections assume a regulatory environment that is about to change. She sees that the recommended acquisition target has a leadership profile that will create integration nightmares with the client's existing culture. She sees that the real strategic question was not the one the brief asked.
That seeing is judgement. It is built from years of observing consequences, making calls, being right and wrong, and noticing what you missed in hindsight. It cannot be downloaded.
🧠 Quick Challenge: An experienced product manager and a recent graduate both use AI to generate a product launch strategy. Both receive the same output. Which of the following best describes the key difference between them?
- A) The experienced manager will write better prompts, so they'll get a higher-quality output
- B) The experienced manager can evaluate which parts of the output are wrong, incomplete, or misaligned with the real situation
Answer: B). Better prompts do help, but they are not the core differentiator. The experienced manager's real advantage is knowing what to do with the output — spotting flawed assumptions, recognising what was missed entirely, and making a confident call about what to use, what to discard, and what to push back on. That evaluative capacity is judgement, and it accumulates through experience, not prompting technique.
The Inversion: Cheap Outputs Make Evaluation More Valuable
Here is what few people are saying clearly enough: when everyone can produce a draft, the ability to evaluate a draft becomes the scarcest skill in the room.
Think about what happens inside organisations right now. AI generates a proposed restructuring plan. Someone has to decide whether it is actually right. AI produces three strategic options. Someone has to say which one will work with this board, this budget cycle, this team's current capacity. AI drafts a legal clause. Someone has to know whether it will hold up in the specific jurisdiction, under the specific conditions, against the specific counterparty.
The people who can do that confidently — who can look at three AI-generated options and say "that one, and here's exactly why" — are not being replaced. They are being promoted, implicitly, to a layer of work that did not have a clear name before: the evaluation layer.
This is not a consolation prize. Evaluation is where the actual decisions live. It is where the real professional risk sits. And it is where experience, pattern recognition, and contextual wisdom pay off most visibly.