It usually happens after you submit something good. The work answered the brief, the client replied positively, the meeting went smoothly, and then a small, sharp thought surfaces: did I earn that? You used AI to draft the structure, to sharpen the wording, maybe to summarise a document you didn't have time to read properly. Nobody knows. And for a moment, you're not sure whether to be proud or embarrassed.
This quiet discomfort is one of the most common experiences among professionals using AI at work — and one of the least talked about. Partly because admitting it means admitting you're using AI at all, and partly because the feeling is hard to name. It sits somewhere between guilt and uncertainty. Not quite a conviction that you've done something wrong, but not quite confidence that you haven't.
Here's what's worth examining: the word "cheating" carries a very specific meaning. It means breaking a rule in a context where rules have been agreed upon. Before we ask whether you're cheating, it's worth asking — what rules, exactly, and agreed by whom?
"Cheating" Requires a Test. Work Isn't a Test.
In school, the rules around cheating were clear. The exam measured your individual knowledge, unaided by notes or tools, and using outside help violated that specific agreement. The point of the test was to verify what you could do on your own.
Work is not structured this way. Your employer — or your client — wants outcomes. They want the report to be accurate, the email to land well, the analysis to be sound. The measure is the output, not the unaided process that produced it. There is no invigilator, because no one agreed there should be one.
Most workplaces have no AI policy at all. A 2024 Microsoft survey found that 75% of knowledge workers were already using AI tools at work — and the majority of their organisations had not formally addressed it. If you're working in a policy vacuum, you're not breaking rules. You're operating in a space where the rules haven't been written yet. That's ambiguous, yes. But ambiguity is not guilt.
The feeling of cheating, in this case, is really a fear of being judged — of someone finding out and deciding, after the fact, that what you did was wrong. That's a social anxiety, not an ethical violation.
Every Productivity Tool Felt Like This Once
It's worth knowing that this discomfort has a history.
When calculators arrived in classrooms, teachers debated whether students who used them were genuinely learning maths or just outsourcing it. When word processors replaced typewriters, some argued that spell-checkers were making writers lazy. When Google became the first stop for any research question, academics worried about the death of rigorous inquiry. The pattern is consistent: a new tool arrives, people use it to do things faster or better, and for a while there's a collective anxiety about whether that's allowed.
In every case, the anxiety faded. Not because the concerns were entirely wrong — some students did avoid developing mental arithmetic, some writers did stop proofreading carefully — but because society recalibrated what the actual skill was. The skill wasn't the laborious method. It was the judgement, the quality of reasoning, the ability to recognise a good answer when you saw one.
AI is no different. The professionals who thrive with it are the ones who review what it produces, apply their expertise to what it gets wrong, and take full responsibility for the finished work. The tool does not replace that — it demands it.