You open an AI tool, paste in a draft email, and hit generate. The output is good. You clean it up, send it, move on. Nobody said anything was wrong with that. Nobody said anything at all — because there's no guidance, no policy, no approved list of tools, no memo about what's in or out. There's just you, a deadline, and a tool that helps. That low-level unease sitting underneath it all — the sense that you might be doing something you're not supposed to — is not paranoia. It's the entirely rational feeling of operating in a vacuum.
Most Organisations Are Running 12 to 18 Months Behind
Formal AI governance takes time. It requires legal review, IT security sign-off, HR input, and usually someone senior enough to make a call nobody wants to be wrong about. Most organisations haven't finished that process — not because they're negligent, but because the tools moved faster than any policy process was designed to handle.
That's not unusual. The same thing happened with BYOD (bring your own device) policies in the early 2010s, and with social media use before that. Employees were already using personal phones for work email and posting on LinkedIn before most companies had written a word about it. AI is the same pattern, playing out again, only faster.
So if you're currently making judgement calls about AI use at work without any organisational guidance, you are not the exception. You are the majority.
The discomfort, though, is real. Operating without a framework doesn't mean the decisions don't carry weight — it just means the weight falls entirely on you, individually, each time. The risk isn't that you'll make a catastrophically bad choice. It's that you'll make inconsistent ones, without quite noticing the difference between them.
The Three Questions Worth Asking Before Every AI Task
You don't need a 40-page policy document to make good decisions. You need three questions, asked honestly, before you start.
1. What data am I putting in?
This is the one most people skip, and the one that matters most. When you paste text into a public AI tool — a chatbot, a writing assistant, a summarisation tool — that text leaves your machine. Depending on the provider's data retention settings, it may be stored, used for training, or visible to support staff. Most people are aware of this in theory and forget it in practice.
Ask yourself: does this text contain client names? Project codenames? Financial figures that aren't public? Personally identifiable information about colleagues or customers? Medical or legal details? Anything your organisation would describe as confidential?
If yes, pause. Either use a tool your organisation has approved and contracted (which typically includes enterprise-grade data handling commitments), or remove the sensitive elements before you submit. You can often replace a client name with "[Client]" and a project figure with "[amount]" without losing the context the AI needs to help you.
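If you find yourself doing that substitution often, it's worth making it mechanical rather than ad hoc. Here's a minimal sketch in Python, assuming you maintain your own substitution list — the names, the regex, and the placeholders below are all illustrative, not part of any tool or standard:

```python
import re

# Illustrative term list -- build your own from your clients, project
# codenames, and colleagues' names. Matching here is exact and
# case-sensitive; a real version might want something more forgiving.
REDACTIONS = {
    "Acme Corp": "[Client]",
    "Project Falcon": "[Project]",
    "Priya Sharma": "[Colleague]",
}

# Currency-style figures: "£45,000", "$1.2m", "€900", "£5 million".
AMOUNT = re.compile(
    r"[£$€]\s?\d[\d,]*(?:\.\d+)?(?:\s?(?:k|m|bn|million))?",
    re.IGNORECASE,
)

def redact(text: str) -> str:
    """Swap known sensitive terms and money figures for placeholders."""
    for term, placeholder in REDACTIONS.items():
        text = text.replace(term, placeholder)
    return AMOUNT.sub("[amount]", text)

draft = "Acme Corp pushed back on the £45,000 quote for Project Falcon."
print(redact(draft))
# -> [Client] pushed back on the [amount] quote for [Project].
```

The point isn't the code. It's that a redaction pass you run every time is more reliable than one you remember to do under deadline.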
2. Who will see the output, and does it need to be disclosed?
This question is about attribution and honesty. If you're drafting a report that will be presented as your own research, there's a meaningful difference between using AI to brainstorm structure and using it to generate findings you haven't verified. If you're writing a client-facing document, does your client expect original analysis, or would they be surprised to learn it was AI-assisted?
None of this means AI assistance is wrong. It means the disclosure question is yours to answer, not your employer's, and not the tool's. In some roles and sectors, the expectation of original work is explicit. In others, it's simply assumed. Knowing which situation you're in — and acting accordingly — is the judgement call nobody else can make for you.
3. What if this got out?
Call this the transparency test. Before you submit anything to an AI tool, ask: would I be comfortable if my manager could see exactly what I just pasted in?
Not the output — the input. The actual text you fed the tool.
I'll be honest: I've caught myself pausing on this one. Not because I was doing something dishonest, but because seeing the question clearly made me realise the data I was about to include was more sensitive than I'd initially registered.
If the answer is "I'd rather they didn't see that," that's a signal worth listening to.
🧠 Quick Challenge: You're summarising a long thread of internal Slack messages to get a quick briefing before a meeting. The thread includes your colleagues' names, a client's company name, and some back-and-forth about a pricing decision that isn't public yet. You're thinking of pasting it into a free AI summarisation tool. Is this safe?
- A) Yes, this is fine — it's just an internal summary
- B) No, this is risky
- C) It depends
Answer: B) No, this is risky. Even if the content feels routine, pasting a thread that includes a client's name, unpublished pricing decisions, and identifiable colleague details into a free public tool is the kind of thing most data protection frameworks would flag. Free tools often have broad data retention rights. The fix is either to use your organisation's contracted AI tool (if one exists), or to summarise the key points manually first and use AI only to reformat or tighten your own summary.
The Risks People Get Wrong — In Both Directions
Most people who are anxious about AI at work are anxious about the wrong things.
What people overestimate as risky:
- Drafting an email from scratch using an AI tool.
- Using AI to brainstorm ideas for a presentation you haven't written yet.
- Asking AI to summarise a publicly available article.
- Getting AI to suggest better phrasing for something you've already written.

These are generally fine. If the content isn't sensitive and the output will be reviewed and edited by you before it goes anywhere, the risk is low.
Using AI prompts effectively for drafting and ideation is a legitimate professional skill — the kind organisations will eventually expect their people to have, not discourage.
What people underestimate as risky:
- Pasting a client brief verbatim into a public AI tool.
- Using AI to generate text that will be presented as original expert analysis, without verification.
- Using free tools with unclear data handling policies for anything that touches regulated data: health information, financial data, legal correspondence.
- Assuming that because a tool is widely used, it must be approved.
The specific pattern that catches people out is the copy-paste reflex — a habit formed when the only consequence of pasting text somewhere was that it appeared on screen. That reflex doesn't automatically account for where the text goes from there.
💬 "The question isn't whether to use AI at work. The question is whether you've thought clearly about what you're putting into it."
You Don't Have to Wait for Permission to Have a Framework
Your organisation will eventually produce an AI policy. It might be six months away. It might be two years. In the meantime, the decisions are still happening — one task at a time, by individuals who are largely figuring it out as they go.
The most professional thing you can do in that situation is to treat yourself as the policy. Not in an anxious, self-policing way, but in the way any competent professional applies their own judgement when the rules haven't caught up with the situation.
That means developing a consistent personal framework — not a 40-point checklist, but a short set of principles you apply every time. The three questions above are a starting point. You might add your own based on your sector, your role, or the specific tools your organisation has (or hasn't) sanctioned.
Document it, even loosely. A note in your phone. A short paragraph in a personal document. Something that makes the framework explicit, rather than leaving it to the vagaries of how much time you have on any given day. When a policy does arrive, you'll find that most of what it says is something you'd already worked out.
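For a sense of scale, the whole note might be no longer than this (the wording is just one possible version — adapt it to your own situation):

```
My AI rules, until the company has some:
1. No client names, colleague names, or non-public figures in public
   tools. Redact first, or use the approved tool if one exists.
2. AI drafts, I verify. Anything presented as my analysis gets checked.
3. If I wouldn't want my manager to see the input, I don't paste it.
```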
The AI learning path at AI Tutorium covers the judgement skills that sit underneath effective AI use — the thinking behind the tool, not just the tool itself. If the governance questions feel tangled up with broader uncertainty about how to use AI well at work, that's a good place to start untangling them.
And if you want a library of ready-to-use, professionally considered prompts that have already had the "what am I putting in here" question worked out, the prompt library is built for exactly that.
A Framework Is Just Consistent Judgement, Written Down
The honest answer to "what's your organisation's AI policy?" is often: there isn't one. And the honest answer to "so what do you do?" should be: I have my own.
Not because you've invented something radical. Because you've taken three questions seriously, applied them consistently, and decided in advance rather than in the moment. That's what a policy is — a decision made once, so you don't have to remake it under pressure every time.
You're already making these decisions. Making them deliberately is just the next step.
Ready to build sharper AI judgement? The AI Tutorium learning path covers the thinking frameworks behind responsible, effective AI use at work — practical, not theoretical. Start at your own pace, with no prior experience needed.