If you have ever opened a browser tab, stared at a ChatGPT prompt box, and quietly closed it again — you are not alone. A lot of professionals we have worked with describe the same pattern. They read the articles. They hear colleagues mention prompts. They see the screenshots of polished outputs. And somewhere in the middle of all that, a quiet voice says: this is probably too complex for me.
We get why this might feel intimidating at first. The word "AI" is doing heavy lifting in 2026. It covers everything from models with trillions of parameters to the autocomplete in your email app. When the conversation jumps between neural networks, context windows, fine-tuning, and agentic workflows, it is genuinely hard to know where a normal working professional is meant to start.
Here is the reassuring part. Almost nobody using AI well at work is thinking about any of that. They are typing sentences into a box. So let's slow down and reframe what "using AI" actually involves.
The complexity is in the plumbing, not the driving
The part of AI that is genuinely complex — the mathematics, the training process, the model architecture — is completely hidden from you. It sits behind the chat window in the same way an engine sits under the bonnet of a car. You do not need to understand fuel injection to drive to the shops.
When you open ChatGPT, Claude, or Gemini, what you actually see is a text box. You type. Something types back. You read it, decide what to do with it, and respond. That is the entire interaction. The interface has been deliberately stripped back so that the complexity stays out of your way.
This matters because a lot of the intimidation comes from mistaking the wrapper for the tool. Reading an article about transformer architectures and concluding "I could never use this" is a bit like reading a paper on internal combustion and concluding you could never drive. The two things require completely different skills. One is engineering. The other is a conversation.
If you take one thing from this whole piece, let it be this.
You do not need to understand AI to use AI well. You need to know how to describe what you want clearly. That skill you already have.
If you can brief a colleague, you can brief AI
Here is a small thought experiment. Imagine a new team member starts on Monday. They are capable, fast, and willing, but they have just arrived and know nothing about your work. You want them to draft a short update email to your team about a delayed project.
What would you tell them? Probably something like: "We were due to launch on the 15th, but testing found a few issues, so we're moving to early May. Keep the tone reassuring — the team has been putting in long hours and I don't want them to feel their work is wasted. Around 150 words. Friendly but professional."
That is a prompt. That is the whole skill.
Most professionals we have worked with are surprised when they realise this. They expected AI prompting to feel like coding, or like learning a new language. In practice, it is much closer to something they already do every day: explaining context to another human so that human can help. The only real difference is that you are typing it instead of saying it, and the "colleague" on the other end is patient enough to read the whole thing without interrupting.
If you want a more structured way to think about this, our guide on the four-part prompt formula breaks it into context, task, format, and constraints. But honestly — if you brief the AI the way you would brief a capable new hire, you are already most of the way there.
🧠 Quick Challenge: Your manager asks you to use AI to draft a short update for the team about a missed deadline. You have ten minutes. Based on what you have read so far, which of these is the best approach?
- A) Type "write an update email about a missed deadline" and send the output as-is
- B) Brief the AI with context (project name, new date, tone, audience, length), read the draft, tweak it, then send
- C) Skip the AI entirely — it will take longer than writing it yourself
Answer: B) A ten-minute window is exactly the right kind of first task — short, low-stakes, and well-defined. Option A produces the vague, generic output that gives AI a bad name, because the model has no context to work with. Option C is the response most professionals default to at first — and it is precisely the loop we are trying to break. A clearly briefed AI will usually land a usable first draft in under a minute, and the remaining nine minutes are yours to shape it into something that sounds like you.
Three first tasks that prove you can do this
If you are still holding back, it usually helps to stop thinking about "using AI" as one big commitment and start thinking about three small, specific tasks. Each one takes around ten minutes. Each one produces something useful at the end. And each one proves to you, quietly, that you can do this.
Here is what we would suggest trying first.
1. Write a first draft of something you were already going to write
Pick something low-stakes — a status update, a short internal email, a LinkedIn post, a message to a colleague. Open ChatGPT, Claude, or Gemini. Describe what the piece needs to say, who it is for, what tone you want, and roughly how long. Read what comes back. Edit it. Send it.
You are not asking the AI to replace your voice. You are asking it to get you past the blank page — which, in our experience, is where most of the time pressure actually lives.
2. Summarise a long document
Find a report, a long email thread, a policy document, or a meeting transcript that you would otherwise have to slog through. Paste it into Claude or ChatGPT. Ask for a five-bullet summary highlighting the key decisions and open questions.
This is one of the tasks where AI genuinely punches above its weight, because the model is working from source material you provided rather than trying to recall facts from memory. The output is much more reliable, and the time saved is immediate.
3. Reword something that is not quite right
Take something you have written — an awkward paragraph, a message that reads too bluntly, a sentence that will not click into place. Paste it in. Say what is wrong and what you want instead. "This paragraph sounds too formal — can you make it warmer without losing the structure?" or "This email is too long. Cut it to three short sentences."
This is the gentlest first task of all. You are not generating anything from scratch. You are asking for a second opinion on words you already wrote.
If you do one of these three tasks this week, you will know more about using AI than someone who has read ten articles about it and never opened the tool.