Let us start with what has actually improved, because the progress is real and it matters.
The models are significantly more capable. This is not a subjective impression — it is measurable. The leading models in 2026 (GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, Grok 4) handle complex reasoning, nuanced writing, and multi-step tasks noticeably better than their 2024 predecessors. Tasks that required careful prompt engineering two years ago — like producing a well-structured analysis or maintaining consistency across a long document — now work with simpler instructions. The capability floor has risen.
Context windows are enormous. In early 2024, context windows for most widely used models ranged from roughly 8,000 to 128,000 tokens. In 2026, million-token context windows are available across multiple providers. This sounds like a technical detail, but the practical impact is significant: you can paste an entire report, codebase, or document set and ask questions across the whole thing. The "I can only look at a small piece at a time" limitation that frustrated early adopters is largely gone.
Multimodal input is standard. Uploading images, PDFs, spreadsheets, and screenshots into AI conversations is now a basic feature across all major platforms. In 2024, this was a novelty. In 2026, it is table stakes. Professionals routinely photograph whiteboards, screenshot dashboards, and upload documents rather than typing descriptions — which is both faster and produces better results.
Coding assistants have matured. GitHub Copilot, Cursor, and similar tools have moved from "helpful autocomplete" to genuine development partners. They can understand entire codebases, suggest architectural changes, write tests, and debug complex issues across multiple files. For software teams, the productivity impact is substantial and well-documented — multiple studies show 30-55% time savings on defined coding tasks.
Enterprise adoption accelerated. Two years ago, most organisations were running cautious pilots. Today, enterprise AI deployment is mainstream. The infrastructure around AI — governance frameworks, security certifications, compliance documentation, data processing agreements — has caught up with the technology. For most businesses, the question is no longer "should we adopt AI?" but "how do we adopt it well?"
Research Callout: According to McKinsey's State of AI research, 78% of organisations now use AI in at least one business function, up from 55% in 2023. Generative AI adoption specifically has surged — from 33% in early 2024 to over 70% by late 2025.
What Hasn't Changed Much
Progress makes for good headlines. But an honest assessment of where AI stands also requires looking at what has not moved as far as people expected.
Hallucinations still happen. The models are better at staying factual, but they still confidently generate incorrect information — especially for niche topics, recent events, and specific technical details. The rate has decreased, but the fundamental problem persists. If you are using AI for anything where accuracy matters, you still need to verify. This has not changed, and anyone telling you otherwise is selling something.
Domain expertise still matters. Two years of AI advancement have not eliminated the need for professionals who deeply understand their field. In fact, the evidence suggests the opposite: as AI handles more routine tasks, the premium on genuine expertise has increased. The person who knows enough to evaluate AI output, catch subtle errors, and apply judgement in ambiguous situations is more valuable than ever.
Prompt skill still determines quality. Better models are more forgiving of vague prompts, but the gap between a mediocre prompt and a well-crafted one is still enormous. The professionals getting the best results are the ones who have invested in learning how to communicate effectively with AI — and that investment continues to pay compounding returns. If anything, prompt skill matters more now because the ceiling of what is possible has risen.
AI does not replace thinking. This might be the most important thing that has not changed. AI is an extraordinarily powerful tool for accelerating execution, but it does not set strategy, make judgement calls in ambiguous situations, or navigate the political and interpersonal dimensions of work. The professionals who treat AI as a replacement for thinking produce mediocre work faster. The ones who treat it as a tool that frees up time for better thinking are the ones pulling ahead.
The learning curve is real. Despite better interfaces and more intuitive tools, becoming proficient with AI still requires deliberate practice. The tools are easier to start with than they were in 2024, but the gap between basic usage and expert-level usage has actually widened as the tools have become more capable. There is more to learn now, not less.
Quick Tip: If you only learn one AI skill this year, make it iterative refinement — the ability to start with a decent first prompt and improve the output through two or three follow-up prompts. This single technique accounts for most of the quality difference between casual AI users and proficient ones. Our guide to advanced prompt techniques covers this and four other methods that produce dramatically better results.
The Biggest Surprise: AI Agents
If there is one development that defines the 2025-2026 period, it is the emergence of AI agents. And if you have not been paying close attention, the reality of agents might surprise you — both in what they can do and in what they cannot.
What are agents? In simple terms, an agent is an AI system that can take a series of actions autonomously to accomplish a goal, rather than just responding to a single prompt. Instead of you writing a prompt and getting a response, you give the agent an objective — "research these five competitors and create a comparison spreadsheet" — and it figures out the steps, executes them, and delivers the result.
Agents can browse the web, read and write files, use software tools, make API calls, and chain together multiple actions without human intervention at each step. Think of the difference between asking someone a question and delegating a task. Prompting is the question. Agents are the delegation.
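To make the delegation idea concrete, here is a minimal sketch of an agent loop in Python. This is not any vendor's API — `decide_next_action` is a hard-coded stand-in for a real model call, and the tools are toy functions — but the shape is the point: the model picks the next step, the system executes it, and the loop continues until the model decides the goal is met.

```python
# Minimal agent loop sketch. `decide_next_action` stands in for a real
# model call; here it returns a fixed three-step plan for illustration.

def decide_next_action(objective, history):
    """Hypothetical model call: returns the next (tool, argument) step,
    or None when the objective is considered complete."""
    plan = [
        ("search", "competitor pricing pages"),
        ("summarise", "collected notes"),
        ("write_file", "comparison.csv"),
    ]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(objective, tools, max_steps=10):
    """Execute steps autonomously, recording each tool call and result."""
    history = []
    for _ in range(max_steps):
        step = decide_next_action(objective, history)
        if step is None:           # model signals the goal is reached
            break
        tool, arg = step
        result = tools[tool](arg)  # act without a human at each step
        history.append((tool, arg, result))
    return history

# Toy tools standing in for web search, summarisation, and file output.
tools = {
    "search": lambda q: f"results for {q!r}",
    "summarise": lambda s: f"summary of {s!r}",
    "write_file": lambda name: f"wrote {name}",
}

history = run_agent("compare five competitors", tools)
for tool, arg, result in history:
    print(tool, "->", result)
```

The `max_steps` cap is the one real-world detail worth noticing: production agent frameworks bound the loop precisely because, as discussed below, an agent can otherwise go confidently down the wrong path indefinitely.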
What they can do well. Agents excel at multi-step tasks with clear objectives and well-defined success criteria. Research workflows (gather information from multiple sources, synthesise, present), data processing pipelines (collect data, clean it, analyse it, produce a report), and software development tasks (write code, run tests, fix errors, iterate) are all areas where agents are delivering genuine value in 2026. The Model Context Protocol (MCP) — a standard for connecting AI agents to external tools — has made it significantly easier to build agent workflows that interact with real software systems.
Where they fail. Agents struggle with ambiguity, changing requirements, and tasks that require human judgement at intermediate steps. They can go confidently down the wrong path for an extended period before anyone notices. They sometimes make irreversible mistakes (deleting files, sending emails, modifying databases) when they misunderstand an instruction. The autonomy that makes them powerful also makes them risky when guardrails are insufficient.
The practical reality in 2026. Most professionals are not yet using agents daily. The technology is real and improving rapidly, but it is still in the "early adopter" phase for most use cases. The people benefiting most are developers (coding agents are the most mature category), researchers (multi-step information gathering), and power users who are comfortable building and supervising custom workflows.
If you are curious about agents but have not tried them yet, our AI Agents learning path provides a grounded introduction — what they are, how they work, and where to start experimenting safely.
Quick Challenge: Think of one task you do regularly that involves multiple steps and clear success criteria. Could an agent potentially handle it? What would go wrong if it made a mistake at step 3?
Answer: The "what would go wrong" question is the critical one. If the consequences of a mid-process error are minor (a badly formatted document you can redo), agents might be worth trying. If the consequences are significant (an incorrect email sent to a client, a database entry changed), you want tighter supervision — at least for now.
What This Means for Your Career
Here is what we think matters most for professionals navigating AI in 2026. We are trying to be practical rather than philosophical.
The fundamental skills have not changed. Critical thinking, clear communication, domain expertise, relationship building, creative problem-solving — these are more valuable than ever, not less. AI has made execution cheaper, which means the premium is on knowing what to execute. Strategy, judgement, and the ability to ask the right questions remain distinctly human advantages.
AI fluency is becoming non-negotiable. Two years ago, AI competence was a differentiator. In 2026, it is becoming a baseline expectation — similar to how spreadsheet skills evolved from "impressive" to "expected" over the course of the 1990s. If you are not yet comfortable using AI tools in your daily work, the gap is widening, but it is absolutely still closeable. The fundamentals have not changed; the tools have just gotten easier.
The best professionals combine AI speed with human judgement. The pattern we see consistently across industries: the people thriving are not the ones who use AI the most. They are the ones who use it most effectively — knowing when to lean on it, when to override it, and when to skip it entirely. This is what our ICE Method framework is designed to help with: understanding when to Improve your existing work with AI, when to Create something new, and when to Educate yourself through AI-assisted learning.
Continuous learning matters more than ever. The pace of change has not slowed down. New capabilities, new tools, and new best practices emerge regularly. The professionals who stay current — not obsessively, but consistently — maintain an advantage over those who learned the basics once and stopped. Our Staying Current with AI learning path covers practical strategies for keeping up without burning out.
Specialisation is valuable, but adaptability is essential. Deep expertise in a specific domain remains highly valued. But the ability to adapt — to learn new tools, adjust workflows, and integrate AI into evolving processes — is what determines whether that expertise compounds or stagnates.
Three Things to Do Right Now
If this article leaves you with one takeaway, let it be this: the professionals who benefit most from AI in 2026 are not the ones who know the most about it. They are the ones who started doing something with it and kept going. Here are three concrete actions you can take this week.
1. Try one agent workflow. You do not need to build anything complex. Most AI tools now have some form of agent capability. Give Claude or ChatGPT a multi-step task — "Research the top five project management tools for remote teams, compare their pricing and key features, and present the findings in a table" — and see what happens. Pay attention to where it succeeds and where it goes off track. That observation is itself a valuable education.
2. Learn one advanced prompt technique. If you are still using basic prompts, pick one technique from our guide to advanced prompt techniques and try it on a real task. Chain of thought (asking the AI to think step by step) is the easiest starting point and produces the most dramatic improvement. Five minutes of learning, permanent improvement in results.
3. Audit one repetitive task. Look at your last week of work. Identify one task that was repetitive, time-consuming, and did not require your unique expertise. Then ask yourself: could AI have done the first 80% of this, leaving me to refine and finalise? If the answer is yes, that is your next AI experiment. Our AI ICE Skill Challenge can help you assess where your current AI skills are strongest and where there is room to grow.
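The chain-of-thought technique from step 2 really is as small as it sounds. Here it is reduced to its essentials in Python — the task and wording below are made up for illustration; the point is that chain of thought is just an added instruction, not a different tool.

```python
# Chain of thought in practice: the only change between the two prompts
# is an appended instruction to reason step by step before answering.

task = (
    "A project has 3 phases of 2 weeks each plus a 1-week buffer. "
    "When does it end if it starts on 3 March?"
)

basic_prompt = task

cot_prompt = (
    task
    + "\n\nThink through this step by step: list each phase with its "
    + "start and end dates, then state the final end date on its own line."
)

print(cot_prompt)
```

You would paste `cot_prompt` into any chat interface exactly as printed; no API access is needed to use the technique.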
These are not dramatic steps. They are small, concrete actions that compound over time. The people who started with small experiments in 2024 are the ones who are most comfortable and productive with AI today. The same will be true in 2028 for those who start now.
AI is not going to replace you. But it is going to keep changing how work gets done — gradually, unevenly, and in ways that reward the people who stay curious and keep experimenting. Two years from now, you will either be glad you started, or you will wish you had.
We would rather you be glad.