Something is shifting under our feet. Not a loud rupture, not a crisis with headlines — more like a steady tremor, the kind that makes us realize the landscape is changing even if no one is shouting about it. Since AI learned to imitate the tone, gestures and sometimes even the reasoning of experts, one question has quietly settled in: what remains truly “ours” when anyone can generate a convincing professional output in seconds?
This unease isn’t limited to one field. We see it among lawyers, teachers, nurses, managers, analysts — everywhere people once felt their expertise was a well-protected territory. And now AI enters this territory with surprising confidence, sometimes clumsily, often convincingly.
Researchers note that AI “short-circuits the learning process that leads to real expertise” (IE University, 2024). Others describe an “illusion of competence,” the moment when a user feels like an expert simply because the tool provides answers with authority (ArXiv, 2023). In other words: AI gives everyone the ability to look like a professional. It does not give them the ability to be one.
This shift doesn’t leave professionals untouched. In medicine, for example, signs appeared as early as 2024 showing a decline in certain clinical skills when AI takes on too much of the diagnostic workload (Financial Times). In other fields, confidence is the first casualty: nearly one-third of UK workers say they feel “less expert” when they use AI (Wired). And a form of cognitive stress is rising. A study in Nature found that adopting AI doesn’t directly cause burnout, but it increases stress — and stress, in turn, feeds burnout.
The experience is real. One can be perfectly competent yet suddenly feel peripheral, as if AI had stepped into a mental space that used to belong to us. Some call this a form of “cognitive downgrading.” The term is blunt, but it captures a psychological truth: the sense that one’s hard-earned expertise is losing value.
So what becomes of expertise in this new landscape? When technique can be imitated, we must look elsewhere. And that is where something genuinely interesting begins.
Recent studies show that human experts remain essential where AI still falters: interpretation, judgment, ethics. The 2025 study The Paradox of Professional Input notes that experts collaborating with AI do transmit tacit knowledge, but they remain the only ones capable of grasping its nuances. Knowledge is no longer a vault; it becomes a filter, a system of discernment.
This is where a second transformation emerges: the rising importance of interpretive competence. Thomson Reuters observed that the professionals of the future will not be those who “know everything,” but those who know how to assess, contextualize and reframe what AI produces. That takes distance, critical judgment, and a clear understanding of how these models work, and of where they fail.
But for this transition to succeed, organizations must shift their posture. Introducing a new tool and hoping balance will magically return isn’t enough. This moment calls for caution, listening, and governance that genuinely recognizes the value of human work. Studies show that when workers feel supported as they learn to use AI, their stress levels drop significantly (Nature, 2024). We learn better when we’re not afraid of losing our footing.
We also need to talk about empathy, the strange mirror that AI holds up to us. A recent study found that AI-generated answers were sometimes perceived as “more empathetic” than those from human professionals. It is a finding to handle with nuance. AI has no empathy; it mimics the outward patterns of empathy. But it is true that, in some sectors, urgency, overload and exhaustion have drained human availability. AI reminds us of what we risk losing if we fail to protect the relational, the attentive, the humane dimension of our work.
What is at stake here is not the disappearance of experts. It is their redefinition. And that redefinition must be deliberate, discussed, supported. In large organizations and the public sector, where technological transitions move more slowly than the slogans suggest, this is a strategic challenge. The illusion of competence created by AI should never become an excuse to devalue professionals. It should instead push us to clarify what genuine competence is: judgment, experience, contextual sensitivity, humanity.
AI can simulate a great many things. It can write, reason, calculate, converse — sometimes brilliantly. But it does not know what it means to be responsible for a decision. It carries no real-world consequences. We do.
And perhaps that is where the heart of expertise now lies: in the ability to face ambiguity, to sense what is left unsaid, to detect when something is off. The expertise of tomorrow won’t be a speed contest against machines. It will be an exercise in clarity — knowing where AI helps us, where it misleads us, and where it forces us to remain fully human.
References
- IE University – Is AI creating incompetent experts? (2024)
https://www.ie.edu/insights/articles/is-ai-creating-incompetent-experts/
- ArXiv – Knowing About Knowing: An Illusion of Human Competence (2023)
https://arxiv.org/abs/2301.11333
- Financial Times – On declining human skills in AI-assisted medicine (2024)
https://www.ft.com/content/74b82366-1ea1-4f90-80aa-e84a1e655d28
- Wired UK – Does using AI make me lazy? (2024)
https://www.wired.com/story/does-using-ai-make-me-lazy/
- Nature Humanities & Social Sciences Communications – How AI adoption impacts stress and burnout (2024)
https://www.nature.com/articles/s41599-024-04018-w
- ArXiv – The Paradox of Professional Input (2025)
https://arxiv.org/abs/2504.12654
- Thomson Reuters – The Future of Professionals: Building AI-Ready Skills (2024)
https://www.thomsonreuters.com/en-us/posts/sustainability/future-of-professionals-building-ai-ready-professionals/
- Nature – Perceived empathy in AI-generated responses (2024)
https://www.nature.com/articles/s44271-024-00182-6
