For more than a century, modern organizations have been built around a single ideal: mastering what they do not yet understand. This reflex, inherited from Taylorism, has taken many forms (procedures, audits, dashboards) until it hardened into a culture of control. Now artificial intelligence, supposedly the next evolution of that mastery, is exposing the fragility of that culture.
AI, by its very nature, resists the linear logic of cause and effect. Its models are statistical, opaque, often unpredictable. And yet we continue to treat it as a classic management tool: a means to optimize, standardize, and measure everything. Here lies the first paradox: by seeking to strengthen control through algorithms, we sometimes lose sight of what control truly means.
The 2025 AI-Ready Governance Report by OneTrust notes that 92% of large organizations plan to increase their AI governance budgets over the next two years. Their top priority: “risk reduction.” But that word, risk, often hides a fear of ambiguity. We want our automated systems to be bounded, auditable, documented; in short, reassuring. Yet, as the ITU’s Annual AI Governance Report 2025 reminds us, algorithmic governance cannot be reduced to compliance. It requires “a capacity for cultural adaptation.” In other words, we must learn to live with a degree of uncertainty.
For decades, companies believed that a good dashboard could steer complexity. But in the age of AI, data is fluid: it is interpreted, reshaped, and fed back into retraining. What was once a reliable indicator can now amplify invisible bias. A study published in Technological Forecasting & Social Change describes how AI systems “learn” our power dynamics and blind spots, then reproduce them at scale. Control, in this context, becomes circular: we monitor models that mirror us.
This phenomenon is already visible in organizations. In some public institutions, AI projects are stalled not because of technology, but because no one knows who controls what. Managers fear losing their grip on automated decisions, while analysts resist carrying sole responsibility for systems they only partially understand. The result is a form of “contactless control”: everything is measured, yet no one feels truly accountable.
Consulting firms are sounding the alarm. The PwC Responsible AI Survey (2025) reveals that 78% of executives see responsible AI governance as a strategic priority, but fewer than a third say they are capable of implementing it effectively. The gap between ethical charters and daily practice is widening.
In the private sector, the temptation is strong to use AI as a lever for constant optimization: productivity, screen time, individual performance. But this obsession with quantified control often produces the opposite effect. As KPMG points out in its Trust, Attitudes and Use of Artificial Intelligence study, trust cannot be decreed through algorithmic precision. It must be built through transparency, understanding, and human accountability.
An article from Corporate Compliance Insights sums up the dilemma in one line: “AI audits numbers, not ethics. Humans must govern.” In other words, the culture of control cannot be delegated to machines. It must evolve into a culture of discernment.
This doesn’t mean abandoning rigor; it means shifting the center of gravity. Moving from compliance control to meaning control. Asking not only “is it accurate?” but “is it right?” This is a quiet but essential transition: the future of algorithmic governance will depend less on the quality of audits than on the ethical maturity of organizations.
Leaders who understand this now speak of “operational trust” — the confidence that allows us to delegate without disengaging, to rely on AI while maintaining control over purpose. It rests on three pillars: transparency (explaining how and why a model acts), proportionality (not automating simply because we can), and reflexivity (being willing to question the outputs).
In a world where algorithms claim to help us decide better, the real question becomes: will we still know how to doubt? Because freedom, in business as elsewhere, is not measured by what we control — but by what we understand.
OneTrust, 2025 AI-Ready Governance Report. https://www.onetrust.com/resources/2025-ai-ready-governance-report/
ITU, Annual AI Governance Report 2025 – Steering the Future of AI. https://www.itu.int/epublications/en/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai/en
Technological Forecasting & Social Change, “Responsible artificial intelligence governance: A review”, 2024. https://www.sciencedirect.com/science/article/pii/S0963868724000672
PwC, Responsible AI Survey 2025. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html
KPMG, Trust, Attitudes and Use of Artificial Intelligence: A Global Study, 2024. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html
Corporate Compliance Insights, “AI Audits Numbers, Not Ethics: Why Humans Must Govern”, 2025. https://www.corporatecomplianceinsights.com/ai-audits-numbers-not-ethics-humans-must-govern/
