
From Data Governance to AI Governance

4 min read · Aug 10, 2025

A non-linear extension.

Photo by A Chosen Soul on Unsplash

Background

Back in March 2024, I wrote a contrarian article titled “The Problem with Data Governance”, arguing that the role of the Data Governance Officer had become disingenuous in its modern construct (reshaped to survive). Little did I know that the article would turn out to be my second most-read on Medium to date. My tone was adversarial and the response was polarising. I hadn’t realised that the data governance community was so large across the world! But it opened up an interesting vein of discussion. While that article did cover the data management aspects of AI at the time, fast-forward 1.5 years and we now stand at the precipice of another potentially crushing tsunami called Agentic AI. The topic of governance is once again not far from my thoughts.

And so I dedicate my 103rd article to a discourse on the shape of “Data & AI Governance”, and what it might mean for the future of data governance officers.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

Governance

In the context of the corporate world, governance is a framework of processes, functions, structures, and rules applied to how a job gets done. Data Governance is described as the framework for managing the availability, usability, integrity, and security of the data in enterprise systems. Onto this framework has been added AI/ML model lifecycle management, and the ethical and responsible use of AI/ML solutions. With the introduction of Gen AI, further obligations were added to ensure that a Gen AI solution meets certain minimal expectations in:

  1. Safety — e.g. avoiding toxic, adult-oriented or biased outputs
  2. Trustworthiness — e.g. guarding against hallucinations and misinformation
  3. Reliability — e.g. producing consistent and repeatable outputs
  4. Robustness — e.g. handling edge cases such as token limits that may cause an LLM to stop working
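
To make these expectations concrete, here is a minimal, hypothetical sketch of how such checks might be automated in an evaluation harness. Everything here is invented for illustration: `fake_llm` is a deterministic stand-in for a real model call, the block-list is a toy, and a real trustworthiness (hallucination) check would require ground-truth data, so it is omitted.

```python
# Illustrative only: a toy harness screening outputs against three of the
# four expectations above (safety, reliability, robustness).

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; deterministic by design."""
    return f"Response to: {prompt}"

BLOCKED_TERMS = {"toxic", "adult"}  # toy safety block-list, not a real filter

def check_safety(output: str) -> bool:
    """Safety: output contains no blocked terms."""
    return not any(term in output.lower() for term in BLOCKED_TERMS)

def check_reliability(prompt: str, runs: int = 3) -> bool:
    """Reliability: repeated calls on the same prompt agree."""
    outputs = {fake_llm(prompt) for _ in range(runs)}
    return len(outputs) == 1

def check_robustness(prompt: str, max_tokens: int = 512) -> bool:
    """Robustness: prompt stays within a (rough, word-based) token budget."""
    return len(prompt.split()) <= max_tokens

def evaluate(prompt: str) -> dict:
    """Run all checks and report pass/fail per expectation."""
    output = fake_llm(prompt)
    return {
        "safety": check_safety(output),
        "reliability": check_reliability(prompt),
        "robustness": check_robustness(prompt),
    }
```

In a real deployment these checks would be far more sophisticated (classifier-based safety filters, semantic-similarity comparisons for reliability), but the shape of the harness — a battery of pass/fail gates applied before an output is released — is the point.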

In many respects, Gen AI governance is a logical extension of Predictive AI (i.e. AI/ML) governance. However, the arrival of Agentic AI has created a dislocation in the governance framework. Agentic AI governance cannot be seen as a logical extension of Gen AI governance. It is far more complicated.

The Agent in the Middle

In the case of Gen AI, the responsibility for safety rests mostly with the maker/provider of the LLM. Most of us are simply re-purposing these models; fine-tuning and RAG require only a “topping up” of safety evaluations. But safety and trustworthiness mean something different in the context of Agentic AI. Agentic AI is goal-oriented, and research has shown that these agents may achieve their goals in undesirable ways (e.g. hijacking and changing the email account of an employee when the agent can’t find the target email address to send a message to). These AI agents possess no moral code, no ethics, no common sense. Most importantly, they have no fear of reprisal.

At the heart of it, Agentic AI is about replacing the man-in-the-middle with the agent-in-the-middle. We have taken a simplistic reductionist approach by viewing the man-in-the-middle as a collection of non-deterministic tasks. The translation of non-determinism into determinism is non-linear and non-unique, deeply driven by choice-making under various influences and parameters. Despite our differences in morality and ethics, the fear of reprisal is universally understood. It’s a survival instinct. I can put myself in the shoes of the man-in-the-middle and figure out what they might do in such and such a situation, constrained by that fear. But I can’t imagine being an AI agent. Building governing frameworks in the absence of a first-person perspective is fraught with danger … and failure. There is obviously work being done on emerging frameworks for governing Agentic AI; they sound clever (and well thought-through) on paper, but are operationally impossible to execute at present.

The Governance Job

Frameworks aside, who exactly is going to do all this “governing”? Data Governance Officers are already ill-equipped to take on the additional responsibility of governing AI. Going from data quality to understanding how uncertainties in decision-making are mitigated by AI/ML is a non-linear extension. Going from uncertainties in decision-making to AI interaction culture is a stretch! Even Chief Analytics Officers and Chief Information Officers will struggle to double-hat as AI Governance Officers. When the (AI) agent-in-the-middle becomes ubiquitous in organisations, a new job role will probably need to be created, assuming we can land on its scope of work. And who should that role report to? An AI Resource Officer (instead of a Human Resource Officer)?

Conclusion

Strategic frameworks. Operational frameworks. Roles and responsibilities. Skills and competencies. Everything’s up for grabs in the age of AI, and the work of governance just got exponentially more difficult. It’s not going to be about compliance; it’s about the ability to navigate ambiguity. Where will we find the talent?


Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
