
Who owns AI governance and risk?


When an AI-driven decision produces an outcome that no one is comfortable defending, something revealing happens inside organisations. Conversations quickly shift away from what the system recommended and toward who approved it, who relied on it, and who is ultimately responsible for the consequences. In that moment, the technology fades into the background, and questions of ownership move front and centre.

As AI systems begin to influence credit decisions, customer interactions, recruitment choices, and operational priorities, they quietly reshape how responsibility is distributed. Decisions still carry consequences, but the chain of accountability is no longer obvious. When outcomes are positive, AI is credited with efficiency and insight. When they are not, responsibility becomes harder to locate.

In many organisations, this ambiguity is not accidental. AI initiatives are often introduced as technical enhancements rather than organisational systems. Responsibility is spread across IT teams, external vendors, business units, and compliance functions, with no single group clearly accountable for outcomes. For a while, this appears to work: early results look promising, and difficult questions can be postponed. Research and our experience suggest this is precisely where risk accumulates.

A recent systematic review of AI governance research, published in the journal “AI and Ethics”, examined how organisations assign responsibility for AI decisions and risks. The authors, Batool, Zowghi and Bano, found a recurring pattern across industries and regions: governance failures rarely stem from flawed algorithms. Instead, they arise because ownership of decision-making and risk is unclear. Responsibilities are fragmented, escalation paths are weak, and governance mechanisms are often introduced only after something has gone wrong. Organisations, in effect, adopt AI faster than they decide who is accountable for its consequences.

This insight aligns closely with what practitioners are observing. Writing in Harvard Business Review, Michael Wade and Tomoko Yokoi examine how organisations attempt to implement AI responsibly, drawing on the experience of Deutsche Telekom. One of their central observations is that responsible AI cannot be achieved through ethical statements or technical controls alone. It requires leadership and ownership. In the Deutsche Telekom case, senior executives took responsibility for defining principles, clarifying decision rights, and ensuring that governance was embedded throughout the AI lifecycle. Governance was treated as a leadership obligation, not a technical afterthought.

This distinction is particularly relevant in Nigeria and across much of Africa. Organisations are eager to harness AI, but they often operate within complex institutional environments. Regulatory frameworks are still evolving, internal controls are uneven, and pressure to deliver short-term results is intense. In such contexts, AI tools may be adopted opportunistically, with the assumption that existing governance arrangements will somehow stretch or catch up to accommodate them. Evidence suggests this assumption is risky.

Another perspective comes from MIT Sloan Management Review, which highlights how governance challenges often emerge not at the board or executive level, but at the point of execution. Even where organisations articulate high-level principles for AI use, real decisions about how systems are applied are made by managers and frontline professionals. When these teams lack clear guidance on when to rely on AI, when to override it, and how those overrides are reviewed, governance becomes inconsistent and opaque.

This pattern is easy to recognise locally. Consider a financial institution deploying an AI-based risk model. Officially, the system supports decision-making. In practice, relationship managers override recommendations to meet targets, while committees rely on intuition when outcomes feel uncomfortable. These interventions are rarely documented or examined systematically. Over time, leadership loses visibility into how decisions are being made, and the organisation becomes exposed to risks it does not fully understand.

What these studies collectively reveal is that AI governance is not primarily about technology. It is about ownership. Boards and senior executives must take responsibility for setting boundaries and defining acceptable trade-offs. Management must ensure that incentives do not quietly undermine responsible use. Teams must understand their decision rights and obligations. Without this clarity, AI systems tend to amplify existing organisational weaknesses rather than correct them.

Even in the public sector, there are documented cases of such governance challenges. For example, in the Netherlands, a government system designed to detect welfare fraud was struck down by the courts in 2020 because its operation was opaque and no individual or team could clearly explain or justify the decisions it produced. Similarly, in the UK, a standardisation algorithm used to assign exam grades during the pandemic produced widely criticised outcomes, prompting a government reversal and raising questions about oversight and accountability in public systems.

None of this should make us sceptical about AI itself, and it is worth stressing that governance does not slow innovation. Organisations that define ownership early are better able to scale AI with confidence. They know who can intervene, how risks are surfaced, and how learning occurs when systems fail or are overridden. Governance becomes an enabler of performance, not a constraint on it.

Omagbitse Barrow is a strategy and organisational effectiveness consultant and Chief Executive of the Abuja-based Learning Impact NG.
