The introduction of Agentic AI, or autonomous systems empowered to execute decisions without direct human oversight, is transforming enterprise leadership in profound ways. As organisations delegate increasing operational and strategic responsibilities to these AI agents, a fundamental question emerges: Who truly holds the reins of enterprise decision-making? What is developing is a complex interplay between human leadership, organisational structures, and advanced technologies, one that distributes accountability across all three.
Historically, AI in business has served as a decision support tool, analysing data, identifying trends, and providing recommendations for human leaders to consider. The rise of Agentic AI, however, marks a major shift. These agents "do." They carry out complex, multi-step tasks independently, whether it's optimising logistics, executing financial trades, or managing HR functions. The speed and scale at which these systems can operate create urgent new governance challenges.
Why does this matter? Human error is inevitable, but even the most error-prone employee cannot make thousands of mistakes every minute. That is precisely the risk with Agentic AI. Autonomous agents, powered by relentless automation, can compound errors -- rapidly propagating financial losses, compliance breaches, or reputational harm before a human manager even knows there's a problem. The problem will magnify rapidly for enterprises in the coming months: the latest Gartner forecast is that 40 per cent of enterprise apps will feature task-specific AI agents by 2026, up from less than 5 per cent this year.
By next year, the main governance question will no longer be, "Why did the AI give bad advice?" but rather, "Should the AI have been allowed to act at all?" This shift will drive calls for new oversight structures and risk controls suited to fast, high-impact AI autonomy. And while boards and regulators struggle to keep pace with developments in AI, a more difficult challenge looms from a business perspective: the insurance industry must respond quickly to new and emerging AI-related risks.
Navigating Distributed Responsibility
Contrary to the science fiction scenarios in most of our minds, Agentic AI does not actually "run" the enterprise on its own. Final responsibility remains with human actors. But the responsibility is distributed across developers, deployers, operators, and organisational leaders. Each group plays a specific part in designing, implementing, monitoring, and responding to AI-driven actions.
To address these challenges, leading organisations are adopting multi-tiered governance models. Examples include cross-functional oversight bodies, the deployment of Responsible, Accountable, Consulted, and Informed (RACI) frameworks, and the establishment of board-level AI committees to ensure clarity at every phase -- from conception through deployment and on to ongoing operations.
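As a simple illustration, a RACI assignment can be captured as structured data that governance tooling and auditors can query; the lifecycle phases and role names in this sketch are hypothetical, not a prescribed standard.

```python
# Hypothetical RACI matrix for an AI agent's lifecycle, expressed as data so
# that governance tooling can validate and report on it. Phase and role names
# are illustrative only.
RACI_MATRIX = {
    "design": {"responsible": "ml_engineering", "accountable": "head_of_ai",
               "consulted": ["legal", "security"], "informed": ["board_ai_committee"]},
    "deployment": {"responsible": "platform_ops", "accountable": "cio",
                   "consulted": ["risk_management"], "informed": ["board_ai_committee"]},
    "operation": {"responsible": "business_owner", "accountable": "coo",
                  "consulted": ["compliance"], "informed": ["internal_audit"]},
}

def accountable_for(phase: str) -> str:
    """Return the single role accountable for a given lifecycle phase."""
    return RACI_MATRIX[phase]["accountable"]

print(accountable_for("deployment"))  # -> "cio"
```

Expressing the matrix as data, rather than as a slide, makes it straightforward to flag any phase that lacks a single accountable owner.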
Modern enterprise AI solutions demand increased transparency and traceability. Building robust audit structures, including explainable AI modules and comprehensive logging, is crucial for reconstructing decision paths and facilitating post-hoc reviews. This drive for greater auditability reflects broader calls across industries to ensure that each AI action is understandable, justifiable, and attributable to specific sources.
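A minimal sketch of that pattern, assuming a generic Python agent and using hypothetical field names, wraps each autonomous action in a structured audit record covering inputs, rationale, outcome, and the accountable owner:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_audit")

def audited_action(agent_id, accountable_owner, action, rationale, execute):
    """Run an agent action and emit a structured, attributable audit record.

    `execute` is any zero-argument callable that performs the action; the
    remaining fields are illustrative metadata an enterprise might require.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "accountable_owner": accountable_owner,
        "action": action,
        "rationale": rationale,
    }
    try:
        record["result"] = execute()
        record["status"] = "success"
    except Exception as exc:
        record["status"] = "error"
        record["error"] = str(exc)
        raise
    finally:
        # The audit trail is written whether the action succeeds or fails.
        logger.info(json.dumps(record, default=str))
    return record["result"]

# Example: an auditable (hypothetical) pricing adjustment.
audited_action("pricing-agent-7", "head_of_revenue", "adjust_price",
               "demand spike detected in region A", lambda: {"new_price": 19.99})
```

Emitting the record on failure as well as success is what makes post-hoc reconstruction of decision paths possible.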
The Moral Crumple Zone
Recent incidents involving autonomous systems highlight an ethical dilemma: frontline operators are often scapegoated for failures that originate deeper within organisational design or governance. A widely cited example dates from 2018, when a pedestrian was killed by an Uber self-driving test vehicle in Tempe, Arizona. Following the incident, the solo safety driver in the vehicle, Rafaela Vasquez, was identified as primarily liable, despite investigations revealing that deeper design and organisational decisions were significant contributing factors.
The incident highlights the need for equitable accountability models that track responsibility back to foundational design decisions and oversight, protecting operational staff from becoming mere "moral crumple zones." This approach encourages organisations to design accountability frameworks that are multi-layered, resilient, and fair.
Even as regulators try to address these challenges, new ones are emerging for enterprises. As regulators worldwide establish new standards for AI accountability, enterprises face an increasingly complex and fragmented legal landscape. The European Union's AI Act emphasises risk-based obligations and mandates human oversight of high-risk applications, while US regulators adopt a sector-specific approach.
China, the UK, and Japan are developing their own regulatory standards, resulting in a complex web that multinational organisations must navigate. Regulatory compliance, while fundamental, is only the starting point; building trust and resilience requires a commitment to robust internal governance and ethical leadership.
The Business Anxiety
The insurability of Agentic AI risk constitutes one of the most significant challenges in modern enterprise risk management. Traditional insurance models, rooted in human error and historical data, struggle to accommodate the sheer scale and pace of autonomous system failures. For insurers, opaque "black box" decision processes and the unpredictability of algorithmic learning and bias present substantial obstacles to quantifying risk and assigning liability.
The problem is that AI-driven risks are fundamentally different: first, the potential for error propagation at machine speed upends conventional actuarial assumptions; second, systemic bias or emergent behaviour may trigger losses beyond the reach of standard coverage; and third, a lack of historical claims data complicates premium pricing and risk modelling.
A new generation of AI insurance products is taking shape, tailored to cover algorithmic bias, operational disruptions, regulatory infractions, and losses directly attributed to autonomous decisions. While traditional policies, such as general liability, are being amended with AI-specific exclusions, a new category of affirmative AI insurance is emerging to cover unique perils.
Insurers are tightening underwriting standards, frequently requiring evidence of strong governance -- such as adherence to frameworks like the National Institute of Standards and Technology's (NIST) AI Risk Management Framework or ISO/IEC 42001 certification -- as prerequisites for coverage. Still, most policies exclude poorly understood risks, underscoring the premium placed on demonstrable enterprise risk controls.
The Future Of Accountability
The insurance industry offers a window into the broader challenges that lie ahead and into the necessity of implementing Agentic AI with clear, distributed accountability within the enterprise. Effective governance now means meticulously crafted responsibility chains, transparent audit trails, and built-in human override mechanisms: human operators should hold a master "kill switch" that can bring AI agents to an instant halt.
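A minimal sketch of such an override, assuming a simple Python task loop and a hypothetical shared halt flag that the agent must check before every action:

```python
import threading

class KillSwitch:
    """Shared halt flag a human operator can trip to stop agent activity."""

    def __init__(self):
        self._halted = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"KILL SWITCH ENGAGED: {reason}")
        self._halted.set()

    def engaged(self) -> bool:
        return self._halted.is_set()

def run_agent(tasks, kill_switch):
    """Process tasks only while the kill switch has not been engaged."""
    for task in tasks:
        if kill_switch.engaged():
            print("Agent halted before executing:", task)
            break
        print("Executing:", task)  # stand-in for the real autonomous action

switch = KillSwitch()
run_agent(["rebalance inventory", "issue refunds"], switch)
switch.trip("anomalous refund volume")  # operator intervention stops further runs
```

The design point is that the check happens before each action, not after a batch completes, so a human decision can take effect within a single step of the agent's loop.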
Insurers, risk managers, and organisational leaders must collaborate to navigate the unique nature of AI-driven errors, recognising that ultimate accountability, although distributed, remains fundamentally human. The point bears stating plainly: Agentic AI does not remove humans from the enterprise; it makes smarter governance, clearer oversight, and stronger risk controls, powered by technology and collaboration, more essential than ever.