Though AI promises untold new efficiencies, many companies have struggled to realise them. That's not because AI isn't capable of achieving great things; rather, it's largely down to the absence of a sensible, comprehensive governance structure put in place before deployment even begins.

Without establishing sovereignty over AI and data, organisations risk losing control over how these assets are used, governed, and protected. That loss of control can lead not only to weak ROI but also to new and unfamiliar legal risks when AI is left unchecked. As such, there are critical governance factors that technology leaders need to take into account when setting out to join the agentic AI gold rush.

A world of intentional machines needs sovereign control 

Agents do not just execute commands; they pursue goals. Without clearly specified guardrails, they can drift from strategic priorities. For instance, a procurement agent in Southeast Asia optimising for cost may enter into agreements that violate European data regulations when its workflow intersects with a CRM agent in the EU. In complex enterprises, where divisions span continents and regulatory regimes, these micro-decisions may collectively undermine corporate strategy.

The core challenge for CIOs overseeing agentic AI deployments will lie in ensuring that agentic decisions remain coherent with enterprise-level intent, without requiring constant human arbitration. This demands new governance models that define strategic guardrails in machine-readable logic and enforce them dynamically across distributed agents.
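
What might "strategic guardrails in machine-readable logic" look like in practice? Below is a minimal, hypothetical sketch in Python: a declarative rule set evaluated before any agent action is committed. All names (AgentAction, Guardrail, the example rules and thresholds) are illustrative assumptions, not a reference to any specific product or the authors' own implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    """A proposed action an agent wants to take (hypothetical schema)."""
    agent_id: str
    region: str             # where the agent operates, e.g. "SEA", "EU"
    data_regions: set[str]  # regions whose data the action touches
    spend_usd: float = 0.0

@dataclass
class Guardrail:
    """A machine-readable policy rule: a predicate plus an explanation."""
    name: str
    violates: Callable[[AgentAction], bool]
    reason: str

# Illustrative enterprise-level guardrails, checked before any commit.
GUARDRAILS = [
    Guardrail(
        name="eu-data-residency",
        violates=lambda a: "EU" in a.data_regions and a.region != "EU",
        reason="EU personal data must be handled by EU-resident agents.",
    ),
    Guardrail(
        name="spend-ceiling",
        violates=lambda a: a.spend_usd > 50_000,
        reason="Commitments above $50k require human sign-off.",
    ),
]

def check(action: AgentAction) -> list[str]:
    """Return the reason for every guardrail the action would violate."""
    return [g.reason for g in GUARDRAILS if g.violates(action)]

# A Southeast Asian procurement agent whose workflow touches EU CRM data.
proposal = AgentAction("procure-sea-01", region="SEA",
                       data_regions={"SEA", "EU"}, spend_usd=12_000)
for reason in check(proposal):
    print("BLOCKED:", reason)  # -> the EU data-residency rule fires
```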

The illusion of automated compliance

Highly regulated sectors like finance, pharmaceuticals, and energy dream of agent-led compliance. An intelligent agent that digests regulations, codifies them, and enforces them would be a compliance officer’s utopia. But regulation is rarely black and white.

Agents in the network, especially those retrained or fine-tuned locally, may fail to grasp the nuance embedded in regulatory thresholds. Worse, their decisions might be logically correct yet legally indefensible. Enterprises risk finding themselves in court defending the ethical judgment of an algorithm.

The answer lies in hybrid intelligence: pairing agents’ speed with human interpretive oversight for edge cases, while developing agentic systems capable of learning the contours of ambiguity.
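
In code, hybrid intelligence often reduces to an escalation rule: act autonomously when the agent's reading of a regulation is unambiguous, defer to a human when it is not. The sketch below assumes a hypothetical classify step that returns a confidence score; the stub, the threshold, and all names are illustrative, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class RulingRequest:
    case_id: str
    summary: str

def classify(req: RulingRequest) -> tuple[str, float]:
    """Hypothetical model call returning (decision, confidence in [0, 1]).
    Stubbed with a fixed value so the sketch runs standalone."""
    return ("approve", 0.62)

CONFIDENCE_FLOOR = 0.90  # illustrative threshold for autonomous action

def decide(req: RulingRequest) -> str:
    decision, confidence = classify(req)
    if confidence >= CONFIDENCE_FLOOR:
        return f"auto:{decision}"        # agent acts on its own
    # Ambiguous edge case: route to a human reviewer with full context.
    return f"escalate:{req.case_id}"     # human interpretive oversight

print(decide(RulingRequest("C-1042", "Borderline disclosure threshold")))
# -> escalate:C-1042 (confidence 0.62 falls below the floor)
```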

Who is responsible in a machine-to-machine world? 

In a traditional enterprise, responsibility is traceable through hierarchy and human signatures. In a world of agentic networks, actions are passed from one software entity to another. A customer dispute might begin with a conversational agent, escalate to a pricing model, trigger a logistics rerouting, and result in a regulatory violation. Current legal frameworks are ill-equipped to assign accountability across such non-human causal chains.

This introduces the urgent need for agent identity frameworks. Each agent must carry a verifiable digital identity, with logs of decisions, justifications, and downstream impacts. CIOs should think of this as a blockchain of machine intent, not to decentralise, but to preserve integrity.
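
A "blockchain of machine intent, not to decentralise, but to preserve integrity" can be as simple as a hash-chained audit log: each entry commits to the one before it, so any retroactive edit to a past decision is detectable. A minimal sketch, with every field name assumed for illustration:

```python
import hashlib
import json
import time

def append_entry(log: list[dict], agent_id: str, decision: str,
                 justification: str) -> dict:
    """Append a tamper-evident record of one agent decision.
    Each entry hashes the previous entry, forming a chain."""
    prev_hash = log[-1]["hash"] if log else "GENESIS"
    body = {
        "agent_id": agent_id,
        "decision": decision,
        "justification": justification,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev = "GENESIS"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "pricing-eu-02", "discount-7pc", "volume tier matched")
append_entry(log, "logistics-03", "reroute-hub-B", "port congestion")
print(verify(log))                      # True: chain intact
log[0]["decision"] = "discount-40pc"    # attempted tampering...
print(verify(log))                      # False: the chain exposes the edit
```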

Cross-border chaos: when jurisdictions collide in AI governance

For global firms, regulatory complexity compounds. An HR agent trained in the United States may unknowingly process employee data in ways that breach Brazil's LGPD or Germany's works council protections. Agents and data flow fluidly across borders; the law does not. Jurisdiction-aware agent design therefore becomes essential.

Enterprises must build policy meshes that understand where an agent operates, which laws apply, and how consent and access should behave across borders. Without this, global companies risk creating algorithmic structures that are legal in no country at all.
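
A policy mesh can start as a mapping from jurisdiction to obligations, consulted wherever an agent touches data. The sketch below is a deliberately simplified assumption of how such a lookup might behave; real regimes such as the GDPR and LGPD are far richer than these toy flags, and the merge rule shown (strictest obligation wins) is one possible design, not the only one.

```python
# Hypothetical, simplified policy mesh: jurisdiction -> obligations.
POLICY_MESH = {
    "EU": {"consent_required": True,  "cross_border_transfer": "adequacy"},
    "BR": {"consent_required": True,  "cross_border_transfer": "adequacy"},
    "US": {"consent_required": False, "cross_border_transfer": "open"},
}

def applicable_policies(agent_region: str, subject_regions: set[str]) -> dict:
    """Merge obligations from everywhere the data subjects live, plus
    where the agent itself operates -- the strictest rule wins."""
    regions = subject_regions | {agent_region}
    merged = {"consent_required": False, "cross_border_transfer": "open"}
    for r in regions:
        policy = POLICY_MESH.get(r, {})
        merged["consent_required"] |= policy.get("consent_required", False)
        if policy.get("cross_border_transfer") == "adequacy":
            merged["cross_border_transfer"] = "adequacy"
    return merged

# A US-trained HR agent processing Brazilian and EU employee records.
print(applicable_policies("US", {"BR", "EU"}))
# -> {'consent_required': True, 'cross_border_transfer': 'adequacy'}
```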

In regulated industries, ethical norms require human accountability. Yet agent-to-agent systems inherently reduce the role of the human operator. This may lead to catastrophic oversights, even if every agent performs within parameters. Therefore, executives must delineate “red line” decisions: moments when agents must pause, escalate, or seek human sign-off.
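
Red lines can be encoded as a category check that runs before any autonomous action: certain classes of decision always pause and wait for human sign-off, regardless of the agent's confidence. A brief illustrative sketch, with category names assumed; each enterprise would define its own list.

```python
# Decision classes that must never be taken machine-to-machine alone
# (illustrative examples only).
RED_LINES = {"terminate_employee", "regulatory_filing", "credit_denial"}

def route(action_class: str) -> str:
    """Gate every proposed action on the red-line list first."""
    if action_class in RED_LINES:
        return "pause_and_escalate"    # human sign-off required
    return "proceed_autonomously"

print(route("reprice_sku"))            # proceed_autonomously
print(route("regulatory_filing"))      # pause_and_escalate
```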

This is not about distrusting machines. It is about preserving legitimacy, especially when the consequences are human. We have already seen the economic, innovation, and efficiency advantages that flow from treating an organisation's AI and data as mission-critical. Organisations that get it right will have systems that are faster, more compliant, and more adaptive than the competition, creating a flywheel effect as secure, compliant agents work seamlessly together around the clock.

Rob Feldman and Jozef de Vries are, respectively, EnterpriseDB’s chief legal officer and chief product engineering officer.
