Agentic AI: Why Governance Can’t Wait

As AI evolves, so too does the challenge of managing it.
Much of today’s enterprise AI use centres on ‘zero-shot’ models – tools that respond to prompts within defined boundaries. But a new layer of complexity is emerging as organisations experiment with agentic systems: AI tools (autonomous agents) that can initiate tasks, adapt strategies and, in some cases, coordinate with other agents or external systems. While that capability promises faster workflows, it also creates legal and operational risks that are harder to govern – especially when organisations have a poor understanding of how these systems behave.
That combination of autonomy and uncertainty is what makes agentic AI a risk amplifier. As agents begin interacting with third-party platforms, liability and accountability can become difficult to assign – particularly when vendors disclaim responsibility for how their agent tools are used.
In this context, governance becomes more than a compliance obligation; it is a structural safeguard. Without it, organisations risk being blindsided by systems they do not fully understand and cannot track.
“You can’t deploy and use AI on shifting sands. If you don’t have a clear vision of what’s being used, and defined governance guardrails around it, you face a risk chain reaction. This is especially the case with agentic AI.”
Tamara Quinn, Director – AI, Data & IP Knowledge, Osborne Clarke UK
Current Agentic AI Use Cases
Potential may drive headlines, but practical deployments are quietly taking shape.
Agentic AIOps
Agents monitor and manage complex IT environments, predicting failure points and resolving issues without human intervention.
Customer Experience and Call Centre Management
Agents handle customer queries and execute resolution pathways, improving satisfaction and operational efficiency.
Autonomous Drug Discovery
In biomedical research, agentic systems analyse pharmacological data, simulate responses and apply adaptive logic to accelerate discovery.
Procurement and Supply Chain Management
Agents are beginning to take over routine procurement tasks and are being trialled to reroute supply chains and optimise logistics.
What Makes an AI Agent Truly ‘Agentic’?

The term ‘AI agent’ is often misused to describe basic LLM wrappers or scripted tools that simply coordinate system calls. However, the classic definition remains unchanged – an AI agent is an autonomous system that sets goals, determines actions and executes tasks while continuously learning and adapting without human intervention. In enterprise settings, agents go beyond automation, demonstrating contextual reasoning, adapting to unforeseen challenges and dynamically adjusting plans to succeed in complex environments.
Emergence AI recently unveiled the first demonstration of AI agents autonomously creating other agents and multi-agent systems in real time to successfully complete enterprise tasks. Though still early, this capability is expected to advance quickly, enabling the automatic creation of increasingly sophisticated agents and multi-agent systems interacting with one another across a growing landscape of enterprise challenges.
From Legal Burden to Business Essential
AI governance has often been treated as a downstream activity: a policy layer applied after deployment to satisfy regulatory expectations. But that model is becoming increasingly unsustainable, with agentic capabilities already surfacing across enterprise environments.
“Given current adoption trajectories, where 50% of enterprises have already deployed AI agents and another 32% plan to do so within a year, mainstream adoption of agentic AI tools is rapidly approaching. We believe the space is set to move even faster.”
Satya Nitta, Co-founder and CEO, Emergence AI
This represents a shift in enterprise risk, with even partial agentic functionality introducing accountability gaps and blurring the lines of liability. Many vendors are developing platforms that enable transparency and control, but those capabilities often depend on how businesses configure and oversee them.
As a result, the responsibility for outcomes is increasingly resting with the enterprise, not the provider. And this is happening against a backdrop of intense market pressure to adopt and upgrade quickly, often without the luxury of careful planning.
“The pressure to adopt and upgrade is relentless. While the need to be agile is obvious, it also means that having your governance in place now is essential.”
Łukasz Węgrzyn, Partner, Osborne Clarke Poland
The inevitable conclusion is that AI governance needs to move from compliance safeguard to operational backbone. That means not just documenting policies, but building a structural framework – one that integrates usage rules, ethical guardrails, staff training and escalation protocols. The next step is ensuring those structures are visible and understood across the business.
No Visibility Equals No Governance
Addressing gaps in visibility requires a structured understanding of the tools in use, who is using them, for what purpose and how they interact with other systems. Striking the right balance between incentivising good behaviour and outright prohibition is also critical.
Mapping also serves as a diagnostic tool, revealing patterns such as which teams are adopting AI first and where the pressure to experiment is strongest. This process shows whether governance frameworks are aligned with how AI is actually being used, or whether they are operating on outdated assumptions. Without continuous visibility, even the best-designed policies risk drifting out of sync with reality.
“Without a clear inventory of AI tools and use cases, businesses risk designing governance frameworks in a vacuum. These frameworks may fail under scrutiny, or worse, create a false sense of compliance.”
Adrian Schneider, Partner, Osborne Clarke Germany
The Shadow Adoption Problem
How inadvertent risks are created when agentic features are adopted ‘under the radar’.
Agentic AI features are already being bundled into enterprise software, often without clear oversight. This kind of shadow adoption – where tools enter through procurement, partnerships or software updates – creates risk from within.
Failing to provide staff with an authorised enterprise tool risks unauthorised and clandestine use on personal devices.
Without visibility into how these systems are used or by whom, governance is undermined before it begins.
The Governance Steps You Can’t Ignore
Without visibility, even the best governance plan will fail.
Audit
Begin with a full audit of your AI tools, including usage behaviours (who is using them and for what purpose).
Identify
Identify how those tools interact with your internal systems and external data.
Evaluate
Evaluate which use cases introduce the greatest data exposure or operational risk.
Frame
Use that insight to frame guardrails and escalation pathways; a simple sketch of what an audit inventory record might capture follows below.
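By way of illustration only – and assuming an organisation keeps its AI register in a simple structured form – the sketch below shows the kind of fields an audit record might capture. The field names, risk tiers and example entry are hypothetical, not a prescribed standard.

# Illustrative sketch of a minimal AI tool inventory record (hypothetical fields)
from dataclasses import dataclass
from typing import List

@dataclass
class AIToolRecord:
    name: str                      # the tool or embedded agentic feature
    owner_team: str                # who is accountable for its use
    users: List[str]               # teams or roles actually using it
    purpose: str                   # the business task it performs
    connected_systems: List[str]   # internal systems and external data it touches
    processes_personal_data: bool  # flags GDPR/CCPA exposure
    autonomy_level: str            # e.g. "assistive", "semi-autonomous" or "agentic"
    risk_tier: str                 # outcome of the evaluation step
    escalation_owner: str          # who is alerted when guardrails are breached

# Example entry produced during an audit
register = [
    AIToolRecord(
        name="Vendor call-centre agent",
        owner_team="Customer Operations",
        users=["Support agents"],
        purpose="Resolve routine customer queries",
        connected_systems=["CRM", "Ticketing"],
        processes_personal_data=True,
        autonomy_level="agentic",
        risk_tier="high",
        escalation_owner="Head of Customer Operations",
    )
]

A register of this kind makes the audit, identify and evaluate steps repeatable, and gives the framing step something concrete to attach guardrails and escalation pathways to.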
Raising the Floor, Not Just the Ceiling
AI governance often focuses on high-stakes use cases and advanced model oversight, but the real risk is more widespread. Governance fails when employees do not understand the tools they are using or the risks they introduce.
These risks are not theoretical. Real failures are emerging – not from malice, but from everyday misunderstandings. Sensitive client data pasted into public LLMs, unvetted plugins and unflagged AI outputs in regulated workflows are already appearing across professional settings.
“An organisation is the sum of its parts – and that collective needs to understand the risk. If awareness is limited to a few, the whole organisation is compromised.”
John Buyers, Partner and Co-head of Osborne Clarke’s International AI Service team
Regulators are starting to respond, with the EU AI Act requiring both providers and deployers to ensure staff possess adequate AI literacy. This is broadly defined as the knowledge and skills needed to make informed decisions about AI use and its potential impact. These responsibilities cannot be delegated, and businesses remain accountable for ensuring their staff can identify and manage the risks AI introduces into daily operations.
Meeting that standard requires a structured training and literacy approach. General users need lightweight onboarding to cover responsible use, common risks and data boundaries. Those designing or embedding AI into business processes need deeper, role-specific training. Regardless of the training adopted, regular testing is essential to confirm that staff can act on what they have learned, not just recall it.
Don’t Mistake Delay for Safety
The EU AI Act is now in force: some use cases are already prohibited, obligations for general-purpose AI models apply from August 2025 and core obligations for high-risk systems follow from August 2026. But questions remain about how, and how aggressively, those obligations will be enforced. The EU’s enforcement stance is being shaped in part by transatlantic dynamics, with the US embracing a deregulatory agenda that places pressure on EU and UK policymakers to prioritise innovation over early intervention. However, this should not distract from the essential fact that the EU AI Act is law.
Of course, legal exposure is not limited to new laws, with existing regimes already applying to many AI-related activities. Data protection obligations, such as those under the GDPR in the EU and UK and the CCPA in California, still apply to any AI system that processes personal data or makes automated decisions about individuals.
The use of copyrighted material in model training, and the originality of AI-generated outputs, continues to raise unresolved intellectual property questions. Consumer protection and anti-discrimination rules remain in force, especially for B2C applications. In regulated industries such as finance or healthcare, AI use may also trigger sector-specific compliance obligations.
A phased approach to enforcement does not mean businesses can wait. Governance remains essential to managing risk under law – and to preparing for what is coming next, including agentic AI.
Who’s Responsible When AI Fails?
As with intellectual property, AI liability is a constantly evolving area. In the B2C arena, much of the EU’s digital regulatory agenda – including the EU AI Act – is focused on protecting consumer rights. In B2B settings, the picture is less clear, and technologies such as agentic AI only reinforce that uncertainty.
The key issue is how responsibility should be divided between those who build the tools and those who use them. Platform providers may develop the technology, but enterprise users must understand the markets they operate in, the regulatory frameworks that apply and the ethical implications of deploying autonomous systems. Some vendors are now framing deployment as a shared responsibility, offering compliance tooling while placing ultimate accountability with the user.
“Agentic AI deployment is a shared responsibility. We design our platform with regulatory needs in mind, providing audit and policy tools to help customers meet compliance in their specific contexts.”
Satya Nitta, Co-founder and CEO, Emergence AI

Lead with Governance
AI governance is no longer a downstream fix – it is becoming the infrastructure that enables safe, scalable innovation. For forward-looking organisations, and those that are heavily regulated, it is as essential as the tools themselves.
“Agentic AI is not just another tech trend, it marks the beginning of a seismic shift. Organisations that harness its potential now will unlock intelligent automation, scalable innovation and new forms of efficiency.”
Satya Nitta, Co-founder and CEO, Emergence AI
That shift is already underway, with agentic features entering businesses faster than many can govern them. Even the perception of autonomy is enough to create legal and reputational exposure. Waiting for legal clarity or technical maturity will not shield businesses from the risks already forming around them.
The organisations best positioned to navigate this moment are not those with the most advanced tools, but those with a clear line of sight into AI use and an enabling governance structure ready to manage it.

“We’re already witnessing analysis paralysis in the enterprise community. Simply put, indecision and uncertainty will act as blockers to you fully embracing the potential of agentic AI. A failure to implement clear AI governance means you risk being left behind as your competitors race ahead.”
John Buyers, Partner and Co-head of Osborne Clarke’s International AI Service team
Contributors
We would like to thank these individuals for sharing their insight and experience on this topic.




