NEXT IN AI
7 min read

Agentic AI: Why Governance Can’t Wait

Published on 11 June 2025
Last updated on 11 June 2025

As AI evolves, so too does the challenge of managing it

Much of today’s enterprise AI use centres on ‘zero-shot’ models – tools that respond to prompts within defined boundaries. But a new layer of complexity is emerging as organisations experiment with agentic systems: AI tools (autonomous agents) that can initiate tasks, adapt strategies and, in some cases, coordinate with other agents or external systems. While that capability promises faster workflows, it also creates legal and operational risks that are harder to govern – especially when organisations have a poor understanding of how these systems behave.

That combination of autonomy and uncertainty is what makes agentic AI a risk amplifier. As agents begin interacting with third-party platforms, liability and accountability can become difficult to assign – particularly when vendors disclaim responsibility for how their agent tools are used.

In this context, governance becomes more than a compliance obligation; it is a structural safeguard. Without it, organisations risk being blindsided by systems they do not fully understand or cannot track.

“You can’t deploy and use AI on shifting sands. If you don’t have a clear vision of what’s being used, and defined governance guardrails around it, you face a risk chain reaction. This is especially the case with agentic AI.”

Tamara Quinn, Director – AI, Data & IP Knowledge, Osborne Clarke UK

ADDITIONAL READING

Current Agentic AI Use Cases

Potential may drive headlines, but practical deployments are quietly taking shape.

Agentic AIOps

Agents monitor and manage complex IT environments, predicting failure points and resolving issues without human intervention.

Customer Experience and Call Centre Management

Agents handle customer queries and execute resolution pathways, improving satisfaction and operational efficiency.

Autonomous Drug Discovery

In biomedical research, agentic systems analyse pharmacological data, simulate responses and apply adaptive logic to accelerate discovery.

Procurement and Supply Chain Management

Agents are beginning to take over routine procurement tasks and are being trialled to reroute supply chains and optimise logistics.

What Makes an AI Agent Truly ‘Agentic’?

By Satya Nitta
Co-founder and CEO, Emergence AI

The term ‘AI agent’ is often misused to describe basic LLM wrappers or scripted tools that simply coordinate system calls. However, the classic definition remains unchanged – an AI agent is an autonomous system that sets goals, determines actions and executes tasks while continuously learning and adapting without human intervention. In the enterprise, agents go beyond automation, demonstrating contextual reasoning, adapting to unforeseen challenges and dynamically adjusting plans to succeed in complex environments.
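
To make that definition concrete, the sketch below shows the goal-plan-act-adapt loop in miniature. It is purely illustrative – every name is a hypothetical stub standing in for real model and tool calls – and is not Emergence AI’s architecture or any particular vendor’s implementation.

    # Minimal, illustrative goal -> plan -> act -> adapt loop.
    # Every name here is a hypothetical stub, not a real agent framework.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        goal: str
        memory: list = field(default_factory=list)  # accumulated observations

        def plan_next(self) -> str:
            # A real agent would call a planning model; here the next step
            # simply depends on what has been observed so far.
            return f"step {len(self.memory) + 1} towards: {self.goal}"

        def act(self, step: str) -> str:
            # Execute the step (a tool call, API request, ...) and return an observation.
            return f"result of {step}"

        def run(self, max_steps: int = 3) -> list:
            for _ in range(max_steps):
                step = self.plan_next()          # determine the next action
                observation = self.act(step)     # execute it
                self.memory.append(observation)  # adapt: feed the outcome back
            return self.memory

    if __name__ == "__main__":
        print(Agent(goal="reconcile supplier invoices").run())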

Emergence AI recently unveiled the first demonstration of AI agents autonomously creating other agents and multi-agent systems in real time to successfully complete enterprise tasks. Though still early, this capability is expected to advance quickly, enabling the automatic creation of increasingly sophisticated agents and multi-agent systems interacting with one another across a growing landscape of enterprise challenges.

No Visibility Equals No Governance

Addressing gaps in visibility requires a structured understanding of the tools in use, who is using them, for what purpose and how they interact with other systems. Balancing incentives that encourage responsible behaviour against outright prohibition is also critical.

Mapping also serves as a diagnostic tool, revealing patterns such as which teams are adopting AI first and where the pressure to experiment is strongest. This process shows whether governance frameworks are aligned with how AI is actually being used, or whether they are operating on outdated assumptions. Without continuous visibility, even the best-designed policies risk drifting out of sync with reality.
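
In practice, that structured understanding can start life as a simple machine-readable register. The sketch below is illustrative only – the field names and example entries are hypothetical, not a prescribed standard – but it captures the dimensions described above: the tool, who is using it, for what purpose, and which systems it touches.

    # Illustrative AI tool register; field names and entries are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AIToolRecord:
        tool: str                    # the tool or embedded agentic feature
        owner_team: str              # who is using it
        purpose: str                 # what it is used for
        integrations: list = field(default_factory=list)  # systems and data it touches
        agentic: bool = False        # can it act autonomously?
        approved: bool = False       # has it passed governance review?

    register = [
        AIToolRecord("contract-summariser", "Legal Ops", "summarise NDAs",
                     integrations=["document store"], approved=True),
        AIToolRecord("procurement-agent", "Procurement", "reorder stock",
                     integrations=["ERP", "supplier portal"], agentic=True),
    ]

    # A simple diagnostic: which agentic tools are in use without approval?
    for record in register:
        if record.agentic and not record.approved:
            print(f"Review needed: {record.tool} ({record.owner_team}) "
                  f"touches {', '.join(record.integrations)}")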

“Without a clear inventory of AI tools and use cases, businesses risk designing governance frameworks in a vacuum. These frameworks may fail under scrutiny, or worse, create a false sense of compliance.”

Adrian Schneider, Partner, Osborne Clarke Germany

ADDITIONAL READING

The Shadow Adoption Problem

How inadvertent risks are created when agentic features are adopted ‘under the radar’.

Agentic AI features are already being bundled into enterprise software, often without clear oversight. This kind of shadow adoption – where tools enter through procurement, partnerships or software updates – creates risk from within.

Failing to provide staff with an authorised enterprise tool risks unauthorised and clandestine use on personal devices.

Without visibility into how these systems are used or by whom, governance is undermined before it begins.

ADDITIONAL READING

The Governance Steps You Can’t Ignore

Without visibility, even the best governance plan will fail.

01

Audit

Begin with a full audit of your AI tools, including usage behaviours (who is using them and for what purpose).

02

Identify

Identify how those tools interact with your internal systems and external data.

03

Evaluate

Evaluate which use cases introduce the greatest data exposure or operational risk.

04

Frame

Use that insight to frame guardrails and escalation pathways.
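
As a simplified illustration of how these steps connect, the sketch below assumes the audit and mapping data (steps 01 and 02) have already been captured, scores each use case for data exposure and autonomy (step 03), and routes the riskiest to an escalation pathway (step 04). The scoring thresholds and guardrail labels are hypothetical, not a compliance standard.

    # Illustrative risk scoring and escalation; thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        handles_personal_data: bool   # captured during the audit (step 01)
        external_integrations: int    # captured during mapping (step 02)
        autonomous_actions: bool      # does it act without human sign-off?

    def risk_score(uc: UseCase) -> int:
        score = 3 if uc.handles_personal_data else 0
        score += min(uc.external_integrations, 3)
        score += 2 if uc.autonomous_actions else 0
        return score

    def guardrail(uc: UseCase) -> str:
        score = risk_score(uc)
        if score >= 6:
            return "escalate to governance board before deployment"
        if score >= 3:
            return "require human review of outputs"
        return "standard monitoring"

    for uc in [
        UseCase("internal FAQ bot", False, 0, False),
        UseCase("autonomous procurement agent", True, 2, True),
    ]:
        print(f"{uc.name}: {guardrail(uc)}")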

Raising the Floor, Not Just the Ceiling

AI governance often focuses on high-stakes use cases and advanced model oversight, but the real risk is more widespread. Governance fails when employees do not understand the tools they are using or the risks they introduce.

These risks are not theoretical. Real failures are emerging – not from malice, but from everyday misunderstandings. Sensitive client data pasted into public LLMs, unvetted plugins and unflagged AI outputs in regulated workflows are already appearing across professional settings.

“An organisation is the sum of its parts – and that collective needs to understand the risk. If awareness is limited to a few, the whole organisation is compromised.”

John Buyers, Partner and Co-head of Osborne Clarke’s International AI Service team

Regulators are starting to respond, with the EU AI Act requiring both providers and deployers to ensure staff possess adequate AI literacy. This is broadly defined as the knowledge and skills needed to make informed decisions about AI use and its potential impact. These responsibilities cannot be delegated, and businesses remain accountable for ensuring their staff can identify and manage the risks AI introduces into daily operations.

Meeting that standard requires a structured training and literacy approach. General users need lightweight onboarding to cover responsible use, common risks and data boundaries. Those designing or embedding AI into business processes need deeper, role-specific training. Regardless of the training adopted, regular testing is essential to confirm that staff can act on what they have learned, not just recall it.

Don’t Mistake Delay for Safety

The EU AI Act is now in force: some use cases are already prohibited, obligations for general-purpose AI models apply from August 2025, and core obligations for high-risk systems follow from August 2026. But questions remain about how, and how aggressively, those obligations will be enforced. The EU’s enforcement stance is being shaped in part by transatlantic dynamics, with the US embracing a deregulatory agenda that places pressure on EU and UK policymakers to prioritise innovation over early intervention. However, this should not distract from the essential fact that the EU AI Act is law.

Of course, legal exposure is not limited to new laws, with existing regimes already applying to many AI-related activities. Data protection obligations, such as those under the GDPR in the EU and UK and the CCPA in California, still apply to any AI system that processes personal data or makes automated decisions about individuals.

The use of copyrighted material in model training, and the originality of AI-generated outputs, continues to raise unresolved intellectual property questions. Consumer protection and anti-discrimination rules remain in force, especially for B2C applications. In regulated industries such as finance or healthcare, AI use may also trigger sector-specific compliance obligations.

A phased approach to enforcement does not mean businesses can wait. Governance remains essential to managing risk under law – and to preparing for what is coming next, including agentic AI.

ADDITIONAL READING

Who’s Responsible When AI Fails?

As with intellectual property, AI liability is a constantly evolving area. In the B2C arena, much of the EU’s digital regulatory agenda – including the EU AI Act – is focused on protecting consumer rights. In B2B settings, the picture is less clear, and technologies such as agentic AI only reinforce that uncertainty.

The key issue is how responsibility should be divided between those who build the tools and those who use them. Platform providers may develop the technology, but enterprise users must understand the markets they operate in, the regulatory frameworks that apply and the ethical implications of deploying autonomous systems. Some vendors now frame deployment as a shared responsibility, offering compliance tooling while placing ultimate accountability with the user.

“Agentic AI deployment is a shared responsibility. We design our platform with regulatory needs in mind, providing audit and policy tools to help customers meet compliance in their specific contexts.”


Satya Nitta, Co-founder and CEO, Emergence AI

Lead with Governance

AI governance is no longer a downstream fix – it is becoming the infrastructure that enables safe, scalable innovation. For forward-looking organisations, and those that are heavily regulated, it is as essential as the tools themselves.

“Agentic AI is not just another tech trend, it marks the beginning of a seismic shift. Organisations that harness its potential now will unlock intelligent automation, scalable innovation and new forms of efficiency.”


Satya Nitta, Co-founder and CEO, Emergence AI

That shift is already underway, with agentic features entering businesses faster than many can govern them. Even the perception of autonomy is enough to create legal and reputational exposure. Waiting for legal clarity or technical maturity will not shield businesses from the risks already forming around them.

The organisations best positioned to navigate this moment are not those with the most advanced tools, but those with a clear line of sight into AI use and an enabling governance structure ready to manage it. 

“We’re already witnessing analysis paralysis in the enterprise community. Simply put, indecision and uncertainty will act as blockers to you fully embracing the potential of agentic AI. A failure to implement clear AI governance means you risk being left behind as your competitors race ahead.” 

John Buyers, Partner and Co-head of Osborne Clarke’s International AI Service team

Contributors

We would like to thank these individuals for sharing their insight and experience on this topic.

John Buyers
Partner
Osborne Clarke UK
Tamara Quinn
Director – AI, Data & IP Knowledge
Osborne Clarke UK
Łukasz Węgrzyn
Partner
Osborne Clarke Poland
Adrian Schneider
Partner
Osborne Clarke Germany
Satya Nitta
Co-founder and CEO
Emergence AI