Navigating The Next Phases of Digital Regulation

Over the past decade, digital regulation has moved from the margins to the mainstream, becoming a strategic priority at the heart of global policy agendas. The “lighter touch” approach of the early 2000s – with more emphasis on fostering innovation and protecting expression – has shifted towards intervention, with new rules targeting privacy, consumer protection, online safety and market fairness.
Groundbreaking frameworks such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the Digital Services Act (DSA), the Digital Markets Act (DMA), and the UK’s Online Safety Act sought to strike a balance between innovation and consumer protection, while transforming how businesses approach compliance and accountability.
This has paved the way for a new era of heightened enforcement. Regulators are now proactive enforcers, issuing multimillion-euro fines and driving global standards.
As a result, organisations are working to mitigate enforcement risks, which requires the implementation of internal response structures and cross-functional coordination.
Yet the story is far from over. With new technologies expected to attract a regulatory response, legal teams are likely to ask what the future digital regulation landscape will look like amid simplification efforts and pro-growth narratives, and whether significant new changes might still be coming down the line.
Put another way, with landmark digital regulations in force, have we now in fact reached the peak of digital regulation, and the dawn of a period of stability and refinement? Or, are we moving into a more complex phase, shaped by strategic divergence, increased and differing approaches to enforcement, and a new and more targeted focus on fairness and consumer welfare?
Geopolitical Pressure Points
Strategic divergence is now a hallmark of the digital regulatory landscape, driven by political and economic priorities.
In the US, a second Trump administration was expected to usher in a deregulatory agenda which would redefine the government’s role in overseeing digital platforms and emerging technologies. Focus on AI leadership and development rather than increased oversight reflects a strategic bid to boost US competitiveness and counter China’s influence in global tech standards.

In the EU, calls for regulatory simplification are gaining political attention, even as the broader legislative agenda continues to expand. Former European Central Bank President Mario Draghi’s 2024 report warned that complex digital laws were constraining growth, prompting the European Commission to explore ways to reduce compliance burdens, particularly for SMEs. While a limited GDPR simplification proposal has been confirmed to ease record-keeping burdens, broader discussions have also emerged around reducing overlap between digital laws including the DSA, the DMA and the AI Act.
The UK, meanwhile, has signalled regulatory reform through its March 2025 policy paper, which outlines plans to reduce friction for businesses, particularly SMEs, and to ensure a more agile and innovation-friendly digital regulatory framework. Nevertheless, this sits uneasily alongside other developments. For example, the Digital Markets, Competition and Consumers Act 2024 introduced a consumer law regime that is, in many respects, more complex than the EU’s.
Elsewhere, more protectionist instincts are influencing the agenda. For example, India’s proposed Digital India Act introduces tight controls on data and platform governance, with stringent obligations for online content platforms and providers.
These diverging models are reshaping the regulatory risk map and increasing fragmentation.
“To maintain operational resilience, legal teams must be equipped to adapt to evolving priorities across jurisdictions and to structure flexible compliance strategies that accommodate these new challenges.”
Rafael García del Poyo, Partner, Osborne Clarke Spain
A Compliance Paradox?
While simplification and pro-growth initiatives are gaining momentum, this does not always mean fewer rules or reduced burdens. On the contrary, such initiatives can introduce transitional complexity – what might aptly be described as a compliance paradox.
The UK’s Data (Use and Access) Act (DUA Act) illustrates this. It aims to modify the UK GDPR to simplify and ease compliance – for example, by clarifying the definition of legitimate interest and easing the requirements around automated decision-making – yet it also introduces a framework for a new data sharing regime that some organisations will need to comply with. This is likely to involve making adjustments to processes and policies.
Meanwhile, the EU is also exploring ways to streamline digital regulation and make enforcement more effective. This includes efforts to harmonise overlaps and contradictions between the GDPR, the DSA, the DMA and the AI Act. For example, it is working to ensure the proposed Digital Fairness Act (DFA) – which is expected in 2026 and aims to tackle issues including unethical practices relating to dark patterns and addictive design – avoids duplication with digital rules. The slated withdrawal of its proposed ePrivacy Regulation and AI Liability Directive also reflects concerns over unnecessary overlap with new and existing laws.
The EU Competitiveness Compass, introduced in January 2025, includes other simplification proposals. Among its proposed solutions is the “28th legal regime”, an optional EU-wide framework that would offer new and growing businesses a single, uniform set of rules covering relevant aspects of corporate, insolvency, labour and tax law. Nevertheless, as the proposed regime would sit alongside rather than replace national regimes, it risks becoming an optional parallel framework that adds one more way to comply rather than a single approach – potentially exacerbating, not reducing, divergence.
“Simplification does not always mean alignment and harmonisation. It can lead to additional divergence and fragmentation between regions.”
Henrik Bergström, Partner, Osborne Clarke Sweden
Key Drivers of New Digital Regulation
New and Emerging Technologies
Technological advancements such as generative AI, synthetic media and quantum computing are drawing regulatory attention, not only for their transformative potential and associated risks, but also for their potential societal consequences. For example, the misuse of generative AI and synthetic media can blur the lines between fact and fabrication, leading to the manipulation of public opinion and erosion of trust. Similarly, with its potential to disrupt critical areas of security and governance, quantum computing challenges current cybersecurity, data protection, and encryption standards.
Policymakers across jurisdictions are responding by establishing frameworks that promote innovation while ensuring transparency, accountability and societal safeguards – although responses vary. Generative AI, particularly in the form of large language models, has prompted renewed scrutiny of existing laws, with legal concerns ranging from misinformation and algorithmic bias to balancing the conflicting agendas of rightsholders and AI developers under copyright law. Some jurisdictions are applying established IP and consumer protection laws while others are developing new instruments. The European AI Office, for example, has been drawing up a voluntary General-Purpose AI Code of Practice to be used by providers to demonstrate compliance with the AI Act.
In the UK, concerns around the use of copyright materials to train AI were extensively debated in the lead-up to the passing of the DUA Act. Although the Act does not make changes to copyright law, the government has agreed to publish an economic impact assessment and a report on its copyright and AI proposals within nine months. The UK’s recent consultation on AI and copyright outlines potential changes to IP law to address training data concerns, and the government is considering its position in light of the responses received.
Synthetic media also raises complex legal concerns from identity rights and consent to misinformation and content governance. The audiovisual sector has adopted contractual protections, including safeguards secured through the SAG-AFTRA strikes in the US, to protect performers’ likenesses. Elsewhere, national responses remain fragmented: the EU AI Act introduces transparency rules for synthetic content whereas the UK and the US are prioritising stronger protections for children and vulnerable users through the Online Safety Act and the US Kids Online Safety and Privacy Act (KOSPA), respectively.
In contrast, India’s proposed Digital India Act would give broad discretionary powers to regulate high-risk AI, aligned with its goal of tackling misinformation, although this has drawn concerns over transparency and civil liberties.
Sector-Specific Shifts
Financial services regulators are also tackling broader societal concerns such as financial inclusion and consumer welfare, albeit pulling in different directions. While the UK moves to close gaps in consumer protection – from crypto asset oversight to buy-now-pay-later schemes – the US continues to rely on a fragmented mix of agency guidance and litigation, without a unified crypto regime or consistent lending protections.
Within life sciences and healthcare, regulatory frameworks for AI-enabled technologies are increasingly being shaped around patient trust and tackling bias, but in different ways. The EU’s approach is anchored in the AI Act, which classifies medical devices as “high risk” – meaning more stringent obligations around human oversight and data quality. While the US lacks a single regulatory framework, FDA initiatives such as its AI/ML-Based Software as a Medical Device (SaMD) Action Plan are embedding trust and transparency into AI products and tools. Similarly, the UK’s Software and AI as a Medical Device Change Programme includes a workstream on “Assurance of Trust” focussed on transparency and explainability for patients and clinicians as well as bias detection and mitigation.

Safety and Democracy
Themes such as addictive design, the protection of minors online and election integrity are also gaining traction across jurisdictions.
The EU’s proposed DFA, as well as laws and initiatives such as KOSPA and the UK’s Children’s Code, aim to comprehensively tackle the challenges arising from online practices such as dark patterns and addictive design, particularly where minors are at risk. Following the publication of its guidance on how to protect children from harmful content online, UK regulator Ofcom has made clear that it is getting ready to take early enforcement action against services that do not comply.
The EU’s DSA Elections Toolkit promotes transparency in political ads and content moderation with the aim of safeguarding democratic processes from digital interference. Comparable initiatives in India, the US and Southeast Asia vary widely in enforcement and scope – underscoring how shared risks still yield distinct national responses.

Preparing for the Next Phase
Digital regulation has not reached its peak – rather, it is entering a new chapter that is shifting in tone and focus.
Looking back, the very first wave of digital regulation, established in the lead-up to the new millennium, was characterised by a reactive, sector-neutral approach, largely aimed at supporting the growth of the internet in the dotcom era. This laid the foundations for the most recent wave, which created landmark digital frameworks and increased platform accountability.
The new chapter we are now entering will test the bandwidth of digital regulation teams in three key ways:
First, digital regulation enforcement risk has significantly increased, with more laws, higher fines, and regulators that are more active and better resourced. Organisations will face more regulatory enquiries and challenges and may need to recalibrate risk-based stances.
Second, the simplification-driven regulatory changes emerging in many jurisdictions will need to be assessed and absorbed.
Third, the new wave of digital regulation – more values-led and increasingly centred around consumer welfare, digital fairness and societal impact – is driving fresh lobbying and compliance-readiness efforts.
As a result, organisations will need deeper and more nuanced thinking from their legal teams. Legal must adopt a more integrated and strategic role, shaping governance and compliance structures as well as educating product, commercial and technical teams on evolving regulatory requirements.

Crucially, as regulatory enforcement gathers pace, and with private enforcement gradually gaining traction, now is not the time to dial down resources. Investment in legal and regulatory infrastructure will be critical to tracking jurisdictional divergences as well as navigating conflicting enforcement priorities across key markets.
Visit Osborne Clarke’s Digital Regulation Timeline to monitor developments.
Contributors
We would like to thank these individuals for sharing their insight and experience on this topic.