NEXT IN AI
7 min read

AI-Driven Hyper-Personalisation: Future Risks and Opportunities

Published on 11 June 2025
Last updated on 11 June 2025

AI is taking personalisation of online content to a new level

Systems that once personalised in predictable ways using basic pre-existing data are increasingly reacting to user behaviours and context in real time – adapting in the moment to tailor offers, conversations, imagery, tone and service flows based on individual user signals and live online data. This shift to “hyper-personalisation” will reshape how businesses deliver value, how consumers engage and what they expect.

But as personalisation deepens and becomes more reactive, existing risks become more acute and new kinds of regulatory challenge arise. The ability of an AI system to generate unique content and experiences for individuals opens the door to new forms of consumer law breach and to potential liability under AI-specific laws. It also raises complex challenges around ensuring legal and regulatory compliance in real time and at massive scale.

Early Examples of Increased Personalisation

Spotify Wrapped is a personalised annual video generated for individual users highlighting their top songs, artists and genres from the year.

Subscription video on demand (SVOD) streaming services use AI to offer individualised recommendations based on viewing history, time of day and user-entered data.

Starbucks tailors in-app personal offers in real time based on geographic proximity to stores, purchase history and time of day.

Examples of Emerging/Future Hyper-Personalisation

Future online advertising will rely on data-driven algorithms not just to target audiences but also to generate individualised content for recipients based on their real-time behaviour.

AI customer service agents will be able to use historical and real-time data to adapt their accent, tone and vocabulary – creating bespoke user experiences that are optimised to drive the organisation’s desired outcomes.

Voice-based AI-powered systems are already being used to comfort and reassure people with Alzheimer’s, using best practice techniques tailored to individual needs.

Existing Risks of Personalisation

Current forms of personalisation give rise to a number of legal and regulatory risks that are increasingly well recognised and addressed under data protection, consumer protection and anti-discrimination laws. AI-powered hyper-personalisation will amplify those existing risks.

Under data protection laws, if personalisation relies on processing of personal data, then requirements around transparency, legal basis for processing and “special category data” need to be navigated. For instance, if purchase history data is processed and this includes non-prescription pharmacy-only medicine products, then in Europe at least this risks being seen as processing of “data concerning health”, which generally requires explicit consent under the GDPR.

In hyper-personalisation scenarios, risk increases with the scale of data use and the wider range of processing purposes, and because AI processing can be unpredictable and can introduce bias. Additional transparency may be required, and the legal basis for processing will need to be considered.

Special category data issues may also surface in new ways. For example, if an AI voicebot learns to modulate its accent to match customers’ accents on a personalised basis, it could be argued this amounts to processing of “data revealing racial or ethnic origin”.

Under consumer protection laws, issues can already arise when personalised pricing lacks transparency. For example, the UK’s Competition and Markets Authority (CMA) is investigating Ticketmaster’s use of dynamic pricing for Oasis tickets. Likewise, if current personalisation techniques result in less favourable treatment of individuals from protected groups, anti-discrimination/equality laws may apply. With at least some forms of AI-driven hyper-personalisation, consumer transparency will be more challenging, and discrimination may arise not just in who receives a message, but also in what that message says and the style in which it is communicated.

New Techniques, New Risks

Hyper-personalisation does not just amplify existing risks. Using AI and factoring in real-time data and user reactions can introduce significant additional legal issues. These predominantly arise from how, when and why systems adapt to individual users, and fall into three categories.

Algorithmic Exploitation of Vulnerability

First, there is the risk of algorithmic exploitation of vulnerability. If an AI-powered system is instructed to optimise for customer spend and can adjust not only when it targets messages, but also what those messages say – and how they say them – the AI system may start to identify and exploit patterns that correlate with individual vulnerabilities. Online gamblers, for example, may be targeted with messaging shaped by behavioural patterns statistically linked to high-risk engagement – such as frequent session restarts or repeated late-night activity – at moments when they are particularly susceptible. This kind of AI-learned targeting is likely to attract attention under consumer protection and/or AI laws in territories where these prohibit unfair commercial practices and/or exploitation of situational vulnerability.
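To make the concern concrete, here is a minimal sketch in Python of the kind of safeguard this risk points towards: screening behavioural signals against vulnerability markers before any personalised offer is generated. The signal names and thresholds are invented for illustration; real values would come from responsible-gambling research and regulatory guidance, not from this sketch.

```python
from dataclasses import dataclass

# Hypothetical behavioural signals an engagement system might log.
@dataclass
class SessionSignals:
    session_restarts_24h: int     # frequent restarts can indicate loss-chasing
    late_night_sessions_7d: int   # repeated late-night activity
    deposit_increases_7d: int     # escalating spend

# Illustrative thresholds only.
MAX_RESTARTS = 5
MAX_LATE_NIGHT = 4
MAX_DEPOSIT_INCREASES = 3

def allow_personalised_offer(signals: SessionSignals) -> bool:
    """Suppress targeted offers when signals correlate with situational
    vulnerability, rather than letting an optimiser exploit them."""
    at_risk = (
        signals.session_restarts_24h > MAX_RESTARTS
        or signals.late_night_sessions_7d > MAX_LATE_NIGHT
        or signals.deposit_increases_7d > MAX_DEPOSIT_INCREASES
    )
    return not at_risk
```

The design point is that the vulnerability check sits outside the optimisation loop, so the system cannot learn to trade it off against spend.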

Misleading Output from the AI

Second, if not adequately constrained with technical guardrails, the output from the AI may be misleading. In its attempts to generate content in line with its instructions – whether to optimise sales or otherwise – the AI may “hallucinate” statements that are inaccurate and thus mislead the recipient into purchases or other transactional decisions. A 2024 Canadian tribunal decision found Air Canada liable after its website chatbot gave a customer incorrect advice about the airline’s bereavement fare policy.
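A common guardrail pattern, sketched below in Python with hypothetical function names and invented policy wording, is to serve factual statements about company policies only from vetted text, and to escalate to a human where no approved answer exists rather than letting the model improvise one.

```python
# Vetted policy texts; in practice maintained by legal/compliance teams.
APPROVED_POLICIES = {
    "bereavement_fares": (
        "Bereavement fare requests must be submitted before travel; "
        "refunds cannot be claimed retrospectively."
    ),
}

def answer_policy_question(topic: str) -> str:
    """Serve vetted policy text where it exists; otherwise hand off to a
    human rather than letting the model generate a policy statement."""
    if topic in APPROVED_POLICIES:
        return APPROVED_POLICIES[topic]
    # No approved answer: escalate instead of hallucinating one.
    return (f"Let me connect you with a colleague who can confirm our "
            f"{topic.replace('_', ' ')} policy.")
```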

“Using AI to personalise content is not inherently unlawful, but hyper-personalised techniques can raise flags under multiple regimes where AI functionality is covert, subliminal and exploitative.”

Emily Tombs, Senior Associate (NZ Qualified), Osborne Clarke UK

AI-Specific Legislation

Third, issues may arise under AI-specific legislation such as the EU AI Act. For example, covert personalisation strategies may breach prohibitions on subliminal techniques if they lead individuals to make important decisions they would not otherwise have made if fully aware of the influences at play. Care will also be needed if any element of the system could be seen as an “emotion recognition system” under the EU AI Act, and to ensure AI-generated outputs are appropriately identifiable.

Compliance at Hyper-Scale

Organisations that deploy hyper-personalisation will need to address how to handle content compliance. Where bespoke content is generated automatically, in real time and in a potentially massive number of instances, that challenge may be very significant. Businesses will need to assess the extent to which compliance can be baked into their AI systems and the level of oversight appropriate to monitor the success of any built-in measures.
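What baking compliance in might look like will vary by system, but one simplified pattern is a pre-publication gate that every generated variant must pass, combined with random sampling into a human review queue so that the automated checks themselves remain under oversight. The rule list and sampling rate in this Python sketch are illustrative assumptions, not recommendations.

```python
import random

# Hypothetical automated check a generated variant must pass before serving.
def contains_prohibited_claim(text: str) -> bool:
    banned = ("guaranteed returns", "risk-free", "clinically proven")
    return any(phrase in text.lower() for phrase in banned)

REVIEW_SAMPLE_RATE = 0.01  # illustrative: route 1% of served variants to humans

def compliance_gate(variant: str, review_queue: list[str]) -> bool:
    """Return True if the variant may be served. A random sample of passing
    content is also logged for human spot-checking of the automated rules."""
    if contains_prohibited_claim(variant):
        return False
    if random.random() < REVIEW_SAMPLE_RATE:
        review_queue.append(variant)
    return True
```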

Platforms that have statutory obligations to maintain repositories of online ads – a requirement for certain large entities under the EU Digital Services Act (DSA) – may face a corresponding technical and organisational challenge in how they comply with those obligations for potentially limitless hyper-personalised variants.

From Risk to Strategic Advantage

Hyper-personalisation also offers potential opportunities. In areas such as consumer law, accessibility and data protection, it might help deliver clearer, more actionable information. For example, AI could help tailor the content and timing of disclosures based on factors such as user understanding and the context of the interaction – thereby aligning with consumer law goals of informed decision-making and timely disclosure. Equally, adaptive font sizing, simplified language modes, audio-assisted navigation and real-time content tailored to user needs could all improve digital access for users with disabilities.

Over time, practices that enhance accessibility may evolve from being seen as optional improvements to becoming standard regulatory expectations. Legal concepts of “reasonable adjustment” may well expand to include digital personalisation, as regulators or courts start to consider this when assessing compliance. By engaging early with accessibility and design colleagues, legal teams can help the business stay ahead of evolving standards – demonstrating a clear commitment to inclusive user experience.

However, certain disclosures – such as withdrawal rights, cancellation terms or product warnings – may need to remain fixed and unaltered to meet legal requirements in some jurisdictions. These should be excluded from certain forms of personalisation to avoid non-compliance.
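One way to implement that exclusion, sketched here in Python with invented field names, is to mark mandatory disclosures as locked in the content template, so that the personalisation layer can rewrite everything except them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentBlock:
    text: str
    locked: bool  # True for mandatory disclosures that must never be altered

def personalise(blocks: list[ContentBlock], rewrite) -> list[ContentBlock]:
    """Apply the (assumed) AI rewrite function only to unlocked blocks;
    withdrawal rights, cancellation terms and warnings pass through verbatim."""
    return [
        b if b.locked else ContentBlock(rewrite(b.text), locked=False)
        for b in blocks
    ]

# Usage: the locked disclosure survives personalisation unchanged.
template = [
    ContentBlock("Welcome back!", locked=False),
    ContentBlock("You may cancel within 14 days of purchase.", locked=True),
]
personalised = personalise(template, rewrite=str.upper)
```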

“AI compliance is not only about the EU AI Act. It can include many different legal fields. It is important to hardwire compliance, accessibility and trust directly into the user experience and involve legal teams from the outset.”

Dr Lina Böcker, Partner, Osborne Clarke Germany

AI-driven hyper-personalisation will reshape how organisations engage with users, but with its benefits come additional risks and a more complex regulatory landscape. Compliance models will need to adapt quickly. The key challenge will be to embed legal requirements in a way that is both accurate and agile.

Contributors

We would like to thank these individuals for sharing their insight and experience on this topic.

Nick Johnson
Partner
Osborne Clarke UK
Emily Tombs
Senior Associate (New Zealand qualified)
Osborne Clarke UK
Dr Lina Böcker
Partner
Osborne Clarke Germany