We expect that in 2025 the global focus on children’s online safety and digital well-being will continue to grow, underpinned by regulatory frameworks in jurisdictions such as the UK, the EU, the US, and Australia. The year is set to deepen the global dialogue on safeguarding young users and to introduce more sophisticated mechanisms for online identification and verification.
Protecting children around the world
The past few years have seen significant strides in legislation aimed at protecting children in the digital world. The trend shows no signs of slowing in 2025, with increasingly stringent measures being introduced globally to address risks such as exploitation, exposure to harmful content, and data misuse.
UK
The threshold for parental consent in the UK is 13 years of age, and one of the largest fines ever issued by the UK Information Commissioner’s Office (ICO) went to TikTok for failing to enforce that threshold on its platform.
The Children’s Code (Age-Appropriate Design Code) has been a game-changer since its introduction in 2021. The Code requires organisations to adopt child-friendly design standards for online services likely to be accessed by children, including minimising data collection, defaulting to high privacy settings, and avoiding practices that might exploit children’s data or attention. Its influence has been far-reaching, inspiring similar frameworks such as California’s Age-Appropriate Design Code Act.
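To make the “privacy by default” principle concrete, the minimal sketch below shows what child-account defaults might look like in code. The field names and values are illustrative assumptions, not requirements drawn from the Code itself.

```python
# A minimal sketch of privacy-by-default settings for a child's account,
# in the spirit of the Children's Code. All field names and defaults are
# illustrative assumptions, not taken from the Code itself.

from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    profile_visibility: str = "private"   # high privacy by default
    geolocation_enabled: bool = False     # location tracking off by default
    personalised_ads: bool = False        # no profiling of children
    data_retention_days: int = 30         # collect and retain the minimum

# A new child account starts from the most protective configuration;
# any relaxation would require a deliberate, recorded choice.
settings = ChildAccountSettings()
assert settings.profile_visibility == "private"
```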
The Online Safety Act, which is set to be fully implemented in 2025, further enhances the protective framework by imposing a duty of care on online platforms to shield children from harmful content and interactions. Platforms will be required to prioritise safety measures, report on compliance, and collaborate with Ofcom, the UK’s communications regulator, which is introducing Children’s Safety Codes of Practice as part of the Act’s enforcement. These codes give online platforms detailed guidance on their obligations to prevent harm to young users, from combating cyberbullying to blocking access to age-inappropriate content, and are likely to focus on age-appropriate design, content moderation, transparency, and risk assessments.
US
In California, the Age-Appropriate Design Code Act referred to above, which came into effect in 2024, imposes robust requirements on businesses offering online services likely to be accessed by children under 18. Companies must assess and mitigate risks to children’s privacy and safety in their platform designs, including minimising profiling, limiting geolocation tracking, and ensuring that default privacy settings are appropriate for minors.
Federal action remains fragmented, though progress is being made at the state level. States like Utah and Arkansas have introduced legislation requiring parental consent for minors to access social media platforms, while federal initiatives such as the Kids Online Safety Act are gaining momentum. These efforts indicate a growing awareness of the need for cohesive national policies to address children's safety online.
EU
Under the GDPR, organisations must collect parental consent to process the personal data of a child under 16 years of age, although member states may reduce this threshold to as low as 13 years of age.
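Because the threshold varies by member state, an organisation operating across the EU cannot hard-code a single age. The sketch below illustrates one way of encoding the rule; the per-country figures are illustrative assumptions and should be checked against current national law.

```python
# A minimal sketch of a GDPR Article 8 age-of-consent check. The national
# thresholds below are illustrative assumptions; always verify the current
# rule for each member state.

GDPR_DEFAULT_CONSENT_AGE = 16

# Hypothetical lookup: ISO country code -> age of digital consent (13 to 16).
MEMBER_STATE_CONSENT_AGE = {
    "FR": 15,
    "AT": 14,
    "BE": 13,
}

def parental_consent_required(age: int, country_code: str) -> bool:
    """Return True if processing this child's data needs parental consent."""
    threshold = MEMBER_STATE_CONSENT_AGE.get(country_code, GDPR_DEFAULT_CONSENT_AGE)
    return age < threshold

# Usage: a 14-year-old in France falls below that state's 15-year threshold.
assert parental_consent_required(14, "FR")
assert not parental_consent_required(15, "FR")
```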
The Digital Services Act (DSA) represents another landmark in digital regulation, requiring platforms to assess and mitigate systemic risks, including those that disproportionately affect children. The DSA introduces heightened transparency obligations, demanding that platforms publish reports on content moderation and the measures taken to protect minors from harmful content and targeted advertising.
Australia
New legislation passed in 2024, the Online Safety Amendment (Social Media Minimum Age) Act, will require social media platforms to take reasonable steps to prevent children under 16 from holding accounts. Regulators are taking a proactive approach, planning to investigate age-verification mechanisms in 2025.
Who are you? Answering the identity question
Central to the issue of children’s online safety is the challenge of digital identification and verification. Ensuring that children are appropriately safeguarded requires reliable mechanisms to verify their age and identity without unnecessarily infringing on their privacy.
The UK Data (Use and Access) Bill (DUA Bill), expected to take effect in 2025, introduces innovative digital verification measures. These provisions aim to streamline the process of verifying a user’s age and identity, reducing reliance on traditional forms of identification, such as passports or driving licences, which can pose significant security risks if mishandled. Instead, the DUA Bill advocates privacy-preserving digital identification solutions, which enable organisations to confirm a user’s age or identity without exposing additional personal data.
This shift towards digital identification is particularly valuable for organisations grappling with customer due diligence requirements. Digital verification methods promise to reduce administrative burdens, expedite compliance processes, and enhance security. For example, a platform designed for children could integrate age-verification technology that flags underage users without requiring them to upload sensitive documents; a sketch of this approach appears below. Similarly, businesses handling Know Your Customer (KYC) obligations could rely on secure digital IDs to validate user identities with minimal risk.
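As an illustration of what confirming age “without exposing additional personal data” can mean in practice, the minimal sketch below assumes a trusted verification provider that issues a signed claim such as "over_13" rather than passing a date of birth or identity document to the platform. Every name in it (the provider, the shared key, issue_age_token, verify_age_token) is a hypothetical assumption, not part of the DUA Bill or any particular product.

```python
# A minimal sketch of a privacy-preserving age assertion. A hypothetical
# verification provider signs a minimal claim ("over_13": true); the platform
# checks the signature and never sees a birth date or identity document.

import hashlib
import hmac
import json

# Assumption: a key pre-agreed between platform and provider (real systems
# would more likely use public-key signatures or verifiable credentials).
SHARED_KEY = b"demo-key-shared-with-verification-provider"

def issue_age_token(claim: dict) -> tuple[bytes, str]:
    """Provider side: sign a minimal claim containing no extra personal data."""
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def verify_age_token(payload: bytes, signature: str) -> dict | None:
    """Platform side: accept the claim only if the signature is valid."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, signature):
        return json.loads(payload)
    return None

# Usage: the platform learns only that the user cleared the age gate.
payload, sig = issue_age_token({"over_13": True})
claim = verify_age_token(payload, sig)
if claim and claim.get("over_13"):
    print("Access granted; no document or date of birth was shared.")
```

The design choice the sketch illustrates is data minimisation: the only fact that crosses the boundary is the single answer the platform actually needs.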
Globally, the development of digital verification systems is being prioritised as a means of enhancing both privacy and security. In the EU, discussions around the European Digital Identity Framework are advancing, with the goal of creating a unified approach to digital identification that balances security with user control. The US, too, is exploring the potential of decentralised digital identity systems, particularly in the context of protecting children online.
The road ahead: challenges and opportunities
In 2025, we anticipate further progress in online safety and digital identification, setting the stage for a new era of child-focused digital governance. This presents both challenges and opportunities for organisations. On the one hand, compliance with these evolving frameworks requires significant investment in new systems, processes, and training. On the other, companies that embrace these changes proactively can position themselves as leaders in ethical innovation, building trust with users and regulators alike.
This article was co-authored by Joshua Curzon, Trainee Solicitor.