Table of Contents
- 1 What the UK Online Safety Act Actually Requires
- 2 Timeline: When the UK Online Safety Act Will Kick In
- 3 Is the UK Online Safety Act Protecting Children or Expanding Surveillance?
- 4 Why Critics Say the UK Online Safety Act Might Backfire
- 5 How Platforms Are Scrambling to Comply with the UK’s Online Safety Act
- 6 Conclusion: The Debate Is Just Beginning
The UK’s Online Safety Act has now become law, beginning a phased rollout that will reshape how people interact with online platforms. Lawmakers framed the Act as a necessary step to protect children from harmful material such as pornography, violent imagery, and content promoting self-harm.
Few dispute the goal of child safety, but critics say the law goes too far. James Baker of the Open Rights Group warned it is “an overblown legislative mess that could seriously harm our security by removing privacy from internet users.”
One measure has drawn more attention than any other: mandatory age verification. Platforms must now prove a user is old enough before granting access to restricted content. The law leaves it to companies to decide how, which is where the controversy begins. Critics argue that relying on ID uploads, facial scans, or third-party vendors could turn online safety into a new form of mass data collection. Platforms around the world are responding in similar ways, tightening their age rules and experimenting with different verification systems.
This article looks beyond the headline debates to break down what the Act actually requires, how enforcement will work, and why its impact may extend well beyond child protection.
What the UK Online Safety Act Actually Requires
The Online Safety Act puts the responsibility on platforms to stop children from accessing harmful content. That covers material such as pornography, graphic violence, and sites that encourage eating disorders or self-harm. If a platform has a significant number of UK users and hosts this type of material, it will need to have an age assurance system in place.
The law talks about “robust” age checks but stops short of saying how they should work. That leaves companies with a menu of options. Some may ask users to upload a passport or driver’s license. Others are trialing facial analysis tools that estimate a person’s age from a selfie. Still others are considering third-party verification vendors that specialize in digital identity checks.
What counts as robust will ultimately be decided by Ofcom, the UK’s communications regulator. The agency has said age assurance must be accurate, fair, and consistent. In practice, that means the systems should be able to reliably flag under-18 users without discriminating against certain groups or storing more data than necessary.
This flexibility was meant to give companies room to adapt, but it also creates uncertainty. Services as different as Reddit, X (formerly Twitter), Discord, Grindr, and adult content sites all fall under the same rules. Each will have to weigh user privacy, cost, and technical feasibility as they design their compliance strategies. The result is likely to be a patchwork of different methods, with users asked to prove their age in very different ways depending on which platform they are on.
Timeline: When the UK Online Safety Act Will Kick In
With the requirements set, the next question is when platforms will actually feel the impact. Ofcom is overseeing enforcement and has opted for a phased rollout rather than a single deadline. This staggered approach is meant to give companies time to adjust, but the timetable is already moving quickly.
Phase 1: Tackling Illegal Content
The first stage began in early 2025, when services were required to carry out risk assessments focused on illegal content. This included looking at how their platforms might be used to share material such as terrorism content or child sexual abuse imagery. By March 2025, companies had to show Ofcom how they intended to reduce those risks. This set the groundwork for broader duties around children’s safety.
Phase 2: Introducing Age Assurance
The child safety provisions followed soon after. In January 2025, Ofcom published its final guidance on age assurance, making clear that “robust” checks would be expected on sites hosting pornography, content that encourages self-harm, or other material harmful to minors. The most visible shift came in July 2025, when platforms with significant UK audiences were required to put these systems in place. This deadline applied not only to adult content sites but also to social networks, dating services, and gaming platforms. Some companies scrambled to integrate third-party verification tools, while others quietly restricted access for UK users rather than risk penalties. Ofcom has said it will prioritize supervision of the largest platforms, signaling that enforcement will be active rather than symbolic.
Phase 3: Transparency and Ongoing Oversight
From late 2025 into 2026, the focus expands from child safety checks to transparency. Larger platforms—those with the biggest UK audiences—will be expected to publish regular reports on the risks their services pose and the steps they are taking in response. Ofcom is also preparing to introduce a fees system so that regulated companies help fund the cost of oversight, with invoices expected as early as mid-2026.
Is the UK Online Safety Act Protecting Children or Expanding Surveillance?
The sharpest debate around the Online Safety Act is not about whether children deserve protection but about what kind of internet the UK will be left with as a result. Civil liberties groups argue that the law risks setting a precedent where identification becomes a default requirement for basic online access.
The Open Rights Group has warned this shift could “make us less secure by threatening our privacy and undermining our freedom of expression.” Their concern is less about any single platform and more about the regulatory model itself. If Ofcom can mandate “high assurance” checks without clear limits, the door is open for governments to expand those powers in the future. What begins as child protection could evolve into routine monitoring of everyday activity online.
The implications extend to free expression. Adults may avoid lawful but sensitive content—health resources, support forums, or independent media—if access requires surrendering personal data. Groups that depend on anonymity, such as journalists, whistle-blowers, or abuse survivors, could find themselves shut out altogether. Critics see this as a return to prior restraint, where speech is filtered before it can even reach the public.
Supporters frame these checks as safeguards, but opponents warn they risk normalizing surveillance under the banner of safety. Once identification becomes a prerequisite for participation, it is difficult to roll back, raising long-term questions about how far regulators should go in governing online life.
Why Critics Say the UK Online Safety Act Might Backfire
But beyond the questions of privacy and surveillance, many critics argue the law may also fail on its own terms. Instead of building a safer internet for children, it could introduce new risks while proving easy to sidestep. Critics highlight five main problems:
1. Data Security Risks
Cybersecurity experts told Tom’s Guide the Act is “a disaster waiting to happen,” warning that any system requiring large-scale verification becomes a magnet for attackers. The MOVEit breach in 2023, which exposed millions of sensitive government and corporate records, is often cited as evidence that even well-resourced organizations struggle to secure such information. Critics fear the Act could leave platforms managing vast stores of sensitive data with no realistic way to guarantee its safety.
2. Excluding Vulnerable Users
Another risk lies in who gets left behind. Millions in the UK lack the documents typically needed to verify identity, including children in foster care, undocumented migrants, and adults without stable housing. For them, age checks are not just inconvenient but exclusionary. Instead of providing greater protection, the Act could deny access to online spaces that serve as vital sources of education, support, or community.
3. Regulatory Gaps
The Act demands “robust” verification but leaves key details undefined. Ofcom has issued guidance, but there is still no single standard for how platforms should manage sensitive data or how long it should be retained. This lack of clarity shifts responsibility onto private companies, creating inconsistent practices and the risk of over-collection. Without clearer rules, enforcement risks becoming fragmented and trust in the system may erode.
4. Easy Workarounds
Even if companies comply, it is unclear whether the measures will work as intended. Harmful content has always found ways to circulate—through coded hashtags, encrypted groups, or smaller platforms outside Ofcom’s remit. Circumvention is also common. VPN providers reported a 1,400% surge in UK sign-ups during the rollout of verification rules. At the same time, users began openly sharing tutorials for tricking facial recognition tools with AI-generated images or even video game characters. These examples suggest that determined users will always find ways around the rules, raising doubts about whether the Act will meaningfully reduce risks to children.
5. Industry Pushback
Resistance is also coming from the platforms themselves. Some companies have quietly restricted UK access rather than attempt costly compliance, while others are challenging Ofcom directly. A lawyer representing the message board 4chan announced the site will refuse to pay a £20,000 fine, calling Ofcom’s enforcement an “illegal campaign of harassment” against U.S. firms. 4chan’s legal team has argued that American companies do not surrender their First Amendment rights because a foreign regulator demands it, and they are prepared to fight the case in U.S. courts. If they succeed, Ofcom may have to explore more aggressive tactics such as asking internet service providers to block UK access entirely. This kind of standoff underscores how difficult enforcement becomes once jurisdiction crosses borders.
How Platforms Are Scrambling to Comply with the UK’s Online Safety Act
With enforcement underway, platforms have little choice but to show regulators they are acting. The approaches vary, and the differences reveal just how fragmented compliance has become.
Some adult sites have taken the simplest option: blocking UK visitors entirely rather than risk fines or handle sensitive identity checks. Others are turning to third-party verification systems. Reddit, for example, has partnered with Persona, which asks users to upload a government ID or selfie and reports back to the platform only whether the user passed the age check. Bluesky has gone in a different direction by working with Epic Games’ Kids Web Services, offering users multiple options ranging from ID upload to facial scans or credit card validation. Even entertainment services are adjusting. Microsoft has begun rolling out prompts for Xbox players to confirm their age before unlocking purchases or online play.
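To make the data-minimization point concrete, here is a minimal sketch of that pattern: the verification step happens with an outside vendor, and the platform records only a pass/fail age status and the method used, never the ID document or selfie itself. The field and function names are hypothetical and do not reflect any specific provider’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical result returned by a third-party age-assurance vendor.
# The field names are illustrative, not any specific provider's API.
@dataclass
class VendorCheckResult:
    check_id: str      # vendor's reference for the verification session
    is_over_18: bool   # the only attribute the platform actually needs
    method: str        # e.g. "document", "facial_estimation", "credit_card"

# What the platform persists: a pass/fail flag, not the document or selfie.
@dataclass
class StoredAgeStatus:
    user_id: str
    is_over_18: bool
    verified_at: str
    method: str

def record_age_status(user_id: str, result: VendorCheckResult) -> StoredAgeStatus:
    """Keep only the outcome of the check; the evidence stays with the vendor."""
    return StoredAgeStatus(
        user_id=user_id,
        is_over_18=result.is_over_18,
        verified_at=datetime.now(timezone.utc).isoformat(),
        method=result.method,
    )

if __name__ == "__main__":
    # Simulated vendor callback; in practice this would arrive via the
    # vendor's webhook or API response after the user completes a check.
    result = VendorCheckResult(check_id="chk_123", is_over_18=True, method="document")
    print(record_age_status(user_id="u_42", result=result))
```

The point of the pattern is that the sensitive evidence never touches the platform’s own database; the platform holds only the minimum it needs to grant or deny access.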
The methods differ, but the outcome is the same: users are being asked to prove their age in inconsistent ways across services. For platforms, it has become a balancing act between satisfying Ofcom’s demand for “robust” checks and maintaining user trust. The stakes are high, as Ofcom can issue fines of up to £18 million or 10% of a company’s global revenue, whichever is greater. That figure is large enough to make even the biggest tech firms take notice.
This would have been an ideal moment to explore more user-centric identity tools—systems where people could verify their age once and share only the minimum proof required across platforms. The European Union’s upcoming Digital Identity Wallet, for instance, is designed to let citizens prove attributes such as age without revealing underlying details like their date of birth or passport number. Similar privacy-preserving approaches, built on selective disclosure, could have offered a way to meet Ofcom’s goals while keeping users in control of their data.
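As a rough illustration of what selective disclosure means in practice, the sketch below has an issuer (standing in for a wallet provider) sign a claim containing nothing but an over-18 attribute, and a platform verify the signature and read only that attribute. It is a toy example assuming the Python `cryptography` package; real schemes such as the EU wallet’s credential formats involve far more machinery, but the privacy idea is the same: the platform learns that the user is over 18 and nothing else.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (stand-in for an identity wallet): sign a minimal claim.
# The claim deliberately carries only the derived attribute, not a date of birth.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

claim = json.dumps({"age_over_18": True}, sort_keys=True).encode()
signature = issuer_key.sign(claim)

# Platform side: check the issuer's signature, then read only the attribute.
def accept_age_claim(claim_bytes: bytes, sig: bytes) -> bool:
    try:
        issuer_public_key.verify(sig, claim_bytes)  # raises if the claim was tampered with
    except InvalidSignature:
        return False
    return bool(json.loads(claim_bytes).get("age_over_18", False))

print(accept_age_claim(claim, signature))  # True, with no DOB or document ever shared
```

A full deployment would also check which issuer signed the claim and whether it has been revoked, but those details do not change the core property: proof of age without disclosure of identity.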
Conclusion: The Debate Is Just Beginning
The UK Online Safety Act signals how far governments are willing to go to regulate online spaces, but it also shows how complex that task has become. Efforts to protect children are colliding with questions about privacy, free expression, and the effectiveness of enforcement. The law is still in its early stages, and the full consequences of its age verification mandates will only become clear as Ofcom tightens oversight and platforms continue to adapt.
What is clear already is that the debate is far from settled. Regulators want certainty, platforms want clarity, and users want both safety and privacy. Striking that balance will require more than patchwork compliance or reactive fixes. It will demand a conversation about how to verify age and protect minors without undermining the rights and trust of everyone else online.