Ofcom Tightens the Screws on Tech Over Child Safety & Illegal Content
Key Takeaways
- Rapid Response Protocols: Platforms should implement crisis protocols to stop illegal content, such as footage from riots or terrorist attacks, from going viral, and should avoid recommending potentially illegal material.
- Livestream Safeguards: Services that allow livestreaming must have human moderators available in real time and systems to flag harmful streams, especially those involving an imminent threat to physical safety.
- Tackling Deepfakes and Abuse at the Source: Firms are expected to use hash-matching and automated tools to detect explicit deepfakes, terrorism content, and child sexual abuse material before it reaches users.
- Children's Livestream Protections: Ofcom proposes banning comments, reactions, and gifts on children’s livestreams, and preventing those streams from being recorded, to combat grooming and coercion.
Deep Dive
Tech companies love to talk about safety, but Ofcom isn’t buying the press releases. On Monday, the UK’s media and communications regulator released a fresh batch of proposals under the Online Safety Act, this time urging platforms to stop turning a blind eye to how fast harm can spread and who it hurts most.
The message is that it’s no longer enough to react to online abuse. Platforms need to prevent it. Among the new measures on the table: better controls to stop illegal content from going viral during crises, stronger protections for children livestreaming online, and new expectations that companies detect and block things like terrorist content and deepfake nudes before they hit people’s screens.
“Important online safety rules are already in force and change is happening,” said Oliver Griffiths, Ofcom’s Online Safety Group Director. “But technology and harms are constantly evolving, and we’re always looking at how we can make life safer online.”
Virality Isn’t Neutral, It’s Dangerous When Left Unchecked
Ofcom’s latest proposals take aim at one of the internet’s most under-regulated accelerants: virality. When algorithms promote content faster than moderation can catch it, things can spiral. We saw it last year during the riots that followed the Southport attack, when graphic and inflammatory footage spread like wildfire.
To get ahead of future crises, Ofcom wants platforms to build rapid-response protocols: systems that kick in during surges of potentially illegal content. If your recommendation engine is pushing violent or extremist posts before a human’s even looked at them, that’s a problem.
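To make this concrete, here is a minimal sketch of what such a circuit-breaker could look like, assuming a simple report-counting signal. The thresholds, window, and names below are invented for illustration; Ofcom’s proposals don’t prescribe any particular design.

```python
import time
from collections import defaultdict, deque

REPORT_WINDOW_SECS = 600  # look at user reports over the last 10 minutes (assumed)
SURGE_THRESHOLD = 50      # reports in that window that trigger the protocol (assumed)

report_log = defaultdict(deque)  # content_id -> timestamps of user reports
quarantined = set()              # content pulled from recommendations pending review

def record_report(content_id: str, now: float | None = None) -> None:
    """Log a user report; quarantine the content if reports are surging."""
    now = now if now is not None else time.time()
    log = report_log[content_id]
    log.append(now)
    while log and now - log[0] > REPORT_WINDOW_SECS:
        log.popleft()  # drop reports that have aged out of the window
    if len(log) >= SURGE_THRESHOLD:
        quarantined.add(content_id)  # a real system would also page trust & safety

def eligible_for_recommendation(content_id: str) -> bool:
    """The recommendation engine skips quarantined items until a human reviews them."""
    return content_id not in quarantined
```

The key design point is the second function: amplification, not hosting, is what gets paused while humans catch up.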
And live-streaming? That’s another major risk vector. The regulator is proposing a baseline: if your platform lets people go live, you’d better have human moderators on-call and the ability to detect when something dangerous is happening in real time.
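A rough sketch of that baseline, assuming automated detectors (audio, video, or chat classifiers) that emit a score, and on-call human moderators pulling from a shared queue; the names and the 0.8 threshold are illustrative, not from Ofcom.

```python
import queue

escalations: "queue.Queue[dict]" = queue.Queue()  # flagged streams awaiting a human

def flag_stream(stream_id: str, signal: str, score: float) -> None:
    """Called by automated detectors when a live stream looks dangerous."""
    if score >= 0.8:  # assumed escalation threshold
        escalations.put({"stream_id": stream_id, "signal": signal, "score": score})

def review_and_act(stream_id: str) -> None:
    """Placeholder for the human decision: warn, cut the feed, or alert authorities."""
    print(f"Moderator reviewing live stream {stream_id}")

def moderator_loop() -> None:
    """Run by each on-call moderator; blocks until a stream is flagged."""
    while True:
        case = escalations.get()
        review_and_act(case["stream_id"])
```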
Don’t Just Mop Up the Mess, Stop It at the Door
Rather than chasing content after it's already done damage, Ofcom is asking tech companies to bake safety into their platforms from the start, an approach the regulator calls being “safer by design.”
One example: hash-matching technology. If there’s already a known piece of child abuse material, terrorist propaganda, or non-consensual deepfake circulating online, platforms should be able to spot it instantly and keep it from reappearing. There’s no excuse not to use this.
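For the technically curious, here’s roughly what that screening step looks like. Production systems rely on perceptual hashes, such as Microsoft’s PhotoDNA or Meta’s open-source PDQ, so that re-encoded or lightly edited copies still match; the exact-match SHA-256 below is a deliberate simplification to keep the sketch self-contained.

```python
import hashlib

# Hashes of known illegal material, as supplied by trusted bodies
# (e.g. the Internet Watch Foundation distributes such hash lists).
KNOWN_BAD_HASHES: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint; real deployments use perceptual hashing instead."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches known material."""
    return fingerprint(image_bytes) not in KNOWN_BAD_HASHES
```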
But that’s just the floor. Ofcom also wants to see companies explore new tools (like AI-powered scanners) to catch previously unseen content promoting suicide, self-harm, or scams. Not every solution has to be perfect. But ignoring the tools entirely isn’t an option anymore.
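One common pattern for deploying such scanners, sketched below with an invented HarmClassifier and made-up thresholds, is to route uncertain scores to human review rather than letting the model remove anything on its own.

```python
class HarmClassifier:
    """Stand-in for a trained model scoring content, e.g. for self-harm promotion."""
    def score(self, text: str) -> float:
        return 0.0  # a real model would return a probability in [0, 1]

BLOCK_THRESHOLD = 0.95   # high confidence: hold the post before it is shown (assumed)
REVIEW_THRESHOLD = 0.60  # medium confidence: publish but flag for review (assumed)

def triage(text: str, model: HarmClassifier) -> str:
    s = model.score(text)
    if s >= BLOCK_THRESHOLD:
        return "hold_for_review"   # not shown until a moderator clears it
    if s >= REVIEW_THRESHOLD:
        return "publish_and_flag"  # visible, but prioritised in the review queue
    return "publish"
```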
For Kids Online, Livestreaming Is a Double-Edged Sword
Yes, livestreaming can be fun. But it also creates a direct line between kids and people who want to exploit them. Ofcom isn’t mincing words here: children have been groomed, coerced, and even encouraged to self-harm while livestreaming.
That’s why the regulator is proposing a ban on commenting, gifting, or even recording children’s livestreams. No more anonymous emojis. No more digital “gifts.” No more clips ripped and reposted.
And now that Ofcom’s age assurance guidance is live, platforms are expected to use it. If you don’t know whether your user is a child, you can’t protect them. It’s that simple.
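Read together, a conservative implementation of those two proposals might look like the sketch below: a broadcaster whose age hasn’t been assured gets the same protections as a known child. The field and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StreamFeatures:
    comments: bool
    gifts: bool
    recording: bool

def features_for(broadcaster_age: int | None) -> StreamFeatures:
    """Unknown age is treated as a child, the conservative default Ofcom implies."""
    is_protected = broadcaster_age is None or broadcaster_age < 18
    return StreamFeatures(
        comments=not is_protected,
        gifts=not is_protected,
        recording=not is_protected,
    )
```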
The Clock Is Ticking
Ofcom’s consultation is open until October 20, and final decisions are due next summer. In the meantime, enforcement under the Online Safety Act is already underway, and the regulator isn’t afraid to use its teeth.
Starting at the end of July, companies must implement specific child safety measures, regardless of where they’re based. If your platform is accessible to UK users and poses a material risk of harm, you’re in scope. No hiding behind borders.
Because in the end, this isn’t about checking a regulatory box. It’s about whether platforms are finally ready to treat user safety as more than an afterthought.