Ofcom Turns Up the Heat on AI & Platform Safety With New Online Safety Probes

Key Takeaways
  • AI Use Is Squarely in Scope: Ofcom’s investigation into X shows that AI-driven features like chatbots are fully subject to the Online Safety Act when they contribute to illegal content risks.
  • Non-Consensual Imagery and CSAM Remain Top Priorities: The regulator is treating reports involving intimate image abuse and child sexual abuse material as high-priority enforcement issues.
  • Age Assurance Is a Live Enforcement Issue: The probe into Novi highlights that age checks for services exposing users to pornographic content are not optional and will be actively tested.
  • Risk Assessments Are Not Box-Ticking Exercises: Snapchat’s remediation illustrates that Ofcom expects risk assessments to realistically reflect how harm can occur, not simply restate policies.
Deep Dive

The UK’s online safety regulator Ofcom has opened a fresh set of investigations under the Online Safety Act, sharpening its focus on how platforms assess and manage risks tied to AI-driven services and illegal content. The actions target X over the use of its Grok AI chatbot, as well as Novi's AI companion chatbot service, while also highlighting how regulatory pressure has already forced Snapchat to rethink its approach to illegal content risks.

Taken together, the moves signal a more assertive phase of enforcement, particularly where concerns involve non-consensual sexual imagery, child sexual abuse material, and children’s exposure to pornographic content.

Investigation Into X and Grok’s Use

Ofcom confirmed that it had opened a formal investigation into X to assess whether the platform has met its legal duties to protect users in the UK from illegal content. The decision follows reports that the Grok AI chatbot account on X was used to create and share undressed images of people that may constitute non-consensual intimate image abuse, alongside sexualised images of children that could amount to child sexual abuse material.

The regulator said it contacted X urgently, setting a firm deadline of 9 January for the company to explain what steps it had taken to comply with the Online Safety Act. After receiving a response and carrying out an expedited review of the available evidence, Ofcom concluded that a formal investigation was warranted.

The probe will examine whether X failed to properly assess the risk of illegal content appearing on its service, put in place effective measures to prevent priority illegal content such as non-consensual intimate images and CSAM, remove illegal material swiftly once identified, and protect users’ privacy. It will also assess whether the platform adequately considered risks to children and used highly effective age assurance to prevent minors from accessing pornographic material.

In an update published on 15 January, Ofcom said X has told the regulator it has implemented measures to prevent the Grok account from being used to create intimate images of people. While describing this as a welcome development, Ofcom stressed that the investigation remains ongoing as it works to understand what went wrong and whether the fixes are sufficient.

Suzanne Cater, Director of Enforcement at Ofcom, said reports of Grok being used to generate illegal non-consensual intimate images and child sexual abuse material were “deeply concerning,” adding that the regulator would not hesitate to investigate where there is a risk of harm to children.

Ofcom also confirmed it is reviewing a response from xAI and considering whether there may be compliance issues linked to the provision of Grok that warrant a separate investigation.

Age Assurance Under Scrutiny for AI Companion Chatbots

Alongside the X investigation, Ofcom announced that it has opened a separate probe into Novi, which operates an AI character companion chatbot service. The investigation focuses on whether the service has complied with age check requirements under the Online Safety Act.

Under the Act, services that allow access to pornographic material must use highly effective age assurance to prevent children from readily accessing such content. Ofcom said it will now gather and analyse evidence to determine whether Novi has breached these obligations. If a compliance failure is identified, the company will be issued with a provisional decision and given the opportunity to respond before a final decision is made.

According to the regulator, Novi’s service had an estimated global reach of nearly 6.5 million monthly users in late 2025, including between 100,000 and 300,000 users in the UK.

Snapchat’s Risk Assessment Reworked After Enforcement Pressure

In contrast to the newly opened investigations, Ofcom also pointed to a concrete enforcement outcome elsewhere. The regulator confirmed that Snapchat has materially improved its illegal content risk assessment following direct engagement from Ofcom’s enforcement team.

After reviewing Snapchat’s original assessment, Ofcom raised concerns that it did not accurately reflect the likelihood of illegal content or activity occurring on the platform. Even after initial revisions, the regulator said key issues remained, prompting it to warn Snapchat that formal enforcement action was being considered.

That warning led to Snapchat being placed into a compliance remediation process, giving the company a final opportunity to address the shortcomings. Ofcom said Snapchat subsequently carried out a full revision of its risk assessment, including a comprehensive reassessment of risk levels across all identified harms.

The revised assessment represents a significant shift, according to Ofcom, and will require Snapchat to implement a broad set of safety measures aligned with the risks it has now acknowledged. The regulator added that it will closely scrutinise how effective those measures are in practice.

“Making sure tech firms properly assess the risks of harm to their users, and then take the necessary steps to mitigate those risks, is at the very heart of the Online Safety Act,” Cater said, noting that Snapchat is the only one of 11 platforms reviewed in 2025 to be taken through formal compliance remediation.

How Far Ofcom Can Go Under the Online Safety Act

Ofcom has wide enforcement powers under the Online Safety Act, including the ability to require platforms to take specific remedial steps and to impose fines of up to £18 million or 10 percent of qualifying worldwide revenue, whichever is higher. In the most serious cases of ongoing non-compliance, the regulator can also seek court-ordered business disruption measures, such as requiring payment providers or advertisers to withdraw services or directing internet service providers to block access to a site in the UK.

The regulator said it will provide updates on the X and Novi investigations as they progress, while continuing to monitor Snapchat’s implementation of safety measures following its revised risk assessment.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.
