French & UK Watchdogs Launch Parallel Probes Into Grok as Deepfake Risks Test AI Safeguards
Key Takeaways
- Multiple Authorities Now Investigating Across Jurisdictions: UK regulators and French prosecutors are pursuing separate but related investigations into X and Grok, reflecting escalating cross-border scrutiny of AI, data protection and child safety risks.
- ICO And Ofcom Probing Different Legal Failures: The UK Information Commissioner’s Office is investigating potential data protection breaches linked to Grok’s ability to generate sexualised deepfake content, while Ofcom is examining whether X failed to prevent and remove such material under the Online Safety Act.
- French Criminal Investigation Raises Stakes: French authorities have raided X’s Paris offices as part of a cyber-crime investigation into suspected unlawful data extraction and alleged complicity in the possession of child sexual abuse material, significantly escalating enforcement pressure.
- Senior Figures Summoned By Prosecutors: French prosecutors have confirmed that Elon Musk and former X chief executive Linda Yaccarino have been summoned to appear at hearings scheduled for April.
- Regulatory Gaps Around AI Remain Exposed: Ofcom has acknowledged limits in its ability to regulate standalone AI chatbots, highlighting how current legal frameworks struggle to keep pace with rapidly deployed generative AI systems.
Deep Dive
UK regulators have launched parallel investigations into the Grok artificial intelligence system following reports that it has been used to generate non-consensual sexualised images and videos of real people, including children.
On 3 February, the Information Commissioner’s Office confirmed it has opened formal investigations into X Internet Unlimited Company and xAI, examining whether personal data was processed lawfully in the development and deployment of the Grok system.
At the same time, Ofcom confirmed it is continuing its own investigation into X, focused on whether the platform did enough to prevent the widespread sharing of sexual deepfake imagery and to remove such content quickly once it was identified.
While both investigations stem from the same reported harms, the regulators are approaching the issue from different legal angles (one through data protection law, the other through online safety obligations), highlighting the increasingly complex regulatory landscape surrounding AI-generated content.
ICO Focuses On Data Protection Failures and Safeguards
The ICO said its investigation centers on whether Grok’s design and deployment included appropriate safeguards to prevent the generation of manipulated sexual imagery using people’s personal data, and whether personal data was processed lawfully, fairly and transparently.
The regulator said it acted after reports that Grok had been used to generate non-consensual sexual imagery, including content involving children, raising concerns about individuals losing control of their personal data in ways that could cause immediate and serious harm.
William Malcolm, Executive Director for Regulatory Risk and Innovation at the ICO, said the reports raised “deeply troubling questions” about how people’s personal data may have been used without their knowledge or consent, particularly where children are involved.
He added that the ICO’s role is to address the data protection risks at the centre of the issue, while recognising that other regulators also have responsibilities for online safety and digital services.
The ICO confirmed it had already contacted X and xAI on 7 January to seek urgent information and said it will now gather evidence, analyse Grok’s technical design and safeguards, and assess how personal data may have been used to generate intimate or sexualised imagery.
Ofcom Examines Platform-Level Online Safety Failures
Ofcom’s investigation is focused on X’s responsibilities as a social media platform under the Online Safety Act. The regulator is assessing whether X adequately identified and mitigated the risk of sexual deepfake imagery spreading on its service and whether it acted quickly enough to take such content down.
Ofcom said it was one of the first regulators globally to respond to the reports, which it said may involve criminal offences, including the creation and sharing of sexual imagery of children.
Since the investigation began, X has told Ofcom that it has implemented measures aimed at addressing the issue. Ofcom said it continues to use its formal information-gathering powers and warned that companies are legally required to respond accurately, completely and on time.
The regulator confirmed it is working closely with the ICO and with international counterparts, including the European Commission, which opened its own investigation in late January.
French Authorities Escalate Scrutiny With Raid On X Offices
Regulatory pressure on X intensified further this week after French authorities confirmed that the company’s Paris offices were raided by the cyber-crime unit of the Paris prosecutor’s office.
According to prosecutors, the raid forms part of a criminal investigation into suspected offences including unlawful data extraction and alleged complicity in the possession of child sexual abuse material. The prosecutor’s office also said that Elon Musk and former X chief executive Linda Yaccarino have been summoned to appear at hearings scheduled for April.
The French investigation is separate from, but related in subject matter to, the regulatory actions now under way in the UK, where authorities are examining both the handling of harmful content and the use of personal data in AI systems connected to the platform.
Musk responded publicly to the French raid in a post on X, describing the action as a “political attack.” French prosecutors have not commented on that characterization and said the investigation remains ongoing.
The developments in France add to a widening circle of regulatory and law enforcement scrutiny facing X and its affiliated AI operations, alongside the ICO’s investigation into Grok’s use of personal data to generate sexualised imagery and Ofcom’s continuing inquiry into X’s compliance with its online safety duties.
The parallel investigations in the UK and France underscore how concerns around AI-generated sexual imagery, child safety, and data protection are increasingly triggering coordinated, and sometimes overlapping, responses from regulators and prosecutors across jurisdictions.
Regulatory Gaps Around AI Chatbots Remain
In its update, Ofcom also highlighted limitations in how the Online Safety Act applies to AI chatbots. Under the current framework, not all chatbot-generated content falls within scope, particularly where content is created in one-to-one interactions and not shared between users or generated through search.
As a result, Ofcom said it is currently unable to investigate the creation of illegal images by the standalone Grok service in this case, although it continues to demand information from xAI and is examining whether age-assurance obligations apply where pornographic content is published.
Any expansion of regulatory powers over AI chatbots, Ofcom said, would be a matter for government and Parliament, with ministers now reviewing how such systems should be regulated.
Both regulators stressed that their investigations are ongoing and that no conclusions have yet been reached. Enforcement processes are expected to take months, with any final decisions dependent on further evidence-gathering and representations from the companies involved.
Under UK data protection law, the ICO has the power to issue enforcement notices and impose fines of up to £17.5 million or 4 percent of an organisation’s global annual turnover, whichever is higher. Ofcom can also levy significant penalties where online safety duties are breached.
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.