French authorities raided X's Paris office on February 3, with prosecutors summoning Elon Musk and former CEO Linda Yaccarino to appear at hearings in April. The action, supported by Europol's cybercrime unit, centres on allegations that X's Grok AI has generated sexualised images, including child sexual abuse material and celebrity deepfakes.
The UK's communications regulator Ofcom launched a separate investigation the same day, examining whether X violated the Online Safety Act. According to Ofcom, Grok continued producing sexualised images "even when restricted settings were enabled."
The twin actions mark escalating regulatory pressure on X's AI tools across Europe; the European Commission is also investigating the platform. The timing is notable: X has faced mounting scrutiny since Musk's acquisition, particularly around content moderation and AI safety controls.
The raids come as European regulators test new enforcement powers under digital services legislation. France's move to summon Musk directly is significant: most tech enforcement actions target corporate entities, not executives personally. Whether Musk will actually appear remains unclear.
X has not yet commented on the raids or summons. The company previously defended Grok's safety measures after similar concerns were raised in late 2025.
For enterprise tech leaders watching AI deployment, the cases highlight a widening gap between US and European regulatory approaches. While US oversight of generative AI remains largely voluntary, European authorities are using existing cybercrime and online safety laws to pursue enforcement. This creates compliance complexity for organisations deploying AI tools across regions.
The parallel UK and French investigations suggest a coordinated regulatory strategy rather than isolated action. Enterprise architects building AI into products should note that safety controls marketed as sufficient in one jurisdiction may not satisfy regulators elsewhere. The bar for what constitutes adequate safeguards is rising, particularly for publicly accessible generative AI.
We'll see whether this enforcement pattern extends to other platforms. For now, X is the test case.