French authorities escalated their investigation into X (formerly Twitter) with a raid on the platform's Paris headquarters, carried out by the Paris prosecutor's cybercrime unit working with Europol. The investigation centers on alleged criminal activity including AI-generated sexual images from X's Grok chatbot, child exploitation material, and Holocaust denial content.
The timing matters. The raid follows UK regulator Ofcom opening a second probe into Grok for generating sexualized images, and the European Commission has opened four separate investigations into Musk's platforms. It also comes as competitors gain ground: Anthropic's Claude launched enterprise automation tools on January 30 that compete directly with X's AI offerings.
Musk and Linda Yaccarino, X's former CEO, have been summoned to appear at hearings in April. The raid included the seizure of documents and equipment, according to Reuters. X has not publicly responded to the raid or the summons.
What this means in practice: European regulators are coordinating enforcement actions against major platforms, using new Digital Services Act powers that allow cross-border investigations with Europol support. The Paris prosecutor's cybercrime unit specifically focuses on online criminal activity, suggesting authorities believe they have evidence beyond content policy violations.
The investigation puts X in a different category from typical content moderation disputes. Criminal investigations carry potential penalties including platform restrictions in EU markets, executive liability, and multi-billion-euro fines. Unlike administrative actions, these proceedings can result in criminal charges.
Worth noting: X's legal challenges are mounting just as competitors like Anthropic push enterprise AI tools with built-in safety guardrails. Claude's January 30 enterprise launch emphasized compliance and safety features, explicitly targeting legal and financial sectors where regulatory risk matters.
The real test comes when French prosecutors detail specific allegations in April. Until then, the coordination between French authorities, UK regulators, and the European Commission suggests a broader pattern of enforcement action against X's AI capabilities and content moderation practices.