Nonprofits demand federal Grok ban after thousands of nonconsensual deepfakes generated

Public Citizen and coalition partners are calling for immediate suspension of xAI's Grok in federal agencies after the chatbot generated thousands of nonconsensual sexual images, including some depicting children. This is the third warning letter since August 2025. California's AG has launched a formal investigation.

The Pattern Is Clear

A coalition of U.S. nonprofits, including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America, is demanding that the federal government suspend Grok deployments immediately. This is their third letter on the issue since August 2025.

The trigger: Grok's "spicy mode" enabled mass creation of nonconsensual sexual deepfakes. Users uploaded ordinary photos of women and children, then prompted Grok to generate sexually explicit versions. An analysis of 20,000 images generated between Christmas and New Year's found that over half depicted people in minimal or sexualized clothing. Some subjects were children.

Government Response Varies

California Attorney General Rob Bonta launched a formal investigation into xAI and Grok over the proliferation of nonconsensual sexual imagery. The UK's Internet Watch Foundation identified "criminal imagery" of children as young as 11. In Parliament, the UK government stated that sharing deepfake intimate images violates both the Online Safety Act and the Data Act, with further legislation targeting companies that supply such tools.

xAI's response: limiting image generation to paid subscribers. UK officials rejected this as "monetising abuse."

The Federal Question

The coalition is demanding clarity on whether Grok meets Trump administration standards for "truth-seeking, neutral LLMs" and OMB risk mitigation requirements. The question matters because federal deployment implies government endorsement of the technology's safety and reliability.

The broader context: this lands as enterprises grapple with content moderation at scale. Detection APIs for NSFW content and deepfakes are benchmarked constantly, and 2024 accuracy figures are public; NIST has published standards for authenticating AI-generated content. The technology to prevent this exists.

What This Means

The case illustrates the gap between what's technically possible and what vendors choose to implement. Content provenance APIs, synthetic media detection frameworks, real-time moderation tools—all available. xAI's "spicy mode" was a product decision, not a technical limitation.
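To make that point concrete, here is a minimal sketch of the kind of pre-generation gate such tooling enables. Every name and heuristic below is illustrative, not xAI's pipeline or any vendor's actual API; a real system would use trained classifiers for prompt intent, face detection, and age estimation rather than these placeholders.

```python
# Illustrative sketch only: keyword matching and boolean flags stand in for the
# trained NSFW, face-detection, and age-estimation models a real system would use.
from dataclasses import dataclass

SEXUALIZING_TERMS = {"nude", "undress", "lingerie", "explicit", "spicy"}


@dataclass
class ModerationDecision:
    allowed: bool
    reason: str = ""


def prompt_requests_sexualization(prompt: str) -> bool:
    """Placeholder for a trained text-intent classifier."""
    return any(term in prompt.lower() for term in SEXUALIZING_TERMS)


def gate_edit_request(prompt: str, *, subject_is_real_person: bool,
                      subject_may_be_minor: bool) -> ModerationDecision:
    """Decide BEFORE any image is generated. The two flags would come from
    face-detection and age-estimation models run on the uploaded photo;
    when those models are uncertain, callers should pass True (fail closed)."""
    if subject_may_be_minor:
        return ModerationDecision(False, "source image may depict a minor")
    if subject_is_real_person and prompt_requests_sexualization(prompt):
        return ModerationDecision(False, "sexualized edit of an identifiable real person")
    return ModerationDecision(True)


if __name__ == "__main__":
    decision = gate_edit_request(
        "make this photo spicy",
        subject_is_real_person=True,
        subject_may_be_minor=False,
    )
    print(decision)  # blocked: sexualized edit of an identifiable real person
```

The specific heuristics are beside the point; what matters is where the check sits, before generation happens rather than after abusive images have already circulated.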

For CTOs evaluating AI deployments: the coalition's letters highlight what due diligence looks like. Pattern of behavior matters. Response to problems matters. Whether a vendor has child safety guardrails matters.

xAI and OMB have not commented. We'll see if this third letter gets a different response than the first two.