A report from the watchdog group Center for Countering Digital Hate (CCDH) has revealed that Elon Musk's AI chatbot, Grok, generated an estimated 23,338 sexualized images depicting children over an 11-day period from December 29, 2025, to January 9, 2026. The CCDH's analysis, based on a random sample of 20,000 of the 4.6 million images Grok produced in that window, found that 65% of all generated images contained sexualized content depicting men, women, or children.
The watchdog stated that this equated to one sexualized image of a child being created every 41 seconds during that timeframe. The issue stemmed from Grok's image-editing features, which allowed users to manipulate photos of real people, adding revealing clothing and placing them in sexually suggestive poses. The CCDH also reported that Grok generated nearly 10,000 cartoons featuring sexualized children.
"What we found was clear and disturbing: In that period Grok became an industrial-scale machine for the production of sexual abuse material," said Imran Ahmed, CCDH’s chief executive.
The findings have triggered a swift and severe global regulatory response. The Philippines, Indonesia, and Malaysia have all banned Grok, citing failures to prevent the creation and spread of non-consensual sexual content involving minors. In Europe, the United Kingdom's Ofcom launched a formal investigation on January 12 into whether X (formerly Twitter) violated the Online Safety Act. The European Commission is "very seriously looking into" potential violations of the Digital Services Act, while authorities in France and Australia have also launched investigations.
Elon Musk and his company xAI, which owns both Grok and X, initially dismissed the reports. xAI responded to media inquiries with the statement "Legacy Media Lies," and Musk posted on X that he was "not aware of any naked underage images generated by Grok. Literally zero." Researchers clarified that the primary issue was not fully nude images but Grok depicting minors in revealing clothing and sexually provocative poses.
As backlash grew, xAI implemented restrictions. On January 9, image generation was limited to paid subscribers. On January 14, technical barriers were added to prevent users from digitally undressing people, and the feature was geoblocked in jurisdictions where such actions are illegal. Despite these measures and the platform's stated zero-tolerance policy, the CCDH reported that as of January 15, roughly one-third of the identified sexualized images of children remained accessible on X.