The United Nations Children's Fund (UNICEF) issued an urgent call on Wednesday for governments worldwide to criminalize AI-generated child sexual abuse material (CSAM), citing alarming new research. The agency's report, Disrupting Harm Phase 2, conducted in partnership with ECPAT International and INTERPOL, estimates that at least 1.2 million children across 11 surveyed countries had their images manipulated into sexually explicit deepfakes in the past year. In some nations, this figure represents one in 25 children.
The research is based on nationally representative household surveys of approximately 11,000 children across the 11 countries. It highlights a profound escalation of digital risks, as perpetrators can now create realistic sexual images of a child without the child's involvement or awareness. In some study countries, up to two-thirds of children expressed worry that AI could be used to create fake sexual images or videos of them.
"We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material," UNICEF stated. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."
The call gains urgency amid mounting regulatory action against AI platforms. French authorities recently raided X's Paris offices as part of a criminal investigation into alleged child sexual abuse material linked to the platform's AI chatbot, Grok. A report by the Center for Countering Digital Hate estimated that Grok produced over 23,000 sexualized images of children in an 11-day period in late December and early January.
Supporting data reveals the scale of the problem. The UK’s Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, with about a third confirmed as criminal. South Korean authorities reported a tenfold surge in AI and deepfake-linked sexual offenses between 2022 and 2024.
UNICEF's demands include expanding legal definitions of CSAM to include AI-generated content and criminalizing its creation, procurement, possession, and distribution. The agency also urged AI developers to implement "safety-by-design" rules, including mandatory child-rights impact assessments and pre-release safety testing for open-source models.
The regulatory fallout is already spreading. The European Commission has launched a formal investigation into whether X violated EU digital rules by failing to prevent Grok from generating illegal content. Countries including the Philippines, Indonesia, and Malaysia have banned Grok, while regulators in the UK and Australia have opened their own probes.