UNICEF Urges Global Criminalization of AI-Generated Child Sexual Abuse Material

Feb 5, 2026, 12:18 p.m.

Key takeaways:

  • Regulatory crackdowns on AI platforms like X's Grok could pressure tech stocks and related crypto tokens.
  • Increased focus on "safety-by-design" may accelerate compliance-driven AI development, impacting open-source model growth.
  • The scandal highlights systemic content moderation risks for social media and AI-integrated blockchain projects.

The United Nations Children's Fund (UNICEF) issued an urgent call on Wednesday for governments worldwide to criminalize AI-generated child sexual abuse material (CSAM), citing alarming new research. The agency's report, Disrupting Harm Phase 2, conducted in partnership with ECPAT International and INTERPOL, estimates that at least 1.2 million children across 11 surveyed countries had their images manipulated into sexually explicit deepfakes in the past year. In some nations, this figure represents one in 25 children.

The research is based on a nationally representative household survey of approximately 11,000 children. It highlights a profound escalation of digital risks, where perpetrators can create realistic sexual images of a child without their involvement or awareness. In some study countries, up to two-thirds of children expressed worry that AI could be used to create fake sexual images or videos of them.

"We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material," UNICEF stated. "Deepfake abuse is abuse, and there is nothing fake about the harm it causes."

The call gains urgency amid regulatory action against AI platforms. French authorities recently raided X's Paris offices as part of a criminal investigation into alleged child pornography linked to the platform's AI chatbot, Grok. A Center for Countering Digital Hate report estimated that Grok produced over 23,000 sexualized images of children in an 11-day period in late December and early January.

Supporting data reveals the scale of the problem. The UK’s Internet Watch Foundation flagged nearly 14,000 suspected AI-generated images on a single dark-web forum in one month, with about a third confirmed as criminal. South Korean authorities reported a tenfold surge in AI and deepfake-linked sexual offenses between 2022 and 2024.

UNICEF's demands include expanding legal definitions of CSAM to include AI-generated content and criminalizing its creation, procurement, possession, and distribution. The agency also urged AI developers to implement "safety-by-design" rules, including mandatory child-rights impact assessments and pre-release safety testing for open-source models.

The regulatory fallout is already spreading. The European Commission has launched a formal investigation into whether X violated EU digital rules by failing to prevent Grok from generating illegal content. Countries including the Philippines, Indonesia, and Malaysia have banned Grok, while regulators in the UK and Australia have opened their own probes.
