AI Giants OpenAI and xAI Face Landmark Lawsuits Over Copyright Infringement and Child Safety Violations


Key takeaways:

  • Legal rulings against AI firms could pressure tech-heavy crypto projects reliant on similar data-scraping models.
  • Increased regulatory scrutiny on AI safety may spill over into adjacent sectors like decentralized AI and data marketplaces.
  • Investors should monitor sentiment in AI-linked tokens as these lawsuits could dampen risk appetite for narrative-driven assets.

In a seismic legal development for the artificial intelligence industry, two of its most prominent players, OpenAI and Elon Musk's xAI, are facing separate but equally significant lawsuits that challenge the fundamental practices of AI development and deployment.

OpenAI is being sued in federal court by Encyclopedia Britannica and Merriam-Webster for alleged "systematic and massive copyright infringement." The publishers accuse the AI lab of illegally using nearly 100,000 copyrighted articles to train its large language models, including ChatGPT, without permission or compensation. The complaint outlines infringement at three stages: unauthorized scraping of Britannica's online repository for training data; ChatGPT generating outputs containing "full or partial verbatim reproductions" of copyrighted entries; and violations through OpenAI's use of Retrieval-Augmented Generation (RAG) technology, which allows ChatGPT to scan the web in real time.

The lawsuit introduces a novel legal argument by alleging violations of the Lanham Act, claiming OpenAI harms Britannica's reputation when ChatGPT generates inaccurate "hallucinations" and falsely attributes them to the publisher. "ChatGPT starves web publishers of revenue by generating responses that substitute, and directly compete with, the content from publishers like Britannica," the complaint states. This case joins a growing wave of litigation from major media entities, including The New York Times and a coalition of over a dozen U.S. and Canadian newspapers, against OpenAI over similar copyright concerns.

Simultaneously, xAI faces a major lawsuit filed in the U.S. District Court for the Northern District of California on June 9, 2025. Three anonymous plaintiffs, two of whom are minors, are seeking class-action status against X.AI Corp and X.AI LLC. The lawsuit alleges that xAI's Grok AI models were used to generate sexually abusive imagery of real, identifiable minors from their childhood photos, and that the company failed to implement basic safeguards to prevent this.

The complaint details harrowing experiences: one plaintiff discovered her high school photos had been manipulated by Grok to depict her unclothed, with the images circulating on a Discord server. Another was notified by criminal investigators after a third-party app using Grok's models created sexualized images of her. The plaintiffs argue that xAI neglected established industry safety protocols, such as strict input filtering, output classifiers, and prohibited-concept training, in pursuit of a less restricted, "maximum truth-seeking" AI. The lawsuit seeks civil penalties under various statutes designed to protect exploited children and could establish a precedent defining AI developers' liability for harmful outputs.

Legal experts suggest the OpenAI case's outcome could reshape the entire AI industry, potentially forcing companies to establish licensed data partnerships or develop new training methodologies if courts side with publishers. Conversely, a ruling for OpenAI could solidify current large-scale web scraping practices. The financial stakes are enormous, as training advanced AI models requires unprecedented volumes of high-quality text data from reliable sources like encyclopedias and news archives.

The xAI case highlights the urgent need for robust safety protocols in generative AI, particularly for multimodal systems that manipulate visual media. It tests the legal boundaries of developer accountability, especially when third-party applications access a company's AI models via APIs. The outcome could force an industry-wide re-evaluation of deployment ethics and content moderation standards.

Both cases represent critical junctures for artificial intelligence, copyright law, and digital safety. Their resolutions will collectively determine the boundaries of innovation, fair use, intellectual property, and corporate responsibility in the age of generative AI, with profound implications for publishers, AI developers, and society's access to trustworthy information.
