Elon Musk testified in federal court on Thursday that his AI company, xAI, used OpenAI's models in part to train its Grok chatbot, according to a TechCrunch report. The admission came during the high-profile lawsuit Musk filed against OpenAI, CEO Sam Altman, and co-founder Greg Brockman, alleging the company abandoned its original nonprofit mission.
The technique, known as distillation, involves training a new AI system by querying an existing model through its public interface or API and using those outputs as learning signals. Musk described it as a broader industry practice, though it remains under growing legal and regulatory scrutiny. Distillation is not explicitly illegal, but it can raise questions about whether it violates platform rules or terms governing API use.
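In outline, the distillation workflow described above is: send prompts to an existing model's public API, record its responses, and use those prompt-response pairs as training data for the new model. The sketch below illustrates that loop with a stub standing in for the teacher API and a toy "student" that memorizes the pairs; the function names (`query_teacher`, `build_distillation_set`, `train_student`) are illustrative, not from any real system, and a real pipeline would fine-tune a neural network rather than build a lookup table.

```python
# Minimal sketch of API-based distillation. The teacher here is a stub;
# in practice it would be an HTTP call to an existing model's public API.

def query_teacher(prompt: str) -> str:
    # Stand-in for an API call to an existing model (hypothetical).
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "unknown")

def build_distillation_set(prompts):
    # Step 1: query the teacher through its public interface and
    # record (prompt, response) pairs as training targets.
    return [(p, query_teacher(p)) for p in prompts]

def train_student(dataset):
    # Step 2: toy "student" that memorizes the teacher's outputs.
    # A real pipeline would fine-tune a model on these pairs instead.
    return dict(dataset)

prompts = ["capital of France?", "2 + 2?"]
student = train_student(build_distillation_set(prompts))
print(student["capital of France?"])  # student now mimics the teacher
```

The legal questions mentioned above arise at step 1: the prompts are sent through the provider's own interface, which is typically governed by terms of service restricting use of outputs to train competing models.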
Earlier this year, Anthropic accused several Chinese AI developers of using fraudulent accounts to extract large volumes of responses from its Claude chatbot for competitive training. In April, the White House warned of “industrial-scale” campaigns using proxy accounts and jailbreaks to replicate U.S. AI capabilities. Musk’s testimony now indicates that U.S.-based companies, not just foreign competitors, are employing the method.
Musk co-founded OpenAI in 2015 as a nonprofit focused on developing AI for humanity's benefit, but left the board in 2018. xAI launched in July 2023, entering a market dominated by Google, Microsoft, and OpenAI. Earlier that year, Musk and other tech figures had signed an open letter calling for a six-month pause on developing more advanced AI systems. The court is expected to hear further testimony in the coming weeks.
The trial, which began this week in a California federal court, will examine OpenAI’s governance and the broader AI landscape. Both sides are preparing for possible appeals. Legal experts say the case could set a precedent for nonprofit-to-for-profit conversions in the tech industry.