Elon Musk 'admits' to the AI training practice Anthropic flagged to the White House over China
Elon Musk told a federal courtroom in California this week that his artificial intelligence startup, xAI, has used models from OpenAI to improve its own systems. His testimony came in a case that is drawing attention to how AI companies build and train their models. At issue is a technique known as model distillation, in which one AI model is used to train another. While the process is widely used in the tech industry, it has also raised concerns about whether companies are copying or benefiting from rivals' technology without clear permission. Musk's remarks added to the ongoing debate over how far companies can go in using others' AI systems.
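The core idea behind distillation can be shown in a small toy sketch. A "student" is trained to match the temperature-softened output distribution of a "teacher" by minimizing KL divergence. This is a minimal illustration in plain Python, not anything xAI, OpenAI, or any lab actually runs; the teacher logits, temperature, and learning rate are all made-up values for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher temperature gives a softer distribution
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student distribution q is from the teacher p
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical teacher logits for a 3-class toy problem
teacher_logits = [2.0, 1.0, 0.1]
T = 2.0  # distillation temperature
soft_targets = softmax(teacher_logits, temperature=T)

# The "student" here is just a trainable logit vector, updated by gradient descent
student_logits = [0.0, 0.0, 0.0]
lr = 2.0
for _ in range(300):
    q = softmax(student_logits, temperature=T)
    # Gradient of KL(p || softmax(z/T)) with respect to student logit z_j is (q_j - p_j) / T
    student_logits = [z - lr * (qi - pi) / T
                      for z, qi, pi in zip(student_logits, q, soft_targets)]

final = softmax(student_logits, temperature=T)
print(kl_divergence(soft_targets, final))  # should be close to 0
```

In a real setting the teacher's soft outputs would come from querying a large model across many prompts, and the student would be a full neural network; the mechanics of matching the teacher's output distribution are the same.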
What Elon Musk said in court
During questioning, Musk explained that model distillation means using one AI model to train another. When asked directly whether xAI had used OpenAI's technology in this way, he appeared to avoid a clear answer, saying that "generally all the AI companies" engage in the practice. When pressed on whether that meant yes, Musk responded, "Partly."
He added, “It is standard practice to use other AIs to validate your AI.”
Growing debate around AI training practices
Model distillation has become more common in recent years, but it has also sparked debate across the AI industry. The main concern is whether such practices cross legal or ethical boundaries, especially when companies use rival systems.
Firms like OpenAI and Anthropic have accused some companies, including Chinese AI labs, of using distillation to copy their models. OpenAI has raised concerns about DeepSeek, while Anthropic has named DeepSeek, Moonshot AI, and MiniMax.
Meanwhile, Google has taken steps to block what it calls “distillation attacks,” describing them as “a method of intellectual property theft that violates Google’s terms of service.”
In a blog post, Anthropic said, “Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”