In a landmark regulatory move, India's Department for Promotion of Industry and Internal Trade has proposed a framework that would require artificial intelligence companies, including giants like OpenAI and Google, to pay royalties when using copyrighted material to train their AI models. The proposal, detailed in a 125-page committee submission, would establish a "mandatory blanket license" system.
The system would grant AI developers automatic access to all copyrighted works in exchange for payments to a central collecting body, which would then distribute royalties to creators such as writers, musicians, and artists. The committee argues this approach corrects an imbalance in which AI firms derive significant revenue from Indian users while relying on Indian creators' work, asserting that a portion of that value should flow back to those creators.
The proposal comes at a critical time: OpenAI CEO Sam Altman has identified India as the company's second-largest market globally and potentially its largest in the future. The framework aims to lower compliance costs by eliminating individual negotiations and to create legal certainty in an area currently mired in lawsuits worldwide. In India itself, news agency ANI has sued OpenAI in the Delhi High Court over the alleged unauthorized use of its articles for training.
Industry pushback has been swift and significant. Nasscom, India's technology industry body, whose members include Google and Microsoft, has filed a formal dissent, advocating instead for a broad text-and-data-mining exception. It warns that a mandatory licensing regime could slow innovation, raise costs for startups, and create administrative burdens. The Business Software Alliance, whose members include Adobe, Amazon Web Services, and Microsoft, has also pressed the government to avoid a purely licensing-based system, cautioning that it could degrade model quality by limiting training data.
The Indian government has opened the proposal for a 30-day public consultation period. After reviewing feedback, the committee will finalize its recommendations before the government considers implementation. This interventionist approach differs markedly from that of the U.S., where the fair use doctrine is being tested in the courts, and the EU, which focuses on transparency requirements.