On Tuesday, December 2, 2025, the Australian government unveiled its updated National AI Plan, marking a significant shift from earlier proposals for stricter AI regulation. The strategy favors a flexible approach that relies on existing laws rather than introducing new regulatory measures, aiming to balance innovation with public safety.
The plan prioritizes three key areas: attracting investment for advanced data centers to foster AI development and economic growth, upskilling the workforce to adapt to AI-driven changes and protect jobs, and ensuring public safety as AI becomes integrated into daily life. The government explicitly stated that AI risks will be managed under current legal frameworks, with individual agencies responsible for overseeing risks in their domains.
Australia announced the establishment of an AI Safety Institute, set to open in 2026, which will monitor emerging AI risks and respond to threats. Federal Industry Minister Tim Ayres defended the plan, emphasizing its goal to help Australians benefit from new technology while keeping them safe from risks such as misinformation from tools like OpenAI's ChatGPT and Google's Gemini.
However, experts have raised concerns about gaps in the plan. Niusha Shafiabady, an Associate Professor at Australian Catholic University, warned that without addressing accountability, sovereignty, sustainability, and democratic oversight, Australia risks building an AI economy that is efficient but neither equitable nor trusted. The government's approach reflects a desire to encourage innovation without stifling progress, but it remains uncertain whether existing laws are sufficient to cover all potential AI risks.