Australia's AI Safety Institute: What It's Done So Far and What's Coming


When the Australian government announced the AI Safety Institute in August 2025, it was positioned as a response to rapid AI advancement and the need for regulatory frameworks. The initial announcement was heavy on intentions, light on specifics.

Six months later, we can see what the institute is actually doing. It’s not moving as fast as some advocates wanted, but it’s making more progress than the skeptics expected.

What the Institute Actually Is

The AI Safety Institute sits within the Department of Industry, Science and Resources. It’s got a budget of about $45 million over four years and a staff of approximately 30 people, a mix of policy experts, technical researchers, and regulatory specialists.

Its stated mandate is to:

  • Develop AI safety standards and testing frameworks
  • Provide guidance to government and industry on AI risk assessment
  • Coordinate with international AI safety efforts
  • Contribute to Australian AI regulation development

That’s broad enough to mean almost anything, which was intentional: it left room for the scope to evolve as the institute figured out which work was most urgent.

The Standards Work

The institute’s most concrete output so far is draft AI safety standards for government procurement. These aren’t legally binding yet, but they’ll likely become mandatory for federal government AI purchases by mid-2026.

The standards cover:

  • Transparency requirements (documentation of training data, model capabilities, known limitations)
  • Testing for bias and fairness across demographic groups
  • Security requirements for AI systems handling sensitive data
  • Human oversight requirements for high-risk applications

These aren’t revolutionary—they draw heavily from existing frameworks in the EU and UK—but having Australian-specific guidance matters for government procurement teams who’ve been making decisions without clear policy.

Several state governments have indicated they’ll adopt similar standards, which would give them broader impact than just federal procurement.
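
To make the transparency requirement concrete, here’s a minimal sketch of the kind of documentation record a procurement team might collect from vendors. This is my own illustration: the names (ModelDisclosure, training_data_summary, and so on) and the completeness check are assumptions, not taken from the draft standards.

```python
# Hypothetical sketch only: field names are illustrative, not quoted from
# the actual draft standards.
from dataclasses import dataclass

@dataclass
class ModelDisclosure:
    """Transparency documentation for an AI system offered to government."""
    system_name: str
    training_data_summary: str      # provenance and coverage of training data
    stated_capabilities: list[str]  # what the vendor says the system can do
    known_limitations: list[str]    # documented failure modes and gaps
    handles_sensitive_data: bool    # would trigger the security requirements
    high_risk: bool                 # would trigger human oversight requirements
    human_oversight_plan: str = ""

    def missing_fields(self) -> list[str]:
        """A first-pass completeness check a procurement team might run."""
        gaps = [name for name, value in [
            ("training_data_summary", self.training_data_summary),
            ("stated_capabilities", self.stated_capabilities),
            ("known_limitations", self.known_limitations),
        ] if not value]
        if self.high_risk and not self.human_oversight_plan:
            gaps.append("human_oversight_plan")
        return gaps
```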

The Risk Assessment Framework

The institute released a preliminary AI risk assessment framework in January 2026. It’s designed to help organizations categorize AI systems by risk level and apply appropriate safeguards.

The framework defines four risk categories:

Minimal risk: AI systems with negligible potential for harm (examples: spam filters, recommendation systems for non-critical content)

Limited risk: Systems where transparency requirements apply but impact is constrained (examples: chatbots, basic automation tools)

High risk: Systems affecting rights, safety, or access to essential services (examples: AI used in hiring, credit decisions, healthcare diagnostics)

Unacceptable risk: Applications prohibited or requiring exceptional oversight (examples: social scoring, certain surveillance applications)

This framework is clearly influenced by the EU AI Act, which isn’t surprising—Australia doesn’t need to reinvent categorization systems that already work.

The framework is currently voluntary. Whether it becomes mandatory depends on broader AI regulation that’s still being developed.
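
As a rough illustration of how an organization might encode the four tiers internally, here’s a small sketch. The tier definitions and examples follow the framework as described above; the use-case mapping and the safeguard ladder are my assumptions, not the institute’s published criteria.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # negligible potential for harm
    LIMITED = 2       # transparency obligations, constrained impact
    HIGH = 3          # affects rights, safety, or essential services
    UNACCEPTABLE = 4  # prohibited or requiring exceptional oversight

# Seeded with the framework's own examples; real triage would need
# structured criteria, not a lookup table.
TIER_BY_USE_CASE = {
    "spam_filtering": RiskTier.MINIMAL,
    "content_recommendation": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "hiring_screening": RiskTier.HIGH,
    "credit_decisioning": RiskTier.HIGH,
    "healthcare_diagnostics": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def required_safeguards(tier: RiskTier) -> list[str]:
    """Escalating safeguard sets per tier (illustrative, not official)."""
    ladder = {
        RiskTier.MINIMAL: [],
        RiskTier.LIMITED: ["transparency notice"],
        RiskTier.HIGH: ["transparency notice", "bias testing",
                        "human oversight", "security review"],
        RiskTier.UNACCEPTABLE: ["do not deploy without exceptional approval"],
    }
    return ladder[tier]
```

The point of encoding it this way is that the safeguards compound: anything classified high risk carries the lower tiers’ obligations plus oversight.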

International Coordination

One of the institute’s main activities has been international engagement. Australian staff have participated in:

  • The UK AI Safety Summit follow-up working groups
  • OECD AI policy development
  • Bilateral discussions with Singapore, Japan, and the UK on AI safety testing

This coordination work is less visible than policy development but potentially valuable. Australia can adopt proven approaches from other jurisdictions rather than developing everything from scratch.

The institute has focused particularly on coordination with the UK’s AI Safety Institute, which has more resources and a longer runway. There’s discussion of collaborative testing infrastructure and shared evaluation frameworks.

What They Haven’t Done Yet

Several areas where progress has been slower than expected:

Testing infrastructure. The institute was supposed to develop capabilities for testing AI systems, particularly large language models. This hasn’t materialized yet beyond preliminary planning. Building actual testing infrastructure requires more technical capacity than the institute currently has.

Sector-specific guidance. Healthcare, education, and justice system AI applications all need specific guidance. The institute has done workshops and consultations but hasn’t published sector-specific frameworks yet.

Red teaming capabilities. There was discussion about the institute conducting adversarial testing of AI systems to identify risks. This requires significant technical expertise that the institute is still hiring for.

Public AI incident database. The institute planned to track AI failures and harms in Australia to inform policy. This database exists but isn’t public yet due to concerns about liability and commercial sensitivity.
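
For what it’s worth, here’s a guess at the minimal shape such an incident record might take. None of these fields are confirmed; the database isn’t public, so this is purely hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentRecord:
    """Purely hypothetical schema for an AI incident register entry."""
    reported: date
    sector: str              # e.g. "healthcare", "justice", "finance"
    system_description: str  # what the system does, not who sells it
    harm_description: str    # the observed or alleged harm
    severity: str            # a scale the register would have to define
    commercially_sensitive: bool  # the stated reason publication is delayed
```

The commercially_sensitive flag is where the tension lives: a public register is only useful if most records can eventually be released.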

The Regulatory Question

The institute’s work is separate from but related to broader AI regulation currently being developed by the Attorney-General’s Department.

There’s ongoing debate about whether Australia should have:

  • Sector-specific AI rules (regulating healthcare AI differently from financial services AI)
  • Cross-cutting AI legislation similar to the EU AI Act
  • A principles-based approach relying on existing consumer protection and anti-discrimination law

The institute’s work assumes eventual regulation but isn’t driving the regulatory design. That’s happening through a separate process involving multiple government departments.

The realistic timeline for Australian AI legislation is probably late 2026 or 2027. The institute is developing technical frameworks that will inform that legislation.

Industry Response

Industry response has been cautiously positive. The big tech companies engaged with the institute’s consultations and generally support standards-based approaches over prescriptive regulation.

Australian AI startups are more mixed. Some appreciate clarity and guidance. Others worry that compliance requirements will be burdensome for small companies without dedicated compliance teams.

One concern I’ve heard repeatedly: the standards and frameworks are written for large organizations with resources. A three-person AI startup doesn’t have the capacity for the extensive documentation and testing the frameworks assume.

The institute has said it’s developing simplified guidance for small businesses, but that hasn’t appeared yet.

Academic and Civil Society Views

Academic researchers working on AI safety are generally supportive but want more technical depth. The institute’s initial outputs are policy-focused; the research community wants to see actual technical evaluation capabilities.

Civil society groups focused on algorithmic justice and AI ethics think the institute is moving too slowly on high-risk applications like facial recognition and predictive policing. There’s concern that government use of these technologies is proceeding without adequate oversight while the institute develops frameworks.

The institute’s response has been that they’re prioritizing foundational work (standards, risk frameworks) before tackling specific controversial applications. That’s defensible but frustrating to advocates who want immediate action on systems they see as harmful.

The Realistic Assessment

The institute has made reasonable progress given its limited resources and six-month track record. Standards for government procurement and a risk assessment framework are useful contributions.

But the ambitious vision from the initial announcement—Australia as a leader in AI safety research and testing—hasn’t materialized yet. That would require significantly more technical capacity and funding than the institute currently has.

The institute is functioning more as a policy coordination and standards development body than a technical research organization. That’s probably the right scope for its current resources.

What’s Coming in 2026

Based on published plans and discussions with people familiar with the institute’s work:

Sector-specific guidance for healthcare AI is expected mid-2026. This will likely become the template for other sector-specific frameworks.

Testing capabilities for government procurement should be operational by late 2026, probably through partnerships with universities rather than in-house capacity.

Public reporting on AI incidents and risks in Australia is planned for late 2026, though the format and detail level are still being determined.

Expanded international coordination is planned, particularly with Pacific nations that are interested in AI policy development but lack technical capacity.

The institute will also keep contributing to regulatory design as the broader AI legislation process continues.

The Comparison Question

How does Australia’s AI Safety Institute compare to equivalent bodies internationally?

The UK AI Safety Institute has more funding (about £100 million), more staff, and more developed technical capabilities. It’s ahead of Australia in most areas.

Singapore’s AI Verify Foundation is more focused on testing and certification, less on policy development. It’s technically sophisticated but narrower in scope.

The US AI Safety Institute (part of NIST) is primarily standards-focused and doesn’t do much policy work or regulatory development.

Australia’s institute sits somewhere in the middle: broader in scope than Singapore’s, more policy-oriented than the UK’s technically focused institute, and less resourced than any of them.

The Honest Outlook

The AI Safety Institute is doing useful work within its limitations. It won’t make Australia a leader in AI safety research; that would require a different scale of investment.

But it’s providing clarity for government procurement, developing frameworks that will inform eventual regulation, and coordinating internationally in ways that help Australia adopt proven approaches.

For a small country with limited resources, that’s a reasonable approach. We don’t need to develop everything from scratch. We need to adapt good work from elsewhere and apply it in the Australian context.

The institute is doing that. It’s not revolutionary, but it’s competent and useful.

For more on Australian AI policy, check out the Department of Industry’s AI policy page, CSIRO’s Responsible AI resources, and Human Technology Institute’s Australian AI policy analysis.