FairWorkMate

Will Australia Regulate Workplace AI? What's Coming in 2026-27

6 min read

Australia is moving toward AI workplace regulation. See what's coming in 2026-27: the proposed mandatory guardrails, union and employer group positions, and how the changes could affect your rights.

Where Australia stands on AI workplace regulation in 2026

As of March 2026, Australia does not have dedicated legislation regulating the use of AI in the workplace. However, the regulatory landscape is evolving rapidly. The Australian Government released its Safe and Responsible AI consultation paper in 2023, followed by an interim response in 2024 that flagged employment as a high-risk area for AI deployment. The government has signalled its intention to introduce mandatory guardrails for high-risk AI applications, which are expected to include AI systems used in hiring, performance management, workforce planning, and termination decisions. The Department of Industry, Science and Resources is leading the policy development, working alongside the Department of Employment and Workplace Relations and the Attorney-General's Department. While no bill has been introduced to Parliament, the direction of travel is clear — Australia is moving from voluntary AI ethics principles to enforceable regulation.

The proposed mandatory guardrails for high-risk AI

The government's interim response to the Safe and Responsible AI consultation reaffirmed Australia's eight voluntary AI Ethics Principles and signalled that mandatory guardrails would be introduced for high-risk AI applications. High-risk areas are expected to include employment decisions (hiring, firing, promotion, performance assessment), health, education, justice, and government services. The proposed guardrails are likely to require: transparency about AI use, so people are told when AI is involved in decisions about them; human oversight of AI-driven decisions; regular testing for bias and accuracy; data governance and privacy protections; mechanisms for individuals to challenge AI decisions; and accountability frameworks, including audit requirements. The government has indicated it will draw on international best practice, including the EU AI Act, the OECD AI Principles, and Canada's Directive on Automated Decision-Making. A detailed regulatory impact analysis is expected in late 2026, with legislation possible in 2027.

The union position — ACTU and industry unions

The Australian Council of Trade Unions (ACTU) has been one of the most vocal advocates for AI workplace regulation. Its policy position includes: a right to know when AI is being used in employment decisions; a right to human review of any AI-driven employment decision; a requirement for employers to consult with workers and unions before introducing AI systems that affect jobs; transparency about the data AI systems collect and how it is used; protections against AI-driven surveillance and algorithmic management; and a prohibition on using AI to undermine collective bargaining or union organising. Individual unions have also taken strong positions. The Transport Workers Union has campaigned for gig worker protections against algorithmic management, the CPSU (Community and Public Sector Union) has raised concerns about AI in the Australian Public Service, and the Finance Sector Union has highlighted risks of AI bias in banking and insurance. These positions are shaping the regulatory debate and influencing enterprise bargaining.

Employer group perspectives — ACCI, Ai Group, and BCA

Employer groups have taken a more cautious approach, emphasising the productivity benefits of AI and cautioning against over-regulation. The Australian Chamber of Commerce and Industry (ACCI) has argued that existing employment law — including the Fair Work Act, anti-discrimination legislation, and privacy law — already provides adequate protections and that AI-specific regulation risks stifling innovation. The Australian Industry Group (Ai Group) has supported a risk-based approach but has advocated for industry co-regulation rather than prescriptive legislation. The Business Council of Australia (BCA) has emphasised Australia's opportunity to become a global AI leader and has warned that heavy-handed regulation could drive investment offshore. However, even employer groups have acknowledged the need for some guardrails, particularly around transparency, bias, and data governance. The debate is not whether to regulate but how much and how prescriptively.

What enterprise agreements are already doing about AI

While legislation catches up, some Australian workplaces are already negotiating AI governance clauses through enterprise bargaining. Recent enterprise agreements in the public sector, the finance sector, and at some large employers have included: provisions requiring consultation before AI systems are introduced or changed; commitments to human oversight of AI-driven employment decisions; transparency obligations about what data AI systems collect; protections against adverse action based solely on AI assessments; and retraining and redeployment commitments when AI changes roles. The Fair Work Commission has generally approved these clauses as part of enterprise agreements. For employees covered by an enterprise agreement, the next round of bargaining is an opportunity to push for AI-specific protections, and union members should raise AI governance as a bargaining priority. Even without legislation, enterprise bargaining can deliver workplace-level protections that go beyond minimum standards.

What to expect in 2026-27 and how to prepare

Based on the government's current trajectory, expect the following developments in 2026-27. A detailed regulatory framework for high-risk AI is expected to be released for public comment. This will likely include specific provisions for employment-related AI systems. The Privacy Act reforms, which include provisions relevant to automated decision-making, are expected to progress through Parliament. The Fair Work Commission may begin considering AI-related disputes more frequently, establishing informal precedents. Enterprise agreements negotiated in 2026-27 will increasingly include AI clauses. To prepare, employees should stay informed about their rights under current law, document any AI systems used in their workplace, raise AI governance as an issue with their union or employer, and be aware of the complaints mechanisms available if AI is used unfairly. Employers should begin reviewing their AI systems against the likely regulatory requirements, conducting bias audits, and developing transparent AI use policies.
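For employers wondering where a bias audit starts, one common first check is comparing how often an AI screening tool shortlists candidates from different groups. The sketch below is a hypothetical illustration only, not a method prescribed by any Australian regulator: the group labels and audit data are invented, and the selection-rate ("disparate impact") ratio it computes is one metric among many used in fairness testing.

```python
# Hypothetical bias-audit sketch: per-group selection rates from an AI
# screening tool's log, and the ratio of the lowest rate to the highest.
# Groups "A" and "B" and the data below are illustrative assumptions.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    Values well below 1.0 flag a disparity worth investigating."""
    return min(rates.values()) / max(rates.values())

# Illustrative audit log: (group, was_shortlisted)
audit_log = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(audit_log)  # {"A": 0.4, "B": 0.2}
ratio = impact_ratio(rates)         # 0.5 -> substantial disparity
```

A low ratio does not by itself prove unlawful discrimination, but it is the kind of red flag that transparency and audit guardrails are designed to surface, and it would justify a closer review of the tool and its training data.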

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.