FairWorkMate

Gen AI at Work in 2026: FWC Cracks Down on AI Claims, NSW Passes First AI Safety Law

4 min read

AI-generated Fair Work claims have surged 70%. NSW just passed Australia's first AI workplace safety law. The FWC now requires AI disclosure. Here's what every Australian worker and employer needs to know about Gen AI at work right now.


Rachel Morrison

Senior Workplace Relations Writer · GradDip Employment Relations, Griffith University

AI-generated claims are flooding the Fair Work Commission

The Fair Work Commission is being overwhelmed by AI-generated claims. Its workload has grown sharply over three years: from approximately 30,000 matters per year before 2023 to 45,000 in 2024-25, with a projected 55,000 matters this financial year.

The problem isn't just volume — it's quality. Workers are using ChatGPT and other AI tools to draft unfair dismissal claims, general protections applications, and workplace complaints. The result is claims that look professional but are often fundamentally flawed.

In one recent case, Riley v Nuvei Australia Merchant Services [2026], an applicant submitted a claim containing fake case law citations generated by AI. The Commission ultimately showed leniency because the applicant did not press the fabricated citations when challenged, but the decision is a warning shot.

In another case, a Sydney worker's general protections claim was thrown out entirely after the FWC determined it was ChatGPT-generated, had been lodged 2.5 years late, and contained fundamental legal errors that any human review would have caught.

The takeaway: AI can help you draft a claim, but it cannot replace legal advice. If you're lodging a Fair Work claim, have a human — ideally a lawyer or union representative — review it before you submit.

FWC now requires AI disclosure on all applications

In response to the flood of AI-generated claims, the Fair Work Commission now requires all applicants to disclose whether they used AI in preparing their application, and to confirm that they have checked the AI's output for accuracy.

This is a significant shift. It means:

  • You can use AI tools to help prepare your claim — it's not prohibited
  • You must disclose that you used AI
  • You must verify everything the AI produced — every case citation, every legal reference, every factual claim
  • If the FWC discovers undisclosed AI use, or AI-generated content that contains errors you didn't check, it will weigh against your credibility

For employers responding to claims: be aware that AI-assisted claims may look more polished than their substance warrants. Don't assume a well-written claim is a strong claim. Look at the actual merits and check every case reference cited.

NSW passes Australia's first AI workplace safety law

On 12 February 2026, NSW passed the Digital Work Systems Amendment to the Work Health and Safety Act, becoming the first Australian state to explicitly regulate AI in the workplace.

The law amends the WHS Act to cover algorithms, AI systems, automation, and online platforms as "digital work systems." Key provisions include:

  • Employers must ensure digital work systems do not risk worker health and safety — this covers AI-driven work allocation, performance monitoring, automated rostering, and algorithmic management
  • Union access powers: union permit holders can require "reasonable assistance" to access and inspect digital work systems suspected of breaching WHS obligations (48 hours' notice required)
  • The law treats AI-related risks the same as any other workplace hazard — meaning employers have the same duty of care obligations

The law has not yet commenced — it's awaiting proclamation and SafeWork NSW guidelines. But it signals the direction of regulation nationally.

Industry groups including the Ai Group have criticised the law as "fundamentally flawed," warning that union inspection powers could lead to "fishing expeditions" through employer technology systems. The debate is far from settled.

What about federal AI regulation?

The federal government has paused standalone AI legislation, taking the position that existing "technology-neutral" laws are sufficient to cover AI in the workplace. This means:

  • The Fair Work Act already covers AI-related employment decisions — if an algorithm fires you, your unfair dismissal rights are the same as if a human made the call
  • Anti-discrimination laws already apply to AI-driven hiring and performance management — if an algorithm discriminates on the basis of age, gender, race, or disability, the employer is liable
  • The AI Safety Institute, announced in November 2025, is being rolled out from early 2026 but has no regulatory teeth yet

In practice, this means Australian workers are relying on laws designed before AI existed to protect them from AI-related workplace issues. NSW's move to explicitly regulate digital work systems may force the federal government's hand — particularly if other states follow.

Practical guide: using AI tools at work without getting fired

If you're using ChatGPT, Copilot, Gemini, or other AI tools at work, here's what you need to know to protect yourself:

  • Check your employer's AI policy: many workplaces now have explicit policies on AI tool use. Some ban it entirely, others allow it with conditions. Violating your employer's AI policy could constitute misconduct
  • Never input confidential information: anything you type into a public AI tool may be stored, logged, or used to train models. Client data, financial information, personal employee records, and trade secrets should never go into ChatGPT or similar tools
  • You're responsible for AI output: if you use AI to draft a report, email, or document and it contains errors, you — not the AI — are accountable. Always review, fact-check, and verify before using AI-generated content in your work
  • AI can't replace professional advice: this is especially true for legal, medical, and financial matters. AI tools can help you organise your thoughts and draft documents, but they hallucinate facts, invent legal citations, and confidently state things that are wrong
  • Your employer can monitor your AI use: if you're using company devices or networks, your employer can see what tools you're accessing. Don't assume AI use is private

The bottom line: AI is a tool, not an expert. Use it to draft, brainstorm, and organise — then apply your own judgment before hitting send.

AI and your job security: know your rights

If your employer is introducing AI tools that affect your role, you have rights:

  • Consultation obligations: under most modern awards and enterprise agreements, your employer must consult with affected employees before introducing major workplace changes, including new technology
  • Redundancy protections: if AI genuinely makes your role redundant, you're entitled to redundancy pay, notice periods, and the right to be redeployed to a suitable alternative position within the business
  • Unfair dismissal: being replaced by AI doesn't exempt your employer from unfair dismissal laws. If the redundancy isn't genuine, or if proper process isn't followed, you can challenge it
  • Reasonable adjustments: if AI tools are introduced that you struggle to use due to disability, age, or other protected attributes, your employer has an obligation to provide reasonable adjustments

The workplace is changing fast. But your rights under the Fair Work Act haven't changed — they apply regardless of whether the decision-maker is human or machine.

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.


About Rachel Morrison

Rachel spent nine years in HR advisory roles across retail and hospitality before moving into workplace compliance writing. She holds a Graduate Diploma in Employment Relations from Griffith University and has a particular interest in award interpretation and underpayment issues. Based in Brisbane.
