FairWork Mate

AI in the Workplace — Your Rights as an Australian Employee [2026 Guide]


Your employer is introducing AI tools at work. Learn your rights around AI monitoring, surveillance, data privacy, refusal to use AI, and discrimination through AI hiring systems.

Employer obligations when introducing AI tools at work

When your employer introduces AI tools into the workplace, they have several legal obligations. Under most modern awards and enterprise agreements, the introduction of new technology that significantly affects employees constitutes a major workplace change requiring formal consultation. Your employer must notify you of the proposed change, explain its expected impact on your role, discuss measures to mitigate any negative effects, and genuinely consider any concerns or alternatives you raise.

Beyond consultation, employers must provide adequate training on any new AI tools you are required to use. Directing you to use unfamiliar technology without training, then disciplining you for poor performance, would likely be found unreasonable in an unfair dismissal claim.

If the AI tools change the nature of your role (for example, requiring you to review AI-generated outputs instead of producing them yourself), this may constitute a change to your position description that should be formally documented and may trigger a reclassification under your award. Your employer cannot unilaterally and fundamentally change the nature of your role without your agreement. If they do, this could amount to repudiation of your employment contract, potentially giving rise to a constructive dismissal claim.

Can you be fired for refusing to use AI at work?

The short answer: it depends on whether the direction to use AI is lawful and reasonable. An employer can generally direct employees to use specific tools and technology as part of their job, provided the direction is lawful, reasonable, and within the scope of the employment contract. If AI tools have been properly introduced with adequate consultation and training, and using them falls within your role's duties, refusing to use them could constitute a failure to follow a lawful and reasonable direction, which may be grounds for disciplinary action up to and including termination.

There are important exceptions. Your refusal may be protected if you have a genuine and reasonable basis for it, such as:

- the AI tool requires you to breach professional ethical obligations (for example, a lawyer or doctor whose professional body has issued guidance against certain AI uses);
- the tool processes personal data in ways that may breach privacy legislation;
- you have not been given adequate training; or
- using the tool creates work health and safety risks, such as unreasonable workload or psychological harm from AI monitoring.

If you are dismissed for refusing to use AI and believe the dismissal was harsh, unjust, or unreasonable, you can lodge an unfair dismissal application with the Fair Work Commission within 21 days of the dismissal taking effect.

AI monitoring and surveillance at work — what is legal?

AI-powered workplace monitoring and surveillance is a rapidly evolving area of Australian law, and the rules vary by jurisdiction:

- New South Wales: the Workplace Surveillance Act 2005 (NSW) requires employers to give employees at least 14 days written notice before commencing surveillance of any kind, including computer and email monitoring. Covert surveillance is permitted only in very limited circumstances with a court order.
- Victoria: there is no specific workplace surveillance legislation, but privacy principles under the Privacy and Data Protection Act 2014 (Vic) apply to public sector employers, and the general law of privacy is developing.
- Queensland: there is no comprehensive workplace surveillance act, but the Information Privacy Act 2009 (Qld) applies to government employers.
- Federal: the Privacy Act 1988 (Cth) regulates how private sector organisations with annual turnover exceeding $3 million handle personal information. Note, however, that the employee records exemption means the Act generally does not apply to a private sector employer's handling of existing employee records where that handling directly relates to the employment relationship.

Where the Privacy Act does apply, the key principles include: the employer must notify you about what data is being collected and why, the collection must be reasonably necessary for the employer's functions or activities, and the data must be stored securely and not used for unrelated purposes. AI tools that monitor keystrokes, screen activity, facial expressions, or communications must comply with these requirements.

Data privacy and AI tools — your personal information rights

When AI tools process your personal information at work, Australian privacy law may apply. Under the Australian Privacy Principles (APPs), an employer covered by the Privacy Act must only collect personal information that is reasonably necessary for its functions, inform you of the purpose of collection, take reasonable steps to ensure the information is accurate and secure, and not use or disclose it for purposes other than those notified. Be aware, though, that the employee records exemption currently excludes much of a private sector employer's handling of existing employee records from the Privacy Act, which limits these protections in practice.

If your employer uses AI tools that process your personal data, such as productivity monitoring software, AI-powered HR analytics, or tools that analyse your communications, you can ask what data is being collected, how it is used, who has access to it, and how long it is retained. Where the Privacy Act applies, you can request access to your personal information under APP 12 and correction of inaccurate information under APP 13.

Further Privacy Act reforms (expected to progress in 2026) may strengthen employee privacy rights, potentially including a narrowing of the employee records exemption; a statutory tort for serious invasions of privacy has already been legislated. If you believe your employer is mishandling your personal data through AI tools, you can lodge a complaint with the Office of the Australian Information Commissioner (OAIC) or seek advice from your union.

Using AI tools without employer permission — the risks

Using AI tools like ChatGPT, Claude, or Copilot at work without your employer's permission can expose you to disciplinary action. The primary risks are:

- Confidentiality breaches: inputting company data, client information, trade secrets, or proprietary processes into an AI tool may breach your duty of confidentiality, your employment contract, and potentially privacy legislation. Many AI tools store and use input data for training, meaning sensitive information could be exposed.
- Intellectual property issues: content generated by AI tools has uncertain copyright status in Australia, and using AI-generated work product without disclosure may breach your employer's IP policies.
- Quality and accuracy: AI tools can produce plausible but incorrect outputs (hallucinations), and submitting AI-generated work without proper review could constitute negligence.
- Policy breaches: many employers have implemented AI acceptable use policies, and breaching them is a disciplinary matter.

To protect yourself:

- check whether your employer has an AI usage policy before using any AI tools for work;
- never input confidential, personal, or commercially sensitive information into AI tools;
- disclose when work has been assisted by AI; and
- if there is no policy, ask your manager for guidance in writing before using AI tools.

Discrimination through AI hiring and management tools

AI-powered hiring tools, including resume screening algorithms, video interview analysis, psychometric testing, and automated shortlisting, are increasingly used by Australian employers. These tools can unlawfully discriminate if they produce biased outcomes based on protected attributes such as age, gender, race, disability, or other characteristics protected under the Fair Work Act and federal and state anti-discrimination legislation. The Australian Human Rights Commission has flagged AI hiring discrimination as a significant concern: AI systems trained on historical data can perpetuate and amplify existing biases. For example, a resume screening tool trained on data from a historically male-dominated industry may systematically disadvantage female applicants.

Under Australian law, both direct and indirect discrimination are prohibited. Even if an AI tool appears neutral on its face, a disproportionate adverse impact on a protected group without reasonable justification constitutes indirect discrimination. The employer remains liable for discriminatory outcomes produced by its AI tools; responsibility cannot be delegated to the technology vendor. If you believe you have been discriminated against by an AI hiring or management tool, you can lodge a complaint with the Australian Human Rights Commission or the relevant state or territory anti-discrimination body.

Fair Work protections and where to get help

The Fair Work Act provides several protections relevant to AI in the workplace. The general protections (Part 3-1) prohibit adverse action against an employee for exercising, or proposing to exercise, a workplace right, which includes raising concerns about AI implementation, making complaints about AI monitoring, or requesting information about how AI tools are being used. If you are disciplined, demoted, or dismissed for raising legitimate concerns about AI at work, this may constitute unlawful adverse action.

Under the modern awards system, any dispute about the introduction of new technology, including AI, must be dealt with through the dispute resolution procedure in your award or enterprise agreement. This typically involves discussion at the workplace level, followed by mediation or conciliation at the Fair Work Commission if the dispute cannot be resolved.

For help, contact:

- the Fair Work Infoline on 13 13 94 for free advice about your rights;
- the Office of the Australian Information Commissioner (OAIC) for privacy concerns;
- the Australian Human Rights Commission for discrimination complaints;
- your union for representation and advocacy; and
- a workplace lawyer for complex matters.

The law in this area is evolving rapidly, and staying informed about your rights is essential as AI becomes more prevalent in Australian workplaces.

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.