Using ChatGPT at Work — Can You Be Fired? [Australian Law 2026]
Can your employer fire you for using ChatGPT at work? Learn when AI use is grounds for dismissal, when it's not, confidentiality risks, and how to protect yourself.
Megan Cole
Leave & Entitlements Specialist · JD, Monash University
Why employers are worried about ChatGPT at work
The rapid adoption of ChatGPT, Claude, Gemini, and other generative AI tools has created genuine anxiety among employers about data security, quality control, and legal liability. When employees input information into AI chatbots, that data may be stored, used for model training, or potentially exposed to third parties. For employers handling sensitive client data, trade secrets, medical records, financial information, or government-classified material, the risk is real and significant.
Several high-profile incidents have driven employer concern. In 2023, Samsung employees inadvertently leaked proprietary source code by pasting it into ChatGPT.
Law firms worldwide have faced sanctions after lawyers submitted AI-generated court filings containing fabricated case citations. In Australia, employers in regulated industries (finance, healthcare, legal, government) face specific compliance obligations that may be breached if employees use AI tools without appropriate safeguards. The result has been a wave of AI policies, outright bans in some workplaces, and in some cases, disciplinary action against employees who used AI tools without authorisation. Understanding where the legal lines are drawn is essential for protecting yourself.
When using AI at work IS grounds for dismissal
There are several scenarios where using ChatGPT or similar AI tools at work could legitimately result in termination.
- breaching a clear AI usage policy — if your employer has a written policy prohibiting or restricting AI use and you breach it, this is a policy violation that can support disciplinary action
- leaking confidential information — if you input client data, trade secrets, patient records, or other confidential information into an AI tool, this may constitute a serious breach of your duty of confidentiality and could amount to serious misconduct justifying summary dismissal without notice
- submitting AI work as your own in contexts where accuracy and authorship matter — for example, a lawyer submitting an AI-generated brief without reviewing the citations, or an auditor using AI-generated analysis without verification. This could constitute negligence or misrepresentation
- breaching regulatory requirements — in industries with specific data handling regulations (such as the Privacy Act, APRA prudential standards, or health records legislation), using AI tools that process regulated data without authorisation may be a compliance breach
- using AI to engage in other misconduct — such as using AI to generate inappropriate content, bypass security controls, or facilitate fraud
When using AI at work is NOT grounds for dismissal
In many situations, using AI at work shouldn't result in termination, and dismissal in these circumstances could be found to be unfair by the Fair Work Commission. If your employer has no AI usage policy, you cannot be dismissed for breaching a policy that doesn't exist. Using publicly available AI tools for general productivity — such as drafting emails, brainstorming ideas, or formatting documents — without inputting confidential information is unlikely to constitute misconduct.
If you used AI in good faith to do your job better and didn't breach any specific policy or confidentiality obligation, dismissal would likely be considered disproportionate. The Fair Work Commission applies a test of whether the dismissal was harsh, unjust, or unreasonable.
Relevant factors include whether you were warned that AI use was prohibited, whether there was a valid policy in place, whether you were given training on the policy, and whether dismissal was proportionate to the conduct. A first offence of using ChatGPT for a routine task — without any confidentiality breach — is extremely unlikely to justify dismissal. A warning and direction to cease would be the appropriate and proportionate response in most cases.
Confidential information risks — the biggest danger
The single biggest risk of using AI tools at work is inadvertently disclosing confidential information. When you type or paste information into ChatGPT, Claude, or other AI platforms, you should assume that information is no longer confidential. While AI providers have varying data retention and training policies, and enterprise versions may offer stronger privacy protections, the general consumer versions of these tools may retain your inputs.
Information that should never be entered into AI tools includes:
- client names, personal details, or case specifics
- trade secrets, proprietary formulas, or source code
- financial data, pricing strategies, or unpublished business plans
- employee personal information or HR matters
- patient health records or medical information
- government classified or sensitive material
- any information subject to contractual confidentiality obligations

In Australia, breaching confidentiality obligations can expose both you and your employer to legal liability under contract law, the Privacy Act, equitable obligations of confidence, and industry-specific regulations.
Even if your employer doesn't terminate you, they may face claims from affected clients or third parties, which could then flow back to you. The safest approach is to treat all work information as confidential and never input it into external AI tools unless specifically authorised.
How to protect yourself — practical steps
To use AI tools safely at work, follow these practical steps.
- check for an existing AI policy — review your employer's intranet, policy handbook, or IT acceptable use policy. If no AI-specific policy exists, check whether the general IT or confidentiality policy addresses use of external tools or cloud services
- ask before you use — if there's no clear policy, send your manager an email asking whether AI tools can be used and for what purposes. Getting guidance in writing protects you
- never input confidential data — this is the golden rule. If you want to use AI for a work task, strip out all identifying information, client details, and proprietary data first. Use generic examples or hypothetical scenarios instead
- disclose AI assistance — if you use AI to help draft a document, report, or analysis, be transparent about it. This avoids any suggestion of misrepresentation
- verify everything — AI tools produce confident-sounding but sometimes incorrect outputs. Always check facts, figures, citations, and legal references
- keep records — if you're using AI with your employer's knowledge and consent, keep evidence of that consent
- stay updated — AI policies are evolving rapidly, so check for policy updates regularly
Employer social media and technology policies extending to AI
Many employers are extending their existing social media and technology policies to cover AI tools, rather than creating standalone AI policies. This means your obligations regarding AI use may be buried within broader policy documents you signed when you started employment. Common policy provisions that may capture AI use include:
- acceptable use of technology policies that restrict use of external software or cloud services on work devices
- confidentiality clauses in your employment contract that prohibit disclosing company information to any third party (which includes AI platforms)
- social media policies that restrict public commentary or sharing of work information online (some AI interactions may be publicly accessible)
- intellectual property clauses that assign all work product to the employer and require disclosure of tools used
- data handling policies that specify how different categories of information must be stored and processed
Review your employment contract and any policies you signed. If your contract contains a broad confidentiality clause, inputting work information into AI tools could technically breach it even without a specific AI policy.
The practical question is whether the breach is serious enough to warrant disciplinary action, which depends on the nature of the information, the risk of harm, and your employer's expectations.
What to do if you're facing disciplinary action for AI use
If your employer has raised concerns about your use of AI at work, or has commenced disciplinary proceedings, take these steps.
- don't panic — many employers are navigating AI issues for the first time and may overreact
- request the specific allegation in writing — ask your employer to identify exactly what policy or obligation they say you breached and what evidence they have
- gather your own evidence — save any communications showing you sought permission, acted in good faith, or did not breach confidentiality
- check whether proper process is being followed — your employer must follow a fair process before terminating you, including giving you an opportunity to respond to allegations with a support person present
- assess the proportionality — dismissal for a first offence of using AI without a clear policy in place would likely be found unfair by the Fair Work Commission. Even with a policy, dismissal may be disproportionate if no confidential data was breached and no harm resulted
- seek advice — contact the Fair Work Infoline (13 13 94), your union, or a workplace lawyer

If you're dismissed and believe it was unfair, you have 21 days to lodge an unfair dismissal application with the Fair Work Commission. The filing fee is modest and you don't need a lawyer to make the application.
General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.
About Megan Cole
Megan is a former Fair Work Commission associate who spent four years supporting conciliation conferences and unfair dismissal hearings. She now writes about leave entitlements, termination, and employee rights. She completed her Juris Doctor at Monash University.