FairWork Mate

Using ChatGPT at Work — Can You Be Fired? [Australian Law 2026]

7 min read

Can your employer fire you for using ChatGPT at work? Learn when AI use is grounds for dismissal, when it is not, the key confidentiality risks, and how to protect yourself.

Why employers are worried about ChatGPT at work

The rapid adoption of ChatGPT, Claude, Gemini, and other generative AI tools has created genuine anxiety among employers about data security, quality control, and legal liability. When employees input information into AI chatbots, that data may be stored, used for model training, or potentially exposed to third parties. For employers handling sensitive client data, trade secrets, medical records, financial information, or government-classified material, the risk is real and significant.

Several high-profile incidents have driven employer concern. In 2023, Samsung employees inadvertently leaked proprietary source code by pasting it into ChatGPT. Law firms worldwide have faced sanctions after lawyers submitted AI-generated court filings containing fabricated case citations. In Australia, employers in regulated industries (finance, healthcare, legal, government) face specific compliance obligations that may be breached if employees use AI tools without appropriate safeguards.

The result has been a wave of AI policies, outright bans in some workplaces, and in some cases, disciplinary action against employees who used AI tools without authorisation. Understanding where the legal lines are drawn is essential for protecting yourself.

When using AI at work IS grounds for dismissal

There are several scenarios where using ChatGPT or similar AI tools at work could legitimately result in termination.

1. Breaching a clear AI usage policy. If your employer has a written policy prohibiting or restricting AI use and you breach it, this is a policy violation that can support disciplinary action.
2. Leaking confidential information. If you input client data, trade secrets, patient records, or other confidential information into an AI tool, this may constitute a serious breach of your duty of confidentiality and could amount to serious misconduct justifying summary dismissal without notice.
3. Submitting AI work as your own in contexts where accuracy and authorship matter. For example, a lawyer submitting an AI-generated brief without reviewing the citations, or an auditor using AI-generated analysis without verification. This could constitute negligence or misrepresentation.
4. Breaching regulatory requirements. In industries with specific data handling regulations (such as the Privacy Act, APRA prudential standards, or health records legislation), using AI tools that process regulated data without authorisation may be a compliance breach.
5. Using AI to engage in other misconduct. For example, using AI to generate inappropriate content, bypass security controls, or facilitate fraud.

When using AI at work is NOT grounds for dismissal

In many situations, using AI at work should not result in termination, and dismissal in these circumstances could be found to be unfair by the Fair Work Commission. If your employer has no AI usage policy, you cannot be dismissed for breaching a policy that does not exist. Using publicly available AI tools for general productivity, such as drafting emails, brainstorming ideas, or formatting documents, without inputting confidential information is unlikely to constitute misconduct. If you used AI in good faith to do your job better and did not breach any specific policy or confidentiality obligation, dismissal would likely be considered disproportionate.

The Fair Work Commission applies a test of whether the dismissal was harsh, unjust, or unreasonable. Relevant factors include whether you were warned that AI use was prohibited, whether there was a valid policy in place, whether you were given training on the policy, and whether dismissal was proportionate to the conduct. A first offence of using ChatGPT for a routine task, without any confidentiality breach, is extremely unlikely to justify dismissal. A warning and direction to cease would be the appropriate and proportionate response in most cases.

Confidential information risks — the biggest danger

The single biggest risk of using AI tools at work is inadvertently disclosing confidential information. When you type or paste information into ChatGPT, Claude, or other AI platforms, you should assume that information is no longer confidential. While AI providers have varying data retention and training policies, and enterprise versions may offer stronger privacy protections, the general consumer versions of these tools may retain your inputs.

Information that should never be entered into AI tools includes:

- client names, personal details, or case specifics
- trade secrets, proprietary formulas, or source code
- financial data, pricing strategies, or unpublished business plans
- employee personal information or HR matters
- patient health records or medical information
- government-classified or sensitive material
- any information subject to contractual confidentiality obligations

In Australia, breaching confidentiality obligations can expose both you and your employer to legal liability under contract law, the Privacy Act, equitable obligations of confidence, and industry-specific regulations. Even if your employer does not terminate you, they may face claims from affected clients or third parties, which could then flow back to you. The safest approach is to treat all work information as confidential and never input it into external AI tools unless specifically authorised.

How to protect yourself — practical steps

To use AI tools safely at work, follow these practical steps.

1. Check for an existing AI policy. Review your employer's intranet, policy handbook, or IT acceptable use policy. If no AI-specific policy exists, check whether the general IT or confidentiality policy addresses use of external tools or cloud services.
2. Ask before you use. If there is no clear policy, send your manager an email asking whether AI tools can be used and for what purposes. Getting guidance in writing protects you.
3. Never input confidential data. This is the golden rule. If you want to use AI for a work task, strip out all identifying information, client details, and proprietary data first, and use generic examples or hypothetical scenarios instead (a rough scripted approach is sketched after this list).
4. Disclose AI assistance. If you use AI to help draft a document, report, or analysis, be transparent about it. This avoids any suggestion of misrepresentation.
5. Verify everything. AI tools produce confident-sounding but sometimes incorrect outputs. Always check facts, figures, citations, and legal references.
6. Keep records. If you are using AI with your employer's knowledge and consent, keep evidence of that consent.
7. Stay updated. AI policies are evolving rapidly, so check for policy updates regularly.
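If you sanitise text like this often and are comfortable with a little scripting, a rough automated first pass can help catch obvious identifiers before you paste anything into an AI tool. The Python sketch below is a minimal illustration only: the scrub function, the regex patterns, and the sample names are all hypothetical, and no script of this kind guarantees de-identification. Treat it as a helper, never a substitute for your employer's approved tools or your own careful read-through.

```python
import re

# Illustrative sketch only: a rough first-pass scrubber for text you
# intend to paste into an AI tool. All names and patterns here are
# hypothetical, and regex redaction is best-effort: it will miss
# context-dependent identifiers, so always re-read the output yourself.

# Patterns for common identifiers (not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[\d\s-]{8,12}\b"),  # rough AU formats
}

def scrub(text: str, known_names: list[str]) -> str:
    """Replace common identifiers, then known names, with placeholders."""
    # Redact emails and phone numbers first, so a client name embedded
    # in an email address does not break the email match.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Case-insensitive replacement of each known client/colleague name.
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    draft = "Jane Citizen (jane@acmeco.com.au, 0412 345 678) disputes invoice #1042."
    print(scrub(draft, known_names=["Jane Citizen", "AcmeCo"]))
    # -> [NAME] ([EMAIL], [PHONE]) disputes invoice #1042.
```

An automated pass like this still misses things regex cannot see, such as project codenames, job titles, or facts that identify a client by context, which is why the manual review in step 3 remains essential.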

Employer social media and technology policies extending to AI

Many employers are extending their existing social media and technology policies to cover AI tools, rather than creating standalone AI policies. This means your obligations regarding AI use may be buried within broader policy documents you signed when you started employment. Common policy provisions that may capture AI use include:

- acceptable use of technology policies that restrict use of external software or cloud services on work devices
- confidentiality clauses in your employment contract that prohibit disclosing company information to any third party (which includes AI platforms)
- social media policies that restrict public commentary or sharing of work information online (some AI interactions may be publicly accessible)
- intellectual property clauses that assign all work product to the employer and require disclosure of tools used
- data handling policies that specify how different categories of information must be stored and processed

Review your employment contract and any policies you signed. If your contract contains a broad confidentiality clause, inputting work information into AI tools could technically breach it even without a specific AI policy. The practical question is whether the breach is serious enough to warrant disciplinary action, which depends on the nature of the information, the risk of harm, and your employer's expectations.

What to do if you are facing disciplinary action for AI use

If your employer has raised concerns about your use of AI at work, or has commenced disciplinary proceedings, take these steps.

- Do not panic. Many employers are navigating AI issues for the first time and may overreact.
- Request the specific allegation in writing. Ask your employer to identify exactly what policy or obligation they say you breached and what evidence they have.
- Gather your own evidence. Save any communications showing you sought permission, acted in good faith, or did not breach confidentiality.
- Check whether proper process is being followed. Your employer must follow a fair process before terminating you, including giving you an opportunity to respond to allegations with a support person present.
- Assess the proportionality. Dismissal for a first offence of using AI without a clear policy in place would likely be found unfair by the Fair Work Commission. Even with a policy, dismissal may be disproportionate if no confidential data was breached and no harm resulted.
- Seek advice. Contact the Fair Work Infoline (13 13 94), your union, or a workplace lawyer.

If you are dismissed and believe it was unfair, you have 21 days to lodge an unfair dismissal application with the Fair Work Commission. The filing fee is modest and you do not need a lawyer to make the application.

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.