FairWorkMate

AI Performance Reviews: When Your KPIs Are Set by a Machine in Australia

6 min read

Algorithmic performance management is growing in Australia. Know your rights when AI sets targets, scores your work, or triggers disciplinary action based on machine metrics.

What is algorithmic performance management?

Algorithmic performance management refers to the use of AI and automated systems to set performance targets, monitor worker output, score employee performance, and in some cases trigger disciplinary processes or termination. This is not a future concept — it is already widespread in Australia. Gig economy platforms like Uber, DoorDash, and Deliveroo use algorithms to rate drivers and riders, allocate work, and deactivate accounts. Warehouse operations for companies like Amazon use AI to track pick rates and flag workers who fall below targets. Call centres use AI to monitor call handling times, sentiment scores, and script compliance. Even white-collar employers are adopting tools like Microsoft Viva Insights and Workday to generate AI-powered productivity metrics. The common thread is that machines are increasingly making judgments about human performance.

Are AI-set KPIs legally enforceable in Australia?

Performance expectations — whether set by a human or an algorithm — must be lawful, reasonable, and clearly communicated to be enforceable. Under the Fair Work Act, an employer can set reasonable performance standards and take action if employees do not meet them. However, the standards must be genuinely related to the role, achievable, and communicated in advance. An AI system that ratchets targets progressively higher by anchoring to peak performance periods, or that applies a one-size-fits-all metric to diverse roles, may not produce reasonable KPIs. If you face disciplinary action or dismissal based on AI-generated KPIs, the Fair Work Commission will assess whether the performance expectations were reasonable in the circumstances; targets it finds unreasonable are unlikely to support a valid reason for dismissal.
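To see why peak-anchored targets tend to be unreasonable, here is a minimal, purely hypothetical sketch — not any vendor's actual code — of a "ratcheting" target rule that sets each period's KPI from the best recent output. The function name, window size, and numbers are all illustrative assumptions:

```python
# Hypothetical ratcheting-target rule, for illustration only.
# Anchoring each new target to peak past output means targets can
# only rise, never fall -- even when the peak was a one-off.

def next_target(history: list[float], current_target: float) -> float:
    """Set the next period's target from the best of recent output."""
    peak = max(history[-4:])          # assumed 4-period lookback window
    return max(current_target, peak)  # target never decreases

# A worker whose output fluctuates around a sustainable ~103 units/week:
output = [100, 112, 98, 104, 95, 110, 101]
target = 100.0
for week, done in enumerate(output, start=1):
    target = next_target(output[:week], target)
    print(f"week {week}: produced {done}, next target {target:.0f}")
```

After one unusually good week (112 units), every later target is locked to that peak, so the worker is permanently "underperforming" against a number their typical output never reaches — exactly the kind of target the Commission may find unreasonable in the circumstances.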

The Uber and DoorDash model — lessons for all workers

The gig economy provides a preview of fully algorithmic management. Uber drivers are rated by passengers after every trip, and their algorithmic rating determines their access to work. DoorDash delivery riders receive AI-generated performance scores that affect order allocation. Deactivation — the platform equivalent of dismissal — can occur automatically when ratings drop below a threshold, often without human review. Australian courts and the Fair Work Commission have increasingly scrutinised these arrangements. In landmark cases, the Commission has found that some gig workers are employees, not independent contractors, and are therefore entitled to unfair dismissal protections. The Transport Workers Union has successfully campaigned for minimum standards in the gig economy. These cases demonstrate that algorithmic management does not exempt employers from their obligations under Australian law.

Your right to challenge AI performance assessments

If your employer uses AI-generated performance metrics as the basis for disciplinary action, performance improvement plans, or termination, you have the right to challenge both the accuracy of the metrics and the fairness of the process. Under the Fair Work Act, before taking disciplinary action, an employer must clearly communicate the performance concern, provide the employee with an opportunity to respond, genuinely consider the response, and allow the employee to have a support person present. If the performance concern is based on AI-generated data, you are entitled to understand the data, the methodology, and how the AI system reached its conclusion. Opaque algorithms that produce unexplainable results are unlikely to satisfy the procedural fairness requirements of the Fair Work Act. Request the raw data behind any AI performance assessment.

When AI performance scoring becomes workplace bullying

The Fair Work Act defines workplace bullying as repeated unreasonable behaviour directed towards a worker that creates a risk to health and safety. AI systems that set impossible targets, constantly surveil workers, send automated warnings for trivial deviations, or create a culture of fear through algorithmic scoring can potentially constitute bullying — even though the behaviour is automated. The Fair Work Commission has jurisdiction to make orders to stop bullying. If AI-driven performance management is causing you stress, anxiety, or other health impacts, consider whether the behaviour meets the test for workplace bullying. Document every automated warning, unreasonable target, and health impact. You can apply to the Commission for a stop-bullying order under Part 6-4B of the Fair Work Act.

How to protect yourself from unfair algorithmic management

Start by understanding exactly how AI is being used to assess your performance. Ask your employer for documentation about what metrics are tracked, how they are weighted, and what thresholds trigger action. Review your employment contract, position description, and any performance management policy. If KPIs are being changed or escalated by an algorithm without your knowledge or input, raise this formally in writing. Keep records of your actual performance, including any factors the AI may not capture — such as quality of work, client feedback, or extenuating circumstances. If you are a union member, raise algorithmic management as a collective issue at your workplace. Consider whether your enterprise agreement or modern award contains consultation clauses that should have been triggered before AI performance tools were introduced.
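If it helps to keep those records in a structured, dated form rather than scattered emails, here is a minimal illustrative sketch of a personal evidence log. The file name, column names, and example entry are all assumptions, not a prescribed format:

```python
# Minimal sketch of a personal, dated evidence log (illustrative only).
# Each automated warning or unreasonable target gets one row, with the
# context the AI metric may not capture.
import csv
import os
from datetime import date

FIELDS = ["date", "event", "ai_metric", "context"]  # assumed columns

def log_event(path: str, event: str, ai_metric: str, context: str) -> None:
    """Append one dated record to a CSV evidence log, creating it if needed."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "event": event,
            "ai_metric": ai_metric,
            "context": context,
        })

# Hypothetical example entry:
log_event(
    "performance_log.csv",
    "automated warning",
    "pick rate 94% vs 97% threshold",
    "scanner outage 10:00-10:40; raised with supervisor same day",
)
```

A contemporaneous log like this is far more persuasive — to your employer, your union, or the Commission — than reconstructing events from memory months later.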

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.