FairWorkMate

Why we built a case-law moat: 320+ live Australian workplace decisions in your AI advisor

4 min read

Frontier AI models have generic legal knowledge and yearly training cutoffs. We built FairWork Mate around a different bet: a live, jurisdiction-specific corpus of Australian Fair Work, Federal Court and FWO decisions, retrieval-attached to the AI. Here's the strategic argument and the technical reality of that moat.


Senior Workplace Relations Writer · GradDip Employment Relations, Griffith University

The bet: frontier LLMs are necessary but not sufficient for AU workplace law

When ChatGPT launched in late 2022 and made generative AI a mainstream product, every Australian HR consultant, employment lawyer and workplace blogger had the same private thought: this is going to eat our research time. Three years on, that thought is half-right. The frontier models (GPT-5, Claude 4, Gemini Ultra) are extraordinary at general legal reasoning. They're impressive even on Australian-specific questions if you prompt them well.

What they're not good at is the specific thing Australian HR needs: knowing what the Fair Work Commission decided last Tuesday. Or what the Federal Court awarded in a general-protections case last month. Or what the Fair Work Ombudsman just signed an enforceable undertaking for. That information sits behind:

  • A training cutoff — every frontier model is sealed at a date months or a year in the past;
  • A jurisdiction blindspot — AU workplace law is a tiny fraction of any model's training corpus, swamped by US employment, UK employment, contract law, criminal law, tax;
  • A surface mismatch — even when the model has the information, it doesn't have plain-English summaries, citation discipline, or jurisdiction-specific framing.

You can fix some of those with prompt engineering. You can't fix the training cutoff. The only way to get an AI that knows what Australia's workplace tribunals decided yesterday is to feed it that information at query time. Retrieval-augmented generation. RAG. And the corpus you retrieve from is the moat.

What we built — the live case-law corpus

FairWork Mate's corpus, as of May 2026, is 320+ published Australian workplace decisions:

  • Fair Work Commission decisions, ingested via the weekly FWC bulletin, summarised into facts / outcome / employer-implication / employee-implication / tags
  • Federal Court of Australia judgments, ingested via the FCA RSS feed, filtered to Fair Work-relevant matters by a 36-term keyword filter (Fair Work Act references, section numbers, FWO prosecution patterns, casual misclassification cues)
  • Fair Work Ombudsman enforceable undertakings and media releases, the largest single source: 212 entries, a back catalogue that includes 30+ million-dollar penalties against universities, aged-care providers, hospitality franchises and finance institutions
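The FCA keyword filter described above can be sketched as a simple regex gate over RSS item text. The terms below are illustrative examples only, not FairWork Mate's actual 36-term list:

```python
import re

# Hypothetical subset of the 36-term filter -- section numbers and phrases
# here are examples, not the production list.
FAIR_WORK_TERMS = [
    r"fair work act",
    r"\bs\s?(?:340|343|357|536)\b",   # example FW Act section references
    r"fair work ombudsman",
    r"casual (?:misclassification|conversion)",
    r"general protections",
    r"enforceable undertaking",
]
PATTERN = re.compile("|".join(FAIR_WORK_TERMS), re.IGNORECASE)

def is_fair_work_relevant(title: str, summary: str) -> bool:
    """Return True if an RSS item looks like a Fair Work-relevant matter."""
    return bool(PATTERN.search(f"{title} {summary}"))

items = [
    {"title": "Fair Work Ombudsman v Example Pty Ltd", "summary": "Penalty for underpayment"},
    {"title": "Smith v Commissioner of Taxation", "summary": "Income tax assessment appeal"},
]
# Only the first (FWO prosecution) item passes the filter.
relevant = [i for i in items if is_fair_work_relevant(i["title"], i["summary"])]
```

A filter like this errs toward recall; the editorial gate downstream catches false positives.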

Every case is:

  • Plain-English summarised (not just metadata — we draft the actual readable summary)
  • Tagged for retrieval (industry, legal-issue type, penalty amount, NES domain)
  • Embedded as a 1,536-dimension vector using OpenAI's text-embedding-3-small for retrieval similarity search
  • Published at a permanent /cases/[slug] URL — both for search engines to index and as a citation anchor

When an HR manager asks the AI advisor "what's the recent precedent on supervising a recently-pregnant employee?", the system embeds that query into the same vector space, finds the two or three most semantically similar cases in the corpus, and injects them into the AI's context window with a citation instruction. The AI then writes an answer grounded in those specific cases. The citations are real. The reasoning is current. The plain-English level is set by the summary, not the judgment.

Why this is a moat — and not just an integration

Three things make this a moat rather than an integration any competitor can replicate cheaply.

1. The data ingestion is non-trivial. The FWC publishes via weekly HTML bulletins that link to PDF judgments. The FCA publishes via RSS with metadata and links to PDFs. The FWO publishes via media releases with no structured data. AustLII is explicitly off-limits: we don't scrape it, because its terms of use prohibit it. Building reliable, ethical, daily ingestion across three quite different upstream sources, with the PDF-extraction pipeline and the human-or-LLM editorial gate, is a few months of focused work. We've done it.

2. Editorial throughput compounds. Each new case takes ~3 minutes of compute to enrich (a local Ollama instance running gemma3:12b, so there are no API costs) plus a human review gate. A team of one publishing 50-100 cases a week beats a team of ten publishing once a quarter. The corpus is currently growing at ~30 cases/week and accelerating.
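The local enrichment step can be sketched against Ollama's HTTP API (gemma3:12b is an Ollama model tag). The prompt wording and JSON keys below are illustrative, not the production prompt:

```python
import json
import urllib.request

# Hypothetical enrichment prompt -- the real editorial prompt is not published.
ENRICH_PROMPT = """Summarise this Australian workplace decision.
Return JSON with keys: facts, outcome, employer_implication, employee_implication, tags.

Decision text:
{text}"""

def build_enrich_request(decision_text: str, model: str = "gemma3:12b") -> dict:
    # Payload shape for Ollama's /api/generate endpoint; format="json"
    # asks the model to emit valid JSON.
    return {
        "model": model,
        "prompt": ENRICH_PROMPT.format(text=decision_text),
        "format": "json",
        "stream": False,
    }

def enrich(decision_text: str, host: str = "http://localhost:11434") -> dict:
    """POST to a local Ollama server and parse the model's JSON response."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_enrich_request(decision_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.loads(resp.read())["response"])
```

Running inference locally is what keeps per-case enrichment cost at compute time only, which is why throughput scales with the cron schedule rather than an API budget.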

3. The integration layer matters as much as the data. Having case decisions on a website doesn't help an HR manager researching a Tuesday-afternoon question. Having case decisions retrieved into the AI's context, automatically, cited specifically, against the exact question asked — that's the integration. The AI advisor isn't a wrapper over the case database. It's the case database operating as advice.

A competitor could match any one of those. Matching all three takes time. The moat compounds because the corpus grows weekly and the AI grounding improves with corpus density.

The business model — and why this is enterprise-relevant

FairWork Mate started as a free tools site for Australian workers. 131+ free calculators, no signup, no email walls. That stays free forever.

The AI advisor at /advisor is free for 2 questions a day — enough for a solo HR manager or small-business owner running the occasional case-law question.

The paid tiers are for businesses that run 5+ case-law research questions a week:

  • Plus / Pro — individual paid tiers from $9.99/day to $49.99/mo, for solo HR consultants and contractors
  • For Business — from $499/mo — small business / mid-market HR teams, unlimited questions, longer context windows, no per-question caps
  • Enterprise / API — from $2,499/mo — large HR teams, the AI advisor embedded into their existing systems via API, custom retrieval scoping (their own award subset, their own internal policies), white-label options

All tiers run on the same corpus. The differentiator is volume and integration depth. The corpus is the moat. The pricing tiers monetise the integration value at different scales.

What's next

The 2026-27 roadmap is corpus density + retrieval quality. Specifically:

  • FWC corpus to 500+ published decisions through summer 2026 (we just shipped 87 in one push)
  • FCA corpus to 100+ via the expanded keyword filter compounding daily through the RSS feed
  • FCFCOA (Federal Circuit and Family Court — small-claims employment matters) added as a fourth jurisdiction
  • Retrieval re-ranking to weight court judgments (FCA / FWC) higher than regulator outcomes (FWO) for citation purposes — court decisions carry more precedent value
  • Industry-specific retrieval scopes for the Enterprise tier — "only retrieve hospitality cases" / "only retrieve healthcare cases" — so larger HR teams get sharper answers in their domain
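The re-ranking item in the roadmap above could be as simple as blending semantic similarity with a per-source precedent weight. The weights here are illustrative assumptions; the article only states that court judgments should outrank regulator outcomes:

```python
# Hypothetical precedent-value weights -- numbers are illustrative only.
SOURCE_WEIGHT = {"FCA": 1.0, "FWC": 0.9, "FWO": 0.7}

def rerank(hits):
    """hits: (slug, source, similarity) tuples; order by weighted similarity
    so court judgments outrank regulator outcomes at equal relevance."""
    return sorted(hits, key=lambda h: h[2] * SOURCE_WEIGHT.get(h[1], 0.5),
                  reverse=True)

hits = [("fwo-penalty-2025", "FWO", 0.91),
        ("fca-misclass-2026", "FCA", 0.85)]
# 0.91 * 0.7 = 0.637 vs 0.85 * 1.0 = 0.85: the court judgment ranks first.
ranked = rerank(hits)
```

An industry scope for the Enterprise tier is the same idea one step earlier: filter the candidate set by tag before ranking, rather than weighting afterwards.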

The work is unglamorous: PDF parsing, gemma3 prompting, editorial review, schema design, embedding cron, ranking heuristics. That's exactly why the moat compounds. The HR research workflow is generic; the data and the retrieval are not.

For Australian HR teams reading this: the corpus is at /cases. The free AI advisor is at /advisor. The paid tiers are at /for-business. Try it on the question you would normally have asked your lawyer this week.


Got a question this article didn't answer? Ask FairWork Mate AI →

Free: 2 questions/day, grounded on 320+ live FWC, FCA & FWO decisions, with the case law cited in every answer.

Have a workplace question?

Got a specific situation this article didn't cover? Email us.

hello@fairworkmate.com.au

General information and estimates only — not legal, financial, or tax advice. Always verify with the Fair Work Ombudsman (13 13 94) or a qualified professional.

About Rachel Morrison

Nine years in Australian workplace relations — Queensland hospitality HR, then retail ER in Brisbane and Northern NSW. Graduate Diploma in Employment Relations (Griffith University, 2018). Writes about award interpretation, underpayment recovery, and casual conversion. Member of the AHRI since 2019. Based in Paddington, Brisbane.
