SIX RISKS EVERY CONSULTING FIRM FACES WHEN AI USE GOES UNDISCLOSED
In October 2025, Deloitte Australia refunded AU$97,000 of a AU$440,000 government contract after a researcher found its report full of AI-generated fabrications. It was the first major public repayment over undisclosed AI use. It will not be the last.
1. Financial Exposure: Refunds and Unrecoverable Costs
When a client discovers undisclosed AI use, they now have a documented precedent for demanding repayment. The Deloitte case proved it. The hours invested, the senior review time, the project margin: all of it is at risk the moment a disclosure failure is exposed. A refund does not restore the loss, and the internal investigation that follows costs money too.
2. Lost Contracts: Competitors with Clear Answers Are Winning the Work
AI disclosure clauses are already appearing in engagement letters and government tenders. Firms without a structured framework are negotiating from weakness, or losing the work entirely to a competitor who can answer the question cleanly. In a market where clients scrutinise every dollar, transparency has become a sales capability.
3. Reputational Damage: Trust Is the Only Product That Matters
The Deloitte episode did not damage AI. It damaged Deloitte's claim to professional judgment. Senator O'Neill's quote, that clients should be asking exactly who is doing the work they are paying for, ran globally. Reputational damage of that kind is not fixed by a corrected report. It is felt in every tender, renewal, and government relationship that follows.
4. Regulatory Jeopardy: The Courts Have Already Moved
On 16 April 2026, the Federal Court of Australia issued a formal Practice Note requiring disclosure of AI use in all proceedings, including what was used, how, and why. It is enforceable, not advisory. Equivalent frameworks are now in place across Australian state courts and tribunals. In the US, a February 2026 ruling found AI-processed documents may lose legal professional privilege entirely.
5. AI Hallucinations: Fluent, Confident, and Wrong
The Deloitte report failed not because AI was used, but because no one checked what it produced. Fabricated citations, invented experts, a fake judicial quote: all textbook hallucinations. As client awareness of this risk grows, they are demanding visible human supervision at every stage. Firms that cannot show where human judgment was applied are losing ground to those that can.
6. Data Sovereignty: Not Your Keys, Not Your Records
Who holds your AI audit trail? If it sits on a third-party platform outside Australian jurisdiction, or does not exist in a structured, retrievable form at all, your firm cannot produce the evidence needed to respond to a client demand, regulatory inquiry, or court order. Offshoring that record, or waiting for someone else to build the standard, is not a strategy.
The Solution: AIUC Global
AIUC Global is an Australian company, founded by practitioners with two decades of consulting experience. The AI Usage Classification standard gives firms a simple, non-judgmental framework for communicating exactly how AI was used in any deliverable, backed by a published Code of Practice, a third-party register any client can check, and an audit trail your firm owns outright. Not a vendor. Not a foreign standard. Yours.
"Clients are not afraid of AI. They are afraid of AI they cannot see. AIUC makes it visible, and that visibility is worth more in a competitive pitch than any capability claim."
- Dunja Lewis, Co-Founder and Chief Innovation Officer, AIUC Global.
About AIUC Global
AIUC Global Pty Ltd is the custodian of the AI Usage Classification™ (AIUC) standard, a cross-industry framework for communicating the level of human and artificial intelligence collaboration in any piece of content, analysis, or professional deliverable. Modelled on the principle of a nutrition label, the AIUC standard provides five clear classifications, from AI-Free to AI-Generated, enabling organisations to disclose AI use in a consistent, non-judgmental, and verifiable way.
AIUC Global developed the standard in response to the absence of a recognised, industry-wide language for AI disclosure. The framework is supported by a published Code of Practice, an organisational licensing programme, and the AIUC Navigator, a publicly accessible register of licensed adopters. AIUC Global is headquartered in Perth, Western Australia, and serves organisations across professional services, government, media, and technology sectors in Australia and internationally.
For more information, visit www.aiuc.global or email contact@AIUC.Global.
References
1. Deloitte refund confirmed by Australia's Department of Employment and Workplace Relations (DEWR). Contract originally valued at AU$440,000; AU$97,000 refunded. Reported by CFO Dive, 21 October 2025: cfodive.com
2. Federal Court of Australia, Use of Generative Artificial Intelligence Practice Note (GPN-AI), issued 16 April 2026 by Chief Justice Debra Mortimer: fedcourt.gov.au
3. Law Society of NSW, Court Protocols on AI across Australian jurisdictions (updated March 2026): lawsociety.com.au
4. United States v Heppner (February 2026): US federal court rejected privilege claims over AI-generated documents. Analysis: Hamilton Locke, April 2026: hamiltonlocke.com.au
5. AIUC Global AI Usage Classification Standard: www.aiuc.global
Edwina Hanneysee