KPMG has made headlines by demanding that companies receiving its audit services pay lower fees — on the basis that AI now performs a substantial portion of the analytical work that human auditors previously did. The logic is straightforward: if AI reduces the cost of delivering the audit, the client should benefit from the savings. It is a commercial argument, and at face value, it seems reasonable.
But beneath this commercial argument lies a governance question that nobody in the conversation is adequately addressing: who ensures that the AI doing the work meets the professional standards that the audit is supposed to uphold?
When a human auditor reviews financial statements, that auditor is bound by professional standards, subject to regulatory oversight, personally liable for negligence, and answerable to the Independent Regulatory Board for Auditors (IRBA). When an AI system performs the same analysis, none of those accountability mechanisms apply directly. The AI is not a registered auditor. It has no professional obligations. It cannot be disciplined, suspended, or struck from a register. And yet, the audit opinion that relies on its analysis carries the same legal and market significance as one produced entirely by human professionals.
This is the AI governance gap in professional services — and it extends far beyond audit.
The Professional Services Disruption
Professional services — audit, legal, consulting, tax advisory, actuarial — are built on a specific value proposition: clients pay for the expert judgment of qualified professionals who are trained, examined, registered, and regulated. The fee structure reflects not just the time spent, but the assurance that the work has been performed to a defined professional standard by a person who can be held accountable if it falls short.
AI disrupts this model in a way that is fundamentally different from previous technology waves. Earlier technologies — spreadsheets, data analytics platforms, document management systems — augmented professional work without replacing the professional's judgment. The accountant used Excel; the accountant still made the assessment. AI, by contrast, is increasingly capable of making the assessment itself. It can analyse financial statements, identify anomalies, flag risk indicators, and generate conclusions that previously required years of professional training to produce.
When KPMG says that AI reduces the cost of an audit, what it is really saying is that AI is replacing human professional judgment in portions of the engagement. The junior auditor who previously reviewed 10,000 transactions is now redundant — the AI reviews them faster, more consistently, and at a fraction of the cost. But the junior auditor was not just reviewing transactions. They were also developing professional judgment, gaining experience, and operating within a framework of professional accountability that the AI does not share.
The efficiency gain is real. The accountability gap is also real. And the second problem does not disappear because the first problem is solved.
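To make the substitution concrete: the transaction review described above amounts to automated anomaly flagging at scale. The sketch below is purely illustrative — a robust median/MAD outlier rule, which is one simple technique such a system might use; it is not KPMG's method, and the names and thresholds are invented for this example.

```python
from statistics import median

def flag_anomalies(amounts, cutoff=3.5):
    """Return indices of amounts whose modified z-score
    (median/MAD based, robust to outliers) exceeds cutoff."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread: nothing can be called anomalous
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > cutoff]

# Illustrative ledger: one transaction is two orders of magnitude larger.
ledger = [120.0, 98.5, 103.2, 110.0, 95.7, 104.1, 9800.0, 101.3]
print(flag_anomalies(ledger))  # → [6]
```

The point of the sketch is what it omits: the rule flags the outlier instantly and consistently, but it exercises no judgment about *why* the transaction is unusual — which is exactly the part of the junior auditor's role that carried professional accountability.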
The Accountability Gap: Who Signs Off on AI's Work?
In a traditional audit, the engagement partner signs the audit opinion. That signature carries personal liability. Under the Auditing Profession Act 26 of 2005 and IRBA's Code of Professional Conduct, the engagement partner must be satisfied that sufficient appropriate audit evidence has been obtained, that the work has been performed in accordance with International Standards on Auditing (ISAs), and that the audit opinion is justified by the evidence.
When AI performs the substantive analysis, the engagement partner faces a new challenge: how do you verify the quality of work produced by a system whose reasoning process you cannot fully inspect?
This is not the same as reviewing a junior auditor's work papers. A junior's reasoning is documented, explainable, and subject to direct interrogation. You can ask the junior why they reached a particular conclusion and evaluate whether their reasoning is sound. With an AI system, the "reasoning" is a function of model architecture, training data, and statistical inference — none of which are transparent in the way that human reasoning is.
The engagement partner is effectively being asked to sign an opinion based on analysis they cannot independently verify through traditional professional review methods. They can review the AI's outputs, but they cannot review the AI's reasoning in the way they would review a human's. This creates a structural weakness in the assurance chain that current professional standards have not resolved.
"If you cannot explain how a conclusion was reached, you cannot professionally attest to its reliability. This is not a technology problem — it is a professional standards problem that technology has created."
IRBA and the Regulatory Question
The Independent Regulatory Board for Auditors is responsible for regulating the auditing profession in South Africa. Its mandate includes setting standards, conducting inspections, and taking disciplinary action against auditors who fail to meet professional requirements. But IRBA's regulatory framework was designed for a profession conducted by humans.
Several questions arise that IRBA has not yet publicly addressed:
- Do ISAs permit AI to perform substantive audit procedures? International Standards on Auditing refer to "the auditor" performing procedures, exercising judgment, and evaluating evidence. They do not explicitly contemplate AI performing these functions. Is AI use compliant by default, or does it require specific standards?
- What constitutes sufficient review of AI-generated audit evidence? If an AI flags (or fails to flag) a material misstatement, what standard of review must the human auditor apply? Is reviewing the AI's output sufficient, or must the auditor independently verify the AI's methodology?
- How should IRBA inspect AI-assisted audits? IRBA conducts practice reviews to assess audit quality. When AI performs a significant portion of the work, how should inspectors evaluate whether the engagement met professional standards? Traditional work paper review may be insufficient.
- Should audit firms disclose the extent of AI involvement? Clients and stakeholders who rely on audit opinions may have a legitimate interest in knowing what proportion of the audit was performed by AI versus human professionals. Should this disclosure be mandatory?
These are not academic questions. They have immediate practical implications for audit quality, investor confidence, and the credibility of financial reporting in South Africa. If AI is performing substantive audit work and the regulatory framework has not adapted to govern that work, there is a gap in the assurance infrastructure that the capital markets depend on.
Beyond Audit: The Broader Professional Services Question
The governance gap is not unique to audit. It extends to every profession where AI is replacing or augmenting human judgment:
Legal services: AI tools are drafting contracts, reviewing documents for litigation, and providing preliminary legal analysis. But the Legal Practice Act requires that legal advice be given by admitted attorneys or advocates. If an AI drafts a contract that contains a material error, the attorney who relied on the AI's output is liable — but the extent to which the attorney is expected to independently verify every AI-generated clause is undefined.
Tax advisory: AI systems are calculating tax positions, identifying optimisation opportunities, and preparing returns. SARS holds the taxpayer and their tax practitioner responsible for the accuracy of submissions. If an AI system produces an incorrect calculation that results in a penalty, the governance question is the same: who verified the AI's work, and to what standard?
Consulting and strategy: Management consultants are using AI to analyse market data, model scenarios, and generate recommendations. When a board makes a strategic decision based on AI-assisted consulting advice that proves wrong, the chain of accountability becomes murky. Did the consultant verify the AI's analysis? Did the board understand that AI was involved? Was the AI's output presented as the consultant's professional opinion?
In each case, the pattern is the same: AI does the analytical work, but the professional accountability framework was designed for humans doing that work. The framework has not evolved to address the AI's role, creating an accountability gap that grows wider as AI's involvement deepens.
Governance gap: Professional regulatory frameworks in South Africa — IRBA, the Legal Practice Council, SARS practitioner registration — were designed for human professionals. As AI replaces professional judgment in substantive work, the accountability mechanisms that protect clients, investors, and the public are not keeping pace.
What Boards Should Ask Their Service Providers
The AI governance gap in professional services is not just a problem for the professions themselves. It is a problem for every organisation that relies on professional services — which is every organisation. Boards and audit committees have a direct interest in understanding how AI is being used by their auditors, lawyers, tax advisors, and consultants, and what governance is in place to ensure quality.
Questions Boards Should Ask Their Professional Service Providers
- "What proportion of our engagement is performed by AI versus human professionals?" This is the baseline question. Boards cannot assess AI-related risks if they do not know the extent of AI involvement in the services they are paying for.
- "What quality assurance processes govern AI-generated work product?" How does the firm verify that AI outputs meet professional standards? Is there human review of every AI output, or only sample-based review? What is the escalation process when AI produces anomalous results?
- "How does the firm's professional indemnity insurance cover AI-related errors?" If an AI system produces work that leads to a loss, is the firm's PI coverage adequate? Has the insurer been informed about the extent of AI use?
- "What data does the AI system have access to, and how is our information protected?" If the firm's AI tool processes your financial data, contractual information, or personal information, what data protection measures are in place? Is your data used to train the AI? Is it stored outside South Africa?
- "How does the firm ensure compliance with professional standards when AI performs substantive work?" Specifically: which ISAs, legal practice standards, or regulatory requirements apply to AI-generated work, and how does the firm demonstrate compliance?
- "If the fee is reduced because AI does the work, is the assurance also reduced?" This is the critical commercial and governance question. If you are paying less because AI has replaced human professionals, are you receiving the same level of assurance? Or are you paying for a cheaper, faster, but potentially less reliable service?
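One way to give the quality-assurance question above operational teeth is a documented, reproducible sampling policy: every AI-flagged item goes to a human, plus a seeded random sample of the rest, so the selection itself is auditable. This is a hypothetical sketch under assumed names and rates, not any firm's actual policy.

```python
import random

def select_for_human_review(items, sample_rate=0.05, seed=2024):
    """items: list of (item_id, ai_flagged_anomalous) tuples.
    Returns ids requiring human review: every AI-flagged item,
    plus a seeded random sample of the unflagged remainder."""
    flagged = [i for i, f in items if f]
    unflagged = [i for i, f in items if not f]
    rng = random.Random(seed)  # fixed seed: the sample can be re-derived on inspection
    k = max(1, round(len(unflagged) * sample_rate))
    sampled = rng.sample(unflagged, k)
    return sorted(set(flagged + sampled))
```

The seed is the governance-relevant design choice: a regulator or audit committee reviewing the engagement can regenerate the exact sample and confirm the policy was followed, rather than taking the firm's word for it.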
The Path Forward: Governance Must Catch Up
The AI governance gap in professional services will not close on its own. It requires deliberate action from multiple stakeholders:
Professional regulators (IRBA, the Legal Practice Council, SAICA) must develop standards and guidance for AI use within their professions. This includes defining what constitutes adequate human review of AI-generated work, establishing disclosure requirements for AI involvement in professional engagements, and updating inspection methodologies to assess AI-assisted work quality.
Professional services firms must build internal governance frameworks for AI use that go beyond efficiency measurement. This includes model validation, output quality monitoring, bias testing, and clear documentation of where AI contributes to client deliverables.
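The "clear documentation" requirement can be made tangible as a provenance record attached to each deliverable, recording where AI contributed, which model produced it, and which registered professional reviewed it and how. The record structure and field names below are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIContributionRecord:
    deliverable: str      # client deliverable the AI contributed to
    section: str          # portion of the work performed with AI assistance
    model: str            # internal model/version identifier (illustrative)
    reviewed_by: str      # accountable registered professional
    review_method: str    # "full re-performance" | "sample" | "output-only"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

records = [
    AIContributionRecord("FY2025 audit file", "journal-entry testing",
                         "anomaly-model-v3", "J. Dlamini (RA)", "sample"),
]
# A firm (or inspector) can then query review depth across the engagement:
full = sum(r.review_method == "full re-performance" for r in records)
print(f"{full}/{len(records)} AI-assisted sections fully re-performed")
```

A register like this is also what would make the disclosure questions in the previous section answerable: without it, a firm cannot tell its client — or IRBA — what proportion of the work the AI actually performed.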
Boards and audit committees must treat AI in professional services as a governance matter, not a procurement matter. The questions above should be standing agenda items in audit committee and board meetings where professional service appointments and renewals are discussed.
Insurers must adapt professional indemnity products to explicitly address AI-related risks, including coverage for losses arising from AI errors, omissions, or failures in professional work product.
The efficiency gains from AI in professional services are real and substantial. But efficiency without governance is a false economy. The value of professional services lies not just in the analysis, but in the assurance that the analysis has been performed to a defined standard by accountable professionals. If AI undermines that assurance without a corresponding evolution in governance, the entire professional services model — and the market confidence it supports — is weakened.
Key Takeaways for Governance Professionals
- KPMG's demand for AI-driven fee reductions reveals a deeper question: if AI does the professional work, who ensures it meets professional standards? The accountability gap is real and growing.
- Professional regulatory frameworks (IRBA, Legal Practice Council, SAICA) were designed for human practitioners and have not yet adapted to govern AI-assisted professional work.
- Engagement partners signing audit opinions based on AI analysis face a structural challenge: they cannot inspect the AI's reasoning the way they would review a human auditor's work.
- The governance gap extends beyond audit to legal services, tax advisory, consulting, and every profession where AI replaces human analytical judgment.
- Boards and audit committees should be asking their service providers specific questions about AI involvement, quality assurance, insurance coverage, and data protection.
- The critical commercial question: if the fee is reduced because AI does the work, is the assurance also reduced? A cheaper service is not necessarily an equivalent service.
- Professional regulators must develop AI-specific standards, disclosure requirements, and inspection methodologies before the governance gap becomes an assurance crisis.
- Efficiency without governance is a false economy — the value of professional services includes accountability, and AI currently operates outside the accountability framework.
Close the AI Governance Gap in Your Organisation
Priviso helps South African organisations assess AI governance risks across their operations and supply chain, including professional service providers. Build accountability into your AI strategy.