A company has launched claiming to operate with zero human employees. Every function — from customer interaction and sales to financial management and strategic planning — is handled entirely by AI agents. No staff. No managers. No human oversight of day-to-day operations. The founders set the system in motion, and the AI runs the business.

This is not a thought experiment or a Silicon Valley pitch deck. It is a registered entity that is transacting with real customers, handling real money, and making real decisions that affect real people. And it exposes a gap in corporate governance, liability law, and data protection regulation that no jurisdiction — including South Africa — has adequately addressed.

The zero-human company forces a set of questions that corporate law was never designed to answer: Who is liable when an AI agent makes a decision that harms someone? Who exercises fiduciary duty when there are no officers? Who ensures compliance with data protection law when no human oversees data processing?

These are not abstract questions for the distant future. They are immediate governance challenges that regulators, boards, and risk professionals must confront now.

Hear this discussed on Priviso Live

This article is based on the discussion from Episode 74, where we explore the legal and governance implications of fully AI-operated companies.

The Companies Act Was Written for Humans

South Africa's Companies Act 71 of 2008 is built on a foundational assumption: companies are governed by natural persons. Section 66 vests the authority to manage the business and affairs of a company in its board of directors. Section 69 sets out who may not serve as a director: juristic persons are expressly ineligible, unemancipated minors may not serve, and persons disqualified by court order or criminal conviction are barred. The effect is that only a natural person with full legal capacity can hold office as a director.

An AI agent satisfies none of these requirements. It is not a natural person. It cannot be held personally liable. It cannot be imprisoned for breaching a fiduciary duty. It has no legal standing to enter into contracts, make representations, or bear consequences for its actions. Under South African law, an AI cannot be a director.

This creates an immediate problem for the zero-human company model. If the AI is making all business decisions but cannot legally serve as a director, then the actual directors — the human founders who registered the company — remain fully accountable for every decision the AI makes, whether they know about those decisions or not. They have delegated operational authority to a system that operates autonomously, but they have not — and cannot — delegate their fiduciary duties.

Section 76 of the Companies Act requires directors to exercise their powers in good faith, for a proper purpose, and in the best interests of the company. They must exercise the degree of care, skill, and diligence that would reasonably be expected of a person carrying out the same functions. When an AI agent makes a decision that fails this standard, the directors who chose to rely on that AI agent are the ones who answer for it.

"You can automate the work. You cannot automate the accountability. The Companies Act does not recognise AI as a decision-maker. It recognises the humans who chose to delegate decisions to AI — and holds them responsible for the outcomes."

Fiduciary Duty in an Autonomous System

Fiduciary duty is a personal obligation. It requires the fiduciary to act with loyalty, care, and good faith on behalf of another. In corporate governance, directors owe fiduciary duties to the company and, in certain circumstances, to shareholders and creditors. These duties are exercised through informed judgment — the director must understand the relevant facts, consider the options, and make a deliberate choice.

In a zero-human company, the AI agents are making hundreds or thousands of decisions per day without any human reviewing them. The directors may not know what decisions are being made, let alone whether those decisions satisfy the standard of care, skill, and diligence required by law. This is not delegation in the traditional sense — where a director delegates a specific task to a competent employee and monitors the outcome. This is abdication: the complete surrender of decision-making to an autonomous system.

The legal distinction matters. South African courts have consistently held that directors cannot escape liability by claiming ignorance of what happened in the company. In the landmark case of Fisheries Development Corporation of SA Ltd v Jorgensen 1980 (4) SA 156 (W), the court established that directors have a duty to keep themselves informed about the company's affairs. Choosing not to monitor an AI system that is making material business decisions does not discharge the duty — it compounds the breach.

For organisations that are not fully autonomous but are increasingly delegating decisions to AI systems, the principle is the same: the more you delegate to AI, the greater your obligation to monitor the AI's behaviour. Governance frameworks must scale with the scope of AI delegation.

POPIA Without a Human in the Loop

The Protection of Personal Information Act 4 of 2013 (POPIA) creates specific obligations for the processing of personal information. Several of these obligations presuppose human involvement that a zero-human company cannot provide.

Section 55 of POPIA sets out the duties and responsibilities of the Information Officer that every responsible party must have. This person is responsible for encouraging and ensuring POPIA compliance, handling data subject requests, and liaising with the Information Regulator. An AI agent cannot serve as an Information Officer. The role requires a natural person who can be contacted, who can exercise judgment about compliance questions, and who can be held personally accountable for failures.

Section 11 requires that processing be lawful. Among the grounds for lawful processing are the data subject's consent and processing that is necessary for the legitimate interests of the responsible party or a third party, a ground that remains subject to the data subject's right to object. Making this determination requires contextual judgment — weighing the organisation's interests against the individual's privacy rights. This is not a mechanical calculation. It requires understanding of the specific circumstances, the sensitivity of the data, and the potential impact on the data subject. Delegating this entirely to an AI system that has no understanding of human context raises fundamental questions about whether the processing can be considered lawful.

Section 22 requires notification of data breaches to the Information Regulator and, where applicable, to affected data subjects. The notification must be made "as soon as reasonably possible." In a zero-human company, who detects the breach? Who assesses its severity? Who decides whether notification is required? Who communicates with the regulator? If these functions are performed by AI, the quality and timeliness of breach response depends entirely on the AI's programming — and there is no human failsafe if the AI fails to detect or properly categorise a breach.

Regulatory gap: POPIA's enforcement framework assumes human accountability. An Information Officer must be a natural person. Breach notifications require human judgment. Consent determinations require contextual analysis. A fully AI-operated company creates a structural impossibility: POPIA obligations exist, but there is no human to discharge them.

Insurance and Liability: The Uncharted Territory

Professional indemnity insurance, directors and officers (D&O) insurance, and general commercial liability policies are all designed around human decision-making. They cover the consequences of human errors, omissions, and negligence. When the "employee" making the error is an AI agent, insurers face a fundamental underwriting challenge.

Consider a scenario: the AI agent in a zero-human company provides financial advice to a customer that results in a significant loss. The customer sues. Under a traditional D&O policy, the claim would be covered if a human director or officer made the negligent recommendation. But the recommendation was made by an AI agent that is not a director, not an officer, and not an employee. Does the policy respond?

The answer is uncertain and will vary by policy wording. Most commercial insurance policies contain exclusions for losses arising from technology failures, automated systems, or matters outside the insured's direct control. AI decision-making could fall into any of these exclusions. Until insurers develop specific AI liability products — and until case law establishes clear precedent — organisations operating with significant AI autonomy may find themselves in a coverage gap where losses are real but insurance is unavailable.

For South African organisations, this is not a problem exclusive to zero-human companies. Any organisation that deploys AI in customer-facing decisions, credit assessments, claims processing, or professional services is potentially exposed to the same coverage uncertainty. The prudent response is to review existing insurance coverage, engage with insurers about AI-specific risks, and ensure that policy wordings have been tested against AI-related loss scenarios.

Labour Law: What Happens to the Social Contract?

South Africa's labour law framework — the Labour Relations Act, the Basic Conditions of Employment Act, and the Employment Equity Act — exists to protect workers and regulate the employer-employee relationship. A company with zero employees has no workers to protect, no employment relationships to regulate, and no obligations under labour legislation.

This creates a competitive asymmetry. A zero-human company avoids payroll taxes, UIF contributions, skills levies, employment equity requirements, collective bargaining obligations, and dismissal protections. It faces no retrenchment costs, no labour disputes, and no workplace safety obligations. From a pure cost perspective, it has a structural advantage over every company that employs human beings.

The labour implications extend beyond the individual company. If the zero-human model proves viable and scales, it represents a direct challenge to the social contract that underpins employment-based taxation, social security, and economic participation. South Africa, with unemployment exceeding 32%, can ill afford a corporate model that generates economic value while employing nobody.

Regulators will eventually need to address this — whether through AI-specific labour contributions, automation levies, or revised definitions of employment that capture AI-mediated work. But for now, the regulatory framework has a blind spot, and the zero-human company is occupying it.

What Regulators Should Be Thinking About

The zero-human company is not an isolated novelty. It is the logical endpoint of a trend that is already well advanced: the progressive displacement of human decision-making by AI systems. Most organisations are somewhere on this spectrum, from AI-assisted (human decides, AI advises) to AI-augmented (AI decides, human approves) to AI-autonomous (AI decides, human is absent). The zero-human company simply occupies the far end.
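
To make this spectrum concrete, the classification can be captured in a simple autonomy register. The sketch below is a minimal illustration in Python; the function names and register entries are hypothetical assumptions, not a prescribed taxonomy.

```python
# Minimal sketch: recording business functions against the AI autonomy
# spectrum described above. All function names and entries are hypothetical.
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTED = "human decides, AI advises"
    AUGMENTED = "AI decides, human approves"
    AUTONOMOUS = "AI decides, human is absent"

# Hypothetical register mapping each business function to its current level.
autonomy_register = {
    "customer_support_replies": AutonomyLevel.AUGMENTED,
    "credit_assessment": AutonomyLevel.ASSISTED,
    "marketing_copy_generation": AutonomyLevel.AUTONOMOUS,
}

# Surface every function operating with no human in the loop.
for function, level in autonomy_register.items():
    if level is AutonomyLevel.AUTONOMOUS:
        print(f"Oversight gap: {function} ({level.value})")
```

On this view, the zero-human company is simply the degenerate case in which every entry in the register is autonomous.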

Regulators should be thinking about several questions simultaneously:

  1. Should companies be required to maintain minimum human oversight? If certain decisions — processing personal data, making credit assessments, providing professional advice — require human judgment as a matter of law, should companies be prohibited from fully automating these functions?
  2. Should AI agents have a legal status? Some legal scholars have proposed creating a new category of "electronic person" with limited legal rights and obligations. This is controversial but would provide a framework for assigning liability to AI systems directly.
  3. Should automation attract a fiscal contribution? If AI displaces human labour, should the entity deploying the AI make a contribution to social security systems that would otherwise have been funded by employment taxes?
  4. How should insurance adapt? Regulators in the insurance sector need to ensure that AI-related risks are insurable and that coverage gaps do not leave harmed parties without recourse.
  5. How should existing laws be interpreted? Before new legislation is enacted, regulators and courts can provide guidance on how existing frameworks — the Companies Act, POPIA, the Consumer Protection Act — apply to AI-autonomous entities.

What This Means for Your Organisation

Most organisations are not going to eliminate all human employees. But every organisation is moving toward greater AI autonomy in some functions. The governance lessons from the zero-human company apply at every point on the automation spectrum.

AI Autonomy Governance Checklist

  1. Map your AI autonomy spectrum. For each business function, document the current level of AI involvement: advisory, augmented, or autonomous. Identify where human oversight exists and where it is absent.
  2. Ensure human accountability for every AI decision. No AI system should make a material decision without a designated human who is responsible for the outcome. King IV's principles on technology and information governance expect this, and where personal information is involved, POPIA makes it a legal requirement.
  3. Review your insurance coverage. Test your D&O, PI, and commercial liability policies against AI-specific loss scenarios. Engage with your broker about coverage for AI-related claims.
  4. Verify your Information Officer obligations. Under POPIA, the Information Officer must be a natural person. Ensure this role is filled and that the person has genuine oversight of AI-driven data processing.
  5. Build AI monitoring into your governance framework. As AI autonomy increases, monitoring must increase proportionally. Automated decisions require automated monitoring, with human escalation pathways for anomalies; a minimal sketch of one such pathway follows this list.
  6. Prepare for regulatory evolution. The zero-human company will accelerate regulatory attention to AI governance. Organisations that build robust governance frameworks now will be ahead of the curve when new requirements emerge.
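
As a concrete illustration of items 2 and 5, the sketch below shows one way a human escalation pathway might be wired around automated decisions. It is a minimal example under stated assumptions: the confidence threshold, field names, and the notify_human hook are hypothetical, not a definitive implementation.

```python
# Minimal sketch of automated monitoring with a human escalation pathway.
# The threshold, fields, and notification hook are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIDecision:
    function: str      # business function that produced the decision
    confidence: float  # system's self-reported confidence, 0.0 to 1.0
    material: bool     # whether the decision has material business impact

def notify_human(owner: str, decision: AIDecision) -> None:
    # Placeholder: in practice this might raise a ticket or page the owner.
    print(f"Escalating {decision.function} decision to {owner} for review")

def escalate(decision: AIDecision, accountable_owner: str) -> bool:
    """Route material, low-confidence decisions to a designated human."""
    if decision.material and decision.confidence < 0.9:
        notify_human(accountable_owner, decision)
        return True
    return False

# Usage: a material, low-confidence decision is held for human review.
held = escalate(
    AIDecision("credit_assessment", confidence=0.72, material=True),
    accountable_owner="head_of_credit",
)
```

The essential design point is that every escalation names a specific accountable human, which is exactly the accountability that item 2 requires.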

Key Takeaways for Governance Professionals

  • The zero-human company is not a future scenario — it exists now, creating immediate questions about liability, fiduciary duty, and regulatory compliance that no jurisdiction has fully answered.
  • Under South Africa's Companies Act, AI cannot serve as a director — the human founders retain full fiduciary responsibility for every decision their AI agents make, whether they monitor those decisions or not.
  • POPIA requires a natural person as Information Officer and presupposes human judgment in processing decisions, breach detection, and consent management — obligations that a zero-human company structurally cannot discharge.
  • Insurance coverage for AI-autonomous decisions is uncertain — most D&O and PI policies were not designed for losses caused by AI agents, creating potential coverage gaps.
  • Zero-human companies avoid all labour law obligations, creating a competitive asymmetry that regulators will eventually need to address through automation levies or revised employment definitions.
  • The governance lessons apply to every organisation on the AI autonomy spectrum — as you delegate more to AI, your obligation to monitor, govern, and maintain human accountability increases proportionally.
  • Delegation of operational decisions to AI is legally permissible; abdication of governance responsibility is not — know the difference.
  • Regulators should be considering minimum human oversight requirements, AI legal status, automation fiscal contributions, and insurance adaptation for AI-autonomous entities.

Ensure AI Governance in Your Organisation

Priviso helps South African organisations build governance frameworks that maintain human accountability as AI autonomy increases. Start with a comprehensive governance assessment.
