Last week, the global cybersecurity community convened at RSAC 2026 in San Francisco. The dominant theme was not a new vulnerability or a novel exploit. It was a governance crisis. Specifically, the realisation that organisations are deploying AI agents at pace, and almost none of them can control what those agents do once deployed.

The numbers, drawn from the Kiteworks 2026 Data Security, Compliance & Risk Forecast Report, are striking. Sixty-three percent of organisations cannot enforce purpose limitations on their AI agents. Sixty percent cannot terminate an agent that is misbehaving. Fifty-five percent cannot isolate AI systems from their broader networks. And while ninety percent claim visibility into their AI footprint, fifty-nine percent admit that shadow AI is operating outside of any governance process.

SiliconAngle described the current landscape as the agentic “wild west,” and it is difficult to argue otherwise. Despite these control deficiencies, a third of organisations are already planning autonomous workflow agents that act without human approval. Another twenty-four percent are building decision-making agents with independent access to sensitive data. The industry is deploying capability far ahead of the ability to contain it.

The Governance-Containment Gap

What makes this particularly uncomfortable for boards is the governance-containment gap. Most organisations have invested in monitoring, visibility dashboards, and human-in-the-loop oversight. What they have not built are the harder controls: purpose-binding, kill switches, and network isolation. The gap between watching and stopping is fifteen to twenty percentage points, and it is in that gap that material risk resides.

There is an important distinction here that the RSAC conversations made clear. Monitoring tells you what an agent is doing. Containment tells you whether you can stop it. Many organisations conflate the two. They believe that because they can see their AI agents operating, they have governance in place. They do not. Governance without containment is observation without control — and observation without control is not governance at all.
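
To make the distinction concrete, the short sketch below contrasts the two postures in Python. It is illustrative only: the agent identifiers, action names, and deny list are hypothetical, and in a real deployment the check would sit in the agent platform's tool-execution layer rather than in ad hoc application code.

    # Illustrative contrast between watching an agent and being able to stop it.
    AUDIT_LOG: list[tuple[str, str]] = []

    def monitored_call(agent_id: str, action: str) -> str:
        """Monitoring: every action is visible in the log, but nothing is prevented."""
        AUDIT_LOG.append((agent_id, action))
        return execute(action)

    BLOCKED_ACTIONS = {"send_external_email", "bulk_export"}  # hypothetical deny list

    def contained_call(agent_id: str, action: str) -> str:
        """Containment: the same visibility, plus the ability to refuse."""
        AUDIT_LOG.append((agent_id, action))
        if action in BLOCKED_ACTIONS:
            raise PermissionError(f"{agent_id} blocked from performing {action}")
        return execute(action)

    def execute(action: str) -> str:
        return f"executed {action}"  # stand-in for the real tool call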

“The gap between watching and stopping is fifteen to twenty percentage points, and it is in that gap that material risk resides.”

The Shadow AI Problem

The shadow AI statistic deserves particular attention. Fifty-nine percent of organisations acknowledge that AI systems are operating outside formal governance. These are not rogue experiments by junior staff. In many cases, they are productivity tools adopted by entire departments, customer-facing chatbots deployed by marketing teams, or data analysis agents integrated by business units — all without IT, legal, or compliance review.

Shadow AI differs from traditional shadow IT in a critical way: shadow IT stores data; shadow AI acts on it. An unapproved spreadsheet is a data governance risk. An unapproved AI agent that sends emails, submits forms, queries databases, and takes actions on behalf of the organisation is an operational risk, a legal risk, and potentially a regulatory risk — all at once.

The containment deficit:

  • 63% cannot enforce purpose limitations.
  • 60% cannot terminate misbehaving agents.
  • 55% cannot isolate AI from broader networks.
  • 59% have shadow AI outside governance.

Yet 33% are planning autonomous agents with no human approval required.

What This Means for South African Organisations

For South African organisations, this is not a distant concern. AI agents are already embedded in productivity platforms, financial systems, and customer-facing operations. The same control deficiencies identified at RSAC 2026 exist in local deployments.

King V requires the governing body to set the direction for, and exercise oversight of, technology risk. If your organisation cannot enforce purpose limitations on its AI agents, or terminate one that has gone off-script, that obligation is unmet.

ISO 42001 demands documented controls over AI system behaviour, including the ability to intervene when systems operate outside their defined parameters. The standard exists precisely because the scenario described by the Kiteworks data — widespread deployment with inadequate containment — is the predictable failure mode for ungoverned AI.

POPIA requires that personal information be processed for specific, defined purposes. If an AI agent cannot be constrained to its stated purpose, the organisation may be in breach of POPIA’s purpose limitation principle — whether or not the breach was intentional.

Three Questions Every Board Should Ask


  1. “Can we terminate any AI agent in our environment within minutes?” If the answer is no, or uncertain, your organisation has a containment gap. The ability to stop an AI agent is not optional — it is a baseline governance requirement.
  2. “Do we know every AI agent operating in our environment, including those adopted by business units without IT approval?” Shadow AI is not an IT problem. It is a board-level risk. If fifty-nine percent of organisations globally have shadow AI, assume your organisation does too until proven otherwise.
  3. “Can we enforce purpose limitations on every AI agent — not just monitor what it does, but prevent it from doing what it should not?” Monitoring is necessary but insufficient. Purpose-binding is the harder control, and it is the one that regulators and courts will look for when something goes wrong.

The Path Forward

The RSAC 2026 data does not suggest that organisations should stop deploying AI agents. The efficiency and capability gains are real. What it does suggest is that deployment has outpaced governance, and the gap is now large enough to represent material risk.

Closing the gap requires investment in three areas:

Purpose-binding controls: AI agents should operate within defined parameters that are technically enforced, not merely documented. This means architectural controls that prevent agents from accessing data or taking actions outside their approved scope.
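
As an illustration of the difference between a documented purpose and an enforced one, the sketch below binds each agent to a declared purpose at the point of data access. The manifest format, agent names, and data domains are hypothetical; a production deployment would enforce the same rule in its data gateway or tool layer.

    # Hypothetical purpose-binding manifest, checked before any data access.
    AGENT_MANIFEST = {
        "invoice-agent": {
            "purpose": "accounts-payable processing",
            "data_domains": {"invoices", "supplier_master"},  # approved scope only
        },
    }

    def fetch(agent_id: str, data_domain: str, query: str) -> str:
        """Refuse any query that falls outside the agent's declared purpose."""
        manifest = AGENT_MANIFEST.get(agent_id)
        if manifest is None or data_domain not in manifest["data_domains"]:
            raise PermissionError(
                f"{agent_id} has no approved purpose covering '{data_domain}'"
            )
        return run_query(data_domain, query)  # stand-in for the real data access

    def run_query(data_domain: str, query: str) -> str:
        return f"results from {data_domain}"

A structure like this also gives POPIA's purpose limitation principle something auditable to point to, because the declared purpose and the technically enforced scope are the same artefact.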

Kill switches: Every AI agent must be terminable. If an agent cannot be stopped, it should not be deployed. This is not a theoretical requirement — it is the minimum viable containment standard.
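
A kill switch does not need to be elaborate; it needs to be checked on every action path. A minimal sketch, assuming the agent runs as a stepwise loop (the names are illustrative):

    import threading

    # One termination flag per agent; setting it stops the agent at its next step.
    KILL_SWITCHES: dict[str, threading.Event] = {}

    def register(agent_id: str) -> None:
        KILL_SWITCHES[agent_id] = threading.Event()

    def terminate(agent_id: str) -> None:
        """The board-level test: any agent, stopped within minutes, by a single call."""
        KILL_SWITCHES[agent_id].set()

    def run_agent(agent_id: str, steps) -> None:
        register(agent_id)
        for step in steps:
            if KILL_SWITCHES[agent_id].is_set():
                break  # containment: stop before the next action, not after it
            step()  # a single tool call or decision

The important property is not the flag itself but coverage: an agent with action paths the flag does not gate is not terminable in any meaningful sense.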

Network isolation: AI agents should be segmented from critical systems unless access is explicitly approved and monitored. The assumption should be zero trust: no AI agent gets broad network access by default.
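
Deny-by-default egress is the simplest expression of that posture. A sketch of an application-level allowlist, assuming the agent's outbound traffic can be funnelled through a single client (the hostname is a placeholder):

    from urllib.parse import urlparse
    import urllib.request

    # Zero trust by default: only explicitly approved destinations are reachable.
    APPROVED_HOSTS = {"api.internal.example"}  # placeholder hostname

    def guarded_get(agent_id: str, url: str) -> bytes:
        """Block any outbound request to a host that has not been approved."""
        host = urlparse(url).hostname
        if host not in APPROVED_HOSTS:
            raise PermissionError(f"{agent_id} attempted egress to unapproved host {host}")
        with urllib.request.urlopen(url) as response:  # stand-in for the real client
            return response.read()

In practice the same rule belongs at the network layer as well, through segmentation and egress filtering, so that an agent cannot bypass the wrapper by opening its own connections.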

The question RSAC 2026 has posed is blunt: who owns your AI agents, and can they be stopped?

Key Takeaways


  • RSAC 2026 data shows 63% of organisations cannot enforce purpose limitations on AI agents and 60% cannot terminate misbehaving ones — a critical governance-containment gap.
  • Shadow AI is operating outside governance in 59% of organisations, yet a third are planning fully autonomous agents with no human approval.
  • The gap between monitoring and containment is 15-20 percentage points — organisations can watch their agents but cannot stop them.
  • King V, ISO 42001, and POPIA all require controls that most organisations currently lack for their AI agent deployments.
  • Boards should ask three questions: Can we terminate any agent in minutes? Do we know every agent in our environment? Can we enforce purpose limitations?
  • Minimum viable AI governance requires purpose-binding controls, kill switches, and network isolation — not just monitoring dashboards.

Sources

  • TechRepublic — RSAC 2026 Proved the Industry Agrees on the Problem
  • SiliconAngle — Cybersecurity Governance in the Agentic Wild West
  • SC Media — RSAC 2026: AI Agents Are Joining the Workforce, So Who’s in Charge?
  • Computer Weekly — RSAC Rewind: Agentic AI, Governance Gaps and Insider Threats
  • Kiteworks — 2026 Data Security, Compliance & Risk Forecast Report

Get Your AI Agent Governance in Order

Priviso helps South African organisations audit their AI footprint, build containment controls, and align with ISO 42001 and King V requirements. Start with a free assessment.
