OpenAI has announced the retirement of GPT-4o, one of the most widely deployed large language models in enterprise settings worldwide. For thousands of organisations that built workflows, products, and compliance processes around this model, the announcement raises an uncomfortable question: what happens when the AI you depend on simply ceases to exist?
This is not a theoretical concern. It is happening now. And for South African organisations — many of which adopted GPT-4o without formal governance frameworks, vendor risk assessments, or continuity planning — the retirement timeline represents a governance crisis that most boards have not anticipated.
The broader issue extends far beyond a single model from a single provider. AI model retirement is an emerging category of technology risk that has no established playbook, no regulatory guidance, and — in most organisations — no assigned owner. It sits in the gap between IT operations, procurement, compliance, and enterprise risk management. And that gap is where governance failures live.
The Model Retirement Problem Nobody Planned For
When organisations adopt cloud-based AI services, they are not purchasing software in the traditional sense. They are subscribing to access to a model that exists on someone else's infrastructure, is maintained at someone else's discretion, and can be modified or discontinued according to someone else's product roadmap. This is a fundamentally different risk profile from on-premise software, and most governance frameworks have not caught up.
Traditional software deprecation follows a predictable pattern: the vendor announces end-of-life, provides a migration window (typically 12 to 24 months), continues security patches during the transition, and offers a clear upgrade path to a successor product. The software you licensed yesterday still works today, even if it is no longer supported.
AI model retirement works differently. When OpenAI retires GPT-4o, the model does not continue running in a degraded state. It stops existing. Every API call that points to that model either fails or is automatically redirected to a different model — a model that may behave differently, produce different outputs for the same inputs, and have different performance characteristics. There is no "keep running the old version" option. The infrastructure is controlled entirely by the provider.
This creates a class of risk that is unique to AI-as-a-service: involuntary migration. Your organisation did not decide to change its AI infrastructure. The vendor decided for you. And the replacement model, while potentially more capable overall, is not the same model. Its outputs will differ. Its failure modes will differ. Its suitability for your specific use cases cannot be assumed — it must be verified.
Business Continuity: The AI Dependency You Did Not Document
Most organisations that adopted GPT-4o did so incrementally. A team integrated it into a customer service workflow. An analyst started using it for report summarisation. A compliance officer began leveraging it to review policy documents. Over time, these discrete use cases accumulated into a substantial operational dependency — one that was rarely documented in business continuity plans.
The result is a pattern we see repeatedly: organisations discover the full extent of their AI dependency only when the model is retired. The customer service team's response quality degrades. The analyst's reports come back formatted differently. The compliance officer's policy reviews miss nuances that the previous model caught. Each of these is a business disruption, and none of them were anticipated because nobody mapped the AI supply chain.
Business continuity planning has traditionally focused on infrastructure failure: what happens when the server goes down, the network drops, or the data centre floods. AI model retirement introduces a new failure mode: capability withdrawal. The infrastructure is fine. The API is responding. But the capability you relied on has been replaced with a different capability, and the delta between the two is unknown until you test it.
"AI model retirement is not a technology problem. It is a governance failure that manifests as a technology problem. The model did exactly what the vendor said it would do — it was available until it wasn't. The failure was in the organisation's assumption that availability would continue indefinitely."
Vendor Lock-in: Deeper Than You Think
The concept of vendor lock-in is well understood in enterprise technology. What makes AI vendor lock-in particularly insidious is that it operates at multiple layers simultaneously.
At the API layer, organisations build integrations against a specific provider's API structure. Moving to a different provider requires rewriting integration code, updating authentication mechanisms, and adapting to different request/response formats. This is the visible lock-in, and it is manageable with proper abstraction layers.
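One way to picture such an abstraction layer is an adapter pattern: business logic depends on a neutral interface, and each provider sits behind its own adapter. The sketch below is illustrative only — the class names, the stubbed adapter, and the `summarise_report` function are assumptions for this example, not any vendor's real SDK.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface: business logic depends on this, not a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter(ChatModel):
    """Hypothetical adapter for an OpenAI-backed deployment (API call stubbed out)."""

    def __init__(self, model_name: str = "gpt-4o"):
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor API here; stubbed for illustration.
        return f"[{self.model_name}] response to: {prompt}"


def summarise_report(model: ChatModel, report_text: str) -> str:
    """Business logic written against the interface, so a migration touches only the adapter."""
    return model.complete(f"Summarise the following report:\n{report_text}")
```

Under this structure, a forced migration means writing one new adapter and re-testing, rather than rewriting every workflow that calls the model.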
At the prompt layer, organisations invest significant effort in prompt engineering — crafting instructions that produce reliable outputs from a specific model. These prompts are model-specific. A prompt optimised for GPT-4o will not produce identical results on GPT-5, Claude, or Gemini. The organisation's intellectual property in prompt design is effectively tied to a specific model version.
At the behavioural layer, the deepest and most dangerous lock-in occurs. Organisations calibrate their human processes around the model's specific behaviour patterns. Reviewers learn what the model tends to get right and wrong. Workflows are designed around the model's typical response times and output structures. Quality assurance processes are tuned to catch the model's known failure modes. When the model changes, all of this institutional knowledge becomes unreliable.
For South African organisations, the vendor lock-in risk is compounded by limited provider diversity. The major AI model providers are all US-based companies operating under US jurisdiction. There is no South African AI model provider of comparable capability, which means organisations cannot easily shift to a domestic alternative when a foreign provider makes unilateral changes.
POPIA Implications: When Your Data Processor Changes Without Notice
Under the Protection of Personal Information Act (POPIA), organisations that process personal information must do so in accordance with specific conditions, including purpose limitation, processing limitation, and security safeguards. When an AI model is retired and replaced, several of these conditions come under strain.
First, consider purpose limitation. If an organisation disclosed to data subjects that their information would be processed using a specific AI system for a specific purpose, and that system is replaced with a materially different one, has the processing purpose changed? The answer is not straightforward. If the new model handles data differently — retaining different context, making different inferences, or producing different classifications — there is an argument that the nature of processing has changed even if the stated purpose has not.
Second, consider security safeguards. POPIA Section 19 requires responsible parties to secure the integrity and confidentiality of personal information. A new model may have different data handling characteristics, different vulnerability profiles, and different failure modes. The security assessment conducted for the retired model does not automatically transfer to its replacement. A fresh assessment is required.
Third, consider operator agreements. Under POPIA Section 21, where a responsible party uses an operator (processor) to process personal information, there must be a written contract ensuring the operator processes information only with the knowledge or authorisation of the responsible party. When OpenAI replaces GPT-4o with a successor model, is that a change in how the operator processes information? If so, does it require updated authorisation?
POPIA compliance risk: When an AI model is retired and replaced, organisations must reassess their processing activities, update privacy impact assessments, and verify that operator agreements still accurately reflect the processing being performed. Automatic migration to a successor model does not constitute automatic POPIA compliance.
King IV: Board Accountability for Technology Dependencies
The King IV Report on Corporate Governance establishes that the governing body is responsible for the governance of technology and information. Principle 12 specifically requires boards to govern technology as a strategic asset and ensure that technology risks are managed within the organisation's risk appetite.
AI model dependency is a technology risk. When an organisation builds critical business processes on a third-party AI model that can be retired at the provider's discretion, the board has a fiduciary obligation to understand that dependency and ensure appropriate mitigations are in place. This includes understanding the contractual terms governing model availability, the organisation's migration readiness, and the business impact of forced migration.
In practice, most boards have not received this level of briefing on AI dependencies. The adoption of AI tools has typically been driven by operational teams, often without formal board approval or enterprise risk assessment. GPT-4o's retirement is a useful forcing function: it compels organisations to surface AI dependencies that have been accumulating below the board's line of sight.
How Organisations Should Plan for AI Model Lifecycle Management
AI model retirement is not a one-time event. It is a recurring feature of the AI landscape. Models will continue to be deprecated as providers release newer versions, shift strategic priorities, or consolidate their product lines. Organisations need a systematic approach to managing this lifecycle.
AI Model Lifecycle Governance Checklist
- Maintain an AI model registry. Document every AI model in use across the organisation, including the provider, model version, deployment date, business processes dependent on it, and the contractual terms governing its availability. This is the foundation of AI lifecycle governance.
- Build abstraction layers. Where possible, architect AI integrations through abstraction layers that decouple business logic from specific model implementations. This reduces the engineering cost of migration when a model is retired.
- Establish model evaluation protocols. Before migrating to a replacement model, conduct structured evaluation against your specific use cases. Do not assume that a "newer" or "more capable" model will perform identically for your workload. Test with real data and real scenarios.
- Include AI in business continuity planning. AI model retirement should be a documented scenario in your BCP. Define the impact, the response procedure, the responsible parties, and the recovery time objectives for each critical AI dependency.
- Negotiate contractual protections. When entering AI service agreements, negotiate for minimum notice periods before model retirement, commitments to backward compatibility where feasible, and clear documentation of behavioural changes in successor models.
- Conduct POPIA reassessments on migration. Every model change that affects the processing of personal information should trigger a privacy impact reassessment. Update processing records, verify operator agreements, and confirm that security safeguards remain adequate.
- Brief the board on AI dependency risk. Ensure that your governing body understands the organisation's AI dependencies, the risks of model retirement, and the governance structures in place to manage lifecycle transitions. This is a King IV obligation.
- Consider multi-provider strategies. Reducing dependency on a single AI provider is the most effective mitigation for retirement risk. Evaluate whether critical workloads can be supported by models from multiple providers, enabling rapid failover when one model is deprecated.
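The registry item above need not be elaborate to be useful. A minimal sketch, with illustrative field names chosen for this example rather than taken from any standard, might look like this:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ModelRegistryEntry:
    """One registry record per deployed model, per the checklist above (fields illustrative)."""
    provider: str
    model_version: str
    deployed: date
    dependent_processes: list[str]
    retirement_notice_days: int  # contractual minimum notice, if negotiated


registry: dict[str, ModelRegistryEntry] = {}


def register(name: str, entry: ModelRegistryEntry) -> None:
    registry[name] = entry


def processes_at_risk(provider: str) -> list[str]:
    """Every business process exposed if the named provider retires its models."""
    exposed: list[str] = []
    for entry in registry.values():
        if entry.provider == provider:
            exposed.extend(entry.dependent_processes)
    return exposed
```

Even a spreadsheet with these columns answers the question most organisations cannot currently answer: "which processes break if this provider retires this model?"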
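The multi-provider failover described in the last item can be sketched as a priority-ordered list of callables. Everything below is hypothetical — the stub "providers" stand in for real vendor adapters, and `ProviderError` is an assumed exception type, not part of any SDK:

```python
class ProviderError(Exception):
    """Raised by an adapter when its provider is unavailable or its model is retired."""


def call_with_failover(prompt: str, providers: list) -> tuple:
    """Try each (name, call) pair in priority order; return the first success."""
    failures = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            failures.append((name, str(exc)))
    raise RuntimeError(f"All providers failed: {failures}")


# Stub adapters standing in for real vendor integrations.
def primary(prompt: str) -> str:
    raise ProviderError("model retired")  # simulate a deprecated model


def secondary(prompt: str) -> str:
    return f"secondary answer to: {prompt}"
```

Failover only works, of course, if the secondary model has already been evaluated against the same workloads — which is why the evaluation-protocol item precedes this one.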
The Broader Lesson: AI Is Infrastructure, Not Just a Tool
GPT-4o's retirement crystallises a lesson that the governance profession has been slow to absorb: AI has become infrastructure. It is not an optional productivity enhancement that can be switched off without consequence. For many organisations, it is embedded in revenue-generating processes, compliance workflows, and customer-facing operations. It is as critical as the ERP system, the CRM, or the email server.
Infrastructure demands infrastructure-grade governance. That means redundancy planning, lifecycle management, vendor risk assessment, performance monitoring, and documented recovery procedures. It means the board understands the dependency. It means the risk committee has assessed the exposure. It means procurement has negotiated appropriate protections.
Most organisations are not there yet. The adoption of AI outpaced the governance of AI, and model retirement events are the moments when that gap becomes visible. The question for South African organisations is whether they will treat GPT-4o's retirement as a one-time disruption to manage, or as a signal to build the governance infrastructure that should have been in place from the start.
Key Takeaways for Governance Professionals
- AI model retirement is an emerging risk category that most governance frameworks do not address — when a model is deprecated, it stops existing entirely, forcing involuntary migration.
- Vendor lock-in with AI operates at three layers: API integration, prompt engineering, and behavioural calibration — each making migration more costly than anticipated.
- POPIA compliance does not automatically transfer to a successor model — organisations must reassess processing activities, security safeguards, and operator agreements when AI models change.
- King IV Principle 12 requires boards to govern technology dependencies, including understanding AI model risks and ensuring appropriate continuity planning.
- Business continuity plans must include AI model retirement scenarios with defined impact assessments, response procedures, and recovery time objectives.
- Abstraction layers and multi-provider strategies are the most effective technical mitigations for reducing AI model retirement risk.
- AI model registries are foundational — organisations cannot govern AI dependencies they have not documented.
- AI has become infrastructure, not just a tool, and demands infrastructure-grade governance including redundancy, monitoring, and lifecycle management.
Build AI Resilience Into Your Governance Framework
Priviso helps South African organisations manage AI dependencies with structured governance frameworks, risk assessments, and continuity planning aligned to POPIA and King IV.