In January 2024, OpenAI quietly updated its usage policies to remove the blanket prohibition on military and warfare applications of its technology. The change was subtle — a few words deleted from a policy document — but its implications are anything but. The company that launched the generative AI revolution with ChatGPT, that told the world its mission was to ensure artificial general intelligence benefits "all of humanity," had opened the door to defence and intelligence contracts.

By early 2026, the results of that policy shift are visible. OpenAI has engaged with the US Department of Defense, established partnerships with defence contractors, and positioned itself as a provider of AI capabilities for national security applications. The company frames this as responsible engagement — helping democratic governments use AI safely rather than ceding the field to adversaries with fewer ethical guardrails.

For South African businesses that have built workflows, products, and services on top of OpenAI's API, this is not a distant policy debate. It is a supply chain risk event that demands assessment. When your AI vendor's priorities, governance structures, and contractual obligations shift, the risk profile of every product built on that vendor's technology shifts with it.

Hear this discussed on Priviso Live

This article is based on the discussion from Episode 76, where we examine OpenAI's military pivot, the trust implications for businesses, and how SA organisations should respond.

The Policy Change Timeline: From "No Military" to "National Security"

OpenAI's evolution on military use has been gradual, deliberate, and — critics would argue — strategically ambiguous. Understanding the timeline is essential for assessing the credibility of AI vendor commitments more broadly.

2023 and earlier: OpenAI's usage policies explicitly prohibited "military and warfare" applications. The language was unambiguous. The company's positioning was clear: its technology was for civilian benefit, and military applications were off-limits.

January 2024: The "military and warfare" prohibition was removed from the usage policy. In its place, OpenAI retained prohibitions on using its technology to "develop or use weapons," "harm yourself or others," or "destroy property." The company explained the change as a refinement — the old language was too broad and prevented beneficial uses like veteran support services and military education programmes. Critics noted that the new language left enormous room for interpretation.

2024-2025: OpenAI established formal relationships with defence organisations. Reports emerged of partnerships with Anduril Industries and other defence contractors. The company hired personnel with national security backgrounds and created an internal team focused on government and defence partnerships.

2026: OpenAI's defence engagement is now an acknowledged part of its business strategy. The company has argued that AI safety in a geopolitical context requires engagement with democratic governments, and that abstaining from military applications does not prevent their development — it merely ensures that less safety-conscious providers fill the gap.

What Changed and Why: The Economics of AI Development

Understanding why OpenAI reversed its military policy requires understanding the economics of frontier AI development. Training large language models costs hundreds of millions of dollars. OpenAI has raised over $10 billion from investors, primarily Microsoft, and the pressure to demonstrate revenue growth is intense.

Government and defence contracts represent some of the largest, most reliable revenue streams in the technology industry. The US Department of Defense's technology budget exceeds $100 billion annually. For a company burning through capital at OpenAI's rate, the commercial logic of pursuing defence contracts is straightforward — even if it requires revising previously stated principles.

This economic reality has a direct implication for every organisation that relies on OpenAI's products: the vendor's priorities are shaped by its funding model, not its founding mission statement. When the financial incentives pointed away from military work, the policy prohibited it. When the incentives shifted, the policy shifted with it. This is not unusual for technology companies, but it should inform how organisations assess the stability and reliability of vendor commitments.

"When an AI vendor reverses a stated ethical commitment, the reversal itself is less concerning than the speed and ease with which it happened. If a policy can be changed with a quiet update to a webpage, it was never a structural commitment — it was a positioning statement."

Trust Implications: When Safety Commitments Are Reversible

The trust problem OpenAI's policy reversal creates is not primarily about military AI. It is about the reliability of vendor commitments as governance inputs. Organisations around the world — including in South Africa — have made procurement decisions, risk assessments, and compliance representations based on OpenAI's stated policies. When those policies change, the foundations of those decisions shift.

Consider a South African financial services firm that selected OpenAI's API for a customer-facing application. Part of the vendor assessment may have included OpenAI's ethical positioning — its commitment to safety, its acceptable use policies, its stated boundaries. If those boundaries are movable, the vendor assessment is incomplete. And if the firm represented to its own regulators or customers that its AI provider had specific ethical commitments, those representations may now be inaccurate.

This is not a hypothetical concern. Under POPIA, organisations that use AI to process personal information must ensure adequate security safeguards and purpose limitation. If your AI vendor's priorities, data handling practices, or governance structures shift due to new military contracts, the POPIA compliance posture you established at procurement may no longer be valid.

Supply Chain Risk for South African Businesses

South African businesses using OpenAI's API — whether directly or through products built on it — face several concrete supply chain risks from the military pivot.

Infrastructure sharing. OpenAI's commercial API and its government offerings run on shared infrastructure (primarily Microsoft Azure). While logical separation exists, the physical and operational infrastructure overlaps. This creates a theoretical risk that security incidents, policy changes, or government orders affecting the military side of the business could have spillover effects on commercial services.

Model development priorities. When a significant portion of revenue and strategic focus shifts toward defence applications, model development priorities may follow. Features, safety measures, and capability investments that matter most to military customers may receive disproportionate attention, potentially at the expense of features that matter to commercial and civilian users.

Regulatory exposure. US national security legislation — including the Foreign Intelligence Surveillance Act (FISA), the CLOUD Act, and various executive orders — gives the US government broad authority to compel technology companies to provide access to data and systems. A company with active defence contracts is more, not less, likely to be subject to these authorities. For South African organisations processing personal information through OpenAI's systems, this creates a data sovereignty question that POPIA's conditions for cross-border transfer were designed to address.

Reputational contagion. In an era of increased scrutiny around AI ethics, an organisation's choice of AI vendor is increasingly visible to customers, investors, and regulators. If OpenAI's military work becomes controversial — as Google's Project Maven did — organisations associated with the brand may face reputational questions they have not prepared for.

Data Sovereignty and POPIA: The Specific Risks

POPIA Section 72 regulates the transborder flow of personal information. It permits transfers where the recipient jurisdiction provides an adequate level of data protection, where the data subject consents, or where the transfer is necessary for the performance of a contract. Most South African businesses using OpenAI rely on one of these grounds.

The military dimension introduces a complication. When your AI vendor has contracts with intelligence and defence agencies, the risk that government authorities will seek access to data processed by that vendor increases. The CLOUD Act explicitly gives the US government the authority to compel US-based technology companies to produce data stored anywhere in the world, regardless of local data protection laws. For South African organisations, this means that personal information sent to OpenAI's API could theoretically be accessed by US authorities without the data subject's knowledge or consent.

This is not a new risk — it existed before OpenAI's military pivot. But the pivot increases the salience and probability. A company actively serving defence and intelligence clients is more deeply embedded in the national security apparatus, more likely to receive data access requests, and potentially more willing to comply without resistance.

POPIA consideration: Section 72 cross-border transfer grounds may need reassessment if your AI vendor's military contracts change the data sovereignty risk profile. The Information Regulator has not issued specific guidance on AI vendor military contracts, but the general principles of adequate safeguards and purpose limitation apply.

How South African Organisations Should Evaluate AI Vendor Risk

The OpenAI military pivot is not an isolated event. It is part of a broader pattern where AI companies' stated commitments evolve as their business models mature. Organisations that build governance around vendor promises rather than structural safeguards will be caught off guard repeatedly. Here is a more resilient approach.

AI Vendor Risk Assessment Framework

  1. Treat vendor policies as inputs, not guarantees. Usage policies, ethical commitments, and safety pledges are useful signals, but they are unilaterally changeable. Build your governance framework on what you can verify and enforce, not on what the vendor promises.
  2. Assess data flow and sovereignty. Map exactly where personal information goes when it enters an AI vendor's system. Identify which jurisdictions are involved, which legal authorities apply, and whether the vendor's government contracts increase exposure to compelled data access.
  3. Evaluate contractual protections. Review your contract with the AI vendor. Does it include data processing agreements? Does it address government data access requests? Does it require notification if the vendor's policies or government relationships change materially? If not, negotiate these terms.
  4. Build vendor diversification into your architecture. Avoid deep dependency on a single AI vendor. Design systems that can switch between providers with manageable effort. Concentration risk in AI is as real as concentration risk in any other supply chain.
  5. Monitor policy changes actively. Subscribe to vendor policy update notifications. Assign someone in your governance team to review changes when they occur, not months later when a journalist reports them.
  6. Conduct POPIA impact assessments for AI vendors. If an AI vendor processes personal information on your behalf, a POPIA impact assessment should account for the vendor's government relationships, jurisdictional exposure, and policy stability. This is not optional — it is a condition of responsible processing under POPIA Section 19.
  7. Report to the board. King IV requires boards to govern technology risk. AI vendor military contracts, policy changes, and data sovereignty risks are material governance matters that belong in board-level technology risk reporting.
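Step 4 of the framework above can be made concrete at the architecture level. The sketch below shows one common pattern, a provider-agnostic abstraction layer, so that application code never depends on a single vendor's SDK and a fallback provider can be swapped in. All class and method names here are illustrative, not a real SDK; in production the stubbed `complete` methods would wrap each vendor's actual client library.

```python
# Minimal sketch of a provider-agnostic AI client layer (framework step 4).
# Class and method names are illustrative assumptions, not a real SDK.
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Uniform interface so application code never imports a vendor SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider(ChatProvider):
    # Stubbed for illustration; in production this would wrap the vendor SDK.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class LocalModelProvider(ChatProvider):
    # Stand-in for a self-hosted or alternative provider.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class FailoverRouter(ChatProvider):
    """Route requests to the primary provider, falling back if it fails."""

    def __init__(self, primary: ChatProvider, fallback: ChatProvider):
        self.primary = primary
        self.fallback = fallback

    def complete(self, prompt: str) -> str:
        try:
            return self.primary.complete(prompt)
        except Exception:
            return self.fallback.complete(prompt)


client: ChatProvider = FailoverRouter(OpenAIProvider(), LocalModelProvider())
```

Because the application only ever talks to `ChatProvider`, replacing or demoting a vendor after a risk reassessment is a configuration change rather than a rewrite — which is the point of treating diversification as an architectural decision, not an afterthought.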
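Step 5 of the framework can likewise be partially automated. A minimal sketch, assuming you periodically fetch each vendor policy page yourself: store a fingerprint of the page content and flag any run where it differs, so a quiet policy edit triggers a governance review rather than going unnoticed. The function names are hypothetical, and a real monitor would also strip page chrome before hashing to avoid false positives.

```python
# Sketch of framework step 5: detect vendor policy-page changes by fingerprint.
# Function names are illustrative; fetching and storage are left to the caller.
import hashlib
from typing import Optional


def fingerprint(content: bytes) -> str:
    """Stable SHA-256 fingerprint of the fetched policy page content."""
    return hashlib.sha256(content).hexdigest()


def has_changed(previous: Optional[str], content: bytes) -> bool:
    """True if the page differs from the stored fingerprint.

    The first run (no stored fingerprint yet) is not treated as a change.
    """
    return previous is not None and fingerprint(content) != previous


# Usage: compare this fetch against the fingerprint saved from the last one.
stored = fingerprint(b"usage policy text, version 1")
alert = has_changed(stored, b"usage policy text, version 2")  # True: review needed
```

This does not interpret the change — a human still has to assess materiality — but it converts "monitor policy changes actively" from a good intention into a scheduled job with an audit trail.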

The Geopolitical Context: Why This Matters Beyond America

OpenAI's military pivot is occurring in a geopolitical context where AI is increasingly viewed as a strategic national asset. The US, China, Russia, and the EU are all treating AI capability as a matter of national competitiveness and security. This framing has consequences for every country that consumes AI technology produced by these powers.

South Africa, as a consumer of AI technology rather than a producer of frontier models, is particularly exposed to these dynamics. When the AI tools your businesses rely on become instruments of great power competition, the terms of access, the stability of supply, and the conditions of use become geopolitical variables — not just commercial ones.

This does not mean South African organisations should stop using OpenAI or any other American AI provider. It means they should factor geopolitical risk into their vendor assessments alongside technical capability, pricing, and compliance. The question is no longer just "is this AI tool good?" but "is this AI tool reliable, given the political and strategic priorities of the entity that controls it?"

Key Takeaways for South African Businesses

  • OpenAI's reversal of its military use ban demonstrates that AI vendor ethical commitments can change rapidly — governance frameworks should be built on verifiable safeguards, not vendor promises.
  • The policy shift was driven by economic pressure: defence contracts represent massive, reliable revenue for capital-intensive AI companies. Follow the funding model to understand where priorities will go.
  • South African businesses using OpenAI's API face supply chain risks including infrastructure sharing with military programmes, model development priority shifts, and increased regulatory exposure under US national security law.
  • POPIA cross-border transfer assessments should account for the increased data sovereignty risk when AI vendors have active defence and intelligence contracts subject to FISA and the CLOUD Act.
  • Contractual protections with AI vendors should include data processing agreements, government access notification requirements, and material change clauses tied to policy reversals.
  • Vendor diversification is a risk management imperative — deep dependency on a single AI provider whose priorities may shift creates unacceptable concentration risk.
  • AI is increasingly a geopolitical asset, not just a commercial product. South African organisations must factor great power competition into AI vendor risk assessments alongside technical and compliance considerations.

Assess Your AI Vendor Risk Exposure

Priviso helps South African businesses evaluate AI vendor risk, ensure POPIA compliance for cross-border data processing, and build resilient governance frameworks.
