Anthropic has built its entire brand on a single proposition: it is the safety-first AI company. Founded by former OpenAI researchers who left over concerns about the pace of AI deployment relative to safety research, Anthropic has positioned itself as the responsible counterweight to the industry's "move fast and break things" culture. Its Responsible Scaling Policy, its investments in interpretability research, and its public communications all reinforce the same message — safety is not a constraint on the business, it is the business.
So when Anthropic signed a contract with the United States Department of Defense, the reaction was immediate and polarised. Critics called it a betrayal. Supporters called it pragmatic. The truth, as is usually the case, sits somewhere more uncomfortable — in a grey zone where safety rhetoric meets geopolitical reality, and where the ethics of AI development collide with the economics of survival in a market dominated by companies with fewer scruples.
For South African organisations evaluating AI vendors based on their ethical commitments, this development is not just an American controversy. It is a case study in how to read — and how to stress-test — the claims AI companies make about their values.
What Anthropic Actually Agreed To
The specifics of Anthropic's Pentagon contract are, by design, not fully public. What is known is that the engagement involves providing Claude, Anthropic's large language model, for national security applications. This reportedly includes intelligence analysis support, document processing, and potentially strategic planning assistance — tasks that leverage the model's strengths in reasoning, summarisation, and pattern recognition.
Anthropic has been clear about what it claims the contract does not involve: weapons targeting, lethal autonomous systems, or direct combat applications. The company frames this as an extension of its existing acceptable use policy, which permits certain government and security applications while drawing a line at uses that could directly cause physical harm.
The distinction matters, but it also invites scrutiny. Intelligence analysis that informs targeting decisions is not the same as pulling a trigger, but it is not disconnected from the kill chain either. The question of how many steps removed from harm a technology must be before its use is ethically acceptable is one that neither Anthropic nor the broader AI industry has answered satisfactorily.
The Acceptable Use Policy: What It Says and What It Does Not
Anthropic's Acceptable Use Policy (AUP) is more detailed than most in the industry. It explicitly prohibits the use of its models to develop weapons of mass destruction, to generate child sexual abuse material, or to conduct surveillance that violates civil liberties. It carves out exceptions for government use cases that meet certain criteria around proportionality and human oversight.
The challenge with any acceptable use policy is enforcement. When an AI model is deployed within a classified military environment, the vendor's ability to audit compliance with its own policy is severely limited. Anthropic cannot inspect how the Pentagon uses Claude in the same way it might monitor a commercial customer's API usage. The classification boundary creates an accountability gap that no acceptable use policy can bridge.
This is not unique to Anthropic. Any AI company contracting with military or intelligence agencies faces the same structural problem. But it is particularly consequential for a company whose brand rests on the claim that it takes safety more seriously than its competitors. The credibility of a safety commitment is measured not by its existence, but by its enforceability.
The Dual-Use Problem: Why AI Cannot Be Neatly Categorised
Large language models are inherently dual-use technologies. The same model that helps a doctor summarise patient records can help an intelligence analyst process intercepted communications. The same reasoning capabilities that make an AI assistant useful for legal research make it useful for strategic military planning. There is no technical switch that transforms a civilian AI system into a military one — the technology is agnostic to the purpose it serves.
This dual-use nature fundamentally complicates the ethical debate. Unlike a weapons system, which is designed for a specific purpose, a general-purpose AI model acquires its ethical character from the context of its deployment. The model itself is neither moral nor immoral. The question is always: who is using it, for what purpose, under what oversight, and with what consequences?
For governance professionals, this means that vendor ethics assessments cannot stop at the technology level. They must extend to the deployment context. An AI vendor that prohibits military use entirely might seem more ethical than one that permits it under conditions — but if the prohibition is unenforceable or if the vendor has no mechanism to detect violations, the policy is performative rather than substantive.
"The ethics of an AI system are not embedded in the model. They are embedded in the governance structures around its deployment. A safety-focused company with weak enforcement is less trustworthy than a pragmatic company with strong oversight."
International Perspectives: Military AI Is Not Only an American Issue
The debate about AI in military applications is global. The European Union's AI Act explicitly excludes AI systems used exclusively for military, defence, or national security purposes from its scope, a carve-out that acknowledges the tension between civil AI regulation and national security prerogatives. NATO has adopted its own AI strategy emphasising responsible use, human control, and interoperability standards. China has invested heavily in military AI with far less public debate about ethical constraints.
South Africa, as a member of the African Union and a signatory to various international frameworks on conventional weapons, occupies a distinctive position. The country has historically been active in disarmament diplomacy and has a constitutional framework that emphasises human dignity and proportionality. While South Africa is not a major player in AI development, South African organisations are consumers of AI systems developed by companies that may have military contracts.
This creates a supply chain ethics question that many South African procurement processes are not equipped to answer. When your AI vendor also serves military clients, what are the implications for your own governance posture? Does it affect the risk profile of the technology? Does it create reputational exposure? These are questions that boards and governance committees need to be asking.
The Credibility Question: Can Safety and Military Use Coexist?
The central question Anthropic's Pentagon contract raises is whether an AI company can credibly claim to be safety-focused while simultaneously serving military clients. The answer depends on what we mean by "safety."
If safety means preventing catastrophic AI risk — ensuring that AI systems do not cause unintended harm through malfunction, misalignment, or loss of human control — then there is a reasonable argument that engagement with defence is not only compatible with safety but may be necessary. The military is going to use AI regardless. Better that the AI it uses comes from a company with genuine safety expertise than from one that treats safety as an afterthought.
If safety means refusing to participate in applications where AI could contribute to harm, regardless of the governance structures in place, then Anthropic has compromised its position. From this perspective, the Pentagon contract is a boundary violation — a signal that commercial incentives ultimately override stated values.
The pragmatic middle ground recognises that both positions have merit and that the real question is about conditions, not absolutes. Under what conditions is military AI deployment acceptable? What oversight mechanisms must be in place? What transparency obligations apply? And crucially, who decides when those conditions are met — the vendor, the customer, or an independent third party?
What This Means for South African Organisations
For South African organisations procuring AI systems, Anthropic's Pentagon contract is a reminder that vendor ethics claims require verification, not trust. The fact that an AI company positions itself as safety-focused does not mean its products are safer, its governance is stronger, or its values will hold under commercial pressure. These claims must be tested against evidence.
POPIA does not directly address AI vendor ethics, but it does require organisations to ensure that personal information is processed with appropriate security safeguards and for disclosed purposes. If an AI vendor's military contracts create data sovereignty or security risks — for example, if model improvements derived from military applications are incorporated into commercial products — that could have POPIA implications for South African users of those products.
King IV's governance principles are more directly relevant. Principle 12 requires boards to govern technology and information as business assets, which includes understanding the risks associated with third-party technology providers. An AI vendor's military relationships, the jurisdictions in which it operates, and the oversight frameworks it is subject to are all relevant factors in a King IV-aligned technology governance assessment.
AI Vendor Ethics Evaluation Checklist
- Read the acceptable use policy in full. Do not rely on marketing summaries. Identify what is prohibited, what is permitted with conditions, and what falls in grey areas. Pay attention to government and military carve-outs.
- Ask about enforcement mechanisms. A policy is only as good as the vendor's ability and willingness to enforce it. Ask how compliance is monitored, what happens when violations are detected, and whether classified deployments create oversight gaps.
- Assess dual-use risk. Understand that general-purpose AI models are inherently dual-use. Evaluate whether the vendor's military or intelligence work could affect the commercial products you use — through shared model training, infrastructure dependencies, or policy changes.
- Map jurisdictional exposure. If your AI vendor operates under US national security contracts, understand the legal frameworks that apply. Consider whether FISA, CLOUD Act, or other US national security authorities could affect data processed by those systems.
- Evaluate transparency. Does the vendor publish transparency reports? Does it disclose government requests for data access? Does it allow independent audits of its safety and governance practices? Opacity is a risk factor.
- Document your assessment. Whatever conclusion you reach, document the evaluation process. King IV requires demonstrable governance, which means showing that the board considered relevant risks and made an informed decision. One way to structure such a record is sketched after this checklist.
- Revisit regularly. Vendor ethics positions change. Anthropic's position on military contracts evolved. OpenAI's position evolved. Build periodic vendor ethics reviews into your governance calendar.
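For teams that prefer a structured record over free-form notes, the checklist above can be captured as a simple data object that also tracks when the next reassessment falls due. The sketch below is one illustrative way to do this in Python; the VendorEthicsAssessment structure, its field names, and the risk ratings are hypothetical examples, not a prescribed POPIA or King IV artefact, and should be adapted to your own governance templates.

```python
# Illustrative sketch only: the structure, field names, and ratings below are
# hypothetical and should be adapted to your organisation's own templates.
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class VendorEthicsAssessment:
    vendor: str
    assessed_on: date
    aup_reviewed_in_full: bool        # full policy read, not a marketing summary
    military_carve_outs: str          # what the AUP permits for government or defence use
    enforcement_mechanisms: str       # how the vendor detects and acts on violations
    dual_use_risk: RiskRating         # spill-over from military work into commercial products
    jurisdictional_exposure: str      # e.g. exposure to US national security authorities
    transparency_reporting: bool      # published transparency reports or audit access
    board_decision: str               # the informed decision the board needs to evidence
    review_interval_days: int = 365   # vendor positions change; revisit at least annually

    def next_review_due(self) -> date:
        """Date by which the assessment should be revisited."""
        return self.assessed_on + timedelta(days=self.review_interval_days)

    def review_overdue(self, today: date | None = None) -> bool:
        """True if the periodic reassessment has lapsed."""
        return (today or date.today()) > self.next_review_due()


# Example usage with placeholder values.
if __name__ == "__main__":
    assessment = VendorEthicsAssessment(
        vendor="Example AI Vendor",
        assessed_on=date(2025, 1, 15),
        aup_reviewed_in_full=True,
        military_carve_outs="Permits defence analytics; prohibits weapons targeting.",
        enforcement_mechanisms="API monitoring for commercial tiers; unclear for classified use.",
        dual_use_risk=RiskRating.MEDIUM,
        jurisdictional_exposure="US vendor; CLOUD Act and FISA potentially applicable.",
        transparency_reporting=True,
        board_decision="Approved for low-sensitivity workloads; reassess in 12 months.",
    )
    print(assessment.next_review_due(), assessment.review_overdue())
```

Keeping the record in a structured form like this makes the periodic-review requirement mechanical to check rather than dependent on someone remembering to revisit a document.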
The Broader Question: Where Should the Line Be?
Anthropic's Pentagon contract is a symptom of a larger, unresolved question that the AI industry has been avoiding: where should AI companies draw the line on military use?
The technology industry has historically struggled with this question. Google famously withdrew from Project Maven — a Pentagon AI program — after employee protests in 2018, then quietly resumed defence work through other channels. Microsoft has maintained defence contracts throughout. Amazon's cloud division actively courts military business. The pattern is clear: public positions on military AI are often more flexible than they appear.
For the AI safety community specifically, the Anthropic case forces a reckoning. If the company most associated with AI safety is willing to work with the Pentagon, what does "safety-focused" actually mean in practice? Is it a technical capability (building safer AI systems), a market positioning (appearing more responsible than competitors), or an ethical commitment (refusing certain applications regardless of commercial consequences)?
The honest answer is that it has been all three at different times, and the Pentagon contract is the moment when those definitions diverge. Technical safety and ethical safety are not the same thing. A model that is technically robust, well-tested, and resistant to misuse is technically safe. Whether deploying it for military intelligence analysis is ethically safe is a different question entirely — one that technical expertise alone cannot answer.
Key Takeaways for South African Governance Professionals
- Anthropic's Pentagon contract exposes the tension between AI safety branding and commercial reality — vendor ethics claims must be verified through policy analysis, not marketing materials.
- Large language models are inherently dual-use: the same capabilities that serve civilian purposes serve military ones. Ethics depend on deployment context, not technology characteristics.
- Acceptable use policies are only as strong as their enforcement mechanisms — classified military deployments create accountability gaps that no policy can fully address.
- South African organisations using AI from vendors with military contracts face supply chain ethics questions that King IV governance frameworks require boards to consider.
- POPIA implications arise if military-commercial model sharing affects data processing, sovereignty, or security safeguards for South African personal information.
- The distinction between "technical safety" (building robust AI) and "ethical safety" (refusing harmful applications) is becoming impossible to ignore.
- Vendor ethics positions change over time — build periodic reassessment into your governance calendar rather than treating initial due diligence as permanent.
- The broader AI industry pattern shows that public ethical positions on military use are often more flexible than they appear — scrutinise actions, not statements.
Evaluate AI Vendor Ethics for Your Organisation
Priviso helps South African businesses assess AI vendor risk, build governance frameworks, and ensure alignment with POPIA and King IV requirements.