In July 2023, Meta released Llama 2 — a large language model with capabilities rivalling those of commercial systems — and made it freely available for download. By early 2026, Meta had released Llama 3 and its successors, each more powerful than the last, and the open source AI ecosystem had exploded. Mistral, Stability AI, Falcon, and dozens of smaller players had followed suit, releasing increasingly capable models that anyone with sufficient hardware could download, modify, and deploy.

The open source AI movement has produced remarkable innovation. Researchers in developing countries can access state-of-the-art models without paying commercial licensing fees. Startups can build products on foundation models they control rather than depending on API access that can be revoked at any time. The scientific community can inspect, probe, and improve models in ways that closed systems do not permit. For South Africa specifically, open source AI represents the most realistic path to building local AI capability without dependence on foreign technology companies.

But the same characteristics that make open source AI a powerful engine for innovation also make it a profound governance challenge. When a model is released into the wild, the releasing organisation loses control over how it is used. And some of the uses are deeply concerning — from generating synthetic child exploitation material to creating bioweapon synthesis instructions to producing personalised disinformation at scale. The question facing policymakers, organisations, and governance professionals is not whether open source AI is good or bad. It is how to preserve its benefits while managing its risks.

Hear this discussed on Priviso Live

This article is based on the discussion from Episode 72, where we explore the governance implications of open source AI and what South African organisations need to consider.

Understanding the Open Source AI Landscape

The first thing to understand about "open source AI" is that the term is misleading. Most of what the industry calls open source AI is more accurately described as open weights. When Meta releases Llama, it releases the trained model weights — the mathematical parameters that define what the model has learned — along with usage guidelines and a licence. But it does not typically release the training data, the training code, or the full details of the training process.

This distinction matters for governance. True open source software comes with source code that can be fully audited. Open weights AI models come with an opaque artefact — a massive file of numbers that defines model behaviour but does not explain it. You can run the model, fine-tune it, and inspect its outputs, but you cannot trace its behaviour back to specific training decisions in the way that you can trace software behaviour back to specific lines of code.

The open source AI ecosystem operates at several levels, each with different governance implications:

  • Foundation model releases (Llama, Mistral, Falcon): Large, general-purpose models released by well-resourced organisations. These set the capability baseline for the ecosystem.
  • Fine-tuned variants: Community-modified versions of foundation models, optimised for specific tasks or with safety guardrails removed. These are where many of the most concerning use cases originate.
  • Quantised and optimised versions: Compressed versions of models that can run on consumer hardware, making powerful AI accessible to anyone with a gaming PC.
  • Tooling and infrastructure: Frameworks like Hugging Face, Ollama, and vLLM that make it trivial to download, run, and serve open source models.
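To make the quantisation point concrete: a model's weight file occupies roughly parameter count × bits per weight ÷ 8 bytes. A minimal back-of-envelope sketch (the figures are illustrative arithmetic, not vendor specifications):

```python
def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate size of a model's weight file in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 70-billion-parameter model at full 16-bit precision vs 4-bit quantisation.
fp16 = weight_footprint_gb(70, 16)  # ~140 GB: datacentre territory
q4 = weight_footprint_gb(70, 4)     # ~35 GB: within reach of enthusiast hardware
print(f"fp16: {fp16:.0f} GB, 4-bit: {q4:.0f} GB")
```

At 4-bit precision the weights shrink to a quarter of their 16-bit size, which is what moves a frontier-class model from datacentre racks onto a well-equipped gaming PC.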

The practical effect is that as of early 2026, any technically competent individual can download a model with capabilities comparable to GPT-4 circa 2023, remove its safety guardrails through fine-tuning, and deploy it for any purpose — including purposes the original developer explicitly prohibited in their licence terms. The licence prohibition is largely unenforceable once the model weights are in the wild.

The Dual-Use Dilemma

The core governance challenge of open source AI is the dual-use problem: the same model that enables a South African startup to build an affordable healthcare chatbot can also enable a threat actor to generate convincing phishing campaigns in any of South Africa's 12 official languages. The same model that helps a university researcher analyse climate data can help a malicious actor produce synthetic disinformation targeting South African elections.

This is not a new problem. Dual-use challenges exist in chemistry, biology, nuclear physics, and cybersecurity. But AI has a characteristic that makes its dual-use problem uniquely difficult: the marginal cost of misuse is essentially zero. Synthesising a biological agent requires physical infrastructure, materials, and expertise. Generating a thousand personalised phishing emails requires only a laptop and an open source language model. The barrier between capability and misuse is lower for AI than for almost any other dual-use technology.

The 2026 International AI Safety Report explicitly addresses this issue, noting that openly available AI models have reduced the skill barrier for various malicious activities. The report does not call for banning open source AI — it recognises the innovation benefits — but it warns that current governance mechanisms are not calibrated for the speed and scale at which open source AI capabilities are proliferating.

"The question is not whether open source AI will be misused. It already is. The question is whether our governance frameworks can evolve fast enough to mitigate the worst harms without destroying the genuine benefits."

The EU AI Act Approach to Open Source

The European Union's AI Act, which began enforcement in phases from 2025, represents the most comprehensive attempt to regulate open source AI to date. Its approach is instructive for South African organisations, both because it sets a global benchmark and because any South African company doing business in the EU must comply.

The AI Act creates a partial exemption for open source AI models. Models released under open source licences are generally exempt from many of the Act's requirements, including conformity assessments, technical documentation obligations, and quality management systems. The reasoning is pragmatic: imposing full regulatory compliance on open source releases would effectively kill the open source AI ecosystem, since many releases come from academic institutions, individual researchers, and small organisations that lack the resources for formal compliance.

However, the exemption has critical limitations:

  1. High-risk applications are not exempt. If an open source model is deployed in a high-risk use case (healthcare, employment, law enforcement, critical infrastructure), the deploying organisation must comply with the full requirements of the Act, regardless of whether the underlying model is open source.
  2. General-purpose AI models above a capability threshold (defined by training compute) must comply with transparency and safety obligations even if open source. This is designed to capture the most powerful foundation models like Llama.
  3. The deployer bears primary responsibility. When things go wrong, the EU framework holds the organisation that deployed the AI system responsible, not the organisation that released the open source model. This is a critical distinction for South African organisations using open source AI: you inherit the compliance burden even if you did not build the model.

The EU approach sends a clear message: open source is not a compliance shield. Using an open source model does not transfer your governance obligations to the model's creator. If anything, it increases your obligations, because you are deploying a system whose training data, training process, and safety evaluation you cannot fully verify.

Liability When Open Source AI Causes Harm

The liability question for open source AI is largely unsettled, both globally and in South Africa. Traditional product liability frameworks assume a manufacturer who produces a product, a distributor who supplies it, and a consumer who uses it. Open source AI disrupts this chain in several ways.

The model developer releases weights but does not "supply" a product in the traditional sense. The community may fine-tune or modify the model, creating a derivative that behaves differently from the original. The deploying organisation integrates the model into a system and presents it to end users. If the system causes harm — a discriminatory hiring decision, a medical misdiagnosis, a defamatory output — who is liable?

Under South African law, the most likely framework is the common law of delict, which requires wrongful conduct, fault (intent or negligence), harm, and a causal link between the conduct and the harm. An organisation that deploys an open source AI system without adequate testing, monitoring, or human oversight could be found negligent if the system causes foreseeable harm. The fact that the underlying model was developed by someone else does not extinguish this obligation.

POPIA adds a specific dimension. If an open source AI model processes personal information in a way that violates the Act's conditions for lawful processing — for example, by making automated decisions without the safeguards required by Section 71, or by processing special personal information without a general authorisation under Section 27 — the responsible party (the deploying organisation) is liable, not the model developer.

The practical implication for South African organisations is stark: you cannot outsource your compliance obligations by using open source. If you deploy an open source AI model that processes personal information, you are the responsible party under POPIA, and you bear the full burden of ensuring lawful processing.
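One way to honour the Section 71 safeguard in practice is a hard gate in the decision pipeline: model outputs with legal or similarly significant effect are never finalised without a human in the loop. A minimal sketch — the class and function names are illustrative, not drawn from POPIA or any particular library:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelDecision:
    subject_id: str
    outcome: str            # e.g. "decline_application"
    has_legal_effect: bool  # does this decision substantially affect the person?

def finalise_decision(
    decision: ModelDecision,
    human_review: Optional[Callable[[ModelDecision], str]] = None,
) -> str:
    """Gate model outputs so that decisions with legal or similarly
    significant effect are never finalised fully automatically."""
    if decision.has_legal_effect:
        if human_review is None:
            raise PermissionError(
                "Automated decision with legal effect requires human review"
            )
        return human_review(decision)  # human confirms, amends, or overrides
    return decision.outcome  # low-impact outputs may pass straight through
```

The design choice worth noting is that the gate fails closed: a missing reviewer is an error, not a silent pass-through, so a misconfigured pipeline cannot quietly make consequential automated decisions.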

What South African Organisations Need to Consider

South Africa is well-positioned to benefit from open source AI. The country has a growing technology sector, strong universities with AI research programmes, and a cost-conscious business environment where open source alternatives to expensive commercial AI services are particularly attractive. But realising these benefits safely requires a governance framework that most organisations have not yet built.

Open Source AI Governance Framework

  1. Establish an AI model approval process. Before any open source AI model is deployed in your organisation, it should pass through a formal review that assesses its provenance, known capabilities and limitations, safety evaluations (if available), and licence terms. "A developer downloaded Llama and built something" is not an acceptable deployment pathway.
  2. Conduct use-case-specific risk assessments. The same model may be low-risk for one application and high-risk for another. Assess risk based on the specific deployment context: what decisions will the model inform or make? What personal information will it process? What are the consequences of failure?
  3. Implement guardrails before deployment. Open source models often come with minimal safety guardrails. Your organisation is responsible for implementing appropriate content filtering, output monitoring, and use-case restrictions before exposing the model to users or integrating it into business processes.
  4. Maintain a model inventory. Track every open source AI model in use across your organisation, including version numbers, deployment locations, and the identity of the team responsible for each deployment. Models should not be deployed without an assigned owner.
  5. Monitor for model updates and vulnerabilities. Open source models are updated regularly, and new vulnerabilities are discovered frequently. Treat AI model management with the same discipline you apply to software patch management.
  6. Document your POPIA compliance posture. For any open source AI model processing personal information, document the lawful basis for processing, the security safeguards in place, and your approach to automated decision-making transparency. This documentation must be specific to your deployment, not generic statements from the model developer.
  7. Prepare for EU compliance if applicable. If your organisation operates in or serves customers in the EU, the AI Act applies to your deployments regardless of whether the underlying model is open source. Understand the high-risk classification criteria and ensure your deployments comply.
  8. Build internal AI governance capability. Open source AI democratises access to powerful technology, but it does not democratise the expertise needed to govern it safely. Invest in training your risk, compliance, and technology teams on AI-specific governance requirements.
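Points 1 and 4 above can be sketched as a small registry that refuses deployment for any model that has not passed formal review or lacks a named owner. The names here are hypothetical, intended only to show the shape of the control:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    name: str               # e.g. "llama-3-8b-instruct"
    version: str
    owner: str              # accountable team, per the inventory requirement
    approved: bool = False  # set True only after the formal review passes
    reviewed_on: Optional[date] = None

class ModelInventory:
    """Minimal registry: no model may be deployed unless it has passed
    the approval review and has a named owner."""

    def __init__(self) -> None:
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._records[f"{record.name}:{record.version}"] = record

    def approve(self, name: str, version: str, reviewed_on: date) -> None:
        rec = self._records[f"{name}:{version}"]
        rec.approved, rec.reviewed_on = True, reviewed_on

    def may_deploy(self, name: str, version: str) -> bool:
        rec = self._records.get(f"{name}:{version}")
        return bool(rec and rec.approved and rec.owner)
```

Keying records by name and version matters: approving one version of a model says nothing about the next release, which mirrors the patch-management discipline recommended in point 5.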

The Path Forward: Responsible Open Source AI Adoption

The open source AI debate is sometimes framed as a binary: either you support open release and accept the risks, or you support restriction and accept the innovation costs. This framing is unhelpful. The reality is that open source AI is here, it is not going away, and the task for governance professionals is to develop frameworks that capture its benefits while managing its risks.

Several principles should guide this effort:

  • Deployer responsibility is non-negotiable. Regardless of how the model was developed or released, the organisation that deploys it bears responsibility for its behaviour. This principle must be embedded in organisational culture, not just compliance documents.
  • Transparency about limitations is essential. Open source models come without the safety guarantees that commercial providers (at least nominally) offer. Organisations must be transparent with their users and stakeholders about what they know and do not know about the models they deploy.
  • Contribution to the commons matters. Organisations that benefit from open source AI have an ethical obligation to contribute back — sharing safety evaluations, documenting failure modes, and improving guardrails for the benefit of the broader ecosystem.
  • Governance must be proportionate. Not every use of open source AI requires enterprise-grade governance. A researcher experimenting with a model locally has different governance needs than an organisation deploying the same model in a customer-facing application. Risk-based proportionality is the right approach.

Key Takeaways for Governance and Compliance Professionals

  • Most "open source AI" is actually open weights — the model parameters are available but training data and processes are not, limiting the depth of audit possible.
  • The dual-use risk of open source AI is acute because the marginal cost of misuse is essentially zero — no physical infrastructure needed, just a laptop and a downloaded model.
  • The EU AI Act creates partial exemptions for open source models but holds deployers fully responsible for high-risk applications — open source is not a compliance shield.
  • Under POPIA and South African common law, the deploying organisation bears liability for AI-caused harm, regardless of whether the underlying model was developed by someone else.
  • Fine-tuned variants of open source models with safety guardrails removed represent the most immediate governance risk — and are trivially easy to create.
  • South African organisations should establish formal model approval processes, use-case risk assessments, and deployment guardrails before adopting open source AI.
  • Model inventory management should be treated with the same discipline as software asset management and patch management.
  • Open source AI represents the most realistic path to local AI capability in South Africa, but capturing this benefit safely requires governance investment that most organisations have not yet made.

Build a Governance Framework for AI Adoption

Priviso helps South African organisations develop proportionate, risk-based governance frameworks for AI adoption — whether commercial, open source, or hybrid. Start with a comprehensive assessment.
