In early 2026, North-West University (NWU) became the first South African university to publish a comprehensive, formal artificial intelligence policy. While other institutions have issued guidelines, position papers, or departmental memos on AI use, NWU's document is the first to take the form of an institutional policy — approved by governance structures, binding on all staff and students, and designed to be enforceable rather than merely advisory.

This matters far beyond the university sector. South Africa has no national AI legislation. The government's AI policy framework, published by the Department of Communications and Digital Technologies (DCDT) in 2024, is a set of principles, not a binding regulatory instrument. In the absence of legislation, institutional AI policies are the primary governance mechanism available to South African organisations. NWU has produced the first serious attempt at such a policy, and it contains lessons that every organisation — corporate, government, or non-profit — should study.

Here is what the NWU policy covers, what it gets right, where it falls short, and what other South African institutions should take from it.

Hear this discussed on Priviso Live

This article is based on the discussion from Episode 71, where we analyse the NWU AI policy and its significance as a governance template for South African organisations.

What the NWU AI Policy Covers

The NWU policy is structured around several core areas that together form a comprehensive AI governance framework. It addresses the use of AI in teaching and learning, research, administration, and institutional operations. It applies to all staff, students, and third parties using NWU systems or acting on the university's behalf.

Acceptable Use and Transparency

The policy establishes clear rules about when and how AI tools may be used. For academic work, students must disclose the use of AI in any submitted work, including which tools were used, how they were used, and what outputs were generated. This is not a ban on AI — it is a transparency requirement. The policy recognises that AI tools are part of the modern academic and professional environment, and that prohibition is neither realistic nor desirable. Instead, it demands honesty about their use.

For staff, the policy requires that AI tools used in teaching, assessment, and administration be approved through a governance process before deployment. This mitigates the shadow AI problem that plagues most organisations: staff independently adopting AI tools without institutional awareness, oversight, or risk assessment.

Academic Integrity

The policy draws a clear distinction between using AI as a tool and presenting AI output as your own work. Using AI to research a topic, check grammar, or explore ideas is permitted (with disclosure). Submitting AI-generated text as original work is academic misconduct, treated the same as plagiarism. The policy explicitly addresses the difficulty of detecting AI-generated content, acknowledging that detection tools are unreliable and that the university's approach relies on transparency and honour codes rather than technological policing.

This is a pragmatic and honest position. Organisations attempting to enforce AI bans through detection technology are fighting a losing battle. NWU's approach — permitting use while requiring disclosure — is sustainable in a way that prohibition is not.

Data Protection and Privacy

The policy explicitly addresses POPIA compliance in the context of AI use. It prohibits the input of personal information, student data, or confidential research data into public AI tools without appropriate safeguards. It requires that any AI tool processing personal information be assessed against POPIA's conditions for lawful processing, and that data protection impact assessments be conducted for high-risk AI use cases.

This alignment between AI governance and data protection law is essential. Too many organisations treat AI policy and data protection as separate domains. NWU's policy recognises that AI governance is data governance — every AI system processes data, and if that data includes personal information, POPIA applies regardless of whether the tool is called "AI" or anything else.
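As a concrete illustration of the "AI governance is data governance" point, an organisation might add a pre-flight check that flags likely personal information before text is sent to a public AI tool. The sketch below is purely illustrative and is not part of the NWU policy: the pattern names and regular expressions are assumptions, and real POPIA compliance requires far more than pattern matching.

```python
import re

# Illustrative sketch only: a naive pre-flight check that flags likely
# personal information before text is submitted to a public AI tool.
# Patterns and names here are assumptions, not drawn from the NWU policy.

PII_PATTERNS = {
    # South African ID numbers are 13 digits (YYMMDDSSSSCAZ).
    "sa_id_number": re.compile(r"\b\d{13}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Naive 10-digit SA phone format starting with 0.
    "phone_number": re.compile(r"\b0\d{9}\b"),
}

def flag_personal_information(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """True only if no known PII pattern matches; err on the side of blocking."""
    return not flag_personal_information(text)
```

A check like this would sit in front of any integration with an external AI service; anything flagged is held back for human review rather than transmitted.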

Research Ethics

For research, the policy requires that any use of AI in research methodology, data analysis, or publication be disclosed in the research ethics application and in published outputs. It addresses the emerging question of whether AI can be listed as a co-author (it cannot, under NWU's policy, consistent with guidance from major academic publishers) and requires researchers to verify AI-generated outputs before relying on them in published work.

Governance Structure

Perhaps the most important element of the NWU policy is its governance structure. The policy establishes a designated AI governance committee with representation from academic faculties, IT, legal, research, and student affairs. This committee is responsible for reviewing and approving AI tools, assessing risks, monitoring compliance, and updating the policy as the technology evolves.

This is the element most organisations miss. A policy document without an accountable governance structure is a dead letter. NWU has created the institutional mechanism to keep the policy alive and responsive to change.

Why This Matters Beyond Universities

The NWU policy matters because it demonstrates that a South African institution, operating under South African law and with South African resource constraints, can produce a workable AI governance framework. It is not a copy of an EU or US template. It is built for the local regulatory context, referencing POPIA, the National Qualifications Framework, and South African academic standards.

For corporate South Africa, this is significant. Many organisations have delayed developing AI policies because they believe they need to wait for national legislation, or because they think AI governance requires regulatory expertise they do not have. NWU's policy shows that neither excuse holds. You can build a meaningful AI governance framework today, using existing South African law and governance principles.

"NWU has demonstrated that South African institutions do not need to wait for AI legislation to govern AI responsibly. The building blocks — POPIA, King IV, existing ethics frameworks — are already here. What was missing was the will to assemble them."

Comparison with International Approaches

Internationally, universities have taken varied approaches to AI governance. The Russell Group in the UK issued principles-based guidance focused on academic integrity, but left implementation to individual institutions. UNESCO published recommendations on the ethics of AI in education that emphasise equity and inclusion but lack enforcement mechanisms. Several US universities have issued department-level guidelines that vary wildly in scope and rigour.

NWU's approach is more comprehensive than most international peers in three respects. First, it is a policy, not guidance — it carries institutional authority and is enforceable through existing disciplinary processes. Second, it covers the full institutional scope — teaching, research, administration, and operations — rather than addressing only one domain. Third, it integrates data protection into the AI governance framework rather than treating them as separate concerns.

Where international frameworks sometimes outpace NWU's policy is in the depth of risk classification. The EU AI Act's tiered risk framework — unacceptable, high, limited, and minimal risk — provides a more granular basis for determining what governance controls apply to which AI use cases. NWU's policy applies broadly, which is appropriate for a first iteration, but will likely need to evolve toward a risk-tiered approach as AI use proliferates across the institution.
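To make the risk-tiering point concrete, a tiered policy can be encoded as a simple mapping from use cases to tiers, and from tiers to required controls. The sketch below borrows the EU AI Act's tier names; the use cases and controls are hypothetical assumptions for illustration, not content from the NWU policy.

```python
from enum import Enum

# A minimal sketch of a risk-tiered AI governance mapping, in the spirit of
# the EU AI Act's tiers. Use cases and controls are illustrative assumptions.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping from institutional use cases to tiers.
USE_CASE_TIERS = {
    "automated_student_grading": RiskTier.HIGH,
    "hiring_recommendations": RiskTier.HIGH,
    "chatbot_campus_faq": RiskTier.LIMITED,
    "grammar_checking": RiskTier.MINIMAL,
}

# Governance controls that attach to each tier, strictest first.
TIER_CONTROLS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["committee_approval", "dpia", "human_review", "bias_audit"],
    RiskTier.LIMITED: ["disclosure_to_users", "register_tool"],
    RiskTier.MINIMAL: ["register_tool"],
}

def required_controls(use_case: str) -> list[str]:
    """Look up controls for a use case; unknown use cases default to HIGH."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return TIER_CONTROLS[tier]
```

Defaulting unknown use cases to the high-risk tier reflects the proportionality argument above: low-risk uses face light controls, while anything novel or consequential is routed to the governance committee.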

Alignment with King IV

South Africa's King IV Report on Corporate Governance establishes principles that apply to all organisations, including universities. Principle 12 requires governing bodies to govern technology and information as business assets. Principle 11 requires risk governance that is integrated into strategic decision-making. Both principles clearly extend to AI governance.

NWU's policy aligns well with King IV. The establishment of a governance committee, the requirement for risk assessment before AI deployment, and the integration of AI governance into existing institutional structures all reflect King IV's emphasis on integrated governance. For corporate boards looking for a model, the NWU approach demonstrates how AI governance can be embedded within existing governance frameworks rather than created as a standalone function.

King IV's emphasis on stakeholder inclusivity is also reflected in NWU's approach. The policy was developed with input from academic staff, students, IT, legal, and institutional leadership. This broad consultation process is a governance strength: AI policies developed by IT departments alone, or by legal teams alone, consistently miss important perspectives and fail to secure organisational buy-in.

What Other SA Organisations Should Adopt

AI Policy Building Blocks from the NWU Model

  1. Establish a transparency requirement. Require all staff to disclose AI use in their work, particularly when AI outputs inform decisions, reports, or communications. Transparency is the foundation of trustworthy AI governance — and it is far more enforceable than prohibition.
  2. Create an AI tool approval process. Before any AI tool is deployed in your organisation, it should be assessed for data protection compliance (POPIA), security risks, accuracy and reliability, and alignment with organisational values. Shadow AI is the biggest governance risk most organisations face.
  3. Integrate AI governance with data protection. Do not treat AI policy and POPIA compliance as separate workstreams. Every AI system processes data. If that data includes personal information — and it almost always does — your AI governance and data protection frameworks must be aligned.
  4. Appoint a governance structure. A policy without an owner is a policy that will not be maintained or enforced. Designate a committee or individual responsible for AI governance, with clear authority and accountability. This should report to the board or equivalent governing body.
  5. Address the verification obligation. NWU requires researchers to verify AI outputs. Every organisation should establish the same principle: AI outputs used in decision-making, reporting, or communications must be verified by a qualified human before being relied upon.
  6. Build in a review cadence. AI capabilities are changing rapidly. A policy written today may be inadequate in six months. Build a mandatory review cycle — at minimum annually, preferably semi-annually — into the policy itself.
  7. Consult broadly. Involve legal, IT, operations, HR, and frontline staff in developing your AI policy. The perspectives of people who actually use AI daily are essential to creating a policy that is both practical and comprehensive.
  8. Reference existing frameworks. You do not need to start from scratch. POPIA, King IV, the South African AI Policy Framework, and now the NWU policy provide building blocks that can be adapted to any organisational context.

Where the NWU Policy Falls Short

No first-generation policy is perfect, and the NWU policy has identifiable gaps that future iterations — and organisations using it as a template — should address.

The policy does not yet include a risk classification framework. Not all AI use cases carry the same risk. Using AI to draft a marketing email is fundamentally different from using AI to assess student performance or make hiring recommendations. A tiered approach that maps governance controls to risk levels would make the policy more proportionate and easier to apply.

The policy also lacks specific guidance on procurement of AI tools. When the university contracts with an external AI provider, what due diligence is required? What contractual clauses should be included to address data protection, accuracy warranties, and liability for AI failures? Procurement is where many AI governance failures originate.

Finally, the policy does not address bias and fairness in any depth. AI systems can and do produce discriminatory outcomes, particularly when trained on data that reflects historical biases. For a South African institution, where the legacy of systemic discrimination is a lived reality, AI bias is not an abstract concern. Future iterations of the policy should address how AI systems will be assessed for bias and what remediation processes will apply when biased outcomes are identified.

Key Takeaways for South African Organisations

  • NWU has published South Africa's first comprehensive institutional AI policy, covering teaching, research, administration, and operations with enforceable governance structures.
  • The policy requires AI use disclosure rather than prohibition — a pragmatic approach that is sustainable and enforceable where outright bans are not.
  • Integration of POPIA compliance into AI governance is a model other organisations should follow — AI governance is inherently data governance.
  • The establishment of a dedicated AI governance committee with cross-functional representation is the structural element most organisations lack and most need.
  • King IV Principles 11 and 12 already require South African organisations to govern AI as part of technology and risk governance — the NWU policy demonstrates how.
  • International approaches vary, but NWU's policy is more comprehensive than most in its scope, enforceability, and integration with local law.
  • Gaps in risk classification, procurement guidance, and bias assessment should be addressed in future iterations and by organisations adapting the model.
  • South African organisations do not need to wait for national AI legislation — existing law, governance frameworks, and now the NWU template provide sufficient basis for action.

Build Your Organisation's AI Governance Framework

Priviso helps South African organisations develop AI governance policies aligned to POPIA, King IV, and international best practice. Start with a comprehensive governance assessment.
