A KwaZulu-Natal man was recently sentenced to five years in prison for sharing digitally altered pornographic images of public figures. He did not create the images himself. He simply forwarded them on a messaging platform. That single act of sharing was enough to trigger a criminal conviction under South African law.

This case should serve as a wake-up call for every South African. The law does not distinguish between a real intimate photograph and one fabricated by artificial intelligence. If you create, distribute, or even forward a deepfake intimate image of another person without their consent, you are committing a criminal offence. The penalties are severe, and as the KZN case demonstrates, courts are prepared to hand down custodial sentences.

On Episode 70 of Priviso Live, we unpacked the legal landscape around deepfakes in South Africa, explored the growing crisis in schools, and discussed what organisations must do to protect themselves. This article distils the key points every business owner, privacy officer, and parent needs to understand.

What Happened: The KZN Deepfake Case

The images at the centre of the KwaZulu-Natal case were created with image-editing tools that superimposed the faces of well-known public figures onto pornographic material. The accused then distributed these altered images through social media and messaging groups. The images were not real. They were fabrications. But the harm they caused was entirely real: reputational damage, emotional distress, and a violation of the dignity of every person depicted.

The court handed down a five-year prison sentence. The judge made it clear that the manipulated nature of the images provided no defence. Under South African law, the distribution of intimate images without consent is a criminal act regardless of whether the images are authentic, doctored, or entirely generated by AI. What matters is the intent to humiliate, harass, or degrade the subject, and the act of sharing the content can itself be enough to establish that intent.

Warning: Simply forwarding a deepfake intimate image on WhatsApp, even if you did not create it, can result in a criminal conviction and up to five years in prison under the Cybercrimes Act.

This is not a theoretical risk. The precedent is now established: South African courts have demonstrated that they will treat deepfake offences with the same severity as the distribution of real non-consensual intimate images.

South African Laws That Apply to AI Deepfakes

South Africa does not yet have legislation specifically drafted for AI-generated content. However, the existing legal framework is broad enough to cover deepfakes comprehensively. Two primary statutes apply.

The Cybercrimes Act (Act 19 of 2020)

The Cybercrimes Act is the primary weapon in the state's arsenal against deepfake distribution. Section 16 of the Act criminalises the non-consensual sharing of intimate images. The key provisions are:

  • Section 16(1): It is an offence to unlawfully and intentionally disclose, by means of an electronic communications service, a data message of an intimate image of an identifiable person, knowing that the person depicted did not consent to the disclosure.
  • Section 16(2): The offence extends to situations where the accused ought reasonably to have known that the person depicted did not consent.
  • Penalty: A fine, imprisonment of up to five years, or both. For repeat offences or cases involving minors, sentences can be significantly harsher.

Critically, the Act does not require the image to be a genuine photograph. The phrase "intimate image" is defined broadly enough to include any visual depiction, whether real, altered, or entirely fabricated, that shows a person in intimate circumstances. This means AI-generated deepfakes fall squarely within its scope.

The Films and Publications Amendment Act

The Films and Publications Amendment Act (Act 11 of 2019) provides additional enforcement mechanisms, particularly where content involves minors or constitutes what the Act classifies as "prohibited material." Under this legislation:

  • The creation, possession, or distribution of child sexual abuse material, including AI-generated depictions, carries penalties of up to R300,000 in fines and four years' imprisonment.
  • Content that degrades, dehumanises, or constitutes hate speech is also regulated, with specific provisions that apply to digitally manipulated media.
  • Online distributors and internet service providers have obligations to report and remove prohibited content.

Together, these two Acts create a legal environment in which deepfake creators and distributors face significant criminal liability. The absence of a dedicated "deepfake law" does not mean there is a legal gap. The existing framework is robust, and prosecutors are actively using it.

Why AI-Generated Content Is Not a Legal Grey Area

A common misconception, one we hear repeatedly in consultations, is that because an image is "fake," it cannot be illegal. This reasoning is fundamentally flawed, and the KZN conviction proves it.

South African law is concerned with harm, not authenticity. The question is not whether the image is real. The questions are: Does it depict an identifiable person? Was it shared without their consent? Does it cause harm to their dignity, reputation, or emotional wellbeing?

If the answers are yes, the content is illegal. Full stop.

This principle is consistent with the constitutional right to dignity enshrined in Section 10 of the Constitution and the right to privacy in Section 14. The Protection of Personal Information Act (POPIA) further reinforces these protections by regulating the processing of personal information, which includes biometric data such as facial images used to create deepfakes.

Internationally, we have seen similar legal reasoning. In the United Kingdom, the AI chatbot Grok generated images of politicians, including Prime Minister Keir Starmer, in bikinis and other compromising scenarios. While UK lawmakers are still debating specific AI legislation, the incident triggered widespread public outrage and accelerated calls for regulation. South Africa, with its existing Cybercrimes Act, is actually ahead of many jurisdictions in its ability to prosecute these offences.

"The law does not care whether the image was created by a camera, Photoshop, or an AI model. If it depicts a real person without their consent, and it causes harm, it is a criminal offence in South Africa."

The School Deepfake Problem: Children Are Not Exempt

Perhaps the most alarming trend we discussed on the podcast is the proliferation of deepfakes in South African schools. Learners as young as 13 and 14 are using freely available AI tools to generate explicit images of classmates and teachers, then distributing them on school WhatsApp groups.

Parents and educators often assume that because these are children, the law does not apply. This assumption is dangerously wrong.

Under South African law, criminal capacity begins at age 12. This means that a 12-year-old who creates and distributes a deepfake intimate image can be arrested, charged, and prosecuted. The Child Justice Act (Act 75 of 2008) governs the process, and while it emphasises diversion and rehabilitation over incarceration for young offenders, the criminal record implications are real and lasting.

We have already seen cases where learners have been arrested at schools in Gauteng and the Western Cape for creating and sharing deepfake images of classmates. In several of these cases, the images depicted minors, which triggers the more severe penalties under the Films and Publications Amendment Act.

The impact on victims is devastating. Affected learners have reported severe anxiety, depression, social withdrawal, and in some cases, suicidal ideation. Schools that fail to act swiftly can face civil liability for failing to provide a safe learning environment.

This is not a technology problem that can be solved by confiscating phones. It is a digital literacy crisis that requires a coordinated response from schools, parents, and government. Young people need to understand that the tools they are using casually can produce content that carries criminal consequences. The ease of creating a deepfake does not diminish the severity of the offence.

Fake Emergencies: When Deepfakes Waste Real Resources

Deepfakes are not limited to intimate images. On the podcast, we discussed a disturbing incident involving AI-generated content showing a fire at Orlando West High School in Soweto. The images were convincing enough to trigger a real response: emergency medical services were dispatched, and parents rushed to the school in a panic. None of it was real.

This type of deepfake carries its own set of legal consequences. The Cybercrimes Act's malicious communications provisions criminalise the dissemination of harmful data messages, including content that is inherently false and is aimed at causing harm. Creating and distributing fake emergency content can result in prosecution under these provisions.

Beyond the Cybercrimes Act, there are common law offences that apply. Wasting police or emergency service resources is a criminal offence. Causing a public panic can constitute public violence or crimen injuria depending on the circumstances. And if anyone is physically harmed during the panic response, for example in a stampede or traffic accident caused by parents rushing to the school, the creator and distributors of the fake content could face additional charges.

The Orlando West incident is a stark reminder that deepfakes are not just a privacy issue. They are a public safety issue. Organisations, particularly schools and public institutions, need to have verification protocols in place to confirm the authenticity of alarming content before acting on it or sharing it further.

What Organisations Must Do Now

The legal landscape is clear. The question for South African organisations is no longer whether deepfakes pose a risk, but whether they are prepared to manage that risk. Based on our experience helping organisations navigate POPIA compliance and cybercrime legislation, here is what every organisation should implement.

Deepfake Preparedness Checklist for Organisations

  1. Update your Acceptable Use Policy. Your existing IT and communications policies must explicitly address AI-generated content. State clearly that the creation, distribution, or forwarding of deepfake content depicting any person without their consent is prohibited and will result in disciplinary action, up to and including dismissal and criminal referral.
  2. Implement content moderation procedures. Organisations that operate internal communication platforms, intranets, or social media accounts must have documented procedures for identifying and removing deepfake content. Assign responsibility for content moderation and establish escalation paths.
  3. Conduct employee awareness training. Most employees do not understand that forwarding a deepfake image is a criminal offence. Include deepfake awareness in your annual privacy and cybersecurity training. Use the KZN case as a real-world example of consequences.
  4. Establish an incident response plan for deepfake events. If your organisation or an employee becomes the target of a deepfake attack, you need a documented response plan. This should include evidence preservation, legal consultation, reporting to SAPS and the Information Regulator, and victim support.
  5. Review your POPIA compliance posture. Deepfakes involve the processing of personal information, specifically biometric data (facial images). Ensure your POPIA policies and privacy impact assessments account for AI-related risks. The Priviso platform can help you identify and track these risks systematically.
  6. Implement verification protocols for external content. Before acting on alarming images, videos, or audio that arrive via social media or messaging, verify the content through official channels. Establish a "trust but verify" culture, especially for content that could trigger emergency responses.
  7. Engage with schools and community organisations. If your organisation has corporate social responsibility programmes, consider supporting digital literacy initiatives that educate young people about the legal consequences of deepfake creation and distribution.

Key Takeaways

  • South African law makes no distinction between real and AI-generated intimate images. Both carry criminal penalties.
  • The Cybercrimes Act (Act 19 of 2020) criminalises the non-consensual sharing of intimate images with penalties of up to five years' imprisonment.
  • The Films and Publications Amendment Act imposes fines of up to R300,000 and four years' imprisonment, with harsher penalties for content involving minors.
  • Simply forwarding a deepfake, even one you did not create, is sufficient for a criminal conviction.
  • Criminal capacity starts at age 12 in South Africa. School learners have been arrested for creating and sharing deepfakes.
  • Fake emergency deepfakes, such as the fabricated Orlando West High School fire, waste real resources and carry separate criminal penalties.
  • Organisations must update policies, train employees, and establish incident response plans to address deepfake risks proactively.
  • POPIA applies to deepfakes because they involve processing biometric personal information (facial images) without consent.

The technology behind deepfakes will continue to advance. The barriers to creating convincing fake images, video, and audio are dropping every month. What will not change is the legal principle at the heart of South African law: every person has a constitutional right to dignity and privacy, and violating those rights carries real consequences.

Organisations that fail to prepare for this reality are not just exposing themselves to legal risk. They are failing in their duty to protect their people.

Listen to this discussion on Priviso Live

This article is based on Episode 70 of the Priviso Live podcast, where we discuss the KZN deepfake conviction, the school crisis, and practical steps for organisations.

Protect Your Organisation from Deepfake and Cybercrime Risk

Need help with POPIA compliance and incident response planning? Priviso has been helping South African organisations navigate privacy and cybercrime legislation since 2014. Our PrivacyOps platform helps you assess risks, manage incidents, and maintain compliance with confidence.
