In a ruling that reverberated across the continent, a Kenyan High Court found Meta Platforms liable for content appearing on its platforms — rejecting the company's longstanding argument that it is merely a passive intermediary with no editorial responsibility for what users post. The case, brought by a Kenyan content moderator alleging exploitative working conditions and exposure to harmful content, resulted in a finding that Meta exercises sufficient control over its platforms to bear legal responsibility for the content they carry.
The ruling is significant not because it creates binding precedent across Africa; Kenyan court decisions are not enforceable in other jurisdictions. It is significant because it articulates a legal framework that courts in other African countries, including South Africa, can draw on when confronted with similar questions. And those questions are coming, because the intersection of platform liability and AI-generated content is about to become one of the most contested areas of technology law on the continent.
For South African organisations operating platforms, forums, marketplaces, or any digital space where users (or AI systems) generate content, this ruling is a warning signal that demands immediate attention.
The Kenya Ruling: What Actually Happened
The Kenyan case centred on Meta's relationship with its content moderation contractors in Nairobi. Content moderators employed by a third-party outsourcing firm alleged that Meta exercised de facto control over their working conditions, the content they were exposed to, and the moderation policies they enforced — despite the contractual firewall of the outsourcing arrangement. The court agreed.
But the ruling went further than employment law. In examining Meta's role, the court found that Meta's algorithmic curation, content recommendation systems, and moderation policies constitute active participation in the dissemination of content. Meta does not simply provide a blank canvas for users to post on. It decides what content is amplified, what is suppressed, what is shown to whom, and in what order. That level of editorial control, the court reasoned, is inconsistent with the claim that Meta is a neutral conduit.
The implications are profound. If a platform algorithmically promotes content — whether user-generated or AI-generated — it is making editorial decisions. And entities that make editorial decisions have historically been treated as publishers, not mere conduits, under most legal frameworks worldwide.
Platform Liability vs Publisher Liability: The Core Debate
The distinction between a platform and a publisher has been the central legal fiction of the internet era. Platforms claim they merely host content created by others and therefore should not be liable for it — in the same way that a telephone company is not liable for conversations carried over its network. Publishers, by contrast, exercise editorial judgement over what they disseminate and are legally responsible for the result.
This distinction was coherent in the early days of the internet, when platforms genuinely were passive hosts. A bulletin board in 1998 displayed posts in chronological order with no algorithmic curation. But modern platforms bear almost no resemblance to those early services. Meta, Google, TikTok, and their peers use sophisticated algorithms to decide what content reaches which users, when, and how prominently. They employ AI systems to generate content summaries, suggest responses, auto-complete posts, and create entirely new content through generative AI features.
The Kenya ruling challenges this fiction directly. And it does so at precisely the moment when AI-generated content is making the platform-versus-publisher distinction almost impossible to maintain. When a platform's own AI system generates content — a summary, a recommendation, a chatbot response, a suggested reply — who is the "user" who created it? The platform created the AI. The platform deployed the AI. The platform chose to display the AI's output. At every stage, the platform is the actor.
"When a platform's AI generates content and the platform's algorithm distributes it, there is no third party to shift liability to. The platform is the author, the editor, and the publisher."
POPIA Implications: Processing Personal Information Through AI Content
For South African organisations, the platform liability question intersects directly with the Protection of Personal Information Act (POPIA). POPIA does not distinguish between human-generated and AI-generated personal information. If an AI system generates content that contains, references, or is derived from personal information of South African data subjects, POPIA applies in full.
Consider the practical scenarios. An AI chatbot on your platform responds to a customer query by referencing their previous interactions, purchase history, or personal details. An AI content recommendation system profiles users based on their browsing behaviour, demographic data, and inferred preferences to decide what content to show them. An AI moderation system flags or removes content based on analysis that includes the poster's identity, location, and communication patterns. Each of these involves the processing of personal information as defined in Section 1 of POPIA.
Section 9 of POPIA requires that processing be lawful and conducted in a reasonable manner that does not infringe the privacy of the data subject. Section 11 requires a lawful basis for processing. Section 19 requires that personal information be secured by appropriate, reasonable technical and organisational measures. If your platform's AI generates content that processes personal information without a valid legal basis, without adequate security, or in a manner that infringes privacy, you are in breach. And after the Kenya ruling, the argument that "our AI did it, not us" is unlikely to hold.
POPIA exposure: If AI systems on your platform generate, curate, or moderate content involving South African personal information, you are the responsible party under POPIA. The Information Regulator will not accept "the algorithm decided" as a defence. You must demonstrate lawful processing, purpose limitation, and adequate security measures for all AI-driven content operations.
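To make these duties concrete, the sketch below shows one way a platform might maintain a documented lawful basis for every AI content operation, failing closed when none exists. It is a minimal illustration in Python; the names (ProcessingRecord, assert_lawful, PROCESSING_REGISTER) and the register design are hypothetical choices for this article, not a mechanism prescribed by POPIA.

```python
from dataclasses import dataclass
from enum import Enum


class LawfulBasis(Enum):
    """Lawful bases for processing, paraphrasing POPIA section 11."""
    CONSENT = "consent"
    CONTRACT = "contract"
    LEGAL_OBLIGATION = "legal_obligation"
    LEGITIMATE_INTEREST = "legitimate_interest"


@dataclass
class ProcessingRecord:
    """One documented AI content operation and its POPIA justification."""
    operation: str                   # e.g. "chatbot_order_history_lookup"
    purpose: str                     # purpose limitation: why the data is used
    lawful_basis: LawfulBasis
    personal_info_fields: list[str]  # which fields the AI may touch


# Hypothetical register: every AI feature that touches personal
# information gets an entry here before it goes live.
PROCESSING_REGISTER: dict[str, ProcessingRecord] = {}


def register_operation(record: ProcessingRecord) -> None:
    PROCESSING_REGISTER[record.operation] = record


def assert_lawful(operation: str) -> ProcessingRecord:
    """Fail closed: block AI features that lack a documented lawful basis."""
    record = PROCESSING_REGISTER.get(operation)
    if record is None:
        raise PermissionError(
            f"No documented lawful basis for '{operation}'; "
            "document one (POPIA sections 9 and 11) before processing."
        )
    return record
```

A chatbot feature would then call assert_lawful("chatbot_order_history_lookup") before touching a customer's purchase history, so undocumented processing fails loudly instead of silently.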
Section 75 of the ECT Act: South Africa's Existing Framework
South Africa is not starting from scratch on platform liability. Chapter XI of the Electronic Communications and Transactions Act (ECT Act), enacted in 2002, provides limited safe harbours for service providers. Under Section 75, a service provider that hosts data at the request of a user is not liable for that content provided it has no actual knowledge that the content or the related activity is unlawful, is not aware of facts from which that unlawfulness is apparent, and acts expeditiously to remove or disable access upon receiving a take-down notification. The mere-conduit safe harbour in Section 73 additionally requires that the provider does not initiate the transmission or select its receiver. Both protections are conditional: Section 72 restricts them to members of an accredited industry representative body.
The critical question is whether Section 75 was designed for, or can reasonably be applied to, platforms that use AI to generate, curate, and recommend content. The safe harbour was crafted for passive hosting: a service provider that stores data uploaded by users. It was not designed for a platform that actively generates content through AI systems, algorithmically amplifies certain content over others, and uses AI to make moderation decisions that shape what users see.
The Kenya ruling's reasoning, that algorithmic curation constitutes active participation, would, if applied by a South African court, likely narrow the Section 75 safe harbour considerably. A platform that uses AI to generate content or to decide which user-generated content is promoted is arguably no longer merely hosting data at a user's request. It is actively participating in the creation and dissemination of that content.
South African courts have not yet squarely addressed this question. The existing case law on the Chapter XI safe harbours is thin, and the provisions have never been tested against the realities of AI-driven content platforms. When they are tested, and they will be, the Kenya ruling will be persuasive authority.
The EU Digital Services Act: A Comparative Lens
The European Union's Digital Services Act (DSA), which took full effect in February 2024, offers a useful comparison. The DSA maintains a conditional liability exemption for hosting services but imposes significant due diligence obligations that scale with the size and risk profile of the platform. Very large online platforms (those with more than 45 million monthly active users in the EU) face the most stringent requirements, including mandatory risk assessments, independent audits, transparency in algorithmic recommendation systems, and specific obligations around AI-generated content.
Critically, the DSA does not treat AI-generated content as equivalent to user-generated content for liability purposes. Very large platforms must ensure that AI-generated or manipulated media is distinguishable through prominent markings, all platforms must explain how their recommendation algorithms work, and very large platforms must offer at least one feed option that is not based on profiling. The DSA recognises that algorithmic curation is an editorial function and imposes governance obligations accordingly, without fully collapsing the distinction between platform and publisher.
South Africa's regulatory trajectory is likely to follow a similar path, though the timeline is uncertain. The Information Regulator has signalled interest in AI governance, and the Department of Communications and Digital Technologies (DCDT) has initiated consultations on platform regulation. South African organisations that align their governance frameworks with DSA principles now will be better positioned when local regulation arrives.
How AI-Generated Content Complicates Platform Liability
The traditional platform liability framework assumes a clear separation between the platform and the content creator. The user creates the content; the platform hosts it. Liability follows authorship. AI-generated content obliterates this separation.
When Meta's AI summarises a news article and presents the summary to users, Meta authored the summary. When an AI chatbot on a South African e-commerce platform provides product recommendations that turn out to be misleading, the platform's AI authored the misleading content. When an AI moderation system incorrectly flags lawful speech as harmful and removes it, the platform's AI made the editorial decision.
This creates several distinct liability vectors:
- Defamation: If an AI system generates content that is defamatory, the platform that deployed the AI is the publisher of that content. South African defamation law requires publication — and an AI system generating and displaying content to users constitutes publication.
- Intellectual property infringement: AI systems trained on copyrighted material may generate content that reproduces protected works. The platform deploying the AI could face claims under the Copyright Act.
- POPIA breaches: AI-generated content that discloses, infers, or processes personal information without lawful basis exposes the platform to enforcement action by the Information Regulator.
- Consumer protection: AI-generated product descriptions, recommendations, or reviews that mislead consumers could trigger liability under the Consumer Protection Act.
- Hate speech and incitement: AI systems that generate or amplify harmful content may expose platforms to liability under the Prevention and Combating of Hate Crimes and Hate Speech Act (signed into law in 2024, commencement pending) and under existing remedies such as the Equality Act and the common law offence of crimen injuria.
Each of these liability vectors is amplified by scale. An AI system does not make one defamatory statement to one person. It can generate thousands of outputs per second, each potentially reaching a different audience, so exposure scales with the volume of outputs rather than with a single act of publication.
What South African Platform Operators Should Do Now
Platform Liability Action Checklist
- Audit all AI-generated content on your platform. Map every point where AI systems generate, curate, recommend, summarise, or moderate content. Include chatbots, recommendation engines, auto-generated descriptions, AI moderation tools, and any generative AI features. You cannot assess liability for what you have not inventoried; a minimal inventory sketch follows this checklist.
- Assess your Section 75 safe harbour position. Determine whether your platform's use of AI takes you outside the ECT Act safe harbour. If your AI systems generate content or make editorial decisions about what content to promote, the safe harbour may not apply. Get a legal opinion specific to your architecture.
- Conduct a POPIA impact assessment for AI content operations. If AI systems process personal information of South African data subjects — whether in content generation, curation, profiling, or moderation — ensure you have a lawful basis, appropriate security measures, and compliant processing notices. Document everything.
- Implement AI content labelling. Clearly identify content generated or substantially modified by AI systems. This is a DSA requirement for EU-facing platforms and will likely become a local requirement. Implementing it now demonstrates good governance and reduces deception risk; see the labelling sketch after this checklist.
- Establish a rapid content takedown process. The Section 75 safe harbour requires expeditious removal of unlawful content upon take-down notification. Define clear escalation paths, response time targets, and decision-making authority for content removal requests, including content generated by your own AI systems. A deadline-tracking sketch follows this checklist.
- Review your terms of service and user agreements. Ensure your terms address AI-generated content explicitly. Clarify what AI systems operate on your platform, how they interact with user content, and what your liability position is. Vague or outdated terms will not protect you.
- Implement human oversight for high-risk AI content. For AI systems operating in areas with elevated liability risk, such as health information, financial advice, legal guidance, and content involving minors, ensure meaningful human review before AI-generated content is published or disseminated. A review-gate sketch follows this checklist.
- Monitor the Kenya ruling's influence. Track how courts in South Africa, Nigeria, and other African jurisdictions reference the Kenya decision. It will shape the direction of platform liability law across the continent, and early awareness enables proactive adaptation.
- Engage with regulatory consultations. The Information Regulator and DCDT are actively considering platform governance. Participating in these processes gives you visibility into forthcoming requirements and an opportunity to shape proportionate regulation.
- Document your governance framework. When a dispute arises — and it will — your ability to demonstrate that you had a reasonable, documented governance framework in place will be critical. Courts and regulators look more favourably on organisations that can show good-faith compliance efforts, even if the legal landscape was uncertain.
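On the first checklist item, a minimal inventory sketch, assuming a simple in-memory structure. The AIContentSurface fields and the inventory_gaps heuristic are illustrative choices, not a prescribed methodology:

```python
from dataclasses import dataclass


@dataclass
class AIContentSurface:
    """One point where an AI system touches content on the platform."""
    name: str                    # e.g. "product_description_generator"
    role: str                    # "generates", "curates", "moderates", ...
    touches_personal_info: bool  # does it process personal information?
    human_review: bool           # is a human in the loop before publication?


def inventory_gaps(surfaces: list[AIContentSurface]) -> list[str]:
    """Flag surfaces combining the two highest-risk traits: personal
    information in scope and no human review before publication."""
    return [
        s.name
        for s in surfaces
        if s.touches_personal_info and not s.human_review
    ]
```

Even a table this simple forces the question that matters legally: for each surface, who is the author of the output, and who reviewed it before it reached a user?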
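On content labelling, a sketch of a provenance record attached to each item at serving time. The ContentProvenance fields and disclosure wording are assumptions for illustration; the actual disclosure text should follow whatever labelling rules apply to your platform:

```python
import datetime
from dataclasses import dataclass


@dataclass(frozen=True)
class ContentProvenance:
    """Provenance label attached to every content item the platform serves."""
    ai_generated: bool
    model_id: str | None                    # which system produced it, if AI
    generated_at: datetime.datetime | None  # when it was generated
    disclosure_text: str                    # what the user actually sees


def label_for(is_ai: bool, model_id: str | None = None) -> ContentProvenance:
    """Build the label at serving time so no item ships unlabelled."""
    if is_ai:
        return ContentProvenance(
            ai_generated=True,
            model_id=model_id,
            generated_at=datetime.datetime.now(datetime.timezone.utc),
            disclosure_text="This content was generated by an AI system.",
        )
    return ContentProvenance(False, None, None, "")
```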
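On the takedown process, a sketch of deadline tracking against per-category response targets. The SLA_HOURS values are placeholders; what counts as "expeditious" under Section 75 is a legal judgement, not a technical one:

```python
import datetime
from dataclasses import dataclass

# Placeholder response-time targets in hours, per complaint category.
SLA_HOURS = {"defamation": 24, "personal_info": 12, "hate_speech": 6, "other": 48}


@dataclass
class TakedownRequest:
    content_id: str
    category: str                  # keys of SLA_HOURS
    received_at: datetime.datetime
    ai_generated: bool             # your own AI's output is in scope too

    def deadline(self) -> datetime.datetime:
        hours = SLA_HOURS.get(self.category, SLA_HOURS["other"])
        return self.received_at + datetime.timedelta(hours=hours)

    def overdue(self, now: datetime.datetime) -> bool:
        """True if the removal decision has missed its target."""
        return now > self.deadline()
```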
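And on human oversight, a sketch of a publication gate that refuses to publish high-risk AI output without a named human reviewer. The topic taxonomy is hypothetical; adapt it to your own risk assessment:

```python
# Hypothetical high-risk topic taxonomy; adapt to your own risk assessment.
HIGH_RISK_TOPICS = {"health", "financial_advice", "legal_guidance", "minors"}


def requires_human_review(topic: str, ai_generated: bool) -> bool:
    """High-risk AI output never publishes without human sign-off."""
    return ai_generated and topic in HIGH_RISK_TOPICS


def publish(content: str, topic: str, ai_generated: bool,
            reviewed_by: str | None = None) -> str:
    """Fail closed: refuse to publish unreviewed high-risk AI content."""
    if requires_human_review(topic, ai_generated) and reviewed_by is None:
        raise RuntimeError(
            f"AI-generated content on '{topic}' requires human review "
            "before publication."
        )
    return f"published ({'AI' if ai_generated else 'human'}): {content[:60]}"
```

The design choice worth copying is the fail-closed default: absence of review blocks publication, rather than publication proceeding while review is pending.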
Key Takeaways for South African Organisations
- The Kenya High Court ruling finding Meta liable for platform content sets persuasive precedent across Africa — South African courts can and likely will reference it when platform liability questions arise.
- The traditional distinction between platforms (passive hosts) and publishers (editorial decision-makers) is collapsing as AI systems generate, curate, and recommend content on behalf of platforms.
- Section 75 of the ECT Act provides a safe harbour for passive hosting, but platforms using AI for content generation or algorithmic curation may fall outside its protection.
- POPIA applies to all processing of personal information by AI systems, regardless of whether content is user-generated or AI-generated — the platform is the responsible party.
- AI-generated content creates multiple liability vectors: defamation, IP infringement, POPIA breaches, consumer protection violations, and potential hate speech liability.
- The EU Digital Services Act provides a useful governance model — South African organisations aligning with DSA principles will be better positioned for future local regulation.
- AI content operates at scale: a single AI system can generate thousands of potentially problematic outputs per second, so liability exposure scales with output volume rather than with a single act of publication.
- Organisations should audit AI content operations, assess their safe harbour position, implement content labelling, and establish documented governance frameworks now, not when litigation or regulation forces their hand.
Understand Your Platform's Liability Exposure
Priviso helps South African organisations assess their liability exposure for AI-generated and user-generated content.