In early 2026, an AI-generated video depicting former President and Mrs Obama as apes was shared from a verified presidential social media account. The video was synthetic, racially offensive, and seen by millions before any meaningful response was mounted. Whether it was posted deliberately, recklessly, or through a chain of negligence, the outcome was the same: a catastrophic reputational incident that exposed an almost total absence of social media governance at the highest level of government.
For South African organisations, this is not a distant American scandal. It is a governance case study with direct implications for every company that operates social media accounts, employs staff who post on its behalf, or uses AI-generated content in any form. Under King IV, the Protection of Personal Information Act (POPIA), and the Cybercrimes Act, the reputational and legal risks of ungoverned social media are severe and growing.
The Incident: What Happened on the President's Account
The facts of the incident are straightforward. A video, generated using AI tools, was posted from an official account with tens of millions of followers. The content depicted two identifiable public figures in a dehumanising and racially charged manner. There was no disclaimer identifying the content as synthetic. There was no apparent review or approval process before publication. The video remained live for an extended period before removal, by which time it had been viewed, screenshotted, and shared across every major platform.
What makes this incident instructive is not the content itself, but the governance failures that enabled it. No content approval workflow was triggered. No reputational risk review took place. No separation of duties existed between content creation and publication. In the language of information systems governance, this was production without change control.
"Social media has been treated as a low-risk communications channel when it is, in reality, one of the highest-risk assets an organisation can operate. A single post can reach millions in minutes. There is no recall button."
If this can happen at the level of a head of state, with the resources and scrutiny that entails, it can happen at any organisation. In fact, it already has. South African companies have faced reputational crises from poorly considered social media posts, from insensitive advertising content to employees posting discriminatory material from corporate accounts. The difference now is that AI has dramatically raised both the speed and the severity of potential harm.
The Governance Failures That Made It Possible
Analysing the incident through a governance lens reveals a series of control failures that are individually concerning and collectively disastrous.
No Content Approval Workflow
The most basic governance control for any publishing function is an approval workflow. In print media, in advertising, in corporate communications, content goes through drafting, review, and sign-off before publication. Social media, for reasons that defy risk logic, has often been exempt from this discipline. The Trump deepfake incident demonstrates what happens when content moves directly from creation to publication without any intervening review gate.
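To make the review gate concrete, here is a minimal Python sketch of a publishing workflow modelled as an explicit state machine. The states, transition table, and function names are illustrative assumptions, not a reference to any particular platform; the point is simply that no path exists from draft to published that bypasses review.

```python
from enum import Enum, auto

class PostState(Enum):
    DRAFT = auto()
    IN_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()
    PUBLISHED = auto()

# Allowed transitions: there is deliberately no DRAFT -> PUBLISHED shortcut.
ALLOWED_TRANSITIONS = {
    PostState.DRAFT: {PostState.IN_REVIEW},
    PostState.IN_REVIEW: {PostState.APPROVED, PostState.REJECTED},
    PostState.APPROVED: {PostState.PUBLISHED},
    PostState.REJECTED: {PostState.DRAFT},
    PostState.PUBLISHED: set(),  # there is no recall button
}

def transition(current: PostState, target: PostState) -> PostState:
    """Move a post to a new state, refusing any shortcut around review."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise PermissionError(f"Illegal transition: {current.name} -> {target.name}")
    return target

state = PostState.DRAFT
state = transition(state, PostState.IN_REVIEW)  # permitted
state = transition(state, PostState.APPROVED)   # permitted
# transition(PostState.DRAFT, PostState.PUBLISHED) would raise PermissionError
```

The value of encoding the workflow this way is that the control becomes structural rather than procedural: publication without review is not merely prohibited, it is impossible within the system.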
No Reputational Risk Assessment
Content that involves identifiable individuals, that references race, gender, religion, or politics, or that uses AI-generated imagery should trigger a heightened risk assessment. None of this appears to have occurred. In a well-governed organisation, content flagged as high-risk would require senior sign-off before publication.
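One way to operationalise such triggers is a simple routing rule that maps declared risk flags to an approval tier. The flag names and tier labels below are hypothetical; in practice the flags would be set by the content author against a checklist, or by an automated classifier, rather than hard-coded.

```python
# Hypothetical risk flags; illustrative only, not an exhaustive taxonomy.
HIGH_RISK_FLAGS = {
    "identifiable_individuals",
    "race_or_ethnicity",
    "gender",
    "religion",
    "politics",
    "ai_generated",
}

def required_approval_level(content_flags: set[str]) -> str:
    """Route content to a sign-off tier based on its declared risk flags."""
    if content_flags & HIGH_RISK_FLAGS:
        return "senior_signoff"  # heightened review before publication
    return "standard_review"

# An AI-generated image of an identifiable public figure is routed upward.
assert required_approval_level(
    {"ai_generated", "identifiable_individuals"}
) == "senior_signoff"
```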
No Separation of Duties
Separation of duties is a foundational principle of internal control. The person who creates a payment should not be the same person who approves it. The same principle applies to social media: the person who creates content should not be the sole person who decides it goes live. In this case, there appears to have been no separation between content creation and publication, a control gap that would not be tolerated in any other high-risk business function.
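The control is trivially simple to encode, which makes its absence all the more striking. A minimal sketch, with hypothetical user names:

```python
def authorise_publication(author: str, approver: str) -> None:
    """Enforce separation of duties: the creator may never self-approve."""
    if author == approver:
        raise PermissionError(
            "Separation of duties violated: author and approver must differ."
        )

authorise_publication(author="thandi", approver="sipho")    # passes
# authorise_publication(author="thandi", approver="thandi") # raises PermissionError
```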
No Incident Response Readiness
Once the content was live and the backlash began, the response was slow and inadequate. There was no evidence of a pre-prepared social media incident response plan. No clear escalation path. No pre-approved holding statements. The delay between publication and removal, and between removal and official response, compounded the reputational damage significantly.
Governance reality check: If your organisation's social media accounts can be posted to by a single person without review, you have the same control gap that enabled this incident. The only question is when, not whether, it will be exploited.
Why AI Content Makes This Risk Exponentially Worse
Social media has always carried reputational risk. What AI does is multiply that risk by several orders of magnitude.
Speed of creation. AI-generated content, including realistic video, can be produced in minutes. A traditional video production involves scripting, filming, editing, and review, a process that naturally creates checkpoints. AI collapses this timeline to near-zero, removing the friction that historically provided time for reflection and review.
Realism of output. Synthetic media has reached a quality threshold where casual viewers cannot reliably distinguish AI-generated video from real footage. This means the reputational impact of a synthetic post can be identical to the impact of a real one. A deepfake video depicting your CEO making inappropriate statements carries the same brand damage as a real recording, at least for the critical first hours.
Scale of distribution. Social media algorithms amplify content that generates strong emotional reactions. Offensive, controversial, or shocking AI-generated content is precisely the type of material that algorithmic systems will push to the widest possible audience. By the time a human intervenes, the content has already been seen, shared, and archived by third parties.
Difficulty of recall. Unlike a press release that can be retracted or a webpage that can be taken down, social media content is screenshotted, re-shared, and cached within seconds of publication. There is no effective recall mechanism. Governance must therefore be front-loaded: controls must operate before publication, not after.
For organisations that are beginning to use AI tools for content creation, marketing, or communications, this means that AI-generated content must be treated as a distinct risk category with additional review gates, not fewer.
King IV and Social Media Risk in South Africa
South African organisations operate under the King IV Report on Corporate Governance, which establishes principles-based governance requirements that apply broadly to technology, reputation, and stakeholder risk. Social media governance sits squarely within several King IV principles.
Principle 11: Risk Governance. The governing body should govern risk in a way that supports the organisation in setting and achieving its strategic objectives. Social media, as a channel that can instantaneously amplify reputational harm, is a material risk that must be identified, assessed, and managed. An organisation that lacks a social media governance policy has a gap in its risk management framework that King IV would consider a governance failing.
Principle 12: Technology and Information Governance. The governing body should govern technology and information in a way that supports the organisation. AI-generated content, social media platforms, and the tools used to manage corporate accounts are all technology assets that fall within the scope of Principle 12. The absence of controls around these assets is a technology governance deficit.
Principle 16: Stakeholders. The governing body should adopt a stakeholder-inclusive approach. A social media post that offends, harms, or alienates stakeholders, whether customers, employees, regulators, or the public, represents a failure of stakeholder governance. The Trump deepfake incident affected multiple stakeholder groups simultaneously.
In addition to King IV, organisations must consider their obligations under POPIA. If social media content involves the personal information of identifiable individuals, whether employees, customers, or third parties, POPIA's processing conditions apply. Sharing AI-generated imagery of identifiable persons without consent may constitute unlawful processing of personal information. Under the Cybercrimes Act, distributing harmful deepfake content can attract criminal liability with penalties including imprisonment.
Boards and governance committees that have not yet considered social media as a standing risk item are behind the curve. The Trump deepfake incident should serve as the catalyst for that overdue conversation.
Building a Social Media Governance Framework
An effective social media governance framework does not require complex technology or excessive bureaucracy. It requires clarity, accountability, and disciplined execution. The following eight elements form the foundation of a defensible framework for any South African organisation.
Social Media Governance Framework: 8 Essential Controls
- Content Approval Workflow. Establish a defined workflow for all social media content. Routine posts may require single approval. Content involving identifiable individuals, sensitive topics, legal matters, or brand-critical messaging must require dual sign-off. Document the workflow, assign roles, and enforce it consistently.
- Dual Sign-Off for AI-Generated Content. Any content created or substantially assisted by AI tools must trigger an additional review gate. This includes AI-generated images, video, text, and audio. The reviewer must verify that the content is clearly identified as synthetic where appropriate, that it does not contain harmful or misleading material, and that it complies with applicable law including POPIA and the Cybercrimes Act. (A minimal sketch of this gate appears after the list.)
- Social Media Access as a High-Risk Asset. Treat social media account credentials with the same rigour as financial system access. Implement role-based access control. Require multi-factor authentication. Maintain an access register. Conduct regular access reviews and revoke access promptly when individuals change roles or leave the organisation. A compromised social media account is as dangerous as a compromised bank account.
- Regular Training and Awareness. All individuals with access to corporate social media accounts must receive regular training covering acceptable use, content standards, legal obligations under POPIA and the Cybercrimes Act, the risks of AI-generated content, and the escalation process for content concerns. Training must be documented and refreshed at least annually.
- Social Media Incident Response Plan. Develop and rehearse a specific incident response plan for social media crises. The plan must include: criteria for escalation, pre-approved holding statements, roles and responsibilities for containment and communication, notification obligations under POPIA if personal information is involved, and post-incident review procedures. Do not wait for a crisis to discover you have no plan.
- Monitoring and Audit Trail. Implement monitoring to detect unauthorised posts, unusual posting patterns, or content that violates policy. Maintain an audit trail of who posted what, when, and from which account. Monitoring does not replace prevention, but it provides the detection capability that limits damage and supports investigation. (An illustrative audit-log sketch also follows the list.)
- Third-Party Tool Governance. If your organisation uses social media management platforms, scheduling tools, AI content generators, or agencies to manage social media, these third parties must be subject to the same governance standards. Include social media governance requirements in service level agreements. Conduct due diligence on the security and compliance posture of any tool that has access to your corporate accounts.
- Regular Policy Review. Social media platforms, AI capabilities, and the regulatory landscape are changing rapidly. Your social media governance policy must be reviewed at least annually, and updated whenever material changes occur in technology, regulation, or your organisation's risk profile. An outdated policy is almost as dangerous as no policy at all.
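The dual sign-off gate referenced above can be sketched in a few lines of Python. The Post structure, field names, and two-approver threshold are illustrative assumptions, not a prescribed implementation; what matters is that AI-assisted content demands a higher bar and that the author never counts as an approver.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    content: str
    author: str
    ai_generated: bool
    approvals: list[str] = field(default_factory=list)

def can_publish(post: Post) -> bool:
    """AI-assisted content needs two distinct approvers; routine content needs one.
    The author never counts as an approver (separation of duties)."""
    independent = {a for a in post.approvals if a != post.author}
    required = 2 if post.ai_generated else 1
    return len(independent) >= required

# An AI-generated post with only one independent approval stays blocked.
draft = Post("Campaign visual", author="lerato", ai_generated=True, approvals=["sipho"])
assert not can_publish(draft)
draft.approvals.append("naledi")
assert can_publish(draft)
```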
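The audit-trail control is equally straightforward to put into practice. The sketch below appends one JSON record per post to an append-only log and shows one simple detection query against an access register; the file name and record fields are assumptions for illustration, not a standard format.

```python
import json
import time

AUDIT_LOG = "social_media_audit.jsonl"  # hypothetical append-only log, one record per line

def record_post(account: str, user: str, post_id: str, via_tool: str) -> None:
    """Record who posted what, when, from which account, and through which tool."""
    entry = {
        "timestamp": time.time(),
        "account": account,
        "user": user,
        "post_id": post_id,
        "via_tool": via_tool,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

def posts_by_unregistered_users(access_register: set[str]) -> list[dict]:
    """Detection, not prevention: flag audit entries whose user is not on the register."""
    with open(AUDIT_LOG, encoding="utf-8") as log:
        entries = [json.loads(line) for line in log]
    return [e for e in entries if e["user"] not in access_register]
```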
These eight controls are not theoretical. They are practical, implementable measures that any organisation, from an SME to a listed company, can put in place. The cost of implementing them is negligible compared to the cost of a single reputational crisis caused by ungoverned social media activity.
Key Takeaways
- The Trump deepfake incident is a governance failure, not just a political controversy. The same control gaps exist in organisations of all sizes.
- Social media must be treated as a high-risk asset, not a casual communications channel. Access controls, approval workflows, and audit trails are essential.
- AI-generated content multiplies reputational risk exponentially. Synthetic media requires additional review gates and clear labelling.
- King IV Principles 11, 12, and 16 directly apply to social media risk. Boards must include social media governance in their risk oversight.
- POPIA and the Cybercrimes Act create legal liability for organisations that share harmful deepfake content or process personal information through social media without appropriate controls.
- An eight-point governance framework covering approval workflows, AI content review, access management, training, incident response, monitoring, third-party oversight, and regular review provides a defensible baseline.
- Prevention is the only effective strategy. Once harmful content is published on social media, the damage is immediate and largely irreversible.
Need Help Building Governance Frameworks?
Priviso provides privacy and governance consulting aligned to King IV and POPIA. Let us help you build social media governance and AI risk frameworks that protect your organisation.