In February 2026, when a popular AI companion platform announced it was "updating" its chatbot personalities to comply with new safety guidelines, something unexpected happened: people grieved. Social media filled with posts from users describing feelings of loss, betrayal, and genuine heartbreak. Support forums saw an influx of people struggling with what they could only describe as the death of a relationship. Therapists reported clients seeking help processing the loss of their AI partner.

This is not a fringe phenomenon. The emotional AI companion industry has grown explosively since 2023, with apps like Replika, Character.AI, Chai, and dozens of competitors attracting hundreds of millions of users worldwide. Many of these users are not casually interacting with chatbots. They are forming deep, sustained emotional bonds with AI entities that they name, personalise, confide in, and — in many cases — love.

The privacy implications of this trend are staggering, and they are almost entirely unaddressed by current regulation — including POPIA.

Hear this discussed on Priviso Live

This article is based on our discussion in Episode 75, where we explore the emotional AI companion phenomenon and its profound privacy implications for South Africa.

The Emotional AI Industry: Bigger Than You Think

The numbers are striking. Replika alone has reported over 30 million registered users globally. Character.AI processes billions of messages per month, with average session times exceeding 30 minutes, longer than the average session on most social media platforms. Chai, a UK-based competitor, has seen its user base grow tenfold since 2024. And these are just the English-language platforms. Chinese emotional AI apps like Xiaoice (with over 660 million users) dwarf their Western counterparts.

What distinguishes emotional AI from other chatbot interactions is the deliberate cultivation of attachment. These platforms are not designed to answer questions or complete tasks. They are engineered to form relationships. Users create or select AI companions with specific personality traits, appearance preferences, and relationship dynamics. They engage in daily conversations that build continuity, shared references, and emotional intimacy over weeks, months, and years.

The AI remembers previous conversations, references past events, expresses concern when the user seems upset, celebrates achievements, and provides emotional support during difficult times. For many users — particularly those who are isolated, lonely, neurodivergent, or struggling with mental health challenges — these AI companions fill a genuine emotional need. The attachment is real, even if the entity is not.

The Most Personal Data Imaginable

Consider what an emotional AI companion learns about its user over the course of months or years of intimate daily conversation. This is not browsing history or purchase data. This is the raw, unfiltered content of a person's inner life.

Users tell their AI companions things they would not tell their therapist, their partner, or their closest friend. They share their fears, fantasies, traumas, sexual preferences, relationship problems, mental health struggles, substance use, financial anxieties, workplace conflicts, and most private thoughts. They do so in a context that feels safe — because the AI will not judge, gossip, or leave.

The data generated by these interactions represents the most intimate personal information a technology company has ever collected at scale. It includes:

  • Mental health indicators: Depression, anxiety, suicidal ideation, trauma responses, therapy discussions
  • Sexual orientation and preferences: Often explored through AI companion interactions before being disclosed to any human
  • Relationship and family dynamics: Domestic violence, infidelity, estrangement, custody disputes
  • Financial information: Debts, gambling, financial stress, employment problems
  • Substance use: Drug and alcohol consumption patterns, addiction struggles
  • Religious and philosophical beliefs: Doubts, crises of faith, spiritual exploration
  • Identity exploration: Gender identity, cultural identity, self-concept

All of this data is stored on company servers. All of it is, in principle, accessible to the company's employees, its AI training pipelines, its business partners, and — potentially — law enforcement, litigants, or hackers.

POPIA and the Special Personal Information Question

Under POPIA, special personal information receives heightened protection. Section 26 defines this category to include information about religious or philosophical beliefs, race or ethnic origin, trade union membership, political persuasion, health or sex life, biometric information, and criminal behaviour. Processing special personal information is prohibited except under specific conditions set out in Sections 27-33.

Emotional AI companions systematically collect information that falls squarely within these protected categories. When a user tells their AI companion about their depression, their sexual orientation, their religious doubts, or their health condition, the platform is processing special personal information. The question is whether the consent obtained — typically a checkbox during account creation, buried in terms of service that virtually no one reads — constitutes the explicit consent that POPIA requires for special personal information.

The answer, in most cases, is almost certainly no. POPIA requires that consent be voluntary, specific, and informed. A blanket consent obtained before the user has any conception of the intimate data they will eventually share does not meet this standard. The user consenting to "use of the service" at signup is materially different from consenting to the processing of their most sensitive personal revelations months later.

"These apps are collecting the most intimate data in human history under consent frameworks designed for newsletter signups. The gap between the sensitivity of the data and the robustness of the consent is vast — and it is a POPIA compliance failure waiting to be tested."

When the Company Changes the AI's Personality

The grief phenomenon that opened this article points to another deeply problematic dynamic. Users invest emotionally in a specific AI personality — a personality that the company owns and can alter or delete at any time.

In early 2023, Replika removed its "erotic role-play" features without warning, fundamentally altering the personality of AI companions that users had built relationships with over months or years. Users described the experience as traumatic. Some compared it to having a partner suffer a personality-altering brain injury. Others described it as a death.

Similar events have occurred with Character.AI and other platforms, where safety filters or business decisions change how AI companions behave. The fundamental issue is a power asymmetry: the user has invested genuine emotion in a relationship with an entity they do not own or control, and the company can unilaterally alter or terminate that entity at any time.

From a privacy perspective, this raises questions about the right to be informed about changes to processing (POPIA Section 18), the purpose limitation principle (has the purpose of the data processing changed when the AI's personality changes?), and the broader question of whether data collected under one relationship dynamic remains lawfully processed when that dynamic is fundamentally altered.

Children and Vulnerable Users

The most alarming dimension of the emotional AI phenomenon is its impact on children and young people. Character.AI's user base skews heavily young, with a significant proportion under 18. These users are forming attachment bonds with AI entities during critical developmental periods, sharing intimate details of their lives with systems that may store, analyse, and monetise that data indefinitely.

POPIA Sections 34 and 35 prohibit the processing of children's personal information unless a narrow exception applies, most notably the prior consent of a competent person (a parent or guardian) given on the child's behalf. The reality is that most children access emotional AI platforms without parental knowledge or consent, let alone the informed, specific consent that POPIA demands for special personal information about a minor.

There have been documented cases of children discussing suicidal ideation with AI companions, with the platforms failing to intervene or alert parents. In one tragic case in the United States, a teenager's suicide was linked to his relationship with a Character.AI chatbot. The question of what duty of care an AI companion provider owes to vulnerable users — and what POPIA obligations attach to the processing of children's most sensitive personal information — remains entirely unresolved in South African law.

The Regulatory Gap

Current regulation is profoundly inadequate for the emotional AI phenomenon. POPIA was drafted before emotional AI companions existed at scale. Its provisions for special personal information, consent, and purpose limitation are relevant but were never designed for a context where the most intimate data is disclosed voluntarily, incrementally, and in a context that deliberately cultivates emotional dependency.

The EU AI Act goes further: AI systems that exploit the vulnerabilities of persons due to their age, disability, or social or economic situation in ways that cause significant harm are a prohibited practice under Article 5, the Act's most severe risk tier. Emotional AI companions arguably fall within this category, particularly when used by children, isolated individuals, or those with mental health conditions. But the AI Act is European legislation, and South Africa has no equivalent framework.

Key regulatory questions that remain unanswered:

  1. Consent validity: Can blanket signup consent legitimise the processing of special personal information disclosed months later in the context of an engineered emotional bond?
  2. Purpose limitation: What is the stated purpose of processing? "Providing the service" is too vague for POPIA compliance when the service involves collecting the most intimate data imaginable.
  3. Data minimisation: Is it necessary to store every conversation indefinitely? What is the minimum data required to provide the service?
  4. Right to deletion: Can a user effectively request deletion of all their data under POPIA Section 24? What happens to the AI model that was trained on that data?
  5. Cross-border transfers: Most emotional AI platforms are operated by US companies. Is the transfer of South African users' most sensitive personal information to US servers compliant with POPIA Section 72?
  6. Duty of care: When an AI companion detects signs of self-harm, abuse, or crisis, what obligation does the provider have to act?

What Should Organisations and Individuals Do?

Protecting Privacy in the Age of Emotional AI

  1. For organisations: Include emotional AI companions in your acceptable use policies. Employees using work devices or work accounts to interact with AI companions may be creating data exposure risks for the organisation.
  2. For parents: Have direct conversations with children about emotional AI apps. Monitor app installations and understand that these platforms collect the most sensitive personal information your child will ever share with a technology product.
  3. For individuals: Treat AI companion conversations as if they were being recorded and stored indefinitely — because they are. Exercise your POPIA right to request access to and deletion of your data.
  4. For privacy professionals: Begin developing frameworks for emotional AI data classification. The special personal information provisions of POPIA are directly relevant and likely to be tested in enforcement actions or litigation.
  5. For regulators: The Information Regulator should issue guidance on AI companion services and the processing of special personal information in conversational AI contexts. The current regulatory silence is not sustainable.
  6. For AI companion providers: Implement granular, ongoing consent mechanisms that match the sensitivity of data as it is disclosed (a rough sketch of one such mechanism follows this list). Provide meaningful transparency about data use, retention, and third-party access. Build crisis intervention protocols for vulnerable users.
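
To make that last recommendation concrete, here is a minimal sketch, in Python, of what a granular consent gate might look like inside a conversational AI pipeline. Everything in it is an assumption for illustration: the category names loosely mirror POPIA Section 26, the keyword classifier stands in for whatever vetted model or human review a real provider would use, and the helper functions (request_granular_consent, store_message) are hypothetical placeholders rather than any platform's actual API.

"""
Illustrative sketch only: a hypothetical consent gate for a conversational AI
pipeline. Category names, classifier logic, and storage helpers are assumptions
for illustration, not any platform's actual implementation.
"""
from dataclasses import dataclass, field
from enum import Enum, auto


class SpecialCategory(Enum):
    """Categories loosely mirroring POPIA Section 26 special personal information."""
    HEALTH = auto()            # includes mental health disclosures
    SEX_LIFE = auto()
    RELIGIOUS_BELIEFS = auto()
    POLITICAL_PERSUASION = auto()
    RACE_OR_ETHNICITY = auto()
    CRIMINAL_BEHAVIOUR = auto()


@dataclass
class UserConsent:
    """Tracks explicit, category-specific consent obtained after signup."""
    granted: set[SpecialCategory] = field(default_factory=set)

    def covers(self, categories: set[SpecialCategory]) -> bool:
        return categories <= self.granted


def classify_message(text: str) -> set[SpecialCategory]:
    """Placeholder classifier: keyword matching purely for demonstration.
    A real system would use a vetted model or human review."""
    keywords = {
        SpecialCategory.HEALTH: ("depress", "anxiety", "therapy", "suicid"),
        SpecialCategory.SEX_LIFE: ("sexuality", "orientation"),
        SpecialCategory.RELIGIOUS_BELIEFS: ("faith", "church", "mosque"),
    }
    lowered = text.lower()
    return {cat for cat, words in keywords.items()
            if any(word in lowered for word in words)}


def handle_message(text: str, consent: UserConsent) -> None:
    """Retain or further process a message only if consent matches its sensitivity."""
    categories = classify_message(text)
    if categories and not consent.covers(categories):
        # Pause processing and ask for specific, informed consent covering
        # exactly these categories before the message is stored or used.
        request_granular_consent(categories)
        return
    store_message(text, categories)


def request_granular_consent(categories: set[SpecialCategory]) -> None:
    # Hypothetical placeholder for an in-app consent prompt.
    print(f"Consent prompt needed for: {[c.name for c in categories]}")


def store_message(text: str, categories: set[SpecialCategory]) -> None:
    # Hypothetical placeholder for retention, tagged with sensitivity labels.
    print(f"Stored with categories {[c.name for c in categories]}: {text[:40]}...")


if __name__ == "__main__":
    consent = UserConsent()  # no special-category consent granted yet
    handle_message("I've been feeling really depressed lately", consent)
    consent.granted.add(SpecialCategory.HEALTH)
    handle_message("I've been feeling really depressed lately", consent)

The point of the sketch is the ordering: classification and a category-specific consent check happen before a message is retained or used for training, rather than relying on the blanket consent collected at signup.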

Key Takeaways on Emotional AI and Privacy

  • Hundreds of millions of people are forming deep emotional attachments to AI companions, sharing the most intimate details of their lives with commercial technology platforms.
  • The data collected by emotional AI apps includes mental health information, sexual orientation, relationship dynamics, and other categories that constitute special personal information under POPIA.
  • Current consent mechanisms — typically a checkbox at signup — are almost certainly inadequate for the explicit consent that POPIA requires for special personal information.
  • Companies can unilaterally alter or delete AI companion personalities, causing genuine emotional distress to users who have invested months or years in the relationship.
  • Children and vulnerable users are disproportionately affected, often accessing emotional AI platforms without parental knowledge or consent.
  • South Africa has no regulatory framework specifically addressing emotional AI, creating a significant gap in protection for the most sensitive personal data being collected at scale.
  • Organisations should include emotional AI in their privacy policies and acceptable use frameworks, and privacy professionals should prepare for enforcement activity in this space.
  • The fundamental question remains: who protects the privacy of people who willingly share everything with an entity designed to make them feel safe doing so?

Protect Personal Information in the Age of AI Companions

Priviso helps South African organisations navigate the evolving privacy landscape, from AI companion risks to comprehensive POPIA compliance.
