AI and society

Mozilla study: 90 percent of romantic AI chatbots may sell your personal data

Maximilian Schreiner

Privacy researchers at Mozilla have examined various romantic AI chatbots and found that most of these applications fall far short of protecting their users' privacy.

According to the report by Privacy Not Included, a privacy-focused buyer's guide launched by Mozilla in 2017, all 11 AI chatbots examined received the "Privacy Not Included" warning label, placing them in the worst product category in terms of privacy.

The AI chatbots tested are designed to collect intimate and personal information from their users to build empathetic and emotional bonds. However, it is unclear how this sensitive data is used or protected. In many cases, the developers of the chatbots have not published any information about how the artificial intelligence behind the chatbots works or what safeguards are in place to prevent harmful or offensive content.

Misha Rykov, a researcher at Privacy Not Included, says: "To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you."

90 percent of AI girlfriend apps can sell your data

Only one of the apps examined (Genesia AI Friend & Partner) meets Privacy Not Included's minimum security standards. The rest show significant privacy and security shortcomings: 90 percent of the apps can share or sell personal data, and roughly half (54 percent) do not allow users to delete their data.

The researchers also warn of the dangers such chatbots pose if they fall into the hands of malicious actors, who could exploit the close relationship a chatbot builds with its users to manipulate people and steer them toward harmful ideologies or actions.

In light of these concerns, the researchers recommend that users be aware of the risks and exercise caution with such AI chatbots: avoid sharing information you would not share with relatives or colleagues, and follow basic cybersecurity practices such as using strong passwords and keeping apps updated.