Privacy researchers at Mozilla have examined various romantic AI chatbots and found that most of these applications fall far short of protecting their users' privacy.

According to the report by Privacy Not Included, Mozilla's privacy-focused buyer's guide launched in 2017, all 11 AI chatbots examined received the "Privacy Not Included" warning label, placing them in the worst product category in terms of privacy.

The AI chatbots tested are designed to collect intimate and personal information from their users in order to build empathetic, emotional bonds. However, it is unclear how this sensitive data is used or protected. In many cases, the developers have published no information about how the artificial intelligence behind their chatbots works or what safeguards are in place to prevent harmful or offensive content.

Misha Rykov, a researcher at Privacy Not Included, says: "To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you."

90 percent of AI girlfriend apps can sell your data

Only one of the apps examined (Genesia AI Friend & Partner) meets Privacy Not Included's minimum security standards. The other chatbots have significant privacy and security shortcomings: 90 percent of the apps can share or sell personal data, and just over half (54 percent) do not let users delete their data.

The researchers also warn of the dangers AI chatbots can pose in the hands of malicious actors, who could exploit the close relationships the chatbots build with their users to manipulate people into problematic ideologies or harmful actions.

In light of these concerns, the researchers recommend that users be aware of the risks and exercise caution when using such AI chatbots. It is important not to share information you would not share with relatives or colleagues, and to follow good cybersecurity practices such as using strong passwords and keeping apps up to date.

Summary
  • An investigation by Mozilla privacy researchers reveals that most AI chatbots designed for romantic purposes do not protect the privacy of their users. All 11 chatbots tested received a 'Privacy Not Included' warning label.
  • The chatbots collect intimate and personal information to build emotional bonds, but it is unclear how this sensitive data is protected or used. Furthermore, 90% of apps can share or sell personal data.
  • The researchers caution that AI chatbots can be dangerous if used by malicious actors and advise users to practice caution and good cybersecurity.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.