
Europe is discovering ChatGPT's privacy risks, some of which are deeply embedded in how the models work. Can they be eliminated? OpenAI makes its first concessions.


OpenAI has published a safety statement on its website in parallel with Italy's GDPR push. One section addresses privacy, an area in which European data protection authorities have criticized ChatGPT: training data can contain personal information, and users enter personal information into the ChatGPT interface when drafting personal documents such as emails.

OpenAI aims to remove private data "where feasible"

OpenAI now acknowledges that its training data contains personal information from the public Internet, but says the models are meant to learn about the world, not about individuals. According to the statement, OpenAI does not use this data to sell services, advertise, or build profiles of people.

At least the first point is debatable, since ChatGPT works indirectly on the basis of personal information and OpenAI sells it as a service with a monthly usage fee. OpenAI's own privacy policy also notes that personal data entered by users can be used to further develop its services.


"We use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it," OpenAI writes.

In the future, OpenAI intends to remove personal data from its training datasets "where feasible". It also wants to align models so that they reject queries about individuals, and to let users have their personal data deleted from OpenAI's "systems" upon request. These steps are intended to minimize the chance that the models generate responses containing personal data about individuals.

OpenAI accounts get age verification, and models get individual safety standards

Another criticism from the Italian DPA is the lack of age verification in account creation for ChatGPT, which allows children under 13 to access the service. OpenAI is currently "looking into" verification technologies, according to the statement.

In its terms of service, OpenAI states that the minimum age for use is 18, or 13 and older with parental permission. Without access restrictions, however, this measure is ineffective - and since ChatGPT is an excellent homework tool, children in particular are likely to be drawn to the service.

OpenAI says it has made "significant effort" to ensure that its models don't produce content harmful to children, and is working with the nonprofit Khan Academy on an AI classroom assistant to help both students and teachers. In the future, developers will be able to implement even higher safety standards than OpenAI provides by default.

Summary
  • OpenAI acknowledges that ChatGPT's training data contains personal information and plans to remove it "where feasible" in the future.
  • It also plans to adjust models to refuse requests about individuals, and to let users have their personal data deleted from OpenAI's "systems" upon request.
  • There are also privacy concerns about the lack of age verification during account creation; OpenAI is looking into verification technologies.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.