
Italy sets a deadline and conditions for OpenAI to reinstate ChatGPT in the country. Some of these conditions could pose significant challenges for the company.


OpenAI has until April 30 to meet the conditions set by the Italian data protection authority Garante, or at least offer the prospect of meeting them.

This includes implementing an age check at registration by the end of May and a full age verification system by September 30, which must filter out users under 13 as well as users between 13 and 18 who lack parental consent.

In addition, OpenAI must transparently explain how and for what purpose data will be processed and obtain users' consent to do so. OpenAI should be able to meet both of these requirements. Two additional requirements are much more challenging.


OpenAI must run an awareness campaign about the use of personal data for AI, and correct or remove inaccurate information

The Italian Data Protection Authority is requiring OpenAI to run an awareness campaign on TV, in newspapers and online about the use of Italian citizens' personal data to train algorithms. The campaign is scheduled for May 15 and must be approved by Garante.

The biggest headache for OpenAI is likely to be Garante's requirement that generated personal data containing false information must be either corrected or deleted at the request of the data subject. This applies even if that person is not using ChatGPT. Deletion is required when correction is "technically unfeasible".

"A set of additional measures concern the availability of tools to enable data subjects, including non-users, to obtain rectification of their personal data as generated incorrectly by the service, or else to have those data erased if rectification was found to be technically unfeasible.

OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms. The same right will have to be afforded to users if legitimate interest is chosen as the legal basis for processing their data."

Garante

Currently, if you ask ChatGPT for the biography of a lesser-known person (when I ask for my own, for example), the output is full of incorrect information, and it is different every time it is generated. Because the model works by predicting text one fragment (token) at a time, it will probably be easier for OpenAI to filter out such requests than to correct their output.
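To illustrate the filtering approach, here is a minimal sketch of what a request-level filter could look like. The biography patterns, the PUBLIC_FIGURES allowlist, and the name heuristic are all assumptions made up for this example, not OpenAI's actual implementation.

import re

# Hypothetical allowlist of public figures whose biographies may be answered.
# A real system would use a curated database, not a hard-coded set.
PUBLIC_FIGURES = {"Alan Turing", "Ada Lovelace"}

# Simple patterns suggesting the user is asking about a specific person.
BIOGRAPHY_PATTERNS = [r"\bbiography of\b", r"\bwho is\b", r"\btell me about\b"]

def extract_candidate_name(prompt: str) -> str | None:
    """Rough heuristic: treat the first capitalized word pair as a candidate name."""
    match = re.search(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b", prompt)
    return match.group(1) if match else None

def should_refuse(prompt: str) -> bool:
    """Refuse biography-style requests about people not on the allowlist."""
    if not any(re.search(p, prompt.lower()) for p in BIOGRAPHY_PATTERNS):
        return False
    name = extract_candidate_name(prompt)
    return name is not None and name not in PUBLIC_FIGURES

print(should_refuse("Write the biography of Jane Example"))  # True: refuse
print(should_refuse("Write the biography of Alan Turing"))   # False: allow

A filter like this sidesteps the correction problem entirely: the model is never asked to generate personal data about people outside a vetted list.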

If the Italian DPA decides that personal data cannot be used to train algorithms, OpenAI would face a Herculean task. Personal data from the training material exists only as abstract representations inside the model, and it is unlikely that OpenAI could detect and remove it with reasonable effort. Presumably, the company would have to train new models on specially curated, EU-compliant datasets or develop new methods for handling personal data within large, complex AI models.

Italy could become an AI privacy role model for Europe, and that could be bad for OpenAI

If other European countries follow Italy's lead and demand similar measures from OpenAI, as already appears to be happening, the company faces a turbulent few weeks with many uncertainties. A (temporary) withdrawal from Europe seems possible.


OpenAI's fate could also be tied to other AI developments, particularly at Microsoft, which is rolling out OpenAI's technology in numerous search and office products for business and personal use. If OpenAI's technology is heavily regulated or even banned in Europe, Microsoft's products could be affected. Conversely, European companies such as Aleph Alpha could benefit from this development.

Summary
  • OpenAI has until April 30 to comply with the demands of the Italian data protection authority, Garante.
  • In addition to age verification and a GDPR-compliant consent policy, OpenAI will have to correct or delete any personal data with errors. This is likely to be difficult.
  • Furthermore, OpenAI will have to launch a cross-media awareness campaign about how personal data is used in algorithm training.