Update
  • Added statements from EU representatives.
  • Added Sam Altman statement.

Update, May 27, 2023:

Sam Altman states on Twitter that OpenAI "of course has no plans" to leave Europe. His Europe tour has been productive, he writes, with many good conversations about ways to regulate AI.

In earlier remarks, Altman said OpenAI would try to comply with EU regulations and, if that proved impossible, would have to pull out of Europe. He was stating the obvious, but the remark could be read as a threat or a warning. Of course, OpenAI never planned to cede a market of more than 500 million consumers to Google.

Update, May 26, 2023:


Some MEPs disagree with Altman's statement that the EU AI Act could be withdrawn or fundamentally changed. Kim van Sparrentak, who worked on the law, says they won't be "blackmailed by American companies."

"If OpenAI can’t comply with basic data governance, transparency, safety and security requirements, then their systems aren’t fit for the European market," van Sparrentak told Reuters.

German MEP Sergey Lagodinsky says Altman is trying to push his agenda in individual countries but will not influence Brussels' regulatory plans, which he says are "in full swing." Individual changes are possible, but the general direction will not change, Lagodinsky said.

Altman reportedly canceled his planned visit to Brussels. OpenAI did not comment on the Reuters report.

Original article from May 25, 2023:


OpenAI CEO Sam Altman says Europe's potential AI policy is too restrictive. Withdrawing from Europe is one option.

OpenAI CEO Sam Altman is touring Europe, and he is doing politics along the way. Speaking at an event in London, he told Reuters that the current draft of the EU's AI Act risks "over-regulation." Pulling out of Europe, he said, is an option.

But Altman cautioned that OpenAI will first try to meet Europe's requirements. He has also heard that the current draft of the EU AI Act may be withdrawn. The act is currently being negotiated between the European Parliament, the Council of the EU, and the European Commission.

Altman suggests that he would prefer a different definition of "general purpose AI system." The European Parliament uses the term as a synonym for "foundation model": a large AI model that is pre-trained on vast amounts of data and can be fine-tuned for specific tasks with comparatively little additional data.


"There's so much they could do like changing the definition of general-purpose AI systems," Altman said. "There's a lot of things that could be done."

Lawmakers in the US are also discussing AI regulation, which Sam Altman has personally called for before the US Senate, though with the prospect of helping to shape it. Altman has proposed creating a competent authority that would license AI systems above a certain capability threshold and could revoke those licenses if, for example, safety standards are not met.

Copyright is generative AI's Achilles' heel, and other issues remain unresolved

One aspect of the EU AI Act is the disclosure of the training material used for AI training, partly in view of possible copyright infringements. OpenAI has not yet disclosed the training material used for GPT-4, citing competitive reasons.

Based on the training datasets of previous models, it is likely that the training material used for GPT-4 also contains copyrighted material that the company did not have explicit permission to use for AI training.

The legality of using this material to train commercial AI systems will ultimately be determined by the courts. Similar discussions and early litigation are underway for image AI systems. But these cases could drag on for years.

In addition to safety and copyright, AI regulation also touches on privacy. OpenAI has already run afoul of European data protection authorities, particularly in Italy: ChatGPT has no age restriction, personal data entered in chats can be used for AI training, and personal data is included in the datasets used to pre-train AI models.

ChatGPT was briefly blocked in Italy but was unblocked after OpenAI made concessions. However, it remains under review by data protection authorities.

Summary
  • OpenAI CEO Sam Altman criticizes the current draft of the EU AI Act as too restrictive and sees a withdrawal from Europe as a possibility.
  • However, OpenAI will first try to meet the EU's requirements, he says. Altman also expects the EU AI Act to be withdrawn in its current form.
  • Altman advocated for AI regulation in the US, but with the prospect of being able to help shape it, including the creation of an AI licensing authority.
  • OpenAI has already run into European privacy issues, particularly in Italy because ChatGPT has no age restriction and can use personal data for AI training. The pre-training material is also likely to have contained personal data.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.