

Former NSA contractor and whistleblower Edward Snowden is warning against trusting OpenAI and its products after former NSA director Paul Nakasone was appointed to the company's board of directors. Snowden calls the move a "willful, calculated betrayal of the rights of every person on Earth."

"They've gone full mask-off: do not ever trust OpenAI or its products (ChatGPT etc). There is only one reason for appointing an NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned," Snowden writes.

Image: Snowden via X

A day later, Snowden added that the combination of AI with the vast amount of mass surveillance data accumulated over the past two decades will "put truly terrible powers in the hands of an unaccountable few."


Nakasone, who served as NSA director from 2018 to February 2024, has joined OpenAI's newly formed Safety and Security Committee. His role is to help strengthen the company's use of AI for cybersecurity. OpenAI also collaborates with the U.S. military on cybersecurity projects.

The announcement followed internal criticism of OpenAI's safety culture, with several safety researchers recently leaving the company. However, these researchers were more concerned with the alignment of a potential "super AI" than with day-to-day cybersecurity risks.

Snowden isn't only targeting OpenAI. Speaking at the SuperAI conference in Singapore, he also voiced his concerns about the general "AI safety panic."

While he acknowledges that warnings of a Terminator-style superintelligence threatening humanity are well-intentioned, he sees a greater danger in people trying to embed their political views in AI models in order to present minority opinions as the general view.

Snowden singled out Google, which recently faced criticism for history-distorting diversity rules in its AI image generator. Among other things, the generator produced images of dark-skinned people in Nazi-like uniforms because its diversity rules were applied too strictly.


"It's like, don't interfere with the user, you don't know better than the user, don't try to impose your will upon the user, and yet we see that happening," Snowden said.

Summary
  • Edward Snowden, former NSA contractor and whistleblower, has warned against trusting OpenAI and its products like ChatGPT after former NSA director Paul Nakasone was appointed to the company's board, calling it a "willful, calculated betrayal of the rights of every person on Earth."
  • Snowden expressed concern that the combination of AI and the vast amounts of mass surveillance data collected over the past 20 years will give unchecked power to a select few, and criticized the broader "AI safety panic" around the idea of a threatening superintelligence.
  • While acknowledging that warnings of an existential AI threat to humanity are well-intentioned, Snowden sees a greater danger in attempts to build political views into AI models and present minority opinions as the general view, citing Google's recent controversy over diversity rules in its image AI that led to historically inaccurate depictions.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.