DeepMind founder Mustafa Suleyman sees great potential in AI for mental health
Key Points
- In his book The Coming Wave, DeepMind founder Mustafa Suleyman describes the potential emotional benefits of AI, such as support, encouragement, validation, coaching, and mental health counseling.
- A recent psychological test suggests that ChatGPT significantly outperforms humans in emotional awareness. According to Suleyman, AI is meant to fill gaps, not replace human interaction.
- Suleyman also discusses the potential negative consequences and risks of AI and suggests safety and regulatory strategies, including more safety researchers, screening systems in DNA synthesizers, and international treaties to regulate dangerous technologies.
When people talk about AI, it's usually about productivity and efficiency. DeepMind founder Mustafa Suleyman describes potential emotional benefits in his new book, The Coming Wave.
According to Suleyman, AI can help with mental health. Regardless of background, wealth, or gender, he says, family is a critical factor in a person's development and well-being.
But AI is now at a point where it can provide people with support, encouragement, affirmation, coaching, and advice, he said, and could help those who haven't had a positive family experience.
"We’ve basically taken emotional intelligence and distilled it," Suleyman said. He believes this could boost the creativity of millions of people.
However, Suleyman stresses that AI is not a replacement for human interaction, but can fill in gaps where humans fall short. AI is a tool for humans to get things done, he says.
The results of a recently published psychological test show that ChatGPT is significantly superior to humans in terms of emotional awareness.
No picture of AI's future without dystopia
Of course, in his book, Suleyman also highlights the often-cited potential negative consequences. AI poses risks such as:
- Asymmetric effects (a single hacker can defeat a world power),
- Hyper-evolution (humanity is exposed to unexplored risks by progressing too fast),
- Omni-use (AI is everywhere and can be used for good or bad),
- and Autonomy (similar to hyper-evolution, but AI takes control),
which could lead to disasters and therefore require investment in regulation and safety.
In particular, Suleyman expects massive advances in bioengineering. In the future, he predicts, products and organisms will be grown rather than manufactured, with the precision and scale of today's computer chip or software production.
To minimize the dangers of AI, Suleyman suggests strategies such as massively increasing the number of safety researchers - there are 300 to 400 worldwide today, and thousands would be needed - building screening systems into DNA synthesizers, and establishing international treaties to regulate dangerous technologies.
The goal, he says, is to strike a balance between dystopian authoritarianism and AI disasters caused by too much openness.
From DeepMind to Google to a new AI startup
Suleyman co-founded DeepMind in 2010 with CEO Demis Hassabis, who now leads Google's next-generation AI project, Gemini. Following criticism of his leadership style, Suleyman left DeepMind in 2019 to take on a policy role at Google.
In March 2022, Suleyman, along with LinkedIn founder Reid Hoffman, announced a new language AI startup, Inflection AI, which has raised at least $1.5 billion in funding.
In late June 2023, Inflection AI unveiled its first language model, Inflection-1, which is said to be on par with GPT-3.5, Chinchilla, and PaLM-540B.