
Advanced AI assistants could soon become an integral part of our daily lives, changing how people interact with AI, a new research paper by Google DeepMind says. But these assistants also pose risks that require action from developers and policymakers.


AI assistants, based on "foundation models", could act as creative partners, research analysts, tutors or life planners, helping people lead good lives by providing information, formulating goals and developing strategies to realize them, the researchers write.


However, there is a risk the systems may not act in the interests of users or society, making false assumptions about human well-being or placing developers' interests above users'.


Companies like OpenAI and Google are reportedly working on such assistants as the next stage in the evolution of today's chatbots. OpenAI CEO Sam Altman recently said he believes personalizing AI offerings will be more important than developing the underlying base models.

AI should not be humanized

The research paper says the ability to communicate with AI in natural language and its tendency to imitate human behavior could lead users to form an inappropriately close bond with the assistants. This could result in disorientation and a loss of autonomy.

To assess potential damage and develop protective measures, the researchers say comprehensive research into human-AI interaction is necessary. Measures could include restrictions on human-like elements in AI and steps to protect users' privacy.

Relationships between humans and AI should preserve user autonomy and avoid emotional or material dependence, according to the paper.

On a societal level, AI assistants could speed up scientific research, make high-quality expertise more widely available, and help mitigate the impact of climate change. They could also help limit the spread of misinformation, though they could just as easily generate it.


Risks include coordination problems between AI assistants with negative social consequences, loss of social ties due to AI replacing human interaction, and deepening of technological inequalities.

The researchers say AI assistants should be widely accessible and consider the needs of different users and non-users to avoid exclusionary effects.

On the cusp of a fundamental technological and social transformation

Because AI assistants could develop new capabilities and use tools in new ways, it's hard to predict the risks of using them. The research team recommends developing comprehensive tests and assessments that also consider the impact on human-computer interaction and society as a whole.

The researchers say we should act quickly to advance the development of socially beneficial AI assistants, including through public dialogue about the ethical and social implications.


Robust controls against misinformation should be implemented, access issues should be addressed, and the impact on the economy should be studied.

"We currently stand at the beginning of this era of technological and societal change. We therefore have a window of opportunity to act now – as developers, researchers, policymakers and public stakeholders – to shape the kind of AI assistants that we want to see in the world," the research team writes.

A comprehensive presentation of the opportunities and risks can be found in the paper "The Ethics of Advanced AI Assistants."

Summary
  • Google DeepMind researchers believe advanced AI assistants, based on "foundation models", could soon become an integral part of daily life, acting as creative partners, research analysts, tutors or life planners to help people lead good lives by providing information and developing strategies to achieve goals.
  • However, the researchers warn that AI assistants pose risks, such as not acting in the best interests of users or society, making false assumptions about human well-being, or prioritizing developers' interests over users'. The AI's human-like communication could also lead to inappropriate emotional bonds and loss of autonomy.
  • To mitigate risks and ensure socially beneficial AI assistants, the researchers recommend comprehensive research into human-AI interaction, restrictions on human-like elements in AI, protecting user privacy, promoting wide accessibility, and implementing robust controls against misinformation. They emphasize the need for swift action by developers, researchers, policymakers and the public to shape the future of AI assistants.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.