An international group of scientists has called for responsible practices in AI-assisted protein design to prevent the development of bioweapons.

The group released an open letter highlighting the potential of AI in the life sciences, from faster responses to infectious diseases to energy production. At the same time, the signatories warn of the potential for misuse, including biological toxins and bioweapons.

The current 131 signatories, including Nobel laureate Frances Arnold and Turing Award winner Yann LeCun, call for proactive risk management to prevent potential dangers from misuse.

The group pledges to conduct research only for the benefit of society and to avoid dangerous practices. Likewise, DNA synthesis services should only be procured from providers that perform standardized biosecurity screening. Continuous evaluation of AI software and identification of safety risks are also part of the commitment.

Despite these fears, the scientists argue that the real danger lies not in AI itself, but in the equipment used to produce new genetic material needed to develop bioweapons.

The letter calls for safety measures at DNA production facilities to prevent them from being used to synthesize harmful material, as well as safety and reliability testing for new AI models before they are released.

"Since no computationally designed protein can cause real-world harm unless it is physically produced, the manufacturing of synthetic DNA presents a key biosecurity checkpoint for the field of computational protein design," the researchers write.

The signatories emphasize the importance of openness and scientific freedom, but also recognize the need to restrict access to AI systems if they pose unresolved risks. In general, however, the scientists support open access to AI technologies so that the scientific community can study them and contribute to their development.

The group also committed to regularly reviewing the principles and commitments and developing new ones. The technologies should bring broad benefits through international collaboration and integrative research approaches, they write.

A recent OpenAI study found that large language models such as GPT-4 only marginally facilitate the development of bioweapons, as the necessary information is relatively easy to find on the Internet even without AI. OpenAI has developed an early warning system to detect potential misuse.

Summary
  • An international group of 131 scientists, including Nobel laureate Frances Arnold and Turing Award winner Yann LeCun, has called for responsible AI practices in protein design to prevent the development of bioweapons.
  • The group highlights the potential benefits of AI in the life sciences, such as faster responses to infectious diseases and energy production, but also warns of potential abuses and calls for proactive risk management.
  • Key safety measures proposed include standardized biosafety testing for DNA synthesis services, continuous evaluation of AI software, safety and reliability testing for new AI models, and security measures for DNA production facilities to prevent harmful use.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.