OpenAI and Los Alamos National Laboratory (LANL) are collaborating to study the safe use of AI models by scientists in laboratory settings.
The partnership aims to explore how advanced multimodal AI models such as GPT-4o, with their vision and voice capabilities, can be safely used in laboratories to advance "bioscientific research."
This collaboration aligns with a White House Executive Order on the safe development and use of AI. The order calls on U.S. Department of Energy national laboratories to evaluate the capabilities of cutting-edge AI models, including their potential in biological applications.
As part of the partnership, an evaluation study will examine how both novices and experts perform standard experimental tasks in the laboratory. These tasks will serve as proxies for "more complex tasks that pose a dual use concern."
The study builds on OpenAI's existing work on biological threats and follows its Preparedness Framework, which describes OpenAI's approach to monitoring, assessing, predicting, and defending against model risks.
Mira Murati, CTO of OpenAI, described the partnership as a "natural evolution of our mission to advance scientific research while understanding and mitigating risk."
OpenAI hopes the partnership will "help set new standards for AI safety and efficacy in the sciences." Critics, however, may worry that AI research in bioscience, especially work related to bio-threats, could be misused to develop biological weapons.
OpenAI has recently become more open to military cooperation, while maintaining that its AI technology must not be used to cause harm, including weapons development. A publicly known OpenAI project in the military sector focuses on cybersecurity. The former head of the NSA, Paul Nakasone, recently joined OpenAI's board.
Founded in 1943 as a secret facility to develop the first atomic bomb as part of the Manhattan Project, Los Alamos National Laboratory is one of the leading national laboratories in the United States, renowned for its nuclear weapons research.