
The US National Institute of Standards and Technology (NIST) has signed agreements with Anthropic and OpenAI to collaborate on researching, testing, and evaluating AI safety.

According to NIST, the memorandums of understanding (MOUs) with the two companies establish a framework for the US AI Safety Institute to access important new AI models from the companies both before and after their release. The agreements will enable joint research to assess safety capabilities and risks, as well as methods to mitigate those risks.

"Safety is essential to fueling breakthrough technological innovation. […] These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, Director of the US AI Safety Institute.

The US AI Safety Institute also plans to provide feedback to Anthropic and OpenAI on potential safety improvements for their models, working closely with partners at the UK AI Safety Institute.

The US AI Safety Institute builds on NIST's 120-year history of advancing technologies and standards. The assessments under the agreements with Anthropic and OpenAI are designed to advance NIST's work in AI and support the Biden-Harris Administration's AI executive order and voluntary commitments from leading AI companies.

Anthropic and OpenAI take different approaches to AI safety

While NIST is addressing the two major US AI startups together, recent personnel moves seem to favor Anthropic, at least where safety is concerned. Several safety researchers have left OpenAI in recent months, with some joining Anthropic. Notable departures include Jan Leike, OpenAI's former lead AI safety researcher, and OpenAI co-founder John Schulman.

Anthropic also supports California bill SB 1047, which aims to address the risks of frontier AI models. OpenAI opposes the bill, arguing that these issues should be regulated at the national level. In this context, OpenAI's cooperation with NIST can be read as a statement in the California debate. OpenAI CEO Sam Altman welcomed the cooperation with NIST on X (formerly Twitter), stating that "for many reasons, we think it's important that this happens at the national level."

Summary
  • The National Institute of Standards and Technology (NIST) has signed agreements with Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.
  • These partnerships will give NIST access to important new AI models from both companies before and after their public release.
  • The agreements enable joint research to assess the capabilities and potential safety risks of advanced AI systems, as well as methods to mitigate those risks.