The Biden Administration has announced the U.S. AI Safety Institute Consortium (AISIC), a group of more than 200 companies, including leading AI companies such as OpenAI, Google, Microsoft, and Amazon.

The consortium will focus on the safe development and use of generative AI, addressing the priority actions of President Biden's October AI Executive Order. This includes developing guidelines for red-teaming, risk management, security, and watermarking synthetic content.

AISIC will be housed under the umbrella of the U.S. AI Safety Institute (USAISI) and will be the largest gathering of AI test and evaluation teams to date. The goal is to "focus on establishing the foundations for a new measurement science in AI safety," according to the Department of Commerce. The full list of participants can be found here.
AI and Society Content Hub
Artificial intelligence is a key technology that can help us address major societal challenges such as climate change, energy supply, healthcare, education, and logistics. AI can tackle specific problems more effectively by supporting decision-making, automating solutions so that they scale, or uncovering entirely new approaches. But the use of AI also poses new risks, for example in surveillance or questions of social justice.
What is our society doing with AI – and what is AI doing to our society? We shed light on this question in our AI and Society Content Hub.