The US Department of Homeland Security has announced the creation of a new AI Safety and Security Board.
The board is tasked with developing recommendations for the responsible development and use of AI technologies in critical US infrastructure, the Department of Homeland Security writes.
The 22-member board includes CEOs from major technology companies such as Microsoft, Google, IBM, Amazon, and Nvidia. Critical infrastructure operators such as Delta Air Lines and energy company Occidental Petroleum are also represented.
The board also includes civil rights activists, scientists such as Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and politicians such as the governor of Maryland.
The goal is to help operators of critical infrastructure such as transportation, energy, agriculture, and communications use AI technologies safely and securely, according to the Department of Homeland Security.
In addition, the advisory board will develop recommendations on how to prevent and mitigate AI threats to these vital services. The mandate to establish the board came directly from President Biden.
While AI offers tremendous opportunities, it also poses risks that can be mitigated through recommended practices and concrete measures, according to US Secretary of Homeland Security Alejandro Mayorkas.
According to a recent Department of Homeland Security threat assessment, AI technologies could enable "larger, faster, efficient, and more evasive" cyberattacks on pipelines, railroads, and other critical US infrastructure.
In addition, the Department of Homeland Security warns that countries like China are developing AI technologies, such as AI-generated malware, that could undermine US cyber defenses.
Prominent AI figures not on the panel include Meta CEO Mark Zuckerberg and Elon Musk. Both are considered advocates of open-source AI, while the CEOs of the companies represented on the board tend to focus on closed-source models. However, Alphabet and OpenAI have also released numerous open-source AI models.