
Sam Altman, CEO of OpenAI, called for a new agency to regulate and license AI, which was met with both support and skepticism in the U.S. Senate.

In the first U.S. Senate hearing on "Oversight of A.I.: Rules for Artificial Intelligence," it quickly became clear that the witnesses broadly agreed: OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and deep learning critic Gary Marcus all called on the U.S. Senate to regulate artificial intelligence.

Senators expressed similar sentiments, some surprised by Altman's call for regulation: "I can't remember when we last had companies come and plead with us to regulate them," said U.S. Senator Dick Durbin during the hearing. Montgomery reminded the Senate that IBM's position had not changed: "Trust is our license to operate. We have called for precision regulation of AI for years. AI should be regulated at the point of risk."

OpenAI CEO calls for new U.S. agency for AI

Specifically, all three advocated for giving out licenses for companies to operate AI models above a certain level of risk; Marcus and Altman also called for the creation of a new U.S. regulatory agency for AI and the initiation of an international regulatory body, sometimes likened to CERN, sometimes to the International Atomic Energy Agency. Existing authorities or litigation under existing laws could be a tool, but "they give not enough coverage, are too slow to protect the things we care about," Marcus said.


A new AI agency would need to conduct safety reviews before and after AI models are deployed, and be a "nimble monitoring agency" able to track AI developments and recall products, Marcus said. He also called for more investment in AI safety research and more transparency from companies like OpenAI.

Altman warned against stifling small AI startups and the open-source community with overly broad regulations. Instead, he suggested that companies above a certain capability threshold would have to have their AI models licensed by the new agency, and could lose that license if they failed to meet safety standards. The models would also have to be tested for certain capabilities, such as the ability to replicate themselves or break out of a system, before they could be deployed. Altman also advocated that these processes be audited by independent, external parties.

Montgomery, on the other hand, argued that regulation could be handled by existing agencies and repeatedly referred to the EU AI Act in her regulatory proposals. What is needed, she said, is a precision regulation approach to AI with rules governing specific use cases rather than the technology itself. This would also need a clear definition of the risks depending on the use case and different rules for different risks. The focus needs to be on transparency and accountability, she said.

"Pandora's box does need more than words"

Altman and Marcus pointed to immediate dangers, such as election meddling or other forms of targeted influence, as well as dangers that could only emerge with the advent of artificial general intelligence (AGI). Licenses are needed primarily for what AI models will one day be able to do, not just for what they can do now, Altman said. As for a possible moratorium on AI training, such as a pause before GPT-5, he stressed that his company currently sees no reason to stop training new AI models, and instead plans to continue conducting extensive safety testing before release.

Most U.S. senators seemed open to the idea of a new AI agency, with some openly supporting it, including Peter Welch and Richard Blumenthal, who cautioned that "Pandora's box does need more than words like 'licensing' and 'new agency.'"


"There is some real hard decision-making, how to frame the rules to fit the risk. First do no harm, make it effective, make it enforceable, make it real," the U.S. senator said. "We need to grapple with the hard questions here." They were raised in the hearing, he said, but have yet to be answered. What is clear, he said, is that "enforcement really does matter," and that any new agency that might be created should be adequately resourced with money and, most importantly, capable scientists.

Summary
  • At a US Senate hearing, OpenAI CEO Sam Altman called for the creation of a new US agency to regulate and license AI, particularly for companies operating high-risk AI models.
  • Altman's proposal was supported by deep learning critic Gary Marcus and several senators, but IBM's Christina Montgomery suggested that existing agencies could address AI regulation through a precision regulatory approach, focusing on specific use cases and risks.
  • The hearing concluded with broad agreement on the need for AI regulation to address immediate and future potential dangers, but the specifics of implementation remain under debate.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.