AI is becoming more efficient? That may be true in some areas. But the effort and cost of training will continue to explode, says Anthropic CEO Dario Amodei.
With Claude 3 Opus, Anthropic became the first AI model developer to dethrone GPT-4. In an interview with Ezra Klein, Amodei talks about the near and distant future of AI development.
He expects the cost of training large language models to rise rapidly in the next few years. While today's models cost in the neighborhood of $100 million, he expects the cost to be in the range of $1 billion in the near future.
"Today's models cost of order $100 million to train, plus or minus factor two or three. The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion. So that's already happening. And then I think in 2025 and 2026, we'll get more towards $5 or $10 billion," says Amodei.
The Anthropic CEO sees the reason for this in so-called scaling laws, which hold that the more computing power and data are pumped into AI systems, the more capable they become. Research so far suggests this relationship is remarkably predictable, following smooth power laws.
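For illustration, the compute scaling law reported by Kaplan et al. (2020), one of the studies that popularized the term, relates test loss to training compute as a power law. The exponent below is their empirical fit, not a figure from the interview:

```latex
% L = test loss, C = training compute, C_c = fitted constant (Kaplan et al., 2020)
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```

Because the exponent is small, each further constant reduction in loss requires a multiplicative increase in compute, which is the dynamic behind the cost projections above.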
If the predictions of the scaling laws hold, Amodei believes that within two to five years AI models will be so powerful that they may no longer need humans, or may no longer be controllable by them. The kinds of systems we imagine in science fiction, he says, are two to five years away, not 20 or 40.
Slowing down could be a safety risk
Amodei does not see slowing down AI development as a solution. To make systems safe, he argues, they need to be scaled up and studied further; only then can potential threats be identified early and countermeasures developed. Stopping development, he believes, is neither realistic nor desirable.
At the same time, he stresses the responsibility of the leading AI companies. As the capabilities of the models grow, so does the power concentrated in the hands of private companies. Amodei believes that within a few years, society will have to find ways to democratize control over these central systems.
Amodei sees potential dangers ahead, including the use of AI to develop biological weapons or to run state disinformation campaigns. AI systems may soon be able to drastically enhance state actors' capabilities in these areas, for example by generating highly persuasive arguments. Early studies support this concern.
On the four-level scale that Anthropic uses internally to assess AI risks (its AI Safety Levels), Amodei assigns these scenarios to level 3, a stage he believes could be reached as early as this year or next.
The Anthropic CEO expects level 4, the highest risk level, to be reached between 2025 and 2028. At that point, he expects AI systems to be able to replicate and evolve on their own. This would raise entirely new control issues and could pose existential risks.
Amodei also sees societal challenges beyond such extreme scenarios. A sharp increase in energy consumption from AI data centers, for example, cannot be ruled out. Disruption of the labor market is also a serious possibility as AI systems take over cognitive tasks from more and more professions.
Despite the challenges, Anthropic's CEO believes the opportunities outweigh the risks. AI systems could enable groundbreaking advances in areas such as medicine, education, and energy in the future. It is important to find ways to harness this potential while managing the risks.
Dario Amodei spent several years as a senior researcher at OpenAI, where he led the company's AI safety team for two years. He and his sister Daniela Amodei were involved in the development of GPT-3, and together they worked on OpenAI's research teams for nearly six years before founding Anthropic, which places a particular focus on AI safety research.