Anthropic CEO Dario Amodei expects a major leap in AI assistants this year: "virtual collaborators" that perform complex tasks independently, along with a new model.
Anthropic appears to be on the verge of releasing a new generation of AI assistants. Speaking at the World Economic Forum in Davos, CEO Dario Amodei described upcoming "virtual collaborators" - AI systems that work far more independently than current assistants. "I think the thing we have in mind is an assistant in your workplace that you use personally," Amodei explains, "writing some code, testing the code, deploying that code to some test surface, talking to coworkers, writing design docs, writing Google Docs, writing Slacks and emails - the model goes off and does a bunch of those things and then checks in with you every once in a while."
While being careful not to make firm promises, Amodei shared his outlook: "I do suspect, I'm not promising, I do suspect that a very strong version of these capabilities will come this year and it may be in the first half of this year." The company's AI agent, building on work like their "Computer Use" mode, will likely compete with OpenAI's upcoming "Operator" system.
New model coming soon with enhanced reasoning abilities
Anthropic plans to release a new language model in the coming months. Unlike OpenAI's approach of creating specialized "reasoning" models, Anthropic sees thinking and reasoning as existing on a spectrum. "When you train the model with reinforcement learning, it starts to think and reflect more," Amodei says. "It's not like reasoning or test-time compute, the various things that this is called, is a totally new method. It's more like an emergent property, a consequence of training the model." He notes that Claude 3.5 Sonnet already shows these capabilities in some cases, and that Anthropic plans to roll out improvements differently than OpenAI did with o1 and o3. The new model should arrive within six months.
Several updates are coming to the existing Claude chatbot: web access is coming "relatively soon," along with conversation storage capabilities that Amodei calls "very important." Rate limits will increase once new data centers with Amazon's Trainium 2 chips come online. The company isn't planning to add image and video generation yet, since most of its industrial customers don't need these features.
Amodei on shifting his perspective on AI development
Amodei's views on AI development have evolved significantly. Until recently, he harbored real uncertainty about whether AI could surpass human capabilities. "I still do now, but that uncertainty is greatly reduced," he says. "I think that over the next 2 or 3 years I am relatively confident that we are indeed going to see models that show up in the workplace that consumers use that are, yes, assistants to humans but gradually get better than us at almost everything."
This progress brings both opportunities and challenges. In the near term, Amodei sees people adapting and developing complementary skills. Looking further ahead, society will need to address fundamental questions: "When AI systems are better than humans at almost everything ... we will need to have a conversation at places like this about how do we organize our economy, right? How do humans find meaning?"
Amodei's confidence isn't coming out of nowhere. Before founding Anthropic, he worked as a researcher at OpenAI, where he co-authored the influential paper "Scaling Laws for Neural Language Models." This work played a crucial role in expanding AI infrastructure and scaling OpenAI's training efforts, though DeepMind later refined the optimal training data amounts with their Chinchilla research.
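For readers unfamiliar with that line of work, the core result can be summarized in one equation. A hedged sketch (the functional form below follows the published scaling-laws and Chinchilla papers; the specific fitted constants vary across studies and are not claims from this article):

```latex
% Loss as a function of parameter count N and training tokens D
% (form used in the Chinchilla analysis; E, A, B, alpha, beta are fitted):
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

% Minimizing L under a fixed compute budget C \approx 6ND yields
% compute-optimal allocations of roughly
N_{\mathrm{opt}} \propto C^{a}, \qquad D_{\mathrm{opt}} \propto C^{b}, \qquad a \approx b \approx 0.5
```

In plain terms: loss falls predictably as a power law in both model size and data, and the Chinchilla refinement mentioned above showed that, for a fixed compute budget, parameters and training tokens should be scaled in roughly equal proportion rather than favoring model size.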
OpenAI seems to share Amodei's outlook on AI's future. The company recently announced "The Stargate Project," a venture planning to invest $500 billion in AI infrastructure over the next four years.
Export controls against China "absolutely existential"
Amodei considers export controls against China and preventing chip smuggling as "absolutely existential": "We are just starting to see the value for military intelligence of these models," he explains. He sees maintaining a technological edge as crucial for AI safety: "Having a lead against China, which is becoming increasingly difficult, really gives us the buffer to do that. And if we don't have that lead, we're in this Hobbesian international competition where it's like you can be in a catch-22: Well, if we slow down 3 months to mitigate the risks of our own models, then China will get there. We don't want to end up in that situation in the first place."
The company maintains a non-partisan stance. "Anthropic is a policy actor. Anthropic is not a political actor," Amodei emphasizes. They develop positions on global AI issues and share them equally across the political spectrum.
Amodei ended with advice for young professionals entering the AI era, encouraging them to study new technologies carefully and develop strong critical thinking skills to handle the growing flood of information.