Meta is gearing up to release several versions of Llama 4 throughout 2025, with a strong focus on reasoning capabilities and voice interaction. The company hopes these updates will help its open-source model keep pace with closed-source competitors.
Meta CEO Mark Zuckerberg says Llama models have now been downloaded more than 650 million times, and developers have created more than 85,000 derivative models on Hugging Face, including Nvidia's Nemotron adaptation.
Throughout 2024, Meta rolled out several major Llama updates. These included Llama 3.1, followed by Llama 3.2, which added the first multimodal capabilities and specialized models for mobile devices. The company then released Llama 3.3 70B, which matched the performance of the larger Llama 3.1 405B model while using fewer resources.
Llama's popularity has surged recently, with twice as many licenses issued as six months ago. This growth has been helped by partnerships with major tech companies, including AWS, AMD, Microsoft Azure, Databricks, Dell, Google Cloud, Groq, Nvidia, IBM watsonx, Oracle Cloud, and Scale AI.
Betting big on reasoning and AI agents
For 2025, Meta plans to accelerate Llama's development with several Llama 4 releases. According to Zuckerberg, training Llama 4 could require nearly ten times the computing power used for Llama 3.
While the company aims to improve the model across the board, it is particularly interested in "advanced reasoning" and voice interaction, capabilities that have become increasingly important in the AI community.
The company is already testing business-focused AI agents that can handle customer conversations, provide support, and process transactions. Meta believes these same advances could make AI assistants more capable for everyday users too, helping them tackle more complex tasks independently.
Due to regulatory uncertainty, Meta announced in summer 2024 that Llama 4 will not initially be available to European companies, despite its open-source nature.
Speaking the future
Meta sees voice as the future of AI interaction. As language models become more conversational, the company expects users will shift away from typing and toward speaking with their AI assistants. Meta took its first step in this direction last fall by adding voice features to Meta AI, with bigger updates planned for early 2025.
The numbers suggest Meta's approach is working. Even without access in the European Union, Meta AI has attracted 600 million monthly users, making it one of the most widely used AI assistants globally. For comparison, OpenAI reports ChatGPT reaches about 300 million users each week.
Beyond text and speech, Meta is also pushing into video AI. In October, the company unveiled Meta Movie Gen, a suite of research models designed to generate and edit videos using generative AI.