
Meta is gearing up to release several versions of Llama 4 throughout 2025, with a strong focus on reasoning capabilities and voice interaction. The company hopes these updates will help its open-source model keep pace with closed-source competitors.


Meta CEO Mark Zuckerberg says Llama models have now been downloaded more than 650 million times, and developers have created more than 85,000 derivative models on Hugging Face, including Nvidia's adaptation called Nemotron.

Throughout 2024, Meta rolled out several major Llama updates. These included Llama 3.1, followed by Llama 3.2, which added the first multimodal capabilities and specialized models for mobile devices. The company then released Llama-3.3-70B, which matched the performance of the larger 3.1-405B model while using fewer resources.

Llama's popularity has surged recently, with twice as many licenses issued compared to six months ago. This growth has been helped by partnerships with major tech companies including AWS, AMD, Microsoft Azure, Databricks, Dell, Google Cloud, Groq, NVIDIA, IBM watsonx, Oracle Cloud, and ScaleAI.


Betting big on reasoning and AI agents

For 2025, Meta plans to accelerate Llama's development with several Llama 4 releases. According to Zuckerberg, training Llama 4 could require nearly ten times the computing power used for Llama 3.

While the company aims to improve the model across the board, it's particularly interested in "advanced reasoning" and voice interaction - capabilities that have become increasingly important in the AI community.

The company is already testing business-focused AI agents that can handle customer conversations, provide support, and process transactions. Meta believes these same advances could make AI assistants more capable for everyday users too, helping them tackle more complex tasks independently.

Due to regulatory uncertainties, Meta announced in summer 2024 that Llama 4 will not initially be available to European companies, despite the model's open-source nature.

Speaking the future

Meta sees voice as the future of AI interaction. As language models become more conversational, the company expects users will shift away from typing and toward speaking with their AI assistants. Meta took its first step in this direction last fall by adding voice features to Meta AI, with bigger updates planned for early 2025.


The numbers suggest Meta's approach is working. Even without access in the European Union, Meta AI has attracted 600 million monthly users, making it one of the most widely used AI assistants globally. For comparison, OpenAI reports ChatGPT reaches about 300 million users each week.

Beyond text and speech, Meta is also pushing into video AI. In October, the company unveiled Meta Movie Gen, a suite of research models designed to generate and edit videos using generative AI.

Summary
  • Meta's open-source language model Llama has seen exponential growth in 2024, with over 650 million downloads, making it one of the most widely used open models, largely due to a growing network of hardware and software partners.
  • With the release of Llama 4 in 2025, Meta aims to drive progress in speech and reasoning capabilities, particularly in developing AI systems with advanced reasoning for applications such as business agents in customer service and retail.
  • Meta predicts a shift from text to voice interfaces in AI applications, with its in-house assistant Meta AI already boasting 600 million monthly users, despite not being available in the EU.
Jonathan works as a freelance tech journalist for THE DECODER, focusing on AI tools and how GenAI can be used in everyday work.