Meta is developing new AI models for images, video, and text under the codenames "Mango" and "Avocado," with a release planned for the first half of 2026, according to a Wall Street Journal report citing internal statements by Meta's AI chief Alexandr Wang. During an internal Q&A with Chief Product Officer Chris Cox, Wang explained that "Mango" focuses on visual content, while the "Avocado" language model is designed to excel at programming tasks. Meta is also researching "world models" that can visually perceive and represent their environment.
The development follows a major restructuring in which CEO Mark Zuckerberg personally recruited researchers from OpenAI to establish the "Meta Superintelligence Labs" under Wang's leadership. The market for image generation remains fiercely competitive: Google recently released Nano Banana Pro, a model noted for its precise prompt adherence, and OpenAI followed a few weeks later with GPT Image 1.5. Meta itself introduced the fourth generation of its Llama series in April and is currently collaborating with Midjourney and Black Forest Labs on its Vibes video feed.