Meta is intensifying its competition with rivals like OpenAI and Google in the rapidly growing field of artificial intelligence. The social media giant is developing new AI models – “Mango” for image and video generation and “Avocado” for text – slated for release in the first half of 2026. These developments follow a significant internal restructuring and talent acquisition drive aimed at bolstering Meta’s position in the increasingly crucial generative AI landscape, where advancements are rapidly reshaping content creation and user engagement.
Meta is developing a new generation of artificial intelligence models focused on image and video generation, internally codenamed “Mango,” alongside a large language model for text, dubbed “Avocado.”
The company anticipates launching both models in the first half of 2026, according to individuals familiar with internal discussions. This move reflects intensifying competition among technology giants vying for leadership in the generative AI market, the Wall Street Journal reported. Meta’s push into AI comes as the technology is rapidly transforming industries and attracting significant investment.
The plans were revealed during a meeting held last Thursday, where Meta’s head of AI, Alexandr Wang, and Chief Product Officer Chris Cox outlined the company’s strategy to those in attendance.
Wang explained that the new text-focused model, Avocado, will prioritize improved code-writing capabilities, aiming to bolster Meta’s competitiveness in advanced language models, which have become a central battleground in the global AI race.
“World Models”
Wang also noted that Meta is in the early stages of exploring “World Models,” a category of AI systems that learn to understand their surroundings by processing visual information. These models aim to build a more comprehensive understanding of the real world and how to interact with it, learning from images, videos, and sensor data to grasp the physical rules governing the universe and the relationships between objects.
This approach is considered an ambitious future direction in AI research, potentially enabling models to simulate human understanding of reality with greater accuracy and complexity – a key element for developing advanced robotics, self-driving cars, and intelligent systems.
Significant Restructuring
Meta underwent a comprehensive restructuring of its AI team last summer, establishing Meta Superintelligence Labs and appointing Alexandr Wang to lead the new division.
CEO Mark Zuckerberg spearheaded a recruitment drive, attracting over 20 researchers from OpenAI and assembling a new team of more than 50 researchers, engineers, and specialists across various AI fields.
A Heated Race for Content Creation
AI-powered image generation has emerged as a key competitive area among major AI developers, driven by growing user demand and its applications in creative and commercial fields.
In September, Meta launched Vibes, an AI-powered video generator developed in collaboration with Midjourney.
Just a week later, OpenAI unveiled its own video generator, Sora.
Earlier this year, the release of Google’s Nano Banana image generator led to a surge in Gemini app downloads, with monthly active users increasing from approximately 450 million in July to over 650 million by late October.
Following the launch of Gemini 3 in November, OpenAI CEO Sam Altman activated a “Code Red” within the company, signaling an urgent need to regain leadership in key AI performance benchmarks.
This prompted a rapid release of an updated image generation product within OpenAI’s ChatGPT Images service.
During a press briefing last week, Altman emphasized the importance of AI-powered image generation technologies for consumers, describing them as a “sticky” feature that encourages repeat usage of the platforms offering them.