Meta releases Llama 4 AI models with major upgrades and new licensing rules

Meta has rolled out its latest AI models under the Llama 4 banner—and unusually, the launch happened on a Saturday. The release includes three new models: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. These models were trained using vast amounts of unlabelled text, images, and videos to boost their ability to understand visual content.
Reportedly, competition from Chinese AI lab DeepSeek, whose models matched or exceeded previous Llama versions, pushed Meta to speed up development. The company even formed urgent internal teams to study how DeepSeek was able to make its models cheaper to run and deploy.
Scout and Maverick are already available on Llama.com, Hugging Face, and other partner platforms. Behemoth, which is still in training, is expected to be Meta’s most powerful model yet. Meanwhile, Meta AI—the assistant integrated into apps like Instagram, WhatsApp, and Messenger—is now powered by Llama 4 in 40 countries. However, multimodal features are currently limited to English-speaking users in the U.S.
There’s a licensing catch: users and businesses based in the EU are banned from using or sharing Llama 4, likely due to stricter regional regulations on AI and data privacy. Further, any company with over 700 million monthly users must get Meta’s explicit approval to use the models.
Meta’s blog calls this the start of a “new era” for Llama, highlighting that these are its first models to use a Mixture of Experts (MoE) architecture. This technique makes training and inference more efficient by assigning tasks to specialised sub-models. For instance, Maverick has 400 billion parameters overall, but only 17 billion are active at a time. Scout, optimised for summarisation and code analysis, can handle up to 10 million tokens at once—making it ideal for long documents.
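To make the MoE idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of a top-k mixture-of-experts layer: a small router scores each token and only the highest-scoring experts run for that token, which is why a model like Maverick can hold 400 billion parameters while activating only 17 billion per pass. The expert count, dimensions, and top-k value below are illustrative placeholders, not Llama 4’s actual configuration.

```python
# Illustrative top-k mixture-of-experts layer (not Meta's implementation).
# A router scores each token against every expert, but only the top-k
# experts actually run for that token, so most parameters stay inactive.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)   # per-token routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (num_tokens, d_model)
        scores = self.router(x)                  # (num_tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalise over chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e      # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)                     # 10 tokens, 64-dim embeddings
layer = ToyMoELayer()
print(layer(tokens).shape)                       # torch.Size([10, 64])
```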
Scout runs efficiently on a single Nvidia H100 GPU, while Maverick needs a DGX setup. The upcoming Behemoth, with nearly 2 trillion parameters and 288 billion active, will need even more powerful infrastructure. Meta claims Behemoth beats top models like GPT-4.5 and Claude 3.7 Sonnet in STEM tasks—though not the very latest, like Gemini 2.5 Pro.
None of the Llama 4 models are categorised as “reasoning” models—a label for systems that prioritise accuracy and fact-checking, like OpenAI’s o1 or o3-mini, even if they respond more slowly.
Interestingly, Meta says Llama 4 will now respond to more controversial topics than its predecessors. The models are designed to be more balanced, refusing fewer prompts and aiming to offer helpful responses without pushing particular viewpoints.
This change comes amid criticism—particularly from conservative circles—that AI tools have been biased or “woke.” Meta appears to be responding by making its models more neutral and open to a wider range of viewpoints.