Meta Takes Aim at Google and OpenAI with New Llama 4 Models

Key Information about Meta’s Llama 4 Series

Meta is launching two new models in its Llama 4 series, Scout and Maverick. According to Meta's initial benchmarks, both models outperform a number of existing competitors across a range of tasks. Below are the key features and functions of the new models.

Overview of Scout and Maverick

  • Scout: This model is designed for handling large documents, complicated requests, and extensive codebases. It excels at digesting massive amounts of text and reasoning over large coding projects.
  • Maverick: Maverick, by contrast, is a versatile option for both text and images, making it particularly well suited to smart assistants and chat interfaces.

Both models are currently available on Llama.com as well as through Meta’s partners, including Hugging Face. Additionally, they are integrated into the Meta AI assistant, which is being rolled out on platforms like WhatsApp, Messenger, and Instagram across 40 countries. However, at this stage, these multimodal capabilities are limited to the U.S. and English.
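For developers, one minimal way to try the models once access is granted is through the Hugging Face Inference API. The sketch below is illustrative only: the model id is an assumption based on Meta's published naming and should be confirmed on the model card, and hosted availability varies by provider.

```python
# Hedged sketch: querying a hosted Llama 4 Scout checkpoint via the
# Hugging Face Inference API. The model id is an assumption based on
# Meta's naming scheme; confirm it (and gated-access terms) on the Hub.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # your Hugging Face access token
response = client.chat_completion(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repository id
    messages=[{"role": "user", "content": "Outline the main modules of a large Python codebase."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```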

The Innovative Technology of Llama 4

Meta claims that Llama 4 represents a significant technological advance for the company: it is the first Llama generation built on a Mixture of Experts (MoE) architecture, an approach that lets the models run more efficiently and respond to users more quickly by activating only part of the network for each request.

What is Mixture of Experts (MoE)?

MoE divides the work among specialized sub-networks, called experts, and routes each input to only a few of them, so only a fraction of the model's parameters is used at any one time. Scout has 17 billion active parameters spread across 16 expert modules. Meta says this architecture outperforms other well-known models, such as Google's Gemma 3 and the open-source Mistral 3.1, while running efficiently on a single Nvidia H100 GPU.
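As a rough illustration of the idea (not Meta's implementation), the sketch below shows a token-level MoE layer in PyTorch: a small router scores each token, only the top-scoring experts process it, and their outputs are combined. The layer sizes, the number of experts, and the top-k value are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: route each token to its top-k experts."""
    def __init__(self, d_model=64, d_hidden=256, num_experts=16, top_k=1):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # scores each token per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                # x: (num_tokens, d_model)
        scores = self.router(x)                          # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)             # mixing weights for chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Example: 8 tokens pass through the layer; each is handled by only top_k experts.
layer = TinyMoELayer()
tokens = torch.randn(8, 64)
print(layer(tokens).shape)  # torch.Size([8, 64])
```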

One highlight of Scout is its context window, which can hold up to 10 million tokens at once. That capacity makes it well suited to processing large volumes of text and visual information.

Maverick’s Capabilities

Maverick also receives praise from Meta for its robust performance. It has 17 billion active parameters allocated across 128 expert networks. While it performs well, especially on coding and reasoning tasks, it does not quite reach the level of leading competitors such as Google’s Gemini 2.5 Pro or OpenAI’s GPT-4.5.

Comparing Scout and Maverick

  • Scout:
    • Best for: Large documents, complex queries, and reasoning over large codebases.
    • Features: 17 billion active parameters across 16 experts; runs on a single Nvidia H100 GPU.
  • Maverick:
    • Best for: Text and visual tasks in smart-assistant applications.
    • Features: 17 billion active parameters across 128 experts; designed for versatility, but slightly behind the top competitors.

Future Developments: Llama 4 Behemoth

Meta has also hinted at an upcoming model called Llama 4 Behemoth, currently in its training phase. This model aims to be one of the most advanced language models available.

Expected Specifications

Llama 4 Behemoth is expected to feature:

  • 288 billion active parameters spread across 16 experts.
  • A total parameter count approaching two trillion (the sketch below illustrates how the two figures relate).
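For intuition, here is a hedged bit of arithmetic showing how "active" and "total" parameter counts can coexist in an MoE model, assuming a shared backbone plus one routed expert per token. The split used below is made up purely to land near the reported figures; Meta has not disclosed it.

```python
# Illustrative arithmetic only: relation between active and total parameters in a
# simplified MoE, assuming a shared backbone plus one routed expert per token.
# The shared/per-expert split below is a placeholder, not Meta's actual numbers.
def moe_param_counts(shared, per_expert, num_experts, experts_per_token=1):
    total = shared + num_experts * per_expert           # everything stored in memory
    active = shared + experts_per_token * per_expert    # parameters used per token
    return total, active

# Example with made-up numbers in the same ballpark as the reported specs:
total, active = moe_param_counts(shared=180e9, per_expert=110e9, num_experts=16)
print(f"total ≈ {total/1e12:.1f}T, active ≈ {active/1e9:.0f}B")
# total ≈ 1.9T, active ≈ 290B
```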

Early reports suggest that Behemoth could outperform high-caliber models like GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro, especially in STEM tasks, such as complex math problems. However, it still has yet to surpass Google’s Gemini 2.5 Pro overall.

Meta’s investment in Scout, Maverick, and the upcoming Behemoth signals its commitment to improving the user experience across its platforms. Users can expect sharper responses, improved image generation, and more relevant advertisements as these models roll out.
