Google Launches Gemma 4: Most Capable Open AI Models Under Apache 2.0


Google DeepMind has released Gemma 4, its most capable family of open AI models yet, and, for the first time, under the Apache 2.0 license, one of the most permissive open-source licenses in common use. It is a significant strategic move that positions Google directly against Meta's Llama and the wave of Chinese open models from Qwen and others.

Demis Hassabis, head of Google DeepMind, called them "the best open models in the world for their respective sizes" — and the benchmarks suggest he is not exaggerating.

Four Sizes, Two Tiers

Gemma 4 comes in four model sizes across two tiers:

| Model | Parameters | Context | Best for |
| --- | --- | --- | --- |
| E2B (Edge) | 2.3B effective | 128K tokens | Phones, IoT, Raspberry Pi |
| E4B (Edge) | 4.5B effective | 128K tokens | Laptops, embedded devices |
| 26B MoE | 26B total / 3.8B active | 256K tokens | Low-latency inference |
| 31B Dense | 31B | 256K tokens | Raw performance, reasoning |

The edge models (E2B and E4B) support text, image, and audio inputs — making them true multimodal models that can run on a phone or even a Raspberry Pi. The workstation models (26B MoE and 31B Dense) handle text and image inputs with 256K-token context windows, designed for serious reasoning and coding tasks.
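The efficiency gap between the two workstation models can be sketched with a back-of-the-envelope estimate, using the common approximation that a transformer forward pass costs roughly 2 FLOPs per active parameter per token. The parameter counts come from the spec table above; the formula is a rule of thumb, not a measured benchmark:

```python
# Back-of-the-envelope per-token compute for the two workstation models.
# Rule of thumb: a transformer forward pass costs ~2 FLOPs per ACTIVE
# parameter per token. In the MoE model, only the routed experts run,
# so just 3.8B of its 26B parameters contribute to each token.

def gflops_per_token(active_params_billions: float) -> float:
    """Approximate forward-pass cost per token, in GFLOPs."""
    return 2.0 * active_params_billions

dense_31b = gflops_per_token(31.0)  # ~62 GFLOPs per token
moe_26b = gflops_per_token(3.8)     # ~7.6 GFLOPs per token

print(f"MoE does ~{dense_31b / moe_26b:.1f}x less compute per token")
# → MoE does ~8.2x less compute per token
```

That roughly 8x reduction in per-token compute is why the 26B MoE is positioned for low-latency inference while the 31B dense model targets raw quality.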

Benchmark Performance

The numbers are impressive:

  • Gemma 4 31B ranks #3 among all open models (#27 overall) on the LM Arena leaderboard, matching models 10x its size
  • 85.2% on MMLU Pro (advanced reasoning)
  • 89.2% on AIME 2026 (mathematical reasoning)
  • 80.0% on LiveCodeBench v6 (coding)
  • 2,150 Elo rating on Codeforces (competitive programming)
  • The 26B MoE ranks #6 on LM Arena with only 3.8B active parameters, making it remarkably efficient

That is a massive jump over Gemma 3 27B, with an 87-point score improvement on the LM Arena leaderboard.
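For context, a rating gap like that translates into an expected head-to-head win rate. A minimal sketch, assuming the Arena score behaves like a standard logistic Elo rating (Arena actually fits a closely related Bradley-Terry model, but the win-probability curve has the same shape):

```python
# Expected win probability implied by an Elo-style rating gap.
# Standard logistic Elo formula: P(win) = 1 / (1 + 10^(-diff / 400)).

def elo_win_prob(diff: float) -> float:
    """Probability the higher-rated model wins a single head-to-head vote."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# The 87-point gap reported between Gemma 4 31B and Gemma 3 27B:
print(round(elo_win_prob(87), 3))  # → 0.623, i.e. a ~62% expected win rate
```

In other words, under this model the new release would be preferred in roughly three of every five blind comparisons against its predecessor, a large margin for a single generation.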

Why Apache 2.0 Matters

Previous Gemma releases used Google's custom license, which imposed restrictions on commercial use and redistribution. The switch to Apache 2.0 removes those barriers entirely:

  • Full commercial use without restrictions
  • No attribution requirements beyond the license itself
  • Freedom to modify, distribute, and build derivative works
  • Enterprise-friendly — legal teams love Apache 2.0

As VentureBeat noted, the license change may matter more than the benchmarks. Apache 2.0 is what enterprises need to confidently deploy open models in production without worrying about license complications.

The Strategic Play: Android for AI

Google appears to be running its Android playbook on AI inference. By releasing the most capable open models under the most permissive license, Google is positioning Gemma as the default choice for developers building AI applications — just as Android became the default mobile OS by being open and free.

The timing is notable: Meta's Llama hasn't shipped a competitive open model in over a year, and Chinese labs like Qwen and Zhipu are pulling back from fully open releases. Google is stepping into a vacuum.

Bottom Line

Gemma 4 is not just an incremental update — it is Google's strongest statement yet that the future of AI is open. The combination of frontier-class performance, multimodal edge models that run on a Raspberry Pi, and the most permissive license in the Gemma family makes this a significant release for developers, enterprises, and the broader AI ecosystem.

Download and try Gemma 4 at ai.google.dev/gemma.