
Google Launches Gemma 4, the Most Capable Open Model Yet

AI News Stories of the Week

Annie Neal

Growth Advisor


Google DeepMind released Gemma 4 on April 2, 2026, and it represents the most significant update to the company’s open-weight AI model family to date. Built on the same research that powers Gemini 3, Gemma 4 ships in four distinct sizes, from models small enough to run on a smartphone to a 31B dense model that currently ranks third on the Arena AI open model leaderboard. For developers, startups, and enterprises across the globe, this release changes the open-source AI calculus in a fundamental way.

The model lineup includes the E2B variant with 2.3 billion effective parameters, the E4B at 4.5 billion effective parameters, a 26B Mixture-of-Experts model with 4 billion active parameters and context windows of up to 256K tokens, and the flagship 31B dense model. All variants support native processing of text, vision, and audio, along with support for more than 140 languages. The performance improvements over Gemma 3 are dramatic: the AIME 2026 math benchmark jumps from 20.8% to 89.2%, LiveCodeBench coding scores leap from 29.1% to 80.0%, and GPQA science scores climb from 42.4% to 84.3%.

But the most consequential change has nothing to do with benchmarks. Gemma 4 ships under the Apache 2.0 license, a fully permissive open-source license that allows unrestricted commercial use, modification, and redistribution. Previous Gemma versions carried custom licenses with restrictions that blocked many enterprise deployments. With Apache 2.0, companies can now run these models on their own infrastructure, fine-tune them for proprietary use cases, and redistribute modified versions without paying royalties or surrendering data.

This licensing decision is strategic, not charitable. Google is playing a different game than its competitors. If the most capable open models carry Google's DNA, then Google Cloud becomes the natural choice when the companies building on them want to scale. The more enterprises adopt Gemma for on-premise prototyping, the more likely they are to turn to Google Cloud for production workloads. Open source has become Google's most effective customer acquisition strategy.

For Latin American and emerging market developers, Gemma 4’s multilingual support across 140+ languages and its ability to run on consumer hardware lower the barrier to entry considerably. Teams that previously had to rely on expensive API calls to closed models can now deploy competitive AI capabilities within their own infrastructure. This is particularly relevant for healthcare, fintech, and government applications in regions where data sovereignty requirements make cloud-based AI solutions impractical.
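To make "runs on consumer hardware" concrete, here is a rough back-of-envelope sketch of weight memory at common quantization bit-widths, using the model sizes listed above. This is an assumption-laden estimate, not an official figure: it counts weights only (decimal GB) and ignores the KV cache, activations, and runtime overhead.

```python
# Rough weight-memory estimate for serving an open-weight model locally.
# Counts weights only; ignores KV cache, activations, and runtime overhead.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in decimal GB at a given quantization."""
    # 1B params at 8 bits (1 byte) each is ~1 GB.
    return params_billions * bits_per_param / 8

# Parameter counts from the article; bit-widths are common quantization choices.
models = [("E2B (effective)", 2.3), ("E4B (effective)", 4.5),
          ("26B MoE (total)", 26.0), ("31B dense", 31.0)]

for name, params in models:
    for bits in (16, 8, 4):
        gb = weight_memory_gb(params, bits)
        print(f"{name}: ~{gb:.1f} GB at {bits}-bit")
```

Under these assumptions, the 31B dense model quantized to 4-bit needs roughly 15.5 GB of weight memory, within reach of a single 24 GB consumer GPU, while the E2B variant at 4-bit comes in near 1.2 GB, which is why on-device deployment is plausible.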

Presented by: Dapta

For sales teams tired of cold leads, slow customer responses, and manual processes, Dapta is the ultimate tool.

Dapta is the leading platform for creating AI sales agents specifically designed to increase inbound lead conversion. Respond to your leads in less than a minute with voice AI and WhatsApp agents that convert.

If you want your team to sell more while AI handles the complex stuff, you have to try it.

The on-device capabilities deserve attention as well. Gemma 4’s smaller variants are purpose-built for edge deployment, enabling multi-step planning, autonomous action, and audio-visual processing without requiring an internet connection. For companies building AI-powered products that need to work offline or in low-connectivity environments, this opens up use cases that were previously impossible with open models.

With this release, Google has made a clear statement: the future of open AI is permissive, powerful, and designed to pull developers into the Google ecosystem through quality rather than restrictions.

Link here.
