Google releases Gemma 4 – four open AI models
Google has introduced the Gemma 4 family of open models, supporting more than 140 languages and distributed under the Apache 2.0 license.
Google announced a new family of language models, Gemma 4, built on the Gemini 3 architecture introduced at the end of last year. The lineup includes four versions that differ in parameter count.
| Version | Parameters | Purpose |
|---|---|---|
| Effective‑2B | 2 billion | Resource‑constrained devices (smartphones and similar) |
| Effective‑4B | 4 billion | The same class of devices, with more headroom |
| Mixture‑of‑Experts‑26B | 26 billion | More compute‑intensive systems |
| Dense‑31B | 31 billion | The “heaviest” model |
Google claims an "unprecedented intelligence-to-parameter ratio," and benchmark results back this up: the 31‑ and 26‑billion‑parameter models ranked third and sixth on Arena AI’s text-task leaderboard, outperforming competitors twenty times their size (Engadget).
What the models can do
* Video and image processing – all versions can handle multimedia input.
* Audio and speech – the smaller models (Effective‑2B/4B) understand audio data and can recognize speech.
* Offline code generation – Gemma 4 can generate code without an internet connection, which is convenient for coding on the go.
* Multilingualism – support for more than 140 languages.
Licensing
Gemma 4 is distributed under the Apache 2.0 license, which allows free use, modification, and sale of software built on these models. Unlike previous versions, which were released under Gemma’s own license, Apache 2.0 opens up broader customization possibilities.
> “This open license provides developers with full flexibility and digital sovereignty: you fully control data, infrastructure, and models. You can freely build and securely deploy applications in any environment—on‑premises or cloud,” Google representatives emphasized.
Where to try
The models are available on Hugging Face, Kaggle, and Ollama, where developers can start experimenting right away.