Google Launches Gemma, a Family of Open-Source Lightweight AI Models for Developers


Google released Gemma, a new family of lightweight, open-source artificial intelligence (AI) models, on Wednesday, February 21. Two variants, Gemma 2B and Gemma 7B, have been made available to developers and researchers. The tech giant said Gemma is built from the same research and technology used to create its Gemini AI models; notably, the Gemini 1.5 model was unveiled only last week. These smaller language models can be used to build task-specific AI tools, and the company permits responsible commercial usage and distribution.

The announcement was made by Google CEO Sundar Pichai in a post on X (formerly known as Twitter). He said, “Demonstrating strong performance across benchmarks for language understanding and reasoning, Gemma is available worldwide starting today in two sizes (2B and 7B), supports a wide range of tools and systems, and runs on a developer laptop, workstation or @GoogleCloud.” The company has also created a developer-focused landing page for the models, where people can find quickstart links and code examples on its Kaggle Models page, deploy AI tools via Vertex AI (Google’s platform for developers to build AI/ML tools), or experiment with the model and adapt it to a specific domain using Colab (which requires Keras 3.0).
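
For developers following the Keras route Google points to, a minimal sketch of the quickstart flow might look like the snippet below. It assumes the KerasNLP library's Gemma support, the "gemma_2b_en" preset name, and Kaggle API credentials for downloading the weights; these details are assumptions based on the Kaggle/Keras workflow described above, not an official recipe.

    import os

    # Kaggle credentials are needed to download the Gemma weights
    # (assumption: supplied via environment variables before importing keras_nlp).
    os.environ["KAGGLE_USERNAME"] = "your_kaggle_username"
    os.environ["KAGGLE_KEY"] = "your_kaggle_api_key"

    import keras_nlp

    # Load the 2B pre-trained checkpoint and generate a short completion.
    gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
    print(gemma_lm.generate("Explain what a language model is.", max_length=64))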

Highlighting some of the features of the Gemma models, Google said that both variants are available as pre-trained and instruction-tuned versions. The models integrate with popular tools and frameworks such as Hugging Face, MaxText, NVIDIA NeMo, and TensorRT-LLM, and they can run on laptops, workstations, or Google Cloud via Vertex AI and Google Kubernetes Engine (GKE). The tech giant has also released a new Responsible Generative AI Toolkit to help developers build safe and responsible AI tools.
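
As an illustration of the Hugging Face route, here is a minimal sketch that loads a Gemma checkpoint with the Transformers library and generates text on a workstation-class machine. The Hub model ID "google/gemma-2b" and the bfloat16 setting are assumptions, and access to the weights also requires accepting Google's usage terms.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b"  # assumed Hub ID for the 2B pre-trained variant

    # Load the tokenizer and model; bfloat16 keeps the memory footprint
    # manageable on a single workstation GPU or a recent laptop.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Tokenize a prompt and generate a short completion.
    inputs = tokenizer("Summarise what an open language model is.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))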

According to benchmark results shared by Google, Gemma outperformed Meta’s Llama 2 language model on several major benchmarks, including Massive Multitask Language Understanding (MMLU), HumanEval, HellaSwag, and BIG-Bench Hard (BBH). Notably, Meta has reportedly already begun work on Llama 3.

Releasing smaller, open-source language models for developers and researchers has become a trend in the AI space. Stability AI, Meta, and MosaicML have all released open models, and Google itself has done so before with its Flan-T5 models. On one hand, open releases help build an ecosystem, since developers and data scientists outside these AI firms can experiment with the technology and create their own tools. On the other hand, the companies benefit too, as they typically offer paid deployment platforms alongside the models. Further, adoption by developers often surfaces flaws in the training data or the model that escaped detection before release, allowing the companies to improve their models.



Author: desi123
