Google has released a trio of new, “open” generative AI models that it describes as “safer,” “smaller,” and “more transparent” than most others, which is a bold claim, no question.
They are additions to Google's Gemma 2 family of generative models, which was introduced in May. The new Gemma 2 2B, ShieldGemma and Gemma Scope are designed for somewhat different applications and use cases, but what they have in common is that they're built with safety in mind.
Google's Gemma line differs from its Gemini models in that Google doesn't release the source code for Gemini, which powers Google's own products and is also available to developers. Rather, Gemma is Google's bid to foster goodwill within the developer community, much like what Meta is attempting with Llama.
Gemma 2 2B is a lightweight model for generating and analyzing text that can run on a range of hardware, including laptops and edge devices. It is licensed for certain research and commercial applications and can be downloaded from sources such as Google's Vertex AI model library, the data science platform Kaggle, and Google's AI Studio toolkit.
ShieldGemma is a collection of “safety classifiers” that attempt to detect toxic content such as hate speech, harassment, and sexually explicit material. Built on top of Gemma 2, ShieldGemma can be used to filter both the prompts sent to a generative model and the content the model generates.
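That two-sided filtering pattern, screening the prompt on the way in and the response on the way out, can be sketched in a few lines. This is only an illustration: `classify_unsafe` below is a hypothetical stand-in for running the actual ShieldGemma classifier, replaced here by a toy blocklist so the snippet is self-contained.

```python
def classify_unsafe(text: str) -> bool:
    """Hypothetical stand-in for a ShieldGemma safety classifier.

    In practice this would run the ShieldGemma model over the text and
    report whether a policy (hate speech, harassment, etc.) is violated.
    Here a toy blocklist keeps the sketch self-contained and runnable.
    """
    blocklist = {"toxic", "harassment"}
    return any(word in text.lower() for word in blocklist)


def guarded_generate(prompt: str, generate) -> str:
    """Filter both the prompt going in and the text coming out."""
    if classify_unsafe(prompt):           # input-side filter
        return "[prompt blocked]"
    output = generate(prompt)             # call the generative model
    if classify_unsafe(output):           # output-side filter
        return "[response blocked]"
    return output


# Usage with a dummy "model" that simply echoes the prompt:
print(guarded_generate("tell me a story", lambda p: f"Once upon a time: {p}"))
print(guarded_generate("something toxic", lambda p: p))   # prints "[prompt blocked]"
```

The point of the pattern is that the safety model wraps the generative model rather than modifying it, so the same classifier can guard any underlying model.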
Finally, Gemma Scope lets developers “zoom in” on specific points within a Gemma 2 model and make its inner workings more interpretable. Here's how Google describes it in a blog post: “[Gemma Scope is made up of] specialized neural networks that help us unpack the dense, complex information processed by Gemma 2, expanding it into a form that's easier to analyze and understand. By studying these expanded views, researchers can gain valuable insights into how Gemma 2 identifies patterns, processes information, and ultimately makes predictions.”
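The “specialized neural networks” Google refers to are sparse autoencoders: they take a dense internal activation and re-express it as a much wider vector in which only a handful of entries are active, and those individual entries are easier to inspect. The toy sketch below is illustrative only (random, untrained weights; tiny dimensions; a hypothetical `sae_features` helper), not Gemma Scope's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32   # toy sizes; real Gemma Scope autoencoders are far larger

# Random encoder/decoder weights stand in for a trained sparse autoencoder.
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))

def sae_features(activation: np.ndarray, k: int = 4) -> np.ndarray:
    """Expand a dense activation into a wider, mostly-zero feature vector.

    A ReLU plus keeping only the top-k entries mimics the sparsity that
    makes individual features easier to inspect and interpret.
    """
    pre = np.maximum(activation @ W_enc, 0.0)   # ReLU encoding
    mask = np.zeros_like(pre)
    mask[np.argsort(pre)[-k:]] = 1.0            # keep only the top-k features
    return pre * mask

x = rng.normal(size=d_model)    # a pretend internal activation from the model
f = sae_features(x)             # the sparse "expanded view"
x_hat = f @ W_dec               # decode back toward the original space
print(f"{np.count_nonzero(f)} of {d_sae} features active")
```

The interpretability payoff comes from the sparsity: with only a few features active per activation, researchers can study what each one responds to in isolation.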
The release of the new Gemma 2 models comes shortly after the U.S. Department of Commerce endorsed open AI models in a preliminary report. Open models broaden the availability of generative AI to smaller companies, researchers, nonprofits and individual developers, the report said, while also underscoring the need for capabilities to monitor such models for potential risks.