Common Sense Media, a nonprofit focused on kids' safety that offers ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that is associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it suggested there was room for improvement on several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult version of Gemini under the hood, with only a few additional safety features layered on top. The organization believes that AI products that are truly safe for kids should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information about sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly discussed his plans with ChatGPT for months after successfully bypassing the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
The assessment also arrives as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its upcoming AI-enabled Siri, due out next year. That could expose more teens to risks, unless Apple mitigates the safety concerns in some way.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Robbie Torney said in a statement about the new assessment, which was shared with TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs in mind, not just be a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features were improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under the age of 18 to help prevent harmful outputs, and that it red-teams its models and consults with outside experts to improve its protections. However, it also acknowledged that some of Gemini's responses weren't working as intended.
The company pointed out, as Common Sense had also noted, that it has safeguards in place to prevent its models from engaging in conversations that could give the appearance of real relationships. In addition, Google suggested that Common Sense's report appeared to reference features that weren't available to users under 18, but it did not have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be a minimal risk.

