
Google says its recent AI models can recognize emotions – and that's worrying experts

Google says its newest family of AI models has a curious feature: the ability to “identify” emotions.

Announced Thursday, the PaliGemma 2 family of models can analyze images, allowing the AI to generate captions and answer questions about the people it “sees” in photos.

“PaliGemma 2 generates detailed, contextually relevant captions,” Google wrote in a blog post shared with TechCrunch, “and goes beyond simple object identification to describe actions, emotions, and the overall narrative of the scene.”

According to Google, PaliGemma 2 is built on its open Gemma family of models, specifically the Gemma 2 series.
Image credit: Google

Emotion detection doesn’t work out of the box, and PaliGemma 2 has to be fine-tuned for the purpose. Still, the experts TechCrunch spoke with were concerned about the prospect of a publicly available emotion detector.

“This concerns me very much,” Sandra Wachter, a professor of information ethics and AI at the Oxford Internet Institute, told TechCrunch. “I find it problematic to assume that we can ‘read’ people’s emotions. It’s like asking a Magic 8 Ball for advice.”

For years, startups and tech giants alike have been trying to develop AI that can recognize emotions for everything from sales training to accident prevention. Some claim to have achieved it, but the science rests on shaky empirical foundations.

Most emotion detectors are modeled on the early work of Paul Ekman, a psychologist who theorized that humans share six basic emotions: anger, surprise, disgust, joy, fear and sadness. Subsequent studies have cast doubt on Ekman’s hypothesis, however, showing that there are major differences in the way people from different backgrounds express their feelings.

“Emotion recognition is generally not possible, because people experience emotions in complex ways,” Mike Cook, a research fellow at Queen Mary University who specializes in AI, told TechCrunch. “Of course, we think we can tell what other people are feeling by looking at them, and plenty of people have tried over the years, such as spy agencies or marketing companies. I’m sure it’s absolutely possible to detect some generic signifiers in some cases, but it’s not something we can ever fully ‘solve.’”

The unsurprising consequence is that emotion recognition systems tend to be unreliable and shaped by the assumptions of their developers. In a 2020 MIT study, researchers showed that facial analysis models can develop unintended preferences for certain expressions, such as smiling. More recent work suggests that emotion analysis models attribute more negative emotions to Black people’s faces than to white people’s faces.

Google says it conducted “extensive testing” to evaluate demographic biases in PaliGemma 2 and found “low levels of toxicity and profanity” compared to industry benchmarks. However, the company didn’t provide the full list of benchmarks it used, nor did it specify which types of tests were conducted.

The only benchmark Google has disclosed is FairFace, a set of portrait photos of tens of thousands of people. The company says PaliGemma 2 performed well on FairFace. But some researchers have criticized the benchmark as a bias metric, noting that FairFace represents only a handful of racial groups.

“Interpreting emotions is a fairly subjective matter that extends beyond the use of visual aids and is heavily embedded in a personal and cultural context,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence. “AI aside, research has shown that we cannot infer emotions from facial features alone.”

Emotion recognition systems have drawn the ire of regulators overseas, who have sought to limit the technology’s use in high-risk contexts. The AI Act, the major piece of AI legislation in the EU, prohibits schools and employers from deploying emotion detectors (but not law enforcement agencies).

The biggest fear around open models like PaliGemma 2, which is available from a number of hosts including the AI development platform Hugging Face, is that they will be misused or abused, which could lead to real-world harm.

“If this so-called ‘emotional identification’ is built on pseudoscientific assumptions, there are significant implications for how this capability might be used to further – and falsely – discriminate against marginalized groups, for instance in law enforcement, recruiting, border management, and so on,” Khlaaf said.

Asked about the dangers of releasing PaliGemma 2 publicly, a Google spokesperson said the company stands behind its testing for “representational harms” as it relates to visual question answering and captioning. “We have conducted robust evaluations of the PaliGemma 2 models concerning ethics and safety, including child safety and content safety,” they added.

Wachter isn’t convinced that’s enough.

“Responsible innovation means thinking about consequences from the first day you walk into your lab and continuing to do so throughout a product’s lifecycle,” she said. “I can imagine countless potential problems (with models like this) that could lead to a dystopian future in which your emotions determine whether you get a job, a loan, or admission to school.”
