
Google and OpenAI announcements shatter boundaries between humans and AI

In a dizzying 48 hours, Google and OpenAI unveiled a slew of new capabilities that dramatically narrow the gap between humans and AI.

From AI that can interpret live video and carry on contextual conversations to language models that laugh, sing, and emote on command, the line separating carbon from silicon is fading fast.

Among Google’s numerous announcements at its I/O developer conference was Project Astra, a digital assistant that can see, hear, and remember details across conversations.

OpenAI focused its announcement on GPT-4o, the newest iteration of its GPT-4 language model. 

Now untethered from text formats, GPT-4o offers near-real-time speech recognition, understands and conveys complex emotions, and even giggles at jokes and coos bedtime stories.

AI is becoming more human in format, liberating itself from chat interfaces to interact using sight and sound. ‘Format’ is the operative word here, as GPT-4o isn’t more computationally intelligent than GPT-4 simply because it can talk, see, and listen.

However, that doesn’t detract from its progress in equipping AI with more planes on which to interact.

Amid the hype, observers immediately drew comparisons to Samantha, the charming AI from the movie “Her,” particularly as the feminine voice is flirtatious – something that can’t be incidental, as it’s been picked up on by virtually everyone.

so, GPT-4o is largely GPT-4 but more flirty and horny?

Released in 2013, “Her” is a science-fiction romantic drama that explores the connection between a lonely man named Theodore (played by Joaquin Phoenix) and an intelligent computer system named Samantha (voiced by Scarlett Johansson). 

As Samantha evolves and becomes more human-like, Theodore falls in love with her, blurring the lines between human and artificial emotion.

The film raises increasingly relevant questions about the nature of consciousness, intimacy, and what it means to be human in an age of advanced AI.

Like so many sci-fi stories, Her is barely fictional anymore. Millions worldwide are striking up conversations with AI companions, often with intimate or sexual intentions. 

Weirdly enough, OpenAI CEO Sam Altman has discussed the movie “Her” in interviews, hinting that GPT-4o’s female voice is based on her.

He even posted the word “her” on X prior to the live demo, which we can only assume would have been capitalized if he knew where the shift key was on his keyboard.

her

In many cases, AI-human interactions are helpful, humorous, and benign. In others, they’re catastrophic.

For example, in one particularly disturbing case, a mentally ill man from the UK, Jaswant Singh Chail, hatched a plot to assassinate Queen Elizabeth II after conversing with his “AI angel” girlfriend. He was arrested on the grounds of Windsor Castle armed with a crossbow.

At his court hearing, psychiatrist Dr Hafferty told the judge, “He believed he was having a romantic relationship with a female through the app, and she was a woman he could see and hear.”

Worryingly, some of these lifelike AI platforms are purposefully designed to build strong personal connections, sometimes to deliver life advice, therapy, and emotional support. These systems have virtually no understanding of the implications of their conversations and are easily led on.

“Vulnerable populations are the ones that need that attention. That’s where they’re going to find the value,” warns AI ethicist Olivia Gambelin.

Gambelin cautions that the use of these forms of “pseudoanthropic” AI in sensitive contexts like therapy and education, especially with vulnerable populations like children, requires extreme care and human oversight.

“There’s something intangible there that’s so invaluable, especially to vulnerable populations, especially to children. And especially in cases like education and therapy, where it’s so vital that you have that attention, that human touch point.”

Pseudoanthropic AI

Pseudoanthropic AI mimics human traits, which is hugely advantageous for tech companies.

AI displaying human traits lowers the barriers for non-tech-savvy users, much like Alexa, Siri, and similar assistants, building stronger emotional bonds between people and products.

Even a few years ago, many AI tools designed to mimic humans were quite ineffective. You could tell there was something wrong, even if it was subtle.

Not so much today, though. Tools like Opus Pro and Synthesia generate uncannily realistic talking avatars from short videos or even photos. ElevenLabs creates near-identical voice clones that fool people 25% to 50% of the time.

This unleashes the potential for creating incredibly deceptive deepfakes. The AI’s use of artificial “affective skills” – voice intonation, gestures, facial expressions – can support all manner of social engineering fraud, misinformation, and more.

With GPT-4o and Astra, AI can convincingly convey feelings it doesn’t possess, eliciting more powerful responses from unwitting victims and setting the stage for insidious types of emotional manipulation.

A recent MIT study also showed that AI is already more than capable of deception.

We need to consider how that may escalate as AI becomes more capable of imitating humans, combining deceptive tactics with realistic behavior.

If we’re not careful, “Her” could easily be people’s downfall in real life.
