
DAI#46 – Skeleton key, exam cheats, and famous AI voices

Welcome to this week’s roundup of human-generated AI news.

This week AI got famous voices from beyond the grave.

Your AI memes are driving up Google’s utility bill.

And countries say ‘Let’s share our tech’ as they deploy more AI weapons.

Let’s dig in.

This one easy trick

After all the alignment work AI firms have done over the last 18 months, you’d expect that getting an AI model to misbehave would be tough by now, right?

Microsoft revealed a “Skeleton Key” jailbreak that works across different AI models and is laughably easy. But was this just a marketing exercise?

Making one model safe is difficult. What will it take to get multiple models to work together safely?

AI is becoming an enormous part of our smartphones, with solutions from multiple AI vendors being jammed together. This integration is throwing up some unusual collaborations between competitors and raises potential risks.

The way the largest AI firms demo, release, and recall their products doesn’t exactly encourage confidence.

When OpenAI does a demo, that’s how ready their product actually is. pic.twitter.com/sCnfZSJj7C

Search controversy

Google’s commitment to becoming carbon-neutral is being derailed by our hunger for AI.

The search giant’s sustainability report demonstrates the urgent need for greener AI and makes for uncomfortable reading if you’re concerned about the environment.

Google’s greenhouse gas emissions for 2023 compared with 2019 are crazy.

Search upstart Perplexity AI is embroiled in controversy. The news articles it serves up are allegedly scraped from reputable news outlets and reproduced without attribution.

In a time when AI firms play fast and loose with data, Perplexity’s CEO says the outrage over its copy-paste approach is just a “misunderstanding.”

AI exam fails

The Detroit Police Department has blamed AI for a ‘misunderstanding’ that saw an innocent man arrested for shoplifting.

They settled with the victim and made changes to their facial recognition policy. Apparently, trusting the computer even when the person it flags clearly doesn’t look like the suspect is a bad idea.

‘So officer, since you and I both agree that I don’t look like the guy in the photo, am I free to go?’


It’s not just the cops who are being fooled by AI.

Researchers at the University of Reading created fake student profiles and submitted answers generated entirely by ChatGPT to online psychology exams.

Guess how many of these cheating ‘students’ were flagged by the exam evaluators.

AI finds its voice

It’s getting increasingly difficult to tell human voices apart from AI-generated ones. A new study found that the emotion of the voice affects the odds of you spotting AI correctly.

Even when we can’t tell them apart, our brains react differently to human and AI voices. Guess which part of your brain responds when it processes an AI voice.

Out-of-work voice actors may disagree, but AI-generated voices are an inevitable part of how we’ll consume content. ElevenLabs signed deals to use the iconic voices of some famous dead celebrities in its Reader App.

Which of these voices will you choose to read your ebook to you? How would Laurence Olivier feel about being reduced to reading your emails?

Fair share?

The UN adopted a Chinese-sponsored resolution that calls for wealthier countries to share their AI technology and know-how with developing countries. The US supported the resolution, but China accused it of ignoring key commitments with its ongoing sanctions.

Will this resolution see AI advantages flow to poorer countries, or will economic and political interests scupper it?

One of the key concerns about sharing AI tech relates to defense applications. AI weapons are moving from defense contractors’ dreams to grim reality.

Ongoing conflicts are turning modern battlefields into a breeding ground for experimental AI weaponry.

Is there an ethical way to allow AI to decide who lives and who dies? What happens when it inevitably goes wrong?

By design

University of Toronto researchers built a peptide prediction model called PepFlow that beats Google’s AlphaFold 2.

Peptide drugs have key advantages over small-molecule and protein-based medicines. PepFlow could make it much easier for scientists to design the next groundbreaking medicine.

How much variation is there amongst butterflies? Are males more diverse than females? If these questions have kept you up at night then AI could help.

A new study used AI to unravel birdwing butterfly evolution, shedding light on evolutionary debates.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Figures made of spaghetti dancing ballet in a dish, realistic pic.twitter.com/PcswnYObLl

And that’s a wrap.

Have you tried the ElevenLabs Reader App? I don’t know if I’d want Judy Garland’s voice to read my emails to me. Which voices should they add next?

I recycle and try to reduce my use of plastic, but Google’s sustainability report now has me rethinking how I use AI. Are environmentally friendly prompts something we should be working on?

Let’s hope AI gives us clean energy and quantum computing so we can get back to guilt-free ChatGPT.

Let us know what you think, chat with us on X, and keep sending us cool AI stories we may have missed.
