
DAI#24 – Brain chips, clones and Swifties fight back

Welcome to this week's roundup of hand-crafted AI news.

This week we found out that AI can't help you build a bioweapon after all. Or maybe it can.

Tay Tay's army of Swifties fought back against fake AI porn.

And AI has made us trust politicians even less than we already did.

Let's dive in.

Swift injustice

Taylor Swift became the target of explicit AI deepfake images this week. The understandable outrage that followed was also directed at platforms like X, which seemed unable to stop the content from spreading.

There was a widespread response from both industry and the general public. Taylor's army of Swifties went into Sherlock Holmes mode. They tracked down the person allegedly behind the images and outed him after he confidently claimed: "They'll never find me."

It's just too easy for anyone to create images like this using AI. It's now even easier with InstantID. The model enables AI image generators to reproduce a person's likeness from a single photo of their face.

The InstantID research paper was published before the Swiftgate drama, but guess whose face the researchers used as an example?



It's official: we can't believe our eyes and ears anymore. Sam's roundup of the political deepfakes we've seen in recent months illustrates advances in both the scale and the opportunities that AI offers fraudsters.

AI voice clones have improved dramatically. We've moved from robotic, monotone attempts to full imitation of tone and emotion. George Carlin's fake comedy video on YouTube was proof of that.

Carlin's estate is now suing the makers of the AI fake comedy show, with some surprising admissions from the people behind the video.

Deepfake audio is becoming easier to create and harder to detect. The democratization of these AI tools means the average person on the street can now find themselves a target. A Baltimore school principal says the voice in an offensive audio recording is not his but an AI fake. You decide.

My first instinct was, "He's lying," but then I saw what an audio forensics expert said about the clip.

Open and shut

What's in a name? The "Open" in OpenAI no longer seems to mean what it once did. OpenAI's departure from its namesake and founding principles makes for interesting reading.

The company claims it is still transparent, as long as people don't ask questions about its financial records, model training data, conflict-of-interest policies, the reasons for Altman's firing, etc. You get the idea.

what 2024 will look like pic.twitter.com/bvgBFSAW4H

OpenAI has been open about its ambitions to produce its own AI chips. Altman quietly flew to South Korea to seek help from Samsung and several other chipmakers.

OpenAI's opaque operations may be more in line with the secretive nature of leaders further north of Seoul.

A report that uncovered the dynamics of North Korea's resurgent AI industry shows that AI plays a bigger role there than some might have thought. I suspect Kim Jong Un is a big fan of Meta's open-source strategy.

The Biden administration is now requiring cloud companies to report foreign users. If you're a computer scientist in North Korea, you may want to use a good VPN when connecting to AWS.

GPT-4 is getting smarter

When you ask GPT-4 to help you brainstorm, the ideas can get a little repetitive. Researchers have developed some clever "prompt engineering" strategies to address this problem.

If you're looking for creative entries to add to your resume, ChatGPT can help with that too. It turns out that AI is widely used among job applicants and encouraged by hiring managers.

This week we found something else GPT-4 is good at. Researchers found that GPT-4 agreed with experienced physicians on recommended treatments for stroke victims.

Patients with paralysis or ALS could soon benefit from another Elon Musk project. Musk announced that Neuralink has performed its first brain implant on a human.

This could eventually enable direct communication between the brain and devices such as phones or computers. Are we living through the prequel to "The Matrix"?


Musk has also been trying to raise $6 billion to take his AI project xAI to the next level. Look at what the man has done so far and then just give him the money. When does this guy sleep?

Safety first

The RAND Corporation took a lot of criticism for an October report which said LLMs "could" help bad actors create a bioweapon. Its latest report says that may not be true after all. Then OpenAI conducted its own study, which concluded that a special version of GPT-4 could help the bad guys slightly.

A very real danger could come from AI agents roaming the internet unsupervised. Researchers outlined the potential dangers and suggested three measures that could improve the visibility of AI agents and make them safer.

Is your superpower pointing out other people's flaws? The CDAO and DoD are running events to identify bias in language models. They'll even pay you a bounty for finding bias errors.

AI in the EU

The EU AI Act Summit 2024 starts next week. The summit will be a great opportunity to discuss AI regulation proposals and address the EU AI Act and its global implications.

Some civil rights groups are calling for the EU to investigate OpenAI and Microsoft. The huge amount of money Microsoft has invested in OpenAI raises questions about its impact on competition in the AI sector.

It's hard to argue with that, as Microsoft is expected to post its best quarterly revenue growth in two years, much of it driven by AI developments that OpenAI has helped shape.

The Italian Data Protection Authority has raised privacy concerns over ChatGPT's gaffes in disclosing personal data and the implications of defamatory hallucinations.

In other news…

Here are some other clickworthy AI stories we liked this week:

“All your models belong to us.”

This is the end game of intense lobbying efforts to adopt regulatory mandates by those who want to sell "AI safety as a service."

For innovation to thrive, models should be permissionless. This is a form of neural net taxation and will be a net drag. pic.twitter.com/wpP7zwMCv3

And that's a wrap.

To the Swifties in our audience: we hope you've recovered from your traumatic week. Were you browsing X when you accidentally stumbled across the AI images? Or did you have to work hard to find them online?

I don't think anyone will be making fake nudes with my face, but I may be more careful about who I send voice notes to in the future. AI voice cloning is getting crazy.

Have you signed up for the Neuralink trial? Would you let Elon Musk put a chip in your brain? Musk blew up a few SpaceX rockets before getting those right. I think I'll wait until they fix the bugs.

Let us know what you think and send us links to interesting AI stories we may have missed.
