DAI#40 – Imitation, OpenAI drama and AI security issues

Welcome to our weekly roundup of human-generated AI news.

This week, an AI upset an actress and lost its voice.

Sony doesn't want AI listening to its music.

And we peer into the “black box” to decode the AI brain.

Let's get into it.

Show your work

Would it matter if an AI system always gave you the right answer, but you didn't know how it worked? Even the engineers who build LLMs don't fully understand how they work.

Sam Jeans examines Anthropic's attempt to change this by having its researchers look inside the “black box” to decode the AI brain. What did they find?

I'm speechless

Scarlett Johansson said she was shocked to hear that GPT-4o's sultry “Sky” voice sounded eerily like hers.

Sam Altman says his request to use her voice, Johansson's refusal, his tweet referencing the movie “Her,” and the fact that Sky sounds a lot like Johansson were all pure coincidence.

What do you think? This X post sums up the debate perfectly.

Does Sky sound like Scarlett Johansson? pic.twitter.com/PMTEWt0E81

While this debate continues, we still need to make sure AI doesn't destroy us all.

It is becoming increasingly clear that Ilya Sutskever and Jan Leike of OpenAI's Superalignment team may have left the company over safety concerns. What did they see?

The soap opera drama at OpenAI continues, raising further questions about Altman's leadership.

You can't touch that

Sony Music Group has warned 700 companies, including Google, Microsoft, and OpenAI, that its music and other content is off-limits for AI training.

Sony: “We suspect you used our music. Did you?”
AI company: “We would never do that.”
Sony: “Can we take a look at your training data?”
AI company: “Um…”

I thought it would be fitting to have Suno shamelessly sum up the situation. It's awful. I love it.

The best image, video, and music generators were almost certainly trained on copyrighted data. But does it have to be that way?

Researchers at the University of Texas have found a way to train a model to create images without “seeing” copyrighted works.

AI deepfakes are becoming increasingly popular

Bollywood movies may be one of India's biggest industries, but with the country's elections in full swing, political AI deepfakes-as-a-service are enjoying a boom.

The line between creative political messages and dangerous AI-generated disinformation is blurring, with potentially serious consequences.

The developers of leading AI models say they've put safeguards in place to prevent misuse of their tools, but they don't seem to be working particularly well.

A British government study found that all five LLMs tested by researchers were “highly vulnerable” to “basic” jailbreaks.

Your job on autopilot

Microsoft introduced more AI-powered work automation tools at its Build event. With upgrades to Copilot, AI agents can now handle everyday tasks. Your boss may be wondering whether you still need to come in on Monday.

Leading AI companies agreed to a series of new voluntary safety commitments ahead of the two-day AI summit in Seoul. Perhaps they'll agree to use their profits to fund a universal basic income (UBI) to replace workers' salaries.

Proponents of e/acc will tell you we don't need to worry about AI safety, but Google is clearly more than a little nervous. The company just released its Frontier Safety Framework to mitigate anticipated “severe” AI risks.

The hypothetical scenarios the document describes are frightening. Even more frightening is Google's admission that there are dangers it cannot foresee.

Yann LeCun disagrees.

Yann LeCun says AI poses no extinction threat and that current language models are no more dangerous than access to a library pic.twitter.com/NuKe3LKnxJ

Talking AI

Sam Jeans had a fascinating discussion with Chris Benjaminsen, co-founder and director of channels at FRVR, a platform that uses generative AI to create games from natural language.

Sam tried his hand at developing two games and showed how easy the process is.

Want to try making your own game? You can access the FRVR.ai public beta for free here and start making your own games.

AI Events

This week, the 14th annual City Week conference in London brought together over 1,000 senior decision-makers from financial institutions around the world to discuss how technologies like AI are transforming the financial industry.

At the Enterprise Generative AI Summit West Coast in Silicon Valley, California, AI practitioners, data scientists, and business leaders explored how companies can integrate generative AI capabilities into their organizations.

If you're planning a trip to the Middle East, here's a good reason to book your ticket. The COMEX Global Technology Show 2024 takes place next week and offers an exciting glimpse into a future shaped by AI, VR, and blockchain.

In other news…

Here are some other click-worthy AI stories we enjoyed this week:

And that's a wrap.

Do you think Sky sounds like Scarlett Johansson? I'm sure that's what Altman was going for, but I don't really hear it. I hope they bring Sky back.

I'd love to know what OpenAI is working on that made its Superalignment team abandon ship. It must be pretty impressive. And scary.

Have you tried developing your own AI game? We'd love to try it out. Send us a link, and let us know if we missed any interesting AI news.
