
DAI#48 – Disney hacks, Red vs. Blue, and AI cancer worms

Welcome to our weekly roundup of human-generated AI news.

This week AI tested ethical and trust boundaries.

An artificial cancer worm might send your next email.

And AI policy could determine the next US election.

Let’s dig in.

AI vs. ethics and trust

As tech firms chase AI supremacy, they appear more concerned with ‘Can we?’ than ‘Should we?’

Recent actions by Microsoft, NVIDIA, and Apple show how the tech industry is skirting ethical boundaries and trading trust for data and talent.

You might like the idea of AI handling the boring parts of your job, but would you consider it a colleague?

Software company Lattice became the first to give a measure of employee rights to its enterprise AI tools, but it didn't go down well with the humans.

AI worms

Researchers created an AI-powered 'synthetic cancer' worm to raise awareness of a new frontier in cyber threats.

The way the worm uses GPT-4 to rewrite, hide, and distribute itself is equal parts fascinating and terrifying.

NATO is taking AI threats seriously. Its recently released revised strategy offers insight into the nervous backroom discussions happening in Europe.

The strategy raises some surprising concerns and also highlights the disconnect between national and corporate AI interests.

While nation-states prepare for AI attacks from their adversaries, perhaps they should be more concerned about AI that doesn't share their views on patriotism or borders.


Red pill or Blue pill?

As the US election hots up, Trump allies are preparing a “Make America First in AI” framework to roll back Biden’s regulations and kick off a series of AI “Manhattan Projects”.

Could AI policy swing the powerful tech sector vote from Blue to Red? The proposed policy would remove many of the regulations that AI developers currently face.

Politics may be important, but in Silicon Valley, money is king.

Another ongoing US battle sees AMD fighting NVIDIA for a slice of the AI pie. AMD bought the private Finnish AI lab Silo AI in a $665 million cash deal that gives it an edge it didn't have before.

Doing business in the EU is getting increasingly tricky for AI firms due to mounting data regulations. Meta is expected to release its big Llama 3 400B multimodal model next week but says it won't be making it available in the EU.

Taking the Mickey

Hacktivists stole a bunch of corporate and artistic data from Disney’s internal Slack channels.

The hackers claim their actions were in protest of artists' rights being compromised as Disney and other firms increasingly embrace AI in their creative processes.

Disney will no doubt publicly support their human artists while in the boardroom they'll whisper, 'Hey, have you seen how much money we can save if we replace people with AI?'


Will we see similar cyber protest action as AI competes with music artists?

This week AI made it easier for us to explore and find new songs to listen to. YouTube Music and Deezer are testing new AI-powered search tools that let you describe the playlist you want, or even hum to search for that song title you can't quite remember.

Playing doctor

AI helps to diagnose diseases, create new drugs, and analyze medical imaging. But in the excitement of these advancements, are we missing something important?

Scientists are calling for ethical guidelines to govern LLMs as they play wider roles in healthcare. Could we have some of those for Big Pharma CEOs too, please?

Researchers attempted to use AI to help resolve the debate over the relationship between biological sex and gender identity.

When they used AI to analyze children's fMRI brain scans, they had interesting results in predicting biological sex and self-reported gender.

The human brain and AI models are frustratingly similar in one respect: they're often inscrutable black boxes.

When ChatGPT gives you the right answer, how does it arrive at it? Are AI models capable of reasoning, or do they simply recite and rework their training data?

Researchers performed some interesting experiments to answer that question.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

⚡️ Excited to share that I'm starting an AI+Education company called Eureka Labs.
The announcement:

We are Eureka Labs and we are building a new kind of school that is AI native.

How can we approach an ideal experience for learning something new? For example, in the case… pic.twitter.com/RHPkqdjB8R

And that’s a wrap.

Do you think researchers should be creating AI-powered 'synthetic cancer' worms to show what bad actors could potentially make?

It definitely has virus gain-of-function research vibes. An AI lab leak seems almost inevitable if it hasn’t already happened.

If you're voting in the US election, will AI policy be enough to change your vote? Two really old guys deciding AI development policy may not be the best way to go either way.

This week was decidedly light on Meta, Google, and OpenAI news. Could we be in for a bumper crop next week?

Let us know what you think, connect with us on X, and please send us juicy AI links we may have missed.
