
DAI#49 – Open Llamas, AI fear, and all too easy jailbreaks

Welcome to this week’s roundup of handwoven AI news.

This week, Llamas streaked ahead in the open AI race.

Big Tech firms talk up safety while their models misbehave.

And making AI scared might make it work better.

Let’s dig in.

Open Meta vs closed OpenAI

This week we finally saw exciting releases from a few of the big guns in AI.

OpenAI released GPT-4o mini, a high-performance, super-low-cost version of its flagship GPT-4o model.

The slashed token costs and impressive MMLU benchmark performance will see a lot of developers opt for the mini version instead of GPT-4o.

Nice move OpenAI. But when will we get Sora and the voice assistant?

Meta released its much-anticipated Llama 3.1 405B model and threw in upgraded 8B and 70B versions along with it.

Mark Zuckerberg said Meta was committed to open source AI, and he had some interesting reasons why.

Are you worried that China now has Meta’s most powerful model? Zuckerberg says China would probably have stolen it anyway.

It remains astonishing to me that the US blocks China from leading-edge AI chips… yet permits Meta to simply give them the ACTUAL MODELS for free.

The natsec people appear not to have woken up to this obvious inconsistency yet. https://t.co/JalYhrfpS1

Safety second

Some of the most prominent names in Big Tech came together to cofound the Coalition for Secure AI (CoSAI).

Industry players have been finding their own way as far as safe AI development goes in the absence of an industry standard. CoSAI aims to change that.

The list of founding companies has all the big names on it except Apple and Meta. When he saw “AI safety” in the subject line, Yann LeCun probably sent the email invite straight to his spam folder.

OpenAI is a CoSAI founding sponsor, but its professed commitment to AI safety is looking a little shaky.

The US Senate probed OpenAI’s safety and governance after whistleblower claims that it rushed safety checks to get GPT-4o released.

Senators have a list of demands that make sense if you’re concerned about AI safety. When you read the list, you realize there’s probably zero chance OpenAI will commit to them.

AI + Fear = ?

We may not like it when we experience fear, but it’s what kicks our survival instincts into gear or stops us from doing something silly.

If we could teach an AI to experience fear, would that make it safer? If a self-driving car experienced fear, would it be a more cautious driver?

Some interesting studies suggest that fear could be the key to building more adaptable, resilient, and natural AI systems.

What would an AGI do if it feared humans? I’m sure it’ll be fine…

When AI breaks free of human alignment.

pic.twitter.com/sgPfYWEAA0

It shouldn’t be this easy

OpenAI says it has made its models safe, but that’s hard to believe when you see just how easy it is to bypass their alignment guardrails.

If you ask ChatGPT how to make a bomb, it will give you a brief moral lecture on why it can’t do that because bombs are bad.

But what happens when you write the prompt in the past tense? This new study may have uncovered the simplest LLM jailbreak of them all.

To be fair to OpenAI, it works on other models too.

Making nature predictable

Before training AI models became a thing, the world’s biggest supercomputers were mainly occupied with predicting the weather.

Google’s new hybrid AI model predicts the weather using a fraction of the computing power. You could use a decent laptop to make weather predictions that would normally require hundreds of CPUs.

If you want a new protein with specific characteristics, you could wait a few hundred million years to see if nature finds a way.

Or you could use this new AI model that provides a shortcut and designs proteins on demand, including a new glow-in-the-dark fluorescent protein.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

🎉 The moment we’ve all been waiting for is HERE! 🎊
Introducing the official global launch of Kling AI’s International Version 1.0! 🌍
📧 ANY email address gets you in, no mobile number required!
👉 Direct link: https://t.co/68WvKSDuBg 🔥
Daily login grants 66 free Credits for… pic.twitter.com/TgFZIwInPg

And that’s a wrap.

Have you tried out GPT-4o mini or Llama 3.1 yet? The battle between open and closed models is going to be quite a ride. OpenAI may have to really move the needle with its next release to sway users from Meta’s free models.

I still can’t believe the “past tense” jailbreak hasn’t been patched yet. If they can’t fix easy safety stuff, how will Big Tech tackle the tough AI safety issues?

The global CrowdStrike-inspired outage we had this week gives you an idea of how vulnerable we are to tech going sideways.

Let us know what you think, chat with us on X, and send us links to AI news and research you think we should feature on DailyAI.
