
DAI#34 – Data grabs, hot chips, and plus-size models

Welcome to this week’s roundup of triple-distilled artisanal AI news.

This week, companies that scrape your data for free complained that others were stealing it.

Big Tech is making its own AI chips to compete with NVIDIA.

And a bunch of new AI models are flying off the shelves.

Let’s dig in.

AI data hunger games

AI’s insatiable hunger for data continues as companies scramble for text, audio, and video to train their models.

It’s an open secret that OpenAI almost certainly scraped YouTube video content to train its text-to-video model Sora. YouTube CEO Neal Mohan warned OpenAI that it may have breached the platform’s terms of service. Well, boohoo. Have you seen how cool Sora is?

You’ve got to think that Google must have cut some ethical corners to train its own models too. Having YouTube cry foul over the “rights and expectations” of the creators of videos on its platform is a bit rich.

A closer look at Big Tech’s tussle over AI training data reveals how Google amended its Google Docs privacy policies to use your data. Meanwhile, OpenAI and Meta continue to push the legal and ethical boundaries in the pursuit of more training data.

I’m not sure where the data came from for the new AI music generator Udio, but folks have been prompting it with some interesting ideas. This is proof AI should be regulated.

This is wild.

Udio just dropped and it’s like Sora for music.

The music is insane quality, 100% AI. 🤯

1. “Dune the Broadway Musical” pic.twitter.com/hzt7j32jIV

Chips ahoy

All that controversial data needs to be processed, and NVIDIA hardware is doing most of the processing. Sam takes us through NVIDIA’s rags-to-riches story from 1993 (when you should have bought shares) to now (when you would be rich), and it’s fascinating.

While companies keep lining up to buy NVIDIA chips hot out of the oven, Big Tech is trying to wean itself off NVIDIA’s chips.

Intel and Google unveiled new AI chips to compete with NVIDIA, even though they’ll still be buying NVIDIA’s Blackwell hardware.

Release the models!

It’s crazy that just over a year ago, OpenAI had the only models getting any real attention. Now there’s a constant stream of new models from the usual Big Tech suspects and smaller startups.

This week we saw three AI models released within 24 hours. Google’s Gemini Pro 1.5 now has a 1M token context window. Big context is great, but will its recall be as good as Claude 3’s?

There were interesting developments with OpenAI enabling API access to GPT-4 with vision, and Mistral just gave away another powerful small model.

Politician turned Meta exec Nick Clegg spoke at Meta’s AI event in London to champion open-source AI. Clegg also said that Meta expects Llama 3 to roll out very soon.

During discussions around AI disinformation, he bizarrely downplayed AI’s role in attempts to influence recent major elections.

Does this guy even read the news we report here at Daily AI?

What safety issues?

Geoffrey Hinton, considered the godfather of AI, was so concerned over AI safety that he quit Google. Meta’s Yann LeCun says there’s nothing to worry about. So which is it?

A Georgetown University study found that just 2% of AI research is focused on safety. Is that because there’s nothing to worry about? Or should we be concerned that researchers are intent on making more powerful AI with little thought to making it safe?

Should ‘move fast and break things’ still be AI developers’ rallying cry?

The trajectory of AI development has been exponential. In an interview this week, Elon Musk said he expects AI may be smarter than humans by the end of 2025.

AI expert Gary Marcus doesn’t agree, and he’s willing to put money on it.

$1 million says your new prediction – that AI will be smarter than any individual human by the end of 2025 – is wrong.

Game? I can suggest some rules for your approval.

Best wishes,
Gary

P.S. Note that in some respects (but not all) computers have been…

xAI is facing the same NVIDIA chip shortage challenge as many others. Musk says the 20,000 NVIDIA H100s the company has will complete Grok 2’s training by May.

Guess how many GPUs he says they’ll need to train Grok 3.

Anthropic streaks ahead

Anthropic says it develops large-scale AI systems so that they can “study their safety properties at the technological frontier.”

The company’s Claude 3 Opus is certainly at the frontier. A new study shows the model blows the rest of the competition away at summarizing book-length content. Even so, the results show that humans still have the edge in some respects.

Anthropic’s latest tests show that Claude LLMs have become exceptionally persuasive, with Claude 3 Opus generating arguments as persuasive as those written by humans.

People who play down AI safety risks often say that you could simply pull the plug if an AI went rogue. What if the AI was so persuasive that it could convince you not to?

Claude 3’s big claim to fame is its massive context window. Anthropic released a study showing that large-context LLMs are vulnerable to a “many-shot” jailbreak technique. It’s super easy to implement, and they admit they don’t know how to fix it.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Do you care that OpenAI, and likely others, may have used your YouTube content to train their AI? If Altman released Sora for free, then I’m guessing all would be forgiven. Google may disagree.

Do you think Musk is being overly optimistic with his AI intelligence predictions? I hope he’s right, but I’m a little uneasy that only 2% of AI research goes into safety.

How crazy is the number of AI models we’re seeing now? Which one is your favorite? I’m hanging onto my ChatGPT Plus account and hoping for GPT-5. But Claude Pro is looking very tempting.

Let us know which article stood out for you, and keep sending us links to any juicy AI stories we may have missed.
