DAI#45 – New top model, lawsuit blues and confused AI

Welcome to the weekly roundup of handpicked, tailored AI news.

This week, Anthropic pushed OpenAI out of pole position.

AI audio generators must face accountability in court.

And the best LLMs fumble a puzzle that your kids can solve.

Let's start.

Claude vs. GPT-4o

After months of AI models claiming they were “almost nearly as good as GPT-4,” we finally have a model that knocks OpenAI off the highest spot on the leaderboards.

Anthropic released Claude 3.5 Sonnet, an upgraded version of its mid-range Claude model. Benchmark results, including MMLU, show it beating GPT-4o and Google's Gemini 1.5 Pro in almost every test.

With the soon-to-be-released, much more powerful Claude 3.5 Opus on the way, the question is how OpenAI will respond.

Claude 3.5 Sonnet is just not like the other LLMs 💁‍♀️

11 impressive demos of the brand new model: pic.twitter.com/2oHZdArz6J

After Meta canceled the launch of Meta AI in the EU, Apple is doing the same, citing the region's strict regulations.

Apple has unveiled its Apple Intelligence features, but they'll be withheld in the EU, leaving European tech fans to watch the rest of the world take the lead.

Sounds familiar…

AI companies are being sued, and for a change it's not OpenAI or Meta.

The text-to-audio platforms Suno and Udio produce impressive music, but how did they get so good?

The Recording Industry Association of America is suing the companies, claiming they "stole copyrighted sound recordings" to train their AI. If the judge listens to the sample clips, it could be a short day in court.

An AI company using copyrighted material to train its models without paying the creators? That's no more surprising to us than it is to you.

However, recreating copyrighted music isn't the worst thing AI is used for. According to a DeepMind study, the most common type of AI misuse is bad actors creating deepfakes to manipulate opinion.

The rest of the list of AI misuses makes for interesting reading.

Are you sure that's correct?

AI models are really good at generating very plausible but completely incorrect information.

AI scientists say hallucinations can't be fully fixed, but an Oxford University study has found a way to detect when AI hallucinations are more likely.

Semantic entropy measures the AI model's confidence level, and it's also my new polite way of saying that somebody is talking nonsense.
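The intuition behind semantic entropy can be sketched in a few lines: sample several answers to the same question, group the ones that mean the same thing, and compute the entropy of the resulting cluster distribution. If the clusters are scattered, the model is probably guessing. The snippet below is a minimal illustration of the idea, not the Oxford team's implementation — in particular, the default `equivalent` check is a naive normalize-and-compare stand-in for their entailment-based meaning clustering.

```python
import math


def semantic_entropy(answers, equivalent=None):
    """Estimate semantic entropy over a list of sampled model answers.

    Answers with the same meaning are grouped into one cluster, and
    entropy is computed over the cluster frequencies. High entropy
    suggests the model is uncertain and more likely to be hallucinating.
    """
    if equivalent is None:
        # Naive stand-in for a real semantic-equivalence check:
        # treat answers as equal if they match after normalization.
        equivalent = lambda a, b: a.strip().lower() == b.strip().lower()

    clusters = []  # each cluster is a list of mutually equivalent answers
    for ans in answers:
        for cluster in clusters:
            if equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])  # no match found: start a new cluster

    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return sum(-p * math.log(p) for p in probs)


# Consistent answers collapse to one cluster: entropy 0.
print(semantic_entropy(["Paris", "paris", " Paris"]))    # 0.0
# Three different answers: maximum entropy, ln(3).
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))  # ≈ 1.0986
```

In the actual study, the equivalence test is done with a natural-language-inference model that checks bidirectional entailment between answers, which is what makes the entropy "semantic" rather than merely lexical.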


Even the most advanced LLMs make things up when presented with surprisingly simple puzzles. This week, users on X posted examples of the smartest models failing to solve an easy river-crossing puzzle.

Is this evidence that LLMs aren't good at logical reasoning, or is there something else going on here?

AI may struggle with some puzzles, but it knows you better than you think. A new study has found that an AI system can predict how anxious you are based on your responses to photos.

The ability of these models to infer human emotions could be very useful, but it could also be cause for concern.

AI offensive

When AI companies use the word "open" to describe their models, it rarely means what you think.

How "open" are these AI models? Sam took a closer look at which AI models are truly open and why some companies keep certain aspects top secret.

This week brought an exciting development in the field of open models. ESM3 from EvolutionaryScale is a generative model for biology that turns prompts into proteins.

Until now, when searching for a new protein, scientists had to wait for nature to produce it or stumble on it through trial and error in the lab.

Now ESM3 enables scientists to program biology and create proteins beyond those found in nature.

AI Events

If you're looking to step up your marketing efforts, check out the MarTech Summit Hong Kong 2024, taking place on July 9.

The AI Accelerator Institute presents the Generative AI Summit Austin 2024 on July 10. The agenda features industry leaders discussing the latest trends in real-world generative AI applications.

In other news…

Here are some other click-worthy AI stories we enjoyed this week:

this Toys R Us commercial was created entirely using artificial intelligence, which means the kid is gross and creepy, the emotions are hollow, and the Toys R Us brand is dead for at least the third time. pic.twitter.com/IRprWZKN8O

And that's a wrap.

Have you tried the updated Claude? The Artifacts window is really cool. ChatGPT is sure to get a similar feature very soon.

I enjoy playing with Udio and Suno, but there's no denying that they lift copyrighted music. Is this the price of progress, or is it a dealbreaker?

I'm still surprised that AI models struggle with a simple river-crossing puzzle. We should probably fix that before we let AI control really important things like power grids or hospitals.

Let us know what you think, and keep sending us links to interesting AI news and research we may have missed.
