
DAI#31 – Conscious AI, red lines and Grok opens

Welcome to this week's AI news roundup for sentient, conscious readers. You know who you are.

This week, AI sparked debate about how intelligent or safe it is.

AI agents learn by playing computer games.

And DeepMind wants to teach you how to kick a ball.

Let's dive in.

Do AIs dream of electric sheep?

Can we expect an AI to become self-aware or truly conscious? What does “conscious” even mean in the context of AI?

Claude 3 Opus did something really interesting during training. Its response to an engineer has reignited the debate about AI sentience and consciousness. We're entering Blade Runner territory sooner than some thought.

Does “I feel, therefore I am” only apply to humans?

These discussions on X are fascinating.

Inflection AI’s quest for “personal AI” may be over. The company's CEO, Mustafa Suleyman, and other key employees have joined the Microsoft Copilot team. What does this mean for Inflection AI and other smaller players funded by Big Tech investments?

AI plays games

If 2023 was the year of the LLM, then 2024 is on track to be the year of the AI agent. DeepMind demonstrated SIMA, a generalist AI agent for 3D environments. SIMA was trained using computer games, and the examples of what SIMA can do are impressive.

Will AI resolve the football vs. soccer nomenclature debate? Unlikely. But it could help players score more goals. DeepMind is working with Liverpool FC to optimize the way the club's players take corners.

However, it could still be some time before robots replace humans on the field.


A dangerous undertaking

Will AI save the world or destroy it? It depends on who you ask. Experts and technology leaders disagree about how smart AI is, how soon we'll have AGI, and how big the risk is.

Leading Western and Chinese AI scientists met in Beijing to discuss international efforts to ensure the safe development of AI. They agreed on several “red lines” in AI development which they believe pose an existential threat to humanity.

If these red lines were truly essential, shouldn't we have established them months ago? Does anyone think the US or Chinese governments will pay attention to them?

The EU AI Act was passed by a landslide in the European Parliament and is expected to come into force in May. The list of restrictions is interesting, because some of the banned AI applications are unlikely to ever make it onto a similar list in China.

Training data transparency requirements will be particularly difficult for OpenAI, Meta, and Microsoft to satisfy without exposing themselves to even more copyright lawsuits.

Across the pond, the FTC is questioning Reddit over its deal to license user-generated data to Google. Reddit is preparing for its IPO but is feeling pressure from both regulators and Reddit users who aren't too happy about their content being sold as AI training fodder.

Apple is playing catch-up with AI

Apple hasn't exactly pioneered AI, but it has acquired several AI startups in recent months. The latest acquisition of a Canadian AI startup could provide a glimpse into the company's generative AI efforts.

If Apple is indeed producing some impressive AI technology, it's keeping the news pretty low-key until it eventually becomes part of one of its products. Apple engineers quietly released a paper revealing MM1, Apple's first family of multimodal LLMs.

MM1 is really good at visual question answering. What's particularly impressive is its ability to answer questions and provide reasoning across multiple images. Will Siri soon learn to see?

Grok opens

Elon Musk criticized OpenAI's refusal to open source its models. He announced that xAI would open source its LLM Grok-1 and immediately released the model's code and weights.

The fact that Grok-1 is truly open source (Apache 2.0 license) means that companies can use it for commercial purposes instead of having to pay for alternatives like GPT-4. However, you need serious hardware to train and run Grok.
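If you're tempted to try it yourself, a minimal sketch of grabbing the released checkpoint might look like the following, assuming the weights are mirrored on Hugging Face under xai-org/grok-1 and that you have the huggingface_hub client installed. Downloading is the easy part; actually loading the 314B-parameter model is where the serious hardware comes in.

```python
# Minimal sketch: download the open-source Grok-1 checkpoint.
# Assumption: the weights are mirrored on Hugging Face as "xai-org/grok-1".
# Requires: pip install huggingface_hub
# Warning: the checkpoint is on the order of 300 GB of disk space.

from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(
    repo_id="xai-org/grok-1",   # xAI's weights release
    local_dir="./grok-1",       # where to store the files locally
)

print(f"Grok-1 checkpoint saved to: {checkpoint_dir}")

# From here, xAI's reference code (github.com/xai-org/grok-1)
# includes a run.py script that loads the checkpoint with JAX
# and samples from the model, given enough GPU memory.
```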

The good news is that there may soon be some used NVIDIA H100s available at great prices.

New NVIDIA technology

NVIDIA introduced new chips, tools, and Omniverse updates at its GTC event this week.

One of the big announcements was NVIDIA's new Blackwell GPU computing platform. It offers significant improvements in training and inference speed compared even to its most advanced Grace Hopper platform.

There is already a long list of Big Tech AI companies that have signed up for the advanced hardware.

Researchers at the University of Geneva have published a paper showing how they connected two AI models so that they can communicate with each other.

When you learn a new task, you can usually explain it well enough that another person can use those instructions to complete the task themselves. This new research shows how to get an AI model to do the same.

Soon we could give instructions to a robot and then have it explain the job to a team of robots.

In other news…

And that's a wrap.

Do you think we're seeing a glimmer of consciousness in Claude 3, or is there a simpler explanation for the interaction with the engineer? If an AI model achieves AGI and reads the growing list of AI development restrictions, it's probably smart enough to keep its mouth shut about it.

When we look back in a few years, will we laugh at how scared everyone was about AI risks, or regret that we didn't do more for AI safety while we could?

Let us know what you think, and please keep sending us links to AI news we may have missed. We can't get enough of it.
