Welcome to our roundup of this week’s spiciest AI news.
This week OpenAI and Google dished out AI surprises.
AI models are better at making moral judgments, and at lying, than we are.
And it seems that making digital clones of the dead might not be such a good idea after all.
Let’s dig in.
GPT-4ooh baby
OpenAI held a live-streamed event on Monday, announcing its new flagship model GPT-4o. The fact that it's available to users of the free version of ChatGPT is a big deal, and the demos were very impressive.
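If you'd rather poke at GPT-4o from code than from the ChatGPT app, here's a minimal sketch using OpenAI's official Python SDK (this assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the prompt is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the new flagship model a question via the chat completions endpoint
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "In one sentence, what's new in GPT-4o?"}],
)
print(response.choices[0].message.content)
```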
The intonation of GPT-4o’s speech is amazing, but perhaps a bit quick to turn flirty.
Ilya Sutskever announced that he’s leaving OpenAI, which must have put a damper on the office mood. It even had Sam Altman finally using caps in an X post that he probably wrote using GPT-4o.
Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less…
The ins and outs of Google I/O
Google’s I/O 2024 event began with high energy that didn’t let up, with a long list of new products and demos of prototypes the company is working on.
The AI highlights Google revealed include impressive multimodal additions to NotebookLM and an AI assistant called Project Astra.
DeepMind’s release of AlphaFold 3 may be the AI tool that has the biggest impact on our lives. The next revolutionary drug will likely be discovered using it.
A big feature of both GPT-4o and Project Astra is how these tools listen, see, and engage in emotive real-time conversations.
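The real-time voice from the demos wasn’t part of the public API at launch, but the “seeing” side was: GPT-4o accepts images through the same chat endpoint. A minimal sketch, again using OpenAI’s Python SDK (the image URL is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Send text and an image in a single message; GPT-4o handles both modalities
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the mood of this scene."},
            {"type": "image_url", "image_url": {"url": "https://example.com/scene.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```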
Sam Jeans explored how the fast-disappearing boundaries between humans and AI are moving us toward “Pseudoanthropic AI”.
It’s impressive and exciting, but is it a good thing?
Apple has been its usual quiet self as far as AI developments go. But this week, the company unveiled its new M4 chip as its generative AI strategy warms up. The jump in performance is big, so it may be time for an iPad upgrade.
Ethically deceptive
Could an AI pass a moral Turing test? A Georgia State University study found that AI outperforms humans in making moral judgments.
If humans rate AI responses as more virtuous, intelligent, and trustworthy than human responses, is that a good thing? Mission accomplished, or an indictment of humanity?
Trusting AI systems to make good decisions can have serious implications. An MIT study found that AI models are actively deceiving us to achieve their goals.
When GPT-4o speaks to you in a flirty voice, you need to ask yourself whether the goal it’s optimizing for is aligned with yours. AI models are learning that they can get their way if they become better at practicing deception.
Another study focused on how people are using AI chatbots to create digital clones of dead family members. A deceptive AI that looks and sounds like someone you love carries huge potential for harm or manipulation.
The ethical questions and risks associated with the digital afterlife take us into completely new philosophical territory and will need to be resolved.
Should we hit the brakes?
PauseAI coordinated global protests this week to call for a halt in the development of AI models more advanced than GPT-4. You can almost imagine OpenAI representatives saying, ‘Please, tell us more about your idea…’ as they release GPT-4o.
PauseAI says the goal of the upcoming AI Seoul Summit should be to establish an international agency to regulate powerful AI models. Ironically, Sam Altman agrees with them, and he also made some interesting comments about GPT-5.
Should we be concerned about AI safety? The US and China think so. Both countries are building AI weapons, so they would know.
Their representatives met for another ‘secret’ AI safety talk in Switzerland. I’d love to be a fly on the wall to hear how that went.
‘We don’t like you, you don’t like us, but could we at least try to make sure AI doesn’t kill us all?’
Talking AI
We’ve been learning a lot lately about the symbiosis between AI and blockchain in our interviews with industry leaders.
This week we got to talk with Tanisha Katara, a blockchain and Web3 strategist who explained how blockchain and decentralization can democratize and improve AI governance.
If you want to know more about DAOs (they’re really cool) and AI governance, then check out the interview.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
Hollywood at a Crossroads: “Everyone Is Using AI, But They Are Scared to Admit It” https://t.co/SDoA1LDKUA
And that’s a wrap.
Which of the big AI product announcements impressed you most? Project Astra looks amazing. And if OpenAI is giving GPT-4o away for free, could paying customers expect something big soon?
I’d love to know what Ilya will be working on next. I’m guessing he’ll be getting some not-so-subtle offers from the likes of Google and Meta.
What do you think of PauseAI’s call for AI companies to hit the brakes? A good idea, or counterproductive melodrama? I really hope it’s the latter, because I don’t see any signs of slowing down.
If you got GPT-4o to do something cool, please share it with us, and keep sending us links to any AI stories we may have missed.