Welcome to this week’s roundup of handcrafted AI news.
This week AI eavesdropped on elephants.
Big Tech firms pulled the plug on their AI plans.
And Big Brother got an AI boost to keep a closer eye on you.
Let’s dig in.
AI hits the brakes
After severe backlash from security experts, Microsoft decided not to ship its controversial Recall feature with its latest Copilot+ PCs. A top exec assured Congress that the company now prioritizes security over AI.
Meta also did a U-turn this week, backing down on using EU social media data to train its AI. In a tit-for-tat response, Meta essentially told EU users, 'If we can't use your data, then you don't get Meta AI.' Crybabies.
AI firms act surprised when users and regulators question their handling of AI risks. Are the doomsayers overreacting?
Maybe. But research led by Anthropic showed how AI models can develop an emergent tendency to cheat, lie, and game the system for rewards.
AI Dr. Doolittle
Researchers used AI to explore how elephants communicate and found that they use names when addressing each other, much like humans do. This and related research raises some interesting questions.
Could we use AI to talk to animals? Should we? There are surprising arguments on both sides of the ethical debate.
Humans could use some help communicating with each other, too. If you work in the service industry, AI could help you deal with your next irate customer.
A Japanese company has developed an AI 'emotion canceling' solution to help call center operators weather angry callers.
Watching AI watch you
Will AI make us safer? AI-powered cameras have sparked privacy concerns as they keep popping up in more public spaces to watch over us.
Some of the things Amazon's Rekognition system detects at UK train stations seem more than a little creepy.
In a move that seems straight out of a Big Brother textbook, OpenAI has appointed former NSA head Paul Nakasone to its board.
The guy who pushed for the right to spy on people, and asked tech firms to help the NSA do it, is now on the board of the biggest AI company. Seems legit.
The revolving door at OpenAI saw co-founder Ilya Sutskever leave the company last month.
Sutskever believes AI superintelligence is within reach and started a new company this week to create it safely. Something he didn't think OpenAI was able, or willing, to do.
How can we make this work?
An IMF report says there's good news and bad news about AI and your job. AI has significant potential to propel productivity, but it could also result in massive job losses.
The report offers interesting insights into who's most at risk and what governments need to do to cushion the blow.
Pope Francis addressed world leaders on AI ethics at the G7 summit in Italy. He had some strong views on how to balance the benefits of AI with societal ethics.
The short version of his speech: 'Don't let AI make all of our decisions, and ban killer robots.' That sounds like a sensible start.
AI video gets better and worse
Text-to-video (T2V) generators have been a great barometer for AI advancement. They've given us a visual representation of how far AI has come.
Just over a year ago, we had the horrific AI-generated video of Will Smith eating spaghetti. Last week we saw demos of Luma and Kling, and the comparison is ridiculous.
The exponential continues.
Scaling laws have held through *15* orders of magnitude…
…yet people continue to be surprised, due to Exponential Slope Blindness https://t.co/IbogcBYspQ pic.twitter.com/TSnjxRKlI1
This week Runway unveiled its hyperrealistic Gen 3 Alpha T2V generator, and the demos are even better than those tools. The physics and camera angle control look amazing.
As generative AI improves, some people will inevitably use it to make sketchy content. Australian authorities are investigating a deepfake incident at a Melbourne school, as deepfake incidents targeting young children have become more common.
AI doc
AI is giving healthcare a shot in the arm with significant advances in helping doctors diagnose patients.
OpenAI and Color Health partnered on a project to speed up cancer treatment. A copilot tool powered by GPT-4o helps doctors develop personalized cancer care plans at a pace that wouldn't be possible otherwise.
Treating Parkinson's disease remains a challenge, but a new AI-powered blood test could help with earlier detection. A research team found it could predict Parkinson's disease with up to 79% accuracy, up to seven years before symptoms surface.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
✍️ Prompt for audio: “A drummer on a stage at a concert surrounded by flashing lights and a cheering crowd.” pic.twitter.com/z0N8sbbsEU
And that’s a wrap.
Should we be using AI to talk to animals? If we found a way to do that, the animals might have some choice words about what we're doing to the environment.
I'm all for using AI to catch the bad guys, but a camera that knows when I'm having a bad day is a bit much. Will more smart cameras make our streets safer, or is it an Orwellian step too far?
You can guess what OpenAI’s latest board member would say.
The demo Gen 3 Alpha and Sora videos have been fun to watch, but could they stop teasing us and finally release one of these tools publicly?
If you get your hands on a beta release, please share your creations with us and let us know who you had to bribe at OpenAI to make it happen.