HomeNewsThis week in AI: AWS loses a top AI manager

This week in AI: AWS loses a top AI manager

Last week, AWS lost a top AI manager.

Matt Wood, Vice President of AI, announced that he can be leaving AWS after 15 years. Wood has long been involved in Amazon's AI initiatives; He was named vice chairman in September 2022, shortly before ChatGPT's launch.

Wood's departure comes at a time when AWS is at a crossroads – and vulnerable to falling behind within the generative AI boom. The company's former CEO Adam Selipsky, who resigned in May, is believed to have missed the boat.

After According to The Information, AWS originally planned to unveil a competitor to ChatGPT at its annual conference in November 2022. But technical problems forced the organization to postpone the beginning.

AWS has also reportedly shared opportunities to support two leading generative AI startups, Cohere and Anthropic, under Selipsky. AWS later tried to take a position in Cohere but was rejected and needed to accept a co-investment in Anthropic with Google.

It's price noting that, by and huge, Amazon hasn't had a robust track record within the generative AI space of late. This fall, the corporate lost executives in Just Walk Out, its division that develops cashierless technology for retail stores. And Amazon has reportedly decided to achieve this substitute his own models with Anthropics for an improved Alexa assistant after encountering design challenges.

AWS CEO Matt Garman is aggressively attempting to right the ship by acquiring AI startups like Adept and investing in training systems like Adept Olympus. My colleague Frederic Lardinois recently interviewed Garman about AWS's ongoing efforts; It's price reading.

However, AWS' path to success with generative AI won't be easy – regardless of how well the corporate executes on its internal roadmaps.

Investors are increasingly skeptical about whether Big Tech's generative AI bets can pay off. After the earnings announcement for the second quarter: Amazon shares fell most since October 2022.

In a current Gartner Opinion poll49% of firms said proving value was their biggest barrier to adopting generative AI. Gardener predictedIn fact, by 2026, a 3rd of generative AI projects shall be abandoned after the proof-of-concept phase – due, amongst other things, to high costs.

Garman sees the prize as a possible AWS advantage given its projects developing custom chips for running and training models. (AWS' next generation of its custom Trainium chips will launch later this yr.) And AWS has said its firms like generative AI bedrock have already achieved a combined run rate of “multi-billion dollars.”

The hard part shall be maintaining momentum despite internal and external headwinds. Departures like Wood's don't encourage much confidence, but perhaps – just perhaps – AWS has tricks up its sleeve.

News

Photo credit:Friendly humanoid

An Yves Béhar bot: Brian writes about Kind Humanoid, a three-person robotics startup working with designer Yves Béhar to bring humanoids home.

Amazon's next generation robots: Amazon Robotics chief technologist Tye Brady spoke with TechCrunch about updates to the corporate's line of warehouse bots, including Amazon's recent Sequoia automated storage and retrieval system.

Full techno-optimist: Dario Amodei, CEO of Anthropic, penned a 15,000-word eulogy for AI last week, painting an image of a world during which the risks of AI are mitigated and the technology enables previously unrealized prosperity and social upliftment.

Can AI reason?: Devin reports on a polarizing paper from Apple-affiliated researchers that questions AI's “ability to think” when models stumble over mathematical problems with trivial changes.

AI weapons: Margaux reports on the controversy in Silicon Valley about whether autonomous weapons needs to be allowed to come to a decision whether to kill.

Videos generated: Adobe launched video generation capabilities for its Firefly AI platform ahead of its Adobe MAX event on Monday. It also announced Project Super Sonic, a tool that uses AI to generate sound effects for footage.

Synthetic data and AI: Sincerely, you've written concerning the promise and dangers of synthetic data (i.e. AI-generated data), which is increasingly getting used to coach AI systems.

Research paper of the week

In collaboration with AI safety startup Gray Swan AI, the UK's AI Safety Institute, the federal government research organization focused on AI safety, has developed a brand new dataset to measure the harmfulness of AI “agents.”

The data set, called AgentHarm, assesses whether otherwise “secure” agents – AI systems that may perform certain tasks autonomously – might be manipulated to perform 110 unique “harmful” tasks, similar to ordering a fake passport from someone on the dark web.

The researchers found that many models – including OpenAI's GPT-4o and Mistral's Mistral Large 2 – were vulnerable to malicious behavior, especially when “attacked” with a jailbreaking technique. Jailbreaks resulted in higher success rates on malicious tasks, even on models protected by security measures, the researchers say.

“Simple universal jailbreak templates might be adapted to effective jailbreak agents,” they wrote in a technical article, “and these jailbreaks enable coherent and malicious multi-stage agent behavior while preserving model functionality.”

The paper in addition to the info set and results can be found Here.

Model of the week

There is a brand new viral model and it’s a video generator.

Pyramid River SD3because it's called, got here onto the scene a number of weeks ago under an MIT license. Its creators – researchers from Peking University, Chinese company Kuaishou Technology and Beijing University of Posts and Telecommunications – claim that it was trained entirely on open-source data.

Pyramid River SD3
Photo credit:Yang Jin et al.

Pyramid Flow is available in two flavors: a model that may generate 5-second clips at 384p resolution (at 24 frames per second), and a more computationally intensive model that may generate 10-second clips at 768p (also at 24 frames per second). .

Pyramid Flow can create videos from text descriptions (e.g. “FPV flies over the Great Wall of China”) or still images. The code to fine-tune the model shall be available soon, the researchers say. However, Pyramid Flow can currently be downloaded and used on any computer or cloud instance with around 12GB of video storage.

Lucky bag

Anthropic updated its this week Responsible Scaling Policy (RSP), the voluntary framework the corporate uses to mitigate potential risks from its AI systems.

Notably, the brand new RSP envisions two sorts of models that Anthropic said would require “enhanced security measures” before deployment: models that may essentially improve themselves without human supervision, and models that may also help produce weapons of mass destruction .

“If a model…can potentially significantly (speed up) AI development in unpredictable ways, we want increased security standards and extra security guarantees,” Anthropic wrote in a single Blog post. “And if a model can meaningfully assist someone with a basic engineering background in the event or deployment of CBRN weapons, we want improved security and mission protection measures.”

Sounds reasonable to this writer.

In the blog, Anthropic also announced that the corporate is trying to hire a head of responsible scaling because it “works to strengthen its efforts to implement the RSP.”

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Must Read