
This week in AI: OpenAI moves away from safety

Keeping up with an industry as fast-moving as AI is a tall order. Until an AI can do it for you, here's a handy roundup of recent stories from the world of machine learning, along with notable research and experiments we didn't cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we're upping the cadence of our semi-regular AI column, which previously ran about twice a month, to weekly – so keep an eye out for more editions.

This week in the AI space, OpenAI once again dominated the news cycle (despite Google's best efforts) with a product launch, but also some palace intrigue. The company unveiled GPT-4o, its most powerful generative model yet, and just days later effectively disbanded a team dedicated to developing controls to keep "superintelligent" AI systems from going rogue.

As expected, the team's dissolution made plenty of headlines. Reports – including ours – suggest that OpenAI deprioritized the team's safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team's two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is for now more theoretical than real; it's not clear when — or whether — the tech industry will make the breakthroughs necessary to create AI capable of handling any task a human can. But this week's reporting seems to confirm one thing: that OpenAI's leadership — in particular CEO Sam Altman — has increasingly chosen to prioritize products over safety measures.

Altman reportedly angered Sutskever by rushing the launch of AI-powered features at OpenAI's first developer conference last November. And he is said to have criticized Helen Toner, director at Georgetown's Center for Security and Emerging Technologies and a former OpenAI board member, over a paper she co-authored that cast OpenAI's approach to safety in a critical light – to the point of trying to push her off the board.

Over the past year or so, OpenAI has let its chatbot store fill up with spam and (allegedly) scraped data from YouTube in violation of the platform's terms of service, all while voicing ambitions to let its AI generate depictions of porn and gore. Certainly, safety seems to have taken a back seat at the company – and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.

Here are some other notable AI stories from the past few days:

  • OpenAI + Reddit: In other OpenAI news, the company reached an agreement with Reddit to use the social site's data for AI model training. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google's AI: Google held its annual I/O developer conference this week and debuted a number of AI products. We've rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google's Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, a co-founder of Instagram and most recently co-founder of personalized news app Artifact (which TechCrunch parent company Yahoo recently acquired), is joining Anthropic as the company's first chief product officer. He will oversee both the company's consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it will allow developers to create kid-focused apps and tools built on its AI models — so long as they follow certain rules. Notably, competitors such as Google prohibit their AI from being built into apps aimed at younger age groups.
  • At the film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from the AI, but from the more human elements.

More machine learning

AI safety is obviously top of mind this week given the OpenAI departures, but Google DeepMind is plowing ahead with a new "Frontier Safety Framework." Essentially, it is the organization's strategy for identifying and, hopefully, preventing runaway capabilities – it doesn't have to be AGI; it could just as well be a malware generator gone amok or the like.

Photo credit: Google DeepMind

The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate models regularly to detect when they have reached known "critical capability levels." 3. Apply a mitigation plan to prevent exfiltration (by others or the model itself) or problematic deployment. Further details can be found here. It may sound like an obvious sequence of actions, but it's important to formalize it, otherwise everyone is just winging it. That's how you get the bad AI.
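To make steps two and three a little more concrete, here is a minimal sketch of what a recurring capability-evaluation loop could look like in Python. The eval names, thresholds and mitigations are hypothetical placeholders, not anything from DeepMind's actual framework:

```python
# Illustrative sketch only: the eval names, thresholds, and mitigations below
# are hypothetical and do not come from DeepMind's Frontier Safety Framework.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CriticalCapabilityLevel:
    name: str               # which capability eval this level refers to
    threshold: float        # score at which the capability is considered critical
    mitigations: List[str]  # actions to take once the threshold is crossed


def run_periodic_evals(
    model_id: str,
    evals: Dict[str, Callable[[str], float]],  # eval name -> scoring function
    ccls: List[CriticalCapabilityLevel],
) -> List[str]:
    """Step 2: score the model on each capability eval.
    Step 3: collect the mitigation plan for any critical level that was reached."""
    scores = {name: fn(model_id) for name, fn in evals.items()}
    triggered: List[str] = []
    for ccl in ccls:
        if scores.get(ccl.name, 0.0) >= ccl.threshold:
            triggered.extend(ccl.mitigations)
    return triggered


if __name__ == "__main__":
    # Made-up numbers for demonstration; a real eval would run a benchmark suite.
    evals = {"cyber_offense": lambda model: 0.42}
    ccls = [CriticalCapabilityLevel(
        name="cyber_offense",
        threshold=0.8,
        mitigations=["restrict weights access", "pause external deployment"],
    )]
    print(run_periodic_evals("frontier-model-v1", evals, ccls))  # -> []
```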

A very different risk was identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on the data of a dead person in order to provide a superficial likeness of that person. You may (like me) find the whole concept a bit abhorrent, but it could be used in grief management and other scenarios if we are careful. The problem is that we are not being careful.

Photo credit: University of Cambridge / T. Hollanek

"This area of AI is an ethical minefield," said lead researcher Katarzyna Nowaczyk-Basińska. "We now need to think about how to mitigate the social and psychological risks of digital immortality, because the technology is already here." The team identifies numerous scams, potential good and bad outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less scary applications of AI, physicists at MIT are looking into a useful (to them) tool for predicting the phase or state of a physical system, normally a statistical task that can become onerous for more complex systems. But if you train a machine learning model on the right data and ground it with some known material characteristics of a system, you have a considerably more efficient way of going about it. Just another example of how ML is finding niches even in advanced science.
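For a flavor of what this kind of shortcut looks like in general – this is not the MIT group's method, just a generic, illustrative phase classifier trained on a crude stand-in for 2D Ising spin data – a few lines of Python suffice:

```python
# Generic illustration of ML-based phase classification (not the MIT team's code):
# label simulated 2D spin configurations as ordered or disordered by temperature,
# then train a simple classifier to predict the phase from the raw spins.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)


def sample_configuration(temp: float, size: int = 16) -> np.ndarray:
    """Crude stand-in for a Monte Carlo sample: low temperatures give mostly
    aligned spins, high temperatures give nearly random spins."""
    bias = max(0.0, 1.0 - temp / 4.0)  # alignment strength, purely illustrative
    spins = np.where(rng.random((size, size)) < 0.5 + bias / 2, 1, -1)
    return spins.ravel()


T_CRITICAL = 2.269  # Onsager's critical temperature for the 2D Ising model
temps = rng.uniform(0.5, 4.0, 2000)
X = np.array([sample_configuration(t) for t in temps])
y = (temps < T_CRITICAL).astype(int)  # 1 = ordered phase, 0 = disordered

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out phase-classification accuracy: {clf.score(X_test, y_test):.2f}")
```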

Over at CU Boulder, they're talking about how AI can be used in disaster management. The technology may be useful for quickly predicting where resources will be needed, mapping damage, even helping train responders, but people are (understandably) hesitant to apply it in life-or-death scenarios.

Participants of the workshop.
Photo credit: CU Boulder

Professor Amir Behzadan is trying to move the ball forward here, saying, "Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders." They're still at the workshop stage, but it's important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.

Finally, an interesting piece of work from Disney Research, which looked at how to diversify the output of diffusion image generation models, which can produce similar results over and over for some input prompts. Their solution? "Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment." I simply couldn't put it better myself.
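In plainer terms: perturb the prompt embedding a lot at the start of sampling and less and less as denoising proceeds, so early steps explore while late steps honor the prompt. Here is a toy sketch of that general idea – the denoiser and embedding shapes are made-up placeholders, not Disney Research's code:

```python
# Rough sketch of the general idea (not Disney Research's implementation):
# during diffusion sampling, add Gaussian noise to the conditioning embedding
# with a monotonically decreasing scale across the denoising steps.

import numpy as np

rng = np.random.default_rng(0)


def noise_schedule(step: int, total_steps: int, initial_sigma: float = 1.0) -> float:
    """Monotonically decreasing noise scale: starts at initial_sigma, ends at 0."""
    return initial_sigma * (1.0 - step / max(total_steps - 1, 1))


def denoise_step(x: np.ndarray, cond: np.ndarray) -> np.ndarray:
    """Stand-in for one reverse-diffusion step of a real model (toy update)."""
    return x - 0.05 * (x - cond.mean())


def sample(cond_embedding: np.ndarray, total_steps: int = 50) -> np.ndarray:
    x = rng.standard_normal(cond_embedding.shape)  # start from pure noise
    for step in range(total_steps):
        sigma = noise_schedule(step, total_steps)
        # Early steps see a heavily perturbed prompt; late steps see it almost exactly.
        noisy_cond = cond_embedding + sigma * rng.standard_normal(cond_embedding.shape)
        x = denoise_step(x, noisy_cond)
    return x


# Two samples from the same prompt embedding now diverge in the early steps:
prompt_embedding = np.ones(8)
print(sample(prompt_embedding)[:4])
print(sample(prompt_embedding)[:4])
```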

Photo credit: Disney Research

The result is a much wider variety of angles, settings and general appearance in the image outputs. Sometimes you want that, sometimes you don't, but it's nice to have the option.
