
This week in AI: Former OpenAI employees demand safety and transparency

Hi folks, and welcome to TechCrunch's first AI newsletter. It's genuinely exciting to type those words – this newsletter has been a long time in the making, and we're excited to finally share it with you.

With the launch of TC's AI newsletter, we're sunsetting This Week in AI, the semi-regular column previously known as Perceptron. But you'll find all the analysis we brought to This Week in AI, including a spotlight on noteworthy new AI models, right here.

This week in AI, trouble is brewing – again – for OpenAI.

A group of former OpenAI employees spoke with Kevin Roose of The New York Times about what they see as glaring safety failings within the organization. Like others who've left OpenAI in recent months, they claim that the company isn't doing enough to prevent its AI systems from becoming potentially dangerous, and they accuse OpenAI of using hardball tactics to try to stop employees from sounding the alarm.

The group published an open letter on Tuesday calling on leading AI companies, including OpenAI, to establish greater transparency and more protections for whistleblowers. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter reads.

Call me pessimistic, but I expect the former employees' demands will fall on deaf ears. It's hard to imagine a scenario in which AI companies not only agree to “foster a culture of open criticism,” as the signatories recommend, but also opt not to enforce non-disparagement clauses or retaliate against current employees who choose to speak out.

Consider that OpenAI's safety committee, which the company recently created in response to criticism of its safety practices, is staffed entirely by company insiders – including CEO Sam Altman. And consider that Altman, who at one point claimed to have no knowledge of OpenAI's restrictive non-disparagement agreements, himself signed the incorporation documents establishing them.

Sure, things could change for the better at OpenAI tomorrow – but I'm not holding my breath. And even if they did, it would be tough to trust it.

News

AI apocalypse: OpenAI's AI-powered chatbot platform ChatGPT – along with Anthropic's Claude and Google's Gemini, plus Perplexity – all went down at around the same time this morning. All the services have since been restored, but the cause of the downtime remains unclear.

OpenAI explores fusion: According to The Wall Street Journal, OpenAI is in talks with fusion startup Helion Energy over a deal in which the AI company would buy vast quantities of electricity from Helion to power its data centers. Altman has a $375 million stake in Helion and sits on the company's board of directors, but he has reportedly recused himself from the negotiations.

The cost of training data: TechCrunch takes a look at the pricey data licensing deals that are becoming increasingly common in the AI industry – deals that threaten to make AI research untenable for smaller organizations and academic institutions.

Hateful music generators: Malicious actors are abusing AI-powered music generators to create homophobic, racist, and propagandistic songs – and publishing guides that instruct others how to do the same.

Cash for Cohere: Reuters reports that Cohere, the enterprise-focused generative AI startup, has raised $450 million from Nvidia, Salesforce Ventures, Cisco, and others in a new tranche that values Cohere at $5 billion. Sources familiar with the matter tell TechCrunch that Oracle and Thomvest Ventures – both returning investors – also participated in the round, which was left open.

Research paper of the week

In a 2023 research paper titled “Let's Verify Step by Step,” which OpenAI recently highlighted on its official blog, researchers claimed to have fine-tuned the startup's general-purpose generative AI model, GPT-4, to achieve better-than-expected performance in solving math problems. The approach could make generative models less prone to going off the rails, the paper's co-authors say – but they note several caveats.

In the paper, the co-authors detail how they trained reward models to detect hallucinations, or instances where GPT-4 got its facts and/or answers to math problems wrong. (Reward models are specialized models that evaluate the outputs of AI models – in this case, math-related outputs from GPT-4.) The reward models “rewarded” GPT-4 each time it got a step of a math problem right, an approach the researchers refer to as “process supervision.”

The researchers say that process supervision improved GPT-4's accuracy on math problems compared with previous techniques for “rewarding” models – at least in their benchmark tests. They admit it isn't perfect, however; GPT-4 still got problem steps wrong. And it's unclear how the form of process supervision the researchers studied might generalize beyond the math domain.
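To make the distinction concrete, here's a minimal toy sketch of the idea. The paper trains a learned reward model; the `step_reward` function below is purely a hypothetical stand-in that checks each intermediate step of a worked arithmetic problem, so we can contrast rewarding every step (process supervision) with rewarding only the final answer (outcome supervision):

```python
# Toy contrast between outcome supervision and process supervision.
# `step_reward` is a hypothetical stand-in for a learned process reward
# model: it scores a single reasoning step of the form "expr = result".

def step_reward(step: str) -> float:
    """Score one step: 1.0 if the arithmetic checks out, else 0.0."""
    lhs, _, rhs = step.partition("=")
    try:
        # eval() is fine here because the inputs are our own toy strings.
        return 1.0 if eval(lhs) == eval(rhs) else 0.0
    except Exception:
        return 0.0

def outcome_reward(steps: list[str], expected: int) -> float:
    """Outcome supervision: reward only the final answer."""
    return 1.0 if steps[-1].endswith(str(expected)) else 0.0

def process_reward(steps: list[str]) -> float:
    """Process supervision: average the reward over every step."""
    return sum(step_reward(s) for s in steps) / len(steps)

solution = ["2 + 3 = 5", "5 * 4 = 21", "21 - 1 = 20"]  # middle step is wrong
print(outcome_reward(solution, 20))  # 1.0 – the bad step goes unnoticed
print(process_reward(solution))      # ~0.67 – the bad step is penalized
```

The point of the example: a flawed chain of reasoning can still land on the expected final answer, so outcome supervision scores it perfectly, while per-step scoring flags the faulty intermediate step – which is why the paper's approach could make models less likely to go astray mid-solution.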

Model of the week

Weather forecasting may not feel like a science (at least when it rains on you, as it just did on me), but that's because it's all about probabilities, not certainties. And what better way to calculate probabilities than with a probabilistic model? We've already seen AI put to work on weather prediction at timescales from hours to centuries, and now Microsoft is getting in on the act. The company's new Aurora model moves the ball forward in this fast-evolving corner of the AI world, offering globe-level predictions at roughly 0.1° resolution (on the order of 10 km square).

Photo credits: Microsoft

Trained on over a million hours of weather and climate simulations (not real weather? Hmm…) and fine-tuned on a number of desirable tasks, Aurora outperforms traditional numerical forecasting systems by several orders of magnitude. Even more impressively, it beats Google DeepMind's GraphCast at its own game (though Microsoft picked the field), providing more accurate estimates of weather conditions on the one- to five-day scale.

Of course, companies like Google and Microsoft have a horse in this race. Both are vying for your online attention by trying to offer the most personalized web and search experience possible. Accurate, efficient first-party weather forecasts will be an important part of that – at least until we stop going outside.

Grab bag

In a think piece last month in Palladium, Avital Balwit, chief of staff at AI startup Anthropic, argues that the next three years might be the last that she and many knowledge workers have to work, thanks to rapid advances in generative AI. This should be a comfort rather than a cause for fear, she says, because it could lead to “a world where people's material needs are met but they no longer need to work.”

“A renowned AI researcher once told me that he is practicing for this tipping point by taking up activities he is not particularly good at: jiu-jitsu, surfing, and so on, and enjoying doing them even without excelling,” Balwit writes. “This is how we can prepare for our future, where we will have to do things for pleasure rather than necessity, where we will no longer be the best at them but will still have to choose how to spend our days.”

That's certainly the glass-half-full view – but I can't say I share it.

If generative AI replaces most knowledge workers within three years (which strikes me as unrealistic given AI's many unsolved technical problems), economic collapse could well follow. Knowledge workers make up large portions of the workforce and tend to be high earners – and therefore big spenders. They drive the wheels of capitalism.

Balwit points to universal basic income and other large-scale social safety nets, but I don't have much faith that countries like the U.S., which can't even manage basic federal AI legislation, will adopt universal basic income schemes anytime soon.

With any luck, I'm wrong.
