This week in AI: Can we trust OpenAI (and will we ever)?

Keeping up with an industry as fast-moving as AI is a giant challenge. Until an AI can do it for you, here's a handy roundup of the newest developments from the world of machine learning, in addition to notable research and experiments that we haven't covered individually.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we're increasing the frequency of our semi-regular AI column, which used to appear about twice a month, to weekly—so keep an eye out for more installments.

This week, OpenAI rolled out discounted AI plans for nonprofits and education customers and unveiled its latest efforts to stop malicious actors from abusing its AI tools. There's not much to criticize about that—at least not in this author's opinion. But I'll say that the flurry of announcements seemed to come at just the right time to counteract the bad press the company has been getting lately.

Let's start with Scarlett Johansson. OpenAI removed one of the voices of its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson's. Johansson later released a statement saying she had hired legal counsel to inquire about the voice and get precise details about how it was developed – and that she had refused repeated requests from OpenAI to license her voice for ChatGPT.

Well, an article in the Washington Post implies that OpenAI wasn't actually attempting to clone Johansson's voice and that any similarities were coincidental. But why, then, did OpenAI CEO Sam Altman contact Johansson and urge her to reconsider two days before a splashy demo in which the similar-sounding voice was heard? That's a bit suspect.

Then there are OpenAI's trust and safety issues.

As we reported earlier this month, OpenAI's now-defunct Superalignment team, which was responsible for developing methods to steer and control “superintelligent” AI systems, was promised 20% of the company's computing resources – but only ever (and rarely) received a fraction of that. This (among other things) led to the resignation of the team's two co-leads, Jan Leike and Ilya Sutskever, OpenAI's former chief scientist.

Nearly a dozen safety experts have left OpenAI over the past year; several, including Leike, have publicly raised concerns that the company prioritizes commercial products over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety decisions related to the company's projects and activities. However, the committee was staffed with company insiders – including Altman – rather than outside observers. This comes as OpenAI reportedly considers abandoning its nonprofit structure in favor of a conventional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grows by the day (see: its deals with news publishers). Few, if any, companies deserve trust. But OpenAI's market-disrupting technologies make the breaches all the more troubling.

The fact that Altman himself is not exactly a model of truthfulness doesn't help.

When news broke of the company's aggressive tactics toward former employees – tactics that involved threatening employees with the loss of their vested equity, or preventing them from selling their equity, if they didn't sign restrictive non-disclosure agreements – Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman's signature is on the founding documents that put the policies into effect.

And if former OpenAI board member Helen Toner is to be believed – she is one of the former board members who attempted to remove Altman from his post late last year – Altman withheld information, misrepresented things that happened at OpenAI, and in some cases outright lied to the board. Toner says the board learned about the ChatGPT release from Twitter, not from Altman; that Altman gave false information about OpenAI's formal safety practices; and that Altman, unhappy with an academic paper co-authored by Toner that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of this bodes well.

Here are some other notable AI stories from the past few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking politicians' statements fairly trivial.
  • Google’s AI overviews have problems: AI Overviews, the AI-generated search results that Google began rolling out more widely in Google Search earlier this month, still need improvement. The company admits as much — but claims it's iterating quickly. (We'll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, co-founder of startup accelerator Y Combinator, rejected claims that Altman was pressured to resign as president of Y Combinator in 2019 due to potential conflicts of interest. (Y Combinator owns a small stake in OpenAI.)
  • xAI raises $6 billion: xAI, Elon Musk's AI startup, has raised $6 billion in funding, giving Musk the capital to aggressively compete with rivals like OpenAI, Microsoft and Alphabet.
  • Perplexity’s latest AI feature: With its new feature “Perplexity Pages,” AI startup Perplexity wants to help users create reports, articles and guides in a more visually appealing format, Ivan reports.
  • Favorite numbers of AI models: Devin writes about the numbers different AI models choose when asked to give a random answer. As it turns out, they have favorites – a reflection of the data they were trained on.
  • Mistral releases Codestral: Mistral, the Microsoft-backed French AI startup valued at $6 billion, has released its first generative AI model for coding, called Codestral. However, due to Mistral's rather restrictive license, it can't be used commercially.
  • Chatbots and data protection: Natasha writes about the European Union's ChatGPT taskforce and how it offers a first look at untangling AI chatbot privacy compliance.
  • ElevenLabs’ sound generator: Voice cloning startup ElevenLabs introduced a new tool in February that lets users generate sound effects through prompts.
  • Interconnects for AI chips: Tech giants like Microsoft, Google and Intel – but not Arm, Nvidia or AWS – have formed an industry group, the UALink Promoter Group, to help develop next-generation AI chip interconnects.
