
Tony Fadell takes a dig at Sam Altman in TechCrunch Disrupt interview

iPod creator and Nest Labs founder and investor Tony Fadell took a shot at OpenAI CEO Sam Altman during an energetic interview at TechCrunch Disrupt 2024 in San Francisco on Tuesday. Speaking about his long history with AI development before the Large Language Model (LLM) craze, and about the serious problems with LLM hallucinations, he said: “I’ve been doing AI for 15 years, guys, I don’t just talk s—. I’m not Sam Altman, okay?”

The comment drew surprised “oohs” and a smattering of applause from the startled crowd.

Fadell was on point throughout his interview, touching on topics ranging from which “A-holes” can make great products to what’s wrong with today’s LLMs.

While he admitted that LLMs “are great for certain things,” he explained that there were still serious problems that needed to be addressed.

“LLMs try to be so ‘generic’ because we’re trying to make science fiction a reality,” he said. “(LLMs are) know-it-alls… I hate know-it-alls.”

Instead, Fadell said he would rather use AI agents that are trained for specific tasks and are more transparent about their errors and hallucinations. That way, employers could learn everything about an AI before “hiring” it for a specific job.

“I hire them to… train me, I hire them to be co-pilots for me, or I hire them to replace me,” he explained. “I want to know what this thing is,” he said, adding that governments should step up to enforce such transparency.

Otherwise, companies using AI would be risking their reputations on “some bullshit technology,” he said.

“Right now we’re all adopting this thing and we don’t know what problems it’s causing,” Fadell emphasized. He also pointed out that a recent report found that doctors using ChatGPT to generate patient reports encountered hallucinations in 90% of them. “They could kill people,” he continued. “We’re using this stuff and we don’t even know how it works.”

(Fadell appeared to be referring to a recent report in which researchers at the University of Michigan studying AI transcriptions found a high number of hallucinations that could be dangerous in a medical context.)

The Altman comment came as Fadell told the crowd that he has been working with AI technologies for years. Nest, for instance, used AI in its thermostat back in 2011.

“We couldn’t talk about AI; we couldn’t talk about machine learning,” Fadell noted, “because people would get so scared: ‘I don’t want AI in my house.’ Now everyone wants AI everywhere.”
