Late Thursday night, Oprah Winfrey hosted a special on AI, aptly titled “AI and Our Future.” Guests included OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and current FBI Director Christopher Wray.
There was skepticism – and caution.
Oprah noted in prepared remarks that the genie of artificial intelligence – for better or for worse – is out of the bottle, and that humanity must learn to live with the consequences.
“AI is still beyond our control and, to a great extent, beyond our understanding,” she said. “But it's here, and we're going to be living with a technology that can be both our ally and our rival… We are the most adaptable creatures on this planet. We will adapt again. But keep your eyes on what's real. The stakes could not be higher.”
Sam Altman promises too much
Altman, Oprah's first interview of the evening, made the questionable claim that today's AI learns concepts from the data it's trained on.
“We show the system a thousand words in a sequence and ask it to predict what comes next,” he told Oprah. “The system learns to make predictions, and in the process it learns the underlying concepts.”
Many experts would disagree.
AI systems like ChatGPT and o1, which OpenAI unveiled on Thursday, do indeed predict the most likely next words in a sentence. But they are just statistical machines – they learn patterns in data. They have no intentionality; they only make educated guesses.
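The "statistical machine" framing can be made concrete with a toy example. The sketch below is a bigram model – it merely counts which word follows which in a tiny corpus and returns the most frequent successor. Real LLMs use neural networks over subword tokens and vastly more data, but the objective has the same shape: estimate the probability of the next token given the context, then pick a likely continuation.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical, for demonstration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = follows[word]
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" – it follows "the" most often here
```

The model has no notion of what a cat is; it has only tallied co-occurrences. That is the sense in which critics call such systems pattern matchers rather than concept learners, though whether large neural models additionally form internal abstractions remains an open research debate.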
Although Altman may have overstated the capabilities of today's AI systems, he stressed the importance of figuring out how to test these systems for safety.
“One of the first things we need to do – and this is happening now – is get the government to figure out how to do safety testing of these systems, like we do for airplanes or new drugs,” he said. “Personally, I probably talk to someone in the government every few days.”
Altman's push for regulation may be self-interested. OpenAI has opposed California's AI safety bill, SB 1047, arguing it would “stifle innovation.” However, former OpenAI employees and AI experts like Geoffrey Hinton have spoken out in favor of the bill, arguing that it would impose much-needed safeguards on AI development.
Oprah also questioned Altman about his role as leader of OpenAI. She asked why people should trust him, and he largely dodged the question, saying his company tries to build trust over time.
Altman has previously said, quite directly, that people shouldn't trust him – or anyone else – to ensure that AI benefits the world.
OpenAI's CEO later said it was strange to hear Oprah ask whether he was “the most powerful and dangerous man in the world,” as one headline suggested. He disagreed, but said he felt a responsibility to steer AI in a positive direction for humanity.
Oprah on deepfakes
As was inevitable in a special broadcast about AI, the subject of deepfakes came up.
To demonstrate how convincing synthetic media has become, Brownlee compared sample footage from Sora, OpenAI's AI-powered video generator, with footage from an AI system just a few months older. The Sora sample was miles ahead – illustrating the rapid progress in the field.
“You can still look at parts of it and tell something is wrong,” Brownlee said of the Sora footage. Oprah said it looked real to her.
The presentation of the deepfakes served as a transition to an interview with Wray, who recounted the moment he first encountered AI deepfake technology.
“I was in a conference room, and a few (FBI) people got together to show me how AI-powered deepfakes can be created,” Wray said. “And they'd created a video of me saying things I had never said before and would never say.”
Wray spoke about the increasing prevalence of AI-assisted sextortion. According to cybersecurity company ESET, there was a 178% increase in sextortion cases between 2022 and 2023, driven in part by AI technology.
“Someone posing as a peer targets a young person,” Wray said, “and then uses (AI-generated) compromising images to convince the kid to send real images in return. In fact, it's a guy behind a keyboard in Nigeria, and once they have the images, they threaten to blackmail the kid, saying, 'If you don't pay, we're going to share these images that will ruin your life.'”
Wray also addressed disinformation surrounding the upcoming U.S. presidential election, noting that it's “not the time to panic,” but stressing that it's the responsibility of “everyone in America” to bring “a heightened sense of focus and caution” to their use of AI, and to keep in mind that AI “can be used by bad guys against all of us.”
“All too often we find that something on social media that looks like Bill from Topeka or Mary from Dayton is actually a Russian or Chinese intelligence officer on the outskirts of Beijing or Moscow,” Wray said.
In fact, a Statista poll found that more than a third of U.S. respondents saw misleading information – or suspected misinformation – on important issues toward the end of 2023. This year, misleading AI-generated images of Vice President Kamala Harris and former President Donald Trump garnered millions of views on social networks like X.
Bill Gates on AI disruption
For a dose of techno-optimism, Oprah interviewed Microsoft founder Bill Gates, who expressed hope that artificial intelligence would give new impetus to the fields of education and medicine.
“AI is like a third person sitting in on a doctor's appointment, creating a transcript and suggesting a prescription,” Gates said. “Instead of the doctor sitting in front of a computer screen, they interact with you, and the software makes sure a really good transcript is created.”
However, Gates ignored the potential for bias introduced by flawed AI training.
A recent study showed that speech recognition systems from leading technology companies are twice as likely to mistranscribe audio from Black speakers as from white speakers. Other research has shown that AI systems reinforce long-held, false beliefs that there are biological differences between Black and white people – falsehoods that lead doctors to misdiagnose health problems.
In the classroom, Gates said, AI could be “always available” and “understand how to motivate you… whatever your level of knowledge.”
That doesn't necessarily reflect how AI is perceived in many classrooms.
Last summer, schools and colleges rushed to ban ChatGPT over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not everyone is convinced of GenAI's potential for good, pointing to surveys such as one by the UK Safer Internet Centre, which found that over half of kids said they'd seen peers use GenAI in a harmful way – for example, by creating believable misinformation or images intended to upset someone.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) pushed for governments to regulate the use of GenAI in education, including imposing age limits for users and guardrails around data protection and user privacy.