In the film Her, a lonely writer named Theodore Twombly falls in love with the disembodied voice of Samantha, a digital assistant played by the actress Scarlett Johansson. "I can't believe I'm having this conversation with my computer," Twombly tells Samantha. "You're not. You're having this conversation with me," Samantha coos.
The genius of Spike Jonze's script lies in its exploration of the borderlands between the artificial and the real. But the science fiction film, released in 2013, has acquired an ironic resonance today after OpenAI released its latest multimodal chatbot, GPT-4o, with an AI voice that appears to mimic Johansson's.
Johansson said she had declined OpenAI's request to use her voice, adding that she was "shocked and offended" when she discovered the company had used a voice that was "eerily similar" to her own. She demanded more transparency and appropriate legislation to ensure that individual rights are protected. OpenAI paused its use of the voice, which it later explained belonged to a different, unnamed actor.
To officials attending the AI Safety Summit in Seoul this week, the incident might have seemed like a distracting celebrity spat. But the dispute touches on three broader concerns about generative AI: identity theft, the erosion of intellectual property and the loss of trust. Can AI companies be relied on to use the technology responsibly? Worryingly, even some of those previously responsible for safety are asking that question.
Last week, Jan Leike resigned as head of a safety team at OpenAI after Ilya Sutskever, one of the company's co-founders and its chief scientist, left the company. On X, Leike claimed that safety at the company had taken a back seat to "shiny products." He argued that OpenAI should spend far more of its bandwidth on security, confidentiality, alignment and societal impact. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote.
In his own parting remarks, Sutskever said he was confident OpenAI would develop AI that was "both safe and beneficial." However, Sutskever was one of the board members who tried to oust the company's chief executive, Sam Altman, last year. After Altman was reinstated following an employee revolt, Sutskever said he regretted his involvement in the coup. But his departure removes another counterweight to Altman.
It is not only OpenAI, however, that has stumbled in deploying AI technology. Google has had its own problems with generative AI, as when its Gemini chatbot generated ahistorical images of black and Asian Nazi stormtroopers. Both companies say that missteps are inevitable when adopting new technologies and that they respond quickly to their mistakes.
Nevertheless, it would build more trust if the leading AI companies were more transparent. They still have a long way to go, as the Foundation Model Transparency Index, released this week by Stanford University, shows. The index, which analyzes 10 leading model developers across 100 indicators including data access, model trustworthiness, usage policies and downstream impact, finds that the major companies have taken steps to improve transparency over the past six months but that some models remain "extremely opaque."
"What these models allow and don't allow will shape our culture. It's important to examine them closely," Percy Liang, director of the Stanford Center for Research on Foundation Models, tells me. What worries him most is the concentration of corporate power. "What happens when just a few organizations control the content and behavior of future AI systems?"
Such concerns may fuel calls for further regulatory intervention, such as the EU's AI Act, which was approved by the European Council this month. More than a quarter of US state legislatures are also considering bills to regulate AI. However, some in the industry fear that regulation will only entrench the power of the big AI companies.
"The voices in the room are from the big tech companies. They can consolidate their power through regulation," Martin Casado, an investment partner at the venture capital firm Andreessen Horowitz, tells me. Policymakers should pay far more attention to the little tech companies, the many startups that are using open-source AI models to compete with the big players.
At the Seoul summit this week, 10 countries and the EU agreed to establish an international network of safety institutes to monitor the performance of frontier AI models, which is welcome. But they should now listen to Johansson and dig much deeper into the powerful corporate structures that deploy these models.