
Book award winner Parmy Olson says it's not too late to control AI


“While (Sam) Altman measures success by numbers, whether in investments or people using a product, (Demis) Hassabis chased awards,” writes Parmy Olson in her book about the co-founders of DeepMind and OpenAI. “(Hassabis) often told employees that he wanted DeepMind to win between three and five Nobel Prizes in the coming decade.”

Just hours after Olson won this week's 2024 Financial Times and Schroders Business Book of the Year Award, Hassabis made the first edition obsolete by accepting the Nobel Prize in Chemistry in Stockholm for his work on an AI system that can predict the structures of nearly all known proteins.

The fact that Hassabis is already one step closer to his ambitious goal of winning a Nobel Prize illustrates the speed at which AI is changing the world. Olson's difficult task was to write a book about this rapidly evolving technology that would stand the test of time. In an interview this week, she said she wanted to explain “the struggle for control (of AI) and also . . . for proper oversight”. FT editor Roula Khalaf, chair of the book prize jury, says Olson “brilliantly describes the development of artificial intelligence as an exciting race” between the spiritual-minded, game-obsessed Hassabis and the number-crunching Altman.

Despite the differences between Hassabis and Altman, Olson says she was also fascinated by their similarities. Both believed that AGI – the point at which AI surpasses human cognitive abilities – would “solve a lot of our current social ills and problems.” Both shared concerns about a lack of regulation and excessive corporate control over AI. “Both tried to put governance structures in place to separate the technology a little bit and give it proper control,” she says, “and both failed.”

Google now controls DeepMind, while Microsoft backs OpenAI, whose public launch of ChatGPT two years ago accelerated the use of generative AI. “The fairly utopian, almost humanitarian ideals of the founders sort of faded into the background as they became increasingly aligned with two very large technology companies,” says Olson. Her book was intended, in part, as a warning about the need for appropriate regulation of the emerging tech oligopoly.

But is it too late to impose regulatory and ethical restrictions on AI, given how quickly it is evolving? Olson, a technology columnist at Bloomberg, doesn't think so. There is still time to influence “how technology companies design their algorithms” to make sure they are safer and less biased.

Laws such as the EU's AI Act, which provides for a strict regulatory system, will impose guardrails. But Olson also points out that companies buying generative AI systems will exert a restraining influence on technology suppliers. “There is quite a lot of experimentation happening, but actually not that much is being spent on putting these AI systems into practice because there are real concerns about hallucinations . . . and bias,” she says. “Companies like banks and healthcare systems have their own regulatory systems that they must comply with” before launching AI-powered products and services to customers.

When this year's book prize was launched, previous winners were asked what they would add to their books if given the chance to write a new chapter. Supremacy was only published in September, but Olson is aware that future editions might have to acknowledge the importance of Donald Trump's election last month and entrepreneur Elon Musk's closeness to the U.S. president-elect.

Musk was a co-founder and early backer of OpenAI with Altman before breaking away and founding his own startup, xAI, which is training Grok, a rival to ChatGPT and Google's Gemini. Olson says she is “honestly surprised at how quickly Grok has grown in terms of its ability to raise money.” But she warns that far from accelerating a loose regulatory regime for AI, Musk's presence in a Trump administration could “throw a spanner in the works.” “You have to remember that Musk is an AI doomer, and he started OpenAI partly because he was so worried about Google having control over AGI.”

Olson's concern about AI being weaponised solely for commercial purposes is clear. Some AI chatbot startups are already fostering an emotional connection between users and bots. She worries that ad-supported models could trigger an addictive cycle of chatbot use. It would be “harder for the provider to think about the user's well-being because their business model depends on that person staying on their app for as long as possible.”

Still, Olson says she's not predicting an AI dystopia. When the FT asked ChatGPT to pose an “unexpected question” to her, the bot asked: “If AI were to write a definitive history of humanity in 100 years, what perspective or biases might it have? And how would that history differ from those written by humans today?”

Olson replies: “If people really try hard” and a more just society emerges, “maybe whatever the AI writes . . . will be a reflection of that. I'm optimistic about the future, so I hope it actually turns out to be quite an inspiring read.”
