AI sceptic Emily Bender: ‘The emperor has no clothes’

Before Emily Bender and I have even looked at a menu, she has dismissed artificial intelligence chatbots as “plagiarism machines” and “synthetic text extruders”. Soon after the food arrives, the professor of linguistics adds that the vaunted large language models (LLMs) that underpin them are “born shitty”.

Since OpenAI launched its wildly popular ChatGPT chatbot in late 2022, AI firms have sucked in tens of billions of dollars in funding by promising scientific breakthroughs, material abundance and a new chapter in human civilisation. AI is already capable of doing entry-level jobs and will soon “discover new knowledge”, OpenAI chief Sam Altman told a conference this month.

According to Bender, we’re being sold a lie: AI won’t fulfil those promises, nor will it kill us all, as others have warned. AI is, despite the hype, pretty bad at most tasks, and even the best systems available today lack anything that might be called intelligence, she argues. Recent claims that models are developing an ability to understand the world beyond the data they are trained on are nonsensical. We are “imagining a mind behind the text”, she says, but “the understanding is all on our end”.

Bender, 51, is an authority on how computers model human language. She spent her early academic career at Stanford and Berkeley, two Bay Area institutions that are the wellsprings of the modern AI revolution, and worked at YY Technologies, a natural language processing company. She witnessed the bursting of the dotcom bubble in 2000 first-hand.

Her mission now is to deflate AI, which she will only refer to in air quotes and says should really just be called automation. “If we want to get past this bubble, I think we need more people not falling for it, not believing it, and we need those people to be in positions of power,” she says.

In a recent book, The AI Con, she and her co-author, the sociologist Alex Hanna, take a sledgehammer to AI hype and raise the alarm about the technology’s more insidious effects. She is clear on her motivation. “I think what it comes down to is: nobody should have the ability to impose their view on the world,” she says. Thanks to the massive sums invested, a tiny cabal of men has the power to shape what happens to large swaths of society and, she adds, “it really gets my goat”.

Her thesis is that the whizzy chatbots and image-generation tools created by OpenAI and rivals Anthropic, Elon Musk’s xAI, Google and Meta are little more than “stochastic parrots”, a term she coined in a 2021 paper. A stochastic parrot, she wrote, is a system “for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”.

The paper shot her to prominence and triggered a backlash in AI circles. Two of her co-authors, senior members of the ethical AI team at Google, lost their jobs at the company shortly after publication. Bender has also faced criticism from other academics for what they regard as a heretical stance. “It feels like people are mad that I’m undermining what they see as the kind of crowning achievement of our field,” she says.

The controversy highlighted tensions between those trying to commercialise AI fast and opponents warning of its harms and urging more responsible development. In the four years since, the former group has been ascendant.

We’re meeting in a low-key sushi restaurant in Fremont, Seattle, not far from the University of Washington where Bender teaches. We are almost the only patrons on a sun-drenched Monday afternoon in May, and the waiter has tired of asking us what we want after half an hour and three attempts. Instead we turn to the iPad on the table, which promises to streamline the process.

It achieves the opposite. “I’m going to get one of those,” says Bender: “add to cart. Actual food may differ from image. Good, because the image is grey. This is great. Yeah. Show me the . . . where’s the otoro? There we go. Ah, it may be they don’t have it.” We give up. The waiter returns and confirms they do in fact have the otoro, a fatty cut of tuna belly. Realising I’m British, he lingers to ask which football team I support, offers his commiserations on Arsenal finishing as runners-up this season and tells me he’s a Tottenham fan. I wonder if it’s too late to revert to the iPad.

Menu

Kamakura Japanese Cuisine and Sushi
3520 Fremont Ave N, Seattle, 98103

Otoro nigiri x2 $31.90
Salmon nigiri x2 $8
Agedashi x2 $8
Avocado maki $5.95
Edamame $3.50
Barley tea x2 $5
Total (including tax and tip) $82.56

Bender was not always destined to take the fight to the world’s biggest firms. A decade ago, “I was minding my own business doing grammar engineering,” she says. But after a wave of social movements, including Black Lives Matter, swept through campus, “I started asking, well, where do I sit? What power do I have and how can I use it?” She set up a class on ethics in language technology and a few years later found herself “having just unending arguments on Twitter about why language models don’t ‘understand’, with computer scientists who didn’t have the first bit of training in linguistics”.

Eventually, Altman himself came to spar. After Bender’s paper came out, he tweeted “i’m a stochastic parrot, and so r u”. Ironically, given Bender’s critique of AI as a regurgitation machine, her phrase is now often attributed to him.

She sees her role as “being able to speak truth to power based on my academic expertise”. The truth from her perspective is that the machines are inherently far more limited than we have been led to believe.

Her critique of the technology is layered on a more human concern: that chatbots being lauded as a new paradigm in intelligence threaten to accelerate social isolation, environmental degradation and job loss. Training cutting-edge models costs billions of dollars and requires enormous amounts of power and water, as well as workers in the developing world willing to label distressing images or categorise text for a pittance. The ultimate effect of all this work and energy will be to create chatbots that displace those whose art, literature and knowledge are AI’s raw data today.

“We are not trying to change Sam Altman’s mind. We are trying to be part of the discourse that’s changing other people’s minds about Sam Altman and his technology,” she says.


The table is now filled with dishes. The otoro nigiri is soft, tender and every bit as good as Bender promised. We have each ordered agedashi tofu, perfectly deep-fried so it stays firm in its pool of dashi and soy sauce. Salmon nigiri, avocado maki and tea also dot the space between us.

Bender and Hanna were writing in late 2024, which they describe in the book as the peak of the AI boom. But since then the race to dominate the technology has only intensified. Leading firms including OpenAI, Anthropic and Chinese rival DeepSeek have launched what Google’s AI team describe as “thinking models, capable of reasoning through their thoughts before responding”.

The ability to reason would represent a significant milestone on the journey towards AI that could outperform experts across the full range of human intelligence, a goal sometimes called artificial general intelligence, or AGI. Many of the most prominent people in the field, including Altman, OpenAI’s former chief scientist and co-founder Ilya Sutskever, and Elon Musk, have claimed that goal is at hand.

Anthropic chief Dario Amodei describes AGI as “an imprecise term which has gathered a lot of sci-fi baggage and hype”. But by next year, he argues, we could have tools that are “smarter than a Nobel Prize winner across most relevant fields”, “can control existing physical tools” and “prove unsolved mathematical theorems”. In other words, with more data, computing power and research breakthroughs, today’s AI models or something that closely resembles them could extend the boundaries of human understanding and cognitive ability.

Bender dismisses the idea, describing the technology as “a fancy wrapper around some spreadsheets”. LLMs ingest reams of data and base their responses on the statistical probability of certain words occurring alongside others. Computing improvements, an abundance of online data and research breakthroughs have made that process far quicker, more sophisticated and more relevant. But there is no magic and no emergent mind, says Bender.

“If you’re going to learn the patterns of which words go together for a given language, if it’s not in the training data, it’s not going to be in the output of the system. That’s just fundamental,” she says.
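Her point can be made concrete with a toy sketch: a minimal bigram model in Python, vastly simpler than any real LLM and purely illustrative. It counts which words follow which in its training text and generates by sampling from those counts; every adjacent word pair it can emit must already appear in the training data, and nothing in it represents meaning.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Emit words by sampling each next word in proportion to how often
    it followed the previous one -- pattern-matching, not understanding."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # word never seen mid-sentence: nothing to predict
            break
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real LLMs condition on long contexts with billions of parameters rather than single-word counts, but the family resemblance is the point of the “stochastic parrot” label: output is assembled from observed form, so what is absent from the training data cannot appear.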

In 2020, Bender wrote a paper comparing LLMs to a hyper-intelligent octopus eavesdropping on human conversation: it would pick up the statistical patterns but would have little hope of understanding meaning or intent, or of being able to refer to anything outside of what it had heard. She arrives at our lunch today sporting a pair of wooden octopus earrings.

There are other sceptics in the field, such as AI researcher Gary Marcus, who argue the transformational potential of today’s best models has been massively oversold and that AGI remains a pipe dream. A week after Bender and I meet, a group of researchers at Apple publish a paper echoing some of Bender’s critiques. The best “reasoning” models today “face a complete accuracy collapse beyond certain complexities”, the authors write, although other researchers were quick to criticise the paper’s methodology and conclusions.

Sceptics tend to be drowned out by boosters with bigger profiles and deeper pockets. OpenAI is raising $40bn from investors led by SoftBank, the Japanese technology investor, while rivals xAI and Anthropic have also secured billions of dollars in the last year. OpenAI, Anthropic and xAI are collectively valued at close to $500bn today. Before ChatGPT was launched, OpenAI and Anthropic were valued at a fraction of that and xAI didn’t exist.

“It’s to their benefit to have everyone believe that it’s a thinking entity that is very, very powerful instead of something that’s, you know, a glorified Magic 8 Ball,” says Bender.


We have been talking for an hour and a half, the bowl of edamame beans between us steadily dwindling, and our cups of barley tea have been refilled more than once. As Bender returns to her main theme, I notice she has quietly constructed an origami bird from her chopstick wrapper. AI’s boosters might be hawking false promises, but their actions have real consequences, she says. “The more we build systems around this technology, the more we push workers out of sustainable careers and also cut off the entry-level positions . . . And then there’s all the environmental impact,” she says.

Bender is entertaining company, a Cassandra with a wry grin and twinkling eye. At times it feels as if she is playing up to the role of nemesis to the tech bosses who live down the Pacific coast in and around San Francisco.

But where Bender’s peers in Silicon Valley might gush over the potential of the technology, she can seem blinkered in another way. When I ask her if she sees a single positive use for AI, all she will concede is that it might help her identify a song.

I ask how she squares her twin claims that chatbots are bullshit generators and capable of devouring large portions of the labour market. Bender says they can be simultaneously “ineffective and detrimental”, and offers the example of a chatbot that can spin up plausible-looking news articles without any actual reporting: great for the host of a website making money from click-based advertising, less so for journalists and the truth-seeking public.

She argues forcefully that chatbots are born flawed because they are trained on data sets riddled with bias. Even something as narrow as a company’s policies might contain prejudices and errors, she says.

Aren’t these really critiques of society rather than technology? Bender counters that technology built on top of the mess of society doesn’t just replicate its mistakes but reinforces them, because users think “this is so big it’s all-encompassing and it can see everything and so therefore it has this view from nowhere. I think it’s always important to recognise that there is no view from nowhere.”

Bender dedicates the book to her two sons, who are composers, and she is particularly animated describing the deleterious impact of AI on the creative industries.

She is scathing, too, about AI’s potential to empathise or offer companionship. When a chatbot tells you that you are heard or that it understands, that is nothing but placebo. “When Mark Zuckerberg suggests that there’s a demand for friendships beyond what we actually have and he’s going to fill that demand with his AI friends, really that’s basically tech companies saying, ‘We are going to isolate you from each other and make sure that all your connections are mediated through tech’.”

Yet employers are deploying the technology, and finding value in it. AI has accelerated the speed at which software engineers can write code, and more than 500mn people regularly use ChatGPT.

AI is also a cornerstone of national policy under US President Donald Trump, with superiority in the technology seen as essential to winning a new cold war with China. That has added urgency to the race and drowned out calls for more stringent regulation. We discuss the parallels between the hype of today’s AI moment and the origins of the field in the 1950s, when mathematician John McCarthy and computer scientist Marvin Minsky organised a workshop at Dartmouth College to discuss “thinking machines”. In the background during that era was an existential competition with the Soviet Union. This time the Red Scare stems from fear that China will develop AGI before the US, and use its mastery of the technology to undermine its rival.

This is specious, says Bender, and beating China to some level of superintelligence is a pointless goal, given the country’s ability to catch up quickly, demonstrated by the launch of a ChatGPT rival by DeepSeek earlier this year. “If OpenAI builds AGI today, they’re building it for China in three months.”

Nonetheless, competition between the two powers has created huge commercial opportunities for US start-ups. On Trump’s first full day of his second term, he invited Altman to the White House to unveil Stargate, a $500bn data centre project designed to cement the US’s AI primacy. The project has since expanded abroad, in what those involved describe as “commercial diplomacy” designed to bolster America’s sphere of influence using the technology.

If Bender is right that AI is just automation in a shiny wrapper, this unprecedented outlay of economic and political capital will achieve little more than the erosion of already fragile professions, social institutions and the environment.

So why, I ask, are so many people convinced this is a more consequential technology than the internet? Some have a commercial incentive to believe, others are more honest but no less deluded, she says. “The emperor has no clothes. But it’s surprising how many people want to be the naked emperor.”
