
Is AI a con? A new book punctures the hype and proposes some ways to resist

Is AI going to take over the world? Have scientists created an artificial lifeform that can think for itself? Will it replace all our jobs, even creative ones, like doctors, teachers and care workers? Are we about to enter an age where computers are better than humans at everything?

The answers, as the authors of The AI Con stress, are “no”, “they wish”, “LOL” and “definitely not”.



Artificial intelligence is a marketing term as much as a definite set of computational architectures and techniques. AI has become a magic word for entrepreneurs to attract startup capital for dubious schemes, an incantation deployed by managers to instantly gain the status of future-forward leaders.

In a mere two letters, it conjures a vision of automated factories and robotic overlords, a utopia of leisure or a dystopia of servitude, depending on your perspective. It is not just a technology, but a powerful vision of how society should function and what our future should look like.

In this sense, AI doesn’t have to work for it to work. The accuracy of a large language model may be doubtful, the productivity of an AI office assistant may be claimed rather than demonstrated, but this bundle of technologies, companies and claims can still alter the terrain of journalism, education, healthcare, service work and our broader sociocultural landscape.

Pop goes the bubble

For Emily M. Bender and Alex Hanna, the AI hype bubble must be popped.

Bender is a linguistics professor at the University of Washington, who has become a prominent technology critic. Hanna is a sociologist and former employee of Google, who is now the director of research at the Distributed AI Research Institute. After teaming up to mock AI boosters in their popular podcast, Mystery AI Hype Theater 3000, they have distilled their insights into a book written for a general audience. They meet the unstoppable force of AI hype with immovable scepticism.

The first step in this program is grasping how AI models work. Bender and Hanna do an excellent job of decoding technical terms and unpacking the “black box” of machine learning for lay people.

Driving this wedge between hype and reality, between assertions and operations, is a recurring theme across the pages of The AI Con, and one that should progressively erode readers’ trust in the tech industry. The book outlines the strategic deceptions employed by powerful companies to reduce friction and accumulate capital. If the barrage of examples tends to blur together, the sense of technical bullshit lingers.

What is intelligence? A famous and highly cited paper co-written by Bender asserts that large language models are simply “stochastic parrots”, drawing on training data to predict which set of tokens (i.e. words) is most likely to follow the prompt given by a user. Harvesting millions of crawled websites, the model can regurgitate “the moon” after “the cow jumped over”, albeit in far more sophisticated variants.

Rather than actually understanding a concept in all its social, cultural and political contexts, large language models perform pattern matching: an illusion of thinking.
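To make the parrot metaphor concrete, here is a toy sketch of next-token prediction in Python. It uses simple bigram counts rather than the neural networks behind real large language models, and the tiny corpus is invented for illustration, but the underlying move is the same: pick a statistically likely continuation, with no understanding involved.

```python
from collections import Counter, defaultdict

# A toy "stochastic parrot": a bigram model that predicts the next word
# purely from co-occurrence counts in its training text. Real models use
# neural networks over subword tokens, but the objective is analogous.
corpus = "the cow jumped over the moon . the cow saw the moon .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often `nxt` follows `prev`

def predict_next(prompt: str) -> str:
    """Return the word most likely to follow the prompt's last word."""
    last_word = prompt.split()[-1]
    # No meaning, no context: just the highest co-occurrence count.
    return counts[last_word].most_common(1)[0][0]

print(predict_next("the cow jumped over"))      # -> "the"
print(predict_next("the cow jumped over the"))  # -> "cow" ("moon" is tied)
```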

But I’d suggest that, in many domains, a simulation of thinking is sufficient, because it is met halfway by those engaging with it. Users project agency onto models via the well-known Eliza effect, imparting intelligence to the simulation.

Management are pinning their hopes on this simulation. They view automation as a way to streamline their organisations and not be “left behind”. This powerful vision of early adopters vs extinct dinosaurs is one we see repeatedly with the advent of new technologies – and one that benefits the tech industry.

In this sense, poking holes in the “intelligence” of artificial intelligence is a losing move, missing the social and financial investment that wants this technology to work. “Start with AI for every task. No matter how small, try using an AI tool first,” commanded Duolingo’s chief engineering officer in a recent message to all employees. Duolingo has joined Fiverr, Shopify, IBM and a slew of other companies proclaiming their “AI first” approach.

‘Large language models perform pattern matching: an illusion of thinking.’ Image: Talking to AI 2.0 – Yutong Liu.
Kingston School of Art/https://betterimagesofai.org, CC BY

Shapeshifting technology

The AI Con is strongest when it looks beyond or around the technologies to the ecosystem surrounding them, a perspective I have also argued is immensely helpful. By understanding the companies, actors, business models and stakeholders involved in a model’s production, we can evaluate where it comes from, its purpose, its strengths and weaknesses, and what all this might mean downstream for its possible uses and implications. “Who benefits from this technology, who is harmed, and what recourse do they have?” is a solid starting point, Bender and Hanna suggest.

These basic but important questions extract us from the weeds of technical debate – how does AI function, how accurate or “good” is it really, how can we possibly understand this complexity as non-engineers? – and give us a critical perspective. They place the onus on industry to explain, rather than on users to adapt or be rendered superfluous.

We don’t need to be able to explain technical concepts like backpropagation or diffusion to know that AI technologies can undermine fair work, perpetuate racial and gender stereotypes, and exacerbate environmental crises. The hype around AI aims to distract us from these concrete effects, to trivialise them and thus encourage us to ignore them.

Emily M. Bender.
University of Washington

As Bender and Hanna explain, AI boosters and AI doomers are really two sides of the same coin. Conjuring up nightmare scenarios of self-replicating AI terminating humanity or claiming sentient machines will usher us into a posthuman paradise are, in the end, the same thing. They place a religious-like faith in the capabilities of technology, which dominates debate, allowing tech companies to retain control of AI’s future development.

The risk of AI is not potential doom in the future, the nuclear threat of the Cold War, but the quieter and more significant harm to real people in the present. The authors explain that AI is more like a panopticon “that allows a single prison warden to keep track of hundreds of prisoners at once”, or the “surveillance dragnets that track marginalised groups in the West”, or a “toxic waste, salting the earth of a Superfund site”, or a “scabbing worker, crossing the picket line at the behest of an employer who wants to signal to the picketers that they are disposable. The totality of systems sold as AI are these things, rolled into one.”

A decade ago, with another “game-changing” technology, author Ian Bogost observed that

rather than utopia or dystopia, we often end up with something less dramatic yet more disappointing. Robots neither serve human masters nor destroy us in a dramatic genocide, but slowly dismantle our livelihoods while sparing our lives.

The pattern repeats. As AI matures (to a point) and is adopted by organisations, it moves from innovation to infrastructure, from magic to mechanism. Grand promises never materialise. Instead, society endures a harder, bleaker future. Workers feel more pressure; surveillance is normalised; truth is muddied with post-truth; the marginal become more vulnerable; the planet gets hotter.

Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.

Co-opting critique

In The New Spirit of Capitalism, sociologists Luc Boltanski and Eve Chiapello demonstrate how capitalism has mutated over time, folding critiques back into its DNA.

After enduring a series of blows around alienation and automation in the 1960s, capitalism moved from a hierarchical Fordist mode of production to a more flexible form of self-management over the next two decades. It began to favour “just in time” production, done in smaller teams, that (ostensibly) embraced the creativity and ingenuity of each individual. Neoliberalism offered “freedom”, but at a price. Organisations adapted; concessions were made; critique was defused.


Verso Books

AI continues this kind of co-option. Indeed, the current moment might be described as the end of the first wave of critical AI. In the last five years, tech titans have released a series of bigger and “better” models, with both the public and scholars focusing largely on generative and “foundation” models: ChatGPT, StableDiffusion, Midjourney, Gemini, DeepSeek, and so on.

Scholars have heavily criticised aspects of these models – my own work has explored truth claims, generative hate, ethics washing and other issues. Much work has focused on bias: the way in which training data reproduces gender stereotypes, racial inequality, religious bigotry, western epistemologies, and so on.

Much of this work is excellent and seems to have filtered into the public consciousness, based on conversations I’ve had at workshops and events. However, its flagging of such issues allows tech companies to practise problem solving. If the accuracy of a facial-recognition system is lower with Black faces, add more Black faces to the training set. If the model is accused of English dominance, fork out some money to produce data on “low-resource” languages.

Companies like Anthropic now often perform “red teaming” exercises designed to highlight hidden biases in models. Companies then “fix” or mitigate these issues. But due to the massive size of the data sets, these tend to be band-aid solutions: superficial rather than structural tweaks.

For instance, soon after launching, AI image generators came under pressure for not being “diverse” enough. In response, OpenAI invented a technique to “more accurately reflect the diversity of the world’s population”. Researchers discovered this technique was simply tacking additional hidden prompts (e.g. “Asian”, “Black”) onto user prompts. Google’s Gemini model also seems to have adopted this approach, which resulted in a backlash when images of Vikings or Nazis had South Asian or Native American features.
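What the researchers reportedly found can be sketched in a few lines. The following Python is a hypothetical reconstruction, not OpenAI’s or Google’s actual code; the modifier list and the random-append logic are illustrative assumptions. It shows why the approach is a band-aid: a modifier gets bolted on regardless of whether it makes historical or contextual sense.

```python
import random

# Hypothetical sketch of the reported "diversity" patch: silently append
# a demographic modifier to the user's prompt before it reaches the
# image model. The modifier list here is invented for illustration.
HIDDEN_MODIFIERS = ["Asian", "Black", "Hispanic", "South Asian", "Native American"]

def rewrite_prompt(user_prompt: str) -> str:
    # The user never sees this addition, and nothing checks whether the
    # modifier is plausible for the scene being requested.
    return f"{user_prompt}, {random.choice(HIDDEN_MODIFIERS)}"

print(rewrite_prompt("a portrait of a Viking warrior"))
# e.g. "a portrait of a Viking warrior, South Asian"
```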

The point here is not whether AI models are racist or historically inaccurate or “woke”, but that models are political and never disinterested. Harder questions about how culture is made computational, or what kind of truths we want as a society, are never broached and therefore never worked through systematically.

Such questions are certainly broader and less “pointy” than bias, but also less amenable to being translated into a problem for a coder to solve.

What next?

How, then, should those outside the academy respond to AI? The past few years have seen a flurry of workshops, seminars and professional development initiatives. These range from “gee whiz” tours of AI features for the workplace, to sober discussions of risks and ethics, to hastily organised all-hands meetings debating how to respond now, and next month, and the month after that.

Alex Hanna.
Will Toft/alex-hanna.com, CC BY

Bender and Hanna wrap up their book with their own responses. Many of these, like their questions about how models work and who benefits, are simple but fundamental, offering a strong starting point for organisational engagement.

For the technosceptical duo, refusal is also clearly an option, though individuals will obviously have vastly different degrees of agency when it comes to opting out of models and pushing back on adoption strategies. Refusal of AI, as with many technologies that have come before it, often relies to some extent on privilege. The six-figure consultant or coder may have discretion that the gig worker or service employee cannot exercise without penalties or punishments.

If refusal is fraught at the individual level, it seems more viable and sustainable at a cultural level. Bender and Hanna suggest generative AI be responded to with mockery: companies that employ it should be derided as cheap or tacky.

The cultural backlash against AI is already in full swing. Soundtracks on YouTube are increasingly labelled “No AI”. Artists have launched campaigns and hashtags, stressing their creations are “100% human-made”.

These moves are attempts to establish a cultural consensus that AI-generated material is derivative and exploitative. And yet, if these moves offer some hope, they are swimming against the swift current of enshittification. AI slop means faster and cheaper content creation, and the technical and financial logic of online platforms – virality, engagement, monetisation – will always create a race to the bottom.

The extent to which the vision offered by big tech will be accepted, how far AI technologies will be integrated or mandated, how much individuals and communities will push back against them – these are still open questions. In some ways, Bender and Hanna successfully demonstrate that AI is a con. It fails at productivity and intelligence, while the hype launders a series of transformations that harm workers, exacerbate inequality and damage the environment.

Yet such consequences have accompanied previous technologies – fossil fuels, private cars, factory automation – and hardly dented their uptake and transformation of society. So while praise goes to Bender and Hanna for a book that shows how to “fight big tech’s hype and create the future we want”, the problem of AI resonates, for me, with Karl Marx’s observation that people “make their own history, but they do not make it just as they please”.
