Artificial intelligence (AI) isn’t just data, chips and code: it is also the product of the metaphors and narratives we use to talk about it. How we represent this technology shapes how the public imagination understands it and, more broadly, how people design it, use it and what impact it has on society at large.
Worryingly, many studies show that the dominant depictions of AI (anthropomorphic “assistants”, artificial brains and the ever-present humanoid robot) have little basis in fact. These images may appeal to companies and journalists, but they are entrenched myths that distort the nature, capabilities and limitations of current AI models.
If we portray AI in a misleading way, we will struggle to truly understand it. And if we don’t understand it, how can we hope to use it, regulate it and make it work in ways that serve our common interests?
The Myth of Autonomous Technology
Distorted representations of AI are part of a broader misconception about technology that Langdon Winner called “autonomous technology” in 1977: the idea that machines develop a life of their own and exert an independent, purposeful and often destructive influence on society.
AI offers a perfect embodiment of this, because the narratives surrounding it flirt with the myth of an intelligent, autonomous creation, and with the punishment for assuming this divine role. It is an age-old trope that has given us stories ranging from the myth of Prometheus to Frankenstein, Terminator and Ex Machina.
The myth is already present in the ambitious term “artificial intelligence” itself, coined by the computer scientist John McCarthy in 1955. The label prevailed despite, or perhaps because of, the many misunderstandings it causes.
As Kate Crawford succinctly explains in her Atlas of AI: “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories and classifications.”
Most of the problems with the dominant narrative of AI can be traced to this tendency to portray it as an independent, almost alien entity: something unfathomable that exists beyond our control or decisions.
Misleading metaphors
The language used by many media outlets, institutions and even experts to discuss AI is deeply flawed. It is full of anthropomorphism and animism, images of robots and brains, frequently made-up stories about machines rebelling or acting inexplicably, and debates about their alleged consciousness. All of this feeds a prevailing sense of urgency, panic and inevitability.
This vision culminates in the narrative that has driven the development of AI since its inception: the promise of artificial general intelligence (AGI), a putative human-level or superhuman intelligence that would transform the world and even our species. Companies such as Microsoft and OpenAI, and technology leaders like Elon Musk, have been predicting AGI as an ever-imminent milestone for some time now.
However, the truth is that the path to this technology is unclear, and there isn’t even a consensus on whether it will ever be possible.
Narrative, power and the AI bubble
This isn’t only a theoretical problem. The deterministic and animistic view of AI constructs a predetermined future, as myths of autonomous technology inflate expectations and distract attention from the real challenges AI poses.
This hinders a more informed and open public debate about the technology. A groundbreaking report from the AI Now Institute calls the promise of AI “the argument to end all arguments”, a way to avoid any questioning of the technology itself.
Besides fuelling a mix of exaggerated expectations and fears, these narratives also bear responsibility for inflating the AI economic bubble that reports and technology leaders are now warning about. If the bubble exists and eventually bursts, we should remember that it was driven not only by technical achievements but also by a narrative that is as misleading as it is convincing.
Change the narrative
To fix the broken AI narrative, we must bring its cultural, social and political dimensions to the fore. We need to move beyond the myth of autonomous technology and start thinking about AI as an interaction between technology and people.
In practice, this means shifting focus in several ways: from the technology to the people who control it; from a techno-utopian future to a present still under construction; from apocalyptic visions to real and present risks; and from portraying AI as unique and inevitable to emphasizing autonomy, choice and diversity among humans.
We can drive these changes in a number of ways. In my book, Technohumanism: A Narrative and Aesthetic Design for Artificial Intelligence, I propose several stylistic recommendations for escaping the autonomous-AI narrative. These include not using AI as the grammatical subject of a sentence when it is being used as a tool, and avoiding anthropomorphic verbs when we talk about it.
Playing with the term “AI” itself also helps us realize how much words can change our perception of technology. Try replacing it in a sentence with, for instance, “complex task processing”, one of the less ambitious but more apt names considered in the field’s early days.
Important debates about AI, from regulation to its impact on education and employment, will remain on shaky ground until we correct the way we talk about it. Crafting a narrative that highlights the social and technical reality of AI is an urgent ethical challenge. Meeting it successfully will benefit technology and society alike.

