
“Digital brains” that “think” and “feel”: Why do we humanize AI models, and are these metaphors actually helpful?

The press has always used metaphors and analogies to simplify complex issues and make them easier to understand. With the rise of chatbots driven by artificial intelligence (AI), the tendency to humanize technology has increased, whether through comparisons with medicine, well-known parables or dystopian scenarios.

Although what lies behind AI is nothing more than code and circuitry, the media often attributes human qualities to algorithms. So what do we lose and what do we gain when AI stops being a mere tool and becomes, linguistically speaking, a human alter ego, an entity that “thinks”, “feels” and even “cares”?



The digital brain

An article in the Spanish newspaper El País presented the Chinese AI model DeepSeek as a “digital brain” that “seems to be clearly aware of the geopolitical context of its birth”.

This kind of writing replaces technical jargon – the underlying model, the parameters, the GPUs, etc. – with an organ we all recognize as the seat of human intelligence. This has two effects. It allows people to grasp the scale and nature of the task the machine performs (“thinking”). However, it also suggests that AI has a “mind” capable of making judgments and remembering contexts – something far removed from current technical reality.

This metaphor fits within the classic conceptual metaphor theory of George Lakoff and Mark Johnson, which argues that metaphors help people understand reality and to think and act upon it. When we talk about AI, we turn difficult, abstract processes (“statistical calculation”) into familiar ones (“thinking”).

Although potentially helpful, this tendency carries the risk of obscuring the difference between statistical correlation and semantic understanding. It feeds the illusion that computer systems can really “know” something.

Machines with feelings

In February 2025, ABC published a report on “emotional AI” that asked: “Will the day come when it can feel?” The text covered the progress of a Spanish team attempting to equip conversational AI systems with a “digital limbic system”.

Here the metaphor becomes even bolder. The algorithm not only thinks, it can also feel suffering or joy. This comparison dramatizes innovation and brings it closer to the reader, but it contains a conceptual error: by definition, feelings are tied to physical existence and self-awareness, which software cannot have. Presenting AI as an “emotional subject” makes it easier to demand empathy from it or to accuse it of cruelty. It thereby shifts the moral focus away from the people who design and program the machine and onto the machine itself.

A similar article reflected that “when artificial intelligence seems human, feels like a person and lives like a person… what does it matter that it is a machine?”



Robots that care

Humanoid robots are often portrayed in these terms. A report in El País on elder-care androids in China described them as machines that “care for their elders”. With the word “care”, the article echoes the family’s duty to look after its elders, and the robot is presented as a relative who provides the emotional companionship and physical support previously supplied by family members or nursing staff.

This caregiver metaphor is not all bad. It legitimizes innovation in a context of demographic crisis and at the same time calms technological fears by presenting the robot, in view of staff shortages, as essential support rather than a threat to jobs.

However, it can also be seen as disguising the moral problems of responsibility that arise when care work is performed by a machine operated by private companies – not to mention the precarious nature of this kind of work.

The medical assistant

In another report in El País, large language models were presented as an assistant or “extension” of a physician, checking medical records and proposing diagnoses. The metaphor of the “intelligent scalpel” or the “tireless resident” positions AI within the health system as a trustworthy colleague rather than a substitute.

This hybrid frame – neither inert tool nor autonomous colleague – promotes public acceptance because it respects medical authority while promising efficiency. However, it also opens up questions of accountability: if the “extension” makes a mistake, does the fault lie with the human specialist, the software, or the company that markets it?

Why does the press resort to metaphor?

These metaphors are more than ornamental: they serve at least three purposes. First, they make understanding easier. Explaining deep neural networks takes time and technical jargon, but it is simpler for readers to talk about “brains”.

Second, they create narrative drama. Journalism lives on stories with protagonists, conflicts and outcomes. Humanizing AI gives those stories heroes and villains, mentors and apprentices.

Third, metaphors serve to formulate moral judgments. Only if the algorithm resembles a subject can it be held accountable or given credit.

However, the same metaphors can distort public deliberation. If AI “feels”, the implication is that it should be regulated as citizens are. It can likewise seem, of course, that we should accept its authority if it is regarded as a superior intelligence.



How to speak about AI

Eliminating these metaphors would be impossible, and it is not something we should strive for. Figurative language is how people understand the unknown; what matters is to use it critically. To that end, we offer authors and editors some recommendations:

  • First, it is important to add technical counterweights. That means, after introducing a metaphor, briefly but clearly explaining what the system in question does and does not do.

  • It is also essential to avoid attributing absolute human agency. That means phrases such as “decides” should be qualified: does the system “recommend”? Does the algorithm “classify”?

  • Another key is to cite human sources. Naming developers and regulatory authorities reminds us that technology does not emerge from a vacuum.

  • We should also diversify metaphors and explore less anthropomorphic images.

While “humanizing” artificial intelligence in the press helps readers familiarize themselves with complex technology, it also makes it easier to project fears, hopes and responsibilities onto servers and lines of code.

As this technology develops, the task facing journalists – and their readers – is to find a sensitive balance between the evocative power of metaphor and the conceptual precision we need to continue having informed debates about the future.
