
Humanizing AI could lead us to dehumanize ourselves

The Irish author John Connolly once said:

The nature of humanity, its essence, is to feel another's pain as one's own, and to act to alleviate that pain.

For most of our history, we believed empathy was a uniquely human trait – a special ability that sets us apart from machines and other animals. But that belief is now being challenged.

As AI becomes a bigger part of our lives and even enters our most intimate spheres, we face a philosophical conundrum: could attributing human qualities to AI diminish our own human nature? Our research suggests it is possible.

Digitizing companionship

In recent years, AI “companion” apps such as Replika have attracted hundreds of thousands of users. Replika lets users create custom digital partners for intimate conversations. Members who pay for Replika Pro can even turn their AI into a “romantic partner”.

Physical AI companions are not far behind. Companies such as JoyLoveDolls sell interactive sex robots with customizable features including breast size, ethnicity, movement, and AI responses such as moaning and flirting.

Although this is currently a niche market, history suggests today's digital trends will become tomorrow's global norms. With roughly one in four adults experiencing loneliness, demand for AI companions will only grow.

The dangers of humanizing AI

Humans have long attributed human characteristics to non-human entities – a tendency known as anthropomorphism. It's no surprise we do this with AI tools such as ChatGPT, which appear to “think” and “feel”. But why is humanizing AI a problem?

For one thing, it allows AI companies to exploit our tendency to form bonds with human-like entities. Replika is marketed as “the AI companion that cares”. Yet, to avoid legal issues, the company points out elsewhere that Replika is not sentient and merely learns through hundreds of thousands of user interactions.

Screenshot of conflicting information on Replika's help page compared with its advertising.

Some AI companies openly claim their AI assistants have empathy and can even anticipate human needs. Such claims are misleading and can take advantage of people seeking companionship. Users may become deeply emotionally invested if they believe their AI companion truly understands them.

This raises serious ethical concerns. A user may hesitate to delete (that is, to “abandon” or “kill”) their AI companion once they have ascribed some kind of sentience to it.

But what happens when that companion unexpectedly disappears, such as when the user can no longer afford it, or the company that runs it shuts down? Even if the companion is not real, the feelings attached to it are.

Empathy – more than a programmable output

Is there a danger that, by reducing empathy to a programmable output, we diminish its true essence? To answer this question, let's first consider what empathy really is.

Empathy involves responding to other people with understanding and care. It's when you share your friend's sorrow as they tell you about their grief, or when you feel joy radiating from someone you care about. It is a profound experience – rich and beyond simple measurement.

A fundamental difference between humans and AI is that humans genuinely feel emotions, while AI can only simulate them. This touches on the hard problem of consciousness: how subjective human experiences arise from physical processes in the brain.

Science has yet to solve the hard problem of consciousness. (Image: Shutterstock)

While AI can simulate understanding, any claimed “empathy” is the result of programming that mimics empathetic speech patterns. Unfortunately, AI providers have a financial incentive to encourage users to bond with their seemingly empathetic products.

The dehumanization hypothesis

Our “dehumanAIsation hypothesis” highlights the ethical concerns that come with trying to reduce humans to a few basic functions that can be replicated by a machine. The more we humanize AI, the more we risk dehumanizing ourselves.

For instance, if we rely on AI for emotional labor, we may become less tolerant of the imperfections of real relationships. This could weaken our social bonds and even lead to emotional deskilling. Future generations may become less empathetic, losing their grasp of essential human qualities as emotional skills continue to be commodified and automated.

As AI companions become more common, people may use them to replace real human relationships. This would likely increase loneliness and alienation – the very problems these systems claim to address.

AI companies' collection and analysis of emotional data also poses significant risks, as this data could be used to manipulate users and maximize profit. This would further erode our privacy and autonomy, taking surveillance capitalism to the next level.

Hold providers accountable

Regulators need to do more to hold AI providers accountable. AI companies should be honest about what their AI can and cannot do, especially when they risk exploiting users' emotional vulnerabilities.

Exaggerated claims of “real empathy” should be banned. Companies making such claims should be fined – and repeat offenders shut down.

Privacy policies should also be clear, fair, and free of hidden terms that allow companies to exploit user-generated content.

We must preserve the unique qualities that define the human experience. While AI can enhance certain aspects of our lives, it cannot and should not replace genuine human connection.
