Anthropomorphism is when we attribute human characteristics to non-human entities such as animals or machines. Chatbots like ChatGPT, Gemini and Copilot encourage this by imitating human language to converse with us, going beyond familiar words and phrases to adopt human communication patterns. In this way, they are able to hold contextualized, coherent conversations and even display emotions such as humor and empathy.
These systems use language designed to make interaction with them natural, fluid and accessible, which facilitates their use in a wide range of situations, from customer support to education and entertainment.
Until now, language was considered an exclusively human ability. However, advances in the generative artificial intelligence (AI) that powers these chatbots are forcing us to rethink this idea.
One of the most characteristic features of human language is subjectivity, which manifests itself in words and expressions that convey emotional nuances, as well as in the expression of personal opinions. It also includes forming judgments about events and drawing on contextual or cultural elements.
The first person: “I” and “we”
A typical marker of human subjectivity in language is the use of personal pronouns. The first person, “I” or “we,” allows us to express personal thoughts and experiences. The second person, “you,” addresses the other person and builds a relationship between the two participants in the conversation. This is known as intersubjectivity.
Let's look at a relatively simple example:
- User: “I'm organizing my home. How can I decide which items to keep, donate, or throw away?”
- Chatbot: “Great question! Organizing your belongings can be overwhelming, but having a clear strategy will make these decisions easier. Here are some ideas for deciding what to keep, donate, or throw away.”
The chatbot implicitly uses the first person. Although it does not say “I” or “we,” it takes on the role of an advisor or guide, and phrases like “Here are some ideas” present the suggestions as the chatbot's own.
By taking on the role of helper, the chatbot gives the user the feeling of being addressed personally, even though the first person is never explicitly stated. In addition, the phrase “here are” reinforces the chatbot's image as someone offering something useful.
The second person: the power of “you”
“You” (and related forms such as “your”) addresses the user directly. We can see this in several places in the previous example, such as the phrases “organizing your belongings” and “will make these decisions easier.”
By speaking to the user in a personal way, the chatbot makes them feel like an active participant in the conversation. This kind of language often appears in texts that aim to make the other person feel directly involved.
Other phrases, like “Great question!”, not only cast the user's concern in a positive light but also encourage them to engage. Phrases like “Organizing your belongings can be overwhelming” suggest a shared experience and create an illusion of empathy by acknowledging the user's feelings.
Artificial empathy
The chatbot's use of the first person simulates awareness and attempts to create an illusion of empathy. By assuming the role of helper and using the second person, it draws the user in and heightens the perception of closeness. This combination produces a conversation that feels human, practical and well suited to giving advice, even though its empathy is based on an algorithm rather than real understanding.
Getting used to interacting with non-conscious entities that simulate identity and personality can have long-term effects, as these interactions can influence our personal, social and cultural lives. As these technologies improve, it becomes increasingly difficult to distinguish a conversation with a real person from a conversation with an AI system.
This increasingly blurred boundary between the human and the artificial affects our understanding of authenticity, empathy and conscious presence in communication. We may even end up treating AI chatbots as if they were conscious beings, leading to confusion about their true capabilities.
Difficulty relating to other people
Interactions with machines may also change our expectations of human relationships. The more accustomed we become to fast, seamless, conflict-free exchanges, the more frustrated we may become in our relationships with real people.
Human interactions are characterized by emotions, misunderstandings and complexity. In the long run, repeated interaction with chatbots may erode our patience and our ability to handle conflict and accept the natural imperfections of human relationships.
Furthermore, prolonged exposure to simulated human interaction raises ethical and philosophical dilemmas. By attributing human qualities to these entities, such as the ability to feel or to have intentions, we may begin to question the value of conscious life as compared with a perfect simulation. This could open debates about robot rights and the value of human consciousness.
Interacting with non-sentient entities that mimic human identity can change our perception of communication, relationships and identity. Although these technologies can offer greater efficiency, it is important to be aware of their limitations and of their potential impact on the way we interact, both with machines and with one another.