This month, bots with artificial intelligence slipped into Santa's grotto. For one thing, AI-powered gifts are on the rise – as I know first-hand, having just been given an impressive AI voice recorder.
Meanwhile, retailers like Walmart are offering AI tools to give holiday help to stressed-out shoppers. Think of these as the digital equivalent of a personal elf, providing shortcuts for shopping and gifting. And they appear to work quite well, judging by recent reviews.
But here lies the paradox: even as AI enters our lives – and our Christmas stockings – hostility remains sky-high. Earlier this month, for example, a British government survey found that four in ten people expect AI to bring benefits, yet three in ten expect significant harm from breaches of "data security", "the spread of misinformation" and "job displacement".
This is probably no surprise. The risks are real and well advertised. However, as we approach 2025, it is worth reflecting on three often-ignored points about the current anthropology of AI that may help frame this paradox more constructively.
First, we need to rethink the "A" we use in "AI" today. Yes, machine learning systems are "artificial". But bots do not always – or even usually – replace our human brains as a substitute for flesh-and-blood insight. Instead, they typically enable us to act faster and complete tasks more effectively. Shopping is just one example.
So perhaps we should reframe AI as "augmented" or "accelerated" intelligence – or "agentic" intelligence, to use the buzzword for what a recent Nvidia blog calls the "next frontier" of AI. This refers to bots that act as autonomous agents and can perform tasks for humans on their command. It is likely to be a central theme in 2025. Or, as Google explained when it recently unveiled its latest Gemini AI model: the agentic era of AI is here.
Second, we need to think beyond the cultural framework of Silicon Valley. So far, "anglophone actors" have "dominated" the AI debate on the world stage, as academics Stephen Cave and Kanta Dihal note in the introduction to their book. This reflects the technological dominance of the US.
However, other cultures view AI slightly differently. Attitudes in developing countries, for example, tend to be far more positive than in developed ones, as James Manyika, co-chair of a UN advisory body on AI and a senior Google official, recently told Chatham House.
Countries such as Japan are also different. The Japanese public in particular has long shown a far more positive attitude towards robots than its anglophone counterparts, and that is now reflected in attitudes towards AI systems too.
Why is that? One factor is Japan's labour shortage (and the fact that many Japanese are wary of letting immigrants fill the gap, which makes robots easier to accept). Another is popular culture. In the second half of the twentieth century, while Hollywood films were spreading fear of intelligent machines among anglophone audiences, Japanese audiences were captivated by the Astro Boy saga, which showed robots in a friendly light.
Its creator, Osamu Tezuka, attributed this to the influence of the Shinto religion, which, unlike Judeo-Christian traditions, does not draw strict boundaries between animate and inanimate objects. "The Japanese make no distinction between man, the superior creature, and the world around him," he once noted. "We accept robots without any problem, along with the wide world around us, the insects, the stones – it is all one."
And that is reflected in how companies such as Sony and SoftBank design AI products today, notes one of the essays in the book: they try to create "robots with hearts" in a way that American consumers might find unnerving.
Third, this cultural variation shows that our responses to AI need not be set in stone; they can evolve as technology changes and cross-cultural influences spread. Consider facial recognition technologies. In 2017, Ken Anderson, an anthropologist working at Intel, and his colleagues studied Chinese and American consumers' attitudes towards facial recognition tools and found that the former accepted the technology for everyday tasks such as banking, while the latter did not.
This difference apparently reflected American concerns about privacy. But the same year the study was published, Apple introduced facial recognition tools on the iPhone, and US consumers quickly adopted them. Attitudes changed. The key point is that "cultures" are not sealed, static Tupperware boxes; they are more like slow-moving rivers with muddy banks, into which new streams constantly flow.
Whatever else 2025 may bring, one thing that can be predicted is that our attitudes towards AI will continue to shift subtly as the technology becomes more normalized. That may worry some, but it could also help us frame the debate more constructively and focus on ensuring that people control their digital "agents" – rather than the other way around. Investors may be piling into AI these days, but they should ask themselves which "A" they want in that AI tag.