
AI seems like an unstoppable force. But it is not a panacea for businesses or society

In Greek mythology, Prometheus is credited with giving humans the fire and "spark" that fueled civilization. One of the unintended consequences of Prometheus's "gift" was that the need for heavenly gods diminished. Modern humans have experienced all sorts of things with similar unintended consequences, from the use of CFCs creating a hole in the ozone layer to the building of systems that they don't understand or cannot fully control.

In experimenting with artificial intelligence (AI), humans appear to have taken on the role of Prometheus, giving the machines the "fire" that ignited civilization.

Predicting the future is best left to shamans and futurologists. But we can be better informed about the dangers posed by the way AI works, and work out how to avoid the pitfalls.

First, we must recognize that AI holds enormous promise for human society. AI will be omnipresent, from everyday tasks like writing emails to complex settings that require human expertise.

AI, by which we mean large language models (LLMs) that appear to "understand" and produce human language, is a prediction machine. LLMs are trained on large data sets that allow them to make statistical connections between large numbers of variables and predict what comes next.

If you've used Google, you will have experienced a version of this through its prediction prompts. For example, type "how to drive a car" and Google will complete it with "how to drive an automatic car." It is unlikely to complete it with "how to drive a plane." Google determines this by looking at the history of words that come after "how to drive." The larger the data set it is trained on, the more accurate its prediction will be.
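To make the prediction idea concrete, here is a minimal, hypothetical sketch in Python. It is not how Google or an LLM actually works; it simply counts which continuations have followed a prompt most often in a toy query history and suggests the most frequent one.

```python
# Minimal sketch (assumed toy example, not Google's or any LLM's real method):
# predict a continuation by counting what most often followed the prompt.
from collections import Counter

history = [
    "how to drive a car",
    "how to drive an automatic car",
    "how to drive an automatic car",
    "how to drive a van",
]

def predict_next(prompt: str) -> str | None:
    """Return the most frequent continuation of `prompt` in the toy history."""
    continuations = Counter(
        q[len(prompt):].strip()
        for q in history
        if q.startswith(prompt) and len(q) > len(prompt)
    )
    if not continuations:
        return None
    best, _count = continuations.most_common(1)[0]
    return best

print(predict_next("how to drive"))  # -> "an automatic car"
```

The larger and more representative the history, the better such frequency-based guesses become; that is the sense in which bigger training data sets sharpen prediction.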

Variations of this logic are used in all current AI applications. The strength of AI is, of course, that it can process vast amounts of data and extrapolate from it into the future.



But this strength is also its weakness: it makes AI vulnerable to a phenomenon that management scientists call the "trust trap". This is the tendency to assume that because previous decisions have produced positive results, it will be fine to continue in the same way in the future.

Consider an example: the intervals between maintenance of critical aircraft parts. If increasing the intervals has worked well in the past (no failures), the longer intervals may be adopted more broadly and there may be moves to extend them further. However, this can turn out to be a recipe for disaster. Alaska Airlines Flight 261 crashed into the Pacific Ocean, killing all 88 people on board, because of a decision, perhaps influenced by previous successes, to delay the maintenance of a critical part.

AI could amplify this tendency. It can draw attention away from signs that there are problems, while AI-generated analysis feeds into the picture to support decision-making.

Or AI can extrapolate from past results and make decisions without human intervention. Take the example of self-driving cars, which have been involved in more than a dozen cases in which pedestrians were killed. No data set, no matter how large, can provide training for every possible movement of a pedestrian. AI cannot yet compete with human judgment in such situations.

More worryingly, AI can degrade human capabilities to such an extent that the ability to determine when to intervene is lost. Researchers have found that the use of AI leads to this kind of skill decline, a particular problem when workplace decisions have life-or-death consequences.

Self-driving cars cannot yet make decisions as instinctively as the human brain.
Acumen/Shutterstock

Amazon learned the hard way about letting "prediction machines" make decisions when its internal hiring tool discriminated against women because it was trained on a database that spanned a decade and was dominated by male applicants. These are, of course, only the examples we are aware of. As LLMs become more complex and their inner workings become more opaque, we may not even notice when something goes wrong.

Looking back

Since AI reflects the past, it is likely to be limited in its ability to initiate radical innovations. By definition, a radical innovation is a break with the past.

Consider the context of photography. Innovative photographers were able to change the way the business was conducted. The history of photojournalism is an example of how something that began as a means of illustrating the news gradually acquired the power of storytelling and was elevated to the status of an art form.

Likewise, fashion designers such as Coco Chanel modernized women's clothing, freeing women from uncomfortable long skirts and corsets that had lost their relevance in the post-war world.

The founder of the sportswear manufacturer Under Armour, former college football player Kevin Plank, used the discomfort caused by sweaty cotton undershirts as an opportunity to develop clothing made from microfibers that wick moisture away from the body. AI can improve on these innovations. But because of the way it works in its current form, it is unlikely to be the source of such novelty.

Simply put, AI is unable to see or show us the world in a new way, a shortcoming we call the "AI Chris Rock problem", inspired by the comedian's joke about making bullets prohibitively expensive. By proposing a way of curbing violence based on "bullet control" instead of gun control, Rock drew laughs by tapping into the cultural zeitgeist and presenting an innovative solution. He also made clear the absurdity of the situation, something that requires human perception.

AI shows its shortcomings when what has worked in the past loses its relevance or problem-solving power. AI's past success means it will become more widespread, but this itself represents a trust trap that people should avoid.

Prometheus was eventually saved by Hercules. There is no such god waiting in the wings for humanity. This means that more, not less, responsibility lies on our shoulders. That includes ensuring that our elected representatives provide regulatory oversight of AI. After all, we cannot allow the technocrats to play with fire at our expense.
