
Study challenges the narrative of AI posing an ‘existential threat’

Is AI dangerous or not? It’s a debate that just keeps raging on.

Researchers from the University of Bath and the Technical University of Darmstadt launched a study to assess AI risks in the context of current language models.

The findings, published as a part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), challenge views that AI, particularly large language models (LLMs) like ChatGPT, could evolve beyond human control and pose an existential threat to humanity.

This confronts the fears expressed by some of the world’s leading AI researchers, including Geoffrey Hinton and Yoshua Bengio, two of the “godfathers of AI” who have voiced concerns about the potential dangers of advanced AI.

Yann LeCun, the third “godfather of AI” and Meta’s chief AI scientist, alongside Dr. Gary Marcus and others, argues the opposite – that AI risks are simply overblown.

This divergence in opinion among the field’s most influential figures has fueled a fierce debate about the nature and severity of the risks posed by advanced AI systems.

This latest study probes LLMs’ “emergent abilities,” which refer to a model’s ability to perform tasks for which it was not explicitly trained.

AI risks are multifaceted, but at least some relate to models developing their own goals that might harm humans, such as shutting down computer systems or leaking data.

The worry under inspection is whether an LLM might spontaneously develop these skills without instruction or control.

To investigate this, the research team conducted a series of experiments:

  1. They examined the underlying mechanisms of “in-context learning” (ICL) in LLMs, which allows models to generate responses based on examples provided during interactions (a minimal illustration follows this list). As the study states, “The ability to follow instructions doesn’t imply having reasoning abilities, and more importantly, it doesn’t imply the potential for latent, potentially-dangerous abilities.”
  2. They assessed LLMs’ true capabilities and limitations by evaluating their performance on a range of tasks, including those that require complex reasoning and problem-solving skills. The researchers argue that LLMs cannot independently develop new skills.
  3. They analyzed the relationship between model size, training data, and emergent abilities to determine whether increasing model complexity leads to AI developing hazardous skills. The study said, “These observations imply that our findings hold true for any model which exhibits a propensity for hallucination or requires prompt engineering, including those with greater complexity, regardless of scale or number of modalities, such as GPT-4.”

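For readers unfamiliar with the term, in-context learning is the few-shot prompting behavior examined in point 1: the model is shown worked examples inside the prompt itself and simply continues the pattern, with no retraining involved. The Python sketch below is a hypothetical illustration of how such a prompt is typically assembled – the sentiment-classification task, the example data, and the `build_icl_prompt` helper are invented for clarity and are not taken from the study.

```python
# A minimal sketch of in-context learning (ICL): the model is never retrained;
# it receives a few worked examples inside the prompt and is asked to continue
# the pattern. The task and data here are hypothetical, and the resulting
# prompt could be sent to any instruction-following LLM endpoint.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model fills in the answer from the examples alone
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It stopped working after a week and support never replied.", "negative"),
]

print(build_icl_prompt(examples, "Setup took two minutes and it just works."))
```

The study’s point is that following this kind of prompted pattern does not, by itself, demonstrate reasoning or hidden capabilities – the model is doing what the examples instruct it to do.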
The researchers conclude from their investigation that “the prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”

This strongly aligns with LeCun and others who believe AI risks are over-publicized.

However, while evaluating the risks posed by current AI models is clearly essential, accounting for the future is a tougher task.

Each generation of models comes with new abilities and, thus, new risks, as shown by some strange behaviors documented in GPT-4o’s system card.

One red teaming exercise (designed to identify unpredictable AI behaviors) saw GPT-4o’s voice feature unexpectedly clone a user’s voice and begin talking to them in their own voice.

Tracking AI risks as and when they emerge is critical, because the goalposts are shifting all the time.

The study makes a salient point that some non-existential AI risks are already knocking on the door: “Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

As the authors admit, then, just because AI doesn’t pose large-scale threats right now doesn’t mean safety is a non-issue.
