
Three debates facing the AI industry: Intelligence, progress, and safety

The famous saying, “The more we know, the more we don’t know,” rings especially true for AI.

The more we find out about AI, the less we appear to know for certain.

Experts and industry leaders often find themselves at bitter loggerheads about where AI is now and where it’s heading, failing to see eye to eye on seemingly elemental concepts like machine intelligence, consciousness, and safety.

Will machines in the future surpass the intellect of their human creators? Is AI advancement accelerating towards a technological singularity, or are we on the cusp of an AI winter?

And perhaps most crucially, how can we ensure that the development of AI remains safe and beneficial when even the experts can’t agree on what the future holds?

We’re immersed in a fog of uncertainty. The best we can do is explore perspectives and arrive at our own informed yet fluid views in an industry constantly in flux.

Debate one: AI intelligence

With each new generation of generative AI models comes a renewed debate on machine intelligence.

Elon Musk recently fuelled debate on AI intelligence when he said, “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”


Musk was immediately disputed by Meta’s chief AI scientist and eminent AI researcher, Yann LeCun, who said, “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17-year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labeled* training data.”


This exchange captures just a small part of a wide gulf in opinion among AI experts and tech leaders.

It’s a conversation that leads to a never-ending spiral of interpretation with no consensus, as demonstrated by the wildly contrasting views of technologists and AI leaders over the last year or so (info from Improve the News):

  • Geoffrey Hinton: “Digital intelligence” could overtake us within “5 to 20 years.”
  • Yann LeCun: Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI.
  • Demis Hassabis: We may achieve “something like AGI or AGI-like in the next decade.”
  • Gary Marcus: “[W]e will eventually reach AGI… and quite possibly before the end of this century.”
  • Geoffrey Hinton: Current AI like GPT-4 “eclipses a person” in general knowledge and could soon do so in reasoning as well.
  • Geoffrey Hinton: AI is “very close to it now” and will be “much more intelligent than us in the future.”
  • Elon Musk: “We will have, for the first time, something that is smarter than the smartest human.”
  • Elon Musk: “I’d be surprised if we don’t have AGI by [2029].”
  • Sam Altman: “[W]e could get to real AGI in the next decade.”
  • Yoshua Bengio: “Superhuman AIs” will be achieved “between a few years and a couple of decades.”
  • Dario Amodei: “Human-level” AI could occur in “two or three years.”
  • Sam Altman: AI could surpass the “expert skill level” in most fields within a decade.
  • Gary Marcus: “I don’t [think] we’re all that close to machines that are more intelligent than us.”

No party is unequivocally right or wrong in the debate over machine intelligence. It ultimately hinges on one’s subjective interpretation of intelligence and how AI systems measure up against that definition.

Pessimists may point to AI’s potential risks and unintended consequences, emphasizing the need for caution and stringent safety measures. They argue that as AI systems become more autonomous and powerful, they may develop goals and behaviors misaligned with human values, leading to catastrophic outcomes.

Conversely, optimists may focus on AI’s transformative potential, envisioning a future where machines work alongside humans to solve complex problems and drive innovation. They may downplay the risks, arguing that concerns about superintelligent AI are largely hypothetical and that the technology’s benefits far outweigh the potential drawbacks.

The crux of the issue lies in the difficulty of defining and quantifying intelligence, especially when comparing entities as disparate as humans and machines.

For example, a fly has advanced neural circuits and can successfully evade our attempts to swat or catch it, outsmarting us in this narrow domain. These sorts of comparisons are potentially limitless.

Pick your examples of intelligence, and everyone can be right or wrong.

Debate two: is AI accelerating or slowing?

Is AI advancement set to accelerate, or will it plateau and slow down?

Some argue that we’re in the midst of an AI revolution, with breakthroughs happening faster than ever. Others contend that progress has hit a plateau, and the field faces momentous challenges that could slow innovation in the coming years.

Generative AI is the culmination of decades of research and billions in funding. When ChatGPT landed in 2022, the technology had already attained a high level in research environments, setting the bar high and throwing society in at the deep end.

The resulting hype also drummed up immense funding for AI startups, from Anthropic to Inflection and Stability AI to MidJourney.

This, combined with immense internal efforts from Silicon Valley veterans Meta, Google, Amazon, Nvidia, and Microsoft, resulted in a rapid proliferation of AI tools. GPT-3 quickly morphed into heavyweight GPT-4, while competing LLMs like Claude 3 Opus, xAI’s Grok, Mistral’s models, and Meta’s open-source models have also made their mark.

Some experts and technologists, such as Sam Altman, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Elon Musk, feel that AI acceleration has just begun.

Musk said generative AI was like “waking the demon,” while Altman said AI mind control was imminent, which Musk has evidenced with recent advancements in Neuralink, such as one man playing a game of chess through thought alone.

On the other hand, experts such as Gary Marcus and Yann LeCun feel we’re hitting brick walls, with generative AI facing an introspective period or ‘winter.’

This could be exacerbated by practical obstacles, such as rising energy costs, the limitations of brute-force computing, regulation, and material shortages.

We’ve observed how AI is exceptionally expensive, and monetization isn’t straightforward, so tech companies have to find ways to keep up the momentum so money keeps flowing into the industry.

The jury is out on this one.

Debate three: AI safety

Conversations on AI intelligence and progress also have implications for AI safety. If we cannot agree on what constitutes intelligence or how to measure it, how can we ensure that AI systems are designed and deployed in a way that is safe and beneficial to society?

The absence of a shared understanding of intelligence makes it difficult to determine appropriate safety measures and ethical guidelines for AI development.

To underestimate AI intelligence is to underestimate the necessity for AI safety controls and regulation.

Conversely, overestimating or exaggerating AI’s abilities warps perceptions and risks over-regulation. This could silo power in Big Tech, which has proven clout in lobbying and out-maneuvering laws. And when they do slip up, they can pay the fines.

Last year, protracted X debates among Yann LeCun, Geoffrey Hinton, Max Tegmark, Gary Marcus, Elon Musk, and various other prominent figures in the AI community highlighted deep divisions in AI safety. Big Tech has been hard at work self-regulating and creating ‘voluntary guidelines,’ with leaders actively advocating regulation.

Critics suggest that regulation enables Big Tech to bolster market structures, rid themselves of disruptors, and set the terms of play to their liking.

On that side of the debate, experts like LeCun argue that the existential risks of AI have been overstated and are being used as a smokescreen by Big Tech companies to push for regulations that could stifle competition and consolidate their control over the industry.

LeCun and his supporters also point out that AI’s immediate risks, such as misinformation, deep fakes, and bias, are already harming people and require urgent attention.

In one X post addressed to Hinton and Bengio, LeCun wrote: “Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.”


On the other hand, Hinton, Bengio, Hassabis, and Musk have sounded the alarm about the potential existential risks of AI.

Bengio, LeCun, and Hinton, often called the ‘godfathers of AI’ for developing neural networks, deep learning, and other AI techniques throughout the 90s and early 2000s, remain influential today. Hinton and Bengio, whose views generally align, recently sat in a rare meeting between US and Chinese researchers at the International Dialogue on AI Safety in Beijing.

The meeting culminated in a press release: “In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again must coordinate to avert a catastrophe that might arise from unprecedented technology.”

It should be said that Bengio, Hinton, and various others are highly unlikely to be disingenuous. They aren’t financially aligned with Big Tech and have no reason to over-egg AI risks.

Hinton raised this point himself in an X spat with LeCun and ex-Google Brain co-founder Andrew Ng, highlighting that he left Google to speak freely about AI risks.

That doesn’t necessarily add weight to his views, but it would be far-fetched to question the motive of his warnings. Indeed, many great scientists have questioned AI safety over the years, including the late Professor Stephen Hawking, who viewed the technology as an existential risk.

As Hinton posted: “Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that doesn’t fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.”

This swirling mixture of polemic exchanges leaves little space for people to occupy the middle ground, fueling generative AI’s image as a polarizing technology.

AI regulation, meanwhile, has become a geopolitical issue, with the US and China tentatively collaborating on AI safety despite escalating tensions in other areas.

So, just as experts disagree about when and how AI will surpass human capabilities, they also differ in their assessments of the risks and challenges of developing safe and beneficial AI systems.

Debates surrounding AI intelligence aren’t just principled or philosophical in nature; they are also a matter of governance.

When experts vehemently disagree over even the essential elements of AI intelligence and safety, regulation can’t hope to serve people’s interests.

Creating consensus would require tough realizations from experts, AI developers, governments, and society at large.

However, along with many other challenges, steering AI into the future will require some tech leaders and experts to admit they were wrong. And that’s not going to be easy.

