
Friday essay: some tech leaders think AI could outsmart us and wipe out humanity. I’m a professor of AI – and I’m not worried

In 1989, political scientist Francis Fukuyama predicted we were approaching the end of history. He meant that similar liberal democratic values were taking hold in societies around the globe. How wrong could he have been? Democracy today is clearly in decline. Despots and autocrats are on the rise.

You might, however, be thinking Fukuyama was right all along. But in a different way. Perhaps we really are approaching the end of history. As in, game over humanity.

Now there are various ways it could all end. A worldwide pandemic. A large meteor (something perhaps the dinosaurs would appreciate). Climate catastrophe. But one end that’s increasingly talked about is artificial intelligence (AI). This is one of those potential disasters that, like climate change, appears to have slowly crept up on us but, many people now fear, might soon take us down.

In 2022, wunderkind Sam Altman, chief executive of OpenAI – one of the fastest-growing companies in the history of capitalism – explained the pros and cons:

I think the good case (around AI) is just so unbelievably good that you sound like a really crazy person to start talking about it. The bad case – and I think this is important to say – is, like, lights out for all of us.

Sam Altman speaking earlier this year.
Franck Robichon/AAP

In December 2024, Geoff Hinton, who is often called the “godfather of AI” and who had just won the Nobel Prize in Physics, estimated there was a “10% to 20%” chance AI could lead to human extinction within the next 30 years. Those are pretty serious odds from someone who knows a lot about artificial intelligence.

Altman and Hinton aren’t the first to worry about what happens when AI becomes smarter than us. Take Alan Turing, who many consider to be the founder of the field of artificial intelligence. Time magazine ranked Turing as one of the 100 Most Influential People of the 20th century. In my view, this is selling him short. Turing is up there with Newton and Darwin – one of the greatest minds not of the last century, but of the last thousand years.

Alan Turing in 1951.
Wikimedia Commons, CC BY

In 1950, Turing wrote what is generally considered to be the first scientific paper about AI. Just one year later, he made a prediction that haunts AI researchers like myself today.

Once machines could learn from experience like humans, Turing predicted, it “would not take long to outstrip our feeble powers (…) At some stage therefore we should have to expect the machines to take control.”

When interviewed by LIFE magazine in 1970, another of the field’s founders, Marvin Minsky, predicted:

Man’s limited mind may not be able to control such immense mentalities (…) Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.

So how could machines come to take control? How worried should we be? And what can we do to stop this?

Irving Good, a mathematician who worked alongside Turing at Bletchley Park during World War II, predicted how. Good called it the “intelligence explosion”. This is the point where machines become smart enough to start improving themselves.

This is now more popularly called the “singularity”. Good predicted the singularity would create a super intelligent machine. Somewhat ominously, he suggested this would be “the last invention that man need ever make”.

When might AI outsmart us?

When exactly machine intelligence might surpass human intelligence is very uncertain. But, given recent progress in large language models like ChatGPT, many people are concerned it could be very soon. And to add salt to the wound, we may even be hastening this process.

What surprises me most about the development of AI today is the speed and scale of change. Nearly US$1 billion is being invested in artificial intelligence every day by companies like Google, Microsoft, Meta and Amazon. That’s around one quarter of the world’s total research and development (R&D) budget.

We’ve never made such massive bets before on a single technology. As a consequence, many people’s timelines for when machines match, and soon after exceed, human intelligence are shrinking rapidly.

Elon Musk has predicted that machines will outsmart us by 2025 or 2026. Dario Amodei, CEO of OpenAI competitor Anthropic, suggested that “we’ll get there in 2026 or 2027”. Shane Legg, the co-founder of Google’s DeepMind, predicted 2028; while Nvidia CEO Jensen Huang put the date at 2029. These predictions are all very close for such a portentous event.

Cover of 2062.

Published by Black Inc

Of course, there are also dissenting voices. Yann LeCun, Meta’s chief scientist, has argued it’ll take “years, if not decades”. Another AI colleague of mine, professor emeritus Gary Marcus, has predicted it’ll be “maybe 10 or 100 years from now”. And, to put my cards on the table, back in 2018 I wrote a book titled 2062. This predicted what the world might look like in 40 or so years’ time, when artificial intelligence first exceeded human intelligence.

The scenarios

Once computers match our intelligence, it would be conceited to think they wouldn’t surpass it. After all, human intelligence is just an evolutionary accident. We’ve often engineered systems to be better than nature. Planes, for instance, fly further, higher and faster than birds. And there are many reasons electronic intelligence could be better than biological intelligence.

Computers are, for instance, much faster at many calculations. Computers have vast memories. Computers always remember. And in narrow domains, like playing chess, reading x-rays, or folding proteins, computers already surpass humans.

So how exactly would a super-intelligent computer take us down? Here, the arguments begin to become rather vague. Hinton told the New York Times:

If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing.

British-Canadian scientist Geoffrey Hinton, co-winner of the 2024 Nobel Prize in Physics, speaking in Stockholm.
Pontus Lundahl/AAP

There are counterexamples to Hinton’s argument. Babies control parents but are not smarter. Similarly, US presidents are not smarter than all US citizens. But in broad terms, Hinton has a point. We should, for instance, remember it was intelligence that put us in charge of the planet. And the apes and ants are now very dependent on our goodwill for their continued existence.

In a frustratingly catch-22 way, those fearful of artificial super intelligence often argue we cannot know precisely how it threatens our existence. How could we predict the plans of something so much more intelligent than us? It’s like asking a dog to imagine the Armageddon of a thermonuclear war.

A number of scenarios have been put forward.

An AI system could autonomously discover vulnerabilities in critical infrastructure, such as power grids or financial systems. It could then attack these weaknesses, destroying the fabric holding society together.

An electric power grid.
Could an AI system attack vulnerabilities in critical infrastructure, such as power grids?
George Trumpeter/Shutterstock

Alternatively, an AI system could design new pathogens that are so lethal and transmissible that the resulting pandemic wipes us out. After COVID-19, this is perhaps a scenario to which many of us can relate.

Other scenarios are far more fantastical. AI doomster Eliezer Yudkowsky has proposed one such scenario. This involves the creation by AI of self-replicating nanomachines that infiltrate the human bloodstream. These microscopic bacteria are composed of diamond-like structures, and can replicate using solar energy and disperse through atmospheric currents. He imagines they would enter human bodies undetected and, upon receiving a synchronised signal, release lethal toxins, causing every host to die.

These scenarios require giving AI systems agency – an ability to act in the world. It is especially troubling that this is precisely what companies like OpenAI are now doing. AI agents that can answer your emails or help onboard a new employee are this year’s most fashionable product offering.

Giving AI agency over our critical infrastructure would be very irresponsible. Indeed, we have already put safeguards into our systems to prevent malevolent actors from hacking into critical infrastructure. The Australian government, for example, requires operators of critical infrastructure to “identify, and as far as is reasonably practicable, take steps to minimise or eliminate the ‘material risks’ that could have a ‘relevant impact’ on their assets”.

Similarly, giving AI the ability to synthesise (potentially harmful) DNA would be highly irresponsible. But again, we have already put safeguards in place to prevent bad (human) actors from mail-ordering harmful DNA. Artificial intelligence doesn’t change this. We don’t want bad actors, human or artificial, to have such agency.

A researcher holds a vial containing DNA
Giving AI the ability to synthesise potentially harmful DNA would be highly irresponsible.
Cryptographer/AAP

The European Union leads the way in regulating AI right now. The recent AI Action Summit in Paris highlighted the growing divide between those keen to see more regulation, and those, like the US, wanting to accelerate the deployment of AI. The financial and geopolitical incentives to win the “AI race”, and to ignore such risks, are worrying.

The benefits of super intelligence

Putting agency aside, super intelligence doesn’t greatly concern me for a number of reasons. Firstly, intelligence brings wisdom and humility. The smartest person is the one who knows how little they know.

Secondly, we already have super intelligence on our planet. And this hasn’t caused the end of human affairs, quite the opposite. No one person knows how to build a nuclear power station. But collectively, people have this knowledge. Our collective intelligence far outstrips our individual intelligence.

Thirdly, competition keeps this collective intelligence in check. There is healthy competition between the collective intelligence of companies like Apple and Samsung. And this is a good thing.

Of course, competition alone isn’t enough. Governments still need to step in and regulate to prevent bad outcomes such as rent-seeking monopolies. Markets need rules to operate well. But here again, competition between politicians and between ideas ultimately leads to good outcomes. We will certainly need to worry about regulating AI. Just as we have regulated automobiles and mobile phones and super-intelligent corporations.

We have already seen the European Union step up. The EU AI Act, which came into force at the start of 2025, regulates high-risk uses of AI in areas such as facial recognition, social credit scoring and subliminal advertising. The EU AI Act will likely prove viral, just as many countries followed the EU’s privacy lead with the introduction of the General Data Protection Regulation.

I believe, therefore, you needn’t worry too much just because smart people – even those with Nobel Prizes, like Geoff Hinton – are warning of the risks of artificial intelligence. Intelligent people, unsurprisingly, assign a little too much importance to intelligence.

AI certainly comes with risks, but they’re not new risks. We’ve adjusted our governance and institutions to adapt to new technological risks in the past. I see no reason why we can’t do it again with AI.

In fact, I welcome the coming arrival of smarter artificial intelligence. This is because I expect it will lead to a greater appreciation, perhaps even an enhancement, of our own humanity.

Intelligent machines might make us better humans, by making human relationships even more valuable. Even if we can, in the future, program machines with greater emotional and social intelligence, I doubt we’ll empathise with them as we do with humans. A machine won’t fall in love, mourn a dead friend, bang its funny bone, smell a beautiful scent, laugh out loud, or be brought to tears by a sad movie. These are uniquely human experiences. And since machines don’t share these experiences, we’ll never relate to them as we do to each other.

A sad man and woman watching a movie.
A machine won’t be brought to tears by a sad movie.
Pressmaster/Shutterstock

Machines will lower the cost of creating many of life’s necessities, so the cost of living will plummet. However, those things still made by human hand will necessarily be rarer and reassuringly expensive. We see this today. There is an ever greater appreciation of the handmade, the artisanal and the artistic.

Intelligent machines could enhance us by being more intelligent than we could ever be. AI can, for instance, surpass human intelligence by finding insights in data sets too large for humans to understand, or by crunching more numbers than a human could in a lifetime of calculation. The newest antibiotic was found not by human ingenuity, but by machine learning. We can look forward, then, to a future where science and technology are supercharged by artificial intelligence.

And intelligent machines could enhance us by giving us a greater appreciation for human values. The goal of trying (and in many cases, failing) to program machines with ethical values may lead us to a better understanding of our own human values. It will force us to answer, very precisely, questions we have often dodged in the past. How do we value different lives? What does it mean to be fair and just? In what kind of society do we want to live?

I hope our future will soon be one with godlike artificial intelligence. These machines will, like the gods, be immortal, infallible, omniscient and – I suspect – all too incomprehensible. But our future is the opposite: ever fallible and mortal. Let us, therefore, embrace what makes us human. It is all we ever had, and all we will ever have.

