In 2014, British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – could one day take over the world and destroy humanity.
A decade later, OpenAI boss Sam Altman says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI co-founder Ilya Sutskever set up a team within the company focused on “safe superintelligence”, but he and his team have since raised a billion dollars to pursue this goal in a startup of their own.
What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But pinning down what that might mean in practice can get a little tricky.
Different kinds of AI
In my view, the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.
Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso, and superhuman. It also makes an important distinction between narrow systems, which can carry out a small range of tasks, and more general systems.
A narrow, no-AI system is something like a calculator. It carries out various mathematical tasks according to a set of explicitly programmed rules.
There are already plenty of very successful narrow AI systems. As an example of a narrow AI system at the virtuoso level, Morris cites the Deep Blue chess program that famously defeated world champion Garry Kasparov in 1997.
Some narrow systems even have superhuman capabilities. One example is AlphaFold, which uses machine learning to predict the structure of protein molecules, and whose creators won the Nobel Prize in Chemistry this year.
What about general systems? This is software that can handle a much wider range of tasks, including things like learning new skills.
A general no-AI system might be something like Amazon’s Mechanical Turk: it can do a wide range of things, but it does them by asking real people.
Overall, general AI systems are far less advanced than their narrow cousins. According to Morris, the cutting-edge language models behind chatbots such as ChatGPT are general AI – but so far they are at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”), and not yet “competent” (as good as 50% of skilled adults).
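To make the framework a little more concrete, here is a minimal sketch in Python that arranges the examples mentioned above along Morris’s two axes of generality and performance level. The names and layout are purely illustrative, not code from the Morris paper.

```python
from enum import IntEnum

# Morris et al.'s six performance levels, paraphrased from the article.
class Level(IntEnum):
    NO_AI = 0
    EMERGING = 1
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

# Each example system is classified by (generality, performance level).
# "narrow" = small range of tasks, "general" = wide range of tasks.
examples = {
    "pocket calculator":       ("narrow",  Level.NO_AI),
    "Deep Blue (chess, 1997)":  ("narrow",  Level.VIRTUOSO),
    "AlphaFold":                ("narrow",  Level.SUPERHUMAN),
    "Amazon Mechanical Turk":   ("general", Level.NO_AI),
    "ChatGPT-style chatbots":   ("general", Level.EMERGING),
}

# General superintelligence would sit in the (general, SUPERHUMAN) cell,
# which no current system occupies.
for name, (scope, level) in examples.items():
    print(f"{name:25s} {scope:8s} {level.name}")
```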
By this reckoning, we are still a long way from general superintelligence.
How intelligent is AI currently?
As Morris points out, precisely determining where a given system sits depends on having reliable tests or benchmarks.
Depending on our benchmarks, an image-generating system such as DALL-E might be at the virtuoso level (because it can produce images that 99% of humans could not draw or paint), or it might be emerging (because it makes mistakes no human would make, such as mutant hands and impossible objects).
There is even significant debate about the capabilities of current systems. One notable 2023 paper argued that GPT-4 showed “sparks of artificial general intelligence”.
OpenAI says its latest language model, o1, can “perform complex reasoning” and “rivals the performance of human experts” on many benchmarks.
However, a recent paper from Apple researchers found that o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments show that the outputs of these models seem to resemble sophisticated pattern-matching rather than true advanced reasoning. This suggests superintelligence is not as imminent as many have supposed.
Is AI getting smarter?
Some people believe the rapid pace of AI progress over the past few years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn’t seem impossible.
If this happens, we may indeed see general superintelligence within the “few thousand days” suggested by Sam Altman (in less sci-fi terms, a decade or so). Sutskever and his team mentioned a similar timeframe in their superalignment article.
Many recent successes in AI have come from applying a technique called “deep learning”, which, in simplistic terms, finds associative patterns in vast collections of data. Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and also the “Godfather of AI” Geoffrey Hinton, for their invention of Hopfield networks and the Boltzmann machine, which are the foundation of many powerful deep learning models used today.
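For a flavour of what Hopfield’s contribution looks like, here is a minimal sketch of a classical Hopfield network, using only NumPy: patterns are stored with a Hebbian rule and recalled by repeatedly applying a sign update until a corrupted input settles back into a stored memory. It is an illustration of the idea, not the architecture behind today’s chatbots.

```python
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Store +/-1 patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iteratively update all units; the state settles into a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties consistently
    return s

# Store two 8-unit patterns, then recover one from a corrupted version.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:2] *= -1            # flip two units to corrupt the memory
print(recall(W, noisy))    # recovers patterns[0]
```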
General systems such as ChatGPT have relied on data generated by humans, much of it in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data on which they are trained.
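As a rough illustration of why scale matters, researchers fit empirical “scaling laws” that predict a model’s training loss from its parameter count and data volume. The sketch below uses the power-law form popularised by the “Chinchilla” work of Hoffmann and colleagues, with constants chosen to be close to their published fits; the specific numbers are illustrative assumptions, not figures from this article.

```python
# Chinchilla-style scaling law: predicted loss falls as a power law in
# parameter count N and training tokens D. Constants are illustrative,
# roughly matching the fits reported by Hoffmann et al. (2022).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger models trained on more data are predicted to reach lower loss.
for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> loss ~ {predicted_loss(n, d):.2f}")
```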
However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve the transfer of skills between domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.
One recent paper has suggested that an essential feature of superintelligence would be open-endedness, at least from a human perspective. It would need to be able to continuously generate outputs that a human observer could regard as novel and be able to learn from.
Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. The paper also highlights that novelty or learnability alone is not enough: a new kind of open-ended foundation model would be needed to achieve superintelligence.
What are the risks?
What does all this mean for the risks of AI? In the short term, at least, we don’t need to worry about superintelligent AI taking over the world.
But that’s not to say AI doesn’t present risks. Here too, Morris and her colleagues have thought it through: as AI systems gain greater capability, they may also gain greater autonomy. Different levels of capability and autonomy present different risks.
For example, when AI systems have little autonomy and people use them as a kind of consultant – when we ask ChatGPT to summarise documents, say, or let the YouTube algorithm shape our viewing habits – we may face a risk of trusting or relying on them too much.
Meanwhile, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and society-wide boredom.
What's next?
Suppose that one day we have superintelligent, fully autonomous AI agents. Will we then face the risk of them concentrating power or acting against human interests?
Not necessarily. Autonomy and control can go hand in hand. A system can be highly automated yet still provide a high degree of human control.
Like many in the AI research community, I believe this can be done. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.