
In a new manifesto, OpenAI's Sam Altman outlines an AI utopia – and reveals glaring blind spots

Many of us are probably familiar with the hype surrounding artificial intelligence. AI will make artists superfluous! AI can run laboratory experiments! AI will put an end to grief!

Even by those standards, the recent statement from OpenAI CEO Sam Altman, posted on his personal website this week, seems remarkably over the top. We are on the verge of “The Age of Intelligence”, he explains, driven by a “superintelligence” that may be only “a few thousand days” away. The new era will bring “astounding triumphs”, including “fixing the climate, establishing a space colony, and discovering all of physics”.

Altman and his company – which is trying to raise billions from investors and pitch unprecedentedly large data centers to the US government, while losing key personnel and abandoning its charitable roots to give Altman an ownership stake – stand to profit greatly from the hype.

But even if you set these motivations aside, it's worth examining some of the assumptions underlying Altman's predictions. Examined more closely, they reveal a great deal about the worldview of AI's biggest proponents – and the blind spots in their thinking.

Steam engines to think with?

Altman bases his grand predictions on a two-paragraph history of humanity:

Human capabilities have increased dramatically over time; we can now achieve things that our predecessors would have thought inconceivable.

This is a story of unbridled progress moving in a single direction, driven by human intelligence. The cumulative discoveries and inventions of science and technology, Altman says, have led us to the computer chip – and inexorably to the artificial intelligence that will carry us the rest of the way into the future. This view owes much to the futurist visions of the singularitarian movement.

Such a story is seductively simple. If human intelligence has propelled us to ever greater heights, it is hard not to conclude that better, faster artificial intelligence will take progress even further and higher.

This is an old dream. In the 1820s, when Charles Babbage saw steam engines revolutionizing human physical labor in England's Industrial Revolution, he began to imagine building similar machines to automate mental labor. Babbage's “Analytical Engine” was never built, but the idea that humanity's ultimate achievement would be the mechanization of thought itself has endured.

According to Altman, we have now (almost) reached the summit.

Deep learning worked – but for what?

The reason we are so close to this glorious future is simple, says Altman: “deep learning worked.”

Deep learning is a particular kind of machine learning that uses artificial neural networks loosely inspired by biological nervous systems. In some areas it has been surprisingly successful: deep learning is behind models that can string words together in a more or less coherent way, produce striking images and videos, and even solve some scientific problems.
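To make that concrete, here is a minimal sketch in Python (purely illustrative, with made-up layer sizes and random weights, and no relation to any of OpenAI's systems) of what such a network amounts to: layers of simple units, each computing a weighted sum of its inputs and passing the result through a nonlinearity, stacked one on top of another (the “deep” in deep learning).

```python
# A toy artificial neural network: weighted sums plus nonlinearities,
# loosely echoing how biological neurons aggregate incoming signals.
# Layer sizes and random weights are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer: weighted sum of inputs plus bias, then a ReLU nonlinearity.
    return np.maximum(0.0, inputs @ weights + biases)

# A small three-layer network mapping 4 input features to 1 output value.
w1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))               # one example with 4 features
hidden = layer(layer(x, w1, b1), w2, b2)  # two hidden layers
output = hidden @ w3 + b3                 # final layer left linear
print(output)                             # an (untrained) prediction
```

Real deep learning models follow the same pattern, just with billions of weights and with training procedures that adjust those weights to fit data.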

So the contributions of deep learning are not trivial. They are likely to have significant social and economic impacts (both positive and negative).

But deep learning only “works” for a limited range of problems. Altman knows this:

Humanity has discovered an algorithm that can truly learn any data distribution (or, more precisely, the underlying “rules” that produce any data distribution).

That is what deep learning does – that is how it “works.” It is important, and it is a technique that can be applied across a wide range of fields, but it is far from the only kind of problem there is.

Not every problem can be reduced to pattern matching. Nor do all problems offer the vast amounts of data that deep learning needs to do its work. And it is not how human intelligence works.
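As a deliberately toy illustration of what “learning the underlying rules that produce a data distribution” means in practice, the sketch below trains a one-hidden-layer network with ordinary gradient descent to recover an assumed rule, y = sin(x), from a few thousand noisy samples. The rule, the network size and the training settings are all invented for this example; the point is the precondition rather than the feat: the approach only gets traction because plentiful examples of a stable pattern are available to match against.

```python
# A toy sketch of “learning the rules behind a data distribution”:
# a tiny network recovers y = sin(x) purely from noisy samples.
import numpy as np

rng = np.random.default_rng(1)

# Thousands of (x, y) pairs generated by a hidden “rule” plus noise.
x = rng.uniform(-np.pi, np.pi, size=(5000, 1))
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)

# One hidden layer of 32 tanh units, trained by plain gradient descent.
w1 = rng.normal(scale=0.5, size=(1, 32))
b1 = rng.normal(scale=0.5, size=(1, 32))
w2 = rng.normal(scale=0.5, size=(32, 1))
b2 = np.zeros((1, 1))
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ w1 + b1)              # forward pass
    pred = h @ w2 + b2
    err = pred - y                        # gradient of mean squared error
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0, keepdims=True)
    dh = (err @ w2.T) * (1 - h ** 2)      # backpropagate through tanh
    grad_w1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0, keepdims=True)
    w1 -= lr * grad_w1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

# The network approximates a rule it was never told explicitly, but only
# because it saw thousands of examples of a stable, learnable pattern.
mse = float(np.mean((np.tanh(x @ w1 + b1) @ w2 + b2 - y) ** 2))
print(f"final training error (MSE): {mse:.4f}")
```

Problems that do not arrive as abundant examples of a stable pattern cannot be handed to this kind of machinery in any straightforward way.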

An enormous hammer searching for nails

What is interesting here is the fact that Altman believes “rules from data” will make a significant contribution to solving all of humanity's problems.

There is a saying that a person holding a hammer is likely to see everything as a nail. Altman is now holding a very big and very expensive hammer.

Deep learning may “work”, but partly because Altman and others are beginning to imagine (and build) a world made up of data distributions. The danger here is that AI will narrow, rather than expand, the kinds of problem-solving we do.

What is barely visible in Altman's praise of AI is the growing quantity of resources required to make deep learning “work” at all. We can acknowledge the great achievements and notable successes of modern medicine, transportation, and communications (to name a few) without pretending they have come at no significant cost.

Those costs have been borne both by humans – for many of whom the gains of the global North have meant diminishing returns – and by the animals, plants and ecosystems ruthlessly consumed and destroyed by the exploitative power of capitalism and technology.

While Altman and his fellow boosters might dismiss such views as nitpicking, the question of cost goes to the heart of predictions and concerns about the future of artificial intelligence.

Altman is well aware that AI is running up against limits. He notes: “There are still many details we have to work out.” One of them is the rapidly increasing energy cost of training AI models.

Microsoft recently announced a $30 billion fund to build AI data centers and the generators to power them. The veteran tech giant, which has invested more than $10 billion in OpenAI, has also struck a deal with the owners of the Three Mile Island nuclear power plant (infamous for its 1979 meltdown) to supply power for AI. The frantic spending suggests there is a whiff of desperation in the air.

Magic or just magical thinking?

Even if we accept Altman's rosy view of humanity's progress so far, the magnitude of these challenges may force us to admit that the past is no reliable guide to the future. Resources are finite. Limits have been reached. Exponential growth can come to an end.

The most telling thing about Altman's piece is not its premature predictions, but rather its unbridled optimism about science and progress.

So it is hard to imagine Altman or OpenAI taking the technology's “downsides” seriously. When there is so much to gain, why worry about a few small problems? When AI is so close to triumph, why stop and think?

What is emerging around artificial intelligence is less an “age of intelligence” than an “age of inflation” – with growing resource consumption, rising company valuations and, above all, ever-expanding promises about what artificial intelligence can deliver.

It is certainly true that some of us today do things that would have seemed like magic a century and a half ago. But that does not mean all the changes between then and now have been for the better.

AI has remarkable potential in many areas, but to assume it holds the key to solving all of humanity's problems is also magical thinking.
