
The hype about generative AI is coming to an end – and now the technology could actually be useful

Less than two years ago, the launch of ChatGPT sparked a wave of hype around generative AI. Some said the technology would usher in a fourth industrial revolution and completely change the world as we know it.

In March 2023, Goldman Sachs predicted 300 million jobs could be lost or degraded by AI. It seemed as if an enormous change was underway.

Eighteen months later, generative AI is not transforming business. Many projects using the technology are being abandoned, such as McDonald's attempt to automate drive-through orders, which went viral on TikTok after producing comical failures. Government efforts to build systems that summarize public submissions and calculate welfare benefits have met the same fate.

So what happened?

The AI hype cycle

Like many new technologies, generative AI is following a path known as the "Gartner hype cycle," first described by the American technology research firm Gartner.

This widely used model describes a recurring process through which the initial success of a technology results in inflated public expectations that ultimately fail to materialize. The initial “peak of inflated expectations” is followed by a “trough of disappointment,” followed by a “slope of enlightenment,” which finally reaches a “plateau of productivity.”


Image: The Conversation

A Gartner report published in June found that most generative AI technologies are either at the peak of inflated expectations or still climbing toward it. The report argued that most of these technologies are two to five years away from becoming fully productive.

Many convincing prototypes of generative AI products have been developed, but putting them into practice has been less successful. A study published last week by the American think tank RAND found that 80% of AI projects fail – twice the failure rate of projects that don't involve AI.

Shortcomings of current generative AI technology

The RAND report lists many difficulties with generative AI, ranging from high investment requirements in data and AI infrastructure to a shortage of the necessary human talent. However, the peculiar nature of generative AI's limitations poses a critical challenge.

For example, generative AI systems can pass some highly complex university entrance exams yet fail at quite simple tasks. This makes it very difficult to evaluate the potential of these technologies, leading to false confidence.

After all, if an AI can solve complex differential equations or write an essay, surely it should be able to take simple drive-through orders, right?

A recent study showed that the abilities of large language models such as GPT-4 don't always match people's expectations. In particular, more capable models performed significantly worse in high-stakes cases, where incorrect answers could be catastrophic.

These results suggest that such models can instill false confidence in their users. Because they answer questions fluently, people may draw overly optimistic conclusions about their abilities and deploy the models in situations they are not suited for.

Experience from successful projects shows that it's difficult to get a generative model to follow instructions. For example, Khan Academy's Khanmigo tutoring system often revealed the correct answers to questions, even though it had been instructed not to do so.

Why is the hype about generative AI not over yet?

There are several reasons for this.

First, despite its challenges, generative AI technology is improving rapidly, with increasing scale being the main driver of this improvement.

Studies show that the scale of language models (their number of parameters), as well as the amount of data and computational power used for training, all contribute to improved model performance. In contrast, the architecture of the neural network driving the model appears to have minimal impact.
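These scaling studies typically describe performance as a power law: a model's error (loss) falls smoothly as parameter count grows. The sketch below illustrates the general shape of such a relationship; the constants used are hypothetical round numbers chosen for illustration, not values from any published fit.

```python
def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Illustrative power-law scaling of loss with model size: L(N) = (N_c / N) ** alpha.

    n_c and alpha are stand-in constants for illustration only.
    """
    return (n_c / n_params) ** alpha

# Loss decreases smoothly (but with diminishing returns) as models grow.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} parameters -> loss {scaling_loss(n):.3f}")
```

The key qualitative point matches the article: each tenfold increase in scale buys a predictable, gradually shrinking improvement, which is why companies keep building ever-larger models.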

Large language models also display so-called emergent abilities – unexpected capabilities in tasks for which they have not been trained. Researchers have reported that new capabilities appear when models reach a certain critical "breakthrough" size.

Studies have shown that sufficiently complex large language models can develop the ability to reason by analogy and even reproduce optical illusions much as people experience them. The exact causes of these observations are disputed, but there is no doubt that large language models are becoming more sophisticated.

As a result, AI companies continue to work on larger and more expensive models, and technology companies such as Microsoft and Apple are banking on returns from their existing investments in generative AI. According to one recent estimate, generative AI will need to generate annual revenues of US$600 billion to justify current investments – a figure that could grow to US$1 trillion in the coming years.

The biggest winner of the generative AI boom so far is Nvidia, the largest maker of the chips powering the generative AI arms race. A proverbial shovel seller in a gold rush, Nvidia recently became the most valuable publicly listed company in history, tripling its share price in a single year to reach a valuation of US$3 trillion in June.

What happens next?

As the hype around AI slowly fades and we pass through a phase of disillusionment, more realistic strategies for adopting AI are emerging.

First, AI is being used to support humans rather than replace them. A recent survey of American companies found that they mainly use AI to improve efficiency (49%), reduce labor costs (47%), and enhance product quality (58%).

Second, we are also seeing the rise of smaller (and cheaper) generative AI models, trained on specific data and deployed locally to reduce costs and optimize efficiency. Even OpenAI, which is leading the race to build ever-larger models, has released the GPT-4o mini model to reduce costs and improve performance.

Third, we see a strong focus on AI literacy training: educating the workforce about how AI works, its potential and limitations, and best practices for ethical AI use. We will likely have to learn (and relearn) how to use various AI technologies for years to come.

Ultimately, the AI revolution will look more like an evolution. Its use will grow over time, gradually changing and transforming human activities. And that is far better than replacing them.
