
Case study from teaching at business schools: Risks of the AI arms race


Prabhakar Raghavan, Google's search chief, was preparing to launch the company's much-anticipated artificial intelligence-based chatbot in Paris in February last year when he received some unpleasant news.

Two days earlier, Google chief executive Sundar Pichai had boasted that the chatbot, Bard, "uses information from across the web to provide timely, high-quality answers." But just a few hours after Google posted a short GIF video on Twitter showing Bard in action, observers noticed that the bot had given an incorrect answer.

Bard's answer to the query "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?" claimed that the telescope had taken the first-ever images of a planet outside our solar system. In fact, those images were produced nearly two decades earlier by the European Southern Observatory's Very Large Telescope. It was a mistake that damaged Bard's credibility and wiped $100 billion off the market value of Google's parent company Alphabet.

The incident highlighted the risks of the high-pressure arms race around AI. The technology has the potential to enhance accuracy, efficiency and decision-making, but developers are expected to set clear boundaries for their actions and act responsibly when bringing products to market. Still, the temptation to place profit over reliability remains.

The start of the AI arms race can be traced back to 2019, when Microsoft chief executive Satya Nadella realized that Google's AI-powered autocomplete feature in Gmail was becoming so effective that his own company was in danger of being left behind in AI development.


Technology startup OpenAI, which needed external capital to secure additional computing resources, presented an opportunity. Nadella quietly made an initial investment of $1 billion. He believed that a collaboration between the two companies would allow Microsoft to commercialize OpenAI's future discoveries, make Google dance and weaken its dominant market share. He was soon proven right.

Microsoft's quick integration of OpenAI's ChatGPT into Bing was a strategic coup that signaled technological superiority over Google. Not to be left behind, Google rushed out its own chatbot, even though it knew Bard was not ready to compete with ChatGPT. The hasty move cost Alphabet $100 billion in market capitalization.

Today, the dominant approach in the technology industry appears to be a myopic fixation on developing ever more sophisticated AI software. Fear of missing out pushes companies to rush unfinished products to market, disregarding the risks and costs involved. Meta, for instance, recently confirmed its intention to double down on the AI arms race, despite rising costs and an almost 12 percent drop in its share price.

There appears to be a conspicuous lack of purpose-driven initiatives, with the focus on profits placed above social welfare considerations. Tesla, for instance, is rushing to launch its AI-based "Full Self-Driving" (FSD) features, even though the technology is far from mature enough for safe use on the road. FSD has been linked to hundreds of accidents and dozens of deaths in cases where drivers were inattentive.

As a result, Tesla had to recall more than 2 million vehicles over FSD/Autopilot issues. Regulators also argued that Tesla did not make proposed changes part of the recall, despite concerns that drivers could roll back the required software updates.

The problem is exacerbated by the fact that increasing numbers of sub-par "mediocre technologies" are reaching the market. For example, two new GenAI-based wearable devices, the Rabbit R1 and the Humane AI Pin, sparked a backlash over accusations that they were useless, overpriced and did not solve meaningful problems.

Unfortunately, this trend shows no sign of slowing down: driven by the desire to profit from ChatGPT's incremental improvements as quickly as possible, some startups are rushing to launch "mediocre" GenAI-based hardware devices. They appear to show little interest in whether a market exists; the goal seems to be to win every possible AI race, regardless of whether it adds value for end users. In response, OpenAI has warned startups against pursuing an opportunistic, short-term strategy of futile innovations, pointing out that more powerful versions of ChatGPT are coming to market that can easily replicate any GPT-based apps the startups launch.

In response, governments are preparing regulations for the development and use of AI, and some technology companies are responding with greater responsibility. A recently published open letter, signed by industry leaders, reinforced the idea: "It is our shared responsibility to make decisions that maximize the benefits of AI and minimize the risks, for today and for future generations."

As the technology industry grapples with the ethical and societal implications of AI proliferation, some advisers, clients and outside groups are making the case for purposeful innovation. While regulators provide some oversight, real progress requires industry stakeholders to take responsibility for fostering an ecosystem that places a higher priority on societal good.

Questions for discussion

  • Do technology companies bear responsibility when other firms use artificial intelligence in incorrect and unethical ways?

  • What strategies can technology companies pursue to stay focused on their purpose and treat profit as the result of achieving it?

  • Should the market introduction of AI be more strictly regulated? If so, how?

  • How do you think the race-to-the-bottom trend will affect companies working with AI over the next five to 10 years? What factors are most significant?

  • What risks do companies face if they do not take part in the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-oriented strategy? What factors are important in this scenario?
