
James Manyika of Google: “Productivity gains from AI are not guaranteed”

Last November, OpenAI co-founder Sam Altman predicted at a panel discussion that 2024 would see a breakthrough in artificial intelligence that nobody saw coming. Google executive James Manyika agreed: “And yet one more thing.”

Did the past year meet expectations? The sell-off in technology stocks over the summer reflected a feeling that the adoption of artificial intelligence would take longer than expected.

Manyika points to what has been achieved. Transformers — the technology that underpins large language models — have enabled Google Translate to more than double the number of languages it supports, to 243. Google's chatbot Gemini can seamlessly switch between text, images and video (at least in certain contexts), and it also allows users to enter increasingly complex queries.

For example, Manyika hoped to hear a summary of the most recent research in his field during his commute from San Francisco to Silicon Valley: he wanted to enter 100 technical documents into Gemini and then hear two AI voices discuss them. “I can do that now. That's an example of a significant breakthrough.”

Yet many users view LLMs like Gemini as clever curiosities rather than mission-critical technology. Does Manyika actually spend his commute listening to AI voices discussing technical documents? The answer appears to be that he still prefers human podcasts.

As Google's senior vice-president of research, technology and society, Manyika must walk a tightrope: communicating the transformative potential of artificial intelligence while convincing policymakers and the public that Google is not pursuing that potential recklessly.

Last year, the “godfather of AI”, Geoffrey Hinton, resigned from Google, citing the uncontrollable risks of the technology. Shortly afterwards, Manyika promised that the company would act “responsibly from the beginning”. Born and raised in Zimbabwe, he earned a PhD in robotics from Oxford and is now an advocate for the technology's benefits for developing countries.

Manyika, 59, spent his career at McKinsey before joining Google in 2022. I'm interested in how he sees the application of AI tools in the real world.

“Right now, everyone from my old colleagues at the McKinsey Global Institute to Goldman Sachs are announcing these extraordinary numbers on economic potential – in the trillions – (but) it will require a whole series of actions, innovations, investments and even policies… The productivity gains are not guaranteed. They will require a lot of work.”

In 1987, economist Robert Solow noted that the computer age was visible everywhere but in the productivity statistics. “We may have a version of it where we see this technology everywhere, on our phones, in all these chatbots, but it has done nothing to change the economy in these really fundamental ways.”

Using generative AI to create software code is not enough. “In the US, the technology sector makes up about 4 percent of the workforce. Even if the entire technology sector adopted 100% of it, it would be insignificant from a labor productivity perspective.” Instead, the answer lies in “very large sectors” such as healthcare and retail.

Former British prime minister Sir Tony Blair has said that people “will have an AI nurse, probably an AI doctor, just as there will be an AI tutor.” Manyika is less dramatic: “In most of these cases, these jobs will be supported by AI. I don't think any of these jobs will be replaced by AI, not in any foreseeable future.”

The history is not exactly rosy. During his time at McKinsey, Manyika predicted that the pandemic would enable companies to drive digital transformation: he admits that many did so “in the direction of cost reduction.” Now he agrees that managers are incentivized to replace their employees with AI rather than having the technology support them.


How could large language models change his old business of management consulting? Manyika emphasizes the potential of models like Gemini for drafting and summarizing. “In my role, I have teams working on many projects: I could say, 'Give me a status update on project Y,' and I get summaries of all the documents in my emails and the conversations we have had.”

Summarizing and drafting are tasks that young lawyers, for example, take on. Will law firms fundamentally change because junior employees are no longer needed? “Yes, but…” says Manyika, stressing that his vision is for firms to use AI to increase revenue, not just to cut costs.

“You don't win by cutting costs. You win by creating more valuable outcomes. So I would like to see these law firms think about, 'OK, now that we have this new production capability, what additional value-added activities do we need to do to leverage what's now possible?' Those are going to be the winners.”


Google's search engine dominates the web – last month a US court ruled that it was an illegal monopoly. But many publishers fear that AI could make the situation even worse.

Google now answers some search queries with AI summaries. Chatbots offer an alternative source of information. In both cases, web users may find what they're looking for without having to click on links – thus depriving the publishers who provided the information of advertising revenue.

Before meeting Manyika in July, I asked Gemini: “What are the top news stories in the Financial Times today?” Gemini replied: “There were several top news stories in the Financial Times today (November 28, 2024)” – sic. The response listed five headlines, most of which appeared to be from December 2023.

“But it also redirects you to the website. We still provide links in Gemini, right?” says Manyika. Although Gemini mentioned the FT website in its response, it actually provided only two links – to rival news websites.

Manyika points to an option in Gemini's answer called “View drafts.” This is Google's attempt to signal that the chatbot doesn't provide a “single, definitive answer” – if you run the same query twice, you'll get different answers. I hadn't even noticed this option, and I doubt users really believe it will compensate for the chatbot's unreliability.

Stopping users from clicking on links would be “a terrible own goal” given that Google's business model is based on advertising, argues Manyika. He compares publishers' concern about traffic to the fear that, as search shifted from desktops to smartphones, only one link would be visible and the rest would be ignored. “People still looked at everything else, and in great detail.”

(After the interview, a Google spokesperson sent me a screenshot of a Gemini response that did indeed contain a link to the FT website. I tried again, but still couldn't reproduce this.)

The broader question is whether Google has been pushed to bring AI products to market faster than it would like, to avoid falling behind OpenAI and others. For example, Google's AI summaries have advised users that it is healthy to eat rocks.

According to Google, only one in seven million AI summaries violates content policies, for example by recommending that users eat rocks. Manyika admits that he appreciates Google's culture of internal debate: “Half the time people think we shouldn't publish anything. The other half of the time people think we're too slow.”

“We're not always going to get these things right, and I think that's OK.” Google has “held back a lot of things. When I joined the company, they decided not to release facial recognition technology.” He politely fails to mention that OpenAI's Altman, by contrast, has invested in iris-scanning technology.


Writing via robots brings other risks. We draw conclusions about a person's personality and competence from their writing style. If chatbots become more common, we may not be able to do that. Manyika argues that I am romanticizing the present: “If I write you a letter, my assistant may have written it.”

I offer another example: isn't it unhelpful that many teachers can no longer assign written homework because students are using chatbots? “How often do you think a teacher who grades 100 essays reads each essay from start to finish?” he replies. That's not really the point.

Manyika is friends with the singer will.i.am, who “founded a school in Compton, a poor neighborhood in LA. These kids were promised for many years that somebody would come and teach them to code. That person never showed up. If they now use an LLM to write code, is that a good thing or a bad thing?”

The seeming paradox is that, at a time when Google is brilliantly distributing the world's information, the world seems more vulnerable to misinformation. “I don't know if I would attribute that to Google,” says Manyika. Maybe not, but upending the way we receive and process information has potentially serious, unexpected downsides.

Manyika turns the conversation to quantum computing. “We actually believe that quantum computing will allow us to develop AI differently. Stay tuned – you'll see some major milestone news throughout the year. What hasn't happened so far… is that nobody has shown a computation that, in principle, isn't even possible with a classical computer.” The strength of the tech believers is to come up with a new trick while the world is still struggling to evaluate the last one.
