This article is an on-site version of Martin Sandbu’s Free Lunch newsletter.
Hello, Free Lunch readers. I’m Tej Parikh, the FT’s economics leader writer, and I’m standing in for Martin Sandbu this week. In the same vein as my last piece — where I took on the “hot US economy” narrative — I play contrarian again, this time with artificial intelligence.
“Narratives are a significant vector of rapid change in culture, in zeitgeist, and in economic behaviour,” wrote Nobel laureate Robert Shiller in his 2019 book Narrative Economics.
Today’s dominant economic and market narrative is the transformative potential of AI. Although US interest rates have risen to their highest in two decades, and economic momentum is easing, the S&P 500 has been pushing higher, driven partly by the frenzy for AI-linked stocks.
But narratives can get ahead of themselves, and euphoria can be blinding. That makes it worthwhile to actively search for evidence that might cast doubt on the conventional wisdom. (Notably, there have been murmurings of AI scepticism in recent weeks.) So I trawled the latest research and spoke to a number of “AI bears” for data points that challenge the bullish outlook. Here’s what I found.
1) It is still early days
AI is still in the so-called picks and shovels phase, when upfront capital expenditure takes place before any major productivity gains can be reaped. This is evident from stock performance.
AI stocks can be grouped into three buckets: the infrastructure enablers (eg Taiwan Semiconductor Manufacturing Co, Arm), the software firms (eg Salesforce) and the adopters. Recently, semiconductor groups have seen the biggest gains in their value, followed by the cloud, software and services companies. While some early adopters in information, manufacturing and technical fields have seen gains, valuations for businesses in industries with upside productivity potential remain quite tame.
So what? Well, AI has not yet proven to be adoptable at scale across the economy. That doesn’t mean those gains will never arrive — most analysts forecast greater business integration of AI over the coming decade. But it’s a reminder that the hype right now is driven mostly by the enablers of the technology, while its upside for business productivity — which would drive economic growth — is still largely theoretical, however optimistic it may look.
If the productivity gains don’t come into sight soon, it could derail the upward march of the enablers. At the end of June, Nvidia shares tumbled, and insider selling by top executives at the company took place at the fastest pace in years.
As AI bear Jim Covello, head of global equity research at Goldman Sachs, put it recently in a research note: “AI bulls seem to just trust that use cases will proliferate as the technology evolves.”
2) Where is the killer application?
That leads nicely to a key question: what if the end adopters don’t benefit as much as the bulls think they might?
Earlier this year I spoke to Erik Brynjolfsson, a professor, author and senior fellow at the Stanford Institute for Human-Centered AI, for an FT Economists Exchange. He was optimistic about the potential economy-wide productivity gains from AI adoption. But he warned about what he called the “Turing trap”.
The Turing test was introduced by Alan Turing in 1950. The idea was to set out criteria to measure a machine’s ability to exhibit intelligent behaviour comparable to a human’s. But Brynjolfsson reckons it has inadvertently inspired a generation of researchers to build machines that emulate human abilities. “I think it’s becoming apparent that it was the wrong goal all along, and that we should be thinking about how to augment humans and extend our capabilities,” he said.
That leads me to another Erik. Erik Hoel, an American neuroscientist, posits that the industries AI is disrupting are not all that lucrative. He coined the phrase “supply paradox of AI” — the notion that the easier it is to train AI to do something, the less economically valuable that thing is.
“This is because AI performance scales based on its supply of data — that is, the quality and size of the training set itself,” said Hoel. “So you end up biased towards data sets that have a great supply, and that, in turn, biases the AI to produce things that have little economic value.”
Hoel raises an interesting point. Generative AI’s current applications include writing, image and video creation, automated marketing, and processing information, according to the US Census Bureau’s Business Trends and Outlook Survey. Those are not particularly high value. Using specialist data, sophisticated models could do deeper scientific work, but that data can be in short supply or even restricted.
The point is that with the AI infrastructure buildout projected by some to cost more than $1tn in the coming years, what trillion-dollar problem will AI actually solve? To quote Covello: “Replacing low-wage jobs with tremendously costly technology is precisely the polar opposite of the prior (lucrative) technology transitions.”
3) Do the capex plans even add up?
Right, so how far-fetched do the projected AI capex and AI revenue figures seem? To get a sense, a few analysts have done back-of-the-envelope calculations using various assumptions.
David Cahn, a partner at Sequoia, is not an AI bear but thinks revenue expectations will need to pick up. He has tried to reconcile the gap between the revenue expectations implied by the AI infrastructure buildout and actual revenue growth in the broader AI ecosystem.
He took Nvidia’s run-rate revenue forecast and doubled it to cover the cost of AI data centres. “GPUs are half of the total cost of ownership — the other half includes energy, buildings, back-up generators,” he noted. He then doubled that figure again to build in a 50 per cent gross margin for the end user of the graphics processing units. That leads to a rough and ready figure of $600bn in AI revenue needed to pay back the upfront capital investment. (This excludes margins for cloud vendors, which would push the revenue requirement higher.)
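Cahn’s arithmetic can be sketched in a few lines. The Nvidia run-rate revenue figure below is an assumption chosen so that the sums reproduce the $600bn result quoted above, not a number taken from his note:

```python
# A rough sketch of Cahn's back-of-the-envelope calculation.
# The run-rate figure is an illustrative assumption, in $bn.
nvidia_run_rate_bn = 150

# GPUs are roughly half the total cost of ownership of an AI data centre,
# so double the GPU spend to approximate total data-centre cost.
data_centre_cost_bn = nvidia_run_rate_bn * 2

# Double again so the end user of the GPUs earns a 50 per cent gross margin
# (revenue must be twice cost for margin to be half of revenue).
required_ai_revenue_bn = data_centre_cost_bn * 2

print(required_ai_revenue_bn)  # 600
```

The second doubling is the step that often gets missed: a 50 per cent gross margin means revenue has to be double cost, not cost plus half.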
Barclays came to a similar conclusion using a different approach. It estimates cumulative incremental AI capex between 2023 and 2026 of $167bn across the top players in the industry. It reckons this is enough to “support over 12,000 ChatGPT-scale AI products”. But it is unsure that there is enough consumer and enterprise demand to absorb this amount.
Another factor here is competition. “LLMs (large language models) . . . have become increasingly indistinguishable from one another,” noted Peter Berezin, chief global strategist at BCA Research. “They may end up functioning more like highly competitive airlines with thin profit margins rather than monopolistic social media platforms.”
The point? This is basic maths — with numerous assumptions — but it does suggest that capex spending today far exceeds the potential returns.
4) The macro impact remains unclear
There have been numerous studies over the past 18 months estimating the scale of the potential AI productivity growth gain. Two have stood out, partly because they end up at opposite ends of the spectrum.
First is from Goldman Sachs economists Joseph Briggs and Devesh Kodnani, who last year forecast a 9 per cent rise in total factor productivity and a 15 per cent increase in US GDP following full adoption.
Second is MIT economist Daron Acemoglu’s forecast this year of just a 0.5 per cent increase in TFP and a 0.9 per cent rise in GDP over the next 10 years.
The difference comes down to three modelling choices:
i) The share of automatable jobs: Acemoglu assumes generative AI (GAI) will automate only 4.6 per cent of total work tasks in the next 10 years, whereas Goldman’s baseline is 25 per cent over the long term.
ii) The effects of labour reallocation and the creation of new tasks: Goldman estimates the uplift from displaced workers being re-employed in new occupations made possible by AI-related advances, and from new tasks that boost non-displaced workers’ productivity. Acemoglu’s modelling focuses primarily on cost savings.
iii) Cost savings: Goldman is more bullish here partly because it expects AI automation to create new tasks and products.
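The gap between the two headline numbers can be illustrated with a simple Hulten-style approximation, in which the aggregate TFP gain is roughly the share of work tasks automated multiplied by the average cost saving per automated task. The parameter values below are illustrative assumptions, not the papers’ exact figures:

```python
def tfp_gain(task_share: float, cost_saving: float) -> float:
    """Hulten-style approximation: the aggregate TFP gain from automation
    is roughly (share of tasks automated) x (average cost saving per task)."""
    return task_share * cost_saving

# Bear-ish assumptions in the spirit of Acemoglu: 4.6% of tasks, modest savings.
bear = tfp_gain(0.046, 0.15)   # roughly 0.7 per cent

# Bull-ish assumptions in the spirit of Goldman: 25% of tasks, larger savings.
bull = tfp_gain(0.25, 0.30)    # roughly 7.5 per cent
```

A fivefold difference in the assumed automatable share, compounded by a doubling of the assumed cost saving, accounts for an order-of-magnitude gap — which is why the automation assumption dominates these forecasts.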
This underscores how differing assumptions about AI’s automatable potential, and its ability to create new activities and lower costs, can drive big swings in its projected impact on national-level productivity. While we are getting more clarity on each element, a lot of uncertainty remains. Most investment today is based on firm-level studies of potential productivity gains, but those do not always extrapolate well to the national or global level.
Building on this, ING Research says larger sectors may not even be in a position to use AI, thereby limiting the technology’s near-term economic impact. Its economists argue that the more digitalised European sectors, which tend to be the smallest relative to the economy, are better placed to implement AI and experience productivity improvements.
5) The enabling environment
Even if a killer AI application is found, there is still no guarantee that its economic impact will be transformative. As my conversation with Brynjolfsson highlighted, the broader economic, social and legal environment also needs to shift to allow economies to harness the technology’s benefits and minimise its harms. “Our understanding of the skills, the organisations and the institutions needed is not advancing nearly as fast as the technology is,” he said. Here are a few factors that will determine both the pace and the extent of AI transformation:
i) Energy. The AI industry could consume as much energy as a country the size of the Netherlands by 2027. With net zero targets, that energy must also be clean. Grids need to be connected rapidly, and permitting needs to be swift, to get the infrastructure up alongside the AI capex.
ii) Regulation and governance. AI can be harmful. Deepfakes, privacy violations, market volatility (caused by AI trading, for instance) and cyber crime can all be counter-productive. The problem is that regulation is running far behind the technology, and at different paces globally.
iii) Society. How AI interacts with society also matters. For instance, GAI has been tipped to capture revenues from creative sectors. But there is opposition both from those employed in these sectors and from the public, who still want a human touch in some industries. Hollywood writers, for example, were able to set up guardrails for how AI is used in their industry. Even then, if there are significant automation-related job losses, social unrest and inequality could stymie growth, particularly if retraining initiatives are not widespread.
iv) Skills. Job postings mentioning “natural language processing”, “neural networks”, “machine learning” or “robotics” have picked up. But skillsets will take some time to match demand. The IBM Global AI Adoption Index 2023 found limited AI skills and expertise to be the top barrier hindering businesses’ successful adoption of AI today.
The point is that AI’s potential productivity impacts don’t matter if the enabling economic and legal environment can’t be put in place to take advantage of them — the AI transition relies on more than just the AI innovators.
These should all cast at least a shade of doubt on the so far exuberant AI outlook. Free Lunch would be interested in your bearish findings too.
Of course, it is early days: new AI applications will arise, and adoption should become easier. Nor is the explosive capex necessarily a bad thing. Bubbles can be destructive, but they must be weighed against the overall impact on economic capacity — the railroad bubbles of the 19th century burst painfully, but left valuable infrastructure behind. Perhaps the euphoria is a necessary vehicle to channel money into a potentially transformative, but not yet proven, technology.
Either way, it does little harm to step back and reassess one’s assumptions. Narratives are by design appealing, but can be meaningless if they cannot stand up to scrutiny.
Other readables
The troubles of Europe’s battery industry reveal what’s wrong with EU green industrial policy, writes Martin Sandbu.
Who is the UK’s new chancellor of the exchequer? Read the FT’s in-depth profile of Rachel Reeves. And Chris Giles explains why you should pay attention to Reeves’s fiscal statement later this month: it could reveal a lot more about how the Labour government will run the economy than yesterday’s King’s Speech.
Ahead of a plenary meeting of the Chinese Communist party’s Central Committee, the country’s official growth rate is slowing, and is below the government’s target. That seems to be fuelling a multifaceted social crisis and rising popular frustration with unfairness and inequality.
More on the similarities and differences between far-right parties in different European countries, from our very own John Burn-Murdoch.