This article is an on-site version of Martin Sandbu's Free Lunch newsletter. Premium subscribers can sign up here to receive the newsletter every Thursday. Standard subscribers can upgrade to Premium here, or explore all FT newsletters.
Greetings to all. Thanks to Tej Parikh for writing last week's Free Lunch on the pessimistic case against the artificial intelligence hype. I'm checking in again this week to contribute some accompanying thoughts on how policy thinking on AI is evolving. Over the next few weeks, various FT colleagues will be continuing the newsletter, so you can keep up to date during what we hope will be a restful and energising holiday for all Free Lunch readers.
I found Tej's analysis of the numbers you have to believe to justify expectations of an AI boom quite fascinating. I have no reason to question them; if anything, they make me an even bigger AI sceptic than I already was. But just as you should prepare for war if you want peace, or be prepared for pandemics while hoping they never occur, we should also formulate policy for the case in which AI does become the hugely disruptive technology (economically and otherwise) that many believe it is likely to be.
I wrote two columns on AI last year: one reflecting on the shock with which the world had received ChatGPT, the other following the Bletchley Park AI summit. In both, I argued for making the AI debate much more boring. To deal responsibly with this new technology, we should look at the banal but real harms it is already causing, not the science-fiction-style risks. The focus should be on technocrats, not terminators, I wrote.
So far, it seems my wish has come true. This summer, both the IMF and the Bank for International Settlements – two of the world's most important technocratic economic institutions – published reports on artificial intelligence. And next week, the EU's new law on artificial intelligence comes into force. Increasingly, this technical work is quietly crowding out the more sensationalist debate.
The BIS chapter on AI in its annual economic report — which provides a practical introduction to how AI works for laypeople — highlights the role AI can play in overcoming information bottlenecks in the economic system. For example, correspondent banking has declined because the costs of growing information requirements resulting from (much-needed) anti-money laundering rules are not always justified in a relatively low-margin business. By significantly reducing the costs of know-your-customer checks and money-laundering risk assessment, AI could therefore safeguard a vital aspect of global financial connectivity. The BIS cites lending, insurance and asset management as other examples where efficiency depends on affordable information processing.
According to the BIS, AI can also help central banks do their jobs better, by improving cybersecurity and enabling real-time analysis of the economy and of risks to financial stability.
Both the BIS and the IMF provide an overview of the key macroeconomic issues. Sensibly, but not surprisingly, they share the view that AI will have a positive impact on productivity, but (a) we have no idea to what extent, and (b) it will affect different tasks, skill levels and sectors differently. So there will likely be winners and losers, with some earning more because they become more productive, and others becoming redundant in their old jobs. But again, we have little idea who will be affected and how.
We should note that the question here is not just which workers are made more productive by AI. It also depends on whether the lower effective cost of the tasks they perform leads to an increase in demand for those tasks (so that more workers are needed, even if each is more productive) or simply to a decrease in spending on them (so that fewer, more productive workers are needed). There is a parallel here with how, in the three decades up to the late 1970s, higher productivity in manufacturing was associated with an increase in employment in western factories (in absolute terms), while after that the opposite was true, as fewer and fewer workers were needed to produce the quantities of manufactured goods that markets could absorb (even before globalisation moved some of the least skilled jobs overseas).
What should we do, given how little we can currently predict? The IMF report has some good ideas. More generous unemployment insurance, especially a system tied to overall employment, could have a big impact by giving displaced workers in AI-affected jobs time to find new and even better jobs elsewhere. I would add that high demand pressure is essential – as we saw in the post-pandemic recovery.
There is a theme here, albeit a kind of photonegative, which is that these are serious but not grandiose ideas. No terminators, singularities or godlike AI takeovers in the future, but concrete opportunities and risks in the here and now, and some good advice on how to deal with them.
The same goes for the EU's AI law, against which there seems to be a bit too much uninformed protest or distrust. Overall, it aims to categorise very real and present risks – not so much those of the technology itself as of its plausible uses – and to impose some restrictions on the riskier ones. For example, it bans dystopian applications such as subliminal manipulation of human users or Chinese-style social credit systems. Surely this is exactly the first step we would expect regulators to take. (If only we had had such rules when online behavioural targeting first emerged!)
Of course, even these more mundane exercises can make mistakes, even when the error is not one of letting science-fiction dangers distract from real and present ones. For example, I fear that the last-minute addition of the AI Act's regulation of foundation models, which unlike the rest of the Act seeks to regulate the technology itself rather than how it is used, could do more harm than good.
I am also concerned about the IMF's willingness to suggest that perhaps we should change taxation to tax AI directly (rather than just as part of a higher general tax on capital) because the social costs of the labour-market change it brings are particularly high. This seems like an abdication to me. The best policy response to disruptive but productivity-enhancing technological change cannot be to slow it down – that attitude has led to the survival of wasteful and terribly paid manual jobs in the UK and US. Instead, one should strengthen the measures that force companies elsewhere to compete for workers looking for new jobs: high demand pressure, active labour market policies, and social programmes that minimise the costs of leaving a job to look for a better one.
Despite all the mistakes and disagreements – which are part of the democratic terrain – such debates are much more down-to-earth than the initial reactions to the latest breakthrough. That also makes them much more useful. More of them, please.
Other readables
An editorial in the FT calls on western countries to welcome China's electric vehicles as a contribution to their decarbonisation goals. In Beijing, advanced manufacturing remains a central part of the government's economic vision.
The president of the Eurogroup of eurozone finance ministers writes in an FT comment piece that Europe is facing a fiscal turning point.
India has a new budget that promises to contain the fiscal deficit while also meeting the needs of the new coalition partners in the government and promoting investment in infrastructure.
Our Madrid correspondent investigates Spain's backlash against tourism.