Introducing AI Exchange: what does the future hold for the fast-evolving technology?

Technological advances always raise questions: about their benefits, costs, risks and ethics. And they require detailed, well-explained answers from the people behind them. It was for that reason that we launched our series of monthly Tech Exchange dialogues in February 2022.

Now, 18 months on, it has become clear that advances in one area of technology are raising more questions, and concerns, than any other: artificial intelligence. There are ever more people — scientists, software developers, policymakers, regulators — attempting answers.

Hence, the FT is launching AI Exchange, a new spin-off series of long-form dialogues.

Over the coming months, FT journalists will conduct in-depth interviews with those at the forefront of designing and safeguarding this rapidly evolving technology, to assess how the power of AI will affect our lives.

To give a flavour of what to expect, and the topics and arguments that will be covered, below we offer a selection of the most insightful AI discussions so far, from the original (and ongoing) Tech Exchange series.

They feature Aidan Gomez, co-founder of Cohere; Arvind Krishna, chief executive of IBM; Adam Selipsky, former head of Amazon Web Services; Andrew Ng, computer scientist and co-founder of Google Brain; and Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board.

From October, AI Exchange will bring you the views of industry executives, investors, senior officials in government and regulatory authorities, as well as other specialists, to help assess what the future will hold.


If AI can replace labour, it’s a good thing

Arvind Krishna, chief executive of IBM, and Richard Waters, west coast editor

Richard Waters: When you talk to businesses and CEOs and they ask, ‘What can we do with this AI thing?’, what do you say to them?

Arvind Krishna: I always point to two or three areas, initially. One is anything around customer care, answering questions from people . . . it’s a really important area where I believe we can have a much better answer at maybe around half the current cost. Over time, it can get even lower than half, but it can take half out pretty quickly.

A second one is around internal processes. For example, every company of any size worries about promoting people, hiring people, moving people, and these have to be reasonably fair processes. But 90 per cent of the work involved in that is getting the information together. I think AI can do that, and then a human can make the final decision. There are hundreds of such processes inside every enterprise, so I do think clerical white-collar work is going to be able to be replaced by this.

Then I think of regulatory work, whether it’s in the financial sector with audits, or whether it’s in the healthcare sector. A big chunk of that could get automated using these techniques. Then I think there are the other use cases, but they’re probably harder and a bit further out . . . things like drug discovery or in trying to finish up chemistry.

We do have a shortage of labour in the real world, and that’s because of a demographic issue that the world is facing. So we have to have technologies that help . . . the United States is now sitting at 3.4 per cent unemployment, the lowest in 60 years. So maybe we can find tools that replace some portions of labour, and it’s a good thing this time.

RW: Do you think we’re going to see winners and losers? And, if so, what will distinguish the winners from the losers?

AK: There are two spaces. There is business to consumer . . . then there are enterprises that are going to use these technologies. If you think about most of the use cases I identified, they’re all about improving the productivity of an enterprise. And the thing about improving productivity (is that enterprises) are left with more investment dollars for how they really advantage their products. Is it more R&D? Is it better marketing? Is it better sales? Is it acquiring other things? . . . There are lots of places to go spend that spare cash flow.


AI threat to human existence is ‘absurd’ distraction from real risks

Aidan Gomez, co-founder of Cohere, and George Hammond, venture capital correspondent

George Hammond: (We’re now at) the sharp end of the conversation around regulation in AI, so I’m interested in your view on whether there’s a case — as (Elon) Musk and others have advocated — for stopping things for six months and trying to get a handle on it.

Aidan Gomez: I think the six-month pause letter is absurd. It is just categorically absurd . . . How would you implement a six-month pause practically? Who is pausing? And how do you enforce that? And how do we co-ordinate that globally? It makes no sense. The request is not plausibly implementable. So, that’s the first issue with it.

The second issue is the premise: there’s a lot of language in there talking about a superintelligent artificial general intelligence (AGI) emerging that can take over and render our species extinct; eliminate all humans. I think that’s a super-dangerous narrative. I think it’s irresponsible.

That’s really reckless and harmful, and it preys on the public’s fears because, for the better part of half a century, we’ve been creating media sci-fi around how AI could go wrong: Terminator-style bots and all these fears. So, we’re really preying on their fear.

GH: Are there any grounds for that fear? When we’re talking about . . . the development of AGI and a potential singularity moment, is it a technically feasible thing to happen, albeit improbable?

AG: I think it’s so exceptionally improbable. There are real risks with this technology. There are reasons to fear this technology, and who uses it, and how. So, to spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace.

We can now flood social media with accounts that are truly indistinguishable from a human, so extremely scalable bot farms can pump out a particular narrative. We need mitigation strategies for that. One of those is human verification — so we know which accounts are tied to a real, living human being, so that we can filter our feeds to include only the legitimate human beings who are participating in the conversation.

There are other major risks. We shouldn’t have reckless deployment of end-to-end medical advice coming from a bot without a doctor’s oversight. That shouldn’t happen.

So, I think there are real risks and there’s real room for regulation. I’m not anti-regulation; I’m actually quite in favour of it. But I would really hope that the public knows some of the more fantastical stories about risk (are unfounded). They’re distractions from the conversations that should be going on.


There will not be one generative AI model to rule them all

Adam Selipsky, former head of Amazon Web Services, and Richard Waters, west coast editor

Richard Waters: What can you tell us about your own work on (generative AI and) large language models? How long have you been at it?

Adam Selipsky: We’re maybe three steps into a 10K race, and the question should not be, ‘Which runner is ahead three steps into the race?’, but ‘What does the course look like? What are the rules of the race going to be? Where are we trying to get to in this race?’

If you and I were sitting around in 1996 and one of us asked, ‘Who’s the internet company going to be?’, it would be a silly question. But that’s what you hear . . . ‘Who’s the winner going to be in this (AI) space?’

Generative AI is going to be a foundational set of technologies for years, maybe decades to come. And nobody knows if the winning technologies have even been invented yet, or if the winning companies have even been formed yet.

So what customers need is choice. They need to be able to experiment. There will not be one model to rule them all. That is a preposterous proposition.

Companies will figure out that, for this use case, this model’s best; for that use case, another model’s best . . . That choice is going to be incredibly important.

The second concept that’s critically important in this middle layer is security and privacy . . . A lot of the initial efforts out there launched without this concept of security and privacy. As a result, I’ve talked to at least 10 Fortune 1000 CIOs who have banned ChatGPT from their enterprises because they’re so scared about their company data going out over the internet and becoming public — or improving the models of their competitors.

RW: I remember, in the early days of search engines, when there was a prediction that we’d get many specialised search engines . . . for different purposes, but it ended up that one search engine ruled them all. So, might we end up with two or three big (large language) models?

AS: The most likely scenario — given that there are thousands or maybe tens of thousands of different applications and use cases for generative AI — is that there will be multiple winners. Again, if you think about the internet, there’s not one winner in the internet.


Do we think the world is better off with more or less intelligence?

Andrew Ng, computer scientist and co-founder of Google Brain, and Ryan McMorrow, deputy Beijing bureau chief

Ryan McMorrow: In October (2023), the White House issued an executive order intended to increase government oversight of AI. Has it gone too far?

Andrew Ng: I think that we’ve taken a dangerous step . . . With various government agencies tasked with dreaming up additional hurdles for AI development, I think we’re on the path to stifling innovation and putting in place very anti-competitive regulations.

Having more intelligence in the world, be it human or artificial, will help all of us better solve problems

We know that today’s supercomputer is tomorrow’s smartwatch, so as start-ups scale and as more compute (processing power) becomes pervasive, we’ll see more and more organisations run up against this threshold. Setting a compute threshold makes as much sense to me as saying that a device that uses more than 50 watts is systematically more dangerous than a device that uses only 10W: while it may be true, it’s a very naive way to measure risk.

RM: What would be a better way to measure risk, if we’re not using compute as the threshold?

AN: When we look at applications, we can understand what it means for something to be safe or dangerous, and can regulate it properly there. The problem with regulating the technology layer is that, because the technology is used for so many things, regulating it just slows down technological progress.

At the heart of it is this question: do we think the world is better off with more or less intelligence? And it’s true that intelligence now comprises both human intelligence and artificial intelligence. And it is absolutely true that intelligence can be used for nefarious purposes.

But over many centuries, society has developed as humans have become better educated and smarter. I think that having more intelligence in the world, be it human or artificial, will help all of us better solve problems. So throwing up regulatory barriers against the rise of intelligence, simply because it could be used for some nefarious purposes, I think would set back society.


‘Not all AI-generated content is harmful’

Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board, and Murad Ahmed, technology news editor

Murad Ahmed: This is the year of elections. More than half of the world has gone to, or is going to, the polls. You’ve helped raise the alarm that this could be the year that misinformation, particularly AI-generated deepfakes, could fracture democracy. We’re halfway through the year. Have you seen that prophecy come to pass?

Helle Thorning-Schmidt: If you look at different countries, I think you’ll see a very mixed bag. What we’re seeing in India, for example, is that AI (deepfakes are) very widespread. Also in Pakistan it has been very widespread. (The technology is) being used to make people say something, even though they’re dead. It’s making people speak, when they are in prison. It’s also making famous people back parties that they might not be backing . . . (But) if we look at the European elections, which, obviously, is something I observed very deeply, it doesn’t seem that AI is distorting the elections.

What we suggested to Meta is . . . they should look at the harm and not just take something down because it is created by AI. What we’ve also suggested to them is that they modernise their whole community standards on moderated content, and label AI-generated content so that people can see what they’re dealing with. That’s what we’ve been suggesting to Meta.

I do think we’ll change how Meta operates in this space. I think we’ll end up, after a couple of years, with Meta labelling AI content and also being better at finding signals of consent that they need to remove from the platforms, and doing it much faster. This is very difficult, of course, but they need a very good system. They also need human moderators with cultural knowledge who can help them do this. (Note: Meta began labelling content as “Made with AI” in May.)
