Gaia Marcus, director of the Ada Lovelace Institute, leads a team of researchers investigating one of the thorniest questions in artificial intelligence: power.
There is an unprecedented concentration of power in the hands of a few large AI firms as economies and societies are transformed by the technology. Marcus is on a mission to ensure this transition is equitable. Her team studies the socio-technical implications of AI technologies and tries to provide data and evidence to support meaningful conversations about how to build and regulate AI systems.
In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, she explains why we urgently need to think about the kind of society we want to build in the age of AI.
Melissa Heikkilä: How did you get into this field?
Gaia Marcus: I possibly backed the wrong horse out of the gate, so I ended up doing history because it was something that I was good at, which I think is often what happens when people don’t quite know what they’re doing next, so they just go with where they have the strongest grades.
I saw that I increasingly needed numbers to answer the questions that I had of the world. And so, I was a social network analyst for the RSA (the London-based Royal Society for Arts) for nearly five years. And I taught myself social network analysis at the end of my human rights master’s, mainly, because there was a job that I wanted to do, and I didn’t have the skill for it.
My mum’s a translator, who was self-taught, so I’ve always been taught that you teach yourself the skills for the thing that you need. I, sort of by accident, ended up being an analyst for nearly five years, and that took me more towards ‘data for good’.
I did maybe another five years in that digital ‘data for good’ space in the UK charity sector, running an R&D team for Centrepoint, the homeless charity, moving more into data strategy. I was responsible for Parkinson’s UK’s data strategy, when I saw that the UK government was hiring for head of national data strategy. I did that for a few years and then was around government for six-and-a-half years in total.
I’ve always ended up in areas where there’s a social justice component, because I believe that’s one of my main motivators.
MH: There was a time in AI when we were thinking about societal impacts a lot. And now I feel like we’ve taken a few steps, or maybe more than a few steps, back, and we’re in this “let’s build, let’s go, go, go” phase. How would you describe this moment in AI?
GM: I believe it’s a really fragmented moment. I believe maybe tech looks like it’s taken a step back from responsible AI. Well, the hyperscalers feel like they may have taken a step back from, say, ethical use of AI, or responsible use of AI. I believe academia that focuses on AI is as focused on social impact as it ever was.
It does feel to me that, increasingly, people are having different conversations. The role that Ada can play at this moment is that, as an organisation, we’re a bridge. We seek to look at different ways of understanding the same problems, different types of intelligences, different types of expertise.
You see a lot of hype, of hope, of fear, and I believe trying not to fall into any of those cycles makes us quite unique.
MH: What we’re seeing in the US is that certain elements of responsibility, or safety, are labelled as ‘woke’. Are you afraid of that stuff landing in Europe and undermining your work?
GM: The (Paris) AI Action Summit was quite a pivotal moment in my thinking, in that it showed you that we were at this crossroads. And there’s one path, which is a path of like-minded countries working together and really seeking to ensure that they have an approach to AI and technology which is aligned with their public’s expectations, in which they have the levers to manage the incentives of the companies operating within their borders.
And then you’ve got another path that is really about national interest, about often putting corporate interests above people. And I believe as humans we’re very bad at both overestimating how much change is going to happen in the medium term, and then not really thinking about how much change has actually just happened in the short term. We’re really in a calibration phase. And fundamentally, I believe businesses and countries and governments should really always be asking themselves what are the futures that are being built with these technologies, and are these the futures that our populations want to live in.
MH: You’ve done a lot of research on how the public sector, regulators and the public think about AI. Can you talk a little bit about any changes or shifts you’re seeing?
GM: In March, we launched the second round of a survey that we have done with the Alan Turing Institute, which looks to understand the public’s understanding, exposure and expectations of AI, linked to really specific use cases, which I believe is really important, and both their hopes for the technologies and the fears they have.
At a moment when national governments seem to be stepping back from regulation, and when the international conversation seems to be one with a deregulatory, or at least simplification, bent, in the UK, at least, we’re seeing a rise in people saying that laws and regulations would increase their comfort with AI.
And so, last time we ran the nationally representative survey, 62 per cent of the UK public said that laws and regulation help them feel comfortable. It’s now 72 per cent. That’s quite a big change in two years.
And interestingly, in an area, for instance, where post-deployment powers, the power to intervene once a product has been released to market, aren’t getting that much traction, 88 per cent of people believe it’s important that governments or regulators have the power to stop serious harm to the public if it starts occurring.

I believe I do worry about this almost two steps of removal that we have with governments. On the one hand, those that are seeking to evaluate or understand AI capabilities are sometimes slightly out of step with the science because everything is moving so quickly.
And then you have another step of removal, where public comfort is, in terms of an expectation of regulation and governance, an expectation of redress if things go wrong, an expectation of explainability, and a general feeling that things like explainability are more important than perfect accuracy, and it feels that governments are then another step removed from their populations in that. Or at least in the UK, where we have data for it.
MH: What advice would you give the government? There’s this big anxiety that Europe is falling behind, and the governments really want to boost investment and deregulate. Is that the right approach for Europe? What would you rather see from governments?
GM: It’s really important for governments to consider where they think their competitive advantage is around AI. Countries like the UK, and potentially in Europe as well, are more likely to be active on the deployment of AI than at the frontier layer.
A lot of the race dynamics and conversation are focused on the frontier layer, but actually, where AI tools will have a real impact on people is at the deployment layer, and that’s where the science and the theory hit messy human realities.
One big lesson that we very much had with the AI Opportunities Plan: it’s great that the UK wants to be in the driving seat, but the question for me is the driving seat of what? And actually, something that we maybe didn’t see is a hard-nosed assessment of what the specific risks and opportunities are for the UK. Instead of having 50 recommendations, what are the key things for the UK to advance?
This point of really thinking about AI as being socio-technical is really important, because I believe there needs to be a distinction between what a model or a potential tool or application does in the lab, and then what it does when it comes into contact with human realities.
We’d be really keen for governments to do more on really understanding what is happening, how our models or products or tools are actually performing when they come into contact with people. And really making sure that the conversations around AI are really predicated on evidence, and the right kind of evidence, instead of theoretical claims.
MH: This year, agents are a big thing. Everyone’s very excited about that, and Europe definitely sees this as an opportunity for itself. How should we be thinking about this? Is this really the AI tool that was promised? Or are there maybe some risks that people aren’t really thinking about, but should?
GM: One of the first things is that you’re often talking about different things. I believe it’s really important that we actually drive specificity about what we mean when we’re talking about AI agents.
It’s definitely true that there are systems that are designed to engage in fluid and natural language-like conversations with users, and they are designed to play particular roles in guiding and taking action for users. I believe that’s something that you’re seeing in the ecosystem. We’ve done some recent analysis of what we’re seeing so far, and we have disaggregated AI assistants into at least three key forms, and I’m sure there’ll be more.
One is executive, so things like OpenAI’s Operator, which actually takes action directly in the world on a user’s behalf, and so that’s quite low autonomy. There are agents or assistants that are more like advisers, so these are systems that can guide you through, maybe, a subject that you’re not that familiar with, or will help you understand what steps you need to take to accomplish a particular goal.
There’s a legal advice bot called DoNotPay, and people have been trying to do this for a really long time. I remember when I was working at Centrepoint, there were chatbots that weren’t in any way agentic, but they were aiming to help you understand what to do with a parking fine or give you some very basic legal advice.
Then we’ve got these interlocutors, which is a really interesting area we should think more about. These are AI assistants that converse, or have a dialogue, with users and potentially aim to bring about particular change in a user’s mental state, and these could be things like mental health apps.
There are some really interesting questions about where it’s appropriate for those AI assistants to be used, and where it isn’t. And they may become one of the primary interfaces through which people engage with AI, especially with generative AI. They’re very personalised and personable. They’re well suited to carrying out these complex, open-ended tasks, so you might see that this is actually where the general public start interfacing with AI a lot more.
And you might see that they’re used by the general public more and more, to carry out some early tasks that are related to early AI assistants. You might see that this becomes a way in which a lot of decisions and tasks are then delegated from an average user to AI. And there’s a possibility that these tools could have considerable impacts on people’s mental or emotional states. And therefore, there’s the potential for some really profound implications.
That brings forth some of the more long-standing regulatory or legal questions around AI safety, bias, liability, which we discussed, and privacy. When you’re in a market that’s quite concentrated, the more the AI assistants are integrated into people’s lives, the more you raise questions about competition and who’s driving the market.
MH: What kind of implications for people’s lives?
GM: The rise of AI companionship is something we must think about more, as a society. There have been some pretty stark early use cases from the (United) States, involving children, but there’s that question of what it means for people (more broadly). There were recent reports of people in the Bay Area using (Anthropic’s AI chatbot) Claude as almost like a coach, despite knowing that it isn’t.
But there are just things that we don’t know yet, like what does it mean for more people to have discussions with, or use, tooling that doesn’t have any intelligence, in the real sense of the word, to guide their decisions. That’s quite an interesting question.
The liability is quite interesting, especially if you start having ecosystems of agents. If your agent interacts with my agents and something goes wrong, whose fault is it? That becomes quite an interesting liability question.
But also, there’s a question about the power of the companies that are actually developing these tools, if these tools are then used by increasing portions of the population. The survey that came out in March showed that about 40 per cent of the UK population have used LLMs (large language models).

What is quite interesting there is that there’s quite a difference between habitual users and then people who have maybe played around with the tool. For different use cases, between 3 and 10 per cent of the population would classify themselves as a habitual user of LLMs. But that, to me, is really interesting, because a lot of the people who are opinion formers around LLMs, or driving policy responses, or who are in the companies that are actually building these tools, will be in that 3 to 10 per cent.
There’s that really interesting question of what’s the split that you’re then seeing across the population, where most people who are opinion formers in this space probably use LLMs quite habitually, but they then represent quite a small proportion of the overall population.
But even now, before AI assistants have become as mainstream a thing as people think they might become, we’ve got some data that suggests that 7 per cent of the population has used a mental health chatbot.
MH: Oh, interesting. That’s more than I expected.
GM: It does raise questions around where tools that are marketed or understood as being general purpose go into uses that are regulated. Providing mental health advice is regulated. And so, what does it mean when a tool that, in its very essence, doesn’t have any actual human understanding, or any actual understanding of what is and isn’t the truth, goes into those uses?
What does it mean when you start seeing the use of these tools in increasingly sensitive areas?
MH: As a citizen, how would you approach AI agents, and use these tools?
GM: Firstly, it’s really important that people use the democratic levers that they have available, to make sure that their representatives know what their expectations are in this space. There’s that general sense of obviously voting at the ballot box, but there’s also talking to your politicians. Our study suggests . . . that 50 per cent of people don’t feel reflected in the decisions that are made about AI governance.
But also, I’d say I don’t think it’s necessarily (the) individual’s responsibility. I don’t think we should be in a situation where each individual is having to upskill themselves just to operate in the world.
There’s a conversation, maybe, as a parent, about what you need to know, so that you know what you’re comfortable with (what) your kids (are) interacting and not interacting with. It is fundamentally the state’s responsibility to make sure that we have the right safeguards and governance, so that people aren’t being unnecessarily put in the way of harm.
MH: Do you think that the UK government is doing that to a sufficient degree?
GM: This government committed . . . to regulating for the most advanced models, in the understanding that there are certain risks that are introduced at the model layer that it’s very hard to mitigate at the deployment or application layer, which is where most of the public will interact with them. That legislation is still forthcoming, so we are interested to know what the plan is there.
We’d be really interested to know what the government’s plans are, in terms of protecting people. There’s also the Data (Use and Access) bill going through government at the moment. We have been giving advice around some of the provisions around automated decision-making that we don’t think align with what the public expects.
The public expects to have the right to redress from automated decisions that are made, and we’re seeing the risk that those protections are going to be diluted, so that is out of step with what the public expects.
MH: What questions should we be asking ourselves about AI?
GM: Amid a lot of the hope that’s being poured into these technologies, we run the risk of losing the fundamental fact that the role of technology should always be helping people live in worlds that they want to live in. Something that we’ll be focusing on in our new strategy is actually unpacking what public interest in AI even means to various members of the public and different parts of, say, the workforce.
In the past we’ve seen some general-purpose technologies that have really fundamentally shaped how human society operates, and some of that has been fantastic.
Most of my family is in Italy, and I can call them, and video call them, and fly, and these are all things that wouldn’t be possible without previous generations’ general-purpose technologies.
But these technologies will always come, also, with risks and harms. And the thing that people should be thinking about is: what futures are being created through these technologies, and are these futures that you want?