
Margaret Mitchell: artificial general intelligence is ‘just vibes and snake oil’

Margaret Mitchell, researcher and chief ethics scientist at artificial intelligence developer and collaborative platform Hugging Face, is a pioneer in responsible and ethical AI.

One of the most influential narratives around the promise of AI is that, some day, we will be able to build artificial general intelligence (AGI) systems that are at least as capable or intelligent as humans. But the concept is ambiguous at best and poses many risks, argues Mitchell, who founded and co-led Google’s responsible AI team before being ousted in 2021.

In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, she explains why a focus on humans, and on how to help them, should be central to the development of AI, rather than a focus on the technology itself.


Melissa Heikkilä: You’ve been a pioneer in AI ethics since 2017, when you founded Google’s responsible AI ethics team. In that time, we’ve gone through several different stages of AI and of our understanding of responsible AI. Could you walk me through that?

Margaret Mitchell: With the increased potential that came out of deep learning — this is circa 2012-13 — a group of us who were working on machine learning, which is largely now called AI, were really seeing that there was a massive paradigm shift in what we were able to do.

We went from not being able to recognise a bird to being able to tell you all about the bird. A very small set of us started seeing the problems that were emerging based on how the technology worked. For me, it was probably circa 2015 or so when I saw the first glimmers of the future of AI, almost where we are now.

I got really scared and nervous because I saw that we were just going full speed ahead, tunnel vision, and we weren’t noticing that it was making errors that could lead to pretty harmful outcomes. For example, if a system learns that a white person is a person and that a Black person is a Black person, then it has learnt that white is the default. And that brings with it all kinds of issues.

One of my “aha” moments was when I saw that my system thought that this massive explosion that ended up hurting 43 people was beautiful, because it made purples and pinks in the sky (after) it had learnt sunsets!

So it was so clear to me that we were going full speed ahead in the direction of making these systems more and more powerful without recognising the critical relationship between input and output, and the effects that it could have on society.

So I started what’s now called responsible AI or ethical AI. If you’re at the forefront and you see the issue and no one else is paying attention to the issue, you’re like, “I suppose I have a responsibility to do something here.” There was a small set of people at the forefront of technology who ended up giving rise to responsible AI.

(Computer scientist and founder of advocacy body the Algorithmic Justice League) Joy Buolamwini was another person who was at the forefront, working on face (recognition). She saw the same kind of bias issues I was noticing in the way that people were being described. For her, it was how faces were being detected.

AI Exchange

This spin-off from our popular Tech Exchange series of dialogues examines the benefits, risks and ethics of using artificial intelligence, by talking to those at the centre of its development.

(Around the same time) Timnit Gebru, who I co-led the Google ethical AI team with, was starting to understand that this technology she had been working on could be used to surveil people, to create disproportionate harm for people who are low income.

MH: I remember that era well, because we started to see plenty of changes, companies applying ways to mitigate these harms. We saw the first wave of regulation. Now it almost feels like we’ve taken 10 steps back. How does that make you feel?

MM: Within technology, and within society generally, for every action there’s a reaction. The interest in ethics and responsible AI and fairness was something that was pleasantly surprising to me, but also something I saw as part of a pendulum swing. We tried to make as much positive impact as we could while the time was ripe for it.

I started working on this stuff when no one was interested in it and people were pushing back against it as a waste of time. Then people got excited about it, thanks to the work of my colleagues, to your reporting. (It became) dinner table conversation. People (learnt) about bias and AI. That’s amazing. I never would have thought that we would have made that much of an impact.

(But) now (regulators and industry) are reacting and not caring about (ethics) again. It’s all just part of the pendulum swing.

The pendulum will swing back again. It probably won’t be another massive swing in the other direction until there are some pretty horrible outcomes, just by the way that society tends to move and regulation tends to move, which tends to be reactive.

So, yes, it’s disappointing, but it’s really important for us to keep up the steady drumbeat of what the problems are and to make them clear, so that it’s possible for the technology to swing back in a way that (takes into account) the societal effects.

MH: Looking at this current generation of technologies, what kind of harm do you expect? What keeps you up at night?

MM: Well, there are a lot of things that keep me up at night. Not all of them are even related to technology. We’re in a position now where technology is inviting us to give away a lot of private information that can then be used by malicious actors or by government actors to harm us.

As technology has made it possible for people to be more efficient, have greater reach, be more connected, that has also come with a loss of privacy, a loss of security around the kind of data that we share. So it’s quite possible that the kinds of data we’re putting out and sharing now could be used against us.

This is everything from being stopped at the border from entering the country because you’ve said something negative about someone online, to non-consensual intimate imagery or deepfake pornography that comes out and is commonly used to take revenge on women.

With the growth in technological capabilities has come a growth in personal harm to people. I really worry about how that might become more and more intense over the next few years.

MH: One of the contributing factors to this whole AI boom we’re seeing now is this obsession with AGI. You recently co-wrote a paper detailing why the industry shouldn’t see AGI as a guiding principle, or ‘north star’. Why is that?

MM: The issue is that AGI is really a narrative. Just the term “intelligence” isn’t a term that has consensus on what it means, not in biology, not in psychology, certainly not within education. It’s a term that’s long been contested, something that’s abstract and nebulous, but which, throughout the history of talking about intelligence, has been used to separate the haves and have-nots, and has been a segregationist force.

But at a higher level, intelligence as a concept is ill-defined and it’s problematic. Just that aspect of AGI is part of why shooting for it is a bit fraught, because it functions to give an air of positivity, of goodness. It provides us (with) a cover of something good, when, in fact, it’s not actually a concrete thing and instead provides a narrative about moving forward whatever technology we want to move forward, as people in positions of power within the tech industry.

Then we can look at the term “general”. General is also a term that, within the context of AI, doesn’t have a very clear meaning. So you can think about it just in everyday terms. If you have general knowledge about something, what does that mean? It means you know about maths, science, English.

I’d say I have general intelligence but that I don’t know anything about medicine, for example. And so if I were a technology that’s going to be used everywhere, I think it’s very important to understand: OK, I can help you edit your essay, but I cannot do surgery. It’s so important to understand that general doesn’t actually mean good at everything we could possibly think of. It means good at some things, as measured on benchmarks based on specific topics.

AGI as a whole is just a super problematic concept that gives an air of objectivity and positivity, when, in fact, it’s opening the door for technologists to just do whatever they want.

Google’s AI Mode uses AI and large language models to process search queries © Smith Collection/Gado via Getty Images

MH: Yes, that’s really interesting, because we’re being sold this promise of a super AI. And in a way, for a lot of AI researchers, that’s what motivates their work. Is that something you perhaps thought about earlier in your career?

MM: One of the fundamental things that I’ve noticed as someone in the tech industry is that we develop technology because we love to do it. We make explanations about how this is great for society or whatever it is, but fundamentally, it’s just really fun.

This is something I struggle with, as someone who works on operationalising ethics, but really, what I love best is programming. So many times, I’ll see people developing stuff and be like, oh my god, that’s so fun, I wish I could do that. Also, it’s so ethically problematic, I can’t do it.

I think that part of this thing about how we’re pursuing AGI is, for a lot of people, an explanation for just doing what they love, which is advancing technology. They’re saying, there’s this north star, there’s this thing we’re aiming towards, this is some concrete point in time that we’re going to hit. But it’s not. It’s just, we’re advancing technology, given where we are now, without really deeply thinking about where we’re going.

I do think there are some people who have philosophical or perhaps religious-type beliefs in AGI as a supreme being. I think, for the most part, people in technology are just having fun advancing technology.

MH: When you think about the ultimate goal of AI, what is it?

MM: I come from a background that’s focused on AAC (assistive and augmentative communication). And that comes out of language generation research.

I worked on language generation for years and years and years, since 2005. When I was an undergrad, I was first geeking out over it, and I wrote my thesis on it. And I’ve continued to look at AI through the lens of: how can this assist and augment people? That can be seen as a stark contrast to: how can it replace people?

I always say, it’s important to supplement, not supplant. That comes from this view that the fundamental goal of technology is to help with human wellbeing. It’s to help humans flourish. The AGI thing, the AGI narrative, sidesteps that, and instead actually puts forward technology in place of people.

For me, AI should be grounded in and centred on the person, and on how best to help the person. But for a lot of people, it’s grounded in and centred on the technology.

It is quite possible to get swept up in excitement about stuff and then be like, oh, wait, this is terrible for me. It’s like the computer scientist nerd in me still gets really excited about stuff.

But also, that’s why it’s important to engage with people who are more reflective of civil society and have studied social science, who are more familiar with social movements and just the impact of technology on people. Because it is easy to lose sight of (it) if you’re just within the tech industry.

MH: In your paper, you laid out the reasons why this AGI narrative can be quite harmful, and the different traps it sets. What would you say the main harms of this narrative are?

MM: One of them that I’m really concerned about is what we call the illusion of consensus, which is the fact that the term AGI is being used in a way that gives the illusion that there’s a general understanding of what that term means and a consensus on what we want to do.

We don’t, and there isn’t a common understanding. But again, this creates a constant tunnel vision of moving forward with technology, advancing it based on problematic benchmarks that don’t rigorously test application in the real world, and it creates the illusion that what we’re doing is the right thing and that we know what we’re doing.

There’s also the supercharging bad science trap, which goes to the fact that we don’t really have the scientific method within AI. In chemistry, physics, other sciences outside of computer science, there has been a long history of developing scientific methods, like how do you do significance testing, and what’s the hypothesis?

With computer science, it’s much more engineering-focused and much more exploratory. That’s not to say it’s not science, but it is to say that we haven’t, within computer science, understood that when we have a conclusion from our work, it isn’t necessarily supported by our work.

There’s a tendency to make pretty sweeping, marvellous claims that aren’t actually supported by what the research does. I think that’s a sign of the immaturity of the field.

I imagine that Galileo and Newton didn’t have the scientific method that would be useful to apply to what they were doing. There was still a certain amount of exploring and then getting to a conclusion and going back and forth. But as the science matured, it became very clear what must be done in order to support a conclusion.

We don’t have that in computer science. With so much excitement put into what we’re doing, we have a bias to accept the conclusions, even if they’re not supported by the work. That creates a feedback effect or a perpetuation effect, where, based on an incorrect conclusion, more science builds on top of it.

MH: Are you seeing any of that in the field now? For example, are large language models (LLMs) the right way to go? Could we be pouring all this money into the wrong thing?

MM: With everything in AI ethics, and to some extent responsible AI, (you could ask) is it right or wrong for what? What’s the context? Everything is contextual. Language models can be useful for language generation in specific kinds of contexts. I did three theses, undergraduate, masters and PhD, all of them on natural language generation, and a lot of the work was looking towards: how can we assist people who are non-verbal?

For people who have non-verbal autism (or) cerebral palsy, is there a way to generate language so that they’re in control? With cerebral palsy, using a button that they can press with their head to select among a ton of different generated utterances that reflect what they want to say, in order to say how their day was.

Or for people who are blind, can we create systems that generate language in a way that speaks to what they need to know for what they’re trying to do, like navigating a busy room or knowing how people are reacting to what they’re saying?

Language models can be useful. The current paradigm of large language models is generally not grounded, which means it’s not connected to a concrete reality.

Margaret Mitchell testifying on AI before a US Senate subcommittee on privacy, technology and the law last year © Saul Loeb/AFP via Getty Images

It’s stochastic, it’s probabilistic, based on the prompt from the user. I think the utility (of language models) would be even stronger if grounding were a fundamental part of LLM development.

I don’t think that LLMs get us to something that could replace humans for tasks that require a lot of human expertise. I think that we make a mistake when we do that.

MH: What are the real-world implications of blindly buying into this AI narrative?

MM: I’m really concerned about just this growing rift between people who are making money and people who are not making money and are losing their jobs. It just seems like a lot of people are losing a lot of the things that were able to help them make a living.

Their writing, their images, the sorts of things that they’ve created, that’s getting swept up in the advancement of AI technology. People who are developing AI are getting richer and richer, and the people who have provided the data that AI is using in order to advance are losing their jobs and losing their income.

It really seems to me that there’s a massive risk, and it’s actually already happening with AI, of creating a massive rift between people in positions of power with money and people who are disempowered and struggling to survive. That gap seems to just be widening more intensely all the time.

MH: So is AGI basically just a scam, or . . .?

MM: Some might say it’s snake oil. Two of my colleagues in this space, Arvind Narayanan and Sayash Kapoor, have put out this book, AI Snake Oil. Although I disagree with some of their conclusions, the basic premise that the public is being sold something that isn’t actually real and can’t actually meet the needs that they’re being told it can meet, that does seem to be happening, and that is a problem, yes.

This is why we need a more rigorous evaluation approach, better ideas about benchmarking, what it means to understand how a system will work, how well it will work, in what context, that kind of thing. But as for now, it’s just vibes, vibes and snake oil, which can get you so far. The placebo effect works relatively well.

MH: Instead of obsessing over AGI, what should the AI sector do instead? How can we create systems that are actually useful and beneficial to all?

MM: The most important thing is to centre the people instead of the technology. So instead of technology first, and then figuring out how it might be applied to people: people first, and then figuring out what technology might be useful for them.

That is a fundamental difference in how technology is being approached. But if we want something like human flourishing or human wellbeing, then we need to centre people from the start.

MH: We’ve talked a lot about the potential risks and harms, but what’s exciting you in AI right now?

MM: I’m cautiously excited about the possibilities with AI agents. I suck at filling out forms. I’m terrible at doing my taxes. I think it might be possible to have AI agents that could do my taxes for me accurately.

I don’t think we’re there yet, because of the grounding problem, because the way the technology has been built hasn’t been done in a way that’s grounded. There’s a constant risk of error and hallucination. But I think it might be possible to get to a place where AI agents are grounded enough to provide reasonable information in filling out complex forms.

MH: That is a technology that everyone, I think, could use. I could use that. Bring it on.

MM: I’m not after the singularity. I’m after things that can help me do things that I completely suck at.
