Helle Thorning-Schmidt became the first female prime minister of Denmark in 2011, holding her nation’s highest office for four years, before later serving as chief executive of the charity Save the Children International.
Perhaps her most prominent public role today is as co-chair of Meta’s Oversight Board, a body set up by the social media giant’s chief executive Mark Zuckerberg that began work in 2020. It comprises a global group of journalists, academics and politicians that adjudicates on some of the most high-profile content moderation cases on platforms such as Facebook and Instagram.
Funded by a $280mn trust, the board has been cast as an independent, almost quasi-judicial body: a “supreme court” for online speech.
It has handled some of the thorniest issues on Meta’s platforms, such as upholding — with caveats — Donald Trump’s suspension from Facebook. Last year, the board demanded Meta review its policies around “manipulated media” after its moderators declined to take down an edited video on Facebook that falsely depicted US President Joe Biden as a paedophile.
Meta has given the board authority over a narrow set of issues, such as whether content should be reinstated or removed. On top of issuing decisions in cases, however, it can also make recommendations for policy changes.
These limits have led to accusations that Meta is merely allowing limited self-regulation that staves off more serious intervention. Thorning-Schmidt has frequently criticised Meta’s practices, while arguing the company has made substantive changes in response to the Oversight Board’s work.
Ahead of an appearance at the Financial Times’s TNW conference in Amsterdam today, she spoke to the FT’s technology news editor, Murad Ahmed. They discussed the rise of AI-generated deepfakes, the sometimes drastic consequences of the board’s rulings, and how Meta’s decisions affect her personal reputation.
Murad Ahmed: This is the year of elections. More than half of the world has gone to, or is going to, the polls. You’ve helped raise the alarm that this could also be the year that misinformation, particularly AI-generated deepfakes, fractures democracy. We’re halfway through the year. Have you seen that prophecy come to pass?
Helle Thorning-Schmidt: If you look at different countries, I think you’ll see a very mixed bag. What we’re seeing in India, for instance, is that AI (deepfakes are) very widespread. Also in Pakistan it has been very widespread. (The technology is) being used to make people say things, even though they’re dead. It’s making people speak when they are in prison. It’s also making famous people back parties that they might not be backing. . . (But) if we look at the European elections, which, obviously, is something I observed very closely, it doesn’t look like AI is distorting the elections.
What we suggested to Meta is. . . they should look at the harm and not just take something down because it is created by AI. What we’ve also suggested to them is that they modernise their whole community standards on moderated content, and label AI-generated content so that people can see what they’re dealing with. That’s what we’ve been suggesting to Meta. We’re very happy that, particularly with the Biden case, this has meant that Meta will be changing its policies.
MA: You’re independent of Meta, as far as you can be. You’re in a position where you’re effectively lobbying Meta, rather than being able to force them to change in this case. Has that worked so far?
HTS: I think it’s working very well. Meta is following all our decisions. As far as I know, they’ve followed all but one decision. This is what they promised from the outset, and I think it’s very clear that they’re sticking to that promise.
I particularly like that they’ve accepted (a recommendation) that they should tell people if a word in their content means (it) should be taken down. So they’re actually informing people: “if you change this word, your piece of content will not be removed”. That gives a lot of transparency. It helps a lot of people, and it has already helped millions internationally to change their content so it’s permissible on the platforms.
Meta is not obliged to take all our recommendations, but I think they take them very seriously, and we’re quite happy with how far we have come with this tool of giving recommendations to Meta.
MA: You mentioned that they have accepted all of your decisions but one. What was the one?
HTS: It was the one in Cambodia.
MA: Tell us about that.
HTS: Well, in Cambodia, there was (then prime minister Hun Sen) who was putting up content that was extremely harmful and threatening, particularly to the opposition. We looked very thoroughly at this case and recommended that this particular user . . . should have a sanction of being taken off the platform for six months.
Meta came back, recognised what we were saying, but (said a ban) would stifle free speech in Cambodia. That was their argument. We didn’t agree with Meta on this. We still don’t agree. We have a certain understanding for their argument, because it is true that Meta’s platforms perhaps would have been banned in Cambodia, which would not have helped free speech in that country.
That was a clear disagreement between Meta and the Oversight Board.
MA: That had some pretty real-world consequences for you at the time, right?
HTS: Yes, all of the board members are now banned in Cambodia. That’s quite a big step for a government to take, particularly for me, as a former prime minister. . . but that also shows that they take our decisions seriously.
MA: Isn’t this one of those situations that leads to criticism of Meta and the Oversight Board as well? The Oversight Board made its decision, but it wasn’t able to enforce it. In Meta’s case, you could say it was a free speech decision, or you could say they took the commercial imperative. They would rather be open in Cambodia than listen to your decision.
HTS: You can say whatever you want. But I also think everywhere is a market for Meta, we all know that. . .
And I’m not defending Meta on this, but I do have a certain understanding for a perspective which is, basically, if you take down these platforms in Cambodia, for instance, the opposition will have much narrower opportunities to put up their positions and their points of view. So it would stifle free speech in Cambodia if those platforms didn’t exist there, there’s no doubt about it.
That’s always a balancing act. I do think the whole case underlines the complexity of content moderation. So I don’t actually mind. We have this transparent discussion with Meta. That’s also what the board is about. We take all these discussions, these very hard choices when it comes to content moderation, and put them out in the open. Now we can discuss this, so people can have an opinion about it.
We give public ownership to these discussions. Everyone can participate. They can see our reasoning. They can see it in short or long form on our website. And what we have created is a system where Meta is not the last decision-maker on the most difficult content moderation decisions, because we are.
These decisions always, always come down to that crossing point between free speech and other human rights. That is what it’s always about. That’s what we discuss endlessly. I’m hoping that, with our work, everyone can see that these discussions are very difficult and take a lot of consideration, and everyone can participate, because we always allow public comments.
MA: I want to come back to some of these decisions. You’re deliberately given the edge cases, the hard cases to deal with.
HTS: We take the hard cases, because we take more than we’re given. We take the cases ourselves, we decide them ourselves. Meta can refer cases to us as well, but most of our cases are just us deciding which cases we want to do.
MA: Sure. Before I ask more about the Oversight Board and where you’d like to take it, I just wanted to return to the conversation about AI and deepfakes. What are you seeing popping up in this world, and what’s concerning you?
HTS: Absolutely everything you can imagine. Deepfakes — having a politician saying something he didn’t say, or she didn’t say, doing something they didn’t actually do. I think the Biden case was a good example. It wasn’t even an AI-generated (video), it was spliced. You can (manipulate) content so that it looks like a person is doing something they didn’t do. People can get fooled by that. So that’s one big area.
Another area that we’re very occupied with right now is AI pornography, also with public figures, but it can spill over to private figures, which is a deep problem. This is an enormous problem, because it can create a lot of real-life harm very, very fast. Meta needs to be very, very good at taking this kind of content down and finding signals of non-consent, better than they are doing right now. . .
I think that can make an impact on how Meta treats AI-generated content that affects particularly women but actually everyone, trans people, men as well, and also affects female politicians, with these AI-generated nudes and even porn.
I do think we’ll change how Meta operates in this space. I think we’ll end up, after a few years, with Meta labelling AI content and also being better at finding the signals of non-consent that mean content should be removed from the platforms, and doing it much faster. This is very difficult, of course, but they need a very good system. They also need human moderators with cultural knowledge who can help them do that. (Note: Meta began labelling content as “Made with AI” in May.)
What we’re also looking at is whether they are equally good at doing this in their main markets, or big markets, like the US, as in other markets. So this is a very big issue for us, and I think we’ll be able to change Meta in this space over the coming years.
MA: Two things to pick up from what you said. The first thing is, why is Meta not good at picking up signals of non-consent? Are there some issues, some structural issues, that mean that they’re poor at this and need to improve?
HTS: I think for Meta it’s probably a balancing act of not over-enforcing. Particularly in the space of nudity, they don’t want to (over-)enforce, so I think that’s the balancing act that Meta’s trying to strike. You’ll see more about that when we finish our case.
MA: You’ve got a current case about this?
HTS: We’ve got two cases going at the moment (about) public figures and nudity/pornography.
MA: Then the second thing I thought was interesting is that you think Meta will get to a stage where they will just flag all AI content on their platforms. Why don’t they just implement this policy straight away? Why take two years to get to this stage?
HTS: I’m not saying it will take two years, I’m just saying that this is what we’re looking into. I don’t think anything should take two years. . . We will have more knowledge about how AI is impacting people, women, elections, all these things that we’re talking about here. After the Biden case, Meta did say they would change how they deal with AI-generated content. We have suggested to them that they should label AI content, not take it down automatically.
Because, as I’m saying, not all AI-generated content is harmful. You can also do it in a way where it’s actually really clear that it’s satire, that it’s funny. There is a lot of AI-generated content that can stay on the platform. (Meta) should look into labelling the AI-generated content and taking the content down if it is harmful.
MA: Are you worried that Meta is spending less on moderation? The company has undergone a “year of efficiency”, cutting costs, including thousands of layoffs. Some of that has even affected the Oversight Board. There had to be restructuring there, with people losing their jobs at the Oversight Board as well.
HTS: Tech is cutting everywhere. It’s what everyone is doing right now. I’d have been surprised if that hadn’t affected the Oversight Board as well. We are very, very sad to see colleagues leave, but we’re also very clear that we will do the same number of cases moving forward.
We have gone from being a start-up with quite hand-held procedures within our organisation to being more streamlined. For example, in our case selection, we now have better systems, better priorities for how we select cases.
We have changed a lot over the last four years. We will still produce the same number of cases and policy advisory opinions, and we will certainly not stop all these debates that we’re starting, for example on election moderation.
MA: Are you worried that they may be quite well invested in English-language moderation, but less so in other languages around the world, and that creates problems and holes in their moderation coverage?
HTS: Yes, there are gaps. We know that. We’ve also said that to Meta, and they actually made a considerable change. First of all, they’ve translated the community standards into a number of new languages. That was one of our first recommendations and they have actually done that.
The other thing is, of course, that we have asked them about their election processes, the protocol they have around elections, which they created because of the Trump decision, where we advised them to have election protocols. It is my understanding that they’re much better at looking at other countries. They were much better at using their protocol in the Brazil election (and) in some African languages as well.
(Meta) is better now at being able to moderate and understand the cultural context, but they’re definitely not good enough. That’s why, for instance, in these two new AI cases that we’re looking at, we will be examining whether they are as vigilant in other markets as they are in the US market, for example.
It could be that they’re very, very fast to take a public figure down in the US, but perhaps not as vigilant and fast in other markets. And we’re looking at that all the time.
That’s why it’s so important that we’re a global board. We are from the north, the south, east and west. We are from everywhere. It really makes a big difference in our conversations that the perspective of global voices comes into our decisions.
MA: What more would you want from Meta in the future? You can’t look at political advertising cases, because that’s outside the remit of the board. I think you’ve been public about your desire to be able to take on cases related to that. Is there anything more that you want from Meta, to be able to do your jobs better?
HTS: I spend every single day asking more of Meta, so, of course, today is no exception. . .
We would like to understand more about how they “shadow-ban” (the practice of restricting a user’s content in ways that are not always apparent to them) and how they use AI to shadow-ban and sanction users. That’s a bit of an opaque area still, and we would like to go more into that.
Then we’ll continue looking at AI. We have had some sensible recommendations in that space. Meta has accepted those recommendations, but we’ll keep looking into it. We are not done with moderating AI content or deepfake content.
We think Meta has improved in how they’re serving users, with more transparency and advice to users on how they can treat their content so it stays up on the platform. But we still hear from users that they’re very confused about how Meta reacts to things, so we’ll keep looking into how Meta can treat their users better, and we’ll see progress in that space as well.
As I keep saying, I think Meta has grown up in the last five years. The Oversight Board has played a part in that. But I want Meta to be the safest, best platform of them all. And we’ll push Meta to become better all the time.
We also invite other platforms to make use of our services. I think some other platforms really could use a bit of independent oversight, so we’re inviting other platforms to engage. But for now, we’re pushing Meta every single day to become better, more transparent, and to treat their users better.
MA: On shadow-banning. Are you saying that this is a practice where they will not give you any insight into how they come up with their decisions, or that it’s outside your remit?
HTS: It’s not exactly in our remit right now, but we’re pushing to be able to take cases in that space. You asked me what the next things are that we want to push Meta on, and these are some of the things that interest us: AI, shadow-banning, their sanction system in general.
MA: Why do you want to get more involved in this?
HTS: It is something that users don’t understand and something that. . . if you look at our cases, it’s mainly been about trying to clear up things that users didn’t understand, changing the community standards when they weren’t clear (they have done that) and making it more transparent to users how things are being moderated. I think that empowers the users.
We are also quite happy with the tools that users have these days. There is parental supervision. There is a lot an individual user can do to avoid certain kinds of content or engagement on their personal platforms or their personal accounts. We like this system. . .
I think we’re slowly building a system where you have regulators doing their bit. Europe is starting, but that will move to the whole world. You have the social media platforms, particularly Meta — I know Meta best — doing their bit and improving all the time with more transparency.
Then, you have the independent Oversight Board. . . which keeps saying to Meta: you can do better in this space, with the focus on users and transparency for users and rights for users.
Then, of course, you have user tools where users get more agency in terms of what they’re seeing on their own accounts. So (all these elements) are part of the ecosystem that I think will create better moderated content online.
MA: Cynics who’ve looked at the creation of the Oversight Board have argued it’s a way for Meta to take an arm’s-length approach to the hardest moderation decisions. It’s a way of co-opting some great minds with great reputations to vindicate the way that Meta runs its platforms.
You are a former prime minister of your country. You’ve gone on to do other great philanthropic work. Your reputation is now attached to how Meta develops and changes over time. Have you ever thought about the impact on you, because you have standing and weight in the world?
HTS: Absolutely, a great question. I think for all 22 of us on the board, the one thing we’re most concerned about is our independence. None of us would have accepted this role if we weren’t guaranteed our independence. Meta can’t tell me what to say or do, or which decisions we take. They can’t actually get rid of us, either. They’re kind of stuck with us. . .
They have also given 22 completely independent individuals a voice to criticise them all the time. What I’m looking at is two things. Is Meta changing? Is Meta being more transparent — are they being more considerate in their content moderation?
To be honest, I can’t see any other tech companies that have changed as much as Meta has over the last four, five years. I’m not saying that’s solely because of the Oversight Board, but I do think that Meta has grown up in a number of ways. That’s how I measure things. Is Meta changing, and do I think that the Oversight Board plays a part in that change? Definitely.
Everyone is saying Meta is doing this to push (difficult) decisions away from themselves. Perhaps. Well, they are actually also pushing decisions away from themselves, so it’s not Meta that has the final word on some of the most difficult content moderation decisions. They have, in fact, given those decisions to an independent body.
So I’m not sure I completely understand the scepticism. I could understand it if, four years in, there had been no changes, Meta didn’t carry out our decisions, or didn’t care about our recommendations. But the opposite is true. I’m in it because I want to show that content moderation is possible, and that we’re part of a bigger ecosystem of regulation, platforms taking responsibility, independent regulation, and users themselves getting more agency.
That is what I believe in. I don’t think you can get good content moderation if platforms are doing it themselves. I don’t think you can get good content moderation if governments are doing it, so that’s why we have to create something in the middle.
I still think it was very brave and daring of Meta to create this. They didn’t have to do that. Other companies haven’t done that. That’s why I’m in it, because I want to see if this can change how content is being moderated, not just on Meta’s platforms but in general.
I’m seeing progress. And as long as there’s progress and independence, I don’t care so much about my reputation, because I want to see things change in real life for users and for free speech.