
Women in AI: Sandra Wachter, Professor of Data Ethics at Oxford

To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year highlighting key work that often goes unrecognized. You can find more profiles here.

Sandra Wachter is a professor and senior researcher in data ethics, AI, robotics, algorithms and regulation at the Oxford Internet Institute. She is also a former member of the Alan Turing Institute, the UK's national institute for data science and AI.

During her time at the Turing Institute, Wachter assessed the ethical and legal aspects of data science and highlighted cases in which opaque algorithms have become racist and sexist. She also looked at ways to audit AI to combat disinformation and promote fairness.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I cannot remember a time in my life when I didn't believe that innovation and technology have incredible potential to improve people's lives. At the same time, I know that technology can have devastating consequences for people's lives. And so, not least because of my strong sense of justice, I have always strived to find a way to guarantee that perfect middle ground: enabling innovation while protecting human rights.

I have always felt that law plays a crucial role here. Law can be the middle ground that both protects people and enables innovation. Law as a discipline came naturally to me. I like challenges; I like understanding how a system works, seeing how I can exploit it, finding the gaps and then closing them.

AI is an incredibly transformative force. It is being deployed in finance, employment, criminal justice, immigration, health and the arts. This can be good or bad, and whether it is good or bad is a matter of design and policy. I was naturally drawn to it because I felt that law could make a meaningful contribution to ensuring that innovation benefits as many people as possible.

What work are you most proud of (in the AI field)?

I think the work I'm most proud of right now is a piece co-authored with Brent Mittelstadt (a philosopher) and Chris Russell (a computer scientist), with myself as the lawyer.

Our most recent work on bias and fairness, "The Unfairness of Fair Machine Learning," showed the harmful effects of enforcing many "group fairness" measures in practice. Specifically, fairness is achieved by "levelling down," or making everyone worse off, rather than helping disadvantaged groups. This approach is very problematic in the context of EU and UK non-discrimination law, as well as being ethically troubling. In a piece in Wired we discussed how harmful levelling down can be in practice: in healthcare, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary, while also making a system less accurate overall.
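To make the levelling-down effect concrete, here is a minimal, purely illustrative Python sketch; the function, the chosen metric (false-negative rate) and the numbers are our own assumptions, not taken from the paper. It "equalizes" outcomes across two groups by dragging the better-served group down to the worse group's rate, closing the fairness gap without helping anyone.

```python
# Hypothetical toy example of "levelling down" in group fairness.
# Not the authors' method or data; all numbers are invented for illustration.

def level_down(fnr_a: float, fnr_b: float) -> tuple[float, float]:
    """Equalize false-negative rates by raising the better rate to the worse one.

    The fairness gap closes on paper, but no one is helped: the group that
    was better served is now served worse, and the other group gains nothing.
    """
    worst = max(fnr_a, fnr_b)
    return worst, worst

# Illustrative screening model: group A misses 5% of cases, group B misses 15%.
fnr_a, fnr_b = 0.05, 0.15
eq_a, eq_b = level_down(fnr_a, fnr_b)

print(f"Before: group A misses {fnr_a:.0%}, group B misses {fnr_b:.0%}")
print(f"After levelling down: both groups miss {eq_a:.0%}")
# The metric is now equal, but more cases are missed overall: patients in
# group A are newly harmed while group B is no better off.
```

Levelling up, by contrast, would mean improving the worse-served group's rate; the harm described above arises when a fairness constraint is satisfied without requiring that improvement.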

For us, this was frightening and something that is important for people in tech, in policy and really everyone to know. Indeed, we have engaged with UK and EU regulators and shared our alarming findings with them. I deeply hope this gives policymakers the leverage they need to implement new policies that prevent AI from causing such serious harm.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

The interesting thing is that I never saw technology as something that "belongs" to men. It wasn't until I started school that I realized society saw no place in technology for people like me. I remember that when I was 10 years old, the curriculum required girls to knit and sew while the boys built birdhouses. I also wanted to build a birdhouse and asked to be transferred to the boys' class, but my teachers told me that "girls don't do that." I even went to the head of the school and tried to overturn the decision, but unfortunately failed at the time.

It is very hard to fight the stereotype that you do not belong in this community. I wish I could say that things like that don't happen anymore, but unfortunately that's not true.

However, I have been incredibly fortunate to work with allies like Brent Mittelstadt and Chris Russell. I was privileged to have incredible mentors, such as my PhD supervisor and manager, and I have a growing network of like-minded people of all genders who are doing their best to lead the way and improve the situation for everyone who is interested in tech.

What advice would you give to women seeking to enter the AI field?

Above all else, try to find like-minded people and allies. Finding your people and supporting each other is crucial. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines to solve the common problems we face. Received wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other areas of tech hold the tools to truly innovate and offer something new.

What are some of the most pressing issues facing AI as it evolves?

I think there is a wide range of issues that need serious legal and policy consideration. To name a few: AI is plagued by biased data that leads to discriminatory and unfair outcomes. AI is inherently opaque and difficult to understand, yet it is tasked with deciding who gets a loan, who gets the job, who goes to prison and who is allowed to go to school.

Generative AI has related problems, but it also contributes to misinformation, is riddled with hallucinations, violates privacy and intellectual property rights, puts people's jobs at risk and contributes more to climate change than the airline industry.

There is no time to waste; we should have addressed these issues yesterday.

What are some issues AI users should be aware of?

I think there is a tendency to believe a certain narrative along the lines of "AI is here and here to stay, get on board or be left behind." I think it is important to think about who is pushing this narrative and who profits from it. It is important to remember where the actual power lies. The power lies not with those who innovate, but with those who buy and implement AI.

Consumers and businesses should therefore ask themselves: "Does this technology actually help me, and in what way?" There is "AI" built into electric toothbrushes now. Who is this for? Who needs this? What is being improved here?

In other words, ask yourself what is broken and what needs to be fixed, and whether AI can actually fix it.

This type of thinking will shift market power, and hopefully innovation will move in a direction that focuses on benefit to communities and not just profit.

What is the best way to responsibly build AI?

There are laws that demand responsible AI. Here, too, a very unhelpful and untrue narrative tends to dominate: that regulation stifles innovation. That is not true. Bad regulation stifles innovation. Good laws enable and nurture ethical innovation; that is why we have safe cars, planes, trains and bridges. Society loses nothing if regulation prevents the creation of AI that violates human rights.

By that logic, traffic and safety regulations for cars also "stifle innovation" and "restrict autonomy." These laws prevent people from driving without a license, keep cars without seat belts and airbags off the market, and penalize people who do not obey the speed limit. Imagine what the automotive industry's safety record would look like if there were no laws regulating vehicles and drivers. AI is currently at a similar tipping point, and due to heavy industry lobbying and political pressure, it is still unclear which path it will take.

How can investors better push for responsible AI?

I wrote an article a few years ago called "How Fair AI Can Make Us Richer." I firmly believe that AI that respects human rights and is unbiased, explainable and sustainable is not only legally, ethically and morally right, but can also be profitable.

I really hope investors understand that if they push for responsible research and innovation, they will also get better products. Bad data, bad algorithms and bad design choices lead to worse products. Even if I cannot convince you to do the ethical thing because it is the right thing to do, I hope you will see that the ethical thing is also more profitable. Ethics should be seen as an investment, not a hurdle to overcome.
