Women in AI: Ewa Luger explores how AI affects culture — and vice versa

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Ewa Luger is co-director of the Institute of Design Informatics, and co-director of the Bridging Responsible AI Divides (BRAID) program, backed by the Arts and Humanities Research Council (AHRC). She works closely with policymakers and industry, and is a member of the U.K. Department for Culture, Media and Sport (DCMS) college of experts, a cohort of experts who provide scientific and technical advice to the DCMS.

Luger’s research explores social, ethical and interactional issues in the context of data-driven systems, including AI systems, with a particular interest in design, the distribution of power, spheres of exclusion, and user consent. Previously, she was a fellow at the Alan Turing Institute, served as a researcher at Microsoft, and was a fellow at Corpus Christi College at the University of Cambridge.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

After my PhD, I moved to Microsoft Research, where I worked in the user experience and design group in the Cambridge (U.K.) lab. AI was a core focus there, so my work naturally developed more fully into that area and expanded out into issues surrounding human-centered AI (e.g., intelligent voice assistants).

When I moved to the University of Edinburgh, it was due to a desire to explore problems of algorithmic intelligibility, which, back in 2016, was a niche area. I’ve found myself in the field of responsible AI and currently jointly lead a national program on the topic, funded by the AHRC.

What work are you most proud of in the AI field?

My most-cited work is a paper about the user experience of voice assistants (2016). It was the first study of its kind and continues to be highly cited. But the work I’m personally most proud of is ongoing. BRAID is a program I jointly lead, designed in partnership with a philosopher and ethicist. It’s a genuinely multidisciplinary effort designed to support the development of a responsible AI ecosystem in the U.K.

In partnership with the Ada Lovelace Institute and the BBC, it aims to connect arts and humanities knowledge to policy, regulation, industry and the voluntary sector. We often overlook the arts and humanities when it comes to AI, which has always seemed bizarre to me. When COVID-19 hit, the value of the creative industries was so profound; we know that learning from history is critical to avoid making the same mistakes, and philosophy is the foundation of the ethical frameworks that have kept us safe and informed within medical science for many years. Systems like Midjourney rely on artist and designer content as training data, and yet somehow these disciplines and practitioners have little to no voice in the field. We want to change that.

More practically, I’ve worked with industry partners like Microsoft and the BBC to co-produce responsible AI challenges, and we’ve worked together to find academics who can respond to those challenges. BRAID has funded 27 projects so far, some of which have been individual fellowships, and we have a new call going live soon.

We’re designing a free online course for stakeholders looking to engage with AI, setting up a forum where we hope to engage a cross-section of the population as well as other sectoral stakeholders to support governance of the work — and helping to explode some of the myths and hyperbole that surround AI at the moment.

I know that kind of narrative is what floats the current investment around AI, but it also serves to cultivate fear and confusion among the people who are most likely to suffer downstream harms. BRAID runs until the end of 2028, and in the next phase, we’ll be tackling AI literacy, spaces of resistance, and mechanisms for contestation and recourse. It’s a (relatively) large program at £15.9 million over six years, funded by the AHRC.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

That’s an interesting question. I’d start by saying that these issues aren’t solely found in industry, which is often perceived to be the case. The academic environment has very similar challenges with respect to gender equality. I’m currently co-director of an institute — Design Informatics — that brings together the school of design and the school of informatics, and so I’d say there’s a better balance both with respect to gender and with respect to the kinds of cultural issues that limit women from reaching their full professional potential in the workplace.

But during my PhD, I was based in a male-dominated lab and, to a lesser extent, when I worked in industry. Setting aside the obvious effects of career breaks and caring, my experience has been of two interwoven dynamics. Firstly, there are much higher standards and expectations placed on women — for instance, to be amenable, positive, kind, supportive, team players and so forth. Secondly, we’re often reticent when it comes to putting ourselves forward for opportunities that less-qualified men would quite aggressively go for. So I’ve had to push myself quite far out of my comfort zone on many occasions.

The other thing I’ve had to do is set very firm boundaries and learn when to say no. Women are often trained to be (and seen as) people pleasers. We can be too easily seen as the go-to person for the kinds of tasks that are less attractive to male colleagues, even to the extent of being assumed to be the tea-maker or note-taker in any meeting, regardless of professional status. And it’s only really by saying no, and making sure that you’re aware of your value, that you ever find yourself being seen in a different light. It’s overly generalizing to say that this is true of all women, but it has certainly been my experience. I should say that I had a female manager while I was in industry, and she was wonderful, so the majority of the sexism I’ve experienced has been within academia.

Overall, the issues are structural and cultural, and so navigating them takes effort — firstly in making them visible and secondly in actively addressing them. There are no easy fixes, and any navigation places yet more emotional labor on women in tech.

What advice would you give to women seeking to enter the AI field?

My advice has always been to go for opportunities that allow you to level up, even if you don’t feel that you’re 100% the right fit. Let them decline rather than you foreclosing opportunities yourself. Research shows that men go for roles they think they could do, but women only go for roles they feel they already can do or are doing competently. Currently, there’s also a trend toward more gender awareness in the hiring process and among funders, although recent examples show how far we have to go.

If you look at the U.K. Research and Innovation AI hubs, a recent high-profile, multi-million-pound investment, all nine AI research hubs announced recently are led by men. We should really be doing better to ensure gender representation.

What are some of the most pressing issues facing AI as it evolves?

Given my background, it’s perhaps unsurprising that I’d say the most pressing issues facing AI are those related to the immediate and downstream harms that might occur if we’re not careful in the design, governance and use of AI systems.

The most pressing issue, and one that has been heavily under-researched, is the environmental impact of large-scale models. We might choose at some point to accept those impacts if the benefits of the application outweigh the risks. But right now, we’re seeing widespread use of systems like Midjourney run simply for fun, with users largely, if not completely, unaware of the impact each time they run a query.

Another pressing issue is how we reconcile the speed of AI innovation with the ability of the regulatory climate to keep up. It’s not a new issue, but regulation is the best instrument we have to ensure that AI systems are developed and deployed responsibly.

It’s very easy to assume that what has been called the democratization of AI — by this, I mean systems such as ChatGPT being so readily available to anyone — is a positive development. However, we’re already seeing the effects of generated content on the creative industries and creative practitioners, particularly regarding copyright and attribution. Journalism and news producers are also racing to ensure their content and brands are not affected. This latter point has huge implications for our democratic systems, particularly as we enter key election cycles. The effects could be quite literally world-changing from a geopolitical perspective. It also wouldn’t be a list of issues without at least a nod to bias.

What are some issues AI users should be aware of?

Not sure if this relates to companies using AI or regular citizens, but I’m assuming the latter. I think the main issue here is trust. I’m thinking, here, of the many students now using large language models to generate academic work. Setting aside the moral issues, the models are still not good enough for that. Citations are often incorrect or out of context, and the nuance of some academic papers is lost.

But this speaks to a wider point: You can’t yet fully trust generated text, and so you should only use those systems when the context or outcome is low risk. The obvious second issue is veracity and authenticity. As models become increasingly sophisticated, it’s going to be ever harder to know for sure whether content is human- or machine-generated. We haven’t yet developed, as a society, the requisite literacies to make reasoned judgments about content in an AI-rich media landscape. The old rules of media literacy apply in the interim: Check the source.

Another issue is that AI is not human intelligence, and so the models aren’t perfect — they can be tricked or corrupted with relative ease if one has a mind to.

What is the best way to responsibly build AI?

The best instruments we have are algorithmic impact assessments and regulatory compliance, but ideally, we’d be looking for processes that actively seek to do good rather than simply seeking to minimize risk.

Going back to basics, the obvious first step is to address the composition of designers — ensuring that AI, informatics and computer science as disciplines attract women, people of color and representation from other cultures. It’s obviously not a quick fix, but we’d clearly have addressed the issue of bias earlier if the field were more heterogeneous. That brings me to the issue of the data corpus, and ensuring that it’s fit for purpose and that efforts are made to appropriately de-bias it.

Then there’s the need to train systems architects to be aware of moral and socio-technical issues — placing the same weight on these as we do on the primary disciplines. Then we need to give systems architects more time and agency to consider and fix any potential issues. Then we come to the matter of governance and co-design, where stakeholders should be involved in the governance and conceptual design of the system. And finally, we need to thoroughly stress-test systems before they get anywhere near human subjects.

Ideally, we should also be ensuring that there are mechanisms in place for opt-out, contestation and recourse — though much of this is covered by emerging regulations. It seems obvious, but I’d also add that you should be prepared to kill a project that’s set to fail on any measure of responsibility. There’s often something of the fallacy of sunk costs at play here, but if a project isn’t developing as you’d hoped, then raising your risk tolerance rather than killing it can lead to the premature death of a product.

The European Union’s recently adopted AI Act covers much of this, of course.

How can investors better push for responsible AI?

Taking a step back here, it’s now generally understood and accepted that the whole model that underpins the web is the monetization of user data. In the same way, much, if not all, of AI innovation is driven by capital gain. AI development in particular is a resource-hungry business, and the drive to be first to market has often been described as an arms race. So, responsibility as a value is always in competition with those other values.

That’s not to say that companies don’t care, and there has also been much effort made by various AI ethicists to reframe responsibility as a way of actually distinguishing yourself in the field. But this seems like an unlikely scenario unless you’re a government or another public service. It’s clear that being first to market is always going to be traded off against a full and comprehensive elimination of possible harms.

But coming back to the term responsibility. To my mind, being responsible is the least we can do. When we say to our children that we’re trusting them to be responsible, what we mean is, don’t do anything illegal, embarrassing or insane. It’s literally the basement when it comes to behaving like a functioning human in the world. Conversely, when applied to companies, it becomes some kind of unreachable standard. You have to ask yourself, how is this even a discussion that we find ourselves having?

Also, the incentives to prioritize responsibility are pretty basic and relate to wanting to be a trusted entity while also not wanting your users to come to newsworthy harm. I say this because plenty of people on the poverty line, or those from marginalized groups, fall below the threshold of interest, as they don’t have the economic or social capital to contest any negative outcomes, or to raise them to public attention.

So, to loop back to the question, it depends on who the investors are. If it’s one of the big seven tech companies, then they’re covered by the above. They must choose to prioritize different values at all times, and not only when it suits them. For the public or third sector, responsible AI is already aligned with their values, and so what they tend to need is sufficient experience and insight to help make the right and informed choices. Ultimately, pushing for responsible AI requires an alignment of values and incentives.
