
Women in AI: Sarah Myers West says we should always ask, “Why build AI at all?”

To give academics and other women focused on AI their well-deserved, and long overdue, time in the spotlight, TechCrunch has published a series of interviews spotlighting remarkable women who have contributed to the AI revolution. We'll be publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Find more profiles here.

Sarah Myers West is managing director of the AI Now Institute, an American research institute that studies the social implications of AI and conducts policy research addressing the concentration of power in the tech industry. She previously served as senior advisor on AI at the U.S. Federal Trade Commission and is a visiting scholar at Northeastern University and a research associate at Cornell University's Citizens and Technology Lab.

How did you get into AI? What attracted you to this field?

I have spent the last 15 years studying the role of technology companies as powerful political actors as they have moved to the front lines of international governance. Early in my career, I watched firsthand how U.S. technology companies emerged and reshaped the political landscape around the world, in Southeast Asia, China, the Middle East, and elsewhere. I wrote a book examining how industry lobbying and regulation shaped the origins of the online surveillance business model, even as alternative technologies failed to take hold.

Throughout my career, I have often asked myself, “Why do we cling to this dystopian vision of the future?” The answer has less to do with the technology itself and more to do with politics and commercialization.

That has essentially been my project ever since, both in my research career and now in my policy work as co-director of AI Now. If AI is part of the infrastructure of our daily lives, we need to critically examine the institutions that produce it and make sure that, as a society, there is enough friction, whether regulatory or organizational, to ensure that at the end of the day it is the needs of the public that are met, not those of the tech companies.

What work in AI are you most proud of?

I'm really proud of the work we did at the FTC, the U.S. agency on the front lines of regulatory enforcement in artificial intelligence, among other things. I loved rolling up my sleeves and working on cases, and I was able to carry my methodological training as a researcher into investigative work, since the toolkits are essentially the same. It has been gratifying to use those tools to hold power directly to account, and to see that work have a direct impact on the public, whether that means addressing how AI is being used to devalue workers and drive up prices or tackling the anti-competitive behavior of big tech companies.

We also managed to bring on board a fantastic team of technologists reporting to the White House Office of Science and Technology Policy, and it has been exciting to see how the foundations we laid there have immediate relevance given the emergence of generative AI and the importance of cloud infrastructure.

What are the most pressing issues facing AI as it advances?

First of all, AI technologies are already widely used in highly sensitive contexts: in hospitals, in schools, at borders, and so on. Yet they are not adequately tested and validated. The technology is prone to errors, and we know from independent research that those errors are not evenly distributed; they disproportionately harm communities that have long borne the brunt of discrimination. We should be setting the bar much, much higher. But what worries me just as much is how powerful institutions use AI, whether it works or not, to justify their actions, from the use of weapons against civilians in Gaza to the disenfranchisement of workers. This is not a problem of technology but of discourse: how we orient our culture around technology and the idea that certain decisions or behaviors become more “objective” or somehow pass muster when AI is involved.

What is the best way to build AI responsibly?

We must always start with the question: why build AI at all? What makes the use of artificial intelligence necessary, and is AI technology fit for that purpose? Sometimes the answer is to build. In that case, developers should ensure compliance with the law, thoroughly document and validate their systems, and make as much as possible open and transparent so that independent researchers can do the same. But sometimes the answer is not to build at all: we don't need more “responsibly built” weapons or surveillance technology. The end use is central to this question, and that is where we need to start.
