
Women in AI: Urvashi Aneja explores the social impact of AI in India

To give AI-focused women academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. As the AI boom continues, we’ll publish several articles throughout the year highlighting important work that often goes unrecognized. You can find more profiles here.

Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research initiative that seeks to examine the interaction between technology and society in the Global South. She is also an Associate Fellow in the Asia Pacific Program at Chatham House, an independent policy institute based in London.

Aneja’s current research focuses on the social impact of algorithmic decision-making systems in India, where she lives, and on platform governance. She recently authored a study on the current uses of AI in India, examining use cases across sectors including policing and agriculture.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there is a fine line between innovation and experimentation, especially when it comes to vulnerable populations. The lessons from this experience left me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. At the same time, India had launched its Digital India mission and its National Strategy for Artificial Intelligence. I was troubled by the dominant narratives that framed AI as a panacea for India’s complex socio-economic problems, and by the complete lack of critical discourse on the subject.

What work are you most proud of (in the AI space)?

I’m proud that we have succeeded in drawing attention to the political economy of AI production, as well as its broader implications for social justice, labor relations, and environmental sustainability. Too often, narratives about AI focus on the benefits of a specific application and, at best, on the benefits and risks of that application. But this misses the forest for the trees – a product-centric view obscures the broader structural impacts, such as AI’s contribution to epistemic injustice, the deskilling of the workforce, and the perpetuation of unaccountable power in the majority world. I’m also proud that we’ve managed to translate these concerns into concrete policies and regulations – whether designing procurement policies for AI use in the public sector or providing evidence in legal proceedings against Big Tech companies in the Global South.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

By letting my work speak for itself. And by consistently asking: why?

What advice would you give to women seeking to enter the AI field?

Develop your knowledge and expertise. Make sure your technical understanding of the issues is solid, but don’t focus narrowly on AI. Instead, study broadly so that you can draw connections across fields and disciplines. Not enough people understand AI as a sociotechnical system that is a product of history and culture.

What are some of the most pressing issues facing AI as it continues to evolve?

I think the most pressing issue is the concentration of power within a handful of technology companies. While this problem is not new, it is being exacerbated by new developments in large language models and generative AI. Many of these companies are now stoking fears about the existential risks of AI. This not only distracts from the existing harms, but also positions these companies as necessary for addressing AI-related harms. In many ways, we are losing some of the momentum of the “techlash” that arose following the Cambridge Analytica episode. In places like India, I also worry about AI being positioned as necessary for socio-economic development, offering an opportunity to overcome persistent challenges. This not only exaggerates AI’s potential, but also ignores that it is not possible to skip over the institutional development needed to build safeguards. Another issue we are not taking seriously enough is AI’s environmental impact – the current trajectory is unlikely to be sustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to see the benefits of AI innovation.

What are some issues AI users should be aware of?

Users need to be made aware that AI is not magic, nor anything close to human intelligence. It is a form of computational statistics that has many useful applications, but is ultimately only a probabilistic estimate based on historical or prior patterns. I’m sure there are other issues users need to be aware of as well, but I want to caution that we should be wary of attempts to shift responsibility downstream onto users. I see this most recently with the use of generative AI tools in low-resource contexts in the majority world – rather than being cautious about these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or frontline health workers, need to upskill.

What is the best way to responsibly build AI?

It needs to start with assessing the need for AI. Is there a problem that AI can uniquely solve, or are other means possible? And if we are going to build AI, is a complex black-box model necessary, or could a simpler, logic-based model do just as well? We also need to bring domain knowledge back into building AI. In the obsession with big data, we have sacrificed theory – we need to build a theory of change based on domain knowledge, and that should be the basis of the models we build, not big data alone. This is, of course, in addition to key issues such as participation, inclusive teams, labor rights, and so on.

How can investors better push for responsible AI?

Investors need to consider the entire lifecycle of AI production – not just the outputs or outcomes of AI applications. This would require looking at a range of issues, such as whether labor is fairly valued, the environmental impacts, the company’s business model (i.e., is it based on commercial surveillance?) and the accountability measures within the company. Investors also need to demand better and more rigorous evidence of the claimed benefits of AI.
