
Karine Perset helps governments understand AI

To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Karine Perset works for the Organization for Economic Co-operation and Development (OECD), where she runs its AI Unit and oversees the OECD.AI Policy Observatory and the OECD.AI Networks of Experts within the Division for Digital Economy Policy.

Perset specializes in AI and public policy. She previously worked as an advisor to the Internet Corporation for Assigned Names and Numbers (ICANN)’s Governmental Advisory Committee and as Counsellor of the OECD’s Science, Technology, and Industry Director.

What work are you most proud of (in the AI field)?

I’m extremely proud of the work we do at OECD.AI. Over the past few years, the demand for policy resources and guidance on trustworthy AI has really increased from both OECD member countries and AI ecosystem actors.

When we began this work around 2016, there were only a handful of countries with national AI initiatives. Fast forward to today, and the OECD.AI Policy Observatory – a one-stop shop for AI data and trends – documents over 1,000 AI initiatives across nearly 70 jurisdictions.

Globally, all governments are facing the same questions on AI governance. We are all keenly aware of the need to strike a balance between enabling innovation and the opportunities AI has to offer and mitigating the risks associated with misuse of the technology. I think the rise of generative AI in late 2022 has really put a spotlight on this.

The ten OECD AI Principles from 2019 were quite prescient in the sense that they foresaw many key issues still salient today – five years later and with AI technology advancing considerably. The Principles serve as a guiding compass for governments elaborating their AI policies, pointing towards trustworthy AI that benefits people and the planet. They place people at the center of AI development and deployment, which I think is something we can’t afford to lose sight of, no matter how advanced, impressive, and exciting AI capabilities become.

To track progress on implementing the OECD AI Principles, we developed the OECD.AI Policy Observatory, a central hub for real-time or quasi-real-time AI data, analysis, and reports, which have become authoritative resources for many policymakers globally. But the OECD can’t do it alone, and multi-stakeholder collaboration has always been our approach. We created the OECD.AI Network of Experts – a network of more than 350 of the leading AI experts globally – to help tap their collective intelligence to inform policy analysis. The network is organized into six thematic expert groups, examining issues including AI risk and accountability, AI incidents, and the future of AI.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

When we look at the data, unfortunately, we still see a gender gap regarding who has the skills and resources to effectively leverage AI. In many countries, women still have less access to training, skills, and infrastructure for digital technologies. They are still underrepresented in AI R&D, while stereotypes and biases embedded in algorithms can prompt gender discrimination and limit women’s economic potential. In OECD countries, more than twice as many young men as women aged 16-24 can program, an essential skill for AI development. We clearly have more work to do to attract women to the AI field.

However, while the private sector AI technology world is highly male-dominated, I’d say that the AI policy world is a bit more balanced. For instance, my team at the OECD is close to gender parity. Many of the AI experts we work with are truly inspiring women, such as Elham Tabassi from the U.S. National Institute of Standards and Technology (NIST); Francesca Rossi at IBM; Rebecca Finlay and Stephanie Ifayemi from the Partnership on AI; Lucilla Sioli, Irina Orssich, Tatjana Evas and Emilia Gomez from the European Commission; Clara Neppel from the IEEE; Nozha Boujemaa from Decathlon; Dunja Mladenic at the Slovenian JSI AI lab; and of course my own amazing boss and mentor Audrey Plonk, just to name a few, and there are so many more.

We need women and diverse groups represented in the technology sector, academia, and civil society to bring rich and diverse perspectives. Unfortunately, in 2022, only one in four researchers publishing on AI worldwide was a woman. While the number of publications co-authored by at least one woman is increasing, women only contribute to about half of all AI publications compared to men, and the gap widens as the number of publications increases. All this to say, we need more representation from women and diverse groups in these spaces.

So to answer your question, how do I navigate the challenges of the male-dominated technology industry? I show up. I’m very grateful that my position allows me to meet with experts, government officials, and corporate representatives and speak in international forums on AI governance. It allows me to engage in discussions, share my perspective, and challenge assumptions. And, of course, I let the data speak for itself.

What advice would you give to women seeking to enter the AI field?

Speaking from my experience in the AI policy world, I would say not to be afraid to speak up and share your perspective. We need more diverse voices around the table when we develop AI policies and AI models. We all have our unique stories and something different to bring to the conversation.

To develop safer, more inclusive, and trustworthy AI, we must look at AI models and data input from different angles, asking ourselves: what are we missing? If you don’t speak up, it might lead to your team missing out on a really important insight. Chances are that, because you have a unique perspective, you’ll see things that others don’t, and as a global community, we can be greater than the sum of our parts if everyone contributes.

I would also emphasize that there are many roles and paths in the AI field. A degree in computer science is not a prerequisite to work in AI. We already see jurists, economists, social scientists, and many more profiles bringing their perspectives to the table. As we move forward, true innovation will increasingly come from blending domain knowledge with AI literacy and technical competencies to come up with effective AI applications in specific domains. We already see universities offering AI courses beyond computer science departments. I truly believe interdisciplinarity will be key for AI careers. So, I would encourage women from all fields to consider what they can do with AI. And not to shy away for fear of being less competent than men.

What are some of the most pressing issues facing AI as it evolves?

I think the most pressing issues facing AI can be divided into three buckets.

First, I think we need to bridge the gap between policymakers and technologists. In late 2022, generative AI advances took many by surprise, despite some researchers anticipating such developments. Understandably, each discipline views AI issues from a unique angle. But AI issues are complex; collaboration and interdisciplinarity between policymakers, AI developers, and researchers are key to understanding AI issues in a holistic manner, helping keep pace with AI progress and close knowledge gaps.

Second, the international interoperability of AI rules is mission-critical to AI governance. Many large economies have started regulating AI. For instance, the European Union just agreed on its AI Act, the U.S. has adopted an executive order for the safe, secure, and trustworthy development and use of AI, and Brazil and Canada have introduced bills to regulate the development and deployment of AI. What’s difficult here is striking the right balance between protecting citizens and enabling business innovation. AI knows no borders, and many of these economies have different approaches to regulation and protection; it is crucial to enable interoperability between jurisdictions.

Third, there’s the question of tracking AI incidents, which have increased rapidly with the rise of generative AI. Failure to address the risks associated with AI incidents could exacerbate the lack of trust in our societies. Importantly, data about past incidents can help us prevent similar incidents from happening in the future. Last year, we launched the AI Incidents Monitor. This tool uses global news sources to track AI incidents around the world and better understand the harms resulting from them. It provides real-time evidence to support policy and regulatory decisions about AI, especially for real risks such as bias, discrimination, and social disruption, and the types of AI systems that cause them.

What are some issues AI users should be aware of?

Something that policymakers globally are grappling with is how to protect citizens from AI-generated mis- and disinformation – such as synthetic media like deepfakes. Of course, mis- and disinformation has existed for a while, but what’s different here is the scale, quality, and low cost of AI-generated synthetic outputs.

Governments are well aware of the issue and are looking at ways to help citizens identify AI-generated content and assess the veracity of the information they are consuming, but this is still an emerging field, and there is still no consensus on how to tackle such issues.

Our AI Incidents Monitor can help track global trends and keep people informed about major cases of deepfakes and disinformation. But in the end, with the increasing volume of AI-generated content, people need to develop information literacy, sharpening their skills, reflexes, and ability to check reputable sources to assess information accuracy.

What is the best way to responsibly build AI?

Many of us in the AI policy community are diligently working to find ways to build AI responsibly, acknowledging that determining the best approach often hinges on the specific context in which an AI system is deployed. Nonetheless, building AI responsibly necessitates careful consideration of ethical, social, and safety implications throughout the AI system lifecycle.

One of the OECD AI Principles refers to the accountability that AI actors bear for the proper functioning of the AI systems they develop and use. This means that AI actors must take measures to ensure that the AI systems they build are trustworthy. By this, I mean that they should benefit people and the planet, respect human rights, be fair, transparent, and explainable, and meet appropriate levels of robustness, security, and safety. To achieve this, actors must govern and manage risks throughout their AI systems’ lifecycle – from planning, design, and data collection and processing to model building, validation and deployment, operation, and monitoring.

Last year, we published a report on “Advancing Accountability in AI,” which provides an overview of integrating risk management frameworks with the AI system lifecycle to develop trustworthy AI. The report explores processes and technical attributes that can facilitate the implementation of values-based principles for trustworthy AI and identifies tools and mechanisms to define, assess, treat, and govern risks at each stage of the AI system lifecycle.

How can investors better push for responsible AI?

By advocating for responsible business conduct in the companies they invest in. Investors play a crucial role in shaping the development and deployment of AI technologies, and they shouldn’t underestimate their power to influence internal practices through the financial support they provide.

For example, the private sector can support the development and adoption of responsible guidelines and standards for AI through initiatives such as the OECD’s Responsible Business Conduct (RBC) Guidelines, which we are currently tailoring specifically for AI. These guidelines will notably facilitate international compliance for AI companies selling their products and services across borders and enable transparency throughout the AI value chain – from suppliers to deployers to end-users. The RBC guidelines for AI will also provide a non-judicial enforcement mechanism – in the form of national contact points tasked by national governments to mediate disputes – allowing users and affected stakeholders to seek remedies for AI-related harms.

By guiding companies to implement standards and guidelines for AI – like the RBC Guidelines – private sector partners can play an important role in promoting trustworthy AI development and shaping the future of AI technologies in a way that benefits society as a whole.
