Women in AI: Miriam Vogel emphasizes the necessity for responsible AI

To give academics and other women focused on AI their well-deserved — and long overdue — time in the spotlight, TechCrunch has published a series of interviews focusing on remarkable women who have contributed to the AI revolution. We'll be publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Find more profiles here.

Miriam Vogel is CEO of EqualAI, a nonprofit organization founded to reduce unconscious bias in AI and promote responsible AI governance. She also chairs the recently formed National AI Advisory Committee, which was mandated by Congress to advise President Joe Biden and the White House on AI policy, and teaches technology law and policy at Georgetown University Law Center.

Vogel previously served as an associate deputy attorney general at the Department of Justice, advising the attorney general and deputy attorney general on a broad range of legal, policy, and operational issues. She is a board member of the Responsible AI Institute and a senior advisor to the Center for Democracy and Technology, and she has advised White House leadership on initiatives ranging from women's, economic, regulatory, and food safety policy to criminal justice matters.

How did you get into AI? What attracted you to this field?

I began my career in government, first as a Senate intern the summer before eleventh grade. I caught the politics bug and spent the next few summers working on Capitol Hill and then in the White House. My focus at the time was civil rights, which isn't the traditional path to artificial intelligence, but looking back, it makes perfect sense.

After law school, my career evolved from an entertainment attorney specializing in intellectual property to civil rights and social impact work in the executive branch. During my time in the White House, I had the privilege of leading the Equal Pay Task Force, and during my time as associate deputy attorney general under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.

Because of my experience as a lawyer in the technology space and my background in policy aimed at combating bias and systemic harm, I was asked to lead EqualAI. I was drawn to this organization because I realized that AI is the next frontier for civil liberties. Without vigilance, decades of progress could be undone in lines of code.

I have always been excited by the possibilities that innovation creates, and I remain convinced that AI can offer amazing new opportunities for more of the population to thrive – but only if we take care during this critical phase to ensure that more people can participate in its creation and development in a meaningful way.

How do you overcome the challenges of the male-dominated technology industry and, more broadly, the male-dominated AI industry?

I fundamentally believe that we all have to do our part to make sure our AI is as effective, efficient, and beneficial as possible. That means that as we develop it, we need to better amplify the voices of women (who, by the way, account for more than 85% of purchases in the US, so ensuring their interests and safety are taken into account is a smart business move), as well as the voices of other underrepresented populations of different ages, geographies, ethnicities, and nationalities who are not sufficiently involved.

As we move toward gender parity, we need to ensure that more voices and perspectives are considered in order to create AI that works for all consumers – not just AI that works for its developers.

What advice would you give to women who want to enter the AI field?

First, it's never too late to start. Ever. I encourage all grandparents to try OpenAI's ChatGPT, Microsoft's Copilot, or Google's Gemini. We all need to become AI literate to succeed in an economy that will be powered by AI. And that's exciting! We all have a role to play. Whether you're starting a career in AI or using AI to support your work, women should try out AI tools, see what those tools can and can't do, see whether they work for them, and generally become AI savvy.

Second, responsible AI development requires more than just ethical computer scientists. Many people think that AI requires a degree in computer science or another STEM field. In reality, however, AI needs the perspectives and expertise of women and men from all backgrounds. Get involved! Your voice and perspective are needed. Your engagement is crucial.

What are the most pressing issues facing AI as it advances?

First, we need greater AI literacy. At EqualAI, we are "AI net positive," meaning we believe AI will unlock unprecedented opportunities for our economy and improve our daily lives — but only if those opportunities are equally available and beneficial to a greater cross-section of our population. We need to equip our current workforce, the next generation, and our grandparents with the knowledge and skills to benefit from AI.

Second, we need to develop standardized measures and metrics to evaluate AI systems. Standardized assessments will be critical to building trust in our AI systems, enabling consumers, regulators, and downstream users to understand the limitations of the AI systems they work with and to decide whether a system deserves our trust. Understanding who a system is designed for and what use cases it is intended for helps us answer the key question: who could it fail for?
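To make the idea of standardized metrics concrete, here is a minimal sketch of two commonly used fairness measures: the demographic parity gap and the equal opportunity gap. The code and the audit data are hypothetical illustrations, not a reference to any specific evaluation standard:

```python
# Illustrative sketch (hypothetical data and function names): computing
# the demographic parity gap and equal opportunity gap across groups.

from collections import defaultdict

def group_rates(records):
    """Per-group selection rate and true-positive rate.

    Each record is (group, y_true, y_pred) with binary outcomes.
    """
    total = defaultdict(int)      # records per group
    selected = defaultdict(int)   # positive predictions per group
    positives = defaultdict(int)  # actual positives per group
    true_pos = defaultdict(int)   # correct positive predictions per group

    for group, y_true, y_pred in records:
        total[group] += 1
        selected[group] += y_pred
        positives[group] += y_true
        true_pos[group] += y_true * y_pred

    selection_rate = {g: selected[g] / total[g] for g in total}
    tpr = {g: true_pos[g] / positives[g] for g in positives if positives[g]}
    return selection_rate, tpr

def max_gap(rates):
    """Largest gap between any two groups; 0 would mean parity."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group, actual outcome, model prediction)
records = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]

selection_rate, tpr = group_rates(records)
print("Demographic parity gap:", max_gap(selection_rate))  # 0.5
print("Equal opportunity gap:", max_gap(tpr))              # 0.5
```

A standardized assessment would pair metrics like these with documented thresholds and intended use cases, so different systems could be compared on the same footing.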

What issues should AI users be aware of?

Artificial intelligence is just that: artificial. It is developed by humans to "imitate" human reasoning and assist humans in their endeavors. We must maintain an appropriate level of skepticism when using this technology and exercise due diligence to ensure that we place our trust in systems that deserve it. AI can complement humanity – but not replace it.

We must be clear that AI is made up of two main components: algorithms (created by humans) and data (reflecting human conversations and interactions). As a result, AI reflects and adapts to our human flaws. Bias and harm can creep in throughout the AI lifecycle, whether through the algorithms humans write or through the data, which represents a snapshot of human life. However, every human touchpoint is an opportunity to identify and mitigate the potential harm.

Because you can only imagine as far as your own experience allows, and AI programs are limited by the constructs they are built around, the more people with diverse perspectives and experiences you have on a team, the more likely they are to identify bias and other safety concerns embedded in their AI.

What is the best way to build AI responsibly?

It is our responsibility to develop AI that is worthy of our trust. We cannot expect someone else to do it for us. We must start by asking three fundamental questions: (1) Who is this AI system being developed for? (2) What are the intended use cases? (3) Who could it fail for? Even with these questions in mind, there will inevitably be pitfalls. To minimize these risks, designers, developers, and deployers must follow best practices.

At EqualAI, we encourage good "AI hygiene," which includes planning your framework and ensuring accountability, as well as standardizing testing, documentation, and routine audits. We also recently published a guide to designing and implementing a responsible AI governance framework, which sets out the values, principles, and framework for implementing AI responsibly in an organization. The document serves as a resource for organizations of any size, industry, and maturity that are adopting, developing, using, and implementing AI systems and committing, internally and publicly, to doing so responsibly.
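As a thought experiment, the three questions above and the audit habit could be captured in something as simple as a per-system record. This is a hypothetical sketch, not EqualAI's published framework; all field names are invented:

```python
# Hypothetical "AI hygiene" record for one system: the three core
# questions plus basic audit metadata. Field names are invented for
# illustration and do not come from EqualAI's guide.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    built_for: str                   # Who is this AI system developed for?
    intended_use_cases: list[str]    # What are the intended use cases?
    known_failure_groups: list[str]  # Who could it fail for?
    last_audit: date | None = None
    audit_findings: list[str] = field(default_factory=list)

    def audit_overdue(self, today: date, max_days: int = 180) -> bool:
        """Flag systems that were never audited or whose audit is stale."""
        if self.last_audit is None:
            return True
        return (today - self.last_audit).days > max_days

# Example entry for a hypothetical resume-screening model
record = AISystemRecord(
    name="resume-screener-v2",
    built_for="in-house recruiters at mid-size employers",
    intended_use_cases=["rank resumes for initial human review"],
    known_failure_groups=["applicants with non-traditional career paths"],
    last_audit=date(2024, 1, 15),
)
print(record.audit_overdue(today=date.today()))
```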

How can investors better promote responsible AI?

Investors play a pivotal role in ensuring our AI is safe, effective, and responsible. Investors can make sure that the companies seeking funding are aware of the potential harms and liabilities of their AI systems and are thinking about how to minimize them. Even just asking, "How have you implemented AI governance practices?" is a meaningful first step toward better outcomes.

Not only are these efforts good for the public; they are also in the best interest of investors, who will want to make sure that the companies they invest in and are associated with are not tied to bad headlines or burdened by litigation. Trust is one of the few non-negotiables for a company to succeed, and a commitment to responsible AI governance is the best way to build and maintain public trust. Robust, trustworthy AI makes good business sense.
