To give AI-focused women academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focused on remarkable women who’ve contributed to the AI revolution. As the AI boom continues, we’ll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a senior research fellow at Churchill College, a Fellow of the Association for Computational Linguistics, and a Fellow of the European Laboratory for Learning and Intelligent Systems.

Korhonen was previously a fellow at the Alan Turing Institute, and she holds a doctorate in computer science and a master’s degree in computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that is, in her own words, “based on the understanding of human cognitive, social and creative intelligence.”
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I have always been fascinated by the beauty and complexity of human intelligence, particularly as it relates to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all of these interests.
What work in AI are you most proud of?
While the science of building intelligent machines is fascinating, and it is easy to get lost in the world of language modeling, the real reason we build AI is its practical potential. I am most proud of the work where my fundamental research in natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.

Much of my current research is driven by the mission of developing AI that can improve people’s lives for the better. AI has enormous positive potential for social and global well-being. A large part of my work as an educator is encouraging the next generation of AI scientists and leaders to focus on realizing that potential.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I am fortunate to work in an area of AI that has a sizable female population and established support networks. I have found these to be extremely helpful in navigating professional and personal challenges.

For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a good example. This has a huge impact on the priorities of both academia and industry, and far-reaching socioeconomic and environmental implications. Do we need larger models, and what are their global costs and benefits? I feel we would have asked these questions much earlier in the game if we had better gender balance in the field.
What advice would you give to women seeking to enter the AI field?

AI desperately needs more women at all levels, but especially at the leadership level. The current leadership culture is not necessarily attractive to women, but active engagement can change that culture – and ultimately the culture of AI. Women are notoriously not always good at supporting one another. I would really like to see a change in attitude here: we need to actively network and help each other if we want to achieve better gender balance in this field.
What are some of the most pressing issues facing AI as it evolves?

AI has developed incredibly fast: it evolved from an academic field into a global phenomenon in less than a decade. During this time, most of the effort has gone toward scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the heart of AI development.
What are some issues AI users should be aware of?

Current AI, even when it seems highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms within which we operate. Even the best of today’s technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.
What is the best way to responsibly build AI?

Developers of AI tend to think about ethics as an afterthought – after the technology has already been built. The best way to think about it is before any development begins. Questions such as “Do I have a diverse enough team to develop a fair system?,” “Is my data really free to use and representative of all the users of my technology?” and “Are my techniques robust?” should be asked at the outset.

Although we can address part of this problem through education, we can only enforce it through regulation. The recent development of national and global AI regulations is important and needs to continue to guarantee that future technologies will be safer and more trustworthy.
How can investors better push for responsible AI?

AI regulations are emerging, and companies will ultimately have to comply. We can think of responsible AI as sustainable AI that is truly worth investing in.