To give AI-focused women academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who've contributed to the AI revolution. As the AI boom continues, we'll publish these pieces throughout the year, highlighting key work that often goes unrecognized. You can find more profiles here.
In the spotlight today: Rachel Coldicutt is the founder of Careful Industries, which researches the social impact technology has on society. Clients include Salesforce and the Royal Academy of Engineering. Before Careful Industries, Coldicutt was CEO of the think tank Doteveryone, which also conducted research into how technology impacts society.
Before Doteveryone, she spent decades working in digital strategy at companies such as the BBC and the Royal Opera House. She attended the University of Cambridge and received an OBE (Order of the British Empire) for her work in digital technology.
Briefly, how did you get your start in AI? What attracted you to the field?
I started working in tech in the mid-90s. My first proper tech job was working on Microsoft Encarta in 1997, and before that, I helped build content databases for reference books and dictionaries. Over the past three decades, I've worked with all kinds of new and emerging technologies, so it's difficult to pinpoint the precise moment I “got into AI,” given that I've been using automated processes and data to drive decisions, create experiences and produce artworks since the 2000s. Instead, I think the question is probably, “When did AI become the set of technologies everyone wanted to talk about?” and I think the answer is probably 2014, when DeepMind was acquired by Google – that was the moment in the U.K. when AI overtook everything else, even though a lot of the underlying technologies we now call “AI” were things that were already in fairly widespread, mainstream use.
I got into tech almost by accident in the 1990s, and what's kept me in the field through many changes is the fact that it's full of fascinating contradictions: I love how empowering it can be to learn new skills and make things, I'm fascinated by what we can discover from structured data, and I could happily spend the rest of my life observing and trying to understand how people make and shape the technologies we use.
What work are you most proud of in the AI field?
Much of my AI work has been in policy development and social impact assessments, working with government departments, charities and all kinds of companies to help them use AI and related technologies in intentional and trustworthy ways.
In the 2010s I ran Doteveryone – a responsible technology think tank – which helped change the frame for how UK policymakers think about new technologies. Our work made clear that AI is not a consequence-free set of technologies, but something that has diffuse real-world implications for people and societies. I'm particularly proud of the free Consequence Scanning tool we developed, which is now used by teams and companies all over the world, helping them anticipate the social, environmental and political impacts of the decisions they make when they ship new products and features.
More recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the UK government's industry-dominated AI Safety Summit, my team at Careful Trouble quickly convened and curated a gathering of 150 people from across civil society to collectively make the case that it's possible to make AI work for 8 billion people, not just 8 billionaires.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a relative old-timer in the tech world, I feel like some of the progress we made in gender representation in tech has been lost over the last five years. Research from the Turing Institute shows that less than 1% of the investment made in the AI sector has gone to startups led by women, while women still make up only a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix – particularly in terms of who gets a platform to share their work – reminds me of the early 2000s, which I find really sad and shocking.
I'm able to navigate the tech industry's sexist attitudes because I have the huge privilege of having founded and run my own organization: I spent much of my early career experiencing sexism and sexual harassment on a daily basis – dealing with that gets in the way of doing great work, and it's an unnecessary cost of entry for many women. Instead, I've prioritized creating a feminist business where, collectively, we strive for equity in everything we do, and my hope is that we can show other ways are possible.
What advice would you give to women seeking to enter the AI field?
Don't feel like you have to work in a “women's issues” field, don't be put off by the hype, and seek out kindred spirits and build friendships with other people so you have an active support network. What's kept me going all these years is my network of friends, former colleagues and allies – we offer each other mutual support, a never-ending supply of pep talks and sometimes a shoulder to cry on. Without that, it can feel very lonely; you're going to be the only woman in the room so often that it's vital to have somewhere safe to turn to decompress.
As soon as you get the chance, hire well. Don't replicate the structures you've seen, or entrench the expectations and norms of an elitist, sexist industry. Challenge the status quo every time you hire, and support your new hires. That way, you can start building a new normal wherever you are.
And seek out the work of some of the brilliant women pioneering great AI research and practice: start by reading the work of pioneers like Abeba Birhane, Timnit Gebru and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.
What are some of the most pressing issues facing AI as it continues to evolve?
AI is an intensifier. It may seem that some of its uses are inevitable, but as societies, we need to be able to make clear choices about what is worth intensifying. Right now, the main thing increased use of AI is doing is increasing the power and the bank balances of a comparatively small number of male CEOs, and it seems unlikely that it's shaping a world in which many people would want to live. I would love to see more people, particularly in industry and policymaking, engage with the questions of what more democratic and accountable AI looks like, and whether it's even possible.
The climate impacts of AI – the use of water, energy and critical minerals – and the health and social justice impacts on people and communities affected by the exploitation of natural resources need to be at the top of the list for responsible development. The fact that LLMs, in particular, are so energy intensive suggests that the current model isn't fit for purpose; in 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be retired.
We also need to be realistic about the surveillance impacts of a more datafied society and the fact that, in an increasingly volatile world, any general-purpose technologies are likely to be used for unimaginable horrors in warfare. Everyone who works in AI needs to be realistic about the historical, long-standing association between tech research and development and military development. We need to advocate for, support and demand innovation that starts with and is led by communities, so that we get outcomes that strengthen society rather than leading to greater destruction.
What are some issues AI users should be aware of?
As well as the environmental and economic extraction built into many of the current AI business and technology models, it's really important to think about the day-to-day impacts of increased use of AI, and what that means for everyday human interactions.
While some of the issues that have made headlines have been about more existential risks, it's worth keeping an eye on how the technologies you use help and hinder you every day: which automations you can turn off and work around, which ones deliver real benefit, and where you can vote with your feet as a consumer to make the case that you really want to keep talking with a real person and not a bot? We don't have to settle for poor-quality automation, and we can band together to ask for better outcomes!
What is the best way to responsibly build AI?
Responsible AI starts with good strategic choices – instead of just throwing an algorithm at a problem and hoping for the best, it's possible to be intentional about what to automate and how. I've been talking about the idea of “Just Enough Internet” for a few years now, and it strikes me as a really useful concept to guide how we think about building new technologies. Instead of always pushing the envelope, can we build AI in a way that maximizes the benefit to people and the planet and minimizes harm?
We've developed a robust process for this at Careful Trouble, where we work with boards and leadership teams, starting with mapping how AI can and can't support your vision and values; understanding where problems are too complex and variable for automation to improve them, and where it will create benefit; and, finally, developing an active risk-management framework. Responsible development is not a one-time application of a set of principles, but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean that quality assurance can't end with the shipping of a product; as AI developers, we need to build the capacity for iterative social sensing, and treat responsible development and deployment as a living process.
How can investors better push for responsible AI?
By making more patient investments, backing more diverse founders and teams, and not chasing exponential returns.