The age of AI-powered online violence is no longer approaching. It has arrived. And it is changing the threat landscape for women working in the public sector around the world.
Our newly published report on behalf of UN Women provides the first urgent evidence that generative AI is already being used to silence and harass women whose voices are crucial to preserving democracy.
These include journalists exposing corruption, activists mobilizing voters, and human rights defenders on the front lines of halting democratic backsliding.
Based on a global survey of human rights defenders, activists, journalists and other public communicators from 119 countries, our research shows the extent to which generative AI is being weaponized to produce abusive content – in a wide range of forms – at scale.
We surveyed 641 women in five languages (Arabic, English, French, Portuguese and Spanish). The surveys were disseminated through the trusted networks of UN Women, UNESCO, the International Center for Journalists and a panel of twenty-two expert advisors representing intergovernmental organizations, the legal community, civil society organizations, industry and academia.
According to our analysis, of the 70% of respondents who reported experiencing online violence in the course of their work, nearly one in four (24%) identified abuse created or amplified by AI tools. In the report, we define online violence as any act using digital tools that results in, or may lead to, physical, sexual, psychological, social, political or economic harm, or other violations of rights and freedoms.
However, the incidence is not evenly distributed across professions. Women who identify as writers or other public communicators, such as social media influencers, reported the highest exposure to AI-powered online violence, at 30.3%. Women human rights defenders followed closely behind at 28.2%. Journalists and media workers reported a still alarming exposure rate of 19.4%.
Since the public launch of free, widely accessible generative AI tools like ChatGPT in late 2022, the barriers to entry and the costs of producing sexually explicit deepfake videos, gender-based disinformation and other forms of gender-based online violence have fallen significantly, while the speed of distribution has increased.
The result is a digital landscape in which anyone with a smartphone and access to a generative AI chatbot can quickly generate harmful, misogynistic content. Meanwhile, social media algorithms are tuned to increase the reach of hateful and offensive material, which then proliferates. And it can deliver significant personal, political and sometimes financial gains for perpetrators and their enablers, including technology companies.
Meanwhile, recent research highlights that AI is both a driver of disinformation and a potential solution, powering synthetic content detection and countermeasure systems. But there is limited evidence of how effective these detection tools are.
Many jurisdictions also still lack clear legal frameworks addressing deepfake abuse and other harms caused by AI-generated media, such as financial fraud and digital identity theft. This is especially the case when the attack is gender-based in nature rather than purely political or financial. This gap is attributable to the inherently nuanced and often insidious character of misogynistic hate speech, as well as lawmakers' apparent indifference to the suffering of women.
Our results highlight an urgent two-fold challenge. There is a pressing need for stronger tools to detect, monitor, report and mitigate AI-powered attacks. And legal and regulatory mechanisms must be put in place that require platforms and AI developers to prevent their technologies from being used to undermine women's rights.
When online abuse leads to attacks in the "real world"
We cannot view these AI-related findings as isolated statistics. They exist amid rising online violence against women in public life. They also operate within a broader and deeply troubling pattern: the disappearing boundary between online violence and offline harm.
Four in ten (40.9%) of the women we surveyed said they had experienced offline attacks, abuse or harassment that they connected to online violence. This includes physical violence, stalking, assault and verbal harassment. The data confirms what survivors have been telling us for years: digital violence is not "virtual" at all. In fact, it is often just the first act in a vicious cycle of escalating harm.
The trend is especially clear among female journalists. In a comparable 2020 survey, 20% of respondents said they had experienced offline attacks connected to online violence. Five years later, that number has more than doubled to 42%. This dangerous development should be a wake-up call for news organizations, governments and big technology companies alike.
When online violence escalates into physical intimidation, the chilling effect extends far beyond the individual victim. It becomes a structural threat to freedom of expression and democracy.
In the context of rising authoritarianism, where online violence and networked misogyny are typical features of the playbook for rolling back democracy, the role of politicians in perpetrating online violence cannot be ignored. In the 2020 UNESCO survey of female journalists, 37% of respondents said politicians and public officials were the most common perpetrators.
The situation has only worsened since 2020, as a continuum of violence against women in public life has developed. Offline abuse, such as politicians and officials attacking female journalists during press conferences, can trigger an escalation of online violence, which in turn can aggravate offline harm.
This cycle has been documented around the world, in the stories of well-known journalists such as Maria Ressa in the Philippines, Rana Ayyub in India and the murdered Maltese investigative journalist Daphne Caruana Galizia. These women bravely spoke truth to power and found themselves targeted by their respective governments, both online and offline.
The evidence of abuse against women in public life uncovered by our research signals a need for more creative technical interventions that apply the principles of "human rights by design": safeguards recommended by numerous international organizations that embed human rights protections at every stage of AI development. It also signals the need for stronger and more proactive legal and policy responses, greater platform accountability, political responsibility, and better safety and support systems for women in public life.

