
Online spaces are rife with toxicity. Well-designed AI tools can help eliminate it

Imagine scrolling through social media or playing an online game, only to be interrupted by abusive and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?

This isn't science fiction. Commercial AI tools such as ToxMod and Bodyguard.ai are already being used to monitor interactions on social media and gaming platforms in real time. They can detect and respond to toxic behaviour.

The idea of an all-seeing AI monitoring our every move may sound Orwellian, but these tools could well be the key to making the internet safer.

However, for AI moderation to succeed, it must prioritise values such as privacy, transparency, explainability and fairness. So can we ensure AI makes our online spaces more trustworthy? Our two recent research projects on AI-driven moderation suggest this is feasible, though there is much more work to do.

Negativity thrives online

Online toxicity is a growing problem. Almost half of young Australians have experienced some kind of negative online interaction, and one in five has experienced cyberbullying.

Whether it's a single offensive comment or ongoing harassment, such harmful interactions are part of everyday life for many internet users.

The severity of online toxicity is one of the reasons the Australian government has proposed banning social media for children under 14.

However, this approach doesn't fully address a central underlying problem: the design of online platforms and moderation tools. We need to rethink how online platforms are designed to minimise harmful interactions for all users, not just children.

Unfortunately, many of the tech giants that hold power over our online activities have been slow to take on greater responsibility, resulting in significant gaps in care and safety measures.

Here, proactive AI moderation offers an opportunity to create safer and more respectful online spaces. But can AI really deliver on this promise? Here's what we found.

“Chaos” in online multiplayer games

In our GAIM (Games and Artificial Intelligence Moderation) project, we aim to understand the ethical opportunities and pitfalls of AI-driven moderation in online multiplayer games. We conducted 26 in-depth interviews with players and industry professionals to learn how they use and think about AI in these spaces.

Respondents saw AI as a necessary tool for making games safer and combating the "chaos" caused by toxicity. With millions of players, human moderators can't catch everything. But tireless, proactive AI can pick up on what people miss and help reduce the stress and burnout associated with moderating toxic messages.

However, many players also expressed confusion about how AI moderation is used. They didn't understand why they received account suspensions, bans and other punishments, and were often frustrated that their own reports of toxic behaviour seemed to disappear into the void, unanswered.

Online multiplayer games are fun, but without moderation the public chat feature can be a breeding ground for toxicity.
Daniel Krason/Shutterstock

Participants were particularly concerned about privacy where AI is used to moderate voice chat in games. One player exclaimed: "My God, is this even legal?" It is, and it's already happening in popular online games such as Call of Duty.

Our study showed there is enormous positive potential for AI moderation. However, gaming and social media companies still need to do a lot more work to make these systems transparent, effective and trustworthy.

At present, AI moderation is seen as acting much like a police officer within an opaque justice system. What if the AI instead took the form of a teacher, guardian or mentor, educating, empowering or supporting users?

Enter AI Ally

This is where our second project, AI Ally, comes into play: an initiative funded by the eSafety Commissioner. In response to high rates of technology-facilitated gender-based violence in Australia, we are developing an AI tool to help girls, women and gender-diverse people navigate safer online spaces.

We surveyed 230 people from these groups and found that 44% of respondents "often" or "always" experienced gender-based harassment on at least one social media platform. This most frequently occurred in response to everyday online activities such as posting photos of themselves, particularly in the form of sexist comments.

Interestingly, our respondents reported that documenting incidents of online abuse, for instance by collecting screenshots of offensive comments, was particularly useful when supporting other victims of harassment. However, only a few of those surveyed had put this into practice. Understandably, many also feared for their own safety if they intervened by defending someone or even speaking out in a public comment thread.

These are worrying findings. In response, we are designing our AI tool as an optional dashboard that detects and documents toxic comments. To guide the design process, we created a series of "personas" that capture some of our target users, inspired by our survey respondents.

Some of the user "personas" guiding the development of the AI Ally tool.
Ren Galwey/Research rendered

We let users make their own decisions about whether to filter, flag, block or report harassment, in an efficient way that suits their own preferences and personal safety.
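To make the idea concrete, here is a minimal sketch (not the actual AI Ally implementation) of what user-controlled moderation could look like. It assumes a hypothetical toxicity score between 0 and 1 supplied by an upstream classifier, and user-chosen thresholds and settings for each action; all names here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ModerationPrefs:
    """Hypothetical per-user settings: each person decides what the dashboard
    may do on their behalf and how sensitive it should be."""
    filter_threshold: float = 0.9   # hide comments scoring above this
    flag_threshold: float = 0.7     # surface these in the dashboard for review
    document: bool = True           # keep a record, e.g. for later reporting

def triage_comment(comment: str, toxicity: float, prefs: ModerationPrefs) -> dict:
    """Decide what to do with one comment, based on the user's own settings.

    `toxicity` is assumed to come from an upstream classifier (0.0 to 1.0);
    this sketch only shows how the final decision stays with the user.
    """
    actions = []
    if prefs.document and toxicity >= prefs.flag_threshold:
        actions.append("document")   # save a timestamped record of the comment
    if toxicity >= prefs.filter_threshold:
        actions.append("filter")     # hide the comment from the user's feed
    elif toxicity >= prefs.flag_threshold:
        actions.append("flag")       # show it in the dashboard; the user decides
    return {"comment": comment, "toxicity": toxicity, "actions": actions}

# Example: a user who wants aggressive filtering but keeps reporting manual.
prefs = ModerationPrefs(filter_threshold=0.8, flag_threshold=0.5)
print(triage_comment("example abusive comment", 0.85, prefs))
```

The key design choice this sketch illustrates is that the AI only detects and documents; the thresholds and the final action remain under the user's control.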

In this way, we hope to use AI to give young people easily accessible support in managing their online safety, while fostering autonomy and a sense of empowerment.

We all have a role to play

AI Ally shows that we can use AI to make online spaces safer without sacrificing values like transparency and user control. But there is much more to do.

Other similar initiatives include Harassment Manager, which was designed to identify and document abuse on Twitter (now X), and HeartMob, a community where victims of online harassment can seek support.

Until ethical AI practices become more widespread, users must stay informed. Before joining a platform, check whether it is transparent about its policies and gives users control over moderation settings.

The internet connects us to resources, work, play and community. Everyone has the right to enjoy these benefits without harassment and abuse. It is up to all of us to be proactive and advocate for smarter, more ethical technology that protects our values and our digital spaces.

