The trust and safety team at social gaming platform Rec Room has seen tremendous success in reducing toxicity over the past 18 months. In this VB Spotlight, we'll dive into the metrics, tools, and methods they've used to make players happier, increase engagement, and change the game.
Watch now for free, on demand!
Improving player experience and safety should be a top priority for game developers. In this latest VB Spotlight, Mark Frumkin, Director of Account Management at Modulate, and Yasmin Hussain, Head of Trust and Safety at Rec Room, discussed protecting players from toxicity from the perspective of Rec Room's trust and safety team and its work with ToxMod, a proactive voice chat moderation solution powered by machine learning.
Launched in 2016, Rec Room is a social gaming platform with over 100 million lifetime users. Players interact in real time via text and voice chat across PC, mobile, VR headsets, and consoles, using avatars to bring the experience to life.
“Rec Room was created to make room for hundreds of thousands of worlds and different spaces – not only what we create, but what our players can create,” said Hussain. “Trust and safety is a critical part of that.”
But real-world interactions and real-time voice chat inevitably result in some people behaving badly. How do you change the behavior of players who don't follow community standards?
Over the last 12 months of experimenting and iterating on this approach, Rec Room was able to reduce instances of toxic voice chat by about 70%, Hussain said, but that didn't happen overnight.
Combating toxicity step-by-step
The first step was to expand continuous voice moderation to all public spaces, which helped keep behavioral expectations consistent across the platform. The next step was to determine the most effective response when players step out of line. The team ran a wide range of tests, from varying mute and ban durations to two types of warnings – one very stern, and one that offered positive encouragement about the kind of behavior they wanted to see.
They found that the one-hour mute, applied as soon as a violation was identified, had a significant impact on reducing bad behavior. It was an immediate and very tangible reminder to players that toxicity will not be tolerated. This real-time feedback not only changed players' behavior in the moment, but also kept them in the game, Hussain said.
While this wasn't a complete cure for in-game toxicity, it did significantly curb the problem. When they dug deeper, they realized that a very small percentage of the player base was responsible for more than half of the violations. How could they target this specific group directly?
“There was a disproportionate association between these very small cohorts of players and a really large variety of violations, which then gave us the impetus to do one other experiment,” she said. “If we modify the way in which we intervene – muting you the primary time or supplying you with a warning after which muting you many times but you don't learn that lesson – possibly we will begin to stack our interventions in order that they reinforce one another. We're seeing some great results doing that.”
Creating and conducting trust and safety experiments
There are certain metrics to keep an eye on in order to improve moderation strategies, Frumkin said. These include the profile and prevalence of toxicity: What are people saying? How often are they saying it? Who are the rule breakers, how many are there, and how often do they violate the code of conduct?
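As a rough illustration of those metrics, a sketch like the one below could tally the profile and prevalence of violations from a moderation log. The field names and numbers are hypothetical, not drawn from Rec Room's or Modulate's data.

```python
from collections import Counter

# Hypothetical moderation log: one entry per confirmed code-of-conduct violation.
violations = [
    {"player_id": "p1", "category": "harassment"},
    {"player_id": "p1", "category": "hate_speech"},
    {"player_id": "p2", "category": "harassment"},
    {"player_id": "p1", "category": "harassment"},
]
active_players = 1_000  # players active in the same time window

per_player = Counter(v["player_id"] for v in violations)      # who breaks the rules, and how often
per_category = Counter(v["category"] for v in violations)     # what people are saying (the profile)

prevalence = len(per_player) / active_players                 # share of players with any violation
repeat_share = sum(c for c in per_player.values() if c > 1) / len(violations)

print(f"Violation profile: {dict(per_category)}")
print(f"Prevalence: {prevalence:.1%} of active players")
print(f"Share of violations from repeat offenders: {repeat_share:.0%}")
```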
From the outset, you also need to be clear about what the hypothesis is, what behavior you want to change, what result you expect, and what success looks like.
“The hypothesis is key,” Hussain said. “When we initially tested interventions and the right way to reduce violations, it was very different from when we tried to change the behavior of a subset of our player population.”
Iteration is also crucial – to learn, fine-tune, and optimize. But it's equally important to make sure your experiments run long enough to gather the data you need and to influence player behavior.
“We want them to adhere to community standards and be positive members of this community. That means they have to unlearn certain things they may have been doing for a while,” she said. “We need those three, four, six weeks for that to take effect while people experience this new normal they're in, learn from it, and change their behavior.”
However, there's always more to do. Sometimes you make progress on a particular issue, but then the problem evolves, which means you have to continuously improve your moderation strategies and evolve in parallel. For example, moderating speech in real time is a huge challenge, but the Rec Room team is extremely confident that their interventions are now accurate and that their players feel more confident.
“We've had tremendous success in reducing violations and improving the feel of our platform – around 90 percent of our players say they feel safe, welcome, and have a good time in Rec Room, which is incredible,” she said. “We're realizing that it's not enough for justice to be done or for us to encourage our players to change their behavior. Other players need to see it happening so they have reassurance that we're upholding our community standards.”
The future of AI-powered voice moderation
To ultimately make Rec Room an even safer and more fun place, ToxMod is continually analyzing data on policy violations, language, and player interactions, Frumkin said. But moderation must also evolve. You want to discourage behavior that violates standards and codes of conduct – but you also want to encourage behavior that improves the mood or enhances the experience for other Rec Room players.
“We're also starting to develop our ability to recognize pro-social behavior,” he added. “If players are good partners, if they're supporting other members in the same room – if they're good at defusing situations that tend to get heated – we want to be able to point out not only where there are problems, but where there are role models. There's a lot you can do to extend and amplify the impact of those positive influences in your community.”
Voice moderation is extremely complex, especially with real-time audio, but AI-powered tools are having a major impact on moderation strategies and on what teams can actually achieve.
“It means you can raise your ambitions. Things you thought were impossible yesterday suddenly become possible once you start doing them,” Hussain said. “We see that in how available, how efficient, and how effective machine learning is becoming at ever-increasing scale. There's a huge opportunity for us to leverage that and keep our community as safe as possible.”
To learn more about the challenges of toxicity in games, strategies for effectively changing player behavior, and how machine learning is changing the game, don't miss this VB Spotlight, free on demand.
Agenda
- How voice moderation works to detect hate and harassment
- Rec Room's successes and lessons learned in building a voice moderation strategy
- Key insights from voice moderation data that every game developer should collect
- How reducing toxicity can increase player retention and engagement
Speakers
- Yasmin Hussain, Head of Trust & Safety, Rec Room
- Mark Frumkin, Director of Account Management, Modulate
- Rachel Kaser, Technology Writer, VentureBeat (Host)