One of the topics that came up at the GamesBeat Summit was the proliferation and potential of AI in gaming – specifically Will Wright's talk on the future of AI in game development. Another conversation on this topic was with Kim Kunes, Microsoft's Vice President of Gaming Trust & Safety, who joined me for a fireside chat about the use of AI in the realm of trust and safety. According to Kim, AI will never replace humans in protecting other humans, but it can be used to mitigate potential harm to human moderators.
Kunes said there are many nuances in player safety because there are many nuances in human interaction. Xbox's current safety features include safety standards and both proactive and reactive moderation. Xbox's recent transparency report shows that it has added AI-driven features such as Image Pattern Matching and Auto Labelling, both of which are designed to detect toxic content by identifying patterns based on content previously flagged as toxic.
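To make the idea concrete, here is a minimal sketch of matching new content against fingerprints of previously flagged material. This is purely illustrative and not Xbox's actual implementation: the fingerprint sets, function names, and the use of exact hashing plus fuzzy text matching are all assumptions standing in for far more sophisticated production systems.

```python
import hashlib
from difflib import SequenceMatcher

# Hypothetical fingerprints of content previously flagged as toxic.
KNOWN_BAD_HASHES = {hashlib.sha256(b"previously flagged image bytes").hexdigest()}
KNOWN_BAD_PHRASES = ["example toxic phrase"]

def matches_known_bad_image(image_bytes: bytes) -> bool:
    """Exact fingerprint match against previously flagged images."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

def auto_label(text: str, threshold: float = 0.8) -> bool:
    """Flag text that closely resembles a previously flagged phrase."""
    text = text.lower()
    return any(
        SequenceMatcher(None, text, phrase).ratio() >= threshold
        for phrase in KNOWN_BAD_PHRASES
    )

print(matches_known_bad_image(b"previously flagged image bytes"))  # True
print(auto_label("Example toxic phrase!"))                         # True
print(auto_label("welcome to the lobby"))                          # False
```

Real image-matching systems typically use perceptual hashes that tolerate resizing and re-encoding rather than exact digests, but the pattern of comparing new uploads against a library of known-bad fingerprints is the same.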
One of the questions was about using AI in collaboration with humans. Kunes said it can help protect and support human moderators who would otherwise be too busy doing routine work to tackle larger problems: “Our human moderators can focus on what matters most to them: improving their environments at scale over time. Before, they didn't have as much time to focus on the more interesting facets where they could really use their skills. They were too busy looking at the same types of toxic or non-toxic content over and over again. That also has an impact on their health. So there's a great symbiotic relationship between AI and humans. We can let AI handle some of these tasks that are too mundane, or shield humans from repeated exposure to some of this toxic content.”
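The division of labor Kunes describes can be sketched as a simple triage rule: let the model act automatically on clear-cut content at either extreme, and route only the ambiguous middle band to human experts. The thresholds, field names, and score source below are hypothetical, chosen just to illustrate the routing logic.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    toxicity_score: float  # assumed model output in [0, 1]

def triage(item: ContentItem,
           auto_remove_at: float = 0.95,
           auto_allow_below: float = 0.05) -> str:
    """Route clear-cut cases automatically; send nuanced ones to humans."""
    if item.toxicity_score >= auto_remove_at:
        return "auto_remove"   # moderators never see obvious, repetitive toxicity
    if item.toxicity_score < auto_allow_below:
        return "auto_allow"    # obviously benign content skips the queue
    return "human_review"      # ambiguous cases where human judgment matters

queue = [ContentItem("a", 0.99), ContentItem("b", 0.01), ContentItem("c", 0.50)]
print([triage(item) for item in queue])  # ['auto_remove', 'auto_allow', 'human_review']
```

The health benefit Kunes mentions falls out of the first branch: the most graphic, repetitive material is handled without human exposure, while moderator time is concentrated on the middle band where their expertise actually changes the outcome.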
Kunes also stated categorically that AI will never replace humans. “In safety, we will never get to a point where we eliminate humans from the equation. Safety is not something we can set up and ignore, only to come back a year later and see what happened. It absolutely doesn't work that way. That's why we need to have these people at the center who are experts in moderation and safety.”