Call of Duty rolls out AI-powered voice chat monitoring to crack down on hate speech
One of the world’s most popular video games has taken a drastic step against hate speech and other “toxic” behaviour.
Call of Duty players will have their in-game voice chat monitored by artificial intelligence in real time as part of a major crackdown on hate speech and other harmful behaviour.
Activision announced in a blog post on Wednesday that it had partnered with tech firm Modulate to roll out its AI-powered voice chat moderation tool, ToxMod, to “identify in real-time and enforce against toxic speech — including hate speech, discriminatory language, harassment and more”.
An initial beta rollout of the new system began on Wednesday to North American players of the popular online shooter series, which includes Call of Duty: Modern Warfare II and Call of Duty: Warzone.
The AI tool will be released worldwide, excluding Asia, on November 10, coinciding with the launch of Call of Duty: Modern Warfare III.
Support will begin in English with additional languages to follow at a later date.
“This new development will bolster the ongoing moderation systems led by the Call of Duty anti-toxicity team, which includes text-based filtering across 14 languages for in-game text (chat and usernames) as well as a robust in-game player reporting system,” Activision said.
“Since the launch of Modern Warfare II, Call of Duty’s existing anti-toxicity moderation has restricted voice and/or text chat to over one million accounts detected to have violated the Call of Duty Code of Conduct. Consistently updated text and username filtering technology has established better real-time rejection of harmful language.”
Activision said data showed 20 per cent of players “did not reoffend after receiving a first warning”.
“Those who did reoffend were met with account penalties, which include but are not limited to feature restrictions (such as voice and text chat bans) and temporary account restrictions,” it said.
“This positive impact aligns with our strategy to work with players in providing clear feedback for their behaviour. Teams across Call of Duty are dedicated to combating toxicity within our games. Utilising new technology, developing critical partnerships, and evolving our methodologies is key in this ongoing commitment. As always, we look forward to working with our community to continue to make Call of Duty fair and fun for all.”
In a question-and-answer section, Activision said in-game voice chat was monitored and recorded “for the express purpose of moderation”.
“Call of Duty’s voice chat moderation system is focused on detecting harm within voice chat versus specific keywords,” it said.
“Players that do not wish to have their voice moderated can disable in-game voice chat in the settings menu.”
While the AI tool detects and flags toxic language in real time, “categorised by its type of behaviour and a rated level of severity based on an evolving model”, Activision will still be responsible for enforcement decisions.
“Detected violations of the Code of Conduct may require additional reviews of associated recordings to identify context before enforcement is determined,” it said.
“Therefore, actions taken will not be instantaneous. As the system grows, our processes and response times will evolve.”
The gaming giant clarified that “trash talk” was not banned.
“The system helps enforce the existing Code of Conduct, which allows for ‘trash-talk’ and friendly banter,” it said. “Hate speech, discrimination, sexism, and other types of harmful language, as outlined in the Code of Conduct, will not be tolerated.”
Activision chief technology officer Michael Vance said in a statement that there was “no place for disruptive behaviour or harassment in games ever”.

“Tackling disruptive voice chat particularly has long been an extraordinary challenge across gaming,” he said.
“With this collaboration, we are now bringing Modulate’s state of the art machine learning technology that can scale in real-time for a global level of enforcement. This is a critical step forward to creating and maintaining a fun, fair and welcoming experience for all players.”
Mike Pappas, chief executive of Modulate, said the company was “enormously excited to team with Activision to push forward the cutting edge of trust and safety”.
“This is a big step forward in supporting a player community the size and scale of Call of Duty, and further reinforces Activision’s ongoing commitment to lead in this effort.”
According to Modulate, ToxMod does not just detect flagged words but “analyses the tone, context, and perceived intention of those filtered conversations using its advanced machine learning processes”.
“ToxMod’s powerful toxicity analysis assesses the tone, timbre, emotion, and context of a conversation to determine the type and severity of toxic behaviour,” it says on its website.
“ToxMod is the only voice moderation tool built on advanced machine learning models that go beyond keyword matching to provide true understanding of each instance of toxicity. ToxMod’s machine learning technology can understand emotion and nuance cues to help differentiate between friendly banter and genuine bad behaviour.”
ToxMod’s ethics policy states that it may also take into account the race, gender identity, sexuality or other demographics of the speaker to determine whether certain behaviour is acceptable.
“We do occasionally consider an individual’s demographics when determining the severity of a harm,” it says.
“We … recognise that certain behaviours may be fundamentally different depending on the demographics of the participants.”
For example, “while the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities”.
“While Modulate does not detect or identify the ethnicity of individual speakers, it will listen to conversational cues to determine how others in the conversation are reacting to the use of such terms,” it says.
“If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation.”