Intel’s AI censor ‘Bleep’ to battle video game hate speech
Posted April 11, 2021 4:23 pm.
Last Updated April 11, 2021 5:43 pm.
CALGARY (CityNews) – Anyone who has grown up with the internet or video games is no stranger to the problems only partially hidden online.
While the political debate over regulation continues, some technology companies are taking up the challenge of tackling hate speech head-on.
Intel plans to let gamers filter hate speech out of their video game experience with an AI censoring tool called ‘Bleep.’
The tech giant is working with Spirit AI to create a more inclusive space in response to toxicity online.
Intel's announcement of Bleep, an AI censor for hate speech online, is sparking conversations about the future of regulation in online spaces. Thanks to @VictraGG & @CanadianCMF for your time on this one, airing tonight on @citynewscalgary at 6 & 11 p.m. #yyc pic.twitter.com/IK9C1CGRJJ
— Jo Horwood (@JournoJo_) April 11, 2021
Victor Ly, the co-founder of the Alberta Esports Association (AESA), says that hate through online platforms is a problem that goes beyond video games to a multitude of online spaces.
“You’re hidden behind the internet, you’re not sitting beside someone where you’re susceptible to social pressures to behave a particular way,” said Ly. “You can really say and do what you want without much repercussion.
“This is nothing new to gaming.”
Ly calls the coming technology a good first step, though mixed reviews followed the company’s March announcement.
The censor lets users adjust a sliding scale that filters categories of problematic speech, including racism and xenophobia, ableism, misogyny, and name-calling.
The filter even includes an on-off switch for the N-word.
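Intel has not published technical details of how Bleep’s sliders map to filtering decisions. As a rough, hypothetical sketch of the per-category sliding-scale idea the article describes (the category names, severity scores, and level labels below are illustrative assumptions, not Intel’s actual design):

```python
from enum import IntEnum

class FilterLevel(IntEnum):
    """Hypothetical slider positions for one category of speech."""
    NONE = 0   # filter nothing in this category
    SOME = 1   # filter only the most severe phrases
    MOST = 2   # filter moderate and severe phrases
    ALL = 3    # filter everything flagged in this category

# Example per-category settings a player might choose (assumed names).
settings = {
    "racism_xenophobia": FilterLevel.ALL,
    "misogyny": FilterLevel.ALL,
    "ableism": FilterLevel.MOST,
    "name_calling": FilterLevel.SOME,
}

def should_bleep(category: str, severity: int, settings: dict) -> bool:
    """Return True if a phrase flagged in `category` at `severity`
    (1 = mild ... 3 = severe) should be bleeped under these settings."""
    level = settings.get(category, FilterLevel.NONE)
    # SOME bleeps only severity 3, MOST bleeps severity >= 2,
    # ALL bleeps every flagged phrase; NONE bleeps nothing.
    return level > FilterLevel.NONE and severity > FilterLevel.ALL - level
```

In this sketch, a binary category such as the article’s on-off switch would simply be restricted to the NONE and ALL positions, which is the point Ly raises below.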
Ly laughed at the idea of adjusting hate speech on a sliding scale.
“Does misogyny really need to be — allow a little bit, allow some, or disable it all? There should really just be a switch.”
However, Ly does note that it’s an improvement on what he calls the “archaic” methods still being used to monitor online discussion.
“You have a moderator or a supervisor who would filter through the chat manually and ban speakers who are using hate speech, but that’s not very scalable and it’s not very sustainable.”
Iman Bukhari, CEO of the Canadian Cultural Mosaic Foundation, calls for greater responsibility to be placed on the tech companies hosting online platforms.
“I personally don’t even think that big tech companies have even begun to understand their personal, ethical accountability when it comes to tech platforms creating hostile, hateful, and toxic environments,” says Bukhari.
“The onus really needs to come back on those companies that create these technologies to hold themselves accountable because they are a breeding ground for hate.”
Ly notes that the factor of anonymity online provides a refuge from social repercussions for expressing hate.
Despite putting policies and procedures in place to address and prevent incidents of hate speech at AESA events, the organization has still had to manage persistent online trolls.
“We’ve had to moderate the Twitch chat quite heavily, and even through social media we’ve had a number of issues of people within our community who violate our code of conduct, but we take the appropriate action,” said Ly.
“We’re very thankful to have our presence in this community to maintain that safety.”
Bukhari says that the censor is one of many steps companies should be taking to address hate speech online.
“They can have a censor, but I really think they need to look at stronger policies,” she says.
“They need to look at training, they need to be transparent, and I think they need to involve the community because perhaps the people that they have, have more of a technical background than an actual background in terms of understanding history and the way it’s playing out right now.”
Ly leaves the conversation with a reminder of what video games are supposed to be about.
“It’s to have an opportunity to play games in a safe and inclusive environment, so it’s great to see industry leaders like Intel taking this step to make this a safer environment for everyone.”
Intel plans to launch the tool later in 2021.