
An AI algorithm to fight the new ‘hate codes’ of the alt-right

Fri 17 Mar 2017

Researchers have used machine learning to identify online hate speech hiding behind ‘hate codes’ – which substitute some of the most common words on the internet for some of the most offensive.

In the paper Detecting the Hate Code on Social Media, three researchers from the University of Rochester, led by Rijul Magu, use Support Vector Machine (SVM) training techniques to build an algorithm capable of penetrating this kind of cloaked racism. The practice came to prominence during the American presidential campaign of late 2016, when users associated with far-right extremist thought (the ‘alt-right’) began to substitute saturated net terms such as ‘Skype’, ‘Yahoo’, ‘Bing’ and ‘Google’ for derogatory terms of racist intent in Twitter missives – respectively ‘Jew’, ‘Mexican’, ‘Chinese’ and ‘Black’ – though ‘Skittle’ and ‘Butterfly’, standing in for ‘Muslim’ and ‘Gay’, are less easily shrouded by net-noise.
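The paper's implementation is not published alongside it, but a minimal sketch of the general approach – a linear SVM over bag-of-words text features, here via scikit-learn – might look as follows. The training tweets and labels below are invented placeholders, not the authors' data:

```python
# Minimal sketch of an SVM-based hate-code classifier. Illustrative only:
# the training examples, labels and feature choices are placeholders,
# not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-labelled tweets: 1 = code word used with hateful intent,
# 0 = innocuous use of the same vocabulary.
train_texts = [
    "cant believe they let a google into this neighborhood",
    "just switched my search engine to google, much faster",
    "skypes are at it again, wake up people",
    "skype call with the team at 3pm today",
]
train_labels = [1, 0, 1, 0]

# TF-IDF-weighted unigram/bigram features feeding a linear SVM.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
classifier.fit(train_texts, train_labels)

print(classifier.predict(["my google maps keeps crashing"]))
```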

The scientists used Jefferson Henrique's web-scraper script to establish an initial database of more than a million tweets, gathered from the week beginning 23rd September 2016, when the storm of election-based Twitter hate began.
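The script in question appears to be Jefferson Henrique's GetOldTweets-python, which scrapes Twitter's web search rather than the time-limited official API. A sketch of a comparable collection run, with illustrative query terms and limits rather than the paper's exact configuration:

```python
# Sketch of tweet collection with Jefferson Henrique's GetOldTweets-python
# (https://github.com/Jefferson-Henrique/GetOldTweets-python). The query
# terms, date window and cap are illustrative, not the paper's exact run.
import got  # the library's top-level module (Python 2 variant; a got3 fork exists)

criteria = (got.manager.TweetCriteria()
            .setQuerySearch('skype OR google OR yahoo OR bing')
            .setSince('2016-09-23')
            .setUntil('2016-09-30')
            .setMaxTweets(1000))

tweets = got.manager.TweetManager.getTweets(criteria)
for tweet in tweets:
    print(tweet.date, tweet.text)
```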

The database was pared down to around 250,000 tweets ‘of interest’, at which point in-depth analysis began.
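The paper's precise filtering criteria aren't reproduced here, but paring a raw collection down to tweets ‘of interest’ can be as simple as a whole-word match against the known code words – the list and the plural-handling below are assumptions:

```python
# Illustrative filter: keep only tweets containing at least one known code
# word as a whole token. The code-word list mirrors those reported in the
# article; the matching rules are an assumption, not the paper's.
import re

CODE_WORDS = {'skype', 'yahoo', 'bing', 'google', 'skittle', 'butterfly'}
TOKEN = re.compile(r"[a-z']+")

def of_interest(tweet_text):
    # Strip a trailing 's' so plurals like 'skypes' still match.
    tokens = {t.rstrip('s') for t in TOKEN.findall(tweet_text.lower())}
    return bool(tokens & CODE_WORDS)

raw_tweets = [
    "the skypes are at it again",
    "had a great lunch today",
]
filtered = [t for t in raw_tweets if of_interest(t)]
print(filtered)  # -> ['the skypes are at it again']
```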

[Figure from the paper: ‘Pairwise co-occurrence values in percentage’]
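The co-occurrence figures themselves are simple to derive: for each pair of code words, count the tweets mentioning both as a percentage of some base population. The denominator used below (tweets mentioning either word of the pair) is an assumption; the paper may normalise differently:

```python
# Sketch of a pairwise co-occurrence computation over the code words.
# The denominator (tweets mentioning either word of the pair) is an
# assumption, not necessarily the paper's normalisation.
from itertools import combinations

CODE_WORDS = ['skype', 'yahoo', 'bing', 'google', 'skittle', 'butterfly']

def cooccurrence_percentages(tweets):
    # Which tweets mention each code word (substring match for brevity;
    # a real pipeline would tokenise properly).
    mentions = {w: {i for i, t in enumerate(tweets) if w in t.lower()}
                for w in CODE_WORDS}
    table = {}
    for a, b in combinations(CODE_WORDS, 2):
        either = mentions[a] | mentions[b]
        both = mentions[a] & mentions[b]
        table[(a, b)] = 100.0 * len(both) / len(either) if either else 0.0
    return table

tweets = ["the skypes and googles are everywhere",
          "google it yourself",
          "skype me later"]
for pair, pct in cooccurrence_percentages(tweets).items():
    print(pair, f"{pct:.1f}%")
```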

‘From a temporal perspective, we noted a sharp spike in the use of code words in the first week of October, peaking around the 4th of October. This coincided with the second presidential debate.’
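Surfacing a spike like that is a matter of bucketing code-word tweets by calendar day. A minimal sketch, with the tweet structure assumed rather than taken from the paper:

```python
# Sketch of the temporal analysis behind the quoted observation: bucket
# code-word tweets by day and look for sharp maxima. The Tweet structure
# here is a stand-in for whatever the scraper returns.
from collections import Counter, namedtuple
from datetime import datetime

Tweet = namedtuple('Tweet', ['date', 'text'])

def daily_counts(tweets):
    """Count code-word tweets per calendar day; spikes mark events."""
    counts = Counter(t.date.date() for t in tweets)
    return sorted(counts.items())

sample = [Tweet(datetime(2016, 10, 4, 9), 'skypes again'),
          Tweet(datetime(2016, 10, 4, 21), 'googles smh'),
          Tweet(datetime(2016, 10, 5, 8), 'skittle talk')]
for day, n in daily_counts(sample):
    print(day, n)
```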

The paper posits a potential AI-driven approach that could at least bring cloaked hate speech to the attention of real-world moderators, and eventually form the basis of automated flagging or deletion of tweets, or (presumably with human approval) of their originating accounts.
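Tied to a classifier such as the earlier sketch, that kind of human-in-the-loop pipeline might route borderline scores to moderators and reserve automatic action for high-confidence cases. The thresholds below are invented for illustration:

```python
# Illustrative triage on top of an SVM classifier's margin score
# (decision_function on the pipeline from the earlier sketch). The
# thresholds are invented; any real deployment would tune them.
def triage(texts, classifier, flag_at=0.0, auto_at=1.5):
    scores = classifier.decision_function(texts)
    for text, score in zip(texts, scores):
        if score >= auto_at:
            yield text, 'auto-flag'       # high-confidence hate code
        elif score >= flag_at:
            yield text, 'human review'    # borderline: send to a moderator
        else:
            yield text, 'ignore'
```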

The subject of online hate speech has returned to public attention in recent weeks, with calls for large internet companies such as Google to take stronger measures against the haters – despite, as the Rochester paper makes clear, a huge level of ongoing innovation from the racist posters fuelling the debate.

In a context where human-monitored comment and tweet moderation is an unfashionably expensive option, and where there is a notable PR penalty for the unintelligent deletion of incorrectly-identified hate speech, it seems imperative that AI be developed further towards the problem – or that publishers put their hands in their pockets until it is.
