The Stack Archive

Scientists develop algorithm that can auto-ban internet trolls

Mon 13 Apr 2015


Researchers at Cornell University claim to be able to identify internet trolls with an AUC* of more than 80%, raising the possibility of automated methods to identify or even auto-ban forum and comment-thread pests.

Justin Cheng, Cristian Danescu-Niculescu-Mizil and Jure Leskovec submitted the paper Antisocial Behavior in Online Discussion Communities [PDF] in early April, detailing the findings of an 18-month study of banned commenters across three high-traffic communities: news giant cnn.com, political hub breitbart.com and the vocal gaming communities at ign.com.

The study, which was partly funded by Google and conducted with the cooperation of the Disqus commenting platform, compared antisocial users (‘Future Banned Users’, or FBUs) destined to be permanently banned after joining a community with joiners who were never permanently banned during the study period (‘Never Banned Users’, or NBUs).

Many of the study’s findings could have been anticipated by anyone who has ever had a comment thread hijacked by an interloper more intent on causing disruption and friction than on participating in a reasonable discussion. For example, of the 10,000 FBUs studied, nearly all began their commenting life at a lower perceived standard of literacy and/or clarity than the median for their host communities, and even that standard dropped in the final stretch towards a moderator ban. Additionally, those last pre-ban posts tend to be concentrated in a smaller number of comment threads relative to the number of posts – the classic characteristic of digging in for a sustained flaming match, either with the host community or with one or more of its members who have decided to engage the troll.
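Both of these signals – text quality relative to the community median, and the concentration of a user’s posts in a handful of threads – are straightforward to compute from raw comment data. The sketch below shows one way they might be derived; the record fields and readability measure are illustrative assumptions rather than the paper’s own feature pipeline.

# A minimal sketch, assuming each comment is a dict with hypothetical
# "readability" and "thread_id" fields; this is not the paper's actual pipeline.
from collections import Counter

def user_signals(user_posts, community_readability_median):
    """Return (relative text quality, posts-per-thread concentration) for one user."""
    # Signal 1: the user's average text quality compared to the community median.
    avg_quality = sum(p["readability"] for p in user_posts) / len(user_posts)
    relative_quality = avg_quality - community_readability_median

    # Signal 2: many posts spread over few threads gives a high ratio -
    # the "dug in for a flame war" pattern described above.
    thread_counts = Counter(p["thread_id"] for p in user_posts)
    posts_per_thread = len(user_posts) / len(thread_counts)

    return relative_quality, posts_per_thread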

The study found that on CNN the studied trolls were more likely to initiate new posts or sub-threads, whilst at Breitbart and IGN they were more likely to weigh into existing threads.

The report does not exonerate host communities of all blame for troll behaviour, finding that communities which are intolerant of newcomers from the outset are more likely to foster trolls:

“[communities] may play a part in incubating antisocial behavior. In fact, users who are excessively censored early in their lives are more likely to exhibit antisocial behavior later on. Furthermore, while communities appear initially forgiving (and are relatively slow to ban these antisocial users), they become less tolerant of such users the longer they remain in a community. This results in an increased rate at which their posts are deleted, even after controlling for post quality.”

The broad profile of the FBU presented by the paper is that of a semi-literate, provocative and fairly persistent poster, whose descent into outright antisocial behaviour is hastened by the speed with which the host community rejects them, and whose final posts before a permanent ban are characterised by persistent and heated battle across a small number of topics.

Regarding the possibility of developing automated methods for identifying and even banning trolls, the researchers are circumspect, since one in five users was misclassified by their analysis system, which otherwise claims to spot a persistent comment pest within as few as ten posts:

“While we present effective mechanisms for identifying and potentially weeding antisocial users out of a community, taking extreme action against small infractions can exacerbate antisocial behavior (e.g., unfairness can cause users to write worse).”
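The paper does not publish its code, but the kind of system it describes – scoring a user from features of their first ten posts and flagging likely future bans – can be sketched with standard tools. The features, model choice and threshold below are assumptions for illustration, not the authors’ implementation.

# A speculative sketch of a "spot the troll within ten posts" classifier,
# assuming scikit-learn and invented placeholder features; the study's real
# features, model and data differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: one row per user, e.g. relative text quality, posts-per-thread
# concentration and fraction of early posts deleted (all assumed features).
# y: 1 for users eventually banned (FBUs), 0 for never-banned users (NBUs).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # placeholder feature matrix
y = rng.integers(0, 2, size=1000)     # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# The paper reports an AUC above 0.80 on real data; on the random placeholder
# data above this will hover around 0.5.
print("AUC:", roc_auc_score(y_test, scores))

# Flagging users above a probability threshold is where the one-in-five
# misclassification caveat bites: auto-banning on these scores alone would
# punish a substantial number of innocent commenters.
flagged = scores > 0.8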


* ‘Area Under the Curve’ – a measure of classifier performance used in place of raw accuracy, reflecting the trade-off between true positives and false positives rather than the simple proportion of correct answers.
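A toy example of why that distinction matters: on a site where only a small fraction of commenters are ever banned, a system that never flags anyone scores well on accuracy yet is useless at ranking trolls, which is what AUC measures. The numbers below are invented purely for illustration.

# Invented toy numbers showing accuracy vs. AUC on imbalanced data.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0] * 90 + [1] * 10         # 10% of users are eventually banned
y_pred = [0] * 100                   # lazy classifier: never flags anyone
y_scores = [0.5] * 100               # ...and gives everyone the same score

print(accuracy_score(y_true, y_pred))    # 0.9 - looks impressive
print(roc_auc_score(y_true, y_scores))   # 0.5 - no better than chance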
