Facebook building new ‘counter speech’ tools to fight terrorism
Fri 12 Feb 2016
In the latest move to stamp out terrorist activity on social media, Facebook has introduced a new tool – ‘counter speech’ – to recognise and encourage users who discredit extremist views with their own posts, images and videos.
According to reports, a team led by Facebook’s head of global policy management, Monika Bickert, meets every Tuesday morning to plan initiatives to support counter speech, such as competitions, and to ensure that these campaigns are reaching their target audiences.
One such programme, conducted by the social media giant alongside the State Department and consulting firm Edventure Partners, involved an inter-college contest which challenged students around the world to create counter-terrorist messages. Competitors were granted budgets of $2,000 (approx. £1,400) and $200 in ad credit.
A further strategy saw counter speakers rewarded with up to $1,000 worth of ad credit. These activists included German comedian Arbi el Ayachi, who posted a video on the network countering claims made by a Greek right-wing group that eating halal meat is poisonous. He explained to the Wall Street Journal that the clip was a “take on how humour can be used to defuse a false claim.”
In 2015, another initiative organised by Facebook allowed four former right-wing and Islamist extremists to set up fake accounts and build relationships with current members of extremist groups via private messaging. According to Ross Frenett of the Institute for Strategic Dialogue, who conducted the experiment, the results were more encouraging than expected.
At the beginning of this year, a number of top U.S. tech firms, including Apple, Facebook and Google, met with the Obama administration to discuss ways of combatting terrorist organisations online – and to thwart ISIS’ recruitment and propaganda drives across social media.
Following the gruesome San Bernardino shootings in December last year, Facebook was accused of taking a lax approach to terrorist material and the hate speech spread over its network. The suspects in the killings were reportedly promoting ISIS through their accounts and helping to recruit members.
Twitter has also recently made a public stand against extremism and terrorist-related accounts on its platform. This month the microblogging site revealed that it had shut down over 125,000 accounts linked to terrorists since 2015. In a statement released last week, Twitter said: “We condemn the use of Twitter to promote violent terrorism […] This type of behaviour, or any violent threats, is not permitted on our service.”