Eric Schmidt proposes ‘hate spell-checker’ to suppress radical and terrorist content
Tue 8 Dec 2015
Eric Schmidt, the executive chairman of Alphabet Inc., Google’s parent company, has proposed the creation of automated tools to limit ‘the empowerment of the wrong people, and the wrong voices’ on the internet, as part of a putative concerted effort across the digital industries to limit hate speech, radicalisation and terrorist recruitment.
In an opinion piece for the New York Times titled ‘How to Build a Better Web’, the software engineer, now worth around $9 billion, addresses many of the issues that have set governments and private web companies such as Google on a collision course over the last sixteen months, and veers significantly closer to the kind of regulation and oversight that governments are pressing the industry towards. Schmidt writes: ‘The Internet is showing us the raw reality of the lives of oppressed people and their real needs, and it is also allowing some of our worst traits — in the form of envy, oppression and hate — to come into full view as well.’
Schmidt contends that while the internet has brought people together, it still tends to resolve into neighbourhoods and cliques, noting ‘It’s all too easy to use the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to. This sort of tribalism masks the need for common values and strong leadership. Societies are built one value, and one bargain, at a time.’
Schmidt concludes an interesting and apparently idealistic essay with the notion of vaguely defined ‘tools’ capable of limiting the spread of ‘terrorist messages’ and the other negative forms of communication he discusses:
‘We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.’
Comment: This is a worrying document for a number of reasons. For one thing, it implies some level of automation in the recognition of the ‘language of hate’. Though cash-rich internet giants such as Google and Facebook are investing heavily in artificial intelligence, there is no evidence at present that automated processes can make worthwhile or reliable value judgements as sentinels of content on websites or social media. And if the objective is to ‘flag’ dangerous individuals’ lines of communication, there is a dilemma: ‘content quarantines’ would alert the targets that they are being watched, while mere surveillance would do nothing to stop the flash fires and lightning strikes of social media.
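The difficulty of automating such value judgements can be illustrated with a deliberately naive sketch. The word list and example sentences below are invented for illustration only; no real moderation system works this crudely, but the failure modes — flagging benign speech while missing hostile speech that avoids obvious keywords — are the ones any ‘hate spell-checker’ must overcome:

```python
# Illustrative sketch only: a naive keyword-based 'hate spell-checker'.
# The blocklist and sentences are hypothetical, chosen to show why
# simple automated filters make unreliable value judgements.

NAIVE_BLOCKLIST = {"destroy", "attack", "eradicate"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any blocklisted word (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not NAIVE_BLOCKLIST.isdisjoint(words)

# False positive: a benign use of 'attack' trips the filter.
assert flag_message("Our chess club will attack on the kingside tonight")

# False negative: hostility phrased without blocklisted words passes.
assert not flag_message("Those people do not belong here and never will")
```

Closing the gap between this toy and a trustworthy sentinel requires understanding context, irony and coded language, which is precisely the capability for which there is, as yet, no evidence.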
For another, the values being discussed here are very elastic. Schmidt initially advances a hyper-tolerant, Voltaire-like attitude towards free speech, and then proceeds to define where it begins and ends – no minor matter in a climate where the definition of ‘terrorism’ is being steadily broadened because of the number of legislative impediments its application removes.
Schmidt makes a good point when he notes the tendency of people on the internet to self-select into groups that already accord with their interests and demographics. But ironically this retrenchment can run so deep that it can take a truly devastating foray from another culture – such as a controversial Isis video – even to get the viewer’s attention and break through the caul of cultural indifference. It is the worst possible way to have one’s cultural view expanded, but it is also the only one that has had any major impact in this regard in the last fifteen years. We could certainly use more ‘good-news terrorists’ – disaffected individuals plotting random acts of love and generosity – but in the absence of that, an AI-driven ‘thought-filter’ comes a very distant second.