How social media is adapting to new censorship
Wed 9 Mar 2022
The world’s largest social media platforms are bigger than ever. With close to three billion monthly active users, Facebook dominates the social media landscape, leaving Twitter, Instagram and TikTok trailing behind. But as these websites and apps continue to add users, moderating content and ensuring that these platforms are safe for all becomes increasingly difficult.
On Twitter alone, an estimated 200 billion tweets are sent per year, making it impossible for human moderators to keep on top of even a small percentage of the communication on the platform.
A vast array of toxic and abusive messages, posts and comments are sent to users of these social media apps on a daily basis, with these platforms being accused of not taking this issue seriously enough. Press freedom organisation Reporters Without Borders filed a lawsuit last year claiming that Facebook “allows disinformation and hate speech to flourish on its network – hatred in general and hatred against journalists – contrary to the claims made in its terms of service and through its ads.”
Governments and regulators are also considering how best to oversee these tech giants, with both the European Union and the United Kingdom proposing new oversight powers and regulatory regimes to bolster competition and protect users.
Combating misinformation and fake news is no small task, with new untrustworthy websites appearing as soon as previous ones are shut down or blocked from sharing. Facebook launched a fact-checking initiative in 2016 that works with a range of independent fact-checking organisations to ensure that untrue and fake news stories are flagged to users.
Despite this, researchers have found that fake news spreads faster on Facebook than on other social media platforms. Researchers followed the internet usage of more than 3,000 Americans and discovered that Facebook was the referrer site for untrustworthy news sources over 15% of the time.
In a blog post, Facebook’s vice president of News Feed, Adam Mosseri, acknowledges the harm that false news can have on users and explains how the platform is actively taking steps to curb the spread of fake news and empower users to make informed decisions.
Mosseri says that an effective approach to combating fake news is to remove any economic incentives for the publishers of such information. “We’ve found that a lot of fake news is financially motivated. These spammers make money by masquerading as legitimate news publishers and posting hoaxes that get people to visit their sites, which are often mostly ads,” adds Mosseri.
Fast-growing app TikTok has come under pressure from parents and internet safety experts for not doing enough to stop young people from accessing the app and viewing videos that are inappropriate for their age. In recent days, TikTok announced a range of updates aimed at making the app’s design more age-appropriate, as well as introducing the ability for creators on the platform to select the ages for which their content is suitable.
Due to the complex and often opaque nature of the algorithms deployed by social media platforms, content that should not be widely shared, and could even be harmful, is sometimes picked up and amplified. A Media Matters report from last year found that TikTok’s algorithm promoted homophobia and anti-trans violence, despite the platform having community guidelines that outlaw hateful behaviour towards individuals or groups based on their sexual orientation or gender identity.
There is clearly no silver bullet for the myriad challenges that social media giants face when attempting to police their platforms for inappropriate material. But with governments, regulators and users all raising issues with problematic content, a solution is now closer within reach than before, even if many of the proposals are imperfect.