Social media curator algorithms may be detrimental to users
Thu 19 Jan 2017
A team of researchers from Oxford University has argued that a sufficiently sophisticated recommender algorithm, curating content to keep social network users engaged, may adopt curation strategies that are detrimental to those users.
A recommendation algorithm sifts through data to predict which of a set of candidate items a user will like best. In a social networking context, the algorithm analyzes the details of an individual’s interactions on the network, tracking how the user has responded to the various stimuli in his or her history. This creates a feedback mechanism that can then be used to optimize the user’s interaction with the networking site, increasing the user’s engagement with the system and maximizing the effectiveness of advertising on the site.
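The feedback loop described above can be sketched in a few lines. The class below is a toy illustration, not the researchers' model: it tracks per-item engagement estimates, serves the item with the highest observed click rate, and updates its estimates from every response. The class name, the epsilon-greedy exploration, and the click-rate estimate are all invented for this sketch.

```python
import random

class EngagementCurator:
    """Toy engagement-optimizing curator (illustrative only):
    estimates per-item engagement from user feedback and serves
    the item with the highest observed click rate."""

    def __init__(self, items, epsilon=0.1, seed=0):
        self.items = list(items)
        self.epsilon = epsilon            # fraction of the time we explore
        self.rng = random.Random(seed)
        self.shown = {item: 0 for item in self.items}
        self.clicks = {item: 0 for item in self.items}

    def rate(self, item):
        # Estimated engagement: clicks per impression.
        # Unseen items get an optimistic 1.0 so they are tried at least once.
        if self.shown[item] == 0:
            return 1.0
        return self.clicks[item] / self.shown[item]

    def recommend(self):
        # Occasionally explore a random item; otherwise exploit
        # the item with the best-known engagement rate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.items)
        return max(self.items, key=self.rate)

    def feedback(self, item, clicked):
        # The feedback mechanism: every response updates the estimates,
        # closing the loop between user behavior and future curation.
        self.shown[item] += 1
        self.clicks[item] += int(bool(clicked))
```

Simulating a user who reliably clicks one item shows the loop at work: the curator quickly concentrates its impressions on whatever the user responds to, which is precisely the dynamic the researchers are concerned about.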
The curator algorithm is built to answer a recurring question: what is the optimal content to show this individual user at this moment? The researchers ask whether a recommendation engine, in the course of optimizing its content selection, may develop sophisticated strategies for manipulating users.
As an example of how such manipulation could harm users, the researchers note that a social network user’s emotional state can be altered by selectively filtering the content produced by friends. A recommender algorithm optimizing engagement through its content choices could therefore manipulate the user’s emotional state in order to promote a particular product.
The researchers also draw on a basic tenet of behavioral psychology: reward-based manipulation works best when rewards are delivered on a variable schedule.
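A variable-ratio schedule means the gap between rewards is unpredictable, even though its average is fixed. The function below is a minimal sketch of that idea, not anything from the paper; the function name and the `mean_ratio` parameter are invented for illustration.

```python
import random

def variable_ratio_rewards(n_actions, mean_ratio=4, seed=1):
    """Illustrative variable-ratio schedule: after each reward, the next
    one arrives after a random number of actions drawn uniformly from
    1..(2 * mean_ratio - 1), so gaps average mean_ratio but vary
    unpredictably from one reward to the next."""
    rng = random.Random(seed)
    rewards = []
    next_reward = rng.randint(1, 2 * mean_ratio - 1)  # first unpredictable gap
    for action in range(1, n_actions + 1):
        if action == next_reward:
            rewards.append(action)
            # Schedule the next reward at a new random interval.
            next_reward = action + rng.randint(1, 2 * mean_ratio - 1)
    return rewards
```

The point of the sketch is that the reward positions are irregular: a user (or a lab animal) cannot learn when the next reward is due, which is exactly the condition under which behavioral psychology finds reward-based conditioning most effective.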
Changing content to exploit an individual’s emotional state or triggers is one method a recommendation algorithm could be expected to employ, creating an advantage for the advertisers and managers of a social network at the user’s expense.
As an example, the team points to the prevalence of ‘fake news’ stories on social media during the U.S. presidential election of 2016. Companies looking to optimize advertising revenue during this period found, through trial and error, that targeted fake news stories were effective as clickbait: users accessed the stories, advertisers got more hits, and revenue increased. Fake news stories proliferated on social media at this time, produced in enormous volume without regard to their effect on users.
“With the same objective,” the researchers note, “even a comparatively simple curator algorithm would be capable of developing this strategy.”
While the trend in social media algorithms is toward deep learning, which decreases the transparency of the algorithms at work, there is also a movement toward oversight and regulation of the data collection and analysis practices used in targeted advertising and social media. Both regulatory agencies and users themselves are becoming more aware of, and cautious about, the ways information on social media platforms is collected, distributed, and used.