The Computational Propaganda Research Project.
What a name!
I came across the project, which is based at the Oxford Internet Institute at the University of Oxford, through a working paper it recently published, titled Computational Propaganda.
The researchers define computational propaganda as the use of algorithms, automation and human curation to purposefully distribute misleading information over social media networks. South Africa’s local version would be the allegations against the Gupta family and UK public relations firm Bell Pottinger regarding what has been dubbed “paid Twitter”.
“One person, or a small group of people, can use an army of political bots on Twitter to give the illusion of large scale consensus,” the report reads. “Regimes use political bots, built to look and act like real citizens, in efforts to silence opponents and to push official state messaging.
“Political campaigns, and their supporters, deploy political bots – and computational propaganda more broadly – during elections in attempts to sway the vote or defame critics,” it states.
We already saw examples of this in the 2016 election in the US and the 2017 election in France. The report also points to examples in the Brexit vote and the rise of the right-wing populist political party UKIP in the UK.
The report makes for some unsettling reading. What really terrified me is not the fact that forms of propaganda like this exist, but the scale on which computational propaganda is being used around the world.
The study looked at the use of social media in manipulating public opinion across nine countries – Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine and the US – and is the first of its kind. The report analyses qualitative, quantitative, and computational evidence collected between 2015 and 2017.
Interestingly, the researchers point out that “political actors” were adapting their campaigns in response to the research team’s work. “This suggests that the campaigners behind fake accounts and the people doing their ‘patriotic programming’ are aware of the negative coverage that this gets in the news media,” it states.
The researchers made a number of fascinating discoveries. The report states that, in particular political contexts, some social media platforms are either “fully controlled” by or “dominated” by governments and organised disinformation campaigns.
According to the report, case studies show that authoritarian governments direct computational propaganda at their own population and at populations in other countries. Chinese campaigns targeting political actors in Taiwan, and Russian campaigns targeting political actors in Poland and Ukraine are cited as examples of the latter.
The report states that the “most powerful” forms of computational propaganda involve both algorithmic and human curation, “bots and trolls working together”. It singles out Chinese-led social media propaganda in Taiwan as an example of this.
The researchers highlight social media posts about Ukraine as “perhaps the most globally advanced case of computational propaganda”.
“Numerous online disinformation campaigns have been waged against Ukrainian citizens on VKontakte, Facebook and Twitter,” reads the report. “The industry that drives these efforts at manipulation has been active in this particular country since the early 2000s.”
The report states that as much as 45% of Twitter activity in Russia is managed by highly automated accounts, and that Polish political debate on Twitter is produced by a handful of right-wing and nationalist accounts.
In Brazil it points to the role that computational propaganda played during three recent political events: the 2014 presidential elections, the impeachment of former president Dilma Rousseff and the 2016 municipal elections in Rio de Janeiro.
The researchers say that numerous interviews with politicians, campaigners, and elections officials reinforced the fact that social media computational propaganda is being used to manipulate online discussion.
Again, we didn’t really need the politicians and campaigners, or this report, to confirm what we already knew was happening. But the scale on which this strategy is being employed is terrifying.
Computational propaganda is not just a threat to an informed public; it has become a political weapon. The report goes as far as stating that computational propaganda is now one of the “most powerful tools against democracy”.
“Social media firms may not be creating this nasty content, but they are the platform for it,” reads the report. “They need to significantly redesign themselves if democracy is going to survive social media.”
Democracy surviving social media, now there’s a thought.
When Mark Zuckerberg launched Facebook from his Harvard University dorm room, or the team at Twitter decided that 140 characters was a limit to self-expression people could live with, neither could have guessed that their platforms would be abused in this way.
It feels like just the other day that the world was cheering and gleeful that social media had fuelled democracy by playing a pivotal role in the Arab Spring.
This article originally appeared in the 6 July edition of finweek.