from Fast Company
In early September, President Trump retweeted a video that allegedly showed a "black lives matter/antifa" activist pushing a woman into a subway car. In fact, the video was nearly a year old, and the man in question was mentally ill and had no connection to either group.
As a researcher studying social media, propaganda, and politics in 2016, I thought I'd seen it all. At the time, while working at the University of Oxford, I was in the thick of analyzing Twitter bot campaigns pushing #Proleave messaging during Brexit. As a research fellow at Google's think tank Jigsaw that same year, I bore witness to multinational disinformation campaigns aimed at the U.S. election.
That was nothing compared to what I am seeing in 2020. The cascade of incidents surrounding this year's U.S. presidential contest, as well as a multitude of other contentious political events around the globe, is staggering. From doctored videos and "smart" robocalls to spoofed texts and, yes, bots, there is an overwhelming amount of disinformation circulating on the internet.
Meanwhile, political polarization and partisanship inflamed by these technologies continue to rise. As I sift through social media data relating to the ongoing U.S. election, I'm constantly confronted with new forms of white supremacist, anti-Black, anti-Semitic, and anti-LGBTQ content across massive social media sites like YouTube, Twitter, and Facebook. This rhetoric also shows up on alternative platforms and private chat applications such as Parler, Telegram, and WhatsApp.
I thought we would have made progress in addressing propaganda and disinformation on social media by now, and on the face of things we have. Major tech firms have banned political advertisements, flagged misleading posts by politicians, and tweaked their algorithms in an attempt to stop recommending conspiracy-laden content. But in the grand scheme of things, these actions have done little to quell the sheer volume of both low-tech and algorithmically generated propaganda online.