Facebook announced it will ban posts associated with QAnon conspiracy theories from its platforms. The announcement is one more step toward combating extremism on social media and stopping the spread of conspiracy theories across its channels. While banning QAnon from Facebook-owned platforms raises the barrier to access, more steps must be taken to eradicate the threat posed by QAnon and other extremists who use social media with malicious intent.
QAnon emerged from a conspiracy theory started on 4chan in the fall of 2017 by an individual claiming to be a high-level government official, dubbed Q, who falsely asserted that “the world is run by a cabal of Satan-worshiping pedophiles who are plotting against President Trump.” These false claims have spread through social media like wildfire. What started on lesser-known sites such as 4chan, 8kun, and Gab.com has grown over the past few years and has now moved onto more mainstream social media sites such as Facebook. During the coronavirus pandemic, far-right extremist and conspiracy-theorist activity on social media has only increased.
As a tool, social media has provided many benefits to our society, but for individuals espousing extremist beliefs, it has become a platform to organize, recruit, and radicalize. Together, such users create a self-reinforcing worldview that limits exposure to opposing viewpoints.
Social media sites are designed to make connections; the goal of any social media channel is to keep the user on the site, exposed to more ad content, to generate revenue. This is achieved by employing an algorithm that curates content tailored to each user.
Facebook takes thousands of factors into account to determine which posts to prioritize in people’s feeds. The average user spends 2 hours and 24 minutes a day on social media, providing plenty of time and data for platforms to refine a customized experience. With each new comment, like, share, and visit to a third-party website, Facebook’s algorithm reinforces patterns of sharing within close-knit cluster communities. It also supports an insular existence driven by user behavior, pushing the user into an echo chamber that filters what they do and do not see. Increased personalization of news feeds decreases the likelihood that an individual will encounter contrasting viewpoints. Within a user’s news feed, extremist attitudes and perspectives can play on repeat, displayed as truth within that particular online community.
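The feedback loop described above can be sketched in miniature. The sketch below is a hypothetical illustration, not Facebook’s actual ranking system: the signal names, weights, and affinity update are invented for clarity, while real feed-ranking systems weigh thousands of factors.

```python
# Hypothetical sketch of engagement-based feed ranking.
# Weights, topic labels, and the affinity update are illustrative
# assumptions, not real platform internals.

def score_post(post, user_affinities):
    """Score one post for one user: raw engagement signals, boosted by
    how much the user has previously interacted with that topic."""
    affinity = user_affinities.get(post["topic"], 0.0)
    engagement = (post["likes"] * 1.0
                  + post["comments"] * 2.0
                  + post["shares"] * 3.0)
    return engagement * (1.0 + affinity)

def rank_feed(posts, user_affinities):
    """Order the feed so the highest-scoring posts appear first."""
    return sorted(posts,
                  key=lambda p: score_post(p, user_affinities),
                  reverse=True)

def update_affinity(user_affinities, post, rate=0.1):
    """Each interaction nudges the user's profile toward that topic,
    so similar posts rank higher next time -- the echo-chamber loop."""
    user_affinities[post["topic"]] = (
        user_affinities.get(post["topic"], 0.0) + rate)
```

In this toy model, a post the user repeatedly engages with eventually outranks more broadly popular content, which is the filter-bubble dynamic the paragraph describes: the ranking is driven by the user’s own past behavior, not by the diversity of available viewpoints.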
For the majority of social media users, their time online leads them to a personalized yet benign online community of centrism and inclusivity, but for a small minority, it empowers and accelerates polarization and extremism. Users may start down an innocuous path of curiosity and end up filtered into extremism by social media algorithms. In recent years, we have seen rapid growth of echo chambers in which political views and ideologies are reflected from all sides like shouting in a cave.
These echo chambers can affect users in different ways, and unchecked or unregulated social media poses a problem for society both online and off. Congress and social media companies have grappled with the question of regulation for years. Partisan lines have been drawn: Democrats focus on extremism, hate, and violence, while their GOP counterparts argue that social media companies are biased and censor conservative voices. The issue of social media regulation is not easily solved, but it must be tackled immediately.
Other social media companies must follow Facebook’s lead and take a more aggressive approach, adopting a zero-tolerance policy toward sharing disinformation and peddling conspiracy theories in an effort to eradicate extremists from their platforms. Lesser-known platforms will continue to allow extremist and conspiracy-theorist content, but filtering disinformation out of mainstream social media channels is key to preventing the spread of extremist views.
Aside from banning QAnon disinformation campaigns, there is more that social media companies should be doing. Fact-checking by individuals and AI has been employed, but with 55 million status updates posted each day, the volume is difficult for any person or computer to sift through and fact-check. Algorithms must also be updated to penalize users who share disinformation and extremist content.
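One form such a penalty could take is reducing the distribution reach of accounts that repeatedly share content flagged by fact-checkers. The sketch below is a hypothetical illustration of that idea only: the per-strike penalty and the floor value are invented parameters, not any platform’s actual policy.

```python
# Hypothetical down-ranking penalty for repeat sharers of
# fact-checker-flagged content. The per-strike penalty and the
# floor are illustrative assumptions, not real platform values.

def reach_multiplier(strikes, per_strike_penalty=0.25, floor=0.1):
    """Scale down a user's distribution reach for each fact-check
    strike, never dropping below a small floor (so the account is
    demoted rather than silently zeroed out)."""
    return max(floor, 1.0 - strikes * per_strike_penalty)
```

Under these assumed parameters, an account with no strikes keeps full reach, two strikes cut its reach in half, and heavy repeat offenders bottom out at the floor. The design choice of a floor rather than a hard zero keeps the penalty a ranking measure, distinct from an outright ban.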
Conspiracy theorists and extremists do not take any days off, and as they become more nimble in the social media space, it is imperative for social media companies to come together and create industry standards to thwart extremism on their platforms.
America has one of the highest social network penetration rates in the world, making online governance imperative. In 2016, America saw what happens when social media is weaponized by Russian actors to disrupt elections. Today, a portion of the electorate is voting based on QAnon disinformation about a global sex-trafficking ring. Social media safeguards will not happen overnight; until social media companies change their algorithms and insert safeguards to stop people from promoting conspiracy theories and unfounded disinformation, extremists will continue to dominate the space, threatening American democracy one social media post at a time.