Lessons from January Sixth: Media and Misinformation
By Beck Reiferson ’23
As we continue to learn more about the insurrection at the US Capitol on January 6th, one takeaway is already clear: social media platforms can be the gasoline to conspiracy theories’ fire, allowing them to spread far more rapidly and widely than was previously possible and making violent culminations more likely. In the immediate aftermath of the storming of the Capitol, public attention turned to the culpability of Parler and President Donald Trump’s Twitter account. Though criticism of Parler and Trump is justified, that focus is far too narrow, and it risks distracting us from what is most responsible for the spread of misinformation: the major social media companies and the way they generate revenue.
It has grown increasingly apparent that social media was a significant vehicle both for the dissemination of misinformation about the election results and for the planning of the attack itself. According to The Wall Street Journal, “Facebook’s own research found that American Facebook Groups became a vector for the rabid partisanship and even calls for violence that inflamed the country after the election” and that “blatant misinformation… [filled] the majority of [the] platform’s top ‘civic’ Groups.” Additionally, “Facebook was far and away the most cited social media site in charging documents the Justice Department filed against members of the Capitol Hill Mob,” followed by YouTube and Instagram. Facebook and Google, which owns YouTube, are thus far more responsible for the dissemination of false information than the far-right Parler.
It is clear, then, that social media companies contributed to the spread of misinformation and the organization of violence. This raises a natural question: how is that spread enabled? The answer lies in the way these companies make money: ad revenue. As long as social media companies generate revenue this way, and as long as their primary goal is to earn as much profit as possible, they will collect large amounts of data on their users so that they can target advertisements to specific users based on their known interests. Since a user is presumably more likely to engage with an ad that algorithms have determined should interest them, ads on social media are uniquely valuable, and brands will pay social media companies more to feature their advertisements. Targeted ads, however, are only the beginning of a media ecosystem that encourages rabid consumption.
Social media companies also make more money when their users encounter and engage with more ads, which gives the companies an incentive to keep each user on the platform for as long as possible. To achieve this goal, they again draw on the arsenal of data they collect on each user, recommending groups and content the user is likely to enjoy and that will encourage them to spend more time on the platform. From a business perspective, it is easy to see why this is a sound strategy: if a far-left user opened Facebook and found themselves bombarded with articles from, say, Breitbart, they would presumably not want to spend much time on Facebook and would therefore be less likely to encounter the advertisements that provide Facebook with revenue. The political consequence is that social media sites recommend to users the groups and content they are most likely to agree with. What a social media site looks like to a liberal is thus very different from what it looks like to a far-right conservative: pre-existing beliefs are reinforced and partisanship is amplified.
The existence of large social media “bubbles,” in which radical and like-minded individuals disproportionately encounter each other, enables the proliferation of misinformation on an unprecedented scale. When people are placed in political echo chambers, there are few checks against even the most outlandish claims, since people are less inclined to challenge information that supports their beliefs. As the American Psychological Association notes, there is a “link between people’s moral convictions and their assessment of facts.” Thus, when social media companies suggest that users join groups of people who share their opinions, or recommend articles that reinforce their beliefs, they increase the likelihood that misinformation will reach the very people most likely to believe it.
Many people have suggested that social media companies should place fact-check disclaimers on misleading or inaccurate posts in order to clearly identify fake news. This, however, is not a surefire solution to the problem of misinformation. First, there are legitimate concerns about partisan biases affecting the validity of fact-checks, which are, by definition, supposed to be nonpartisan. But even if fact-checkers were immune to bias, fact-checking would still not be sufficient, because the perception that fact-checkers are partisan matters more than the reality. Put differently, as long as people think fact-checkers are biased (and, according to the Pew Research Center, about half of Americans do), they will be skeptical of fact-checks, and the checks will have little effect.
There is even some reason to believe fact-checks can be counterproductive. In a recent study, researchers showed participants a series of tweets from President Trump about voter fraud and showed some of the participants fact-checking corrections from Twitter. The researchers found that “belief that mail voter fraud occurs was more than 13%” higher among Republicans who had seen the corrections than among Republicans who had not.
Instead of fact-checks, the best way to limit the spread of misinformation is to limit the amount of data social media companies collect on their users. That way, companies could not provide users with group and content recommendations as precise as those they make now. Conservatives would be less likely to encounter conservative content, and liberals would be less likely to encounter liberal content, because the companies would have a less accurate gauge of their users’ political opinions. Though misinformation would still circulate, it would find a less receptive audience, since there would be less preaching to the choir. A data tax, under which social media companies pay in proportion to the quantity of data they collect, would help limit data collection; one expert featured in the documentary The Social Dilemma, which highlights the role of social media in fomenting hyperpartisanship, proposes exactly this sort of tax. It is past time for Congress to look into this solution.