Brief #22 Technology
Can Social Media Companies Regulate Their Own Content?
By Scout Burchill
October 25, 2020
Summary:
As the 2020 election approaches, social media platforms have been taking major actions to moderate content in an attempt to combat growing amounts of misinformation. The controversial steps signal a major policy shift for the social media giants, especially Facebook and Twitter, on issues of free speech and the need for content regulation.
The list of actions that social media companies have taken to slow the spread of misinformation grows longer by the day, but the most high-profile cases have become the topic of mainstream political discourse. On October 14th, the New York Post published a story on Hunter Biden’s business connections in Ukraine that contained unverified emails obtained through questionable means. Twitter quickly responded by blocking users’ ability to share the story. Facebook, taking a different approach, limited the extent to which the story would appear in users’ News Feeds.
This incident, along with others including crackdowns on QAnon conspiracy groups across social media platforms, comes at a time when the spread of misinformation continues to grow online and social media companies face increasing scrutiny from politicians and the public. According to a new study by the Digital New Deal project of the German Marshall Fund, engagement with media outlets that regularly publish misleading or false articles on Facebook nearly tripled between the third quarter of 2016 and the third quarter of 2020.
Analysis:
The actions that social media companies have taken in the run-up to the 2020 election to curb the spread of misinformation and conspiracy theories online mark an unprecedented shift in how these companies will be scrutinized and held accountable going forward.
The actions taken by social media companies, which include bans, content removals and new features designed to slow the spread of misleading articles, such as labels warning users of misleading information, have largely proven ineffective. As noted earlier, studies show that misinformation is now more popular than it was in the run-up to the 2016 election. Furthermore, newly published data suggest that Twitter’s aggressive moderation tactics have backfired. The unprecedented decision to ban users from sharing the story made it go viral. A report from MIT’s Technology Review, using data from Zignal Labs, a media intelligence firm, found that shares of the New York Post article roughly doubled after Twitter attempted to suppress it. This sequence of events perfectly illustrates what is known as the Streisand Effect: the phenomenon in which attempts to conceal, censor or suppress information have the opposite effect and end up bringing more attention to it[1]. On top of this unintended consequence, Twitter and other social media platforms came under immediate attack by Republican politicians, pundits and influencers for having a left-wing bias and censoring speech online. The incident seems to have lent further credence to this long-running political talking point on the right.
Twitter explained its decision, which even went so far as to lock the account of White House Press Secretary Kayleigh McEnany, by stating that the story violated its policy against distributing hacked materials. This policy is extremely hard to defend, considering that banning hacked or stolen materials would inevitably block countless important stories from reaching the public. For example, the New York Times recently published a damning exposé of Trump’s tax returns that, under this Twitter policy, could conceivably be subject to the same treatment as the New York Post’s Hunter Biden story. Even Twitter’s CEO, Jack Dorsey, facing massive political pressure, especially from Republicans, acknowledged that blocking the article was wrong, and the company announced it would change the policy, citing fears that it was too sweeping and would affect journalists and whistle-blowers.
It was not so long ago that both Facebook and Twitter championed unregulated free speech on their platforms. In a sense, it was simply smart business strategy. For years, social media platforms have argued that they are platforms, not publishers, and therefore cannot be held responsible for content. However, with political pressures mounting as calls for government regulation and anti-monopoly sentiment grow on both sides of the aisle, social media companies seem to be backtracking from this position and attempting to impose their own content-moderation regimes. In some instances this new stance has been celebrated, in others condemned. So far, the approach has been something of a disaster, as new crackdowns and moderation tactics have only brought further scrutiny and a growing sense of distrust. Ultimately, these incidents bring to the fore an increasingly urgent and important question: Should we trust private companies to regulate content and political discourse online?
As many commentators point out, social media platforms have little economic incentive to actually slow the spread of misinformation. At the end of the day, they are private companies motivated by profit, and viral content is good for their bottom lines. These content-moderation decisions are probably attempts to weather the political storm brewing over how much power these companies wield in shaping political discourse. However, in the absence of a coherent and well-articulated framework for content regulation that clearly draws the line between acceptable and unacceptable speech, attempts to moderate will likely continue to backfire.
Resistance Resources:
Countering Truth Decay Initiative:
https://www.rand.org/research/projects/truth-decay/fighting-disinformation.html
Tools to educate and combat misinformation: https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html
Maplight Election Deception Tracker:
https://maplight.org/story/election-deception-tracker/
[1] This term was coined after a 2003 incident in which the singer Barbra Streisand sued a photographer for taking aerial photos of her Malibu mansion and posting them online as part of a project documenting coastal erosion in California. Prior to Streisand’s lawsuit, the obscure photo was one of thousands in a database and had been viewed only a handful of times, a few of those by her own lawyer. All that changed when she attempted to have it removed from the database. The lawsuit was reported in the media, and hundreds of thousands of people subsequently searched for, viewed and shared the photo.