Navigating the Information Jungle: Social Media, Disinformation, and Political Polarization
Technology Policy Brief #93 | By: Inijah Quadri | July 31, 2023
Photo taken from: www.aa.com.tr
__________________________________
Social media has emerged as a pivotal platform for communication, entertainment, and information dissemination in the 21st century. Platforms like Facebook, Twitter, Instagram, and YouTube command billions of users, profoundly influencing global culture, policy debates, and political discourse. While the rise of social media has undoubtedly democratized access to information, it has also become a fertile ground for disinformation and political polarization.
Disinformation, the deliberate creation and sharing of false information with the intent to deceive or mislead, differs from misinformation, which is false content spread without deceptive intent. Disinformation is an orchestrated campaign designed to propagate lies, distortions, and half-truths, ultimately shaping public opinion, aggravating political division, and undermining democratic processes.
Significant evidence points to the role of social media in propagating disinformation, with repercussions visible in landmark political events, such as the 2016 US Presidential Election and the Brexit referendum. A 2019 report from Oxford University’s Computational Propaganda Research Project disclosed organized social media manipulation in 70 countries, a substantial increase from 48 the previous year and 28 the year before that. The gravity of these findings suggests that unchecked disinformation can destabilize societies and impede democratic processes.
Analysis
Political polarization, the divergence of political attitudes toward ideological extremes, is not a new phenomenon. Social media, however, has catalyzed and amplified it by creating echo chambers and filter bubbles, in which algorithms curate content that aligns with a user's existing beliefs and interests.
These algorithms are computational procedures designed to surface content a user is likely to find engaging. By analyzing signals such as previous likes, shares, and time spent on different types of posts, they predict what a user might want to see next. Because controversial and sensational content tends to generate the most engagement, this often amplifies misinformation. The resulting selective exposure bolsters cognitive biases; a 2022 study demonstrated how such echo chambers can foster extremism by reinforcing and amplifying partisan views.
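To make this mechanism concrete, the simplified Python sketch below shows how a purely engagement-driven ranker tends to push the most sensational content matching a user's past behavior to the top of the feed. The class, function names, and numbers are hypothetical illustrations under simplifying assumptions, not any platform's actual system.

```python
# Simplified, hypothetical sketch of an engagement-driven feed ranker.
# Not any platform's real algorithm; names and numbers are illustrative only.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Post:
    topic: str              # e.g. "politics", "science"
    sensationalism: float   # 0.0 (neutral) .. 1.0 (highly provocative)


def predicted_engagement(post: Post, user_affinity: dict[str, float]) -> float:
    """Estimate how likely the user is to like, share, or dwell on a post.

    user_affinity maps topics to how often this user engaged with them in the
    past; sensational content gets a multiplier because it historically draws
    more reactions.
    """
    topic_score = user_affinity.get(post.topic, 0.1)
    return topic_score * (1.0 + post.sensationalism)


def rank_feed(posts: list[Post], user_affinity: dict[str, float]) -> list[Post]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: predicted_engagement(p, user_affinity),
                  reverse=True)


if __name__ == "__main__":
    # A user who mostly engages with partisan politics sees the most
    # provocative political post first, reinforcing the existing bubble.
    candidates = [Post("politics", 0.9), Post("politics", 0.2), Post("science", 0.1)]
    affinity = {"politics": 0.8, "science": 0.3}
    for post in rank_feed(candidates, affinity):
        print(post.topic, post.sensationalism)
```

Even in this toy version, the feedback loop is visible: the more a user engages with provocative posts on a topic, the more of them the ranker serves, which is the dynamic behind the echo chambers and filter bubbles described above.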
Disinformation thrives in these digitally fueled environments. Its seeds are sown deep and spread rapidly within echo chambers, often remaining unchecked and uncontested. The 2020 US Presidential Election stands as a stark testament: false narratives about election fraud circulated widely on social media platforms despite being thoroughly debunked by fact-checkers. The aftermath, an unprecedented attack on the US Capitol on January 6, 2021, laid bare the dangerous repercussions of this disinformation.
Addressing the role of social media in fanning the flames of disinformation and political polarization is a challenging task. Policymakers are walking a tightrope between the need for information integrity and the preservation of freedom of speech. Steps can nevertheless be taken: greater transparency from tech companies about their algorithms, stronger and more proactive fact-checking, education focused on media literacy, and potential regulation of social media platforms are all strategies currently under consideration.
But the key is fostering a multidimensional approach that involves all stakeholders—governments, tech companies, civil society, and users—to ensure that the digital public square can support a vibrant, yet responsible, exchange of ideas. More specifically, stakeholders need to converge on the adoption of standards and guidance to address the spread of disinformation effectively.
As an example, the legislation recently proposed by Senators Elizabeth Warren and Lindsey Graham can serve as a reference point. The bill seeks to impose stricter rules on social media platforms to curb the spread of misinformation and hold them more accountable. Stakeholders must critically examine, discuss, and build upon such legislative efforts to achieve a more comprehensive and resilient approach against disinformation.
Engagement Resources
- Center for Humane Technology (https://www.humanetech.com/): Advocating for the redesign of technology to better align with human interests, focusing on the harmful effects of social media and disinformation.
- The Poynter Institute (https://www.poynter.org/): An international leader in journalism, promoting the responsible production and sharing of news, with significant resources on fact-checking and combating disinformation.
- Digital Forensic Research Lab (https://www.digitalsherlocks.org/): An entity dedicated to the study and exposure of disinformation and misinformation on social media, providing digital tools to discern truth in the digital age.
- The Media Manipulation Casebook (https://mediamanipulation.org/): A digital research platform combining theory, methods, and practice for mapping media manipulation and disinformation campaigns.
- First Draft (https://firstdraftnews.org/): A nonprofit that provides resources and training for journalists in the digital age, including a focus on verification and fact-checking to combat misinformation and disinformation.
- Stanford Internet Observatory (https://cyber.fsi.stanford.edu/io): The observatory conducts research on abuse in current information technologies, with a focus on social media and the integrity of information.