# Unmasking the Misinformation Crisis

*A persuasive essay about fighting misinformation*

| Category | Society |
|---|---|
| Author | Aron |
| License | CC BY-ND |
| Date | |
We live in a world where nearly half of the Earth’s inhabitants use social media to access news and other information. The line between truth and flagrant misinformation has become blurred by poor vetting and a lack of platform oversight. Misinformation isn’t a minor concern affecting only a few isolated groups; it is a global crisis that has damaged public health, democracy, and the ability to think critically. According to researchers, these falsehoods travel much faster and farther than the truth, which is alarming (Muhammed & Mathew, 2022). Social media platforms have demonstrated their power to distort reality on a global scale, becoming fertile ground for conspiracy theories and life-threatening health advice. Addressing this issue requires an approach that considers not only regulation but also public education and platform accountability. This means getting the very people who are affected by the poison of misinformation on board.
During the COVID-19 pandemic, anti-vaccine conspiracies and supposed cures lacking any scientific foundation circulated widely on platforms such as Facebook and X. Sadly, this misinformation led to real-life consequences. According to Muhammed and Mathew (2022), it not only undermines medical guidance but also endangers collective safety (p. 273). These effects extend far beyond individual behavior; they erode trust in science and government. That widespread distrust makes it difficult to implement essential health measures such as vaccination or quarantine. One solution is for social media companies to collaborate with public health authorities in creating systems that label or limit the reach of misleading posts. Facebook introduced some semblance of this during the pandemic, and while the effort created its own set of issues, it demonstrates that tech companies can play a vital role in limiting misinformation (Muhammed & Mathew, 2022, p. 273). The cost of such interventions is minuscule compared to the threat misinformation poses to human life, and pairing human moderators with algorithmic adjustments would make the solution more feasible. These facts alone urge deeper examination of what can be done on a global, preventative scale.
Misinformation becomes especially far-reaching when it enters the political sphere. False claims about election fraud, foreign interference, and voting machine tampering have dealt catastrophic damage to public trust. Democratic institutions once regarded as trustworthy venues for exercising constitutional liberties are now unjustly scrutinized and lambasted for the sake of political bias. The Harvard Law Review (2024) discusses the Platform Accountability and Transparency Act (PATA), proposed legislation that would legally require social media platforms to make their algorithms more transparent and to grant researchers access to moderation data (p. 2105). By requiring platforms to divulge how their content is ranked and moderated, this legislation would usher in federal audits that could be used to prevent the misuse of social media during elections. While there is no question this would face several legal hurdles, it marks the beginning of an enforceable path toward greater accountability. Measured against the annual revenues of these social media giants, the cost of compliance is relatively minor; the benefit to democratic integrity would be substantial.
Public education is probably the most sustainable solution for combating misinformation campaigns. A study published in the Proceedings of the National Academy of Sciences found that a reasonably simple set of media literacy tips improved people’s ability to discern whether headlines were false by 26.5% (Guess et al., 2020, p. 15536). The study also revealed that the intervention was not only cost-effective but retained its impact weeks later (p. 15537). Such programs can be built into school curricula as well as into awareness campaigns targeting both youth and adults. The sustainability of this solution is extremely high, given that governments and educational institutions already have the distribution channels at hand. Routine assessment and refinement of educational materials would keep the learning relevant and impactful far into the future.
Misinformation on social media is a poison and a pervasive threat that undermines our health, our democracy, and the public trust. Combating it in a meaningful way requires legislative oversight, platform accountability, and public education working together to produce both short-term measures and long-term shifts in our culture. With these forces aligned, society can push back against the tide of falsehood. These solutions are realistic, and they are essential.
Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117
Harvard Law Review. (2024). Platform Accountability and Transparency Act, S. 1876, 118th Cong. (2023): Recent proposed legislation. Harvard Law Review, 137(7), 2104–2108. https://harvardlawreview.org/print/vol-137/platform-accountability-and-transparency-act-s-1876-118th-cong-2023
Muhammed, T. S., & Mathew, S. K. (2022). The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics, 13, 271–285. https://doi.org/10.1007/s41060-022-00311-6