Tag: misinformation

Right and left, partisanship predicts (asymmetric) vulnerability to misinformation

We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter. While our results confirm prior findings that online misinformation sharing is strongly correlated with right-leaning partisanship, we also uncover a similar, though weaker, trend among left-leaning users. Because a user's partisanship is correlated with their position within a partisan echo chamber, the effects of these two factors are confounded. To disentangle them, we performed a regression analysis and found that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.

Read the full article at: misinforeview.hks.harvard.edu

The Manufacture of Political Echo Chambers by Follow Train Abuse on Twitter

Christopher Torres-Lugo, Kai-Cheng Yang, Filippo Menczer

A growing body of evidence points to critical vulnerabilities of social media, such as the emergence of partisan echo chambers and the viral spread of misinformation. We show that these vulnerabilities are amplified by abusive behaviors associated with so-called "follow trains" on Twitter, in which long lists of like-minded accounts are mentioned for others to follow. This leads to the formation of highly dense and hierarchical echo chambers. We present the first systematic analysis of U.S. political train networks, which involve many thousands of hyper-partisan accounts. These accounts engage in various suspicious behaviors, including some that violate platform policies: we find evidence of inauthentic automated accounts, artificial inflation of friends and followers, and abnormal content deletion. The networks are also responsible for amplifying toxic content from low-credibility and conspiratorial sources. Platforms may be reluctant to curb this kind of abuse for fear of being accused of political bias. As a result, the political echo chambers manufactured by follow trains grow denser and train accounts accumulate influence; even political leaders occasionally engage with them.

Understanding and reducing the spread of misinformation online

Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, David Rand

The spread of false and misleading news on social media is of great societal concern. Why do people share such content, and what can be done about it? In a first survey experiment (N=1,015), we demonstrate a disconnect between accuracy judgments and sharing intentions: even though true headlines are rated as much more accurate than false headlines, headline veracity has little impact on sharing. We argue against a "post-truth" interpretation, whereby people deliberately share false content because it furthers their political agenda. Instead, we propose that the problem is simply distraction: most people do not want to spread misinformation, but are distracted from accuracy by other salient motives when choosing what to share. Indeed, when directly asked, most participants say it is important to only share accurate news. Accordingly, across three survey experiments (total N=2,775) and an experiment on Twitter in which we messaged N=5,482 users who had previously shared news from misleading websites, we find that subtly inducing people to think about the concept of accuracy increases the quality of the news they share. Together, these results challenge the popular post-truth narrative. Instead, they suggest that many people are capable of detecting low-quality news content, but nonetheless share such content online because social media is not conducive to thinking analytically about truth and accuracy. Furthermore, our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms.

Source: psyarxiv.com