Crowdsourcing is the best weapon in the fight against fake news
The problem of misinformation isn’t new, but it gained widespread attention during the 2016 presidential election when blatantly false stories (“fake news”) spread widely on social media.
Since then, a broad consensus has emerged that we must better understand why people believe and share misinformation and figure out how to stop it.
Limiting the spread of fake news, hyperpartisan content, conspiracy theories and other kinds of misinformation is important for our democracy. It seems likely to narrow the gap between liberals and conservatives on basic facts and to defuse some of the cross-party animosity that is so prevalent today. Less misinformation may also make it harder for candidates to win elections based on blatantly false claims.
While there has been a lot of scholarly work documenting the spread of misinformation, there has been much less study of possible solutions. And most of the solutions that social media companies have been deploying so far haven’t been very effective; they also have been almost exclusively focused on fake news rather than other kinds of problematic content.
For example, partnering with professional fact-checkers doesn’t scale: fact-checkers can’t keep up with the rapid creation of false stories, and they are sometimes accused of having a liberal bias.
Furthermore, putting warnings on content found to be false can be counterproductive because it makes misleading stories that didn’t get checked seem more accurate — the so-called “implied truth” effect. And nobody wants social media platforms to be deciding themselves what is trustworthy and what material they should censor.
So, we have been working to figure out ways to effectively fight misinformation on social media. In a recent paper, we document one approach that seems surprisingly promising: using crowdsourcing to identify unreliable outlets and then making content from those outlets less likely to appear in the newsfeed.
Our investigation builds on a similar policy proposal Facebook made last year to have its community determine which sources are trustworthy. While that proposal received widespread scorn, we wanted to see whether this type of strategy could actually work.
Could users accurately judge the trustworthiness of outlets, or are our attitudes toward news totally dominated by our partisanship, such that the most biased outlets would receive the highest trust scores?
To find out, we asked individuals to rate their familiarity with, and trust in, news sources across three categories: mainstream media outlets, hyperpartisan websites and websites that produce blatantly false content.
In one study, we used the online labor market Amazon Mechanical Turk, and in a second study we used a market research company to recruit a pool of people that was nationally representative across age, gender, ethnicity and political preferences. We also asked professional fact-checkers the same questions to compare their answers with those of laypeople.
Our results show that there is actually reason for some optimism — both about the crowdsourcing approach and about the American public more generally. We found that laypeople trust reputable outlets much more than those that create misinformation and that the trust ratings of the laypeople surveyed closely matched the trust ratings of the professional fact-checkers. This shows that laypeople are much better than many would have expected at knowing which outlets to trust.
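To make “closely matched” concrete: agreement between two sets of outlet ratings is commonly summarized with a correlation coefficient. Here is a minimal sketch in Python, using made-up trust ratings purely for illustration, not our actual data:

```python
# Illustrative only: hypothetical mean trust ratings (1-5 scale) that laypeople
# and professional fact-checkers might give the same six news outlets.
lay_ratings    = [4.2, 4.0, 3.8, 2.1, 1.5, 1.2]
expert_ratings = [4.6, 4.3, 4.1, 1.8, 1.3, 1.1]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# A value near 1.0 would indicate the kind of close lay-expert agreement
# described above.
print(f"lay-vs-expert correlation: {pearson(lay_ratings, expert_ratings):.2f}")
```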
There certainly were big partisan differences: Republicans trusted Fox News much more, and all other mainstream outlets less, than Democrats did. Yet there was remarkable consensus across parties that non-mainstream outlets are untrustworthy.
For example, the average Republican participant trusted mainstream outlets that are seen as left-leaning, such as CNN or MSNBC, more than right-leaning hyperpartisan sites, such as Breitbart. This shows that attitudes toward media sources are not as dominated by partisanship as one might have thought.
Since laypeople are effective at judging news source quality, crowdsourcing could serve as a scalable approach to fighting the spread of misinformation. Social media platforms could periodically survey users about the trustworthiness of specific news outlets, and content from outlets with higher trust ratings could then be promoted above content from lower-rated outlets, as sketched below.
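As a rough illustration of that ranking step, here is a minimal sketch in Python. It is not any platform’s actual algorithm; the outlets, survey responses and scoring rule are all hypothetical:

```python
from statistics import mean

# Hypothetical crowd survey: each respondent rates an outlet on a 1-5 trust scale.
survey_responses = {
    "reliable-news.example":  [5, 4, 4, 5, 3],
    "hyperpartisan.example":  [2, 1, 3, 1, 2],
    "unknown-site.example":   [1, 2, 1, 1, 2],
}

# Aggregate the crowd ratings into one trust score per outlet.
trust_scores = {outlet: mean(ratings) for outlet, ratings in survey_responses.items()}

def rank_feed(posts):
    """Order candidate posts so content from higher-trust outlets surfaces first.

    Each post is a (headline, outlet) pair; outlets absent from the survey get
    a neutral default score rather than being dropped outright.
    """
    default = mean(trust_scores.values())
    return sorted(posts, key=lambda post: trust_scores.get(post[1], default), reverse=True)

feed = rank_feed([
    ("Story A", "hyperpartisan.example"),
    ("Story B", "reliable-news.example"),
    ("Story C", "unknown-site.example"),
])
for headline, outlet in feed:
    print(f"{headline}  ({outlet}, trust={trust_scores[outlet]:.1f})")
```

Note that down-weighting low-trust outlets this way demotes their content rather than censoring it outright, consistent with making such content “less likely to appear in the newsfeed.”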
An important caveat is that familiarity heavily impacts trust rankings. If people are not familiar with a news outlet, they overwhelmingly distrust it. This will be a problem for relatively new but high-quality sources.
To overcome this issue, social media platforms may need to show samples of recent articles before asking whether an outlet is trustworthy. It is also unclear how the study’s results would generalize beyond the U.S.
Despite these possible issues, crowdsourcing is a promising approach to fighting the spread of fake news. Rating at the website level, rather than focusing on individual stories, does not require ratings to keep pace with the posting of headlines.
In addition, using laypeople rather than third-party professional fact-checkers means large numbers of ratings can be easily and frequently acquired.
This is a bit of rare good news about misinformation. Things may not be as hopeless as they often seem, given that even this quite simple approach has the potential to be effective. Social media platforms can — and must — take substantive steps to decrease misinformation.
David Rand is associate professor of management science and brain and cognitive sciences at the MIT Sloan School of Management and the director of the Human Cooperation Laboratory and the Applied Cooperation Team at MIT. Gordon Pennycook is assistant professor of behavioral science at University of Regina’s Hill/Levene Schools of Business. They are co-authors of “Fighting misinformation on social media using crowdsourced judgments of news source quality,” which was published in PNAS.