Searching for or reviewing evidence improves crowdworkers’ misinformation judgments and reduces partisan bias
Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no-research condition, they were simply asked to view the article and then render a judgment. In an individual-research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective-research condition, they were not asked to search, but instead to review links collected from workers in the individual-research condition. Both research conditions reduced partisan disagreement in judgments. The individual-research condition was most effective at producing alignment with journalists’ assessments. In this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist’s ratings.
Collective Intelligence, Vol 2:2, pp. 1–15
Access the paper here: https://doi.org/10.1177/26339137231173407
— Paul Resnick, Aljohara Alfayez, Jane Im, Eric Gilbert
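
For readers curious about the headline comparison, here is a minimal sketch of what measuring "alignment with a held-out journalist's ratings" could look like computationally. It is not the paper's actual procedure: the rating scale, the noise model, the simulated data, and the choice of Pearson correlation as the alignment measure are all assumptions for illustration; only the article count (374) and the panel size of sixteen come from the abstract above.

```python
# Illustrative sketch only: simulated ratings and a hypothetical alignment
# measure (mean panel rating vs. held-out journalist, Pearson correlation).
# The actual aggregation and evaluation procedure is described in the paper.
import numpy as np

rng = np.random.default_rng(0)

n_articles = 374   # number of articles rated in the study
panel_size = 16    # size of the crowd panel being evaluated

# Simulated held-out journalist ratings on an assumed 1-7 scale
journalist = rng.integers(1, 8, size=n_articles).astype(float)

# Simulated crowd ratings: journalist rating plus noise, clipped to the scale
crowd = np.clip(
    journalist + rng.normal(0.0, 2.0, size=(panel_size, n_articles)),
    1, 7,
)

# Aggregate the panel by averaging ratings per article
panel_mean = crowd.mean(axis=0)

# One plausible alignment measure: correlation with the held-out journalist
alignment = np.corrcoef(panel_mean, journalist)[0, 1]
print(f"Panel-journalist correlation: {alignment:.3f}")
```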