Press release: New version of Iffy Quotient shows steady drop in questionable information on social media, partners with NewsGuard for better data


Michigan News has issued a press release on CSMR's finding of a continued decline in questionable content on Facebook and Twitter. This finding comes courtesy of the newest version of our Iffy Quotient metric, the first of our platform health metrics designed to track how well media platforms are meeting their public responsibilities. The latest Iffy Quotient figures indicate that the percentages of the most popular news URLs on Facebook and Twitter that are from “iffy” sites (ones that frequently publish unreliable information) fell over the period of October 1, 2018, to July 1, 2019. On Facebook, questionable content dropped from 12.2% to 7.2% during that time, while on Twitter it dropped only slightly, from 11.1% to 10.9%.
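The metric described above can be sketched in a few lines: the Iffy Quotient is the share of a platform's most popular news URLs whose site appears on the iffy list. The function name, domain names, and example data below are invented for illustration; the actual metric is computed from daily samples of top URLs as described in the Iffy Quotient report.

```python
# Hypothetical sketch: percentage of popular news URLs that come from
# "iffy" domains. All names and data here are illustrative, not CSMR's code.
from urllib.parse import urlparse

def iffy_quotient(popular_news_urls, iffy_domains):
    """Return the percentage of URLs whose domain is on the iffy list."""
    if not popular_news_urls:
        return 0.0
    iffy_count = sum(
        1 for url in popular_news_urls
        if urlparse(url).netloc.lower().removeprefix("www.") in iffy_domains
    )
    return 100.0 * iffy_count / len(popular_news_urls)

# Invented example data: one of four popular URLs is from an iffy site.
urls = [
    "https://www.reliable-example.com/story1",
    "https://iffy-example.net/claim",
    "https://www.reliable-example.com/story2",
    "https://another-reliable.org/report",
]
print(iffy_quotient(urls, {"iffy-example.net"}))  # → 25.0
```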

CSMR has also formed a new, exciting partnership with NewsGuard, which will now serve as our primary source for vetting and rating news and information sites.

The NewsGuard News Website Reliability Index provides a way to differentiate between generally reliable and generally unreliable sites. NewsGuard rates each site based on nine binary, apolitical criteria of journalistic practice, including whether a site repeatedly publishes false content, whether it regularly corrects or clarifies errors, and whether it avoids deceptive headlines.

Weighted points are awarded for each criterion and then summed: a score below 60 earns a “red” rating, while a score of 60 or above earns a “green” rating, indicating the site is generally reliable. NewsGuard also identifies which sites are satire—for example, the popular publication The Onion.

For the purposes of calculating the Iffy Quotient, a site with a NewsGuard “red” rating that is not identified as satire is considered iffy.

More details of how the Iffy Quotient is calculated are included in our Iffy Quotient report. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Platform Health Metrics page.

See the full press release:

New research on best practices and policies to reduce consumer harms from algorithmic bias


On May 22, 2019, CSMR Director Paul Resnick was among the featured expert speakers at the Brookings Institution's Center for Technology Innovation, which hosted a panel discussion on algorithmic bias. The discussion accompanied a newly released Brookings paper on algorithmic bias detection and mitigation, co-authored by Nicol Turner Lee, Resnick, and Genie Barton. The paper offers government, technology, and industry leaders a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies that promote the fair and ethical deployment of artificial intelligence systems and machine learning algorithms.

The full video of the panel discussion is below. (Please note that there are about 12 minutes of introductory remarks before the start of the panel discussion.)