Unlike in 2016, there was no spike in misinformation this election cycle

Cyberwarriors and influence peddlers spread plausible misinformation as a cost-effective way to advance their causes, or simply to earn ad revenue. In the run-up to the 2016 elections, Facebook and Twitter performed poorly, amplifying a great deal of misinformation. CSMR Faculty Director Paul Resnick writes in The Conversation that their performance looks much different in the 2018 cycle.

We think the Russian trolls are still out there: attempting to identify them with machine learning

We trained a machine learning model on a publicly available dataset of 2,848 Twitter accounts flagged as Russian trolls during the Mueller investigation. We then applied the model to selected journalists’ Twitter feeds and identified Russian trolls attempting to influence them. This paper, originally posted on Medium, describes the Center for Social Media Responsibility’s ongoing research in this area.
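
For readers curious what the train-then-apply workflow might look like, here is a minimal sketch in Python. The file names, the TF-IDF text features, and the logistic-regression model are illustrative assumptions, not the center’s actual pipeline.

```python
# Minimal sketch of the workflow described above. Assumptions (not from the
# original): the labeled accounts sit in a CSV with one row per account, a
# concatenated "tweets" text column, and a binary "is_troll" label; TF-IDF
# plus logistic regression stand in for whatever features and model CSMR used.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Labeled data: the 2,848 flagged accounts plus a sample of ordinary accounts.
accounts = pd.read_csv("labeled_accounts.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    accounts["tweets"], accounts["is_troll"], test_size=0.2, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

# Apply the trained model to accounts replying to a journalist's feed.
replies = pd.read_csv("journalist_replies.csv")  # hypothetical file
replies["troll_score"] = model.predict_proba(replies["tweets"])[:, 1]
print(replies.sort_values("troll_score", ascending=False).head())
```

A real classifier would likely also draw on account metadata and posting behavior, not just tweet text, but the two-stage structure, fit on labeled accounts, then score new ones, is the same.
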

CSMR Advances Algorithm Auditing

CSMR faculty member Christian Sandvig is coordinating a cross-university, cross-industry partnership to develop “algorithm audits”: new methods for holding automated decision-making on social media platforms accountable.

Algorithm auditing is an emerging term of art for a research design that has shown promise in identifying unwanted consequences of automation on social media platforms. Auditing in this sense takes its name from the social-scientific “audit study,” a field experiment in which a single feature is manipulated, though it is also reminiscent of a financial audit. An overview of the area was recently published in Nature.
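
To make the paired design concrete, below is a minimal sketch in Python. Everything in it is hypothetical: fetch_outcome() is a stand-in for a real platform query (say, a post’s rank in a feed), and the manipulated feature and the simulated disparity are invented; a real audit would probe a live platform and apply a proper significance test.

```python
# Minimal sketch of a paired "audit study": submit matched probes that differ
# in exactly one feature, then compare outcomes. fetch_outcome() simulates a
# platform response with a hidden 0.15 disparity the audit should detect.
import random
from statistics import mean

random.seed(0)  # make the simulation reproducible

def fetch_outcome(probe: dict) -> float:
    """Simulated platform response; a real audit would query the platform."""
    base = random.random()
    return base + (0.15 if probe["feature"] == "treated" else 0.0)

# Matched pairs: identical content, with only one feature manipulated.
contents = [f"post #{i}" for i in range(500)]
treated = [{"text": c, "feature": "treated"} for c in contents]
control = [{"text": c, "feature": "control"} for c in contents]

gap = mean(fetch_outcome(p) for p in treated) - mean(
    fetch_outcome(p) for p in control
)
print(f"estimated effect of the manipulated feature: {gap:.3f}")
```

Because the probe pairs are identical except for the manipulated feature, any systematic gap in outcomes can be attributed to how the platform treats that feature, which is the core logic of the audit-study design.
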

A CSMR-led multidisciplinary team, described at http://auditingalgorithms.science/, has produced events, reading lists, and educational activities, and will publish a white paper that aims to coalesce this new area of inquiry. Based at Michigan, the effort includes the University of Illinois, Harvard University, and participants who have worked at social media and technology companies such as Facebook, Google, Microsoft, and IBM. Participants are working to clarify the potential dangers of social media algorithms and to specify those dangers as new research problems. They have presented existing methods for auditing as well as the case for new ones. Ultimately, they hope to define a research agenda that yields new insights, advancing science and benefiting society in the area of social media responsibility.

This initiative is sponsored by the National Science Foundation.