Catching Fake News

Two years after the 2016 election, are we winning the war against digital misinformation and manipulation? CSMR faculty and affiliates described the technical and journalistic challenges of identifying fake news and manipulated information online and assessed the effectiveness of the response by platforms like Facebook in the U.S., Europe, and around the world.

Brendan Nyhan, Professor, Ford School, acted as moderator, and panelists included Mark Ackerman, Professor, School of Information; Ceren Budak, Assistant Professor, School of Information; Fredrik Laurin, Knight-Wallace Fellow, Special Projects Editor for Current Affairs, SVT (Swedish Television); and Rada Mihalcea, Professor, Electrical Engineering and Computer Science. 

The Dissonance event series is composed of conversations at the confluence of technology, policy, privacy, security and law and is co-sponsored by the School of Information. 

Social Support, Reciprocity, and Anonymity in Responses to Sexual Abuse Disclosures on Social Media

Researchers examine disclosure, support-seeking, and support-providing behaviors in the context of sexual abuse on social media, and the role of anonymity in the seeking and provision of social support.

NAZANIN ANDALIBI, University of Michigan School of Information
OLIVER L. HAIMSON, University of Michigan School of Information
MUNMUN DE CHOUDHURY, Georgia Institute of Technology
ANDREA FORTE, Drexel University

ABSTRACT – Seeking and providing support is challenging. When people disclose sensitive information, audience responses can substantially impact the discloser’s wellbeing. We use mixed methods to understand responses to online sexual abuse-related disclosures on Reddit. We characterize disclosure responses, then investigate relationships between post content, comment content, and anonymity. We illustrate which types of support sought and provided in posts and comments co-occur. We find that posts seeking support receive more comments, and comments from “throwaway” (i.e., anonymous) accounts are more likely on posts also from throwaway accounts. Anonymous commenting enables commenters to share intimate content such as reciprocal disclosures and supportive messages, and commenter anonymity is not associated with aggressive or unsupportive comments. We argue that anonymity is an essential factor in designing social technologies that facilitate support seeking and provision in socially stigmatized contexts, and provide implications for social media site design. CAUTION: This paper includes content about sexual abuse.

Human-centered computing → Social media; Human-centered computing → Collaborative and social computing; Human-centered computing → Human computer interaction (HCI); Human-centered computing → Social networking sites; Information systems → Social networking sites

ACM Transactions on Computer-Human Interaction (TOCHI), Volume 25, Issue 5, October 2018, Article No. 28: TBD. http://dx.doi.org/10.1145/3234942

Designing Effective Privacy Notices and Controls

Researchers at CSMR, CMU, and RAND outline design principles to facilitate the development of privacy notices and controls tailored to the requirements, opportunities, and limitations of specific systems.

Florian Schaub, University of Michigan
Rebecca Balebako, RAND Corporation
Lorrie Faith Cranor, Carnegie Mellon University
Abstract – Privacy notice and choice are essential aspects of privacy and data protection regulation worldwide. Yet, today’s privacy notices and controls are surprisingly ineffective at informing users or allowing them to express choice. We analyze why existing privacy notices fail to inform users and tend to leave them helpless, and discuss principles for designing more effective privacy notices and controls.
Index Terms – Privacy, Public Policy Issues, Human Factors, Human-Computer Interaction
IEEE Internet Computing (Early Access), June 2017 – https://doi.org/10.1109/MIC.2017.265102930

Unlike in 2016, there was no spike in misinformation this election cycle

Cyberwarriors and influence peddlers spread plausible misinformation as a cost-effective way to advance their cause – or just to earn ad revenue. In the run-up to the 2016 elections, Facebook and Twitter performed poorly, amplifying a lot of misinformation. CSMR Faculty Director Paul Resnick writes in The Conversation that their performance looks much different in the 2018 cycle.

We think the Russian Trolls are still out there: attempts to identify them with ML

We trained a machine learning model on a publicly available dataset of 2,848 Twitter accounts that were flagged as Russian trolls in the course of the Mueller investigation. We then applied the model to select journalists’ Twitter feeds and identified Russian trolls attempting to influence them. This paper, originally posted on Medium, describes the Center for Social Media Responsibility’s ongoing research in this area.
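The general approach described above can be illustrated with a toy sketch: train a text classifier on tweets labeled by account type, then score unseen tweets. This is not CSMR's actual pipeline or features; the minimal naive Bayes classifier and the example tweets below are hypothetical stand-ins for the real labeled dataset.

```python
# Minimal sketch of label-and-classify troll detection (hypothetical data,
# not CSMR's actual model): a naive Bayes text classifier built from scratch.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-class token counts
        self.class_counts = Counter()            # class priors
        self.vocab = set()

    def train(self, examples):
        for text, label in examples:
            self.class_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.class_counts.values())
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for w in tokenize(text):
                # Laplace smoothing over the shared vocabulary
                lp += math.log((self.word_counts[label][w] + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical labeled tweets standing in for the flagged-account dataset
train_data = [
    ("crooked media lies again wake up", "troll"),
    ("deep state hoax patriots unite", "troll"),
    ("great coffee this morning", "benign"),
    ("excited for the game tonight", "benign"),
]
clf = NaiveBayes()
clf.train(train_data)
print(clf.predict("media hoax lies"))  # -> troll (under this toy data)
```

A production system would use far richer features (posting cadence, network structure, metadata) and a stronger model, but the train-on-labeled-accounts, apply-to-new-feeds structure is the same.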

Threading is Sticky: How Threaded Conversations Promote Comment System User Retention

The Guardian newspaper’s introduction of single-layer hierarchical threading to its comment section created a natural experiment that allowed CSMR researchers to better understand the consequences of this design change. Consistent with the publisher’s aims, their research shows that the new design was followed by an increase in the rate of individuals returning to post again, both on any given article and on the commenting platform as a whole.

Ceren Budak, University of Michigan
R. Kelly Garrett, Ohio State University
Paul Resnick, University of Michigan
Julia Kamin, University of Michigan

The Guardian—the fifth most widely read online newspaper in the world as of 2014—changed conversations on its commenting platform by altering its design from non-threaded to single-level threaded in 2012. We studied this naturally occurring experiment to investigate the impact of conversation threading on user retention as mediated by several potential changes in conversation structure and style. Our analysis shows that the design change made new users significantly more likely to comment a second time, and that this increased stickiness is due in part to a higher fraction of comments receiving responses after the design change. In mediation analysis, other anticipated mechanisms such as reciprocal exchanges and comment civility did not help to explain users’ decision to return to the commenting system; indeed, civility did not increase after the design change and reciprocity declined. These analyses show that even simple design choices can have a significant impact on news forums’ stickiness. Further, they suggest that this influence is more powerfully shaped by affordances—the new system made responding easier—than by changes in users’ attention to social norms of reciprocity or civility. This has an array of implications for designers.

Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 27 (November 2017), 20 pages. https://doi.org/10.1145/3134662 

Social Media as Social Transition Machinery

CSMR research into life transitions describes the ways that different social media platforms work together to enable people to carry out different types of transition work, while drawing from different types of support networks. To best facilitate online transition work, social media platforms should be designed to foster social connectivity while acknowledging the importance of platform separation.

Oliver L. Haimson, University of Michigan School of Information

Social media, and people’s online self-presentations and social networks, add complexity to people’s experiences managing changing identities during life transitions. I use gender transition as a case study to understand how people experience liminality on social media. I qualitatively analyzed data from transition blogs on Tumblr (n=240), a social media blogging site on which people document their gender transitions, and in-depth interviews with transgender bloggers (n=20). I apply ethnographer van Gennep’s liminality framework to a social media context and contribute a new understanding of liminality by arguing that reconstructing one’s online identity during life transitions is a rite of passage. During life transitions, people present multiple identities simultaneously on different social media sites that together comprise what I call social transition machinery. Social transition machinery describes the ways that, for people facing life transitions, multiple social media sites and networks often remain separate, yet work together to facilitate life transitions.

KEYWORDS Social media; social network sites; life transitions; identity transitions; online identity; Tumblr; Facebook; transgender; non-binary; LGBTQ.

PACM Human-Computer Interaction, Vol. 2, No. CSCW, Article 63. Publication date: November 2018. https://doi.org/10.1145/3274332 .

CSMR Advances Algorithm Auditing

CSMR faculty member Christian Sandvig is coordinating a cross-university and cross-industry partnership to develop “algorithm audits:” new methods to provide accountability to automated decision-making on social media platforms.

Algorithm auditing is an emerging term of art for a research design that has shown promise in identifying unwanted consequences of automation on social media platforms. Auditing in this sense takes its name from the social scientific “audit study” where one feature is manipulated in a field experiment, although it is also reminiscent of a financial audit. An overview of the area was recently published in Nature.

A CSMR-led multidisciplinary team, described at http://auditingalgorithms.science/, has produced events, reading lists, and educational activities, and will publish a white paper that aims to coalesce this new area of inquiry. Based at Michigan, the effort includes the University of Illinois, Harvard University, and participants who have worked at social media and tech companies like Facebook, Google, Microsoft, and IBM. Participants are working to clarify the potential dangers of social media algorithms and to specify these dangers as new research problems. They have presented existing methods for auditing as well as the need for new methods. Ultimately, they hope to define a research agenda that can provide new insights that advance science and benefit society in the area of social media responsibility.

This initiative is sponsored by the National Science Foundation.

9th Annual U-M Social Media Day

UMSI Assistant Professor Florian Schaub on Social Media Privacy and Action: https://twitter.com/UMich/status/1012800331363831808 "Privacy is not just about protecting yourself, it's about protecting your community."

When Online Harassment is Perceived as Justified

CSMR students and faculty presented a paper on online vigilantism and counterbalancing intervention at the Twelfth International AAAI Conference on Web and Social Media. Their research helps platform companies understand and moderate the effects of social conformity and the propensity for retributive justice.
Lindsay Blackwell, University of Michigan School of Information
Tianying Chen, University of Michigan School of Information
Sarita Schoenebeck, University of Michigan School of Information
Cliff Lampe, University of Michigan School of Information

Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing—behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment when directed toward a woman who has stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified—but not more appropriate—when the target has committed some offense. Promisingly, we find that exposure to a bystander intervention reduces this perception. We discuss alternative approaches and designs for responding to harassment online.
Association for the Advancement of Artificial Intelligence (AAAI) International Conference on Web and Social Media, June 27, 2018. https://aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17902