Two years after the 2016 election, are we winning the war against digital misinformation and manipulation? CSMR faculty and affiliates described the technical and journalistic challenges of identifying fake news and manipulated information online and assessed the effectiveness of the response by platforms like Facebook in the U.S., Europe, and around the world.
Brendan Nyhan, Professor, Ford School, moderated the panel; panelists included Mark Ackerman, Professor, School of Information; Ceren Budak, Assistant Professor, School of Information; Fredrik Laurin, Knight-Wallace Fellow and Special Projects Editor for Current Affairs, SVT (Swedish Television); and Rada Mihalcea, Professor, Electrical Engineering and Computer Science.
The Dissonance event series features conversations at the confluence of technology, policy, privacy, security, and law, and is co-sponsored by the School of Information.
Researchers examine disclosure, support-seeking, and support-providing behaviors in the context of sexual abuse on social media, and the role anonymity plays in seeking and providing social support.
NAZANIN ANDALIBI, University of Michigan School of Information
OLIVER L. HAIMSON, University of Michigan School of Information
MUNMUN DE CHOUDHURY, Georgia Institute of Technology
ANDREA FORTE, Drexel University
ABSTRACT – Seeking and providing support is challenging. When people disclose sensitive information, audience responses can substantially impact the discloser’s wellbeing. We use mixed methods to understand responses to online sexual abuse-related disclosures on Reddit. We characterize disclosure responses, then investigate relationships between post content, comment content, and anonymity. We illustrate which types of support sought and provided co-occur in posts and comments. We find that posts seeking support receive more comments, and that comments from “throwaway” (i.e., anonymous) accounts are more likely on posts that are themselves from throwaway accounts. Anonymous commenting enables commenters to share intimate content such as reciprocal disclosures and supportive messages, and commenter anonymity is not associated with aggressive or unsupportive comments. We argue that anonymity is an essential factor in designing social technologies that facilitate support seeking and provision in socially stigmatized contexts, and we provide implications for social media site design. CAUTION: This paper includes content about sexual abuse.
CCS Concepts – Human-centered computing → Social media; Human-centered computing → Collaborative and social computing; Human-centered computing → Human-computer interaction (HCI); Human-centered computing → Social networking sites; Information systems → Social networking sites
ACM Transactions on Computer-Human Interaction (TOCHI), Volume 25, Issue 5, October 2018, Article No. 28: TBD. http://dx.doi.org/10.1145/3234942.
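To make the anonymity mechanics concrete, here is a minimal Python sketch of one way to flag “throwaway” accounts by username and tally how throwaway comments cluster on throwaway posts. The regex and sample data are illustrative assumptions on our part, not the paper’s actual operationalization.

```python
import re

# Toy heuristic: flag "throwaway" accounts by username. The pattern and
# (post_author, comment_author) pairs below are hypothetical examples.
THROWAWAY = re.compile(r"throw.?away", re.IGNORECASE)

def is_throwaway(username: str) -> bool:
    """Return True if the username looks like a throwaway account."""
    return bool(THROWAWAY.search(username))

# Hypothetical (post author, comment author) pairs.
pairs = [
    ("Throwaway_Grl", "throwaway9981"),
    ("Throwaway_Grl", "steady_user"),
    ("steady_user", "another_user"),
    ("steady_user", "throw_away_acct"),
]

# Share of throwaway comments, split by post anonymity, mirroring the
# finding that throwaway comments cluster on throwaway posts.
for anon_post in (True, False):
    comments = [c for p, c in pairs if is_throwaway(p) == anon_post]
    if comments:
        rate = sum(is_throwaway(c) for c in comments) / len(comments)
        print(f"anonymous post={anon_post}: throwaway-comment rate {rate:.2f}")
```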
Researchers at CSMR, CMU, and RAND outline design principles to facilitate the development of privacy notices and controls tailored to the requirements, opportunities, and limitations of specific systems.
Florian Schaub, University of Michigan
Rebecca Balebako, RAND Corporation
Lorrie Faith Cranor, Carnegie Mellon University
Abstract – Privacy notice and choice are essential aspects of privacy and data protection regulation worldwide. Yet, today’s privacy notices and controls are surprisingly ineffective at informing users or allowing them to express choice. We analyze why existing privacy notices fail to inform users and tend to leave them helpless, and discuss principles for designing more effective privacy notices and controls.
Index Terms – Privacy, Public Policy Issues, Human Factors, Human-Computer Interaction
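One of the principles the authors discuss is showing a notice at the moment data is about to be used, paired with an actionable choice, rather than burying it in a policy shown once at setup. The sketch below illustrates that idea in Python; the names (ask_consent, request_location, CONSENT_STORE) are hypothetical, not an API from the paper.

```python
CONSENT_STORE = {}  # persisted user decisions, keyed by data type

def ask_consent(data_type: str, purpose: str) -> bool:
    """Show a short, contextual notice and record the user's choice."""
    if data_type in CONSENT_STORE:  # honor a prior decision; don't re-prompt
        return CONSENT_STORE[data_type]
    answer = input(f"This feature uses your {data_type} to {purpose}. Allow? [y/n] ")
    CONSENT_STORE[data_type] = answer.strip().lower() == "y"
    return CONSENT_STORE[data_type]

def request_location():
    # The notice appears just in time, when the data is actually needed.
    if ask_consent("location", "find nearby events"):
        return (42.2808, -83.7430)  # placeholder coordinates
    return None  # degrade gracefully when the user declines
```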
Cyberwarriors and influence peddlers spread plausible misinformation as a cost-effective way to advance their causes – or simply to earn ad revenue. In the run-up to the 2016 elections, Facebook and Twitter performed poorly, amplifying large amounts of misinformation. CSMR Faculty Director Paul Resnick writes in The Conversation that their performance looks very different in the 2018 cycle.
We used a publicly available dataset of 2,848 Twitter accounts flagged as Russian trolls in connection with the Mueller investigation to train a machine learning model. We then applied the model to selected journalists’ Twitter feeds and identified Russian trolls attempting to influence them. This paper, originally posted on Medium, describes the Center for Social Media Responsibility’s ongoing research in this area.
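For readers curious about the mechanics, here is a minimal Python sketch of the kind of pipeline described: train a text classifier on tweets from the flagged troll accounts versus a background sample, then score accounts appearing in journalists’ feeds. The feature set and model here (TF-IDF plus logistic regression) are our assumptions for illustration, not necessarily what CSMR used, and the data loading is mocked with placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

troll_tweets = [  # stand-ins for tweets from the 2,848 flagged accounts
    "placeholder troll tweet one",
    "placeholder troll tweet two",
]
baseline_tweets = [  # stand-ins for tweets from a random account sample
    "placeholder ordinary tweet one",
    "placeholder ordinary tweet two",
]

texts = troll_tweets + baseline_tweets
labels = [1] * len(troll_tweets) + [0] * len(baseline_tweets)

# Bag-of-words classifier: learns which word patterns distinguish
# flagged troll accounts from the baseline sample.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score replies appearing in a journalist's feed; higher scores mark
# accounts whose language resembles the flagged troll accounts.
journalist_replies = ["placeholder reply in a journalist's thread"]
troll_scores = model.predict_proba(journalist_replies)[:, 1]
print(troll_scores)
```

A production pipeline would likely add account-level signals (creation date, posting cadence, follower patterns) alongside text features, but the train-then-score structure is the core idea.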