Unlike in 2016, there was no spike in misinformation this election cycle

Cyberwarriors and influence peddlers spread plausible misinformation as a cost-effective way to advance their causes, or simply to earn ad revenue. In the run-up to the 2016 elections, Facebook and Twitter performed poorly, amplifying a great deal of misinformation. CSMR Faculty Director Paul Resnick writes in The Conversation that their performance looked quite different in the 2018 cycle.

We think the Russian Trolls are still out there: attempts to identify them with ML

We trained a machine learning model on a publicly available dataset of 2,848 Twitter accounts flagged as Russian trolls during the Mueller investigation, then applied the model to selected journalists' Twitter feeds and identified Russian trolls attempting to influence them. This piece, originally posted on Medium, describes the Center for Social Media Responsibility's ongoing research in this area.
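As a rough illustration of how such a classifier might be built, the sketch below trains a simple text model on labeled accounts and scores new accounts. The feature set, model choice, and all data shown are assumptions for demonstration, not the Center's actual pipeline.

```python
# Illustrative sketch only: a bag-of-words classifier trained on tweets from
# flagged troll accounts vs. ordinary accounts, then applied to accounts
# replying to a journalist. All data, features, and model choices here are
# assumptions for demonstration, not CSMR's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one text blob per account (concatenated tweets)
# and a binary label (1 = flagged troll account, 0 = control account).
train_texts = [
    "concatenated tweets from a flagged troll account ...",
    "concatenated tweets from an ordinary control account ...",
]
train_labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Score accounts in a journalist's replies; higher = more troll-like.
reply_texts = ["concatenated tweets from an account replying to a journalist"]
troll_scores = model.predict_proba(reply_texts)[:, 1]
print(troll_scores)
```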

Pseudonymous Parents: Comparing Parenting Roles and Identities on the Mommit and Daddit Subreddits

CSMR researchers explore gender-role conformity and community, with support from the Mozilla Corporation and the National Science Foundation. Their work helps platforms better understand and serve parents seeking peer support and discussion on issues of discipline, competition, purchases, faith, and online behavior.
Tawfiq Ammari, University of Michigan School of Information
Sarita Schoenebeck, University of Michigan School of Information
Daniel M. Romero, University of Michigan School of Information
Abstract - Gender equality between mothers and fathers is critical for the social and economic wellbeing of children, mothers, and families. Over the past 50 years, gender roles have begun to converge, with mothers doing more work outside of the home and fathers doing more domestic work. However, popular parenting sites in the U.S. continue to be heavily gendered. We explore parenting roles and identities on the platform Reddit.com, which is used by both mothers and fathers. We draw on seven years of data from three major parenting subreddits—Parenting, Mommit, and Daddit—to investigate what topics parents discuss on Reddit and how they vary across parenting subreddits. We find some similarities in topics across the three boards, such as sleep training, as well as differences, such as fathers talking about custody cases and Halloween. We discuss the role of pseudonymity in providing parents with a platform to discuss sensitive parenting topics. We conclude by highlighting the benefits of both gender-inclusive and role-specific parenting boards. This work provides a roadmap for using computational techniques to understand parenting practices online at large scale.
Author Keywords: Reddit; gender; parenting; anonymity; pseudonymity; social media; language.
ACM Classification Keywords: H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
ACM CHI 2018 Conference on Human Factors in Computing Systems, April 21–26, 2018, Montreal, QC, Canada. https://doi.org/10.1145/3173574.3174063
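To give a sense of the kind of large-scale computational analysis the paper describes, the sketch below fits a generic topic model over posts from the three subreddits and compares each board's average topic mixture. It is a stand-in under assumed data and parameters, not the authors' published method.

```python
# Illustrative sketch: fit a generic topic model over posts from the three
# subreddits, then compare each board's average topic mixture. Corpus loading
# and all parameters are assumptions; this is not the authors' exact method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: (subreddit, post text) pairs gathered elsewhere.
posts = [
    ("Parenting", "advice on sleep training our toddler"),
    ("Mommit", "sleep training finally worked after a week"),
    ("Daddit", "custody schedule questions and halloween costume ideas"),
]

texts = [text for _, text in posts]
counts = CountVectorizer(stop_words="english").fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: posts, columns: topic weights

# Averaging topic weights per subreddit highlights where the boards diverge.
for sub in ("Parenting", "Mommit", "Daddit"):
    rows = [i for i, (s, _) in enumerate(posts) if s == sub]
    print(sub, doc_topics[rows].mean(axis=0).round(3))
```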

Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

Researchers at CSMR and Illinois Institute of Technology have developed predictive analytics for forecasting both the presence and intensity of hostility in Instagram comments ten or more hours in the future. Their work helps resource-constrained platform providers better prioritize posts for moderation.
Ping Liu, Illinois Institute of Technology
Joshua Guberman, Illinois Institute of Technology
Libby Hemphill, University of Michigan
Aron Culotta, Illinois Institute of Technology
Abstract - Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).
Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018). Publication date: April 2018. https://arxiv.org/abs/1804.06759
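The forecasting setup for task 1 can be sketched as follows: featurize the initial window of comments on a post, then train a classifier to predict whether hostility appears later. The features, model, and data below are simplified assumptions, not the paper's actual linguistic and social feature set.

```python
# Illustrative sketch of task 1: from the initial window of comments on a
# post, predict whether a hostile comment will appear later. The toy features
# and data below are simplified assumptions, not the paper's feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(comments):
    """Toy features from a post's earliest comments: volume, length, breadth."""
    lengths = [len(c["text"].split()) for c in comments]
    users = {c["user"] for c in comments}
    return [len(comments), float(np.mean(lengths)), len(users)]

# Hypothetical labeled examples: early-comment windows paired with whether a
# hostile comment appeared ten or more hours later (1) or not (0).
windows = [
    [{"user": "a", "text": "cute photo"}, {"user": "b", "text": "love this"}],
    [{"user": "c", "text": "why would you even post this"}],
]
labels = [0, 1]

clf = RandomForestClassifier(random_state=0)
clf.fit([window_features(w) for w in windows], labels)

# Probability that hostility appears later in a new discussion.
print(clf.predict_proba([window_features(windows[0])])[:, 1])
```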

Can AI Really Solve Facebook’s Problems?

Florian Schaub, Assistant Professor of Information, was interviewed by Scientific American for an article on Facebook's efforts to reassure Congress that artificial intelligence can help identify fake news and protect privacy. Excerpt:
“It seems Facebook is going through a phase of reckoning and now starting to realize how socially impactful their platform is,” Schaub says. “For a long time they felt they were serving a great social function in getting people connected, and now they’re realizing there’s actually a lot of responsibility that comes with that. That seems to be a little bit of a shift, but at the same time this is not the first time we’re hearing Zuckerberg apologize for indiscretions on Facebook.”

CSMR Researchers Engage with Industry

In 2018, CSMR faculty and students have been busy engaging with media platforms to help them deal with misinformation, privacy, harassment, content moderation, polarization, and other issues. Engagements range from participation in industry-sponsored roundtables, to lectures at companies about our research, to paid consulting engagements and internships.

Online Harassment and Content Moderation: The Case of Blocklists

Online harassment is a multi-faceted problem with no easy solutions. Social media platforms are squeezed between charges of indifference to harassment and charges of suppressing free speech. CSMR faculty participated in research into the challenge of designing technical features and seeding social practices that promote constructive discussion and discourage abusive behavior.
Shagun Jhaver, Georgia Institute of Technology
Sucheta Ghoshal, Georgia Institute of Technology
Amy Bruckman, Georgia Institute of Technology
Eric Gilbert, John Derby Evans Associate Professor, University of Michigan School of Information
Abstract - Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this paper, we present a rich description of Twitter blocklists - why they are needed, how they work, and their strengths and weaknesses in practice. Next, we use blocklists to interrogate online harassment - the forms it takes, as well as tactics used by harassers. Specifically, we interviewed both people who use blocklists to protect themselves, and people who are blocked by blocklists. We find that users are not adequately protected from harassment, and at the same time, many people feel they are blocked unnecessarily and unfairly. Moreover, we find that not all users agree on what constitutes harassment. Based on our findings, we propose design interventions for social network sites with the aim of protecting people from harassment, while preserving freedom of speech.
CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing; Ethnographic studies.
Additional Key Words and Phrases: Online harassment, moderation, blocking mechanisms, GamerGate, blocklists
ACM Transactions on Computer-Human Interaction, Vol. 25, No. 2, Article 1. Publication date: March 2018. https://doi.org/10.1145/3185593
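Since the abstract describes blocklists as subscription-based, a minimal sketch may help clarify the mechanism: a subscriber inherits every block on the lists they follow. The classes and names below are hypothetical, not Twitter's or any real blocklist tool's API.

```python
# Hypothetical sketch of the subscription-blocklist mechanism the paper
# studies: a subscriber inherits every block on the lists they follow.
# These classes are illustrative, not any real tool's API.
class Blocklist:
    def __init__(self, name):
        self.name = name
        self.blocked = set()  # account ids preemptively blocked by this list

    def add(self, account_id):
        self.blocked.add(account_id)

class Subscriber:
    def __init__(self):
        self.lists = []  # blocklists this user follows

    def subscribe(self, blocklist):
        self.lists.append(blocklist)

    def is_blocked(self, account_id):
        # Blocked if any subscribed list contains the account.
        return any(account_id in bl.blocked for bl in self.lists)

shared_list = Blocklist("community-curated-list")
shared_list.add("harassing_account")

user = Subscriber()
user.subscribe(shared_list)
print(user.is_blocked("harassing_account"))  # True
```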

Why privacy policies are falling short…

Florian Schaub, Assistant Professor of Information, partnered with Facebook on the Trust Transparency Control (TTC) Labs initiative. Initiated and supported by Facebook and built on collaboration, TTC Labs has grown to include more than sixty organizations, including major global businesses, startups, civic organizations, and academic institutions.