Social Media as Social Transition Machinery

CSMR research into life transitions describes how multiple social media platforms work together to enable people to carry out different types of transition work while drawing on distinct support networks. To best facilitate online transition work, social media platforms should be designed to foster social connectivity while acknowledging the importance of platform separation.

Oliver L. Haimson, University of Michigan School of Information

Social media, and people’s online self-presentations and social networks, add complexity to people’s experiences managing changing identities during life transitions. I use gender transition as a case study to understand how people experience liminality on social media. I qualitatively analyzed data from transition blogs on Tumblr (n=240), a social media blogging site on which people document their gender transitions, and in-depth interviews with transgender bloggers (n=20). I apply ethnographer van Gennep’s liminality framework to a social media context and contribute a new understanding of liminality by arguing that reconstructing one’s online identity during life transitions is a rite of passage. During life transitions, people present multiple identities simultaneously on different social media sites that together comprise what I call social transition machinery. Social transition machinery describes the ways that, for people facing life transitions, multiple social media sites and networks often remain separate, yet work together to facilitate life transitions.

KEYWORDS Social media; social network sites; life transitions; identity transitions; online identity; Tumblr; Facebook; transgender; non-binary; LGBTQ.

PACM Human-Computer Interaction, Vol. 2, No. CSCW, Article 63. Publication date: November 2018. https://doi.org/10.1145/3274332

CSMR Advances Algorithm Auditing

CSMR faculty member Christian Sandvig is coordinating a cross-university and cross-industry partnership to develop “algorithm audits”: new methods to provide accountability for automated decision-making on social media platforms.

Algorithm auditing is an emerging term of art for a research design that has shown promise in identifying unwanted consequences of automation on social media platforms. Auditing in this sense takes its name from the social scientific “audit study” where one feature is manipulated in a field experiment, although it is also reminiscent of a financial audit. An overview of the area was recently published in Nature.

A CSMR-led multidisciplinary team, described at http://auditingalgorithms.science/, has produced events, reading lists, and educational activities, and will publish a white paper that aims to coalesce this new area of inquiry. Based at Michigan, the effort includes the University of Illinois, Harvard University, and participants who have worked at social media and tech companies such as Facebook, Google, Microsoft, and IBM. Participants are working to clarify the potential dangers of social media algorithms and to specify these dangers as new research problems. They have presented existing auditing methods and articulated the need for new ones. Ultimately, they hope to define a research agenda that can provide new insights that advance science and benefit society in the area of social media responsibility.
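
As a concrete illustration of the audit-study design described above, the sketch below manipulates a single feature of an otherwise identical request and compares outcomes across conditions. It is a minimal sketch with a simulated system: query_platform, the feature names, and the injected disparity are all hypothetical stand-ins, not any real platform's API.

```python
# Minimal sketch of a paired audit study: vary exactly one feature of an
# otherwise identical request and compare outcomes across conditions.
# The "platform" is simulated; query_platform is a hypothetical stand-in.
import random
from statistics import mean

def query_platform(profile: dict) -> float:
    # Simulated response of the system under audit. In a real audit, this
    # would be an actual request to the platform being studied.
    score = random.gauss(0.5, 0.05)
    if profile.get("name_style") == "B":  # disparity injected for illustration
        score -= 0.1
    return score

def paired_audit(base: dict, feature: str, a, b, trials: int = 1000) -> float:
    """Estimate the effect of one feature while holding all else constant."""
    diffs = []
    for _ in range(trials):
        out_a = query_platform({**base, feature: a})
        out_b = query_platform({**base, feature: b})
        diffs.append(out_a - out_b)
    return mean(diffs)

effect = paired_audit({"query": "apartment listings"}, "name_style", "A", "B")
print(f"Estimated effect of the manipulated feature: {effect:.3f}")
```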

This initiative is sponsored by the National Science Foundation.

When Online Harassment is Perceived as Justified

CSMR students and faculty presented a paper on online vigilantism and counterbalancing intervention at the Twelfth International AAAI Conference on Web and Social Media. Their research helps platform companies understand and moderate the effects of social conformity and the propensity for retributive justice.
Lindsay Blackwell, University of Michigan School of Information
Tianying Chen, University of Michigan School of Information
Sarita Schoenebeck, University of Michigan School of Information
Cliff Lampe, University of Michigan School of Information

ABSTRACT - Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing—behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment directed toward a woman who had stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified—but not more appropriate—when the target has committed some offense. Promisingly, we find that exposure to a bystander intervention reduces this perception. We discuss alternative approaches and designs for responding to harassment online.
Association for the Advancement of Artificial Intelligence (AAAI) International Conference on Web and Social Media, June 27, 2018. https://aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17902
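
The core analysis in experiments like these compares participants' ratings across randomly assigned conditions. Below is a minimal sketch of such a between-subjects comparison, assuming ratings of how "deserved" the harassment seemed; the variable names and data are fabricated placeholders, not the study's materials or results.

```python
# Illustrative between-subjects comparison; all ratings are fabricated
# placeholders, not data from the study.
from scipy import stats

deserved_offense = [6, 5, 7, 4, 6, 5, 6]     # target described as having offended
deserved_no_offense = [3, 2, 4, 3, 2, 3, 4]  # no offense described

t, p = stats.ttest_ind(deserved_offense, deserved_no_offense)
print(f"t = {t:.2f}, p = {p:.3f}")  # small p suggests perceptions differ by condition
```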

“Genderfluid” or “Attack Helicopter”: Responsible HCI Practice with Non-Binary Gender Variation in Online Communities

"Genderfluid" or "Attack Helicopter": Responsible HCI Practice with Non-Binary Gender Variation in Online Communities

Researchers at CSMR and Yahoo have developed guidelines and a practical case study for the careful and ethical analysis of gender on social media platforms. The authors argue that careful, sensitive study design, analysis, and interpretation are an important commitment for the HCI research community.
Samantha Jaroszewski, Yahoo
Danielle Lottridge, Yahoo
Oliver L. Haimson, University of Michigan School of Information
Katie Quehl, Yahoo

ABSTRACT - As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze, and interpret research participants' genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. The first, on Tumblr, a blogging platform, yielded a rich set of gender identities with very few aggressive or resistive responses; the second, on online Fantasy Football, yielded the opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes "trolls'" opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces, and offer a model for inclusive mixed-methods HCI research.

Author Keywords - Survey research; social media; gender; non-binary; transgender; LGBTQ; online communities; trolling; Tumblr; Fantasy sports.

ACM Classification Keywords - H.5.3. Information interfaces and presentation (e.g., HCI): Group and Organization Interfaces: Collaborative computing, Computer-supported cooperative work, Web-based interaction.
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Publication date: April 2018. https://doi.org/10.1145/3173574.3173881
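
One practical step such an analysis implies is coding free-text gender responses into inclusive categories while flagging, rather than silently discarding, mocking responses. The sketch below illustrates that idea; the keyword lists and helper names are illustrative placeholders, not the authors' published codebook.

```python
# Illustrative coding of free-text gender survey responses. The term lists
# are placeholders, not the authors' codebook.
NON_BINARY_TERMS = {"genderfluid", "non-binary", "nonbinary", "agender", "genderqueer"}
MOCKING_TERMS = {"attack helicopter", "apache"}  # known troll memes

def code_gender(response: str) -> str:
    text = response.strip().lower()
    if text in {"woman", "female", "f"}:
        return "woman"
    if text in {"man", "male", "m"}:
        return "man"
    if any(term in text for term in MOCKING_TERMS):
        return "mocking"  # flag for separate review rather than discarding
    if text in NON_BINARY_TERMS:
        return "non-binary"
    return "other/open-ended"  # retain for qualitative coding

responses = ["Female", "genderfluid", "Apache attack helicopter", "demigirl"]
print([code_gender(r) for r in responses])
```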

Announcing Pregnancy Loss on Facebook: A Decision-Making Framework for Stigmatized Disclosures on Identified Social Network Sites

CSMR researchers have developed a six-factor framework for disclosures of pregnancy loss on social media sites such as Facebook: self-related, audience-related, societal, platform- and affordance-related, network-level, and temporal factors. While pregnancy loss was the focus, the framework could apply to other sensitive social media disclosures.
Nazanin Andalibi
Andrea Forte

ABSTRACT - Pregnancy loss is a common experience that is often not disclosed in spite of potential disclosure benefits such as social support. To understand how and why people disclose pregnancy loss online, we interviewed 27 women in the U.S. who are social media users and had recently experienced pregnancy loss. We developed a decision-making framework explaining pregnancy loss disclosures on identified social network sites (SNS) such as Facebook. We introduce network-level reciprocal disclosure, a theory of how disclosure reciprocity, usually applied to understand dyadic exchanges, can operate at the level of a social network to inform decision-making about stigmatized disclosures in identified SNSs. We find that 1) anonymous disclosures on other sites help facilitate disclosure on identified sites (e.g., Facebook), and 2) awareness campaigns enable sharing about pregnancy loss for many who would not disclose otherwise. Finally, we discuss conceptual and design implications. CAUTION: This paper includes quotes about pregnancy loss.
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Publication date: April 2018. https://doi.org/10.1145/3173574.3173732

Pseudonymous Parents: Comparing Parenting Roles and Identities on the Mommit and Daddit Subreddits

CSMR researchers explore gender-role conformity and community, with support from the Mozilla Corporation and the National Science Foundation. Their work helps platforms better understand and serve parents seeking peer support and discussion on issues of discipline, competition, purchases, faith, and online behavior.
Tawfiq Ammari, University of Michigan School of Information
Sarita Schoenebeck, University of Michigan School of Information
Daniel M. Romero, University of Michigan School of Information

ABSTRACT - Gender equality between mothers and fathers is critical for the social and economic wellbeing of children, mothers, and families. Over the past 50 years, gender roles have begun to converge, with mothers doing more work outside of the home and fathers doing more domestic work. However, popular parenting sites in the U.S. continue to be heavily gendered. We explore parenting roles and identities on the platform Reddit.com, which is used by both mothers and fathers. We draw on seven years of data from three major parenting subreddits—Parenting, Mommit, and Daddit—to investigate what topics parents discuss on Reddit and how they vary across parenting subreddits. We find some similarities in topics across the three boards, such as sleep training, as well as differences, such as fathers talking about custody cases and Halloween. We discuss the role of pseudonymity in providing parents with a platform to discuss sensitive parenting topics. We conclude by highlighting the benefits of both gender-inclusive and role-specific parenting boards. This work provides a roadmap for using computational techniques to understand parenting practices online at large scale.

Author Keywords - Reddit; gender; parenting; anonymity; pseudonymity; social media; language.

ACM Classification Keywords - H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
ACM 2018 CHI Conference on Human Factors in Computing Systems, April 21–26, 2018, Montreal, QC, Canada. https://doi.org/10.1145/3173574.3174063
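
The large-scale topic analysis described above is typically done by fitting a topic model to each subreddit's posts. The following is a minimal sketch using LDA from the gensim library; the toy documents and parameter choices are illustrative, not the paper's actual corpus or pipeline.

```python
# Illustrative topic modeling over parenting-forum posts; the documents and
# parameter choices are toy placeholders, not the paper's pipeline.
from gensim import corpora, models
from gensim.utils import simple_preprocess

documents = [
    "any tips for sleep training a ten month old",
    "custody hearing next week and I am nervous",
    "ideas for a family halloween costume this year",
]
tokenized = [simple_preprocess(doc) for doc in documents]
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

# Fit a small LDA model, then inspect which themes dominate which board.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for topic_id, terms in lda.print_topics():
    print(topic_id, terms)
```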

Keeping a Low Profile? Technology, Risk and Privacy among Undocumented Immigrants

CSMR researchers interviewed 17 Latinx undocumented immigrants for insights into their technology use practices, risk perceptions, and protective strategies. The findings point to opportunities for designing and providing educational resources and for building transparency and privacy mechanisms.
Tamy Guberek, Allison McDonald, Sylvia Simioni, Abraham H. Mhaidli, Kentaro Toyama, and Florian Schaub, University of Michigan

ABSTRACT - Undocumented immigrants in the United States face risks of discrimination, surveillance, and deportation. We investigate their technology use, risk perceptions, and protective strategies relating to their vulnerability. Through semi-structured interviews with Latinx undocumented immigrants, we find that while participants act to address offline threats, this vigilance does not translate to their online activities. Their technology use is shaped by needs and benefits rather than risk perceptions. While our participants are concerned about identity theft and privacy generally, and some raise concerns about online harassment, their understanding of government surveillance risks is vague and met with resignation. We identify tensions among self-expression, group privacy, and self-censorship related to their immigration status, as well as strong trust in service providers. Our findings have implications for digital literacy education, privacy and security interfaces, and technology design in general. Even minor design decisions can substantially affect exposure risks and well-being for such vulnerable communities.

ACM Classification Keywords - H.5.m. Information Interfaces and Presentation (e.g., HCI): Miscellaneous; K.4.2 Computers and Society: Social Issues.

Author Keywords - Technology use; privacy; online risk; surveillance; undocumented immigrants; immigration; integration.
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - Best Paper Award. Publication date: April 2018. https://doi.org/10.1145/3173574.3173688

Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

Researchers at CSMR and the Illinois Institute of Technology have developed predictive models that forecast both the presence and the intensity of hostility in Instagram comments ten or more hours in the future. Their work helps resource-constrained platform providers better prioritize posts for moderation.
Ping Liu and Joshua Guberman, Illinois Institute of Technology
Libby Hemphill, University of Michigan
Aron Culotta, Illinois Institute of Technology

ABSTRACT - Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).
Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018). Publication date: April 2018. https://arxiv.org/abs/1804.06759
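
Framed as supervised classification, task 1 reduces to predicting a thread-level label from features of the earliest comments. The sketch below illustrates that setup with a linear classifier; the feature choices and all data are fabricated placeholders, not the paper's model or corpus.

```python
# Illustrative version of task 1: predict whether a hostile comment will
# later appear in a thread, from features of its earliest comments.
# Features and labels here are fabricated placeholders.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Each row: [negative-word rate, comments per hour, mean comment length]
X_train = [[0.01, 2.0, 80], [0.10, 9.0, 15], [0.02, 1.0, 120], [0.12, 7.0, 20]]
y_train = [0, 1, 0, 1]  # 1 = a hostile comment later appeared

clf = LogisticRegression().fit(X_train, y_train)

X_test = [[0.08, 6.0, 25], [0.01, 1.5, 100]]
y_test = [1, 0]
probs = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))  # the paper reports .82 on task 1
```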

Fragmented U.S. Privacy Rules Leave Large Data Loopholes for Facebook and Others

Florian Schaub, Assistant Professor of Information, writes for The Conversation, reprinted by permission in Scientific American:
Facebook CEO Mark Zuckerberg's congressional testimony is expected to address ways to keep people's online data private, a topic I study as a privacy scholar. Facebook and other U.S. companies already follow more comprehensive privacy laws in other countries. But without comparable requirements at home, there's little reason for them to protect U.S. consumers the same way.

Online Harassment and Content Moderation: The Case of Blocklists

Online harassment is a multi-faceted problem with no easy solutions. Social media platforms are squeezed between charges of indifference to harassment and charges of suppressing free speech. CSMR faculty participated in research into the challenge of designing technical features and seeding social practices that promote constructive discussion and discourage abusive behavior.
Shagun Jhaver, Georgia Institute of Technology
Sucheta Ghoshal, Georgia Institute of Technology
Amy Bruckman, Georgia Institute of Technology
Eric Gilbert, John Derby Evans Associate Professor, University of Michigan School of Information

ABSTRACT - Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this paper, we present a rich description of Twitter blocklists - why they are needed, how they work, and their strengths and weaknesses in practice. Next, we use blocklists to interrogate online harassment - the forms it takes, as well as tactics used by harassers. Specifically, we interviewed both people who use blocklists to protect themselves, and people who are blocked by blocklists. We find that users are not adequately protected from harassment, and at the same time, many people feel they are blocked unnecessarily and unfairly. Moreover, we find that not all users agree on what constitutes harassment. Based on our findings, we propose design interventions for social network sites with the aim of protecting people from harassment, while preserving freedom of speech.

CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing; Ethnographic studies.

Additional Key Words and Phrases: Online harassment, moderation, blocking mechanisms, GamerGate, blocklists.
ACM Transactions on Computer-Human Interaction, Vol. 25, No. 2, Article 1. Publication date: March 2018. https://doi.org/10.1145/3185593
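
The subscription mechanic the paper describes can be pictured as a small data structure: a curator maintains a list, and subscribing preemptively blocks every account on it. The sketch below is a minimal illustrative model, not Twitter's actual implementation or API.

```python
# Minimal illustrative model of blocklist subscription; not Twitter's API.
class Blocklist:
    def __init__(self, curator: str):
        self.curator = curator
        self.blocked: set[str] = set()

    def add(self, account: str) -> None:
        self.blocked.add(account)

class User:
    def __init__(self, handle: str):
        self.handle = handle
        self.subscriptions: list[Blocklist] = []

    def subscribe(self, blocklist: Blocklist) -> None:
        # Subscribing preemptively blocks everyone on the list, including
        # accounts the subscriber has never interacted with.
        self.subscriptions.append(blocklist)

    def can_interact(self, other: str) -> bool:
        return not any(other in bl.blocked for bl in self.subscriptions)

gg_list = Blocklist(curator="volunteer_moderator")
gg_list.add("harasser123")
alice = User("alice")
alice.subscribe(gg_list)
print(alice.can_interact("harasser123"))  # False: blocked via subscription
```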