"Genderfluid" or "Attack Helicopter": Responsible HCI Practice with Non-Binary Gender Variation in Online Communities

Researchers at CSMR and Yahoo have developed guidelines and a practical case study in the careful and ethical analysis of gender on social media platforms. The authors argue that careful, sensitive study design, analysis, and interpretation are an important commitment for the HCI research community.
Samantha Jaroszewski, Yahoo; Danielle Lottridge, Yahoo; Oliver L. Haimson, University of Michigan School of Information; Katie Quehl, Yahoo

ABSTRACT - As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze, and interpret research participants' genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. The first, on Tumblr, a blogging platform, yielded a rich set of gender identities with very few aggressive or resistive responses; the second, on online Fantasy Football, yielded the opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes "trolls'" opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces, as well as offering a model for inclusive mixed-methods HCI research.

Author Keywords - Survey research; social media; gender; non-binary; transgender; LGBTQ; online communities; trolling; Tumblr; fantasy sports.

ACM Classification Keywords - H.5.3. Information interfaces and presentation (e.g., HCI): Group and Organization Interfaces: Collaborative computing, Computer-supported cooperative work, Web-based interaction.

Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Publication date: April 2018. https://doi.org/10.1145/3173574.3173881
Announcing Pregnancy Loss on Facebook: A Decision-Making Framework for Stigmatized Disclosures on Identified Social Network Sites

CSMR researchers have developed a six-factor framework for disclosures of pregnancy loss on social media sites (e.g., Facebook): self-related, audience-related, societal, platform- and affordance-related, network-level, and temporal. While pregnancy loss was the focus, the framework could be applicable to other sensitive social media disclosures.
Nazanin Andalibi; Andrea Forte

ABSTRACT - Pregnancy loss is a common experience that is often not disclosed in spite of potential disclosure benefits such as social support. To understand how and why people disclose pregnancy loss online, we interviewed 27 women in the U.S. who are social media users and had recently experienced pregnancy loss. We developed a decision-making framework explaining pregnancy loss disclosures on identified social network sites (SNS) such as Facebook. We introduce network-level reciprocal disclosure, a theory of how disclosure reciprocity, usually applied to understand dyadic exchanges, can operate at the level of a social network to inform decision-making about stigmatized disclosures in identified SNSs. We find that 1) anonymous disclosures on other sites help facilitate disclosure on identified sites (e.g., Facebook), and 2) awareness campaigns enable sharing about pregnancy loss for many who would not disclose otherwise. Finally, we discuss conceptual and design implications. CAUTION: This paper includes quotes about pregnancy loss.

Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Publication date: April 2018. https://doi.org/10.1145/3173574.3173732
Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

Researchers at CSMR and the Illinois Institute of Technology have developed predictive analytics for forecasting both the presence and intensity of hostility in Instagram comments ten or more hours in the future. Their work helps resource-constrained platform providers better prioritize posts for moderation.
Ping Liu and Joshua Guberman, Illinois Institute of Technology; Libby Hemphill, University of Michigan; Aron Culotta, Illinois Institute of Technology

Abstract - Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).

Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018). Publication date: April 2018. https://arxiv.org/abs/1804.06759
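The forecasting setup described above can be sketched with a minimal classifier: extract features from a discussion's early comments, then predict whether hostility will appear later (task 1). The feature extractor, toy training data, and pure-Python logistic regression below are illustrative assumptions only; the paper's reported AUCs come from a much richer set of linguistic and social features and a 30K-comment annotated corpus, not from this sketch.

```python
import math

# Hypothetical feature extractor: three toy features standing in for the
# paper's linguistic and social features (not the authors' actual set).
def extract_features(comments):
    n = len(comments)
    words = " ".join(comments).lower().split()
    second_person = sum(w.strip(".,!?") in {"you", "your", "u"} for w in words)
    exclaims = sum(c.count("!") for c in comments)
    return [second_person / max(len(words), 1),  # rate of 2nd-person address
            exclaims / max(n, 1),                # exclamation marks per comment
            n / 10.0]                            # scaled discussion length

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent for logistic regression.
def train_logreg(X, y, lr=0.5, epochs=500):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, comments):
    x = extract_features(comments)
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy data: initial comment sequences labeled by whether a hostile
# comment eventually appeared in the discussion (task 1).
train = [
    (["nice photo", "love this"], 0),
    (["great shot!", "so pretty"], 0),
    (["you are so annoying!!", "why do you post this"], 1),
    (["u think your better than us!", "seriously you!!"], 1),
]
X = [extract_features(c) for c, _ in train]
y = [label for _, label in train]
w, b = train_logreg(X, y)
```

Task 2 (escalation intensity) would follow the same pattern with the first hostile comment's features added and labels distinguishing high from low subsequent hostility.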
University of Michigan Experts Discuss Facebook & Cambridge Analytica

University of Michigan Teach-Out on Privacy, Reputation, and Identity in the Digital Age.
Sol Bermann, University Privacy Officer; Garlin Gilchrist II, Director of the Center for Social Media Responsibility; Florian Schaub, Assistant Professor, School of Information

The Facebook & Cambridge Analytica segment is part of the University of Michigan Teach-Out: “Privacy, Reputation, and Identity in a Digital Age." The segment provides learners with ideas on questions such as:
- What are actions users can take to help protect their privacy or better manage their data on social media platforms?
- What are the biggest lessons companies, organizations, and users of social media should take away from the Facebook-Cambridge Analytica matter?
- What is the role of privacy laws, regulations, and norms? How do they affect both industries and consumers?
- Aside from following its own policies and the law, does a social media titan like Facebook have a greater ethical and social responsibility?
Can AI Really Solve Facebook’s Problems?

Florian Schaub, Assistant Professor of Information, was interviewed by Scientific American for an article on Facebook's efforts to reassure Congress that artificial intelligence can help identify fake news and protect privacy.
“It seems Facebook is going through a phase of reckoning and now starting to realize how socially impactful their platform is,” Schaub says. “For a long time they felt they were serving a great social function in getting people connected, and now they’re realizing there’s actually a lot of responsibility that comes with that. That seems to be a little bit of a shift, but at the same time this is not the first time we’re hearing Zuckerberg apologize for indiscretions on Facebook.”
CSMR Researchers Engage with Industry

In 2018, CSMR faculty and students have been busy engaging with media platforms to help them deal with misinformation, privacy, harassment, content moderation, polarization, and other issues. Engagements include:
More Specificity, More Attention to Social Context: Reframing How We Address “Bad Actors”

CSMR faculty research online harassment, identifying how articulating community standards and providing more appropriate ways of connecting with others can help mitigate "bad actor" behavior and lead to more effective intervention strategies.
Libby Hemphill, University of Michigan

To address “bad actors” online, I argue for more specific definitions of acceptable and unacceptable behaviors and explicit attention to the social structures in which behaviors occur.

Author Keywords - feminism, harassment, online communities

ACM Classification Keywords - H.5.m [Information interfaces and presentation (e.g., HCI)]: Miscellaneous

CHI 2018 Workshop paper: Understanding "Bad Actors" Online. Publication date: February 2018. https://arxiv.org/pdf/1802.08612.pdf
Florian Schaub, Assistant Professor of Information, partnered with Facebook on the Trust Transparency Control (TTC) Labs initiative. TTC Labs has grown to include over sixty organizations, including major global businesses, startups, civic organizations, and academic institutions.