NEWS & EVENTS

See the latest at CSMR.

Nature News: Deepfakes, trolls and cybertroopers: how social media could sway elections in 2024

Jan 31, 2024

Paul Resnick has the (literal) last word in a Nature News story about how social media could sway elections around the world in 2024 and how researchers are responding to challenging changes in the landscape.

Read the article from Nature News

Assessment of Discoverability Metrics for Harmful Content

Dec 22, 2023

Many stakeholders are interested in tracking prevalence metrics of harmful content on social media platforms. TrustLab tracks several metrics and has produced a report for the European Commission's Code of Practice. One of TrustLab's prevalence metrics, which they refer to as "discoverability," is calculated by simulating a set of user searches, classifying the results as harmful or not, and reporting the proportion of harmful results.
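
For intuition, here is a minimal sketch of how a proportion-of-harmful-results metric of this kind could be computed. The search client and classifier below are hypothetical placeholders, not TrustLab's actual pipeline; the design choices they stand in for (the query set, the number of results per query, the harm classifier) are exactly the alternatives the report examines.

```python
from typing import Callable, Iterable

def discoverability(queries: Iterable[str],
                    search: Callable[[str, int], list],
                    is_harmful: Callable[[object], bool],
                    top_k: int = 20) -> float:
    """Proportion of simulated-search results judged harmful.

    `search` and `is_harmful` are hypothetical stand-ins for a platform
    search client and a harmful-content classifier.
    """
    results = [item for q in queries for item in search(q, top_k)]
    if not results:
        return 0.0
    return sum(is_harmful(item) for item in results) / len(results)
```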

At TrustLab's request, the University of Michigan Center for Social Media Responsibility (CSMR) has identified key concerns and considerations in producing and interpreting any discoverability metric, and some possible approaches for addressing these concerns. There is a family of possible discoverability metrics, each based on alternative design choices. This report analyzes the impacts of design alternatives on two key principles from measurement theory, validity and reliability, alongside a third principle, "robustness to strategic actors."

- Validity: the accuracy of the metric – whether it correctly measures the thing that it is supposed to measure. For discoverability metrics in particular, a key element of validity is comparability – the extent to which comparisons across platforms, countries, and time periods are meaningful. For example, is harmful content more prevalent on X or YouTube? Is it more prevalent in Slovakia or Poland? Did the prevalence decline on YouTube in Poland since last quarter?
- Reliability: the consistency of the metric – a measure could be accurate on average but have a high degree of variability between repeated individual measurements. In that case, any particular measurement would have to be treated as unreliable.
- Robustness to strategic actors: whether, for example, a platform could manipulate or game the discoverability metric without changing what real users experience on the platform.

Read the assessment report

Paul Resnick, Siqi Wu, and James Park

Wired: The New Era of Social Media Looks as Bad for Privacy as the Last One

Nov 06, 2023

Since Twitter began its “slow-motion implosion,” a slew of new social media platforms has cropped up, including Bluesky, Mastodon, and Threads. This is good news for users who still want to connect, but bad news for their privacy.

A new story from Wired reveals these new social media platforms are collecting incredible amounts of data on users, leaving them vulnerable.

University of Michigan School of Information assistant professor Nazanin Andalibi says social media platforms have a lot of power over users, especially since users will still use their platforms despite concerns about data collection.

“People might still use a platform that they believe will not respect their privacy,” she says.

Read the article from Wired

Read the announcement from the U-M School of Information
Noor Hindi, UMSI public relations specialist

The Shapes of the Fourth Estate During the Pandemic: Profiling COVID-19 News Consumption in Eight Countries

Oct 04, 2023

News media is often referred to as the Fourth Estate, a recognition of its political power. New understandings of how media shape political beliefs and influence collective behaviors are urgently needed in an era when public opinion polls do not necessarily reflect election results and users influence each other in real-time under algorithm-mediated content personalization. In this work, we measure not only the average but also the distribution of audience political leanings for different media across different countries. The methodological components of these new measurements include a high-fidelity COVID-19 tweet dataset; high-precision user geolocation extraction; and user political leaning estimated from the within-country retweet networks involving local politicians. We focus on geolocated users from eight countries, profile user leaning distribution for each country, and analyze bridging users who have interactions across multiple countries. Except for France and Turkey, we observe consistent bi-modal user leaning distributions in the other six countries, and find that cross-country retweeting behaviors do not oscillate across the partisan divide. More importantly, this study contributes a new set of media bias estimates by averaging the leaning scores of users who share the URLs from media domains. Through two validations, we find that the new average audience leaning scores strongly correlate with existing media bias scores. Lastly, we profile the COVID-19 news consumption by examining the audience leaning distribution for top media in each country, and for selected media across all countries. Those analyses help answer questions such as: Does center media Reuters have a more balanced audience base than partisan media CNN and Fox News in the US? Does far-right media Breitbart attract any left-leaning readers in any country? Does CNN reach a more balanced audience base in the US than in the UK and Spain? In sum, our data-driven methods allow us to study media that are not often collected in editor-curated media bias reporting, especially in non-English-speaking countries. We hope that such cross-country research will inform media outlets of their effectiveness and audience bases in different countries, inform non-government and research organizations about country-specific media audience profiles, and prompt individuals to reflect on their day-to-day media diets.

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW2, Article No.: 317, pp. 1–29

Access the paper here: https://doi.org/10.1145/3610108

— Cai Yang, Lexing Xie, Siqi Wu

Remove, Reduce, Inform: What Actions do People Want Social Media Platforms to Take on Potentially Misleading Content?

Oct 04, 2023

To reduce the spread of misinformation, social media platforms may take enforcement actions against offending content, such as adding informational warning labels, reducing distribution, or removing content entirely. However, both their actions and their inactions have been controversial and plagued by allegations of partisan bias. When it comes to specific content items, surprisingly little is known about what ordinary people want the platforms to do. We provide empirical evidence about a politically balanced panel of lay raters' preferences for three potential platform actions on 368 news articles. Our results confirm that on many articles there is a lack of consensus on which actions to take. We find a clear hierarchy of perceived severity of actions with a majority of raters wanting informational labels on the most articles and removal on the fewest. There was no partisan difference in terms of how many articles deserve platform actions but conservatives did prefer somewhat more action on content from liberal sources, and vice versa. We also find that judgments about two holistic properties, misleadingness, and harm, could serve as an effective proxy to determine what actions would be approved by a majority of raters.

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW2, Article No.: 291, pp. 1–33

Access the paper here: https://doi.org/10.1145/3610082

— Shubham Atreja, Libby Hemphill, Paul Resnick

GuesSync!: An Online Casual Game To Reduce Affective Polarization

Oct 04, 2023

The past decade in the US has been one of the most politically polarizing in recent memory. Ordinary Democrats and Republicans fundamentally dislike and distrust each other, even when they agree on policy issues. This increase in hostility towards opposing party supporters, commonly called affective polarization, has important ramifications that threaten democracy. Political science research suggests that at least part of the polarization stems from Democrats' misperceptions about Republicans' political views and vice-versa. Therefore, in this work, drawing on insights from political science and game studies research, we designed an online casual game that combines the relaxed, playful nonpartisan norms of casual games with corrective information about party supporters' political views that are often misperceived. Through an experiment, we found that playing the game significantly reduces negative feelings toward outparty supporters among Democrats, but not Republicans. It was also effective in improving willingness to talk politics with outparty supporters. Further, we identified psychological reactance as a potential mechanism that affects the effectiveness of depolarization interventions. Finally, our analyses suggest that the game versions with political content were rated to be just as fun to play as a game version without any political content suggesting that, contrary to popular belief, people do like to mix politics and play.

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW2, Article No.: 341, pp. 1–33

Access the paper here: https://doi.org/10.1145/3610190

— Ashwin Rajadesingan, Daniel Choo, Jessica Zhang, Mia Inakage, Ceren Budak, Paul Resnick

How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments

Sep 12, 2023

Computational social science research has made advances in machine learning and natural language processing that support content moderators in detecting harmful content. These advances often rely on training datasets annotated by crowdworkers for harmful content. In designing instructions for annotation tasks to generate training data for these algorithms, researchers often treat the harm concepts that we train algorithms to detect—"hateful," "offensive," "toxic," "racist," "sexist," etc.—as interchangeable. In this work, we studied whether the way that researchers define "harm" affects annotation outcomes. Using Venn diagrams, information gain comparisons, and content analyses, we reveal that annotators do not use the concepts "hateful," "offensive," and "toxic" interchangeably. We identify that features of harm definitions and annotators' individual characteristics explain much of how annotators use these terms differently. Our results offer empirical evidence discouraging the common practice of using harm concepts interchangeably in content moderation research. Instead, researchers should make specific choices about which harm concepts to analyze based on their research goals. Recognizing that researchers are often resource constrained, we also encourage researchers to provide information to bound their findings when their concepts of interest differ from concepts that off-the-shelf harmful content detection algorithms identify. Finally, we encourage algorithm providers to ensure their instruments can adapt to contextually-specific content detection goals (e.g., soliciting instrument users' feedback).

Access the paper here: https://arxiv.org/abs/2309.15827

— Angela Schöpke-Gonzalez, Siqi Wu, Sagar Kumar, Paul Resnick, Libby Hemphill

When Do Annotator Demographics Matter? Measuring the Influence of Annotator Demographics with the POPQUORN Dataset

Aug 28, 2023

Annotators are not fungible. Their demographics, life experiences, and backgrounds all contribute to how they label data. However, NLP has only recently considered how annotator identity might influence their decisions. Here, we present POPQUORN (the Potato-Prolific dataset for Question-Answering, Offensiveness, text Rewriting and politeness rating with demographic Nuance). POPQUORN contains 45,000 annotations from 1,484 annotators, drawn from a sample that is representative of the US population with respect to sex, age, and race. Through a series of analyses, we show that annotators’ backgrounds play a significant role in their judgments. Further, our work shows that backgrounds not previously considered in NLP (e.g., education) are meaningful and should be considered. Our study suggests that understanding the background of annotators and collecting labels from a demographically balanced pool of crowd workers is important to reduce the bias of datasets. The dataset, annotator background, and annotation interface are available at https://github.com/Jiaxin-Pei/potato-prolific-dataset.

Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII) at ACL, June 2023

Access the paper here: https://arxiv.org/abs/2306.06826

— Jiaxin Pei, David Jurgens

Profile Update: The Effects of Identity Disclosure on Network Connections and Language

Aug 18, 2023

Our social identities determine how we interact and engage with the world surrounding us. In online settings, individuals can make these identities explicit by including them in their public biography, possibly signaling a change to what is important to them and how they should be viewed. Here, we perform the first large-scale study on Twitter that examines behavioral changes following identity signal addition on Twitter profiles. Combining social networks with NLP and quasi-experimental analyses, we discover that after disclosing an identity on their profiles, users (1) generate more tweets containing language that aligns with their identity and (2) connect more to same-identity users. We also examine whether adding an identity signal increases the number of offensive replies and find that (3) the combined effect of disclosing identity via both tweets and profiles is associated with a reduced number of offensive replies from others.

Access the paper here: https://arxiv.org/abs/2308.09270

— Minje Choi, Daniel Romero, David Jurgens

How to Train Your YouTube Recommender to Avoid Unwanted Videos

Aug 02, 2023

YouTube provides features for users to indicate disinterest when presented with unwanted recommendations, such as the "Not interested" and "Don't recommend channel" buttons. These buttons are purported to allow the user to correct "mistakes" made by the recommendation system. Yet, relatively little is known about the empirical efficacy of these buttons. Neither is much known about users' awareness of and confidence in them. To address these gaps, we simulated YouTube users with sock puppet agents. Each agent first executed a "stain phase," where it watched many videos of one assigned topic; it then executed a "scrub phase," where it tried to remove recommendations of the assigned topic. Each agent repeatedly applied a single scrubbing strategy, either indicating disinterest in one of the videos visited in the stain phase (disliking it or deleting it from the watch history), or indicating disinterest in a video recommended on the homepage (clicking the "not interested" or "don't recommend channel" button or opening the video and clicking the dislike button). We found that the stain phase significantly increased the fraction of the recommended videos dedicated to the assigned topic on the user's homepage. For the scrub phase, using the "Not interested" button worked best, significantly reducing such recommendations in all topics tested, on average removing 88% of them. Neither the stain phase nor the scrub phase, however, had much effect on videopage recommendations. We also ran a survey (N = 300) asking adult YouTube users in the US whether they were aware of and used these buttons before, as well as how effective they found these buttons to be. We found that 44% of participants were not aware that the "Not interested" button existed. However, those who were aware of this button often used it to remove unwanted recommendations (82.8%) and found it to be modestly effective (3.42 out of 5).
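
As a rough illustration of the experimental protocol described above (not the authors' actual sock puppet code), the stain and scrub phases can be sketched as follows; the agent's browser-automation methods (watch, fetch_homepage, press_not_interested) are hypothetical placeholders.

```python
def topic_share(recommendations, topic, classify_topic):
    """Fraction of recommended videos classified as the assigned topic."""
    if not recommendations:
        return 0.0
    return sum(classify_topic(v) == topic for v in recommendations) / len(recommendations)

def run_agent(agent, topic, seed_videos, classify_topic, n_rounds=40):
    # Stain phase: watch many videos on the assigned topic.
    for video in seed_videos:
        agent.watch(video)
    before = topic_share(agent.fetch_homepage(), topic, classify_topic)

    # Scrub phase: repeatedly apply one scrubbing strategy; here, pressing
    # "Not interested" on an on-topic homepage recommendation each round.
    for _ in range(n_rounds):
        for video in agent.fetch_homepage():
            if classify_topic(video) == topic:
                agent.press_not_interested(video)
                break
    after = topic_share(agent.fetch_homepage(), topic, classify_topic)
    return before, after  # compare the topic's share before vs. after scrubbing
```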

To appear in Proceedings of the 18th International AAAI Conference on Web and Social Media (June 3 - 6, 2024, Buffalo, NY, USA)

Access the paper here: https://arxiv.org/abs/2307.14551

— Alexander Liu, Siqi Wu, Paul Resnick

Bursts of contemporaneous publication among high- and low-credibility online information providers

Jul 31, 2023

In studies of misinformation, the distinction between high- and low-credibility publishers is fundamental. However, there is much that we do not know about the relationship between the subject matter and timing of content produced by the two types of publishers. By analyzing the content of several million unique articles published over 28 months, we show that high- and low-credibility publishers operate in distinct news ecosystems. Bursts of news coverage generated by the two types of publishers tend to cover different subject matter at different times, even though fluctuations in their overall news production tend to be highly correlated. Regardless of the mechanism, temporally convergent coverage among low-credibility publishers has troubling implications for American news consumers.

New Media and Society, July 2023

Access the paper here: https://doi.org/10.1177/14614448231183617

Ceren Budak, Lia Bozarth, Robert M Bond, Drew Margolin, Jason J Jones, R Kelly Garrett

Prevalence Estimation in Social Media Using Black Box Classifiers

Jun 02, 2023

Many problems in computational social science require estimating the proportion of items with a particular property. This counting task is called prevalence estimation or quantification. Frequently, researchers have a pre-trained classifier available to them. However, it is usually not safe to simply apply the classifier to all items and count the predictions of each class, because the test dataset may differ in important ways from the dataset on which the classifier was trained, a phenomenon called distribution shift. In addition, a second type of distribution shift may occur when one wishes to compare the prevalence between multiple datasets, such as tracking changes over time. To cope with that, some assumptions need to be made about the nature of possible distribution shifts across datasets, a process that we call extrapolation.

This tutorial will introduce an end-to-end framework for prevalence estimation using black box (pre-trained) classifiers, with a focus on social media datasets. The framework consists of a calibration phase and an extrapolation phase, aiming to address the two types of distribution shifts described above. We will provide hands-on exercises that walk the participants through solving a real world problem of quantifying positive tweets in datasets from two separate time periods. All datasets, pre-trained models, and example codes will be provided in a Jupyter notebook. After attending this tutorial, participants will be able to understand the basics of the prevalence estimation problem in social media, and construct a data analysis pipeline to conduct prevalence estimation for their projects.
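
As a rough sketch of the calibrate-then-extrapolate idea (not the tutorial's exact pipeline), one simple variant fits a calibration curve on a small labeled sample and then averages calibrated probabilities over the target dataset:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_calibrator(scores_labeled, labels):
    """Map raw black-box classifier scores to calibrated probabilities."""
    calibrator = IsotonicRegression(out_of_bounds="clip")
    calibrator.fit(scores_labeled, labels)
    return calibrator

def estimate_prevalence(calibrator, scores_target):
    """Averaging calibrated probabilities yields a prevalence estimate
    for the (possibly shifted) target dataset."""
    return float(np.mean(calibrator.predict(scores_target)))

# Hypothetical usage: classifier scores for tweets from two time periods.
# cal = fit_calibrator(sample_scores_2020, human_labels_2020)
# p_2020 = estimate_prevalence(cal, scores_2020)
# p_2021 = estimate_prevalence(cal, scores_2021)
```

Whether the same calibrator can safely be reused on a second dataset depends on which kind of distribution shift is assumed, which is the extrapolation question the tutorial addresses.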

17th International AAAI Conference on Web and Social Media (ICWSM 2023), June 2023

Access the tutorial here: https://avalanchesiqi.github.io/prevalence-estimation-tutorial/

Siqi Wu, Paul Resnick

Bridging Nations: Quantifying the Role of Multilinguals in Communication on Social Media

Jun 02, 2023

Social media enables the rapid spread of many kinds of information, from pop culture memes to social movements. However, little is known about how information crosses linguistic boundaries. We apply causal inference techniques on the European Twitter network to quantify the structural role and communication influence of multilingual users in cross-lingual information exchange. Overall, multilinguals play an essential role; posting in multiple languages increases betweenness centrality by 13%, and having a multilingual network neighbor increases monolinguals’ odds of sharing domains and hashtags from another language 16-fold and 4-fold, respectively. We further show that multilinguals have a greater impact on diffusing information that is less accessible to their monolingual compatriots, such as information from far-away countries and content about regional politics, nascent social movements, and job opportunities. By highlighting information exchange across borders, this work sheds light on a crucial component of how information and ideas spread around the world.
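
To illustrate the network measure involved (this is not the paper's causal-inference pipeline), betweenness centrality can be computed directly on a retweet graph, for example with networkx on a toy network:

```python
import networkx as nx

# Hypothetical toy retweet network: an edge (u, v) means u retweeted v.
G = nx.DiGraph()
G.add_edges_from([
    ("de_user1", "fr_user1"), ("fr_user1", "de_user1"),
    ("fr_user1", "es_user1"), ("es_user1", "fr_user1"),
    ("de_user2", "de_user1"), ("es_user2", "es_user1"),
])

# Users who sit on many shortest paths between others score higher;
# multilinguals bridging language communities tend to rank highly.
centrality = nx.betweenness_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
```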

Proceedings of the 17th International AAAI Conference on Web and Social Media (ICWSM 2023), June 2023

*Awarded Outstanding Methodology Paper Award for ICWSM 2023*

Access the paper here: https://doi.org/10.1609/icwsm.v17i1.22174

— Julia Mendelsohn, Sayan Ghosh, David Jurgens, Ceren Budak

Searching for or reviewing evidence improves crowdworkers’ misinformation judgments and reduces partisan bias

May 26, 2023

Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no research condition, they were just asked to view the article and then render a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they were not asked to search, but instead to review links collected from workers in the individual research condition. Both research conditions reduced partisan disagreement in judgments. The individual research condition was most effective at producing alignment with journalists’ assessments. In this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist’s ratings.
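
A minimal sketch of the panel-comparison idea, with hypothetical data rather than the study's own code: average the ratings of a randomly sampled panel of k crowd workers and measure how well the panel mean aligns with a held-out journalist's ratings.

```python
import numpy as np

def panel_alignment(rater_matrix, holdout_ratings, k, n_trials=1000, seed=0):
    """Mean correlation between a random k-rater panel's average rating and a
    held-out journalist's ratings, averaged over many sampled panels.

    rater_matrix: shape (n_raters, n_articles); holdout_ratings: shape (n_articles,).
    """
    rng = np.random.default_rng(seed)
    n_raters = rater_matrix.shape[0]
    corrs = []
    for _ in range(n_trials):
        panel = rng.choice(n_raters, size=k, replace=False)
        panel_mean = rater_matrix[panel].mean(axis=0)
        corrs.append(np.corrcoef(panel_mean, holdout_ratings)[0, 1])
    return float(np.mean(corrs))
```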

Collective Intelligence, Vol 2:2, pp. 1–15

Access the paper here: https://doi.org/10.1177/26339137231173407

Paul Resnick, Aljohara Alfayez, Jane Im, Eric Gilbert

Wisdom of Two Crowds: Misinformation Moderation on Reddit and How to Improve this Process---A Case Study of COVID-19

Apr 16, 2023

Past work has explored various ways for online platforms to leverage crowd wisdom for misinformation detection and moderation. Yet, platforms often relegate governance to their communities, and limited research has been done from the perspective of these communities and their moderators. How is misinformation currently moderated in online communities that are heavily self-governed? What role does the crowd play in this process, and how can this process be improved? In this study, we answer these questions through semi-structured interviews with Reddit moderators. We focus on a case study of COVID-19 misinformation. First, our analysis identifies a general moderation workflow model which encompasses various processes adopted by participants for handling COVID-19 misinformation. Further, we show that the moderation workflow revolves around three elements: content facticity, user intent, and perceived harm. Next, our interviews also reveal that Reddit moderators rely on two types of crowd wisdom for misinformation detection. Almost all participants are heavily reliant on reports from crowds of ordinary users to identify potential misinformation. A second crowd--participants' own moderation teams and expert moderators of other communities--provide support when participants encounter difficult, ambiguous cases. Finally, we use design probes to better understand how different types of crowd signals---from ordinary users and moderators---readily available on Reddit can assist moderators with identifying misinformation. We observe that nearly half of all participants preferred these cues over labels from expert fact-checkers because these cues can help them discern user intent. Additionally, a quarter of the participants distrust professional fact-checkers, raising important concerns about misinformation moderation.

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1, Article No.: 155, pp. 1–33

Access the paper here: https://doi.org/10.1145/3579631

— Lia Bozarth, Jane Im, Christopher Quarles, Ceren Budak

Paul Resnick files amicus brief for Gonzalez v. Google Supreme Court case

Jan 23, 2023

A Supreme Court case could alter algorithmic recommendation systems on websites like YouTube, challenging a 1996 law: Section 230 of the Communications Decency Act, which protects websites from liability arising from user content.

In an amicus brief filed by Paul Resnick and a group of distinguished information science scholars, the experts argue that websites should continue to be protected under this law.

The case, Gonzalez v. Google, No. 21-1333, is set to be heard in the spring. Resnick’s early research on automated online recommendation systems continues to be influential.

“If the Supreme Court were to rule that algorithmically recommending items incurs the same liability as publishing those items, many service providers would radically alter or stop providing those services,” Resnick says. “Recommender systems, search engines, and spam filters all provide great value to consumers, so it would be a huge loss to society if they went away.”

The lawsuit was filed by the family of a student killed in a November 2015 terrorist attack. The family argues that YouTube’s algorithm pushed ISIS recruitment videos and inspired the attack.

“We filed this brief because we thought it would be valuable for the court to have a technical perspective on how recommender systems and search engines work now, and how they worked at the time of the legislation,” Resnick says.

Read the announcement from the U-M School of Information
Noor Hindi, UMSI public relations specialist

Image credit: Senate Democrats, CC BY 2.0, via Wikimedia Commons

CSMR's WiseDex team completes Phase 1 of the NSF Convergence Accelerator

Sep 30, 2022

Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content. WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation.

WiseDex is the venture of a multidisciplinary team of researchers, led by CSMR experts, who recently completed Phase 1 of the National Science Foundation (NSF) Convergence Accelerator. The Phase 1 team is composed of:

Paul Resnick
David Jurgens
James Park
Amy Zhang, University of Washington
David Rand, MIT
Adam Berinsky, MIT
Benji Loney, TrustLab

Please watch WiseDex's overview video for a fuller introduction to WiseDex.

To learn more, please visit the WiseDex website.

See Paul Resnick give a quick preview of WiseDex in this short video.

Nazanin Andalibi to give keynote speech at Reddit’s Mod Summit

Sep 14, 2022

Nazanin Andalibi will be the keynote speaker at the Third Annual Reddit Mod Summit on September 17, 2022. The virtual event brings together moderators on the popular social media site Reddit to discuss what’s new on the platform and interact with each other and the company’s leadership.

Andalibi's talk, “The Promises and Perils of Online Communities for Social Support and Wellbeing,” will highlight her work on how social media can facilitate both social support and community as well as harm and exclusion for marginalized people.

“I am excited about the opportunity to share key insights from my research with an audience that is so well-positioned to make practical changes for improving online communities,” says Andalibi. “To me, this invitation is a signal of my work's broader impact beyond the academic community.”

Andalibi hopes that her talk inspires Reddit moderators, designers, and decision-makers to “consider how, where, and for whom inequities manifest in subreddits, and how to foster and sustain more explicitly and meaningfully inclusive and compassionate online spaces supportive of diverse, marginalized life experiences and identities.”

She adds, “If Reddit as a platform decides to implement some of my design recommendations, then all the better!”

This summit is being held online and is open only to invited participants.

Read the announcement from the U-M School of Information

Washington Post: Facebook bans hate speech but still makes money from white supremacists

Aug 11, 2022

The Washington Post reports that despite Facebook’s long-time ban on white nationalism, the platform still hosts 119 Facebook pages and 20 Facebook groups associated with white supremacy organizations. Libby Hemphill said these groups have changed their approach to keep pages up.

“The people who are creating this content have become very tech savvy, so they are aware of the loopholes that exist and they’re using it to keep posting content,” said Hemphill. “Platforms are often just playing catch-up.”

More than 40% of searches for white supremacy organizations also presented users with an advertisement — meaning, Facebook continues to make money from hate groups’ pages.

Read the article from The Washington Post
Read the announcement from the U-M School of Information

Pennsylvania gubernatorial nominee courting voters on Gab pushes the GOP further right, experts say

Jul 21, 2022

Libby Hemphill tells the Pennsylvania Capital-Star that the Pennsylvania Republican gubernatorial nominee’s use of the far-right social media platform Gab is pushing the Republican party further to the right.

Hemphill explains that Gab emerged as a refuge for hate speech after mainstream platforms like Facebook and Twitter took steps to moderate and remove racist and bigoted content.

The Capital-Star reports that nominee Doug Mastriano paid $5,000 to Gab.com for campaign consulting. “When a mainstream politician says, 'I want to reach hate groups where they meet,' that’s scary to us,” Hemphill said.

She said that it remains to be seen how voters will react to Mastriano’s association with Gab, noting that it could motivate voters for or against him. But either way, “for a statewide office, it’s a risky move,” she said.

“Even if this doesn’t work in 2022, if it moves the window of what is acceptable to include engaging with white supremacists,” Hemphill said. “That is a significant push for the Republican party.”

Read the article from the Pennsylvania Capital-Star
Read the announcement from the U-M School of Information

Meet CSMR's WiseDex team at the NSF Convergence Accelerator Expo 2022

Jul 13, 2022

Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content. WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation. WiseDex is a joint venture of collaborators from CSMR, University of Washington, MIT, and TrustLab, and is a 2021 cohort member of the National Science Foundation (NSF) Convergence Accelerator.

See Paul Resnick give a quick preview of WiseDex in this short video.

The WiseDex team is presenting at the NSF Convergence Accelerator Expo 2022 on July 27 and 28, 2022. This virtual event is an opportunity to see unique, use-inspired research solutions funded by the NSF Convergence Accelerator. Last year’s event drew over 2,000 registrants from academia, industry, nonprofit, and government—and this year’s event is expected to be even larger. During the event, you will have the opportunity to connect with the WiseDex team and to learn more about WiseDex through various live presentations.

The Convergence Accelerator Expo 2022 is open to the public. Researchers, innovators, technology and business practitioners, and media from academia, industry, government, nonprofit, and other communities are encouraged to attend.

To register, please visit https://nsf-ca-expo2022.vfairs.com. Registration is free.

Read the announcement from the U-M School of Information
Watch Paul Resnick's WiseDex preview on YouTube

New website provides information for marginalized communities after social media bans

Jun 22, 2022

An interdisciplinary team has launched a new online resource for understanding and dealing with social media content bans. The Online Identity Help Center (OIHC), the brainchild of Oliver Haimson, aims to help marginalized people navigate the gray areas in social media moderation.

Having content taken down from social media can be a jarring and upsetting experience. Being silenced can feel even more isolating if you are a person from a marginalized group. Those who are sexual, gender, and/or racial minorities may already feel unwelcome online, and content censorship can narrow a sense of community even further.

After content removal there is often an accompanying notification, but people may not fully understand why the content was deemed inappropriate. Often, it is unclear how to avoid censorship in the future. Even worse, people may be unsure how to reinstate an account post-ban.

OIHC aims to help people understand different social media platforms’ policies and rules about content takedowns. It also provides easy-to-read resources on different social media guidelines and what to do if your content is taken down.

"I just want people who are experiencing what can be really stressful situations to be able to find this resource and use it to help them in some way," says Haimson. "We think of it kind of as a digital literacy resource, helping people to learn more about this digital world.

"In an ideal world, I would love it if this could influence social media policy in some way to have social media policy managers better understand these challenges that people face."

Visit the Online Identity Help Center (OIHC)
Read the announcement from the U-M School of Information

(Image credit: Irene Falgueras via blush.design)

KQED: Sarita Schoenebeck on what a healthy online commons would look like

May 27, 2022

A podcast episode from KQED (California) on reimagining the future of digital public spaces features Sarita Schoenebeck, who shares some ways in which content moderation on social media platforms can become more effective through the use of shared community values and human rights.

"It's important to remember social media is not a blank slate any more than any community offline is," Schoenebeck says. "We can look at our local schools, for example, and they all have policies and values that shape people’s experiences.

"Free speech is at the government level, but platforms can make their decisions around what values they want to have and what they want to enforce,” she adds.

Listen to the podcast episode from KQED
Read the announcement from the U-M School of Information

David Jurgens receives NSF CAREER award

May 09, 2022

David Jurgens has been awarded a five-year, $581,433 grant from the National Science Foundation. The NSF CAREER award will fund his research on "Fostering Prosocial Behavior and Well-Being in Online Communities," which focuses on identifying and measuring how social media can have a positive impact on people’s lives.

Although many studies have focused on how social media platforms can negatively impact people, their positive effects remain poorly understood. Jurgens plans to study prosocial behavior on social media and how it can boost the psychological well-being of users.

"In general, I'm an optimist, but an optimist who likes to measure things," says Jurgens. "I've been curious whether social media has actual benefit in our lives for all its downsides."

As part of the project, Jurgens plans on releasing tools for everyday users to help people track the good and bad interactions and content on their social media feeds. He says the tools will "hopefully help folks make informed decisions about when it's time to take a break from social media."

Jurgens says the CAREER award will fund new experiments aimed at helping people be more prosocial by fostering strategic encouragement. "I'm hoping to give folks a hand, and a helpful nudge, to show more care and compassion towards others."

Read the announcement from the U-M School of Information

As Russia invades Ukraine, social media users “fly to quality”

Mar 14, 2022

A swift drop late last month in social media engagement with content from "Iffy" sources has prompted CSMR researchers to ask whether Facebook and Twitter users have been experiencing a "flight to news quality," or at least a flight away from less trustworthy sites, during that time—and if so, why.

After the beginning of the COVID-19 pandemic in 2020, our Iffy Quotient platform health metric indicated a flight to quality on Facebook and Twitter. We observed a sudden dip on both platforms in the sharing of popular URLs from sites whose journalistic practices are less trustworthy and are thus deemed "Iffy." This phenomenon was accompanied by a surge in the sharing of popular URLs from mainstream news sites, thereby suggesting a turn to tried-and-true sources of information during times of uncertainty.

Recently over the two-week period from February 14 to February 28, 2022, another flight away from less trustworthy sources seems to have taken place on the two social media platforms. For the week ending February 14, 8.8% of the engagement with the most popular URLs on Facebook was dedicated to content from Iffy sites, and on Twitter this percentage was 8.2%. For the week ending February 28, the numbers dropped to 6.1% and 5.4%, respectively—more than a 30% drop.

Such a dramatic shift is noteworthy enough, but a closer examination reveals further intrigue. While this engagement trended downward in relatively stable fashion on Facebook, Twitter first experienced a rapid growth in engagement with Iffy sites, up to 10.4% for the week ending February 21, followed by an even more rapid decline.

We suggest these trends may be influenced in part by media coverage of the recent invasion of Ukraine by Russian forces. While Iffy sites can be more entertaining to readers, when being informed takes primary importance, readers seem to turn their attention away from Iffy sites, much like what was seen in the spring of 2020. Twitter’s Iffy Quotient trajectory in particular loosely maps onto the Russian invasion timeline: the rise in engagement with Iffy site URLs takes place around the same time as the onset of rumors of an invasion taking place on or around February 16, while its descent quickens around February 24, when news was breaking of Russian forces moving into Ukraine.

The Iffy Quotient’s daily details page (for example, the daily details for February 14) adds further color. Before the Russian invasion, the most engaged-with URLs from Iffy sites more often revolved around topics that hit closer to home in the U.S., especially COVID-19 (e.g. mask mandates, vaccines, etc.), Justin Trudeau and the Canadian trucker protests, and the Trump-Russia scandal (e.g. John Durham, Hillary Clinton, etc.). Stories about a possible Russia-Ukraine conflict emerged in the most engaged-with URLs from both trustworthy and Iffy sites beginning around February 21, with greater engagement with trustworthy site content. Once the invasion began on February 24, trustworthy sites’ coverage of it received a lot of engagement and those stories tended towards reportage.

On the other hand, the Iffy sites’ more limited popular content on the invasion appeared to take a more interpretive approach—for example, analyses that offer highly partisan perspectives of the invasion’s impact, especially on the U.S. By February 28 the Iffy sites’ most popular stories were less frequently about the Russian invasion and had reverted back to the topics that were trending pre-invasion, while the trustworthy sites’ most popular stories were nearly all dedicated to the Russia-Ukraine situation.

The overall drop in social media engagement with Iffy sites over the two-week period in question may have also been influenced by actions taken by both Facebook and Twitter against Russian state media. Both platforms have implemented their own labeling methods to make readers aware of news coming from sources with ties to the Russian government, and Facebook has stopped recommending Russian state media content globally and blocked it entirely in the EU. This may have deterred social media users from clicking on links that came from sites like RT and Sputnik (both considered to be “Iffy” according to site ratings issued by NewsGuard) and thus reduced the engagement share of Iffy sites. These were not among the most popular Iffy sites, however. For example, on February 20, 2022, prior to the invasion, RT had just one article in the top 10 from Iffy sites; among all articles, it was ranked 704 in terms of engagement on Facebook and 500 on Twitter. Thus, the platform actions taken against those media are not sufficient to explain the reduced user engagement with Iffy sites after the invasion.

Putting it all together, it is clear that social media users had shifting reactions to Iffy site content in the latter half of last month. There are strong indications that they once again experienced a flight from lower news quality and were aided by actions taken by social media platforms. Moreover, there are compelling reasons to believe that this flight was influenced by the onset of the Russia-Ukraine conflict and the media coverage it received. Once fighting began, engagement with Iffy site content on Twitter dropped rapidly, continuing a downward trend that began a few days before the invasion started. Such results might suggest not only that a flight to quality occurs when social media users are looking for accurate information, but also that readers may be treating the “hot takes” and highly partisan perspectives more as entertainment than as information sources, and entertainment is less captivating in times of uncertainty.


The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.


Written by James Park, Sagar Kumar, and Paul Resnick

CSMR awarded NSF Convergence Accelerator grant

Nov 04, 2021

The National Science Foundation (NSF) Convergence Accelerator, a new program designed to address national-scale societal challenges, has awarded $750,000 to a multidisciplinary team of researchers led by CSMR experts.

The project, “Misinformation Judgments with Public Legitimacy,” aims to develop jury-led services to identify and rectify misinformation on communication platforms. The goal of the services will be to make partially subjective judgments on content moderation in a way that the public accepts as legitimate, regardless of their political leaning. Paul Resnick is the project director, David Jurgens is one of the co-principal investigators, and James Park is the project manager.

“Almost everyone agrees that search and social media platforms should limit the spread of misinformation. They don't always agree, however, about exactly which content is misinforming,” says Resnick. “The core idea is to provide juries with fact-checks and other information related to a post and have the jurors deliberate before rendering judgments. By deferring to publicly legitimate judgments, platforms will act on misinformation more transparently, consistently, and effectively."

Joining Resnick, Jurgens, and Park on this initiative are:

Amy Zhang, University of Washington
David Rand, Massachusetts Institute of Technology
Adam Berinsky, Massachusetts Institute of Technology
Benji Loney, TrustLab

As part of the Convergence Accelerator’s Trust & Authenticity in Communication Systems research track, the project addresses the urgent need for tools and techniques to help the nation effectively prevent, mitigate, and adapt to critical threats to communication systems.

Read the announcement from the U-M School of Information

Paul Resnick featured in upcoming webinar on misinformation

Aug 11, 2021

Paul Resnick will be a featured guest in an upcoming webinar hosted by NewsWhip on Friday, August 13, 2021, at 11:30 AM EDT.

The webinar, "Misinformation 2021: Trends in public engagement," will explore the challenges of misinformation and data-informed reporting, historical trends demonstrating the staying power of misinformation, and proposed solutions for lessening the impact of misinformation. As part of this exploration, Paul will highlight insights drawn from our Iffy Quotient platform health metric. NewsGuard's Executive Vice President of Partnerships, Sarah Brandt, will also be featured in the conversation.

To attend the webinar, please register at https://lu.ma/2s2psf6e.

Cross-Partisan Discussions on YouTube: Conservatives Talk to Liberals but Liberals Don’t Talk to Conservatives

May 22, 2021

We present the first large-scale measurement study of cross-partisan discussions between liberals and conservatives on YouTube, based on a dataset of 274,241 political videos from 973 channels of US partisan media and 134M comments from 9.3M users over eight months in 2020. Contrary to a simple narrative of echo chambers, we find a surprising amount of cross-talk: most users with at least 10 comments posted at least once on both left-leaning and right-leaning YouTube channels. Cross-talk, however, was not symmetric. Based on the user leaning predicted by a hierarchical attention model, we find that conservatives were much more likely to comment on left-leaning videos than liberals on right-leaning videos. Secondly, YouTube's comment sorting algorithm made cross-partisan comments modestly less visible; for example, comments from conservatives made up 26.3% of all comments on left-leaning videos but accounted for just over 20% of the comments in the top 20 positions. Lastly, using Perspective API's toxicity score as a measure of quality, we find that conservatives were not significantly more toxic than liberals when users directly commented on the content of videos. However, when users replied to comments from other users, we find that cross-partisan replies were more toxic than co-partisan replies on both left-leaning and right-leaning videos, with cross-partisan replies being especially toxic on the replier's home turf.

Proceedings of the 15th International AAAI Conference on Web and Social Media, June 2021

Access the paper here: https://doi.org/10.1609/icwsm.v15i1.18105

Siqi Wu, Paul Resnick

Need to settle old scores shows up in iffy social media content during pandemic, election

Mar 25, 2021

If you are among the Facebook and Twitter users who thought posts you read during the heart of the pandemic and election featured stories that seemed conspiracy laden, politically one-sided, and just flat-out antagonistic, you might have been onto something.

CSMR has published Part 2 of a two-part blog post describing some of the trending news topics on social media during the transition from spring to the summer months in 2020. This is another entry in a series of guest posts for NewsWhip, one of our partners for the Iffy Quotient platform health metric.

In Part 1 of this story we revealed that COVID-19, race relations in the U.S., and the U.S. presidential election were the three most frequent topics of popular news items from both trustworthy (what we call “OK”) and Iffy websites, when measured by their popularity on Facebook and Twitter. These three topics comprised over 75% of OK sites’ popular news stories and a little under 60% of Iffy sites’, and the data showed some surprising differences.

Beyond the three main news topics, about 23% of the OK sites’ stories covered miscellaneous current news, compared to over 28% of Iffy sites’ stories.

But the real surprise comes from the other remaining popular stories. Almost 13% from Iffy sites were what we call “axe-grinding” stories that tried to settle old scores and revisit old grievances that appeared to their authors to have been mishandled previously—typically by the opposing ideological party. These stories reflected a deeply politically conservative stance and often used words like “scam” and “witch hunt.” In contrast, OK sites had a small amount of axe-grinding content, a little over 1%, generally involving retrospective critiques (in the form of opinion pieces) of President Trump, his administration, or his political allies.

During the period of our analysis there were more popular axe-grinding stories from Iffy sites than there were COVID-19 stories from the same sites, ~13% vs. ~11%.

A fuller discussion of our findings can be found in our NewsWhip blog post. The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.

Read the full blog post written by James Park and Paul Resnick
Read the Michigan News press release

Social media bans, restrictions have mixed impact on potential misinformation following Capitol riot

Jan 22, 2021

Following the wave of social media account suspensions and shutdowns, including former president Donald Trump’s, after the storming of the U.S. Capitol on January 6, the Iffy Quotient has shown different results on Facebook and Twitter.

Between January 5 and January 13 the Iffy Quotient on Twitter fell from 14.8% to 11.5%, while on Facebook it went from 10.9% to 11.6%. This means that on Twitter fewer URLs from Iffy sites made the top 5,000 most popular URLs in the immediate days after the platform took action to ban the president permanently and suspend some 70,000 user accounts.

Worth noting is the fact that Facebook’s Iffy Quotient was already lower than Twitter’s on January 5 and, on average, has actually been lower than Twitter’s over almost the last two years, but it is encouraging to see a marked drop in Twitter’s Iffy Quotient after they very publicly intervened on their platform.

In addition to seeing fewer Iffy sites among the most popular 5,000 URLs on Twitter, relative engagement with Iffy content was down on both platforms, though barely so on Facebook. On January 5 the engagement share of Iffy content on Twitter was 24.3%, but by January 13 it was down to 9.5%. On Facebook the engagement share was 16.9% on January 5 and 16.8% on January 13. Over this eight-day period on Twitter, the URLs that were most engaged with were less and less often from Iffy sites.
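
For readers who want the mechanics, here is a minimal sketch of the two quantities discussed above, assuming a list of a day's most popular URLs with engagement counts and a set of domains rated Iffy (an illustration, not the production Iffy Quotient code):

```python
from urllib.parse import urlparse

def domain(url):
    """Host of a URL, with a leading 'www.' stripped."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def iffy_quotient(popular_urls, iffy_domains):
    """Share of the most popular URLs (e.g. the top 5,000) that come from Iffy sites."""
    return sum(domain(u) in iffy_domains for u in popular_urls) / len(popular_urls)

def iffy_engagement_share(url_engagement, iffy_domains):
    """Share of total engagement that goes to URLs from Iffy sites.

    url_engagement: iterable of (url, engagement_count) pairs.
    """
    total = sum(count for _, count in url_engagement)
    iffy = sum(count for url, count in url_engagement if domain(url) in iffy_domains)
    return iffy / total if total else 0.0
```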

These things naturally fluctuate, but it’s promising when there is measurably less robust engagement with Iffy site URLs on Twitter after they announce that they’ve taken some direct actions.

The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.

Read the Michigan News press release

Center for Social Media Responsibility names Hemphill to leadership role

Dec 08, 2020

CSMR is pleased to announce the appointment of Libby Hemphill to the position of associate director.

Libby has served on our faculty council since its inception, and she studies the democratic potential and failures of social media, as well as ways to facilitate the analysis of groups, and differences between groups, within the realm of social media. In her new role, she will help us expand our efforts particularly in making meaningful, transparent comparisons of the experiences different groups of users have on various platforms.

Libby says: "I look forward to exploring how we can help platforms recognize the ways in which various groups might change their behaviors in order to exploit the platforms' features and mechanisms. At the same time, I'm excited about finding ways to amplify the positive effects of social media, like connection, discussion, and social support, and providing users and platforms with information and metrics to recognize changes over time."

With Libby in an expanded role at CSMR, we will continue to enthusiastically pursue our mission of helping social media platforms meet their public responsibilities. Welcome, Libby, to the leadership team!

Read the Michigan News announcement

Racism, election overtake COVID-19 as "iffy" news on popular social sites

Nov 16, 2020

Amidst the pandemic, one might expect that the most popular news URLs from Iffy websites shared on Facebook and Twitter would frequently be about COVID-19. Upon closer inspection, though, this isn’t exactly the case.

CSMR has published Part 1 of a two-part blog post describing some of the trending news topics on social media during the transition from spring to the summer months. This is the latest entry in a series of guest posts for NewsWhip, one of our partners for the Iffy Quotient platform health metric.

Using the Iffy Quotient’s daily details pages, we analyzed a sample of the most popular URLs on both Facebook and Twitter from Iffy sites and OK sites from May 1 to July 16, 2020. Much to our surprise, we found that not only was COVID-19 not the main topic of popular content from Iffy sites, it represented a relatively smaller fraction of popular Iffy content compared to other timely topics.

Almost three times as many popular Iffy site stories appeared to be about race-related issues (32.9%) than were about COVID-19 (11.4%), and U.S. presidential election-themed stories (14.3%) also outpaced pandemic ones. On the other hand, the most popular stories from OK sites were related to COVID-19 (37.1%), followed by stories about race-related issues (28.6%) and the U.S. presidential election (10%).

These three topics were the most popular and represented topics from both OK and Iffy websites, when measured by their popularity on Facebook and Twitter. But Iffy sites experienced the most success among social media users with their race-related stories, while the OK sites’ coverage of COVID-19 was their most popular, though their own race-related stories also garnered considerable attention. Iffy sites’ COVID-19 stories were far less frequently popular.

A fuller discussion of our findings can be found in our NewsWhip blog post. The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.

Read the full blog post written by James Park and Paul Resnick
Read the Michigan News press release

CSMR featured in "Disinformation, Misinformation, and Fake News" Teach-Out

Sep 07, 2020

CSMR's work on addressing questionable content on social media, in particular our Iffy Quotient platform health metric, has been featured in the University of Michigan's "Disinformation, Misinformation, and Fake News" Teach-Out. In a video contribution, James Park discusses the methodology and uses of the Iffy Quotient.

Other CSMR experts also appear in the Teach-Out: Mark Ackerman explains Warrant Theory; Cliff Lampe addresses social media and misinformation, the role of social media platforms, and deep fakes; and Paul Resnick and James Park have a conversation with NewsGuard's Sarah Brandt about misinformation, media trustworthiness, and their implications for social media and beyond.

Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.

The "Disinformation, Misinformation, and Fake News" Teach-Out teaches participants how to navigate the digital information landscape and identify questionable news, and it introduces critical skills in media and information literacy. The content can be accessed on the Coursera and FutureLearn learning platforms.

People “fly to quality” news on social sites when faced with uncertainty

Jul 22, 2020

When information becomes a matter of life or death or is key to navigating economic uncertainty, as it has been during the COVID-19 pandemic, it appears people turn to tried-and-true sources of information rather than iffy sites that have become a greater part of the social media news diet in recent years.

CSMR has published a guest blog post for NewsWhip, one of our partners for the Iffy Quotient platform health metric, detailing this “flight to quality.” This type of behavior is similar to what people exhibit with their investments when financial markets are volatile.

In late February, we noticed a drop in the Iffy Quotient—fewer of the popular URLs on both Facebook and Twitter were coming from iffy sites. That prompted us to check whether there was an associated surge in the sharing of articles from mainstream sources, which might be interpreted as a flight to quality in uncertain times. To do this we calculated a Mainstream Quotient, which measures the fraction of popular URLs that came from mainstream news sites. (Please see our NewsWhip blog post for more details on the Mainstream Quotient's methodology.)
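
For readers who want a concrete picture, below is a minimal sketch of a Mainstream Quotient-style calculation: the share of popular URLs whose domains appear on a mainstream-news list. The domain list and URLs here are illustrative placeholders, not the inputs used in our actual analysis; see the NewsWhip blog post for the real methodology.

```python
# Minimal sketch: fraction of popular URLs whose domain is on a mainstream list.
# The domain set and sample URLs are placeholders for illustration only.
from urllib.parse import urlparse

MAINSTREAM_DOMAINS = {"nytimes.com", "bbc.com", "reuters.com"}  # placeholder list

def mainstream_quotient(popular_urls):
    """Fraction of popular URLs that come from mainstream news domains."""
    def domain(url):
        host = urlparse(url).netloc.lower()
        return host[4:] if host.startswith("www.") else host

    hits = sum(1 for url in popular_urls if domain(url) in MAINSTREAM_DOMAINS)
    return hits / len(popular_urls) if popular_urls else 0.0

urls = [
    "https://www.nytimes.com/2020/03/01/health/coronavirus.html",
    "https://example-iffy.com/miracle-cure",
]
print(mainstream_quotient(urls))  # 0.5
```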

Ultimately, we found that during the initial stages of the COVID-19 crisis (Feb. 1 - May 1, 2020) there was both a relative reduction in sharing of content from Iffy sites on Facebook and Twitter, and a relative increase in the sharing of content from mainstream news sites. Presumably, this reflects a combination of changes in user behaviors and actions taken by the platforms. (On a number of occasions Facebook and Twitter have publicly announced their efforts to combat the spread of misinformation about COVID-19, ranging from the removal of such content to the addition of warning labels and messages and, on Facebook, a “COVID-19 Information Center.”)

A fuller discussion of our findings can be found in our NewsWhip blog post. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.

Read the full blog post written by Paul Resnick and James Park
Read the Michigan News press release

news img
news

Press release: New version of Iffy Quotient shows steady drop of questionable information on social media, partners with NewsGuard for better data

Jul 23, 2019

Michigan News has issued a press release on CSMR's finding of a continued decline in questionable content on Facebook and Twitter. This finding comes courtesy of the newest version of our Iffy Quotient metric, the first of our platform health metrics designed to track how well media platforms are meeting their public responsibilities. The latest Iffy Quotient figures indicate that the percentages of the most popular news URLs on Facebook and Twitter that come from “iffy” sites (ones that frequently publish unreliable information) fell between October 1, 2018, and July 1, 2019. On Facebook, questionable content dropped from 12.2% to 7.2% during that time, while on Twitter it dropped only slightly, from 11.1% to 10.9%.

CSMR has also formed an exciting new partnership with NewsGuard, which will now serve as our primary source for vetting and rating news and information sites.

The NewsGuard News Website Reliability Index provides a way to differentiate between generally reliable and generally unreliable sites. NewsGuard rates each site based on nine binary, apolitical criteria of journalistic practice, including whether a site repeatedly publishes false content, whether it regularly corrects or clarifies errors, and whether it avoids deceptive headlines.

Weighted points are awarded for each criterion and then summed; a score of less than 60 earns a “red” rating, while 60 and above earns a “green” rating, indicating that the site is generally reliable. NewsGuard also identifies which sites are satire—for example, the popular publication The Onion.

For the purposes of calculating the Iffy Quotient, a site with a NewsGuard “red” rating that is not identified as satire is considered iffy.
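
As a rough illustration of that rule, the sketch below classifies a site as iffy when its NewsGuard score falls below the 60-point “red” threshold and the site is not flagged as satire. The scores and satire flags shown are invented for the example and are not NewsGuard's actual ratings.

```python
# Illustrative sketch of the iffy-classification rule described above.
# A site is iffy if its NewsGuard rating is "red" (score below 60) and it is
# not identified as satire. The ratings below are made up for this example.
RED_THRESHOLD = 60

def is_iffy(site_rating):
    """site_rating: dict with a NewsGuard-style 'score' (0-100) and a 'satire' flag."""
    return site_rating["score"] < RED_THRESHOLD and not site_rating["satire"]

ratings = {
    "example-unreliable.com": {"score": 42.5, "satire": False},  # red, not satire -> iffy
    "example-satire.com":     {"score": 30.0, "satire": True},   # satire -> not iffy
    "example-reliable.com":   {"score": 87.5, "satire": False},  # green -> not iffy
}

for site, rating in ratings.items():
    print(site, "-> iffy" if is_iffy(rating) else "-> not iffy")
```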

More details of how the Iffy Quotient is calculated are included in our Iffy Quotient report. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.

Read the full press release

news img
events

New research on best practices and policies to reduce consumer harms from algorithmic bias

May 23, 2019

On May 22, 2019, Paul Resnick was among the featured expert speakers at the Brookings Institution's Center for Technology Innovation, which hosted a discussion on algorithmic bias. The panel discussion centered on the newly released Brookings paper on algorithmic bias detection and mitigation, co-authored by Nicol Turner Lee, Resnick, and Genie Barton. The paper offers government, technology, and industry leaders a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies that promote the fair and ethical deployment of artificial intelligence systems and machine learning algorithms.

The full video of the panel discussion can be watched on YouTube. (Please note that this link starts the video at the beginning of the panel discussion, which follows about 12 minutes of introductory remarks.)

news img
news

Unlike in 2016, there was no spike in misinformation this election cycle

Nov 05, 2018

In a piece written for The Conversation, Paul Resnick reflects on the 2016 election cycle and the rampant misinformation amplification that took place on Facebook and Twitter during that time. He notes that in the 2018 cycle, things have looked different—and perhaps a bit better, at least on Facebook.

Read Paul's article at The Conversation

news img
news

AP: Social media’s misinformation battle: No winners, so far

Nov 03, 2018

With the U.S. midterm elections just a few days away, CSMR's Iffy Quotient indicates that Facebook and Twitter have been making some progress over the last two years in the fight against online misinformation and hate speech. But, as the Associated Press reports in its coverage of the Iffy Quotient and other research on social media-driven misinformation, there is still a long way to go in the platforms' efforts.

Read the article from the Associated Press.

news img
news

We think the Russian Trolls are still out there: attempts to identify them with ML

Nov 02, 2018

A CSMR team has built machine learning (ML) tools to investigate Twitter accounts potentially operated by Russian trolls actively seeking to influence American political discourse. This work is being undertaken by Eric Gilbert, David Jurgens, Libby Hemphill, Eshwar Chandrasekharan, and Jane Im, and is further described in a paper written by Jane.

Read Jane's paper at Medium

news img
news

The Hill: Researchers unveil tool to track disinformation on social media

Oct 10, 2018

The launch of CSMR's Iffy Quotient, our first platform health metric, has been covered by The Hill. Paul Resnick's comments on the Iffy Quotient's context and utility are also quoted.

Read the article from The Hill.

news img
other

Paul Resnick selected for inaugural Executive Board of Wallace House

Sep 14, 2018

Wallace House at the University of Michigan has announced the formation of its inaugural Executive Board, which will include Paul Resnick.

Wallace House is committed to fostering excellence in journalism. It is the home to programs that recognize, sustain, and elevate the careers of journalists to address the challenges of journalism today, foster civic engagement, and uphold the role of a free press in a democratic society.

The Executive Board will provide strategic support for Wallace House's existing programs and guidance in developing new initiatives. It will advise the Knight-Wallace Fellowships for Journalists, the Livingston Awards, and the Wallace House Presents event series. Composed of acclaimed journalists and accomplished University of Michigan faculty, the board will play an active role in leading the organization through a period of growth and expanded vision.

news img
news

Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

Jul 02, 2018

Libby Hemphill is a co-author of a paper investigating a possible prediction method for online hostility on Instagram.

Abstract:
Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).

Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018)

Access the paper here: https://arxiv.org/abs/1804.06759

news img
other

University of Michigan experts discuss Facebook & Cambridge Analytica

May 14, 2018

University of Michigan experts weigh in on the Facebook and Cambridge Analytica controversy in the University's "Privacy, Reputation, and Identity in a Digital Age" Teach-Out. CSMR's Garlin Gilchrist and Florian Schaub join Sol Bermann, University Privacy Officer, in a video discussion about the relevant issues of, and takeaways from, the controversy.

Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.

The "Privacy, Reputation, and Identity in a Digital Age" Teach-Out considers questions of privacy, reputation, and identity using a case study approach and real-world scenarios across multiple topic areas.

The full video discussion of the Facebook and Cambridge Analytica controversy can be watched on YouTube or at Michigan Online.

news img
news

Wall Street Journal: Trolls Take Over Some Official U.S. Twitter Accounts

May 03, 2018

Libby Hemphill was quoted in a Wall Street Journal article about trolls taking over some official U.S. government Twitter accounts.

Read the article from The Wall Street Journal

news img
news

"Genderfluid" or "Attack Helicopter": Responsible HCI Research Practice with Non-binary Gender Variation in Online Communities

May 01, 2018

Oliver Haimson is a co-author of a paper investigating how to collect, analyze, and interpret research participants' genders with sensitivity to non-binary genders. The authors offer suggestions for responsible HCI research practices with gender variation in online communities.

Abstract:
As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze and interpret research participants' genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. First, Tumblr, a blogging platform, resulted in a rich set of gender identities with very few aggressive or resistive responses; the second case study, online Fantasy Football, yielded opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes "trolls'" opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces as well as offering a model for inclusive mixed-methods HCI research.

CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, April 2018, Paper No.: 307, Pages 1–15

Access the paper here: https://doi.org/10.1145/3173574.3173881

news img
news

Fragmented U.S. Privacy Rules Leave Large Data Loopholes for Facebook and Others

Apr 10, 2018

In a piece written for The Conversation—and reprinted in Scientific American—Florian Schaub comments on Facebook CEO Mark Zuckerberg’s Congressional testimony on ways to keep people’s online data private, and argues that Facebook has little reason to protect U.S. consumers the way it is required to in other countries, where privacy laws are more comprehensive.

Read Florian's article at The Conversation or Scientific American.

news img
news

What your kids want to tell you about social media

Apr 09, 2018

Sarita Schoenebeck was quoted in a HealthDay article about the effects of parents' social media use.

Read the article at Medical Xpress (via HealthDay)

news img
news

Why privacy policies are falling short...

Apr 01, 2018

Florian Schaub shares some thoughts on the shortcomings of technology-related privacy policies in an "Insights" article written for the Trust, Transparency and Control (TTC) Labs initiative.

TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy. Initiated and supported by Facebook, and built on collaboration, the movement includes major global businesses, startups, civic organizations, and academic institutions.

Read Florian's TTC Labs Insights article

news img
news

Online Harassment and Content Moderation: The Case of Blocklists

Mar 26, 2018

Eric Gilbert is a co-author of a paper investigating online harassment through the practices and consequences of Twitter blocklists. Based on their research, Gilbert and his co-authors also propose strategies that social media platforms might adopt to protect people from harassment and still respect their freedom of speech.

Abstract:
Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this article, we present a rich description of Twitter blocklists – why they are needed, how they work, and their strengths and weaknesses in practice. Next, we use blocklists to interrogate online harassment – the forms it takes, as well as tactics used by harassers. Specifically, we interviewed both people who use blocklists to protect themselves, and people who are blocked by blocklists. We find that users are not adequately protected from harassment, and at the same time, many people feel that they are blocked unnecessarily and unfairly. Moreover, we find that not all users agree on what constitutes harassment. Based on our findings, we propose design interventions for social network sites with the aim of protecting people from harassment, while preserving freedom of speech.

ACM Transactions on Computer-Human Interaction, Vol. 25, Issue 2, March 2018, Article No.: 12

Access the paper here: https://doi.org/10.1145/3185593

news img
news

AP: Social media offers dark spaces for political campaigning

Mar 11, 2018

Garlin Gilchrist was quoted in an Associated Press article about the dark spaces for political campaigning found on social media.

Read the article at Business Insider (via the Associated Press)

news img
news

MLive: New University of Michigan center to tackle "social media responsibility"

Mar 10, 2018

Garlin Gilchrist discussed the launch of the Center for Social Media Responsibility and the center's mission on MLive.

Read the article from MLive

news img
news

Classification and Its Consequences for Online Harassment: Design Insights from HeartMob

Jan 08, 2018

A paper about design insights from the HeartMob system, authored by Lindsay Blackwell, Jill Dimond (Sassafras Tech Collective), Sarita Schoenebeck, and Cliff Lampe, will be presented at the CSCW conference in October 2018. These insights may help platform companies, when designing classification systems for harmful content, take into account secondary impacts on harassment targets as well as primary impacts on the availability of the content itself.

Abstract:
Online harassment is a pervasive and pernicious problem. Techniques like natural language processing and machine learning are promising approaches for identifying abusive language, but they fail to address structural power imbalances perpetuated by automated labeling and classification. Similarly, platform policies and reporting tools are designed for a seemingly homogenous user base and do not account for individual experiences and systems of social oppression. This paper describes the design and evaluation of HeartMob, a platform built by and for people who are disproportionately affected by the most severe forms of online harassment. We conducted interviews with 18 HeartMob users, both targets and supporters, about their harassment experiences and their use of the site. We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior. We discuss these results through the lens of Bowker and Star’s classification theories and describe implications for labeling and classifying online abuse. Finally, informed by intersectional feminist theory, we argue that fully addressing online harassment requires the ongoing integration of vulnerable users’ needs into the design and moderation of online platforms.

Proceedings of the ACM on Human-Computer Interaction, Vol. 1, Issue CSCW, December 2017, Article No.: 24

Access the paper here: https://doi.org/10.1145/3134659