NEWS & EVENTS

See the latest at CSMR.

Racism, election overtake COVID-19 as "iffy" news on popular social sites

Nov 16, 2020

Amidst the pandemic, one might expect that the most popular news URLs from Iffy websites shared on Facebook and Twitter would frequently be about COVID-19. Upon closer inspection, though, this isn’t exactly the case.

CSMR has published Part 1 of a two-part blog post describing some of the trending news topics on social media during the transition from spring to the summer months. This is the latest entry in a series of guest posts for NewsWhip, one of our partners for the Iffy Quotient platform health metric.

Using the Iffy Quotient’s daily details pages, we analyzed a sample of the most popular URLs on both Facebook and Twitter from Iffy sites and OK sites from May 1 to July 16, 2020. Much to our surprise, we found that not only was COVID-19 not the main topic of popular content from Iffy sites, but it also represented a smaller fraction of popular Iffy content than other timely topics.

Almost three times as many popular Iffy-site stories appeared to be about race-related issues (32.9%) as about COVID-19 (11.4%), and U.S. presidential election-themed stories (14.3%) also outpaced pandemic ones. Among OK sites, by contrast, the most popular stories were related to COVID-19 (37.1%), followed by stories about race-related issues (28.6%) and the U.S. presidential election (10%).
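For a concrete sense of how such topic shares are tallied, here is a minimal sketch in Python. The topic labels and the 70-story sample are invented placeholders chosen only to reproduce the percentages above; CSMR's actual sample and categorization method are described in the NewsWhip blog post.

```python
from collections import Counter

# Hypothetical topic labels for a sample of popular Iffy-site stories.
# The real analysis labeled actual URLs; these counts are placeholders.
story_topics = (
    ["race-related"] * 23 + ["election"] * 10 + ["covid-19"] * 8 + ["other"] * 29
)

counts = Counter(story_topics)
total = sum(counts.values())
for topic, n in counts.most_common():
    print(f"{topic}: {n / total:.1%}")
# other: 41.4%, race-related: 32.9%, election: 14.3%, covid-19: 11.4%
```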

Measured by popularity on Facebook and Twitter, these three topics were the most represented among popular stories from both OK and Iffy websites. But Iffy sites had their greatest success among social media users with race-related stories, whereas OK sites' COVID-19 coverage was their most popular, with their race-related stories also garnering considerable attention. Iffy sites' COVID-19 stories were popular far less often.

A fuller discussion of our findings can be found in our NewsWhip blog post. The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.

Read the full blog post
Read the Michigan News press release


CSMR featured in "Disinformation, Misinformation, and Fake News" Teach-Out

Sep 07, 2020

CSMR's work on addressing questionable content on social media, in particular our Iffy Quotient platform health metric, has been featured in the University of Michigan's "Disinformation, Misinformation, and Fake News" Teach-Out. In a video contribution, James Park discusses the methodology and uses of the Iffy Quotient.

Other CSMR experts also appear in the Teach-Out: Mark Ackerman explains Warrant Theory; Cliff Lampe addresses social media and misinformation, the role of social media platforms, and deep fakes; and Paul Resnick and James Park have a conversation with NewsGuard's Sarah Brandt about misinformation, media trustworthiness, and their implications for social media and beyond.

Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.

The "Disinformation, Misinformation, and Fake News" Teach-Out teaches participants how to navigate the digital information landscape and identify questionable news, and it introduces critical skills in media and information literacy. The content can be accessed on the Coursera and FutureLearn learning platforms.


People “fly to quality” news on social sites when faced with uncertainty

Jul 22, 2020

When information becomes a matter of life or death or is key to navigating economic uncertainty, as it has been during the COVID-19 pandemic, it appears people turn to tried-and-true sources of information rather than iffy sites that have become a greater part of the social media news diet in recent years.

CSMR has published a guest blog post for NewsWhip, one of our partners for the Iffy Quotient platform health metric, detailing this “flight to quality.” This type of behavior is similar to what people exhibit with their investments when financial markets are volatile.

In late February, we noticed a drop in the Iffy Quotient—fewer of the popular URLs on both Facebook and Twitter were coming from Iffy sites. This inspired us to check whether there was an associated surge in sharing of articles from mainstream sources, which might be interpreted as a flight to quality in uncertain times. To do this we calculated a Mainstream Quotient, which tells us the fraction of popular URLs that came from mainstream news sites. (Please see our NewsWhip blog post for more details on the Mainstream Quotient's methodology.)
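As a rough illustration, the quotient-style calculation reduces to the fraction of a URL sample whose domains appear on a rated list. The sketch below is a simplified stand-in, not CSMR's actual pipeline; the domain list and sample URLs are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Hypothetical set of domains rated as mainstream news sources.
# The real calculation relies on curated site ratings.
MAINSTREAM_DOMAINS = {"nytimes.com", "bbc.com", "reuters.com"}

def site_domain(url: str) -> str:
    """Extract the host from a URL, dropping any leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def mainstream_quotient(popular_urls: list[str]) -> float:
    """Fraction of popular URLs whose domain is on the mainstream list."""
    if not popular_urls:
        return 0.0
    hits = sum(1 for url in popular_urls if site_domain(url) in MAINSTREAM_DOMAINS)
    return hits / len(popular_urls)

# Example: two of three sampled URLs come from mainstream domains -> 0.67
sample = [
    "https://www.nytimes.com/2020/03/01/health/coronavirus.html",
    "https://www.reuters.com/article/us-health-coronavirus",
    "https://some-iffy-site.example/shocking-claim",
]
print(f"Mainstream Quotient: {mainstream_quotient(sample):.2f}")
```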

Ultimately, we found that during the initial stages of the COVID-19 crisis (Feb. 1 - May 1, 2020) there was both a relative reduction in sharing of content from Iffy sites on Facebook and Twitter, and a relative increase in the sharing of content from mainstream news sites. Presumably, this reflects a combination of changes in user behaviors and actions taken by the platforms. (On a number of occasions Facebook and Twitter have publicly announced their efforts to combat the spread of misinformation about COVID-19, ranging from the removal of such content to the addition of warning labels and messages and, on Facebook, a “COVID-19 Information Center.”)

A fuller discussion of our findings can be found in our NewsWhip blog post. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.

Read the full blog post
Read the Michigan News press release


Press release: New version of Iffy Quotient shows steady drop of questionable information on social media, partners with NewsGuard for better data

Jul 23, 2019

Michigan News has issued a press release on CSMR's finding of a continued decline in questionable content on Facebook and Twitter. This finding comes courtesy of the newest version of our Iffy Quotient metric, the first of our platform health metrics designed to track how well media platforms are meeting their public responsibilities. The latest Iffy Quotient figures indicate that the percentages of the most popular news URLs on Facebook and Twitter that come from “iffy” sites (ones that frequently publish unreliable information) fell between October 1, 2018, and July 1, 2019. On Facebook, questionable content dropped from 12.2% to 7.2% during that time, while on Twitter it dropped only slightly, from 11.1% to 10.9%.

CSMR has also formed an exciting new partnership with NewsGuard, which will now serve as our primary source for vetting and rating news and information sites.

The NewsGuard News Website Reliability Index provides a way to differentiate between generally reliable and generally unreliable sites. NewsGuard rates each site based on nine binary, apolitical criteria of journalistic practice, including whether a site repeatedly publishes false content, whether it regularly corrects or clarifies errors, and whether it avoids deceptive headlines.

Weighted points are awarded for each criterion and then summed up; a score of less than 60 earns a “red” rating, while 60 and above earns a “green” rating, which indicates that the site is generally reliable. NewsGuard also identifies which sites are satire—for example, the popular publication The Onion.

For the purposes of calculating the Iffy Quotient, a site with a NewsGuard “red” rating that is not identified as satire is considered iffy.
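In code, that decision rule is short. The sketch below is one plausible reading of the rule as stated here, using an invented data structure; it is not NewsGuard's or CSMR's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SiteRating:
    """Minimal stand-in for a NewsGuard site rating (illustrative only)."""
    score: float      # weighted points summed across the nine criteria
    is_satire: bool   # NewsGuard flags satire sites separately

def rating_color(rating: SiteRating) -> str:
    """Scores below 60 earn a 'red' rating; 60 and above earn 'green'."""
    return "red" if rating.score < 60 else "green"

def is_iffy(rating: SiteRating) -> bool:
    """A site counts as iffy for the Iffy Quotient if it is rated
    'red' and is not identified as satire."""
    return rating_color(rating) == "red" and not rating.is_satire

# A low-scoring non-satire site is iffy; a satire site is not.
print(is_iffy(SiteRating(score=45, is_satire=False)))  # True
print(is_iffy(SiteRating(score=45, is_satire=True)))   # False
```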

More details of how the Iffy Quotient is calculated are included in our Iffy Quotient report. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.

Read the full press release


New research on best practices and policies to reduce consumer harms from algorithmic bias

May 23, 2019

On May 22, 2019, Paul Resnick was among the featured expert speakers at a discussion on algorithmic bias hosted by the Brookings Institution's Center for Technology Innovation. The panel discussion accompanied a newly released Brookings paper on algorithmic bias detection and mitigation, co-authored by Nicol Turner Lee, Resnick, and Genie Barton. The paper offers government, technology, and industry leaders a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies that promote the fair and ethical deployment of artificial intelligence systems and machine learning algorithms.

The full video of the panel discussion can be watched on YouTube. (Please note that this link starts the video at the beginning of the panel discussion, which follows about 12 minutes of introductory remarks.)


Unlike in 2016, there was no spike in misinformation this election cycle

Nov 05, 2018

In a piece written for The Conversation, Paul Resnick reflects on the 2016 election cycle and the rampant misinformation amplification that took place on Facebook and Twitter during that time. He notes that in the 2018 cycle, things have looked different—and perhaps a bit better, at least on Facebook.

Read Paul's article at The Conversation


AP: Social media’s misinformation battle: No winners, so far

Nov 03, 2018

With the U.S. midterm elections just a few days away, CSMR's Iffy Quotient indicates that Facebook and Twitter have been making some progress over the last two years in the fight against online misinformation and hate speech. But, as the Associated Press reports in its coverage of the Iffy Quotient and other research on social media-driven misinformation, there is still a long way to go in the platforms' efforts.

Read the article from the Associated Press


We think the Russian trolls are still out there: attempts to identify them with ML

Nov 02, 2018

A CSMR team has built machine learning (ML) tools to investigate Twitter accounts potentially operated by Russian trolls actively seeking to influence American political discourse. This work is being undertaken by Eric Gilbert, David Jurgens, Libby Hemphill, Eshwar Chandrasekharan, and Jane Im, and is further described in a paper written by Jane.

Read Jane's paper at Medium


The Hill: Researchers unveil tool to track disinformation on social media

Oct 10, 2018

The launch of CSMR's Iffy Quotient, our first platform health metric, has been covered by The Hill. The article also quotes Paul Resnick on the Iffy Quotient's context and utility.

Read the article from The Hill


Paul Resnick selected for inaugural Executive Board of Wallace House

Sep 14, 2018

Wallace House at the University of Michigan has announced the formation of its inaugural Executive Board, which will include Paul Resnick.

Wallace House is committed to fostering excellence in journalism. It is home to programs that recognize, sustain, and elevate the careers of journalists to address the challenges of journalism today, foster civic engagement, and uphold the role of a free press in a democratic society.

The Executive Board will provide strategic support for Wallace House's existing programs and guidance in developing new initiatives. It will advise the Knight-Wallace Fellowships for Journalists, the Livingston Awards, and the Wallace House Presents event series. Composed of acclaimed journalists and accomplished University of Michigan faculty, the board will play an active role in leading the organization through a period of growth and expanded vision.


Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

Jul 02, 2018

Libby Hemphill is a co-author of a paper investigating a possible prediction method for online hostility on Instagram.

Abstract:
Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).

Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018)

Access the paper here: https://arxiv.org/abs/1804.06759


University of Michigan experts discuss Facebook & Cambridge Analytica

May 14, 2018

University of Michigan experts weigh in on the Facebook and Cambridge Analytica controversy in the University's "Privacy, Reputation, and Identity in a Digital Age" Teach-Out. CSMR's Garlin Gilchrist and Florian Schaub join Sol Bermann, University Privacy Officer, in a video discussion about the relevant issues of, and takeaways from, the controversy.

Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.

The "Privacy, Reputation, and Identity in a Digital Age" Teach-Out considers questions of privacy, reputation, and identity using a case study approach and real-world scenarios across multiple topic areas.

The full video discussion of the Facebook and Cambridge Analytica controversy can be watched on YouTube or at Michigan Online.


Wall Street Journal: Trolls Take Over Some Official U.S. Twitter Accounts

May 03, 2018

Libby Hemphill was quoted in a Wall Street Journal article about trolls taking over some official U.S. government Twitter accounts.

Read the article from The Wall Street Journal


"Genderfluid" or "Attack Helicopter": Responsible HCI Research Practice with Non-binary Gender Variation in Online Communities

May 01, 2018

Oliver Haimson is a co-author of a paper investigating how to collect, analyze, and interpret research participants' genders with sensitivity to non-binary genders. The authors offer suggestions for responsible HCI research practices with gender variation in online communities.

Abstract:
As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze and interpret research participants' genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. First, Tumblr, a blogging platform, resulted in a rich set of gender identities with very few aggressive or resistive responses; the second case study, online Fantasy Football, yielded opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes "trolls'" opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces as well as offering a model for inclusive mixed-methods HCI research.

CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, April 2018, Paper No.: 307, Pages 1–15

Access the paper here: https://doi.org/10.1145/3173574.3173881


Fragmented U.S. Privacy Rules Leave Large Data Loopholes for Facebook and Others

Apr 10, 2018

In a piece written for The Conversation—and reprinted in Scientific American—Florian Schaub comments on Facebook CEO Mark Zuckerberg's Congressional testimony on ways to keep people's online data private, and argues that Facebook has little reason to protect U.S. consumers the way it is required to in other countries, which have more comprehensive privacy laws.

Read Florian's article at The Conversation or Scientific American.


What your kids want to tell you about social media

Apr 09, 2018

Sarita Schoenebeck was quoted in a HealthDay article about the effects of parents' social media use.

Read the article at Medical Xpress (via HealthDay)


Why privacy policies are falling short...

Apr 01, 2018

Florian Schaub shares some thoughts on the shortcomings of technology-related privacy policies in an "Insights" article written for the Trust, Transparency and Control (TTC) Labs initiative.

TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy. Initiated and supported by Facebook, and built on collaboration, the movement includes major global businesses, startups, civic organizations, and academic institutions.

Read Florian's TTC Labs Insights article


Online Harassment and Content Moderation: The Case of Blocklists

Mar 26, 2018

Eric Gilbert is a co-author of a paper investigating online harassment through the practices and consequences of Twitter blocklists. Based on their research, Gilbert and his co-authors also propose strategies that social media platforms might adopt to protect people from harassment and still respect their freedom of speech.

Abstract:
Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this article, we present a rich description of Twitter blocklists – why they are needed, how they work, and their strengths and weaknesses in practice. Next, we use blocklists to interrogate online harassment – the forms it takes, as well as tactics used by harassers. Specifically, we interviewed both people who use blocklists to protect themselves, and people who are blocked by blocklists. We find that users are not adequately protected from harassment, and at the same time, many people feel that they are blocked unnecessarily and unfairly. Moreover, we find that not all users agree on what constitutes harassment. Based on our findings, we propose design interventions for social network sites with the aim of protecting people from harassment, while preserving freedom of speech.

ACM Transactions on Computer-Human Interaction, Vol. 25, Issue 2, March 2018, Article No.: 12

Access the paper here: https://doi.org/10.1145/3185593


AP: Social media offers dark spaces for political campaigning

Mar 11, 2018

Garlin Gilchrist was quoted in an Associated Press article about the dark spaces for political campaigning found on social media.

Read the article at Business Insider (via the Associated Press)


MLive: New University of Michigan center to tackle "social media responsibility"

Mar 10, 2018

Garlin Gilchrist discussed the launch of the Center for Social Media Responsibility and the center's mission on MLive.

Read the article from MLive


Classification and Its Consequences for Online Harassment: Design Insights from HeartMob

Jan 08, 2018

A paper about design insights from the HeartMob system, authored by Lindsay Blackwell, Jill Dimond (Sassafras Tech Collective), Sarita Schoenebeck, and Cliff Lampe, will be presented at the CSCW conference in November 2018. These insights may help platform companies, when designing classification systems for harmful content, take into account secondary impacts on harassment targets as well as primary impacts on the availability of the content itself.

Abstract:
Online harassment is a pervasive and pernicious problem. Techniques like natural language processing and machine learning are promising approaches for identifying abusive language, but they fail to address structural power imbalances perpetuated by automated labeling and classification. Similarly, platform policies and reporting tools are designed for a seemingly homogenous user base and do not account for individual experiences and systems of social oppression. This paper describes the design and evaluation of HeartMob, a platform built by and for people who are disproportionately affected by the most severe forms of online harassment. We conducted interviews with 18 HeartMob users, both targets and supporters, about their harassment experiences and their use of the site. We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior. We discuss these results through the lens of Bowker and Star’s classification theories and describe implications for labeling and classifying online abuse. Finally, informed by intersectional feminist theory, we argue that fully addressing online harassment requires the ongoing integration of vulnerable users’ needs into the design and moderation of online platforms.

Proceedings of the ACM on Human-Computer Interaction, Vol. 1, Issue CSCW, December 2017, Article No.: 24

Access the paper here: https://doi.org/10.1145/3134659