NEWS & EVENTS

Paul Resnick files amicus brief for Gonzalez v. Google Supreme Court case
A Supreme Court case challenging a 1996 law could alter algorithmic recommendation systems on websites like YouTube. The law in question is Section 230 of the Communications Decency Act, which protects websites from liability arising from user content.
In an amicus brief, Paul Resnick and a group of distinguished information science scholars argue that websites should continue to be protected under this law.
The case, Gonzalez v. Google, No. 21-1333, is set to be heard in the spring. Resnick’s early research on automated online recommendation systems continues to be influential.
“If the Supreme Court were to rule that algorithmically recommending items incurs the same liability as publishing those items, many service providers would radically alter or stop providing those services,” Resnick says. “Recommender systems, search engines, and spam filters all provide great value to consumers, so it would be a huge loss to society if they went away.”
The lawsuit was filed by the family of a student killed in a November 2015 terrorist attack. The family argues that YouTube’s algorithm pushed ISIS recruitment videos and inspired the attack.
“We filed this brief because we thought it would be valuable for the court to have a technical perspective on how recommender systems and search engines work now, and how they worked at the time of the legislation,” Resnick says.
Read the announcement from the U-M School of Information
— Noor Hindi, UMSI public relations specialist
Image credit: Senate Democrats, CC BY 2.0, via Wikimedia Commons

CSMR's WiseDex team completes Phase 1 of the NSF Convergence Accelerator
Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content. WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation.
WiseDex is the venture of a multidisciplinary team of researchers, led by CSMR experts, who recently completed Phase 1 of the National Science Foundation (NSF) Convergence Accelerator. The Phase 1 team is composed of:
Paul Resnick
David Jurgens
James Park
Amy Zhang, University of Washington
David Rand, MIT
Adam Berinsky, MIT
Benji Loney, TrustLab
Please watch the WiseDex overview video for a fuller introduction.
To learn more, please visit the WiseDex website.
See Paul Resnick give a quick preview of WiseDex in this short video.

Nazanin Andalibi to give keynote speech at Reddit’s Mod Summit
Nazanin Andalibi will be the keynote speaker at the Third Annual Reddit Mod Summit on September 17, 2022. The virtual event brings together moderators on the popular social media site Reddit to discuss what’s new on the platform and interact with each other and the company’s leadership.
Andalibi's talk, “The Promises and Perils of Online Communities for Social Support and Wellbeing,” will highlight her work on how social media can facilitate both social support and community as well as harm and exclusion for marginalized people.
“I am excited about the opportunity to share key insights from my research with an audience that is so well-positioned to make practical changes for improving online communities,” says Andalibi. “To me, this invitation is a signal of my work's broader impact beyond the academic community.”
Andalibi hopes that her talk inspires Reddit moderators, designers, and decision-makers to “consider how, where, and for whom inequities manifest in subreddits, and how to foster and sustain more explicitly and meaningfully inclusive and compassionate online spaces supportive of diverse, marginalized life experiences and identities.”
She adds, “If Reddit as a platform decides to implement some of my design recommendations, then all the better!”
This summit is being held online and is open only to invited participants.

Washington Post: Facebook bans hate speech but still makes money from white supremacists
The Washington Post reports that despite Facebook’s long-time ban on white nationalism, the platform still hosts 119 Facebook pages and 20 Facebook groups associated with white supremacy organizations. Libby Hemphill said these groups have changed their approach to keep pages up.
“The people who are creating this content have become very tech savvy, so they are aware of the loopholes that exist and they’re using it to keep posting content,” said Hemphill. “Platforms are often just playing catch-up.”
More than 40% of searches for white supremacy organizations also presented users with an advertisement, meaning that Facebook continues to make money from hate groups’ pages.
Read the article from The Washington Post
Read the announcement from the U-M School of Information

Pennsylvania gubernatorial nominee courting voters on Gab pushes the GOP further right, experts say
Libby Hemphill tells the Pennsylvania Capital-Star that the Pennsylvania Republican gubernatorial nominee’s use of the far-right social media platform Gab is pushing the Republican party further to the right.
Hemphill explains that Gab emerged as a refuge for hate speech after mainstream platforms like Facebook and Twitter took steps to moderate and remove racist and bigoted content.
The Capital-Star reports that nominee Doug Mastriano paid $5,000 to Gab.com for campaign consulting. “When a mainstream politician says, 'I want to reach hate groups where they meet,' that’s scary to us,” Hemphill said.
She said that it remains to be seen how voters will react to Mastriano’s association with Gab, noting that it could motivate voters for or against him. But either way, “for a statewide office, it’s a risky move,” she said.
“Even if this doesn’t work in 2022, if it moves the window of what is acceptable to include engaging with white supremacists,” Hemphill said, “that is a significant push for the Republican party.”
Read the article from the Pennsylvania Capital-Star
Read the announcement from the U-M School of Information

Meet CSMR's WiseDex team at the NSF Convergence Accelerator Expo 2022
Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content. WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation. WiseDex is a joint venture of collaborators from CSMR, University of Washington, MIT, and TrustLab, and is a 2021 cohort member of the National Science Foundation (NSF) Convergence Accelerator.
See Paul Resnick give a quick preview of WiseDex in this short video.
The WiseDex team is presenting at the NSF Convergence Accelerator Expo 2022 on July 27 and 28, 2022. This virtual event is an opportunity to see unique, use-inspired research solutions funded by the NSF Convergence Accelerator. Last year’s event drew over 2,000 registrants from academia, industry, nonprofit, and government—and this year’s event is expected to be even larger. During the event, you will have the opportunity to connect with the WiseDex team and to learn more about WiseDex through various live presentations.
The Convergence Accelerator Expo 2022 is open to the public. Researchers, innovators, technology and business practitioners, and media from academia, industry, government, nonprofit, and other communities are encouraged to attend.
To register, please visit https://nsf-ca-expo2022.vfairs.com. Registration is free.
Read the announcement from the U-M School of Information
Watch Paul Resnick's WiseDex preview on YouTube

New website provides information for marginalized communities after social media bans
An interdisciplinary team has launched a new online resource for understanding and dealing with social media content bans. The Online Identity Help Center (OIHC), the brainchild of Oliver Haimson, aims to help marginalized people navigate the gray areas in social media moderation.
Having content taken down from social media can be a jarring and upsetting experience. Being silenced can feel even more isolating if you are a person from a marginalized group. Those who are sexual, gender, and/or racial minorities may already feel unwelcome online, and content censorship can narrow a sense of community even further.
After content removal there is often an accompanying notification, but people may not fully understand why the content was deemed inappropriate. Often, it is unclear how to avoid censorship in the future. Even worse, people may be unsure how to reinstate an account post-ban.
OIHC aims to help people understand different social media platforms’ policies and rules about content takedowns. It also provides easy-to-read resources on different social media guidelines and what to do if your content is taken down.
"I just want people who are experiencing what can be really stressful situations to be able to find this resource and use it to help them in some way," says Haimson. "We think of it kind of as a digital literacy resource, helping people to learn more about this digital world.
"In an ideal world, I would love it if this could influence social media policy in some way to have social media policy managers better understand these challenges that people face."
Visit the Online Identity Help Center (OIHC)
Read the announcement from the U-M School of Information
(Image credit: Irene Falgueras via blush.design)

KQED: Sarita Schoenebeck on what a healthy online commons would look like
A podcast episode from KQED (California) on reimagining the future of digital public spaces features Sarita Schoenebeck, who shares some ways in which content moderation on social media platforms can become more effective through the use of shared community values and human rights.
"It's important to remember social media is not a blank slate any more than any community offline is," Schoenebeck says. "We can look at our local schools, for example, and they all have policies and values that shape people’s experiences.
"Free speech is at the government level, but platforms can make their decisions around what values they want to have and what they want to enforce,” she adds.
Listen to the podcast episode from KQED
Read the announcement from the U-M School of Information

David Jurgens receives NSF CAREER award
David Jurgens has been awarded a five-year, $581,433 grant from the National Science Foundation. The NSF CAREER award will fund his research on "Fostering Prosocial Behavior and Well-Being in Online Communities," which focuses on identifying and measuring how social media can have a positive impact on people’s lives.
Although many studies have focused on how social media platforms can negatively impact people, the positive effects remain poorly understood. Jurgens plans to study the effects of prosocial behavior on social media and how it can boost the psychological well-being of users.
"In general, I'm an optimist, but an optimist who likes to measure things," says Jurgens. "I've been curious whether social media has actual benefit in our lives for all its downsides."
As part of the project, Jurgens plans on releasing tools for everyday users to help people track the good and bad interactions and content on their social media feeds. He says the tools will "hopefully help folks make informed decisions about when it's time to take a break from social media."
Jurgens says the CAREER award will fund new experiments aimed at helping people be more prosocial through strategic encouragement. "I'm hoping to give folks a hand, and a helpful nudge, to show more care and compassion towards others."

As Russia invades Ukraine, social media users “fly to quality”
A swift drop late last month in social media engagement with content from "Iffy" sources has prompted CSMR researchers to ask whether Facebook and Twitter users have been experiencing a "flight to news quality," or at least a flight away from less trustworthy sites, during that time—and if so, why.
After the beginning of the COVID-19 pandemic in 2020, our Iffy Quotient platform health metric indicated a flight to quality on Facebook and Twitter. We observed a sudden dip on both platforms in the sharing of popular URLs from sites whose journalistic practices are less trustworthy and are thus deemed "Iffy." This phenomenon was accompanied by a surge in the sharing of popular URLs from mainstream news sites, thereby suggesting a turn to tried-and-true sources of information during times of uncertainty.
Recently, over the two-week period from February 14 to February 28, 2022, another flight away from less trustworthy sources seems to have taken place on the two social media platforms. For the week ending February 14, 8.8% of the engagement with the most popular URLs on Facebook was dedicated to content from Iffy sites, and on Twitter this percentage was 8.2%. For the week ending February 28, the numbers dropped to 6.1% and 5.4%, respectively—more than a 30% drop.
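For readers checking the arithmetic, the "more than a 30% drop" describes the relative change in each platform's engagement share rather than a drop in percentage points; a minimal check using the figures quoted above:

```python
# Relative drop in Iffy-site engagement share, weeks ending Feb 14 vs. Feb 28, 2022.
shares = {"Facebook": (8.8, 6.1), "Twitter": (8.2, 5.4)}  # percent of engagement

for platform, (before, after) in shares.items():
    relative_drop = (before - after) / before * 100
    print(f"{platform}: {before}% -> {after}% ({relative_drop:.0f}% relative drop)")
# Facebook: roughly 31%; Twitter: roughly 34%. Both exceed a 30% relative drop.
```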
Such a dramatic shift is noteworthy enough, but a closer examination reveals further intrigue. While this engagement trended downward in relatively stable fashion on Facebook, Twitter first experienced a rapid growth in engagement with Iffy sites, up to 10.4% for the week ending February 21, followed by an even more rapid decline.
We suggest these trends may be influenced in part by media coverage of the recent invasion of Ukraine by Russian forces. While Iffy sites can be more entertaining to readers, when being informed takes primary importance, readers seem to turn their attention away from Iffy sites, much like what was seen in the spring of 2020. Twitter’s Iffy Quotient trajectory in particular loosely maps onto the Russian invasion timeline: the rise in engagement with Iffy site URLs takes place around the same time as the onset of rumors of an invasion taking place on or around February 16, while its descent quickens around February 24, when news was breaking of Russian forces moving into Ukraine.
The Iffy Quotient’s daily details page (for example, the daily details for February 14) adds further color. Before the Russian invasion, the most engaged-with URLs from Iffy sites more often revolved around topics that hit closer to home for U.S. audiences, especially COVID-19 (e.g. mask mandates, vaccines, etc.), Justin Trudeau and the Canadian trucker protests, and the Trump-Russia scandal (e.g. John Durham, Hillary Clinton, etc.). Stories about a possible Russia-Ukraine conflict emerged in the most engaged-with URLs from both trustworthy and Iffy sites beginning around February 21, with greater engagement with trustworthy site content. Once the invasion began on February 24, trustworthy sites’ coverage of it received a lot of engagement and those stories tended towards reportage.
On the other hand, the Iffy sites’ more limited popular content on the invasion appeared to take a more interpretive approach—for example, analyses that offer highly partisan perspectives of the invasion’s impact, especially on the U.S. By February 28 the Iffy sites’ most popular stories were less frequently about the Russian invasion and had reverted back to the topics that were trending pre-invasion, while the trustworthy sites’ most popular stories were nearly all dedicated to the Russia-Ukraine situation.
The overall drop in social media engagement with Iffy sites over the two-week period in question may have also been influenced by actions taken by both Facebook and Twitter against Russian state media. Both platforms have implemented their own labeling methods to make readers aware of news coming from sources with ties to the Russian government, and Facebook has stopped recommending it globally and blocked it entirely in the EU. This may have deterred social media users from clicking on links that came from sites like RT and Sputnik (both considered to be “Iffy” according to site ratings issued by NewsGuard) and thus reduced the engagement share of Iffy sites. These were not among the most popular Iffy sites, however. For example, on February 20, 2022, prior to the invasion, RT had just one article in the top 10 from Iffy sites; among all articles, it was ranked 704 in terms of engagement on Facebook and 500 on Twitter. Thus, the platform actions taken against those media are not sufficient to explain the reduced user engagement with Iffy sites after the invasion.
Putting it all together, it is clear that social media users had shifting reactions to Iffy site content in the latter half of last month. There are strong indications that they once again experienced a flight from lower news quality and were aided by actions taken by social media platforms. Moreover, there are compelling reasons to believe that this flight was influenced by the onset of the Russia-Ukraine conflict and the media coverage it received. Once fighting began, engagement with Iffy site content on Twitter dropped rapidly, continuing a downward trend that began a few days before the invasion started. Such results might suggest not only that a flight to quality occurs when social media users are looking for accurate information, but also that readers may be treating the “hot takes” and highly partisan perspectives more as entertainment than as information sources, and entertainment is less captivating in times of uncertainty.
The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.
Written by James Park, Sagar Kumar, and Paul Resnick

CSMR awarded NSF Convergence Accelerator grant
The National Science Foundation (NSF) Convergence Accelerator, a new program designed to address national-scale societal challenges, has awarded $750,000 to a multidisciplinary team of researchers led by CSMR experts.
The project, “Misinformation Judgments with Public Legitimacy,” aims to develop jury-led services to identify and rectify misinformation on communication platforms. The goal of the services will be to make partially subjective judgments on content moderation in a way that the public accepts as legitimate, regardless of their political leaning. Paul Resnick is the project director, David Jurgens is one of the co-principal investigators, and James Park is the project manager.
“Almost everyone agrees that search and social media platforms should limit the spread of misinformation. They don't always agree, however, about exactly which content is misinforming,” says Resnick. “The core idea is to provide juries with fact-checks and other information related to a post and have the jurors deliberate before rendering judgments. By deferring to publicly legitimate judgments, platforms will act on misinformation more transparently, consistently, and effectively."
Joining Resnick, Jurgens, and Park on this initiative are:
Amy Zhang, University of Washington
David Rand, Massachusetts Institute of Technology
Adam Berinsky, Massachusetts Institute of Technology
Benji Loney, TrustLab
As part of the Convergence Accelerator’s Trust & Authenticity in Communication Systems research track, the project addresses the urgent need for tools and techniques to help the nation effectively prevent, mitigate, and adapt to critical threats to communication systems.

Paul Resnick featured in upcoming webinar on misinformation
Paul Resnick will be a featured guest in an upcoming webinar hosted by NewsWhip on Friday, August 13, 2021, at 11:30 AM EDT.
The webinar, "Misinformation 2021: Trends in public engagement," will explore the challenges of misinformation and data-informed reporting, historical trends demonstrating the staying power of misinformation, and proposed solutions for lessening the impact of misinformation. As part of this exploration, Paul will highlight insights drawn from our Iffy Quotient platform health metric. NewsGuard's Executive Vice President of Partnerships, Sarah Brandt, will also be featured in the conversation.
To attend the webinar, please register at https://lu.ma/2s2psf6e.

Need to settle old scores shows up in iffy social media content during pandemic, election
If you are among the Facebook and Twitter users who thought posts you read during the heart of the pandemic and election featured stories that seemed conspiracy-laden, politically one-sided, and just flat-out antagonistic, you might have been onto something.
CSMR has published Part 2 of a two-part blog post describing some of the trending news topics on social media during the transition from spring to the summer months in 2020. This is another entry in a series of guest posts for NewsWhip, one of our partners for the Iffy Quotient platform health metric.
In Part 1 of this story we revealed that COVID-19, race relations in the U.S., and the U.S. presidential election were the three most frequent topics of popular news items from both trustworthy (what we call “OK”) and Iffy websites, when measured by their popularity on Facebook and Twitter. These three topics comprised over 75% of OK sites’ popular news stories and a little under 60% of Iffy sites’, and the data showed some surprising differences.
Beyond the three main news topics, about 23% of the OK sites’ stories covered miscellaneous current news, compared to over 28% of Iffy sites’ stories.
But the real surprise comes from the other remaining popular stories. Almost 13% from Iffy sites were what we call “axe-grinding” stories that tried to settle old scores and revisit old grievances that appeared to their authors to have been mishandled previously—typically by the opposing ideological party. These stories reflected a deeply politically conservative stance and often used words like “scam” and “witch hunt.” In contrast, OK sites had a small amount of axe-grinding content, a little over 1%, generally involving retrospective critiques (in the form of opinion pieces) of President Trump, his administration, or his political allies.
During the period of our analysis there were more popular axe-grinding stories from Iffy sites than there were COVID-19 stories from the same sites, ~13% vs. ~11%.
A fuller discussion of our findings can be found in our NewsWhip blog post. The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.
Read the full blog post written by James Park and Paul Resnick
Read the Michigan News press release

Social media bans, restrictions have mixed impact on potential misinformation following Capitol riot
Following the wave of social media account suspensions and shutdowns, including former president Donald Trump’s, after the storming of the U.S. Capitol on January 6, the Iffy Quotient has shown different results on Facebook and Twitter.
Between January 5 and January 13 the Iffy Quotient on Twitter fell from 14.8% to 11.5%, while on Facebook it went from 10.9% to 11.6%. This means that on Twitter fewer URLs from Iffy sites made the top 5,000 most popular URLs in the immediate days after the platform took action to ban the president permanently and suspend some 70,000 user accounts.
It is worth noting that Facebook’s Iffy Quotient was already lower than Twitter’s on January 5 and, on average, has been lower than Twitter’s for almost the last two years. Still, it is encouraging to see a marked drop in Twitter’s Iffy Quotient after the platform very publicly intervened.
In addition to seeing fewer Iffy sites among the most popular 5,000 URLs on Twitter, relative engagement with Iffy content was down on both platforms, though barely so on Facebook. On January 5 the engagement share of Iffy content on Twitter was 24.3%, but by January 13 it was down to 9.5%. On Facebook the engagement share was 16.9% on January 5 and 16.8% on January 13. Over this eight-day period on Twitter, the URLs that were most engaged with were less and less often from Iffy sites.
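The Iffy Quotient and the engagement share answer related but different questions: the Iffy Quotient counts how many of the top 5,000 most popular URLs come from Iffy sites, while the engagement share weights those URLs by how much engagement each one receives. Below is a minimal sketch of the distinction, using hypothetical records in place of the popularity and engagement data the real metrics draw on.

```python
# Hypothetical sketch: Iffy Quotient vs. engagement share of Iffy content.
# url_records: a day's most popular URLs on a platform, each with an
# engagement count and a flag indicating whether its site is rated Iffy.
url_records = [
    {"url": "https://example-news.com/a", "engagement": 120_000, "iffy": False},
    {"url": "https://iffy-site.example/b", "engagement": 45_000,  "iffy": True},
    {"url": "https://example-news.com/c", "engagement": 30_000,  "iffy": False},
]

# Iffy Quotient: share of the most popular URLs that come from Iffy sites.
iffy_quotient = sum(r["iffy"] for r in url_records) / len(url_records)

# Engagement share: share of total engagement going to URLs from Iffy sites.
total = sum(r["engagement"] for r in url_records)
engagement_share = sum(r["engagement"] for r in url_records if r["iffy"]) / total

print(f"Iffy Quotient: {iffy_quotient:.1%}, Iffy engagement share: {engagement_share:.1%}")
```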
These things naturally fluctuate, but it’s promising when there is measurably less robust engagement with Iffy site URLs on Twitter after they announce that they’ve taken some direct actions.
The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.

Center for Social Media Responsibility names Hemphill to leadership role
CSMR is pleased to announce the appointment of Libby Hemphill to the position of associate director.
Libby has served on our faculty council since its inception. She studies the democratic potential and failures of social media, as well as ways to facilitate the analysis of groups, and of differences between groups, on social media. In her new role, she will help us expand our efforts, particularly in making meaningful, transparent comparisons of the experiences different groups of users have on various platforms.
Libby says: "I look forward to exploring how we can help platforms recognize the ways in which various groups might change their behaviors in order to exploit the platforms' features and mechanisms. At the same time, I'm excited about finding ways to amplify the positive effects of social media, like connection, discussion, and social support, and providing users and platforms with information and metrics to recognize changes over time."
With Libby in an expanded role at CSMR, we will continue to enthusiastically pursue our mission of helping social media platforms meet their public responsibilities. Welcome, Libby, to the leadership team!

Racism, election overtake COVID-19 as "iffy" news on popular social sites
Amidst the pandemic, one might expect that the most popular news URLs from Iffy websites shared on Facebook and Twitter would frequently be about COVID-19. Upon closer inspection, though, this isn’t exactly the case.
CSMR has published Part 1 of a two-part blog post describing some of the trending news topics on social media during the transition from spring to the summer months. This is the latest entry in a series of guest posts for NewsWhip, one of our partners for the Iffy Quotient platform health metric.
Using the Iffy Quotient’s daily details pages, we analyzed a sample of the most popular URLs on both Facebook and Twitter from Iffy sites and OK sites from May 1 to July 16, 2020. Much to our surprise, we found that not only was COVID-19 not the main topic of popular content from Iffy sites, it represented a relatively smaller fraction of popular Iffy content compared to other timely topics.
Almost three times as many popular Iffy site stories appeared to be about race-related issues (32.9%) as were about COVID-19 (11.4%), and U.S. presidential election-themed stories (14.3%) also outpaced pandemic ones. On the other hand, the most popular stories from OK sites were related to COVID-19 (37.1%), followed by stories about race-related issues (28.6%) and the U.S. presidential election (10%).
These three topics were the most popular and best-represented topics from both OK and Iffy websites, when measured by their popularity on Facebook and Twitter. But Iffy sites experienced the most success among social media users with their race-related stories, while the OK sites’ coverage of COVID-19 was their most popular, though their own race-related stories also garnered considerable attention. Iffy sites’ COVID-19 stories were far less frequently popular.
A fuller discussion of our findings can be found in our NewsWhip blog post. The Iffy Quotient for Facebook and Twitter and the daily details page, all updated daily, are available at our Iffy Quotient page.
Read the full blog post written by James Park and Paul Resnick
Read the Michigan News press release

CSMR featured in "Disinformation, Misinformation, and Fake News" Teach-Out
CSMR's work on addressing questionable content on social media, in particular our Iffy Quotient platform health metric, has been featured in the University of Michigan's "Disinformation, Misinformation, and Fake News" Teach-Out. In a video contribution, James Park discusses the methodology and uses of the Iffy Quotient.
Other CSMR experts also appear in the Teach-Out: Mark Ackerman explains Warrant Theory; Cliff Lampe addresses social media and misinformation, the role of social media platforms, and deep fakes; and Paul Resnick and James Park have a conversation with NewsGuard's Sarah Brandt about misinformation, media trustworthiness, and their implications for social media and beyond.
Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.
The "Disinformation, Misinformation, and Fake News" Teach-Out teaches participants how to navigate the digital information landscape and identify questionable news, and it introduces critical skills in media and information literacy. The content can be accessed on the Coursera and FutureLearn learning platforms.

Folha de S.Paulo: In times of uncertainty, readers seek information from known sources, says study
CSMR's research on the "flight to news quality" on social media has been covered by Folha de S.Paulo (São Paulo, Brazil).
Read the article from Folha de S.Paulo on our findings. (Please note that the article is in Portuguese.)
Read our original "flight to quality" analysis.

Christian Science Monitor: Why old-style news is new again
CSMR's research on the "flight to news quality" on social media has been covered by The Christian Science Monitor.
Read the editorial from The Christian Science Monitor on our findings.
Read our original "flight to quality" analysis.

People “fly to quality” news on social sites when faced with uncertainty
When information becomes a matter of life or death or is key to navigating economic uncertainty, as it has been during the COVID-19 pandemic, it appears people turn to tried-and-true sources of information rather than iffy sites that have become a greater part of the social media news diet in recent years.
CSMR has published a guest blog post for NewsWhip, one of our partners for the Iffy Quotient platform health metric, detailing this “flight to quality.” This type of behavior is similar to what people exhibit with their investments when financial markets are volatile.
In late February, we noticed a drop in the Iffy Quotient—fewer of the popular URLs on both Facebook and Twitter were coming from Iffy sites. This prompted us to check whether there was an associated surge in the sharing of articles from mainstream sources, which might be interpreted as a flight to quality in uncertain times. To do this we calculated a Mainstream Quotient, which measures the fraction of popular URLs that came from mainstream news sites. (Please see our NewsWhip blog post for more details on the Mainstream Quotient's methodology.)
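The post above describes the Mainstream Quotient only at a high level, so the following is a rough sketch of the idea under our own assumptions; the site list and function below are purely illustrative, and the actual methodology is described in the NewsWhip blog post.

```python
# Hypothetical sketch of a Mainstream Quotient: the fraction of a day's most
# popular URLs that come from mainstream news sites.
from urllib.parse import urlparse

MAINSTREAM_SITES = {"nytimes.com", "apnews.com", "bbc.com"}  # illustrative only

def mainstream_quotient(popular_urls):
    """Return the fraction of popular URLs whose domain is a mainstream news site."""
    hits = sum(1 for u in popular_urls
               if urlparse(u).netloc.removeprefix("www.") in MAINSTREAM_SITES)
    return hits / len(popular_urls)

print(mainstream_quotient([
    "https://www.apnews.com/article/covid-19-update",
    "https://randomblog.example/conspiracy-post",
]))  # -> 0.5
```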
Ultimately, we found that during the initial stages of the COVID-19 crisis (Feb. 1 - May 1, 2020) there was both a relative reduction in sharing of content from Iffy sites on Facebook and Twitter, and a relative increase in the sharing of content from mainstream news sites. Presumably, this reflects a combination of changes in user behaviors and actions taken by the platforms. (On a number of occasions Facebook and Twitter have publicly announced their efforts to combat the spread of misinformation about COVID-19, ranging from the removal of such content to the addition of warning labels and messages and, on Facebook, a “COVID-19 Information Center.”)
A fuller discussion of our findings can be found in our NewsWhip blog post. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.
Read the full blog post written by Paul Resnick and James Park
Read the Michigan News press release

Press release: New version of Iffy Quotient shows steady drop of questionable information on social media, partners with NewsGuard for better data
A press release has been issued by Michigan News on CSMR seeing a continued decline in questionable content on Facebook and Twitter. This finding comes courtesy of the newest version of our Iffy Quotient metric, the first of our platform health metrics designed to track how well media platforms are meeting their public responsibilities. The latest Iffy Quotient figures indicate that the percentages of the most popular news URLs on Facebook and Twitter that are from “iffy” sites (ones that frequently publish unreliable information) have fallen over the period of October 1, 2018, to July 1, 2019. On Facebook, questionable content dropped from 12.2% to 7.2% during that time, while on Twitter it dropped only slightly from 11.1% to 10.9%.
CSMR has also formed a new, exciting partnership with NewsGuard, who will now serve as our primary source for vetting and rating news and information sites.
The NewsGuard News Website Reliability Index provides a way to differentiate between generally reliable and generally unreliable sites. NewsGuard rates each site based on nine binary, apolitical criteria of journalistic practice, including whether a site repeatedly publishes false content, whether it regularly corrects or clarifies errors, and whether it avoids deceptive headlines.
Weighted points are awarded for each criterion and then summed up; a score of less than 60 earns a “red” rating, while 60 and above earns a “green” rating, which indicates it is generally reliable. NewsGuard also identifies which sites are satire—for example, the popular publication The Onion.
For the purposes of calculating the Iffy Quotient, a site with a NewsGuard “red” rating that is not identified as satire is considered iffy.
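Putting the pieces above together, a minimal sketch of that classification rule might look like the following; the field names are hypothetical, not NewsGuard's actual data format.

```python
# Sketch of the Iffy classification rule described above: a site is treated as
# Iffy if NewsGuard rates it "red" (score below 60) and it is not satire.
def is_iffy(site_rating):
    return site_rating["score"] < 60 and not site_rating["satire"]

# A low-scoring non-satire site is Iffy; a satire site like The Onion is not.
print(is_iffy({"score": 42, "satire": False}))  # True
print(is_iffy({"score": 25, "satire": True}))   # False

# The Iffy Quotient for a day is then the fraction of that day's most popular
# news URLs that come from sites classified as Iffy.
```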
More details of how the Iffy Quotient is calculated are included in our Iffy Quotient report. The graph showing trends in the Iffy Quotient for Facebook and Twitter, updated daily, is available at our Iffy Quotient page.

New research on best practices and policies to reduce consumer harms from algorithmic bias
On May 22, 2019, Paul Resnick was among the featured expert speakers at the Brookings Institution's Center for Technology Innovation, which hosted a discussion on algorithmic bias. This panel discussion related to the newly released Brookings paper on algorithmic bias detection and mitigation, co-authored by Nicol Turner Lee, Resnick, and Genie Barton. It offers government, technology, and industry leaders a set of public policy recommendations, self-regulatory best practices, and consumer-focused strategies that promote the fair and ethical deployment of artificial intelligence systems and machine learning algorithms.
The full video of the panel discussion can be watched on YouTube. (Please note that this link starts the video at the beginning of the panel discussion, which follows about 12 minutes of introductory remarks.)

Unlike in 2016, there was no spike in misinformation this election cycle
In a piece written for The Conversation, Paul Resnick reflects on the 2016 election cycle and the rampant misinformation amplification that took place on Facebook and Twitter during that time. He notes that in the 2018 cycle, things have looked different—and perhaps a bit better, at least on Facebook.

AP: Social media’s misinformation battle: No winners, so far
With the U.S. midterm elections just a few days away, CSMR's Iffy Quotient indicates that Facebook and Twitter have been making some progress over the last two years in the fight against online misinformation and hate speech. But, as the Associated Press reports in its coverage of the Iffy Quotient and other research on social media-driven misinformation, there is still a long way to go in the platforms' efforts.

We think the Russian trolls are still out there: Attempts to identify them with ML
A CSMR team has built machine learning (ML) tools to investigate Twitter accounts potentially operated by Russian trolls actively seeking to influence American political discourse. This work is being undertaken by Eric Gilbert, David Jurgens, Libby Hemphill, Eshwar Chandrasekharan, and Jane Im, and is further described in a paper written by Jane.

The Hill: Researchers unveil tool to track disinformation on social media
The launch of CSMR's Iffy Quotient, our first platform health metric, has been covered by The Hill. Paul Resnick's comments on the Iffy Quotient's context and utility are also quoted.

Paul Resnick selected for inaugural Executive Board of Wallace House
Wallace House at the University of Michigan has announced the formation of its inaugural Executive Board, which will include Paul Resnick.
Wallace House is committed to fostering excellence in journalism. It is the home to programs that recognize, sustain, and elevate the careers of journalists to address the challenges of journalism today, foster civic engagement, and uphold the role of a free press in a democratic society.
The Executive Board will provide strategic support for Wallace House's existing programs and guidance in developing new initiatives. It will advise the Knight-Wallace Fellowships for Journalists, the Livingston Awards, and the Wallace House Presents event series. Composed of acclaimed journalists and accomplished University of Michigan faculty, the board will play an active role in leading the organization through a period of growth and expanded vision.

Forecasting the presence and intensity of hostility on Instagram using linguistic and social features
Libby Hemphill is a co-author of a paper investigating a possible prediction method for online hostility on Instagram.
Abstract:
Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).
Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018)

University of Michigan experts discuss Facebook & Cambridge Analytica
University of Michigan experts weigh in on the Facebook and Cambridge Analytica controversy in the University's "Privacy, Reputation, and Identity in a Digital Age" Teach-Out. CSMR's Garlin Gilchrist and Florian Schaub join Sol Bermann, University Privacy Officer, in a video discussion about the relevant issues of, and takeaways from, the controversy.
Teach-Outs are free and open online learning events that bring together people from around the world to learn about and address important topics in society. They are opportunities to hear from diverse experts, discuss and connect with other participants, and consider actions one can take in one's own community.
The "Privacy, Reputation, and Identity in a Digital Age" Teach-Out considers questions of privacy, reputation, and identity using a case study approach and real-world scenarios across multiple topic areas.
The full video discussion of the Facebook and Cambridge Analytica controversy can be watched on YouTube or at Michigan Online.

Wall Street Journal: Trolls Take Over Some Official U.S. Twitter Accounts
Libby Hemphill was quoted in a Wall Street Journal article about trolls taking over some official U.S. government Twitter accounts.

"Genderfluid" or "Attack Helicopter": Responsible HCI Research Practice with Non-binary Gender Variation in Online Communities
Oliver Haimson is a co-author of a paper investigating how to collect, analyze, and interpret research participants' genders with sensitivity to non-binary genders. The authors offer suggestions for responsible HCI research practices with gender variation in online communities.
Abstract:
As non-binary genders become increasingly prevalent, researchers face decisions in how to collect, analyze and interpret research participants' genders. We present two case studies on surveys with thousands of respondents, of which hundreds reported gender as something other than simply women or men. First, Tumblr, a blogging platform, resulted in a rich set of gender identities with very few aggressive or resistive responses; the second case study, online Fantasy Football, yielded opposite proportions. By focusing on variation rather than dismissing non-binary responses as noise, we suggest that researchers can better capture gender in a way that 1) addresses gender variation without othering or erasing non-binary respondents; and 2) minimizes "trolls'" opportunity to use surveys as a mischief platform. The analyses of these two distinct case studies find significant gender differences in community dimensions of participation in both networked spaces as well as offering a model for inclusive mixed-methods HCI research.
CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, April 2018, Paper No.: 307, Pages 1–15
Access the paper here: https://doi.org/10.1145/3173574.3173881

Fragmented U.S. Privacy Rules Leave Large Data Loopholes for Facebook and Others
In a piece written for The Conversation—and reprinted in Scientific American—Florian Schaub comments on Facebook CEO Mark Zuckerberg’s Congressional testimony on ways to keep people’s online data private, and argues that Facebook has little reason to protect U.S. consumers the same way they are required to in other countries, due to more comprehensive privacy laws abroad.
Read Florian's article at The Conversation or Scientific American.

What your kids want to tell you about social media
Sarita Schoenebeck was quoted in a HealthDay article about the effects of parents' social media use.

Why privacy policies are falling short...
Florian Schaub shares some thoughts on the shortcomings of technology-related privacy policies in an "Insights" article written for the Trust, Transparency and Control (TTC) Labs initiative.
TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy. Initiated and supported by Facebook, and built on collaboration, the movement includes major global businesses, startups, civic organizations, and academic institutions.

Online Harassment and Content Moderation: The Case of Blocklists
Eric Gilbert is a co-author of a paper investigating online harassment through the practices and consequences of Twitter blocklists. Based on their research, Gilbert and his co-authors also propose strategies that social media platforms might adopt to protect people from harassment and still respect their freedom of speech.
Abstract:
Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this article, we present a rich description of Twitter blocklists – why they are needed, how they work, and their strengths and weaknesses in practice. Next, we use blocklists to interrogate online harassment – the forms it takes, as well as tactics used by harassers. Specifically, we interviewed both people who use blocklists to protect themselves, and people who are blocked by blocklists. We find that users are not adequately protected from harassment, and at the same time, many people feel that they are blocked unnecessarily and unfairly. Moreover, we find that not all users agree on what constitutes harassment. Based on our findings, we propose design interventions for social network sites with the aim of protecting people from harassment, while preserving freedom of speech.
ACM Transactions on Computer-Human Interaction, Vol. 25, Issue 2, March 2018, Article No.: 12

AP: Social media offers dark spaces for political campaigning
Garlin Gilchrist was quoted in an Associated Press article about the dark spaces for political campaigning found on social media.
Read the article at Business Insider (via the Associated Press)

MLive: New University of Michigan center to tackle "social media responsibility"
Garlin Gilchrist discussed the launch of the Center for Social Media Responsibility and the center's mission on MLive.

Classification and Its Consequences for Online Harassment: Design Insights from HeartMob
A paper about design insights from the HeartMob system, authored by Lindsay Blackwell, Jill Dimond (Sassafras Tech Collective), Sarita Schoenebeck, and Cliff Lampe, will be presented at the CSCW conference in October 2018. These insights may help platform companies designing classification systems for harmful content to take into account secondary impacts on harassment targets as well as primary impacts on the availability of the content itself.
Abstract:
Online harassment is a pervasive and pernicious problem. Techniques like natural language processing and machine learning are promising approaches for identifying abusive language, but they fail to address structural power imbalances perpetuated by automated labeling and classification. Similarly, platform policies and reporting tools are designed for a seemingly homogenous user base and do not account for individual experiences and systems of social oppression. This paper describes the design and evaluation of HeartMob, a platform built by and for people who are disproportionately affected by the most severe forms of online harassment. We conducted interviews with 18 HeartMob users, both targets and supporters, about their harassment experiences and their use of the site. We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior. We discuss these results through the lens of Bowker and Star’s classification theories and describe implications for labeling and classifying online abuse. Finally, informed by intersectional feminist theory, we argue that fully addressing online harassment requires the ongoing integration of vulnerable users’ needs into the design and moderation of online platforms.
Proceedings of the ACM on Human-Computer Interaction, Vol. 1, Issue CSCW, December 2017, Article No.: 24