
Study: Majority of public supports action to combat spread of harmful misinformation on social media

The public supports robust measures to control the dissemination of harmful misinformation through social media platforms, a new study says. The research shows a disconnect between public sentiment and the views of figures such as Elon Musk.


Current Science Daily Report
Jun 21, 2023

The public supports robust measures to control the dissemination of harmful misinformation through social media platforms, a new study says. The research shows a disconnect between public sentiment and the views of figures such as Elon Musk, who advocate for unrestricted freedom of speech, according to a University of Bristol press release describing the study.

"Content moderation of online speech is a moral minefield, particularly when freedom of expression and preventing harm caused by misinformation are conflicting," the University of Bristol press release said. "By understanding more about how people think these moral dilemmas should be addressed, the research aims to help shape new rules for content moderation which the public will regard as legitimate." 

“So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech,” said first author Dr. Anastasia Kozyreva, a research scientist at the Center for Adaptive Rationality of the Max Planck Institute for Human Development in Berlin, Germany. “Moreover, discussions about online content moderation often run hot, but are largely uninformed by empirical evidence.”

"Our results show that so-called free-speech absolutists, such as Elon Musk, are out of touch with public opinion,” co-author Professor Stephan Lewandowsky, chairman of Cognitive Psychology at the University of Bristol, said in the news release. The public supports the idea of stopping the spread of false information, particularly when it is hazardous and often shared, and researchers hope to formulate proposed regulations for content oversight, the release said. 

In one part of the study, 2,500 people across the U.S. were shown hypothetical social media posts containing false information and asked to choose between two options: deleting the post or suspending the account behind it. The false information covered four topics: denial of the 2020 presidential election result, Holocaust denial, anti-vaccination claims and climate-change denial. The researchers noted in the news release that most participants chose to take at least some action to stop the dissemination of fake news. When asked how to deal with a questionable post, 66% favored deleting it in all instances.

When asked how to deal with the account behind the post, 78% favored some form of intervention: issuing a warning, a temporary suspension or an indefinite suspension. Issuing a warning was the most popular option, chosen by 31% to 37% of respondents, depending on the topic.

The research also found that participants did not view all categories of misinformation equally. Holocaust denial drew the strongest response, with 71% of participants favoring action, followed by rejection of Joe Biden as the winner of the 2020 presidential election at 69% and anti-vaccination disinformation at 66%, according to the university news release. Climate-change denial drew the fewest rebukes, at 58%. The study also found partisan differences, with Democrats more likely than Republicans to favor removing posts and punishing accounts.

“People by and large recognize that there should be limits to free speech, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial,” Lewandowsky said in the news release. 

The university said the study helps show what moves people to want action taken against disinformation. The harmfulness of the false information and whether it was a repeat offense were the strongest drivers of decisions in favor of intervention, the study found.

The release said the study found that characteristics of the account behind a post, including the poster's identity, number of followers and political leanings, had little effect on participants' decisions.

Participants who said they prioritized free speech over halting false information were less likely to delete posts or to penalize accounts with many followers. Conversely, the news release said, participants who prioritized stopping the spread of false information were more likely to penalize accounts with many followers.

In addition to Lewandowsky and Kozyreva, the researchers included Ralph Hertwig, Philipp Lorenz-Spreen, and Stefan M. Herzog from the Max Planck Institute for Human Development; Mark Leiser from the Vrije Universiteit Amsterdam in the Netherlands; and Jason Reifler from the University of Exeter.

Reifler, a professor of political science, said the researchers expect their work to inform the development of oversight rules for moderating fake news, the university’s release added.

“People's preferences are not the only benchmark for making important trade-offs on content moderation, but ignoring the fact that there is support for taking action against misinformation and the accounts that publish it risks undermining the public's trust in content moderation policies and regulations,” he said in the news release. 

Hertwig, director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development, said in the news release that to resolve conflicts between combating fake news and protecting free speech, “we need to know how people handle various forms of moral dilemmas when making decisions about content moderation."

Leiser, an assistant professor of internet law, said in the news release that effective oversight of platforms is important.

“Effective and meaningful platform regulation requires not only clear and transparent rules for content moderation, but general acceptance of the rules as legitimate constraints on the fundamental right to free expression,” he said, according to the university’s news release. “This important research goes a long way to informing policymakers about what is and, more importantly, what is not acceptable user-generated content." 

The paper, titled "Resolving content moderation dilemmas between free speech and harmful misinformation," was published in the Proceedings of the National Academy of Sciences, the news release said.

