University of Exeter

Research Finds Public Broadly Favour Taking Action to Stop Spread of Harmful Misinformation Online

The majority of people support robust action being taken to control the spread of harmful misinformation via social media, a major new study reveals.


Kerra Maddern
Feb 22, 2023

The research suggests figures such as tech mogul Elon Musk, a self-proclaimed “free-speech absolutist”, are out of step with how the public resolves moral dilemmas regarding misinformation on social media. The findings show people largely support intervention to control the spread of misinformation, especially if it is harmful and shared repeatedly.

Content moderation of online speech is a moral minefield, particularly when freedom of expression conflicts with preventing the harm caused by misinformation. By understanding more about how people think these moral dilemmas should be resolved, the research aims to help shape new rules for content moderation that the public will regard as legitimate.

First author Dr Anastasia Kozyreva, Adaptive Rationality Research Scientist at the Max Planck Institute for Human Development in Berlin, Germany, said: “So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech. Moreover, discussions about online content moderation often run hot, but are largely uninformed by empirical evidence.”

As part of the study, more than 2,500 people in the USA took part in a survey experiment in which respondents were shown information about hypothetical social media posts containing misinformation. They were asked to make two choices: whether to remove the post and whether to take punitive action against the account that shared it. Topics of the posts included misinformation about the last US Presidential election, anti-vaccination content, Holocaust denial, and climate change denial. Respondents were shown key information about the user and their post, as well as the consequences of the misinformation.

The majority chose to take some action to prevent the spread of falsehoods. When asked how to deal with the questionable post, two-thirds (66%) expressed support for deleting it across all scenarios. When asked how to deal with the account behind it, nearly four in five (78%) would intervene, with actions ranging from issuing a warning to temporary or indefinite account suspension.

When given the choice between doing nothing, issuing a warning, temporary suspension, and indefinite suspension, most respondents preferred to issue a warning (between 31 and 37 per cent across all four topics).

Not all misinformation types were penalized equally: climate change denial was acted on the least (58 per cent), whereas Holocaust denial (71 per cent) and denial that Joe Biden won the US Presidential election (69 per cent) were acted on more often, closely followed by anti-vaccination content (66 per cent). Across these four specific issues, Republicans were less likely than Democrats to remove posts and punish accounts.

Co-author Professor Stephan Lewandowsky, Chair in Cognitive Psychology at the University of Bristol in the UK, said: “Our results show that so-called free-speech absolutists, such as Elon Musk, are out of touch with public opinion. People by and large recognize that there should be limits to free speech, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial.”

The study helps demonstrate which factors affect people’s decisions regarding content moderation online. In addition to the topic, the severity of the consequences of the misinformation and whether it was a repeat offence had the strongest impact on decisions to remove posts and suspend accounts. Characteristics of the account itself, such as the person behind it and their partisanship, had little to no effect on respondents’ decisions.

While the number of followers of an account had little effect overall, among those who said they prioritised free speech (versus stopping misinformation), the more followers an account had, the fewer people wanted to delete the post or sanction the account. The opposite was true for those who prioritised stopping misinformation: accounts with more followers were more likely to be sanctioned, and there was greater support for reviewing their posts.

The study was conducted by Anastasia Kozyreva, Ralph Hertwig, Philipp Lorenz-Spreen and Stefan M. Herzog from the Max Planck Institute for Human Development, Mark Leiser from the Vrije Universiteit Amsterdam in the Netherlands, Stephan Lewandowsky from the University of Bristol, and Jason Reifler from the University of Exeter in the UK.

Professor Reifler said: “We hope our research can inform the design of transparent rules for content moderation of harmful misinformation. People’s preferences are not the only benchmark for making important trade-offs on content moderation, but ignoring the fact that there is support for taking action against misinformation and the accounts that publish it risks undermining the public’s trust in content moderation policies and regulations.”

Professor Hertwig, Director at the Center for Adaptive Rationality of the Max Planck Institute for Human Development, said: “To deal adequately with conflicts between free speech and harmful misinformation, we need to know how people handle various forms of moral dilemmas when making decisions about content moderation.”

Dr Leiser, Assistant Professor in Internet Law, said: “Effective and meaningful platform regulation requires not only clear and transparent rules for content moderation, but general acceptance of the rules as legitimate constraints on the fundamental right to free expression. This important research goes a long way to informing policy makers about what is and, more importantly, what is not acceptable user-generated content.”

Publication: Anastasia Kozyreva, et al., Resolving content moderation dilemmas between free speech and harmful misinformation, PNAS (2023). DOI: 10.1073/pnas.2210666120

Original Story Source: University of Exeter

