The majority of people support strong action to restrain the spread of harmful misinformation on social media, according to a news article published by the University of Exeter on February 9.
Research suggests that public figures such as tech mogul Elon Musk, a self-proclaimed "free-speech absolutist", are in the minority when it comes to how the public resolves moral dilemmas over misinformation on social media. The findings reveal that people largely support intervention to control the spread of misinformation, especially when it is harmful and repeatedly shared.
The study, conducted by researchers from the University of Exeter, the Max Planck Institute for Human Development, the University of Bristol and Vrije Universiteit Amsterdam, investigated public attitudes toward content moderation and the spread of misinformation on social media platforms.
“So far, social media platforms have been the ones making key decisions on moderating misinformation, which effectively puts them in the position of arbiters of free speech," said author Dr. Anastasia Kozyreva. "Moreover, discussions about online content moderation often run hot, but are largely uninformed by empirical evidence."
The study involved over 2,500 participants in the United States who took part in a survey experiment where they were shown information about hypothetical social media posts containing misinformation. They were then asked to make two choices: whether to remove the posts and whether to take punitive action against the accounts that posted them. Post topics included misinformation about the 2020 U.S. Presidential election, anti-vaccination sentiments, Holocaust denial and climate change denial. Respondents were shown key information about the user and their post, as well as the consequences of the misinformation.
The majority of respondents chose to take some action to prevent the spread of falsehoods. When asked how to deal with the post in question, 66% supported deleting it across all scenarios. When asked how to deal with the account behind the post, 78% said they would intervene, with actions ranging from issuing a warning to temporary or indefinite account suspension.
When asked to choose between doing nothing, issuing a warning, or imposing a temporary or indefinite suspension, 31–37% of respondents chose to issue a warning. Of the four categories of misinformation, Holocaust denial was acted on by 71% of respondents, election denial by 69%, anti-vaccination content by 66% and climate change denial by 58%. Across all categories, Republicans were less likely than Democrats to remove posts and punish accounts.
“Our results show that so-called free-speech absolutists, such as Elon Musk, are out of touch with public opinion," said study co-author Stephan Lewandowsky. "People, by and large, recognize that there should be limits to free speech, and that content removal or even deplatforming can be appropriate in extreme circumstances, such as Holocaust denial.”
The study also revealed which factors affect people's decisions regarding content moderation online: the topic of a post, the severity of the consequences of the misinformation, and whether it was a repeat offense had the strongest impact on decisions to remove posts and suspend accounts. The identity of the account holder and their political leanings had a negligible effect on respondents' decisions.
The study was performed by Anastasia Kozyreva, Ralph Hertwig, Philipp Lorenz-Spreen and Stefan M. Herzog from the Max Planck Institute for Human Development; Mark Leiser from Vrije Universiteit Amsterdam in the Netherlands; Stephan Lewandowsky from the University of Bristol; and Jason Reifler from the University of Exeter, in the UK.