Curbing the spread of false and misleading information has become something of a clarion call in recent years, as Facebook and other social media companies face mounting pressure from users, government officials and journalism advocacy groups to do something about the proliferation of “fake news” online.

Facebook in late 2016 began tagging some false news stories as “disputed” by third-party fact-checking organizations, only to halt the practice in late 2017 in favor of simply showing related articles intended to give readers more context. Now, new research suggests that flagging false news stories with a warning label could indeed be one of the most effective ways social media networks can decrease the spread of misinformation.

The study, authored by Paul Mena, a journalism professor at the University of California, Santa Barbara, examined social media users’ news-sharing behavior, specifically whether a warning label might influence users’ intentions to share misleading content.

Mena conducted an online experiment to test whether there was a connection between the flagging of false Facebook posts and the perceived credibility of those posts. Participants were shown Facebook posts containing fabricated headlines and news content, some accompanied by a warning flag and others without; the headlines for these simulated news items were designed to be ideologically attractive to both Democrats and Republicans.

Respondents were then asked to rate how believable, accurate and authentic each fabricated Facebook post was, and how likely they would be to share that content on their own Facebook timelines.

Overall, 23 percent of respondents said they were likely to share one of the simulated posts crafted for the study, while 63.5 percent said they believed others would be likely to share it.

However, the study also found that respondents who saw one of the fabricated posts with a fact-checking warning label reported lower intentions to share that deceptive content on their Facebook timelines than respondents who did not see a warning label. Flagging false news, in other words, could reduce people’s willingness to share it.

A majority of respondents also said that a warning label would diminish the likelihood that others would share items perceived to be false.

“This study found that the flagging of false news had a significant effect on reducing false news sharing intentions,” Mena concluded. “The study showed that respondents who saw a fabricated Facebook post with a warning label had lower intentions to share that content than those who did not see the flag.”

Finally, the study discovered that the flags’ effect on user sharing intentions remained significant after controlling for participants’ political leanings, as the appearance of a warning flag decreased sharing intentions among Democrats, Republicans and Independents alike. However, the study found that Democrats and Independents were more likely than Republicans to share false news posts both when the warning label was absent and when it was present.

Mena’s research polled more than 500 participants from across the political spectrum; respondents were recruited through Amazon’s crowdsourcing service Mechanical Turk. His findings were published in the July edition of the peer-reviewed academic journal Policy & Internet.