When it comes to social media, most users have chosen to take the good with the bad, but the owners of those platforms no longer have that option. As ubiquitous as social media has become, platform owners are increasingly being asked to take responsibility for the content—and the influence of that content—posted by their users.
The scrutiny began largely with reflection on social media's influence during U.S. presidential elections, and that concern spilled over into political contests and social movements in other countries, leading in some cases to government restrictions on how social media may be used.
While the U.S. doesn't yet have hard-and-fast censorship rules, many groups claim to have been unfairly silenced by social media giants such as Facebook and Twitter. But the calls for more interactive and proactive ownership haven't ended there. For some time now, many have urged social media leaders like Facebook to do more to combat cyberbullying and abusive content.
Recently, Facebook put out an announcement, picked up by several media companies, that the platform has “removed 2.5 million posts related to suicide or self-injury” in the third quarter of 2019 alone. The announcement is part of Facebook’s ongoing public relations campaign to show both users and critics that the platform is being proactive in combating the negative aspects that come with hundreds of millions of people interacting—often with relative strangers—online.
In addition to the announcement related to content about suicide, Facebook said the platform removed more than 4.4 million posts related to illegal drug sales, and that the company was also working to improve the overall atmosphere on Instagram, which is owned by Facebook.
Reports like this one are part of Facebook's continuing effort to shift public perception of how well the platform keeps interactions on the site relatively positive and free of abuse, threats, and criminal activity. The campaign comes even as critics ramp up efforts to counteract what they see as bad actors trying to use the platform as a megaphone for misinformation.
That may not be enough to silence critics, who continue to accuse Facebook and Twitter of, in their words, "not doing enough" to combat misinformation and disinformation on their platforms. For Facebook, this is an ongoing uphill battle that has as much to do with public perception as with the nuts and bolts of programming.