Jon Gingerich

Welcome to another installment of How Low Can Facebook’s Reputation Go?

Once again, the world’s largest social media network has found itself mired in controversy over its role in amplifying the spread of misinformation and divisive content online, plunging the company into arguably its biggest public relations crisis to date. The difference is that, this time, Facebook’s troubles could signal a turning point in how digital media companies monitor the user-generated content that flows over their platforms, and may also usher in a new era of government-mandated regulation.

Facebook’s latest scandal comes after a former employee turned whistleblower leaked a trove of internal documents to the Securities and Exchange Commission and The Wall Street Journal. The documents revealed that company employees had repeatedly raised internal concerns in the months leading up to the 2020 presidential election about the constant flow of misinformation, hate speech, conspiracy theories and polarizing content on the site, and that the company summarily ignored those concerns, presumably out of fear of hampering site engagement and scroll time, and hence, profits.

The leak also included internal documents detailing how the social media giant had allegedly turned a blind eye to its own research showing that its photo-sharing app Instagram is harmful to teen girls’ mental health.


At this juncture, the parallels between Facebook’s troubles and the PR crisis that sent big tobacco’s reputation up in smoke can’t be missed. Reminiscent of when Philip Morris changed its name to Altria in an effort to adjust its public image after a series of high-profile settlements, Facebook announced days after news of the scandal broke that it had rebranded its corporate parent company as Meta. Facebook CEO Mark Zuckerberg insisted the new brand identity wasn’t an attempt to divert attention from the company’s ongoing woes, but instead reflected a months-in-the-making growth strategy as a “metaverse company,” complete with a suite of virtual reality products the company views as the next online frontier. Still, the cynic in us might suspect the familiar application of window dressing here: a company attempting to start afresh under a new name in the hopes that we’ll forget its missteps. After all, we’ve seen this play before.

Admittedly, Facebook is due for a rebirth. This is hardly the first time the platform has faced a crisis. Who could forget the Cambridge Analytica scandal? The shoddy digital architecture that allowed Russian trolls to circulate fake news in an attempt to sway the 2016 presidential election? The numerous data breaches and site outages? The massive advertiser boycott in the wake of the George Floyd protests? Am I forgetting anything? It’s hard to keep track.

Ever since the 2016 election, we’ve grown increasingly wary of how social media platforms operate and what they do with our data. Facebook’s latest crisis is the clearest indication yet that online social ecosystems are due for an overhaul: the media companies that own them can’t police themselves and aren’t doing enough to curb the fake and violence-inciting content that runs rampant across their networks. Social media in 2021 amounts to an informational garbage dump. People have become radicalized, polarized and increasingly filled with bad ideas. It’s no wonder Facebook is now moving beyond the social media business. What brand wants to be known as the forum that birthed extremist movements like the ones responsible for the Jan. 6 Capitol riot?

For years, experts have said that media literacy efforts are key to addressing the issue, but I’m not so sure. Most Americans simply want their biases confirmed online, and many aren’t interested in understanding when they’re being lied to, or don’t possess the critical thinking skills required to do so. This is precisely why social media companies find themselves in their current quandary: divisive content reliably brings in more clicks, views and shares, so there’s an incentive to let it run amok. Facebook gives algorithmic preference to content that elicits negative reactions, such as the anger emoji. As with tobacco, we’re beginning to recognize that with social media, the product is the problem.

My guess is that lawmakers will increase their scrutiny of digital platforms in the not-too-distant future. A series of congressional hearings quickly followed Facebook’s October scandal, and more are on the way. We can expect increased calls for reform and a slew of bills requiring everything from protections for children to increased transparency measures. Renewed calls for breaking up Big Tech will no doubt follow. A recent Greentarget report found that more than a third of working journalists (34 percent) believe it’s likely that antitrust laws specific to fake news will be enforced against Big Tech within the next three years.

The more realistic change we’ll see will come in the form of self-regulatory efforts from social media companies, be it increased content moderation, stronger privacy protocols, personal data protections or more stringent community standards. Either way, we can expect the concept of the social network to evolve quickly within the next few years. That’s not necessarily a bad thing.