For the second time in the last year, Facebook has found itself in the reputational crosshairs, this time after it was revealed that data analysis and political consulting firm Cambridge Analytica improperly collected the private data of 50 million of the site’s users without their permission, once again casting the social media giant in the unwitting role of disinformation conduit. With pressure mounting on the site to win back users’ trust and assuage fears of future breaches, it’s clear this issue signals a decisive turning point not only for Facebook but for the broader conversation about data security in the U.S. as well.

The data firm, an offshoot of research company SCL Group, was formed in 2013 by former White House chief strategist Steve Bannon and billionaire conservative donor Robert Mercer. It has performed political consulting work around the world, including the Brexit campaign and last year’s controversial presidential election in Kenya. Bannon allegedly introduced Cambridge Analytica’s services to the Trump camp in 2015, after the firm had initially worked for Ted Cruz’s presidential bid the year prior. It was during this time that the company allegedly began testing slogans such as “drain the swamp” and “build the wall” with audiences, phrases that later became staples of the Trump narrative.

The shadowy London-based firm relied on “psychographic” profiling techniques that could allegedly identify and target individual voters on behalf of its clients. How the company acquired the data pool it needed to put those concepts into practice for the 2016 election, however, is the source of its subsequent controversy. Christopher Wylie, a former Cambridge Analytica research director turned whistleblower, detailed to the New York Times in March how the firm relied on a personality app developed by an outside researcher that collected Facebook users’ personal information. In early 2014, several hundred thousand of the site’s users were paid to take a personality test under the guise of a fake academic study; the app participants downloaded gathered information not only on them but on their contacts as well, surreptitiously harvesting a trove of more than 50 million Facebook accounts without users’ knowledge or consent. Cambridge Analytica then used this illicitly obtained dataset to build profiles of persuadable voters and pitch them Trump-related materials.

The ensuing Times story has left Facebook dealing with a rapidly intensifying crisis, with the sheer number of forces now aligned against the social media giant making it increasingly difficult to regain its footing and manage the conversation. The Federal Trade Commission this week announced that it has opened an investigation into Facebook’s handling of user data, and there’s a growing consensus on Capitol Hill that Facebook CEO Mark Zuckerberg should testify before the Senate Judiciary Committee. Facebook’s stock plummeted the day after news of the controversy broke, shedding $37 billion in market value in a 24-hour period. Some lawmakers will no doubt propose changing guidelines for how data is regulated on the social site, just as they did after it was discovered that Russia-backed outfits placed political ads across social media platforms in an effort to spread disinformation and foment dissent in the months leading up to the 2016 presidential election. But arguably the greatest blowback Facebook has received has come from the public, with the hashtag #DeleteFacebook trending on social media channels as scores of the site’s former loyalists now advocate abandoning the platform altogether over its failure to keep users’ data safe. Trust in the site has faltered among the media, the government, and the legions of users and marketers who rely on it. The underlying message is clear: Facebook has lost control of its platform.

A deficit of trust

The Cambridge Analytica controversy has kicked off a debate that sits at the crossroads of modern political campaign tactics, digital marketing technology and questions about social privacy and how Internet platforms treat user data. It deserves to be said that while Facebook’s crisis has effectively been treated as a data breach, what it really constitutes is a breach of trust. Data is digital marketing’s linchpin. Facebook, for all intents and purposes, is in the data-sharing business. APIs are the platform’s foundation; the site collects data on its users for the sake of selling ads. The amount of information Facebook possesses on its users is vast, and the unusually high quality of that data, coupled with the length of time users spend on the site and the sheer reach it has — Facebook is one of the most heavily trafficked sites in the world, with more than two billion monthly active users, according to a Q4 2017 Statista forecast — is a reflection of its position in today’s digital ad market. It’s little wonder such an ecosystem would be a prime target for data poaching.

Michael Priem

“Facebook is being vilified for something they weren’t first-party to,” said Michael Priem, founder and CEO of Minneapolis-based advertising firm Modern Impact, which specializes in omni-channel marketing, traditional advertising and machine learning for programmatic advertising. “The technology Cambridge Analytica used is utilized heavily across the web. The idea of audience segmentation is nothing novel or new. Other credible companies use it all the time. Because it happened on Facebook, however, it causes us to question what’s private and what’s not when I’m logged in to what’s perceived to be safely guarded information. The reality is, this could’ve happened to any publisher, and part of the reason Facebook has been put in the limelight is because of how pervasive it is and how well they’ve done with validating their user data.”

Consumers will tell you they value their privacy online, but our behaviors regularly belie those claims. We routinely hand over credit card and banking information to make purchases without a second thought, readily publish our political opinions, employment history, relatives’ names and shopping habits, and advocate for beloved brands, yet we appear incredulous when we discover that the data trail we’ve left behind is precisely the reason we’re allowed to roam across these channels for free. It appears we couldn’t do without these social tools today if we wanted to: Facebook is now Americans’ number-one source for daily news, beating out Fox News, ABC, CNN, CBS and NBC, according to a recent survey by San Francisco-based tech PR firm Bospar. A deeply laid subtext to the Cambridge Analytica story is that these technologies themselves may be less the problem than the fact that they’re influencing and changing our behaviors faster than we, or our lawmakers, can comprehend.

“We use Facebook as a platform for social sharing. As consumers, we live in a world where we expect a lot of things for free. When we have those expectations, we push publishers to create a different economic currency, which has become the advertising currency that companies like Cambridge Analytica use to deploy technology to create audience segments,” Priem said. “The bigger question centers on the safe havens of our private and public lives. We feel we have this sense of privacy that’s basically implied in certain areas of the digital world, and don’t like the fact that our data can be used to send targeted messages, when it’s no different than being inside a store and being captured on video. In reality, there are a number of privacy protections online, but we don’t have a very good idea of what’s being collected and what’s being used. My fear is that we, as consumers, have a deep misunderstanding of what marketers are doing and this causes a deep deficit of trust, and we’re quick to vilify the technology when the reality is there’s not enough conversation happening around the risk consumers face when we’re in the public domain.”

Silence, mixed messaging become the story

It doesn’t help that Facebook’s initial response to the backlash arguably made the crisis worse. As it turns out, Facebook knew about the Cambridge Analytica leak as far back as 2015 and demanded the firm delete the trove of data it had obtained — it appears the company didn’t comply with those orders — but the site never disclosed that action to the public. The year prior, Facebook had also begun rolling back the access allowances it previously granted third-party app publishers to users’ content, but most of us didn’t know about that until now either. Then there was the mixed messaging: Facebook initially downplayed Cambridge Analytica’s breach, then backpedaled in light of the pending Times story, referring to the firm’s actions as “a scam” and “a fraud” in a perfunctory press statement. Facebook has since announced that it has banned Cambridge Analytica from its platform and has demanded an internal audit of the company’s data, but it all comes as too little, too late.

“It’s been a multiphasic train wreck. Each day it’s been another disaster,” said Curtis Sparrer, co-founder and principal of Bospar. “In a crisis like this we recommend that clients come forward and scoop the journalists. I rarely think press conferences are a good idea, but here a rip-the-Band-Aid-off approach would’ve been a good thing. An excess of candor is needed. They’ve hurt themselves in a variety of ways, they have not responded adequately, and there’s been a lot of reporting about Facebook’s heavy-handed approach to this that has turned people’s stomachs.”

Surprisingly absent from this response was Zuckerberg himself, who finally ended his silence during a March 21 CNN exclusive, four days after the Times story broke. Zuckerberg, who isn’t known for his press interviews, said he’d be “happy” to testify before Congress “if it’s the right thing to do.” In a blog post the following morning, the Facebook chief outlined new steps the company was taking to protect its users’ data, mandating tighter scrutiny of app developers and setting new restrictions on the data developers are allowed to access. This is all welcome information, if only because, until now, it had remained unknown what Facebook and its founder had been doing behind the scenes since the company was made aware of these problems three years ago.

Curtis Sparrer

“That’s going to be something we’re going to have to get some clarity on,” Sparrer said. “Anytime we’re left wondering why someone’s staying silent, people start taking guesses that there might be something worse that he’s hiding, and your absence becomes the story. You need to explain how you’re going to solve the problem and show how [Zuckerberg] will handle this as a leader, because the question now is whether he can do it.”

The New York Times report puts Facebook in crisis mode for the second time in the last year, following the role the site played during the 2016 election in circulating fake content over its news feed and selling political ads to Russia-backed companies. And changes to Facebook’s algorithms in recent months, which now emphasize content posted by users’ contacts, have hampered its value as a publishing platform among third-party sites that depend on it for routing organic traffic to their published content. Reports are now surfacing of tensions inside the social media giant as a result, with chief security officer Alex Stamos, a former Yahoo exec, announcing his forthcoming departure. One can’t help but wonder how permanent the fallout will be, and whether the company’s golden days now lie somewhere in the recent past.

One might also wonder if this crisis could influence the other victim in this story — Facebook’s users — to begin considering the role our choices play in the digital realm as well.