The growing controversy surrounding fake news, an epidemic of web outlets deliberately peddling misinformation and wild conspiracy theories through blogs and social media channels, has now been blamed for everything from the outcome of the presidential election to a December attack on a Washington, D.C. pizzeria erroneously linked to a child-abuse ring.
While U.S. consumers, lawmakers and tech leaders seem to agree that the phenomenon is a problem, a debate rages on over what should be done about it and who should be responsible for policing this content.
A recent survey from digital politics and policy outlet The Morning Consult underscores the prevalence of specious content on the web today. About half (49 percent) of respondents said they are exposed to fake news through Facebook or Twitter on a daily basis, and more than two-thirds (69 percent) said they had read a news story that they later discovered was phony.
When it comes to who bears responsibility for preventing the spread of phony news content, however, Americans appear divided. The Morning Consult survey reported that 67 percent of respondents lay the responsibility for policing fake news on search engines such as Google, but almost the same number (66 percent) believe the person reading the news bears that duty, followed by social media sites like Facebook and Twitter (63 percent) and the federal government (56 percent).
Regarding what party bears the most responsibility for ensuring this content doesn’t spread, 24 percent said the obligation falls on the shoulders of the person reading the news, followed by social media sites (17 percent), the federal government (14 percent), web service providers (10 percent) and search engines (9 percent).
The Morning Consult survey, which polled more than 1,000 adults online in early December, also suggested political correlations in respondents' answers: those identifying as Republican were more likely to cite the person reading fake news as responsible for ensuring others aren't exposed to it (25 percent, versus 20 percent of Democrats); Democrats, by contrast, were more likely to place the responsibility on social media companies (21 percent, versus 17 percent of Republicans).
Interestingly enough, the survey also found a clear majority of Americans are open to the prospect of tech companies censoring fake news, with 71 percent claiming it would be appropriate for Google to remove this content, 71 percent stating it would be permissible for Facebook and Twitter to do so, and 67 percent saying web service providers should outright ban its circulation.
Fake news becomes brand pariah
A rise in spurious news outlets is also stirring panic among brands, which are now looking for ways to prevent their ads from appearing on sites offering deceptive or misrepresentative information.
A December 8 Wall Street Journal report detailed some of the many well-known brands whose ads are now appearing on, and unknowingly helping fund, these fringe sites, in an ad-buying climate where marketing content appears on sites not because companies placed it there, but based solely on consumers' browsing history or demographics.
Given the recent popularity of programmatic media buying, where the placement of display ads is entirely automated, it can be difficult for advertisers to know where their ads will appear, or if they're being featured on sites alongside phony content. According to September findings by digital market research company eMarketer, U.S. programmatic display ad spending could top $25 billion by the end of this year.
While the public at present seems unable to agree on who should be responsible for addressing the fake news phenomenon, in the private sector the consensus appears clear: leaders in the tech industry should be doing something to mitigate how this content is shared, at least in the search engines and social networks that direct much of the traffic to these bogus sites.
Google in November announced it would take steps that would prevent sites offering false or deceptive content from generating revenue through the company’s ad-selling services.
Facebook chairman, CEO and co-founder Mark Zuckerberg, who previously trivialized the role that the flood of fake news content coursing through his network could have played in influencing the election, seems to have bowed to pressure. Facebook in November banned phony sites from using the company's advertising network to generate revenue, and the same month announced it is mulling over ways to limit the amount of false information shared on the social media site, avenues that may include third-party verification services and new automated detection tools.
A May Pew Research Center survey found that 62 percent of U.S. adults get their news from social media, and that two-thirds of Facebook users (67 percent) use the site as a news source, a group amounting to about 44 percent of the general U.S. population.