Kathy Bloomgarden

We’re confronting an era fraught with high political tensions, and we’re poised to enter a contentious election cycle. At the same time, businesses are dealing with increasingly sophisticated cyberattacks, and waves of misinformation are compounded by the rapid expansion of generative AI.

Misinformation can impact all levels of an organization and needs to be top of mind for leadership to effectively counter business and reputational threats. Communications increasingly has a seat at the table in managing the fallout of misinformation. However, the alarming speed at which information floods our newsfeeds, coupled with the growing use of covert tactics to intentionally spread misleading content—for instance, deepfakes and bot accounts—calls for communicators to shift from a reactive approach to a proactive one.

With pre-emptive analytics and robust response strategies, communicators can identify the voices behind the threats, evaluate the best course of action and, ultimately, protect their brand from further damage. At Ruder Finn, our in-house analytics and tech incubator—RF TechLab—allows employees to explore new technologies and integrate innovative offerings for clients at the intersection of communications and technology.

Identifying misinformation through ‘faction analysis’

Misinformation spreads with more speed and scope than facts, especially online. According to research from the MIT Media Lab, falsehoods are “70 percent more likely to be retweeted on X (formerly Twitter) than the truth, and reach their first 1,500 people six times faster.” In today’s digital age, a significant portion of online misinformation is spread deliberately by factions pursuing political, social or ideological agendas. Research from Harvard found that people who knowingly and willingly shared misinformation online were “more likely to also report support for political violence, a desire to run for office and warm feelings toward extremists.”


The use of bots, or automated social media accounts, has also made it much easier for agenda-driven factions to perpetuate false narratives. At the end of 2022, we all watched the digital ecosystem erupt when a fake X account posing as Eli Lilly claimed, “We are excited to announce insulin is free now.” The post garnered over 1,500 retweets and 11,000 likes in mere hours, causing Lilly’s shares to drop more than 6 percent the day after the tweet.

What’s more, factions can better conceal their identities and motives with AI deepfakes while pushing out more convincing messages. We’ve seen the emergence of distorted—yet persuasive—deepfakes impersonating influential figures from Taylor Swift to Joe Biden.

The potential damage of AI-generated video and audio extends globally, as shown by a recent attempt to manipulate India’s general election. In April, fake videos showed two A-list Bollywood actors criticizing Prime Minister Narendra Modi and calling on people to vote for the opposing Congress party. The videos were viewed on social media more than half a million times in a week.

“Faction analysis” is critical to identify the root source of false claims before they breed polarized echo chambers and, ultimately, erode trust in mainstream media. By identifying who is involved in the factions that generate misinformation, communicators can develop a response strategy that addresses the actions of a specific group. The RF TechLab completes faction analysis in the preparation stage of a campaign. This approach identifies the various agenda-driven groups—which can include super fans, extremists and bot accounts—that could contribute to a viral misinformation moment. Understanding early on where a threat originates is paramount to preventing a bud of inaccuracy from blooming into a crisis, and to preparing an effective, strategic response if one emerges.
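To make the idea concrete, here is a minimal sketch of one way a faction analysis could work in code: build a graph of who amplifies whom around a narrative, cluster it into communities, and flag clusters containing accounts with bot-like posting rates. The account names, data shapes and threshold below are illustrative assumptions, not a description of RF TechLab’s actual tooling.

```python
# Faction-analysis sketch: cluster accounts by who amplifies whom, then flag
# clusters that contain accounts with bot-like posting rates.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each edge: (amplifier, original_poster), e.g. from retweets of a narrative.
edges = [
    ("acct_a", "acct_b"), ("acct_c", "acct_b"), ("acct_d", "acct_b"),
    ("acct_a", "acct_c"), ("acct_e", "acct_f"), ("acct_g", "acct_f"),
]
# Posts per day per account, from platform metadata (illustrative numbers).
posts_per_day = {"acct_a": 310, "acct_b": 12, "acct_c": 280, "acct_d": 9,
                 "acct_e": 15, "acct_f": 22, "acct_g": 400}

G = nx.Graph(edges)
factions = greedy_modularity_communities(G)  # densely connected clusters

BOT_RATE = 144  # >1 post per 10 minutes, sustained: a crude screening heuristic
for i, members in enumerate(factions):
    bot_like = [m for m in sorted(members) if posts_per_day.get(m, 0) > BOT_RATE]
    print(f"faction {i}: {sorted(members)}; bot-like accounts: {bot_like}")
```

In practice the edge list would come from a social listening platform and the posting-rate cutoff would be replaced by a proper bot-detection score; the point is that the community, not the individual post, is the unit of analysis.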

To respond or not to respond?

When developing a strategic response to a crisis, whether rooted in misinformation or in truth, further analysis is needed to determine whether to react and, if so, when. Although resonant in the moment, many viral instances are short-lived and quickly overtaken by the “next thing” in the media cycle. The lifespan of a crisis can vary drastically based on news cycles, amplification by online bots, public engagement and sharing patterns, among other factors.

Ten years ago, viral moments held a larger share of voice in the public eye; today, the sheer volume of information we consume daily means the cycle of relevancy moves correspondingly faster. This has pros and cons for communicators. While it considerably shortens the window for a timely reaction in cases where a response is needed, we must also watch for “conversation decay,” the natural decline a viral topic often undergoes after its initial peak. Communicators feel pressured to respond to a viral moment or crisis quickly, but sometimes a response reignites discourse that would have naturally dropped off as the public’s focus shifted to the next story.
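One way to put numbers on conversation decay is to fit an exponential decline to post-peak mention volume and estimate a half-life. The daily counts and the decision threshold below are illustrative assumptions, included only to show the shape of the analysis.

```python
# Conversation-decay sketch: fit mentions ~ A * exp(-k * t) after the peak
# and derive a half-life to inform the respond-or-wait decision.
import numpy as np

daily_mentions = np.array([40, 220, 950, 3100, 2400, 1300, 700, 390, 210])
peak_day = int(np.argmax(daily_mentions))
post_peak = daily_mentions[peak_day:]

# Linear fit on the log scale: -log(mentions) = k * t + c, so the slope is k.
t = np.arange(len(post_peak))
k, _ = np.polyfit(t, -np.log(post_peak), 1)
half_life_days = np.log(2) / k

# Heuristic: if the topic halves every couple of days on its own, a public
# response may only reignite a conversation that is already fading.
if half_life_days < 2.5:
    print(f"half-life {half_life_days:.1f} days: keep monitoring, consider silence")
else:
    print(f"half-life {half_life_days:.1f} days: persistent; prepare a response")
```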

We’ve all heard the saying, “The flame that burns twice as bright burns half as long.” The increasingly viral nature of modern media allows seemingly everyone to have a say, amplifying viral moments to levels that may appear too widespread for communicators to mitigate. However, as communicators, we have the power to leverage the rapid expansion of AI and new technologies to take control of corporate narratives and connect individuals with accurate, meaningful information. To combat the threat that misinformation poses to organizations at all levels, companies need to pivot their strategies to prioritize analyzing and monitoring misinformation at every stage: inception, build-up, peak and resolution. Employing these tactics, communicators can pinpoint the root cause of misinformation and decide if and when to respond.
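As a closing illustration, those four stages can be inferred from simple trends in mention volume. The labeler below is a rough sketch; the window size and thresholds are assumptions chosen for illustration, not a production monitoring rule.

```python
# Stage-labeling sketch for the lifecycle named above: inception, build-up,
# peak and resolution, inferred from recent vs. prior mention volume.
def label_stage(mentions: list[int], window: int = 3) -> str:
    if len(mentions) < 2 * window:
        return "inception"  # too little history to read a trend
    recent = sum(mentions[-window:]) / window
    prior = sum(mentions[-2 * window:-window]) / window
    if recent > prior * 1.5:
        return "build-up"  # volume is accelerating
    if recent >= max(mentions) * 0.8:
        return "peak"  # holding near the highest observed volume
    if recent < prior * 0.6:
        return "resolution"  # clear decline from the prior window
    return "build-up"  # plateau below peak: treat as ongoing build-up

print(label_stage([40, 220, 950, 3100, 2400, 1300, 700, 390, 210]))  # resolution
```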

***

Kathy Bloomgarden is CEO of Ruder Finn.