Eric Yaverbaum
The hype around artificial intelligence shows no signs of calming down anytime soon. A recent McKinsey survey found that 72 percent of businesses adopted AI for at least one function this past year. There are also approximately 84,450 AI-focused businesses globally, with an average of around 6,827 new AI companies launching each year for the past 10 years.
The AI market is taking the business world by storm for a reason: AI offers companies a multitude of benefits. AI-powered analytics tools such as Tableau, Microsoft Power BI, Qlik and IBM Watson offer automated data analysis for organizations. So, instead of human employees spending hours of their day sifting through millions of data points, these AI tools can pull the same insights in seconds. Similarly, tools like ChatGPT, DALL-E, Jasper AI and Microsoft Copilot can instantaneously create images, text, code and other forms of content using generative AI.
Yet, as with any fast-growing industry, AI also has its drawbacks. Some companies jumped on the generative AI train too quickly and are now finding themselves in a reputational crisis, particularly when using these tools not to eliminate mindless tasks but to attempt to replace artists, writers and other creators.
For instance, when Coca-Cola released an AI-generated holiday ad on YouTube last November, the company didn’t receive a warm public reception. The ad’s comment section was filled with negative reviews and backlash, and the company drew widespread criticism for using generative AI in place of creative talent. A post on X from Alex Hirsch, creator of Disney’s television series Gravity Falls, brought attention to the issue. Hirsch wrote about the ad, “FUN FACT: (Coca-Cola) is ‘red’ because it’s made from the blood of out-of-work artists! #HolidayFactz.”
The online backlash against Coca-Cola makes it clear why people aren’t fans of AI-generated ads. First off, people recognize that AI-generated content can take jobs away from talented professionals. Second, most viewers can tell when they’re watching something AI-generated, and they don’t like it. Research from NielsenIQ found that when consumers were shown AI-generated ads, they not only could tell the ads were AI-generated but also perceived them as annoying, boring and confusing. Coca-Cola made the unfortunate mistake of not recognizing consumers’ distaste for this application of AI, and the company’s reputation is now paying the price.
Moreover, the use of generative AI for ads and other content raises still-unanswered IP questions and will likely have legal ramifications, given that these tools are trained on the creative works of others. This fall, a group of authors sued the AI company Anthropic after it used their work to train its Claude chatbot. Numerous similar lawsuits have been filed against ChatGPT creator OpenAI, including IP infringement claims from authors John Grisham, Jodi Picoult and George R. R. Martin, among others, as well as from The New York Times and the Chicago Tribune. Additionally, Sony, Universal and Warner are suing two AI music startups—Suno and Udio—claiming “copyright infringement involving unlicensed copying of sound recordings on a massive scale.” In light of all this, it would be wise to use these types of tools judiciously while these cases are being decided.
Generative AI can also pose serious cybersecurity threats to companies. Increasingly, criminals are using deepfake images, audio and video to scam employees. In Singapore, more than 100 public servants across more than 30 government agencies were sent compromising deepfake images in extortion emails. Likewise, in Hong Kong, an employee of a multinational company was scammed into paying HK$4 million (about $512,000 USD) after a deepfake video call with someone who appeared to be the company’s CFO. These deepfake scams aren’t just happening outside the U.S. The FBI issued a public warning last December describing an uptick in criminals using generative AI to commit financial fraud.
Business leaders need to take steps to protect their companies from AI scams, not only because protecting employees is the right thing to do, but also because failing to do so is a bad look. After all, if a company allows fraud to be committed against its employees, what does that say about how it protects its customers? A business hit by one of these attacks may find itself with both a financial and a reputational crisis on its hands.
Despite these dangers, a recent Riskonnect survey found that 80 percent of organizations don’t have a dedicated plan to address generative AI risks. More companies need to consider the potential fallout from an AI crisis, and fast, especially given how rapidly the AI industry is growing.
The first step is planning for all crisis scenarios before they occur. Ideally, part of that plan should address how to prevent a crisis from happening altogether. For instance, companies can conduct market research on public perceptions of AI ahead of releasing promotional materials so they can better incorporate generative AI into advertising, marketing and PR campaigns. Given the current public opinion on using AI in place of hiring creatives, as well as the IP concerns, maybe fully AI-generated ads aren’t the way to go. Instead, companies can consider how generative AI can assist talented workers during the creative process, such as generating idea suggestions or mockups.
To prevent a scam-related crisis, companies should also do the necessary research and take critical steps to protect their business. Company-wide training, for example, is an easy way to teach employees not to fall for these types of scams. It might require additional resources, but so does managing public perception once news of a crisis breaks.
Companies should also be prepared for when a crisis does hit. That means having potential statements for the press drafted in advance, solutions coordinated for every scenario and leadership ready to deliver a sincere apology if the situation calls for it. Business leaders should always have a crisis plan for any type of issue that could arise, but when dealing with AI, whether or not you have one could make or break your company. AI is a technology that’s evolving at high speed, and U.S. regulation remains murky. Without a crisis plan in place, businesses might find themselves in a situation they can’t keep up with.
I’m not saying businesses should fear AI or decline to embrace it. In fact, companies should think about ways to incorporate AI into their everyday business practices. For many companies, AI eliminates hours of work and gives talent a leg up during the creative process. However, ignoring the risks, skipping the research and failing to plan for AI could hurt business leaders in the long run. Responsible AI adoption means balancing speed with risk assessment: knowing what’s out there and how it could help or hurt your company. Simply put, businesses must be ready for whatever AI-generated situation—good or bad—may come.
***
Eric Yaverbaum is CEO of Ericho Communications. He's the author of the industry-standard bestseller Public Relations for Dummies and seven other books, including Leadership Secrets of the World's Most Successful CEOs. He's a regular TV pundit, and his expert commentary has been featured in Forbes, Entrepreneur, The Washington Post, The New York Times, HuffPost, CNBC and PR Week, among others.