Filomena Fanelli (L) & Kayla Hannemann co-authored this article.

If you work in media relations in any capacity, you surely know the most common dos and don’ts: monitor negative reviews; ensure that complaints posted on social media don’t gain traction; review your statements to guarantee your messaging isn’t backlash-worthy.

But, as the use of artificial intelligence continues to grow, there’s a far more dangerous – and common – way to damage reputations: using LLMs like ChatGPT to respond to media inquiries. Clients striving to be seen as subject-matter experts can inadvertently foil their own efforts through improper use of AI.

This scenario becomes especially risky for agencies that rely on clients to provide commentary. When these experts lean on AI to create responses, they risk damaging not only their reputation with the media but also your reputation as a PR professional.

Mitigating Specific Scenarios

Let’s use a simple example: You share a list of a reporter’s questions with your client. When your client shares their responses, your gut immediately tells you that something is off. You want to be sure, so you use an AI scanner, which confirms your worst nightmare: your client used AI to craft their reply.

So, how do you encourage a PR-safe approach to AI and prevent a similar scenario from happening again?

Remind your clients that when crafting a response to a media inquiry, they should turn to AI for help with research or ideation, not for a finished product.

Sometimes, this is as simple as reframing their role as an expert.

When a journalist asks someone for a comment or insight, they are looking for a unique perspective. Even if your client is a leader in their field, using AI to write responses to journalists’ questions undercuts any authority that they may have. And yes, this includes asking AI to condense thoughts (without further editing) or to make their writing sound more cohesive.

Referencing specific platforms, like Qwoted, that have instituted strict guidelines around AI usage can also be helpful. To quote Qwoted directly: “We have a zero-tolerance policy towards misinformation and inauthenticity...Users providing false information will be banned from the platform.”

The site has also integrated AI detectors into its interface, so incoming pitches can be quickly and easily scanned for AI-generated versus human-generated content. This is a tangible proof point that you can – and should – be communicating to your clients.

Hard Truths, Delivered Kindly

To avoid any awkwardness, frame this discussion as something that is happening across the agency or company. Create a short list or guide of items to keep in mind, such as disclosing the use of AI, along with a general disclaimer about the potential reputational harm of relying on it.

Your clients should know the hard truth: media personnel can tell when someone is using AI.

As much as the internet jokes about em dashes and ChatGPT’s favorite emojis, the reality is that LLMs have a recognizable writing style. From word choice to structure and punctuation, there is a long list of AI-generated giveaways.

And the key to getting reporters and the public to notice – to standing out in the crowd – is to share a sharp, differentiated and compelling point of view, in a brand voice that can be felt and heard. Your clients deserve nothing less.

***

Filomena Fanelli is Founder and CEO of Impact PR & Communications. Kayla Hannemann is Senior Account Executive.