Jon Gingerich

For nearly three years, I’ve listened to PR and marketing people blather on about how wonderful generative artificial intelligence is. I’ve edited article after article in these pages regarding how AI is reshaping the communications landscape, how it’s transforming content, increasing efficiencies, tearing down silos and basically doing everything short of ushering us into a new era of spiritual enlightenment.

I’m not going to argue that AI can create content quickly. I won’t deny it’s probably great for research, measurement, streamlining processes and a litany of other boring functions that, frankly, few of us will miss. I’m also willing to bet it saves people a lot of time, which is also to say, money. But generative AI creates garbage writing. And the people championing it don’t know what they’re talking about.

Have you seen the AI slop on the Internet that passes for “writing” these days? Have you noticed the emotionally comatose descriptions? The wearingly familiar clichés? The mannequin charm? The utter lack of voice, of anything resembling a personality? Everything sounds the same. Everything sounds like marketing copy. It’s soulless. It’s robotic. It’s lazy. And I haven’t even mentioned the factual inaccuracies, the misinformation and bot-generated deepfakes. It’s objectively bad. It’s poisoning the Internet. And you think this is the future of content?

Sure, if you’re writing a high school paper, ChatGPT can produce a structured—if formulaic and juvenile—argument quickly. But if you’re trying to say something interesting and inspired and original that makes an impact—if you’re trying to do what writing is supposed to do, in other words—it fails every time.

The world is constantly reminding us what happens when we try to replace human creative talent with AI. The Chicago Sun-Times and Philadelphia Inquirer recently faced a crisis after a third-party freelancer was hired to create a “summer reading” supplement for those papers and used AI to generate a recommended list of novels that don’t exist. A web sleuth recently discovered that several self-published “authors”—I use the term loosely—had accidentally left ChatGPT prompts in the final drafts of their books. (Maybe hire an editor!) And who could forget the time Sports Illustrated published a series of AI-generated articles bylined to nonexistent authors, or when a lawyer searching for a court precedent turned to AI, which hallucinated cases that never happened? A sci-fi magazine made headlines when it announced it was suspending submissions after being bombarded with AI-penned manuscripts. The good news? The AI stories were so abysmal the editors could spot them a mile away.

This article is featured in O'Dwyer's Jul. '25 Travel & Tourism PR Magazine

There’s a hilarious presumption among AI devotees that those of us who remain critical of the technology simply haven’t gotten with the times and embraced bad content. To be clear: I’m aware that many marketing and PR people are convinced AI can create quality writing. I’m just saying they don’t know what quality writing is. I’m saying they’re wrong. I’m saying Gen AI isn’t making writing easier; it’s just making the Internet worse.

So, why do it in the first place? Because it’s cheap and easy, naturally. Because it exhibits quintessentially American thinking: Why spend years learning how to hone a craft when a computer can do it for you? Oh, but wait: it actually can’t.

A thought experiment developed by American philosopher John Searle, known as “the Chinese Room,” illustrates the flawed logic we commit whenever we assume AIs are doing what we do when they perform a task like writing. Imagine you’re locked in a room. On a table is an instruction manual. A slip of paper written in Chinese is pushed through a slot in the door. You don’t speak Chinese, but the manual tells you exactly which Chinese symbols to write in response to the ones you received. You do this and push your reply back through the slot. To a person outside the room, it would be reasonable to assume you’re a Chinese speaker. Searle argues this is exactly what’s happening when we call AIs “intelligent.” Gen AI uses highly complex statistical models to predict the next word in a sequence in an effort to imitate us. And while it does this job efficiently, it doesn’t understand the meaning of anything it says. We project that understanding onto the AI. In other words, we’ve confused syntax with semantics. AI doesn’t write. It just types.
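Searle’s distinction between syntax and semantics is easy to demonstrate in miniature. The sketch below is a toy bigram model, purely illustrative (the corpus text and function names are my own invention, and real generative AI uses vastly larger neural networks), but the principle is the same: it strings words together from frequency statistics alone, with no notion of what any word means.

```python
from collections import defaultdict
import random

# A tiny made-up corpus of marketing-speak. The "model" is nothing but
# counts of which word follows which -- syntax with zero semantics.
corpus = (
    "the brand tells a story the brand drives engagement "
    "the story drives results"
).split()

# Record every observed next-word for each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every sentence this produces is locally plausible, because each adjacent word pair really occurred in the corpus; none of it is *about* anything. Scale the statistics up by a few trillion parameters and you have the same trick, better disguised.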

Indeed, writing is much more than a sequential arrangement of words on a page. Anytime we write something, the final product bears the writer’s individual thumbprint, which is itself the result of thousands upon thousands of micro-decisions drawn from a confusing soup of our experiences, opinions and tastes, as well as random ideas that arrive mysteriously in that ineffable fog we call consciousness. AI doesn’t understand any of that; it just tries to sound like what it thinks “writing” is. But writing isn’t supposed to sound like “writing.” In fact, a fundamental part of writing is saying what the reader would never anticipate. Sometimes, it’s the “mistakes” that make writing great. How can anyone expect an algorithm that has never grappled with the contradictions and complexities of the human experience to do anything more than imitate the safest, blandest writing out there, so it can fool the most credulous among us?

AI can do a lot of great things, but it can’t write. All AI writing is doing is pumping more bad content into the world. If you want to connect with people, maybe try investing in people. And if you can’t come up with creative ideas, maybe give the job to someone else.