Jon Gingerich

Full disclosure: This editorial—well, at least most of it—was written by a person.

A lot has been said recently about ChatGPT, the sophisticated AI chatbot that’s taken the world by storm. Launched by OpenAI in November, ChatGPT surpassed a hundred million active users in February, making it the fastest-growing consumer app in history.

ChatGPT can take virtually any written prompt you give it and generate a remarkably fast, remarkably detailed answer—in a remarkably conversational tone. It can translate languages, write computer code, process data and summarize complex topics with impressive clarity. It can also tell jokes and write news copy and essays—even write song lyrics and stories.

Predictably, this has unnerved some people. A century of science fiction has made artificial intelligence a harbinger of the sort of dystopian hellscape our collective anxieties conjure, one where robots rise up and subjugate mankind. These fears are misplaced. Chatbots simply use deep-learning techniques to repeat information back to us. ChatGPT just happens to do so in a freakishly human way.


If ChatGPT is an existential threat, it’s only because it has the ability to disrupt industries and potentially render a lot of the jobs we do obsolete. That’s a big deal—albeit a challenge we’ve faced more than once since the Industrial Revolution. But ChatGPT promises to do something more: its loyalists and detractors alike have claimed the program can produce quality written content, leaving the future uncertain for those of us who create—either professionally or for pleasure—that intrinsically human stuff we call “art.” Indeed, a science-fiction magazine recently announced it was forced to stop accepting fiction submissions after being flooded with chatbot-generated stories. I was relieved to hear the writing was apparently easy to spot, with one editor describing these stories as “bad in spectacular ways.” I began to suspect that, for all its practical uses, the talk surrounding ChatGPT might be imbued with the same phony hype that surrounded other recent fads that never took off, like Web3 and virtual reality.

Curious, I spent a week toying with it. My analysis? It’s fun. Addictive, even. For some applications, it’s incredibly useful. If you’re a programmer, I can imagine it might make your job easier. And, yes, it can perform certain tasks with a fluency that I’m afraid might threaten some sectors of the workforce. But the software has some serious limitations. Don’t count on it to produce anything resembling quality writing anytime soon.

I started with some simple prompts, asking it things I already know how to do. How do you play an E-minor pentatonic scale on the guitar? The sequence of notes it provided was correct and the instructions were easy to understand. It even offered tablature. How do you perform an armbar in Brazilian jiu-jitsu? For this, it gave me an impressively detailed answer—frankly, a better one than I could’ve provided. How do you set up a freshwater fish tank? It listed all the basic steps accurately and clearly and even offered tips I didn’t know. Finally: How do you use a blind cane? (I’m legally blind and recently completed orientation and mobility training.) The response it gave was correct but in a generalized way that wasn’t particularly useful for anyone experiencing vision loss. I’m willing to bet my trainer at the New York State Commission for the Blind would agree that it’s missing some pretty critical information.

I queried another subject I know about: me. I’ve published a lot over the years in a number of publications, so I figured I had a digital footprint large enough for ChatGPT to work with. According to ChatGPT, I’ve written for the Huffington Post, Esquire and Forbes (all wrong). It said I had taught writing courses at NYU and Pace University (also wrong). It said I’ve been editor of this magazine since 2018 (more like 2006) and that a short story I published in The Saturday Evening Post was written in 1931 by William Faulkner. It even got the name of my last published novel wrong. How annoying.

ChatGPT nailed all the basic history questions I asked, so I lobbed a few controversial topics at it as a litmus test, considering it trawls an Internet overflowing with fake news. When I asked how COVID-19 originated, it responded, with notable impartiality, that while the exact origin of the virus isn’t known, it’s believed to have come from bats, though there are various competing theories with merit, including the possibility that it might’ve escaped from a lab in Wuhan. Not bad.

Testing its potential partisanship further, I asked it to write an essay summarizing Bill de Blasio’s accomplishments as New York City mayor. The results were laughably slanted, and while I could overlook the suggestion that de Blasio had left a “legacy of progressive reforms,” readily available NYPD crime statistics counter its baseless claim that he’d “helped to reduce crime in the city.”

I wanted to test ChatGPT’s critical capabilities. More than anything, I wanted to see if it could be creative. So, I asked it to write a series of 600-word essays: one arguing that college should be free, another on how to combat global warming and one on ways to curb overpopulation. In all cases, the work it produced was clear, well-argued and reasonably well-written, albeit on a high-school level. Granted, the essays were boilerplate and used the same voice every time. For most teachers, I’m guessing the style would be quickly recognizable. They weren’t original; while the argumentation was clear, the responses always seemed a bit obvious. Only in one case—the free college education paper—did it even offer statistics. Still, treated as first drafts and given a bit of revision, I’d venture to guess these papers might pass as college term papers. To me, this was the most troubling revelation, and frankly, I have no idea how we—and especially educators—can unring this bell.

Still, I wasn’t convinced ChatGPT could be creative, so I asked it to come up with some prospective titles for the new novel I’m writing. I typed out the book’s subject and supplied it with a trove of keywords. I did this several times. Despite some names that sounded clever by accident, the exercise was a bust. It read like the bot was spinning its wheels, randomly generating names. Titles by committee, if the committee were utterly incompetent.

Finally, I asked it to write a 2,000-word story based on my new novel’s synopsis and fed it a list of specific plot, character and thematic details I’ve been mulling over for months. The result? Hilariously awful. Uninspired. Turgid. The language made Hemingway sound florid. It was riddled with clichés. There was no voice, no personality aside from an oddly moralizing tone you might find in a Hallmark Channel movie. I repeated the experiment with several other story ideas I had lying around, then with summaries of several literary classics. In each case, the result was the same. ChatGPT is incapable of telling a story. It seemed to understand plot, but virtually every other component—character, theme, language, even dialogue—was missing. And on the plot level, every development was predictable. At no point did it say anything interesting or make any effort to surprise or engage me as a reader, to place me in the story. On a craft level, it was vapid. Painfully unaware. It lacked fluency in the complexities of the human experience, and without that, the joys of reading are lost. ChatGPT can’t write. It can only type.

Two things are certain: AI-powered chatbots like ChatGPT aren’t going anywhere, and they have real utility. ChatGPT is a great tool people can use to solve problems, kick the tires on new ideas or even format press releases, blogs or any of the uncreative, uninspired content-mill clickbait that clutters much of the Internet. So, if bad writing is your bag, you’re in luck. At its best, finessed by a human editor’s hand, ChatGPT might be competent enough to write quality financial news and other copy-desk items. But don’t hold your breath for an AI-penned Great American Novel or even a worthwhile college term paper. There are some things AI will never be able to do, and I’m betting quality writing is one of them.

Writing is hard. At its heart, it’s the act of describing the near-ineffable fog of human consciousness, of sharing with other minds what it means to be us. My condolences to anyone who thinks a machine can do a better job of expressing this experience than they can.