Jon Gingerich

Several years ago, the writer Cory Doctorow coined the term “enshittification,” which refers to online platforms’ tendency to provide user experiences that get progressively worse over time. Artificial intelligence has supercharged this phenomenon.

Google’s AI-powered Overview feature often provides users with incorrect answers to queries while effectively bringing the web traffic that online news publishers rely on to a halt. On social media platforms, meanwhile, AI has made an already dire misinformation problem even worse. AI slop farms flood Facebook with nonsensical clickbait, covering everything from recipes to political content, to earn cheap clicks from credulous Boomers with low media literacy. Deepfake videos of Joe Rogan and CNN’s Dr. Sanjay Gupta pitch products those individuals never actually endorsed. The news that Elon Musk’s AI chatbot, Grok, had christened itself “MechaHitler” and begun making antisemitic comments was somehow the least shocking AI-related development last month. In an unexpected twist, some companies are now hiring writers to fix AI’s derivative, predictable and error-prone content. As prophesied, the web’s future just seems to get shittier and shittier.

As AI worms its way into virtually every facet of our lives, its benefits and detriments are becoming increasingly clear. Due to its sheer disruptive ability alone, AI has a habit of making the Internet’s previous challenges—which were manifold—seem trivial by comparison. It isn’t simply the fact that generative AI systems are basically plagiarism machines, feeding on copyrighted works without reimbursing the artists and publishers that created them. It isn’t the fact that students are widely using AI to cheat their way through college. (As it turns out, their professors are using it too!) It isn’t just the fact that AI data centers are causing electricity shortages in some parts of the country.

One of GenAI’s greatest existential threats, and one few talk about, is the tremendous toll it could take on human creativity. Everyone seems to concede AI is one of the most significant technological developments of the last century, but few want to consider the very serious philosophical implications the technology has when we begin relying on it for virtually every mental task that comes our way. What happens when we collectively decide to delegate our critical and creative heavy lifting to machines? What happens when we begin sounding like the chatbots that have been programmed to imitate us? And what’s the point in being creative anyway, once we’ve internalized the idea that AI systems can create content faster and more efficiently than we can? What happens to cultures when they stop creating?

This article is featured in O'Dwyer's Aug. '25 Financial PR/IR & Professional Services PR Magazine

For almost three years, technologists, marketers and AI devotees have shouted from the rooftops purely anecdotal claims about GenAI’s ability to enhance human creativity. I won’t deny that AI might augment some creative processes (research and concept development come to mind), but a spate of recent studies suggests AI isn’t the creative panacea we’ve been promised.

A 2025 study from the University of Pennsylvania’s Wharton School found that while AI can bolster the quality of individual ideas, it also routinely generates the same concepts over and over again, producing results that lack diversity, variety and originality. Similarly, a recent MIT study split students into groups and asked them to write an SAT-style essay, with one group permitted the use of ChatGPT. (All students’ brain activity was monitored with headsets.) Once again, the students who used ChatGPT produced essays that repeatedly used the same words, phrases and ideas. Worse, the ChatGPT students also demonstrated lower brain activity than the other groups while penning their essays. AI isn’t simply a passive vessel we use to create content; it’s actively flattening creative output, homogenizing it until everyone’s work looks and sounds the same.

So, what happens when entire generations begin outsourcing their creative tasks to an algorithm instead of working through solutions themselves? A recently published study found that more than half (58 percent) of English majors at two Midwestern universities lacked the reading comprehension skills needed to understand the opening paragraphs of Charles Dickens’ “Bleak House.” It’s hard to see these findings as anything other than a foregone conclusion. A 2024 Pew Research Center survey reported that a quarter of U.S. teens admitted to using ChatGPT for schoolwork, double the share who said they’d used it the year before. OpenAI, the maker of ChatGPT, recently released its own report claiming that a third of college students now use its products. A 2024 survey conducted by the American Association of Colleges and Universities and Elon University found that more than half (59 percent) of university staff reported an increase in cheating since GenAI tools became available. We’re raising a generation of kids who lack the skills to think critically.

Short term, AI’s threat to human creativity can be heard in the debates surrounding the professions the technology is making economically redundant. (“Why pay writers when machines can do the work for free?”) Long term, the threat is more dire: If AI is disrupting the means by which we develop novel solutions, how will that hinder the future of human innovation?

The good news is that GenAI has shown us, by comparison, how uniquely creative people are. Thankfully, there isn’t an app for that yet.