“I would sum up my fear about the future in one word: boring. And that’s my one fear: that everything has happened; nothing exciting or new or interesting is ever going to happen again ... the future is just going to be a vast, conforming suburb of the soul.”

— J.G. Ballard

Jon Gingerich

There’s no question that artificial intelligence was 2025’s story of the year. AI was everywhere last year; it encroached upon every aspect of our lives. It’s all we hear about anymore; people won’t shut up about it. Admittedly, there’s a reason for this. AI is the greatest technological innovation in a generation; it demarcates a clear social and cultural turning point. Which makes it exciting and scary and more than a little annoying at the same time.

Unfortunately, many of the AI sector’s promises came up short in 2025, and if the future proves to be anything like what we’ve witnessed, I’m already disappointed. One thing is clear: AI isn’t a panacea. While it does some core tasks well, it handles many others terribly. Worse, consider the record: AI search destroyed the traffic model that kept digital publishing alive for the past two decades; AI was responsible in 2025 for more than 50,000 layoffs at Amazon, Microsoft, Meta and IBM, among others; and AI-related investment produced ridiculous market concentration, driving tech stocks to historic highs before they crashed spectacularly in October. In fact, the industry’s biggest success stories came from the cadre of slimy tech hypemen who made grandiose claims about their AI companies’ capabilities even as those companies were losing billions. AI in 2025 had all the makings of a religion. That’s not a good thing.

This article is featured in O'Dwyer's Jan. '26 Crisis Communications & PR Buyer's Guide Magazine

The current narrative surrounding AI skepticism typically lumps AI critics into two camps: the “Doomers,” who think AI will bring about the apocalypse, and the “Denialists,” who dismiss the technology as a fad with no real-world value. This is a disingenuous framing that pits both parties as clueless Luddites against the tech revolutionaries ushering us toward AI utopia. The reality, I think, is more nuanced. While it may be unfair to call a technology “slop” when it’s doing what we assumed was impossible a few years ago, it’s not “denialist” to ignore the current hagiographies insisting that AI will automate your job, create masterworks of divine artistic merit and do basically everything short of curing cancer. (Oh, it will probably cure cancer, too.) It’s not denial to say that the creative content AI produces is derivative crap, that AI remains a writing tool for people who don’t know what writing is, that it’s trained by plagiarizing the work of real creators and devaluing that work in the process, or that the number of AI artists out there making anything interesting or remotely original is astonishingly small. It’s not denial to say that AI data centers are environmentally ruinous. It’s not denial to say that AI has turned our social media feeds into wastelands of slop, dividing audiences into gullible saps who believe even the most obviously AI-generated videos are real and cynics who now assume everything they see online is fake. It’s not denial to say that AI hallucinates. It’s not denial to state that right now, ChatGPT is probably flattering the worst person you know. It’s not denial to suggest that, despite it all, AI still can’t replace you.

The implication is that either AI’s best days are still around the corner, or we achieved peak innovation before tech started constantly interrupting our lives. Either way, if some of our current anxieties surrounding AI seem unearned, so too are many of its plaudits, and while it would be wise to concede that AI is a big deal, it also seems reasonable to question the claims coming from behind the AI curtain. If you can’t see that this technology is going to change the world, you’re blind. If you think it’s going to do everything its religious devotees claim, you’re naïve.

So, even if AI has been oversold, it’s clear the technology isn’t going anywhere. It’s also obvious that its capabilities will improve every year. But I can’t help wondering what happens when AI replaces not just our technology but takes the cultural steering wheel as well. To me, that’s the big question. AI’s current failures aren’t the issue; they’re to be expected. The real issue is how boring our new normal will be when bad art isn’t something created only by people. Most of us, I’m sorry to say, aren’t very creative. But thankfully, only people are creative. The logic among AI evangelists that AI has somehow democratized creativity is a joke, even if I’m charmed by the tacit admission that an inability to be creative had prevented them from making art before. Suffice it to say, enlisting a machine to produce art doesn’t make art any more accessible than commissioning a painter to paint something for you. No technology can replace the long, painful journey of learning a craft, even if what you produce is crap. Imagine the emotional vacuousness of a future where audiences consume creative content made by machines that haven’t shared in the mysteries of the human experience. Imagine communicating with more bots than people on social media platforms. Imagine a world that promises only imitations of connection, a barren consumerist slopocalypse where we search for inspiration and meaning and community but instead bond over the latest AI Coca-Cola commercial. That future is already upon us, and it is painfully, horrifically boring.

The hunger for authentic art and authentic human interaction isn’t going anywhere. But I’m guessing our tolerance for lazy creators and overvalued tech will quickly wear thin. The future is here, and as usual, I’m betting on people.