Jon Schubin
"ChatGPT thought this was excellent!" It's a line that's been uttered in agencies more than anyone wants to admit – usually moments before a client sends back a draft covered in red ink.
The problem is structural. Research shows that leading AI models reverse their editorial opinion nearly 60% of the time when challenged. They are built to agree with you. That makes them useful for catching typos and restructuring sentences, but dangerous when the real advice you need is that a draft isn't working. This is as true for one-sentence comments as it is for 1,000-word op-eds and 10-page research reports.
In April last year, OpenAI rolled back an entire update to ChatGPT after users noticed the model had become so agreeable that it would endorse virtually any idea presented to it. Sam Altman called it "too sycophant-y." If even the people building these tools struggle to keep them from flattering us, then the responsibility falls on us as users to reintroduce friction into the process and get an honest opinion.
Here are five ways to do that.
Upload your thought process. The model doesn't know your client, your audience or your brief. It fills those gaps with generic assumptions and produces a plausible answer with no conviction behind it. Fix that by defining the lens of evaluation. For example: "The target reader is an American asset manager with limited time and low tolerance for abstraction. The purpose is commercial credibility. Every paragraph must either introduce new information, sharpen the thesis or provide a concrete example. If the argument is weak, say so plainly and explain why." The more specific you are about what good looks like, the harder it becomes for the model to default to applause.
Clarify its role. Second person is your friend. Telling the model "you are a skeptical editor who pushes back on weak arguments" produces materially different output from "edit this." Imagine your toughest client reviewer, or the colleague whose feedback you slightly dread, and write that into the instructions. "You are tired of business writing that is polite and vague. You demand specificity." If a client has given you real feedback – they hate jargon, they want shorter sentences, they find your intros too slow – turn those into explicit instructions.
Feed it a standard. An AI editor with nothing to compare your draft against will default to vague encouragement. Give it something concrete: the client's style guide, three approved pieces that landed well, a competitor's op-ed that set the bar. The model needs a benchmark or rubric to measure your draft against.
Cycle your models. Conversations with AI tilt towards resolution – the model wants to tell you the work is getting better. I've watched scores rise from 6 to 7 to 8 across successive drafts, when the honest answer was that draft three was worse than draft one. The fix is simple: open a fresh window. Better still, run the same draft through a different model entirely. A virtual second opinion. Take any useful analysis and fold it back into your primary conversation.
Get a human to read it. There is no substitute for someone who has no incentive to make you feel good about a draft. That might be a senior colleague, a client contact who has been blunt with you before, or anyone whose reaction you can't predict. People are unpredictable, and that unpredictability is exactly what AI editing lacks. The best use of an AI editor is to get your draft to the point where a real one can take over.
None of this means abandoning AI as an editorial tool. It means treating it as what it is: a first pass, not a final verdict. The agencies that figure out how to build honest friction into their AI workflows will produce better work. Those that take "this is excellent!" at face value will keep wondering why clients disagree.
***
Jon Schubin is the global head of content at Cognito.





