
What happens when good writing is called AI?
Jennifer Farr is senior account director, public relations, at Earnscliffe.
I recently pitched an op-ed to a top-tier Canadian publication. When the topic was approved and the draft was shared, I assumed the next step would be to refine it based on feedback from the outlet. Instead, there was concern about how it sounded.
The editor explained the piece had been flagged by an AI detection tool and couldn’t be considered for publishing. The problem is, it wasn’t written with AI.
I know that for a fact because I wrote it with my client. During a virtual meeting, I was sharing my screen and we built the article together in real time. Pausing, brainstorming, rewording, tightening things up. The process was collaborative and creative, and it reminded me of how much I love writing.
Since then, I’ve heard similar stories from others working in communications. In each case, the issue wasn’t what was being said, but whether it had been written by a human.
To be fair, I understand why this is happening. Editors are under serious pressure to ensure authenticity. As AI tools become more popular, the expectation to avoid publishing content written by AI has increased – and rightfully so.
But it does leave us in a bit of a weird place. In trying to protect authenticity, what if we’re now distorting it?
In communications, especially in an agency setting, writing is a team effort, and most of the time it’s shaped through conversation. You brainstorm ideas, create a first draft, someone reviews and shares input, you edit and cut what doesn’t work, and then you rinse and repeat until it reads right.
If you do that well, the end result is a clean and structured article. Which, it turns out, can also read as a “tell” for being written by AI. Other flags often include specific writing styles or forms of punctuation (RIP em dash), leading many to wonder if they should adjust the way they write to sound less like AI.
The truth is there is no rule book, which creates some second-guessing on both sides. On the writing side, you start to wonder if something is too polished or if you should remove certain punctuation or phrases. On the editorial side, there’s the challenge of making a judgment call without ever knowing for certain whether AI was used or not.
So how do we find the balance?
If anything, safeguards like these matter more now than they ever have before because of how accessible generative AI is. However, it’s important to remember that these tools are all new.
Many of us are still figuring out how to use them responsibly – and that applies to both AI generation and AI detection. It’s not lost on me how ironic it is that one of the primary ways we determine if something is AI-written is to run it through AI.
Here is the uncomfortable question this raises for me and many others who enjoy the art of writing: if a piece that was written in a messy, collaborative, very human way can be flagged as AI-generated, how can anyone truly define authenticity right now?
The fundamentals haven’t changed when it comes to writing with media in mind. We all aim to be credible and offer a unique perspective, and editors need to publish work they can stand behind.
Walking that line has become challenging, especially when the difference between AI and human writing isn’t as obvious as it used to be. I don’t know what the answer is, but I do know we’re going to have to get more comfortable sitting in the grey area while we figure it out.
The post My op-ed was flagged as AI. It wasn’t. appeared first on PR Daily.