Detecting AI (2026 May)

by Barry A. Liebling

AI (artificial intelligence) engines are everywhere, and people's reactions to them are mixed. I regard AI as a powerful tool that can create a lot of value when it is used wisely and skillfully. But, like any tool, AI can be used by fools and villains who will crank out worthless or even harmful content. So some people are alarmed by the power of AI and yearn for it to be tightly controlled and regulated.

But no matter what the alarmists want, and no matter how they try to suppress AI, this technology will not go away. While some people deliberately avoid using it, others are employing AI extensively and are becoming increasingly skilled – or dangerously reckless.

Consider the world of writing. Is it acceptable to use AI when you write a column, an article, a report, or a full book? And what are the appropriate boundaries for using it? Suppose the AI delivers the entire product after you instruct it to do so. Alternatively, what if you use AI to proofread, check grammar, and polish prose, but the actual writing is essentially your own? What are the proper standards for deciding what is legitimate and what is wrong?

A useful way to think about this is to consider what was acceptable before AI existed. Is it all right to have someone else write an entire book or article for you and then put your name on it? That is what people who employ ghostwriters do. Sometimes they admit that they relied on outside help, but often they pretend to be the sole author. Whatever your judgment is regarding the use of a ghostwriter – why should it be any different if the ghostwriter is a non-human machine?

Full disclosure – I do all my writing myself and have no plans to use a ghostwriter – either human or machine. The best policy is to state openly how your work was done. Was it produced by someone (or something) else? If so, that should be revealed.

What about having your writing edited? Are you obligated to state that you had help creating your final version, or is it generally understood that editorial assistance is permissible? And if a human can edit your work, why would it be worse to have an AI engine do the same thing?

As of this writing, there are a number of programs designed to detect work that was done by AI. The goal is to catch people who are “cheating” by using the technology to do things they ought to do on their own.

In the academic world some teachers use detector programs in an effort to discourage students from doing their assignments via AI. But the detectors are only successful at catching students who do not have the competence to edit what the AI produces. It is not difficult to take a text produced by a clumsy AI engine and fix it up so it looks like it was done by a biological human.

And there is an easier path for the would-be writer. If AI-generated text is flagged as not-human, the writer can ask the AI engine to edit it so that it is no longer recognized. If the new version is still flagged, the writer can ask for further revisions until the AI detector fails.
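To make that loop concrete, here is a minimal sketch in Python. The functions generate, revise, and detect_ai are hypothetical stand-ins for whatever AI engine and detector a person might use; no real product or API is implied.

```python
# A minimal sketch of the revise-until-undetected loop described above.
# generate(), revise(), and detect_ai() are hypothetical placeholders for
# an AI engine and a detector; no real product or API is implied.

def generate(prompt: str) -> str:
    """Placeholder: ask an AI engine to draft text from a prompt."""
    return f"Draft responding to: {prompt}"

def revise(text: str) -> str:
    """Placeholder: ask the engine to rewrite the text in a more human voice."""
    return text + " [revised]"

def detect_ai(text: str) -> bool:
    """Placeholder: return True if the detector flags the text as AI-written."""
    return "[revised]" not in text  # toy rule, for illustration only

def revise_until_undetected(prompt: str, max_rounds: int = 5) -> str:
    """Draft, then keep requesting revisions until the detector stops flagging."""
    text = generate(prompt)
    for _ in range(max_rounds):
        if not detect_ai(text):
            break
        text = revise(text)
    return text

print(revise_until_undetected("a short column about AI"))
```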

If someone does that, is it acceptable? I think yes – if the human discloses the procedure.

I teach MBA students at a graduate school of business, and recently I have seen the situation regarding AI become increasingly messy. I tell my students to use AI freely but always to indicate which engine was used, what the prompt was, and how well they think the technology performed. To provide a tangible incentive, I give my students extra points on their grade when they use AI properly.

I have found that some students use AI so extensively (but not prudently) that they (unwittingly?) write with a strong (not pleasant) AI accent. Their writing is filled with cliches, needless repetitions, extraneous general comments, and em-dashes instead of ordinary (easy to type) dashes, and it is inappropriately agreeable. I have offered to give them extra credit for experimenting with the technology, but I was told they did the work on their own – without AI.

Although I have no intention of doing so, I believe I could produce a text that an AI detector would flag as AI-created and not done by a human.

So programs that are supposed to differentiate text produced by AI from text written by a human face an insurmountable challenge. AI engines are getting better at writing like a skillful person, and some humans are unintentionally mimicking inept AI output.

What is the best policy? Use any tools you find useful – including AI engines – but fully disclose your methodology. You have to check everything – with or without AI assistance. And of course, there are no excuses. You are always personally responsible for the quality of your work.

*** See other entries at AlertMindPublishing.com in “Monthly Columns.” ***
