To the editor:
I’m sympathetic to the general thrust of Steven Mintz’s argument in Inside Higher Ed, “Writing in the Age of AI Suspicion” (April 2, 2025). AI-detection tools are unreliable. To the degree that instructors depend on AI detection, they contribute to the erosion of trust between instructors and students—not a good thing. And since AI “detection” works by assessing qualities like smoothness or “fluency” in writing, it implicitly inverts our values: We’re tempted to have higher regard for less structured or coherent writing, because it strikes us as more authentic.
Mintz’s article is potentially misleading, however. He repeatedly reports that in testing the detection software, his and other non-AI-produced writing yielded certain scores as “percent AI-generated.” For example, he writes, “27.5 percent of a January 2019 piece … was deemed likely to contain AI-generated text.” Although the software Mintz used for this exercise (ZeroGPT) does claim to identify “how much” of the writing it flags as AI-generated, many other AI detectors (e.g., chatgptzero) indicate instead the degree of probability that the writing as a whole was written by AI. Both kinds of data are imperfect and problematic, but they convey different things.
Again, Mintz’s argument is useful. But if conscientious instructors are going to take a stand against these technologies on empirical or principled grounds, they would do well to demonstrate an appreciation for the nuances of the various tools.