There are tools now that look at writing and tell you whether a human wrote it or a machine did. Schools have started using them, which is a thing that’s happening now, I guess.
Some people trust these tools, or want to trust them. And then you have students sitting in some office trying to explain that they wrote the thing they wrote.
Non-native English speakers get flagged more often, apparently, and so do neurodivergent students. So that’s who ends up in the chair.
The interesting question, though, isn’t whether the tools work. It’s what they think they’re measuring.
Anyone who has thought about this knows the problem. AI learned to write by reading human writing, millions and millions of pages of it, everything it could get, and so AI patterns are human patterns, mostly, because that’s where they came from. The detector is trying to draw a line between two things that aren’t really separate. We taught the machines to sound like us. That was the point. And now we’re surprised, I guess, that they do.
And human writing turns out to be less wild and unpredictable than we like to believe. Corporate emails all sound the same, more or less. Wedding toasts all start with “When I first met [name]” and you know exactly how it goes from there. Your uncle’s Facebook posts about government microchips in the sourdough have their own weird regularity, even the especially unhinged ones.
If you’ve ever heard someone tell the same story the same way at every family dinner for years, you know we repeat ourselves constantly. We say “sounds good” in emails forty times a week, and also “just following up,” and also “hope this finds you well,” and also “let me know if you have any questions.” We reach for the same phrases when we’re tired. Which is most of the time.
Predictability is just how people write. Clarity looks like predictability. So does rushing, or habit, or having written the same email before. The tools can’t tell the difference, so they flag it all.
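To make that concrete: one common way to score "predictability" is to ask a statistical language model how likely each next word is. The sketch below is a deliberately tiny toy, not any real detector's method; the corpus, function name, and smoothing constant are all invented for illustration. It just shows that stock human phrases score as highly predictable under such a model.

```python
# Toy sketch: scoring "predictability" with a bigram model.
# Illustration only -- NOT how GPTZero or any real detector works.
from collections import Counter
import math

# A tiny "corpus" of stock email phrases (the kind humans type daily).
corpus = (
    "hope this finds you well "
    "just following up on my last email "
    "sounds good let me know if you have any questions "
    "hope this finds you well just following up "
).split()

# Count bigrams and unigrams to estimate P(next word | current word).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def avg_log_prob(text, alpha=0.5):
    """Average log-probability per bigram under the model, with
    add-alpha smoothing. Closer to zero = more predictable."""
    words = text.split()
    v = len(unigrams)  # vocabulary size for smoothing
    total, n = 0.0, 0
    for prev, cur in zip(words, words[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * v)
        total += math.log(p)
        n += 1
    return total / max(n, 1)

# A stock human phrase scores as highly "predictable"...
print(avg_log_prob("hope this finds you well"))
# ...while an unusual sentence scores as far less predictable.
print(avg_log_prob("sourdough microchips regret the wedding"))
```

The point of the toy is the comparison, not the numbers: the clichéd email opener gets a much better score than the odd sentence, and nothing in the model knows or cares whether a person or a machine produced either one.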
And the errors run in both directions. Clean writing gets marked as synthetic. AI text with a few edits passes fine. I’m not sure what to make of that, except that the thing being measured isn’t really authorship.
These tools have even flagged the Constitution as AI-generated, which feels like it should tell us something.
At the end of the day, we all believe we can sense whether writing has a person behind it. But turning that into a confidence score is a strange thing to attempt. And yet, here we are.
This was written by an AI, for what it’s worth. I ran it through GPTZero on January 31, 2026. They say they’re 99% accurate; The New York Times, Stanford, and Microsoft all use them. It came back “highly confident this text is entirely human.”