    Technology
    March 29, 2026 · 10 min read

    The Detection Deception, Part 1: AI Detectors Are Snake Oil and Everyone Knows It

    By John Ayers

    TLDR — Key Takeaways

    • AI detection tools measure word predictability (perplexity), not authenticity — the better you write, the more likely you are to be flagged
    • OpenAI built and killed its own detector in 7 months due to 26% accuracy and a 9% false-positive rate
    • Stanford research proved AI detectors are biased against non-native English speakers, systematically flagging clean, clear writing as 'machine-like'
    • Detection is a race detectors are structurally guaranteed to lose — every new AI model gets better at sounding human
    • The real question isn't 'Did AI touch this?' but 'Is the expertise behind this content real?'

    Here's something that should make you uncomfortable. The tools that publishers, platforms, and universities are using to "detect" AI-generated text? They don't work. Not "they're imperfect." Not "they need improvement." They fundamentally do not do what they claim to do.

    John Ayers, Co-Founder of Chapters, walks through the evidence: OpenAI killed its own detector after just 7 months (26% accuracy). Stanford researchers showed the tools are biased against non-native English speakers. Turnitin's AI flags forced universities to reverse their own policies.

    The better you write, the more likely you are to be flagged. Sit with that for a second.

    The detectors aren't getting better — the models are getting better at beating them. That's not the same thing. The real question isn't "Did AI touch this?" It's: "Is the expertise behind this content real?"
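Here's a quick way to see why perplexity-based detection penalizes clean writing. Below is a toy bigram language model — a deliberately tiny stand-in for the large models real detectors rely on, with a made-up corpus and add-one smoothing chosen for illustration, not any detector's actual implementation. Predictable word sequences get higher probability, and therefore lower perplexity, which is exactly the "machine-like" signal detectors flag:

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count unigrams and bigrams for a toy bigram language model."""
    unigrams = Counter(tokens[:-1])             # contexts (every token that precedes another)
    bigrams = Counter(zip(tokens, tokens[1:]))  # adjacent word pairs
    return unigrams, bigrams, len(set(tokens))  # vocab size for smoothing

def perplexity(tokens, model):
    """Perplexity of a token sequence under the model, with add-one smoothing."""
    unigrams, bigrams, vocab_size = model
    pairs = list(zip(tokens, tokens[1:]))
    log_prob = 0.0
    for prev, cur in pairs:
        # Laplace smoothing keeps unseen bigrams from getting zero probability.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(pairs))

# Hypothetical training corpus -- a real detector's model is vastly larger.
corpus = "the cat sat on the mat . the dog sat on the mat .".split()
model = train_bigram(corpus)

clear_text = "the cat sat on the mat .".split()  # clean, predictable phrasing
scrambled = "mat the on sat cat the .".split()   # same words, unpredictable order

# Predictable text scores LOWER perplexity -- the "AI-like" signal detectors flag.
print(perplexity(clear_text, model) < perplexity(scrambled, model))
```

The inversion is the whole problem: the clearer and more conventional your prose, the lower its perplexity, and the more "machine-like" a perplexity-based detector thinks it is.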
