The loop in that article is deliberate. ATS exists to externalize the cost of screening onto applicants while giving employers plausible deniability about who gets filtered. The “objective” framing is the fraud. I built a tool for this (CircuitForge-Peregrine, open-core, local-first). Not because I think the system should exist, but because I kept watching people get filtered out before any human ever saw their application. A lot of them are neurodivergent folks, people who interview brilliantly but get wrecked by the performance art of keyword optimization. Resume coaches and LinkedIn Premium exist for people who can afford to play the game. Peregrine is for everyone else.
Using AI to survive AI screening is genuinely stupid and I don’t love it. But I’d rather the tool be free and local than have the only options be “pay someone” or “lose.”
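
For anyone who hasn’t seen one up close, the filter people are losing to is often no smarter than this. A minimal sketch of ATS-style keyword-coverage scoring; it’s an illustration, not Peregrine’s actual code, and the tokenizer, stopword list, and cutoff are my assumptions:

```python
# Illustrative ATS-style keyword-coverage scoring. Not Peregrine's
# actual code: the tokenizer, stopword list, and cutoff are assumptions.
import re
from collections import Counter

STOPWORDS = {"a", "an", "and", "for", "in", "of", "or", "the", "to", "with"}

def tokens(text: str) -> Counter:
    """Lowercased word counts, minus stopwords; keeps c++/c#/.net-style tokens."""
    words = re.findall(r"[a-z0-9+#.]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def coverage(posting: str, resume: str) -> float:
    """Fraction of the posting's distinct keywords that show up in the resume."""
    want, have = set(tokens(posting)), set(tokens(resume))
    return len(want & have) / len(want) if want else 1.0

posting = "Senior Go engineer: Kubernetes, gRPC, Terraform, on-call rotation"
resume = "Built gRPC services in Go and ran them on Kubernetes"
print(f"coverage: {coverage(posting, resume):.0%}")  # below some arbitrary cutoff? rejected unseen.
```

A single number like that, computed before any human looks, is what ends most candidacies. That’s the thing being optimized against.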


This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path, just a Google form and 60 people sitting on their hands.
The uncomfortable truth is that “terms of service” at this scale is just “we can pull the rug whenever.” Anthropic isn’t unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It’s a big part of why I’ve been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn’t be one vague policy complaint away from a complete outage.
Local gives you continuity even when the upstream disappears.
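
Concretely, “local by default” just means the provider chain starts at your own box. A minimal sketch of that fallback pattern, assuming an Ollama-style server on localhost and an OpenAI-compatible cloud endpoint; the URLs, model names, and env var are illustrative, not Peregrine’s actual code:

```python
# Local-first provider chain: try the local inference server, fall back
# to cloud only on explicit opt-in. Endpoints, model names, and payload
# shapes are illustrative (Ollama-style local, OpenAI-compatible cloud),
# not Peregrine's actual code.
import json
import os
import urllib.error
import urllib.request

LOCAL_URL = "http://localhost:11434/api/generate"          # e.g. Ollama
CLOUD_URL = "https://api.example.com/v1/chat/completions"  # hypothetical

def _post(url: str, payload: dict, headers: dict | None = None) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **(headers or {})},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def generate(prompt: str, allow_cloud: bool = False) -> str:
    # Local first: no terms of service between you and your own GPU.
    try:
        out = _post(LOCAL_URL, {"model": "llama3", "prompt": prompt, "stream": False})
        return out.get("response", "")
    except (urllib.error.URLError, TimeoutError):
        if not allow_cloud:
            raise RuntimeError("local inference down and cloud fallback disabled")
    # Cloud is a fallback the caller has to ask for, never the default.
    out = _post(
        CLOUD_URL,
        {"model": "hosted-model", "messages": [{"role": "user", "content": prompt}]},
        headers={"Authorization": f"Bearer {os.environ.get('CLOUD_API_KEY', '')}"},
    )
    return out["choices"][0]["message"]["content"]
```

The design choice that matters is that the cloud path is opt-in per call. A vague policy complaint on someone else’s policy desk degrades you to local-only instead of taking you to zero.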