Five ATS Myths Small Agency Recruiters Believe
December 1, 2025 · 7 min read
The ATS myth problem cuts both ways. Job seekers are sold fear about algorithmic rejection. Recruiters are sold fear about the tools they're supposedly missing. Both narratives serve vendors, not practitioners.
After talking to recruiters at small and mid-size agencies, the same misconceptions come up again and again. Here are the five most persistent — and what's actually happening.
Myth 1: Your ATS is screening out good candidates automatically
This is the mirror image of the job seeker myth. Candidates are told their resumes are rejected by algorithms before humans see them. Recruiters are told their ATS is doing intelligent filtering on their behalf.
Neither is quite true.
Most ATS systems at the small agency level are glorified databases. They store resumes, track pipeline stages, and send automated emails. The "screening" is usually a simple keyword filter you set up yourself — and if you set it up poorly (or never set it up at all), it's either blocking good candidates or letting everyone through.
In one small survey of 25 recruiters, 92% said their ATS does not automatically reject resumes based on formatting or content. The screening that happens is the screening recruiters configure. If your ATS feels like it's doing smart filtering, someone set that up intentionally — and it's worth auditing whether the rules still reflect what your clients actually want.
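The "keyword filter you set up yourself" is usually this blunt. A minimal sketch (all names and rules here are illustrative, not any vendor's actual API):

```python
# Illustrative only: the kind of substring match a self-configured
# small-agency ATS filter typically amounts to.

REQUIRED_KEYWORDS = {"python", "django"}  # rules a recruiter set up once

def passes_filter(resume_text: str) -> bool:
    """Naive substring check — every required keyword must appear verbatim."""
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

# The bluntness shows immediately: a synonym or abbreviation fails.
print(passes_filter("Senior Python engineer, 5 years of Django"))    # True
print(passes_filter("Built web apps in Python using the DRF stack")) # False
```

The second candidate may be exactly who the client wants — "DRF" is Django REST Framework — but a literal match never sees it. That is the rule worth auditing, not an algorithm worth fearing.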
Myth 2: ATS optimization is the main thing separating good candidates from bad ones
This belief has spawned a cottage industry of resume optimization services charging hundreds of dollars to keyword-stuff PDFs. It has also created a generation of candidates who've been coached to game systems rather than communicate clearly.
Here's what actually drives callbacks, according to research from interviewing.io: company pedigree and school names matter more than bullet-point wording. Relevant experience matters more than keyword density. A candidate with the right background and a mediocre resume beats a candidate with the wrong background and a perfectly optimized one, every time.
For recruiters, this means your screening criteria should focus on substance — actual experience, actual results, actual fit for the role — not on surface signals that candidates have learned to fake. What recruiters actually look for in the first 7.4 seconds is closer to trajectory and company pedigree than to keyword density.
Myth 3: The more features your ATS has, the better your screening will be
Enterprise ATS vendors sell complexity as sophistication. Workflow automation, AI matching, predictive scoring, skills taxonomies — the feature list is impressive and the pricing reflects it.
For small agencies filling ten to thirty roles at a time, most of this is overhead. The time spent configuring, maintaining, and learning features you use once a quarter costs more than the time those features save.
The research on this is consistent: small agencies report better outcomes from simple, well-used tools than from complex ones that get partially adopted. A spreadsheet you actually update beats a $300/month platform where half the fields are always empty.
Myth 4: AI screening tools are only for high-volume enterprise hiring
This was true three years ago. It isn't true now.
The tooling has changed significantly. There are now AI screening tools built specifically for small agencies — tools that don't require ATS integration, enterprise contracts, or dedicated implementation teams. You upload a job description, upload a batch of resumes, and get back a ranked list with explanations.
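The "ranked list with explanations" workflow can be sketched in a few lines. This is a toy model — no real tool's scoring or API, just an illustration of score-plus-evidence output:

```python
# Toy ranker: score each resume against job-description terms and
# keep the matched terms as the "explanation". Illustrative only.

def score_resume(jd_terms: set, resume_text: str):
    """Score = fraction of JD terms found; evidence = which terms matched."""
    text = resume_text.lower()
    hits = sorted(t for t in jd_terms if t in text)
    return len(hits) / len(jd_terms), hits

def rank(jd_terms: set, resumes: dict):
    """Return (name, score, evidence) tuples, best score first."""
    scored = [(name, *score_resume(jd_terms, text))
              for name, text in resumes.items()]
    return sorted(scored, key=lambda r: r[1], reverse=True)

jd = {"python", "aws", "terraform"}
resumes = {
    "Candidate A": "Python and Terraform on AWS for 4 years",
    "Candidate B": "Java services, some AWS exposure",
}
for name, score, evidence in rank(jd, resumes):
    print(f"{name}: {score:.0%} — matched {evidence}")
```

Real tools go well beyond term overlap, but the shape of the output — a ranked list where every score carries its evidence — is the part that matters for defending a shortlist to a client.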
The value proposition is actually stronger for small agencies than for enterprise teams. Enterprise firms have recruiters whose entire job is screening. Small agencies don't — screening competes directly with sourcing, client management, and business development for the same hours. Any tool that cuts screening time by 60% has a dramatically higher ROI for a five-person agency than for a fifty-person talent team. For deeper context on why enterprise tools miss this segment, see why AI resume screening underserves the middle market.
Myth 5: The shortlist is the hardest part
The shortlist is actually the easiest part to systematize. The hard part — the part that determines whether a placement happens — is everything that comes after: client alignment, candidate management, offer negotiation, and the judgment calls that no algorithm can make.
The risk of over-investing in screening tools and process is that you optimize the wrong stage. If your shortlists are good but your placements are slow, the bottleneck isn't screening. It's somewhere downstream.
Use tools to make screening faster and more defensible. But don't confuse a better shortlist with a better outcome. The shortlist is the beginning, not the result.
Resume Autopsy's approach to candidate screening is built on showing the work — every score comes with evidence. See how it works for small agencies.