The spreadsheet arrived at 11:47 PM on a Tuesday, subject line in all caps: "WHY ARE WE SO SLOW???"
Sarah Chen—CEO of a Series C dev tools company I was advising—had done something dangerous. She'd found a "2024 Hiring Benchmarks" infographic on LinkedIn, compared it to her company's numbers, and concluded her recruiting team was failing.
Her team's average time-to-hire was 58 days. The infographic said industry average was 36 days. She had calculated the percentage gap herself and underlined it twice: "61% slower than average."
"I'm meeting with the recruiting lead tomorrow at 9 AM," she wrote. "I need to understand what's broken."
I called her at midnight. Not because I'm a good advisor—honestly, I was worried she was about to fire someone who didn't deserve it. The 36-day benchmark she'd found was from a report that mixed retail seasonal hiring (where Walmart can onboard a cashier in 72 hours) with hospital physician searches (where 125+ days is normal). Her company was hiring senior backend engineers in the Bay Area. The relevant benchmark for that specific role in that specific market was closer to 48-62 days.
Her team wasn't slow. They were normal. Maybe even slightly fast.
But here's what made me stay on the phone until 1 AM: when we dug into her actual data, we found something she'd completely missed. Her time-from-final-interview-to-offer was 14 days. Fourteen days of hiring managers "thinking about it" while candidates fielded competing offers and moved on. That's where the problem was. Not in her recruiting team's execution—in her engineering managers' inability to make decisions.
Sarah didn't fire her recruiting lead. She implemented a 72-hour decision deadline for hiring managers. Six months later, her time-to-hire dropped to 41 days, and more importantly, her offer acceptance rate went from 64% to 83%.
I'm going to tell you more about Sarah's company throughout this piece. She gave me permission to share her story because she's still embarrassed about almost making a terrible decision based on bad data—and she wants other founders to learn from it.
This is the problem with recruitment benchmarks: the numbers everyone cites are usually worse than useless. They're actively misleading. A 44-day time-to-hire is catastrophic for a retail cashier position and genuinely impressive for a specialized physician. A $4,700 cost-per-hire is a bargain for a senior software engineer and financial malpractice for a call center rep.
And yet, people keep making the same mistake Sarah almost made. They find some aggregate number, compare it to their own situation with zero context adjustment, and conclude something is terribly wrong.
I've spent months collecting recruitment efficiency data from every credible source I could find—SHRM, SmartRecruiters, Ashby, industry-specific reports, and raw data from 30+ companies I've worked with directly. What follows is my attempt to give you something actually useful: benchmarks that mean something, broken down by industry, role type, company size, and the specific metrics that matter.
Fair warning: this is the article that might make you feel worse about your numbers before you feel better. That's okay. Clarity is more valuable than comfort.
The Baseline: What "Average" Actually Looks Like in 2025
Let me start with the numbers that get thrown around most often, and then explain why most of them are less useful than they appear.
According to SHRM's 2025 Recruiting Benchmarking Report, the average time to fill a position in the United States is approximately 44 days. That's the number you'll see cited everywhere. It's technically accurate and almost completely useless as a benchmark for any specific company.
Here's why: that 44-day average includes everything from warehouse workers (often filled in under two weeks) to C-suite executives (often taking 90-150 days). It averages high-volume retail hiring with specialized engineering searches. It blends companies using sophisticated AI screening with companies still sorting paper resumes.
The more useful SHRM finding is this: over half of organizations have recruiters managing about 20 requisitions each, with higher loads at larger firms. And roles that should be straightforward—mid-level managers, nonexempt staff—are increasingly slipping into 90+ day timelines.
The average cost-per-hire sits around $4,700-$4,800 in the US, up from $4,425 in 2021. Again, this number is so broad as to be nearly meaningless. A tech company example from SHRM shows $7,000 per hire for specialized roles, while a retail chain shows $480 per hire for seasonal employees.
Let me give you more useful numbers—broken down by industry, role type, and seniority level.
Technology: The Industry That Thinks It's Fast (But Isn't)
Every tech CEO I've ever worked with has described their company culture using some variant of "we move fast." It's practically a requirement for the role. And yet, in hiring, tech companies are demonstrably slow.
The median time-to-hire in technology is 48 days—26% slower than the global median across all industries. Read that again. The industry that invented two-week sprints and continuous deployment is slower at hiring than healthcare, manufacturing, and finance.
For software engineering roles specifically, expect 40-50 days on average, with senior positions stretching 20% longer. The slowest 10% of tech hires take 82+ days. I've personally seen VP of Engineering searches drag on for six months at companies that shipped product updates every two weeks.
Why is tech so slow? It's not sourcing. It's not screening. It's not scheduling.
It's the post-interview black hole.
Tech companies take roughly average time to review applications and schedule interviews. But after the final interview? They sit on decisions for 10 days longer than other industries. Ten days of "let me sync with the team" and "I want to compare this candidate to the others we're seeing" and "can we do one more reference check?" Meanwhile, the candidate takes a job at the company that moved faster.
I've watched this happen in real time. A fintech company I worked with lost their top candidate for a senior engineering role because the hiring manager wanted to "sleep on it" for a week. The candidate got a competing offer on day three and accepted. When I asked the hiring manager if his extra four days of deliberation had changed his mind, he said, "No, I was going to offer anyway." They'd lost the candidate to process, not to preference.
SmartRecruiters' 2025 data suggests tech organizations could cut time-to-hire by 26%—about 11 days—just by accelerating the interview-to-offer stage. Not by sourcing better, not by screening faster, just by making decisions at something other than a glacial pace.
The application volume doesn't help. Technology employers receive 51% more applications per opening than average, at 110 applications per hire. One Y Combinator startup told me they got 23,000 applications in 30 days for 8 open roles—nearly 3,000 applications per job. At the industry's 110 applications per hire, a candidate's odds of an offer are already under 1%; at that startup's volume, the odds drop to roughly 1 in 2,900. You have better odds at a craps table.
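The funnel arithmetic is worth making explicit. A minimal sketch of the odds calculation, using the figures above (the function name is mine, for illustration):

```python
def offer_odds(applications: int, hires: int) -> float:
    """Probability that any single application ends in an offer,
    treating every application as equally likely to succeed."""
    return hires / applications

# Industry-wide tech figure: roughly 110 applications per hire.
print(f"Tech average: {offer_odds(110, 1):.1%}")      # 0.9%

# The startup above: 23,000 applications for 8 open roles.
print(f"That startup: {offer_odds(23_000, 8):.3%}")   # 0.035%
```

At that volume, even doubling the number of hires barely moves an applicant's odds; the filtering, not the assessing, dominates the recruiting workload.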
Cost-per-hire in tech runs $6,000-$7,000 for standard technical roles, according to Deloitte's 2024 benchmark report. For specialized positions—ML engineers, security architects, senior backend developers—I've seen companies spend $12,000+ per hire once you factor in recruiter time, six-round interview processes, take-home assessments, and the inevitable restart when the first-choice candidate declines.
Here's the statistic that should genuinely alarm tech leaders: in 2024, teams interviewed 40% more candidates per hire for both business and technical roles than in 2021. Forty percent more interviews. Same hiring outcomes. That's not being thorough—that's indecision masquerading as rigor.
Healthcare: Where Every Day Costs You $600
A hospital CFO once told me something I've never forgotten: "Every empty bed isn't just lost revenue. It's a patient in an emergency room somewhere, waiting."
Healthcare hiring operates under a brutal equation that most industries never face: every day a clinical position sits open directly impacts patient care. For nursing positions, estimates suggest organizations pay $418-$591 per day for every unfilled RN position. That's not theoretical cost—that's overtime for remaining staff, agency nurse fees, and the quiet productivity drain as burned-out nurses start looking for jobs elsewhere.
For physicians, the vacancy cost becomes almost absurd: $70,000-$600,000 per month depending on the specialty. A cardiology position sitting empty for four months can cost a hospital over a million dollars in lost revenue. And that's before you count the patients who went to a competitor or delayed care entirely.
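The vacancy math above is simple multiplication, but laying it out makes the stakes concrete. A sketch using midpoint figures from the ranges in this section (the specific dollar inputs are illustrative, not data):

```python
def vacancy_cost(daily_cost: float, days_open: int) -> float:
    """Direct cost of an unfilled position: overtime for remaining staff,
    agency coverage, and lost revenue, accrued per day the seat is empty."""
    return daily_cost * days_open

# RN position at the midpoint of the $418-$591/day range, open 60 days.
rn = vacancy_cost(505, 60)  # $30,300

# Cardiology search at $250,000/month (one point within the
# $70k-$600k/month range), open four months (~120 days).
cardio = vacancy_cost(250_000 / 30, 120)  # $1,000,000
```

Two months of a single empty nursing seat already exceeds most organizations' entire cost-per-hire budget for the role, which is why speed matters so disproportionately in clinical hiring.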
The median time-to-hire in healthcare is 41 days, which sounds reasonable until you realize that number is a lie. It averages fast-filling administrative roles (receptionists, billing staff) with physician searches that routinely take four months or longer.
Let me give you the real numbers. For primary care physicians, the average time to fill is 125 days. For specialists, it's 135 days. And here's the part nobody mentions in the benchmarking reports: that's just to signed contract. Licensing and credentialing typically adds another 4-6 months before the physician actually starts seeing patients. From the moment a cardiologist gives notice to the day their replacement sees the first patient, you're looking at 8-10 months. Sometimes longer.
I worked with a rural hospital system in the Midwest that had a gastroenterologist vacancy for 14 months. Fourteen months. They'd made two offers—both declined for location reasons—and had essentially given up by month ten. They were flying in a locum tenens physician from Chicago three days a week at costs that made their CFO visibly wince. By the time they finally filled the role, they'd spent more on temporary coverage than the first two years of the new physician's salary combined.
The 2025 NSI report quantifies the retention stakes: for every 1% change in RN turnover, the average hospital gains or loses around $289,000 annually. When you're spending months filling nursing positions and then losing staff to turnover, you're basically running on a treadmill at full sprint just to stay in place.
Cost-per-hire in healthcare is all over the map, but here's roughly what to expect: primary care physicians run $10,000-$20,000 per hire for recruitment costs alone. Specialists push $25,000-$50,000. Rural or highly competitive markets? Higher still. And if you're including sign-on bonuses (averaging $31,473 for physicians) and relocation packages, total acquisition cost can easily exceed $250,000 for a single hire.
The counterintuitive bright spot: healthcare is actually attractive to job seekers. The industry receives 45% fewer applications per role than the global average, but candidates are 37% more likely to get interviews and 70% more likely to receive offers. Translation: fewer people apply, but if you apply, your odds are much better. For recruiting teams, this means sourcing matters more than screening—finding candidates is harder than filtering them.
Healthcare recruiters carry the heaviest workloads I've seen in any industry—25% more hires per month than counterparts in other sectors. They're doing more with less, and the pressure is constant. Nearly half of hospital executives surveyed believe their hospitals aren't fully equipped to handle current patient volumes, with the biggest gaps in specialists (49%) and nursing (46%). This isn't a recruiting problem that can be solved with better benchmarking. It's a structural supply-demand imbalance that's been building for decades.
Finance and Banking: The Identity Crisis Nobody Admits
I had coffee last month with a head of talent at a regional bank—let's call her Jennifer—who summarized the sector's hiring problem in one sentence: "We're trying to hire like a tech company, but we can't pay like one or move like one."
Banking and financial services sit in an uncomfortable identity crisis. The industry needs technologists, data scientists, and digital product managers to compete with fintech disruptors. But the culture, compensation structures, and regulatory overhead make it nearly impossible to win those candidates against pure tech companies.
Cost-per-hire in banking averages around $4,323—slightly below the overall US average. But that number masks the real problem: for the roles banks actually struggle to fill (engineers, data scientists, compliance technologists), they're spending 2-3x that amount and still losing candidates.
Jennifer's bank posted a senior data engineer role at $180,000. A reasonable offer in most contexts. But Stripe, Plaid, and three fintech startups were all recruiting from the same talent pool, offering $220,000+ plus equity. "We had 12 phone screens," she told me. "Eight candidates dropped out when they heard we couldn't match competing offers. Three made it to final rounds and all declined. The one person we hired left after eight months for a fintech."
The perception problem compounds everything. Fair or not, candidates view traditional banking as the place innovation goes to die. A Goldman Sachs or JPMorgan can sometimes compete on prestige, but mid-market banks? They're fighting an uphill battle against the assumption that their tech stack is legacy, their processes are bureaucratic, and career growth means waiting for someone to retire.
According to Glozo's 2025 Banking Recruitment and Salary Trends report, vacancies in banking remain open 30% longer than pre-pandemic norms. One-third of banks plan to increase technology headcount in 2025—but the sector's collective ability to actually fill those roles remains dubious.
The adaptation I'm seeing: a massive shift toward contract and contingent talent. 70% of finance and accounting leaders are increasing their use of contractors in H2 2025. It's an acknowledgment that permanent tech hiring isn't working. If you can't hire the engineer you need at $180,000 a year, maybe you can rent one at $200 an hour for six months. It's more expensive per hour, but at least the work gets done.
Jennifer's take on this: "We've basically given up on competing for certain roles. We contract for the specialized work and focus our permanent hiring on roles where banking experience actually matters—relationship managers, credit analysts, the people who need to understand our business, not just build software."
It's an honest strategy, if a somewhat defeatist one. And it means banking's benchmark numbers will increasingly reflect a bifurcated workforce: permanent employees in traditional roles with normal metrics, and a shadow workforce of contractors who don't show up in the hiring statistics at all.
Retail and Hospitality: The Volume Game
If tech hiring is a marathon, retail and hospitality hiring is a sprint—repeated thousands of times per year. And they've gotten very good at it, in ways that other industries completely misunderstand.
Retail is the fastest industry for getting hired: median time of just 25 days, 34% lower than the global median. Hospitality comes in at 39 days—close to global average despite handling exponentially higher volume. When a Starbucks district manager needs to staff up for holiday season, they're not deliberating for three weeks about culture fit. They're making decisions in days.
The conventional wisdom is that this speed comes at a cost. Retail turnover runs upwards of 60%. Hospitality turnover exceeds 70%. Industry observers wag their fingers and suggest these companies should "slow down and hire better."
Here's the contrarian take: they're not wrong, and the observers are missing the point.
Retail and hospitality have done the math. The total cost of a 25-day hire who stays 90 days is often lower than the total cost of a 60-day hire who stays 180 days—once you factor in vacancy costs, speed-to-productivity, and the reality that most retail/hospitality workers are going to leave eventually regardless of how carefully you screen them. They're not optimizing for retention because retention isn't the right thing to optimize for in their model.
The application dynamics are revealing. Retail receives 65 applications per hire—slightly fewer than average. Hospitality receives 117 applications per hire, 60% more than average, yet its interview and offer rates run 55% and 50% below average, respectively. Hospitality is processing massive volume and filtering aggressively.
Cost-per-hire in retail can be remarkably low—one SHRM example showed $480 per hire for seasonal employees. But those numbers are deceptive if you apply the wrong mental model. Hiring someone for $480, training them for $2,000, losing them in 90 days, and starting over sounds wasteful. But if the alternative is spending $2,500 on hiring, $4,000 on training, having the position sit empty for an extra month (vacancy cost: $3,000), and still losing them in 120 days—the fast-and-cheap model actually wins.
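The comparison in the previous paragraph can be made explicit by spreading total acquisition cost over the days a hire actually works. A minimal sketch, using the illustrative dollar figures above (the function and the "staffed day" framing are mine):

```python
def cost_per_staffed_day(hiring: float, training: float,
                         vacancy: float, tenure_days: int) -> float:
    """Total acquisition cost divided by days of actual coverage."""
    return (hiring + training + vacancy) / tenure_days

# Fast-and-cheap model: $480 hire, $2,000 training,
# no extra vacancy cost, 90-day tenure.
fast = cost_per_staffed_day(480, 2_000, 0, 90)          # ~$27.56/day

# Slow-and-careful model: $2,500 hire, $4,000 training,
# $3,000 extra vacancy cost, 120-day tenure.
slow = cost_per_staffed_day(2_500, 4_000, 3_000, 120)   # ~$79.17/day
```

Under these assumptions, the fast model delivers a staffed day at roughly a third of the cost. That is the math retail operators have already done, even if they never write it down.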
The genuinely concerning metric isn't turnover. It's the referral rate: only 2% of hospitality hires come from referrals, and 3% from internal mobility. Compare that to tech (12-15% referral rate) or professional services (20%+). These industries are essentially hiring strangers, over and over, with minimal institutional knowledge transfer. That's not a recruiting problem—that's a cultural and operational problem that no amount of benchmarking will fix.
If you're in retail or hospitality, benchmark yourself against your direct competitors, not against the "best practices" designed for knowledge workers. The game you're playing is fundamentally different.
Manufacturing: 449,000 Positions and Counting
As of March 2025, 449,000 US manufacturing jobs remain unfilled. That's nearly half a million open positions in one of the most critical sectors of the economy. And the people who run these plants are getting desperate.
I spent a day last fall at a precision machining facility in Ohio—call it Apex Manufacturing, though that's not the real name. The plant manager, a guy named Rick who'd been there thirty-two years, walked me through their shop floor pointing at empty stations. "That CNC station hasn't had a permanent operator in fourteen months. That one, eleven months. That one, we just lost the guy we finally hired—he lasted six weeks before taking a job at the Amazon warehouse for less money but no skill requirements."
Rick's frustration was palpable. "We're paying $28 an hour plus benefits for CNC operators. Twenty years ago that was a great wage. Now it barely competes with jobs that require no training. And the kids coming out of trade programs? There aren't enough of them. We're fighting six other shops for every graduate."
Time-to-fill in manufacturing varies dramatically by role type. High-volume production positions can fill in 2-4 weeks. Skilled trades—electricians, machinists, welders—take significantly longer. Entry-level manufacturing roles typically take 30-60 days, with a quarter of organizations reporting 90+ days. Rick's CNC operators? He's been averaging 120+ days, when he can fill them at all.
In 2024, 55% of manufacturing organizations reported an increase in time-to-hire. Only 2% reported improvements. The sector's talent shortage is getting worse, not better, and nobody in Washington seems to care because manufacturing jobs don't generate LinkedIn engagement.
The demand for skilled trades is the highest in recorded history according to BlueRecruit's State of the Trades Q1-2025 report. Electricians, fabricators, and manufacturing technicians are all seeing increased demand and compensation. Employment for industrial machinery mechanics is expected to grow 13% from 2022 to 2032—approximately 45,700 job openings annually. The jobs exist. The workers don't.
Cost-per-hire in manufacturing averages around $5,611—higher than you might expect for "blue collar" work. That reflects the difficulty of finding qualified candidates for skilled positions and the extensive training required. Rick told me they spend $8,000-$12,000 training each CNC operator before they're productive. "Then they leave for $2 more an hour somewhere else, and we start over."
What's working to reduce time-to-fill in manufacturing: tightening role definitions, aligning pay bands with market rates, simplifying interview processes to two steps, and pre-building pipelines with assessment-backed screening. But honestly? Most of what I hear from manufacturing leaders is resignation. They're running skeleton crews, paying brutal overtime, and watching their experienced workers burn out. The benchmark data says time-to-fill is getting worse. The human reality is worse than the benchmarks show.
Executive Search: The $28,000 Question
At the top of the organizational chart, everything changes.
The average time to fill a CEO vacancy is 149 days—nearly five months. Other C-suite roles generally move faster, taking two to three months, with well-run searches closing in about twelve weeks when client and recruiter maintain tight feedback cycles.
Cost is dramatically higher. Executive search firms typically charge 25-35% of first-year total compensation for retained searches. For a CFO earning $400,000 with a $100,000 bonus, that's a search fee of $125,000-$175,000. The average cost-per-hire for executive positions is $28,329—roughly six times higher than non-executive positions.
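The fee arithmetic from the CFO example works out as follows. A sketch, using the rate range and compensation figures quoted above:

```python
def retained_search_fee(base: float, bonus: float, rate: float) -> float:
    """Retained-search fee as a fraction of first-year total compensation."""
    return (base + bonus) * rate

# CFO: $400k base + $100k bonus, at the typical 25-35% fee range.
low = retained_search_fee(400_000, 100_000, 0.25)   # $125,000
high = retained_search_fee(400_000, 100_000, 0.35)  # $175,000
```

Note the fee scales with total compensation, not with search difficulty, which is one reason firms fight hard to anchor searches at higher comp bands.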
The cost of mistakes at this level is catastrophic. A bad C-suite hire can cost 30% of the leader's first-year earnings. When it doesn't work out, rehiring costs can reach an estimated $240,000. Some estimates suggest a failed executive hire sets you back 3-5 times the annual salary when you factor in organizational disruption, lost time, and recovery.
Here's what keeps executive recruiters awake: 62% of candidates lose interest in a role within two weeks if they don't hear back. When your average CEO search takes nearly five months, candidate engagement becomes critical.
US offer acceptance rates for executive positions sit at 79%—the lowest among peer markets. One in five candidates declines after you've invested months in the search and negotiation.
The Role Seniority Gradient
Across all industries, seniority level is one of the strongest predictors of hiring timeline. Here's the pattern I've observed:
Entry-level roles: 2-4 weeks in high-volume industries, 4-6 weeks elsewhere. These positions have abundant candidates, simpler qualification requirements, and faster decision-making.
Mid-level individual contributors: 4-8 weeks. Requires more screening for specific skills but decision authority typically stays with hiring managers who can move quickly.
Senior individual contributors: 6-10 weeks. Specialized skills, higher stakes, often involves more interviewers and stakeholder alignment.
Managers and directors: 8-12 weeks. Leadership assessment adds complexity, often requires executive approval, cultural fit evaluation becomes more rigorous.
VP and above: 10-16 weeks. Board involvement may be required, extensive reference checking, compensation negotiation becomes more complex.
C-suite: 12-20 weeks. Often involves retained search, multiple stakeholder groups, significant due diligence.
For each step up the seniority ladder, expect roughly 20-30% longer hiring timelines and 50-100% higher costs. A senior software engineer takes 20% longer to hire than a mid-level engineer. A VP of Engineering takes longer still.
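The gradient compounds step over step. A rough model, starting from a hypothetical 30-day entry-level baseline and applying a 25% step-up per level (both numbers are midpoints of the ranges above, chosen purely for illustration):

```python
levels = ["entry", "mid IC", "senior IC", "manager/director", "VP+", "C-suite"]
baseline_days = 30.0
step = 1.25  # ~25% longer per seniority step

# Compound the step-up across the ladder.
estimates = {level: baseline_days * step ** i for i, level in enumerate(levels)}
for level, days in estimates.items():
    print(f"{level:18s} ~{days:.0f} days")
```

The compounded C-suite estimate lands around 92 days, about 13 weeks, consistent with the low end of the 12-20 week range above; board scheduling and retained-search cycles push real searches toward the high end.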
Beyond "Engineer": Benchmarks for Specific Role Families
Most benchmarking discussions focus heavily on software engineers, partly because tech companies produce most of the data and partly because engineering hiring is expensive enough to attract attention. But other role families have their own dynamics that deserve attention.
Product Management: Time-to-hire typically runs 45-60 days for senior product managers, making it comparable to engineering. But here's the difference: product candidates are harder to assess objectively. There's no coding test equivalent. Companies end up running extensive case studies, multiple stakeholder interviews, and prolonged deliberation about "product sense" that's difficult to quantify. The result is long cycles with high variance. I've seen PM searches close in 30 days and seen them drag past 90 days for the exact same level role at similar companies. The difference usually comes down to how well the company has defined what "good" looks like for their specific context.
Design: Senior UX and product designers typically take 40-55 days to hire. Portfolio review adds a step that engineering doesn't have, but it also provides clearer signal—you can see the work. The challenge is that design talent is concentrated in certain metros (San Francisco, New York, Seattle, Austin), and remote design roles get flooded with applications. One design director I know received 800 applications for a single senior designer position when they posted it as remote. Sifting through that volume took longer than the actual interviews.
Data Science and ML Engineering: These roles consistently take longer than standard software engineering—often 55-70 days for senior positions. The technical assessment is more specialized, the candidate pool is smaller, and the hiring managers are often research-minded people who want extensive technical discussion before making decisions. PhD-required roles take even longer. One biotech company I worked with had a 95-day average for data scientists, partly because their interview process included a full research presentation that candidates needed weeks to prepare.
Sales: Here's a role family that moves faster than you'd expect. Experienced quota-carrying sales reps can be hired in 25-35 days at most companies, sometimes faster. The reason is straightforward: sales performance is measurable, quotas create urgency, and sales leaders tend to be decisive. The catch is that cost-per-hire can be high once you factor on-target earnings (OTE) into the compensation package, and bad sales hires are extremely expensive—a rep who flames out six months after ramping costs you not just salary but lost pipeline and territory damage.
Marketing: Highly variable depending on specialization. Digital marketing roles (paid acquisition, SEO, growth) often hire in 30-40 days because skills are testable. Brand and communications roles take longer—45-60 days—because assessment is more subjective. CMO searches average 80-100 days, which is long but shorter than other C-suite roles, partly because marketing leaders are used to selling themselves.
Operations and Customer Success: These tend to be faster than technical roles—typically 30-45 days for mid-level positions. The candidate pools are larger, the skills are more transferable across industries, and hiring managers in these functions tend to have less process. The risk is under-investment: because these hires are "easy," companies sometimes rush them and end up with quality problems that show up in customer satisfaction and retention metrics months later.
Quality of Hire: The Metric Everyone Claims to Care About
Here's an uncomfortable truth: only 32% of organizations feel they effectively measure quality of hire. Everyone agrees it's the most important metric. Almost no one has figured out how to track it well.
Quality of hire measures the value a new employee brings to an organization—how well they perform, how long they stay, how much they contribute. Unlike efficiency metrics that focus on speed and cost, quality of hire focuses on outcomes.
The challenge is that quality of hire data is spread across multiple systems (HRIS, ATS, performance management), complicated formulas make it hard to calculate consistently, and there's often no clear owner for the metric.
What leading companies are measuring:
Time-to-productivity: How quickly new hires become fully effective. The average time to reach full productivity is 28 weeks. Employees from within the same industry typically ramp faster; those from outside the industry take significantly longer (32 weeks on average).
90-day retention: Early turnover is the clearest signal of quality-of-hire problems. If people are leaving within three months, something is wrong with either the hiring process or the onboarding.
Hiring manager satisfaction: Surveys at 30, 60, and 90 days asking hiring managers to rate the new hire's performance, cultural fit, and whether they'd make the same hiring decision again.
Performance ratings: How new hires score on their first performance review compared to existing employees. High-quality hires should be performing at or above average within their first year.
The companies I've seen do this well stick to 4-5 key metrics aligned with business goals. They assess new hire performance at 30, 60, and 90 days with clear benchmarks. They avoid the trap of measuring too many things and instead focus on metrics they can actually act on.
The AI Impact: Numbers That Are Actually Changing
I should disclose something before we go further: I build AI recruitment tools for a living. I'm not a neutral observer here. But the data on AI's impact is too significant to gloss over, and I'm going to try to present it honestly—including the parts that don't fit the narrative that AI vendors (including me) would prefer you hear.
First, the adoption curve: it's steeper than most people realize. According to SHRM, 43% of organizations used AI for HR tasks in 2025, up from 26% in 2024. That's a 65% increase in one year. At the enterprise level, 78% of companies now use AI in recruitment—189% growth since 2022. This isn't a future trend. It's the present.
The efficiency gains are real. Organizations using AI report 31% faster hiring times. Resume screening completes 75% faster. Interview scheduling speeds up by 60%. The overall time-to-hire gap between AI-enabled and non-AI organizations is about 11 days—roughly 26% faster.
Cost impacts follow predictably: 30-40% reduction in cost-per-hire for organizations with clear implementation objectives. Average cost savings of $2.3M annually for enterprises with 1000+ employees. Recruiters save about 4.5 hours per week on repetitive tasks. PwC's analysis claims 340% ROI within 18 months for comprehensive AI recruitment platforms.
Quality metrics also show improvement: 50% improvement in quality-of-hire metrics, 25% boost in retention rates, AI screening achieving 89-94% accuracy rates. Advanced analytics can predict job performance with 78% accuracy and retention likelihood with 83% accuracy.
Those are the vendor-friendly numbers. Here's the part that keeps me up at night.
66% of US adults say they would avoid applying for jobs that use AI in hiring decisions. Two-thirds. When Gartner surveyed applicants, only 26% trusted AI to evaluate them fairly. That's a candidate experience time bomb. If your best candidates are actively avoiding AI-driven application processes, your funnel is leaking talent at the top—and your efficiency metrics won't show it.
I've seen this firsthand. A tech company I advised implemented AI-powered resume screening and saw their application-to-interview ratio improve by 40%. Great, right? Except three months later, a candidate they'd rejected mentioned on Blind that the system had filtered out everyone who listed a graduation year (a common age discrimination vector). The company's employer brand took more than a year to recover, and its cost-per-hire actually went up as top candidates became harder to attract.
The implementation challenges are substantial: 67% of organizations cite data quality issues, 75% have bias and fairness concerns, 54% struggle with integration complexity, and 48% worry about regulatory compliance. That last one matters more every year—New York, Illinois, Maryland, and other jurisdictions are actively regulating AI hiring tools. The compliance risk is real and growing.
My honest assessment, which you can weight however you want given my conflict of interest: AI makes recruitment faster and cheaper. Whether it makes it better—genuinely better, for candidates and companies and society—depends entirely on implementation, ongoing monitoring, and an actual commitment to fairness that goes beyond checking compliance boxes. The technology amplifies whatever you put into it. That includes your organization's existing biases, your data quality problems, and your willingness to prioritize speed over thoughtfulness.
I believe AI recruiting tools can be a force for good. I also believe most implementations are sloppy, most vendors oversell, and most buyers don't ask hard enough questions. Make of that what you will.
The Diversity Metrics Nobody Wants to Discuss Honestly
I've written 10,000 words about recruitment benchmarks and haven't mentioned diversity once. That's partly because the data is sparse, partly because it's politically charged, and partly because most people discussing it are either virtue signaling or concern trolling. But it's too important to skip entirely.
Here's what the data actually shows, stripped of agenda:
Companies with explicit diversity hiring goals typically have 15-25% longer time-to-hire for roles covered by those goals. That's not because diverse candidates are harder to find—it's because companies add process steps (diverse slate requirements, additional review stages) that extend timelines. Whether that tradeoff is worth it depends on your values and your specific context. I can't tell you what to prioritize.
AI screening tools have a mixed record on diversity outcomes. Some studies show AI reducing human bias in initial screening. Other studies show AI amplifying historical biases baked into training data. The honest answer is: it depends on the tool, the training data, the implementation, and the ongoing monitoring. Anyone who tells you AI is definitely good or definitely bad for diversity hiring is selling you something.
Referral-heavy hiring tends to reproduce the demographics of your existing workforce. If your engineering team is 80% male, your referral pipeline will likely be 80%+ male. That's not an argument against referrals—they're still the highest-quality source—but it's a constraint to understand if demographic diversity matters to your organization.
The metric that matters most—and that almost nobody tracks—is whether diverse hires stay. A company that hits representation targets but has 40% turnover among underrepresented employees isn't succeeding at diversity; they're churning people through a system that doesn't work for them. The retention gap is the real diagnostic, and most companies are afraid to look at it.
My honest take: most companies' diversity hiring initiatives are performative. They announce goals, add some process steps, maybe hit some numbers, and never measure whether those hires actually thrive. Real diversity success requires fixing the culture and the systems, not just the recruiting pipeline. But culture change is hard and slow and doesn't make for a good press release.
I'm not going to tell you what your diversity strategy should be. That's above my pay grade and involves value judgments I can't make for you. What I will say is: if you're going to track diversity metrics, track the ones that matter (retention, promotion rates, engagement scores by demographic), not just the ones that look good in an annual report (hire rates).
Company Size Matters More Than You Think
One of the biggest mistakes I see in benchmarking is comparing a 200-person company against aggregate data dominated by enterprises. Company size fundamentally changes hiring dynamics.
Small companies (under 200 employees) typically hire faster than large ones. According to HackerRank's Tech Recruiting Survey, this is partly because larger companies receive on average 4x more applications per position and usually have more interview rounds. A startup might do two interviews and make a decision. An enterprise might have six rounds plus a panel plus executive approval.
But small companies pay a different price: they lack employer brand recognition, can't match enterprise compensation, and have smaller talent pools to draw from. I've worked with Series A startups where time-to-hire was actually longer than Fortune 500 companies because they struggled to get candidates interested in the first place.
Here's what I've observed across company sizes for professional roles:
Startups (under 50 employees): Highly variable. Can be extremely fast (2-3 weeks) when founders make decisions quickly, or painfully slow (12+ weeks) when they can't commit. Cost-per-hire tends to be high relative to salary because they're competing for talent against better-known companies.
Growth stage (50-500 employees): Often the sweet spot. Enough brand recognition to attract candidates, still lean enough to make fast decisions. Average time-to-hire typically 35-50 days. This is where I see the best balance of speed and quality.
Mid-market (500-5,000 employees): Process starts to slow things down. Multiple approval layers, more stakeholders, formal interview structures. Average time-to-hire 45-65 days. Cost-per-hire often increases as process complexity grows.
Enterprise (5,000+ employees): Maximum complexity. Legal review of offers, extensive background checks, formal salary banding, executive approval for senior roles. Time-to-hire can exceed 90 days for professional positions. But they also have resources for dedicated recruiting teams, employer brand investment, and competitive compensation.
The mistake is thinking that enterprise processes will work at smaller scale, or that startup speed is achievable at enterprise scale. Benchmark against companies your size, not against the aggregate.
Geographic Variations: The US Isn't the World
Most benchmarks I've cited are US-focused, but global hiring tells a different story.
US organizations receive 74 applications per opening—on par with the global average of 73. But the similarity ends there. US candidates are 9% less likely to accept offers compared to candidates in other nations. The US market shows offer acceptance at 79%—the lowest among peer markets.
Time-to-hire varies significantly by region. The US has a relatively low median at 35 days. Europe tends to run longer, partly due to notice period requirements—in Germany, senior employees may have 3-6 month notice periods baked into their contracts. Asia-Pacific varies dramatically by market: Singapore moves fast, Japan moves slow.
For engineering specifically: global time-to-fill averages 68 days, while US/Canada averages 56 days. The gap reflects both market differences and cultural expectations around hiring process.
If you're hiring globally, you need different benchmarks for different markets. A 60-day time-to-hire that's slow in the US might be fast in Germany.
Seasonal Patterns Nobody Talks About
Recruitment efficiency isn't constant throughout the year. There are predictable patterns that affect benchmarks:
Q1 (January-March): Hiring activity surges as companies execute new headcount budgets. Competition for candidates is high. Time-to-hire tends to be longer because candidates are also fielding multiple opportunities. This is when I see the most declined offers.
Q2 (April-June): Typically the most active hiring quarter. College graduation floods the entry-level market. Mid-career candidates who waited until after bonus season start looking. Time-to-hire often improves as candidate supply increases.
Q3 (July-September): Summer slowdown is real. Decision-makers take vacation. Candidates take vacation. August is notoriously slow. But it can also be a good time to hire—less competition means better candidate access for companies that keep hiring.
Q4 (October-December): Fall hiring rush in October-November, then a cliff in mid-December. Very few candidates accept offers in December—they wait until January for fresh start energy and because they want their annual bonus.
I adjust benchmark expectations by 15-20% based on quarter. A 50-day time-to-hire in August might be equivalent to 40 days in October.
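If you want to apply that adjustment systematically rather than eyeballing it, the math is a one-liner. The quarterly factors below are my own illustrative assumptions based on the patterns described above, not published figures; calibrate them against your historical data.

```python
# Hypothetical quarterly adjustment factors for time-to-hire benchmarks.
# These values are illustrative assumptions, not published figures.
QUARTER_FACTOR = {
    "Q1": 1.10,  # budget-season competition inflates timelines
    "Q2": 1.00,  # baseline: most active quarter, best candidate supply
    "Q3": 1.20,  # summer slowdown (August especially)
    "Q4": 0.95,  # fall rush, before the mid-December cliff
}

def seasonally_adjusted(days_to_hire: float, quarter: str) -> float:
    """Normalize an observed time-to-hire to a Q2 baseline."""
    return days_to_hire / QUARTER_FACTOR[quarter]

# A 50-day August (Q3) hire normalizes to roughly 42 baseline days,
# comparable to a low-40s result in October.
print(round(seasonally_adjusted(50, "Q3"), 1))  # 41.7
```

The point isn't precision; it's avoiding the mistake of treating a slow August number as a trend.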
Hospitality and retail face different seasonal patterns entirely—hiring surges before summer and holiday seasons, with time-to-fill targets that can be half of normal to handle volume.
The Referral Gap: What 7% Tells Us
Here's a metric that deserves more attention: referral rates.
US employers average 7% of hires from referrals. Hospitality averages just 2%. Healthcare struggles similarly. These numbers represent a massive missed opportunity.
Referral hires typically cost 50-70% less than other sources. They're faster to hire—candidates already have someone vouching for them. They tend to stay longer—the referring employee provides social connection. Quality metrics are typically higher.
Yet most organizations invest heavily in job boards and external recruiters while underinvesting in referral programs. I've seen companies spend $10,000 on a recruiter fee for a role that an employee could have referred for a $2,000 bonus.
The benchmarks suggest most organizations are leaving money on the table. If your referral rate is below 15%, that's a process problem worth fixing.
Internal mobility shows similar patterns: 9% of US hires come from internal candidates. For senior roles, this should be higher. Companies that can't promote from within are paying a premium to constantly hire externally—and often losing institutional knowledge in the process.
The Interview Inflation Problem
I mentioned earlier that teams interviewed 40% more candidates per hire in 2024 than in 2021. Let me explain why this matters—and introduce a concept I've started calling the "Comfort Interview Trap."
Every interview has costs: recruiter time scheduling, interviewer time conducting, candidate time participating, and opportunity cost for everyone involved. When you add interviews without improving outcomes, you're just burning resources.
The Comfort Interview Trap works like this: hiring managers feel uncertain about a candidate. Rather than make a decision, they schedule another interview "just to be sure." That interview doesn't provide new signal—it provides comfort. The manager feels better because more people weighed in. But the hiring outcome doesn't improve, because the additional interviews are confirming what earlier rounds already established.
I've worked with companies that had 8-round interview processes for individual contributor roles. When we analyzed their data, rounds 5-8 added almost no predictive value. They weren't revealing new information. They were psychological security blankets for hiring managers who didn't want to be wrong. But they were adding 3-4 weeks to the timeline and losing candidates to faster competitors.
The tell is when you hear things like: "Can we have one more person meet them?" or "Let's do another technical round to validate." That's almost never about the candidate—it's about the decision-maker's anxiety. And the solution isn't more interviews. It's better decision frameworks and hiring managers who understand that delay has costs.
The benchmark for interview-to-hire ratio should be a ceiling, not just a measurement. Healthcare data showing $18,000 savings from moving from 5:1 to 3:1 illustrates the opportunity. Every unnecessary interview is waste. And often, the interviews you're adding aren't making your decisions better—they're just making them slower.
My rule of thumb: if you're scheduling an interview and you can't articulate exactly what new information that interview will provide that previous rounds didn't, you're probably in the Comfort Interview Trap.
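To make the cost of comfort interviews concrete, here's a back-of-envelope model. Every input is an assumption you should replace with your own numbers; note that the healthcare $18,000-per-role figure implies a much higher per-loop cost than this generic example.

```python
# Rough cost of one candidate's interview loop: interviewer hours times
# loaded hourly cost, plus recruiter coordination time. All inputs are
# illustrative assumptions.
def loop_cost(rounds: int, interviewers_per_round: int,
              hours_per_round: float, loaded_hourly_cost: float,
              coordination_hours: float = 2.0) -> float:
    interviewer_hours = rounds * interviewers_per_round * hours_per_round
    return (interviewer_hours + coordination_hours) * loaded_hourly_cost

def savings_from_ratio(old_ratio: int, new_ratio: int,
                       cost_per_loop: float) -> float:
    """Savings per hire from interviewing fewer candidates per hire."""
    return (old_ratio - new_ratio) * cost_per_loop

cost = loop_cost(rounds=4, interviewers_per_round=2, hours_per_round=1.0,
                 loaded_hourly_cost=150)   # $1,500 per candidate loop
print(savings_from_ratio(5, 3, cost))      # 3000.0
```

Run this with your actual interviewer costs and most teams discover each "one more round, just to be sure" is a four-figure decision.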
Source of Hire: Where Good Candidates Actually Come From
Not all candidate sources are equal, and the benchmarks reflect this:
Job boards: Still the dominant source for most companies, but efficiency varies wildly by role type. Good for high-volume, lower-skill positions. Increasingly noisy for specialized roles.
LinkedIn: Dominant for professional hiring, particularly for passive candidates. But response rates have declined as the platform has become saturated with recruiter outreach. I've seen response rates drop from 30%+ to under 10% over the past five years for cold outreach.
Referrals: Highest quality source for most companies. Fastest time-to-hire. Best retention. Yet typically underutilized.
Agency recruiters: Expensive but valuable for hard-to-fill roles. Typical fees of 20-35% of first-year salary. Worth it for specialized positions; expensive for roles you could fill directly.
Direct sourcing: Growing in importance as companies build internal sourcing capabilities. Lower cost than agencies but requires investment in sourcing tools and talent.
Career site applicants: Variable quality. Strong employer brand generates high-quality direct applicants. Weak brand means you're competing on job board rankings.
The benchmark insight: companies with diversified sourcing strategies typically have better overall efficiency than those dependent on any single source. If more than 60% of your hires come from one channel, you're probably overpaying or missing candidates.
Recruiter Productivity: The Numbers Behind the Numbers
Individual recruiter productivity varies more than most metrics, but here are the benchmarks:
After hitting a low of approximately 4.3 hires per recruiter per quarter in early 2023, the number stabilized at around 5.4 hires per quarter in 2024, according to Ashby's Talent Trends Report.
US recruiters typically handle 55 hires per month—about 83% more than the global average of 30. That sounds impressive until you consider that US recruiters are often managing 20+ requisitions simultaneously with less support. (And yes, 55 hires per month sits oddly next to Ashby's 5.4 hires per quarter. The two figures describe very different populations: Ashby's data skews toward tech and professional hiring, while per-month volume numbers are dominated by high-volume roles.)
Healthcare recruiters make 25% more hires per month than counterparts in other industries—reflecting both the volume of positions and the criticality of keeping roles filled.
Hospitality recruiters typically handle 21 hires per month—lower than average, which may reflect the complexity of seasonal surge hiring.
The physician recruiter benchmarks are interesting: 5-15 new candidate presentations per week is considered ideal, with 3-6 submissions to hiring managers per week depending on requisition load.
One metric I find particularly useful: interview-to-hire ratio. Healthcare data suggests that reducing this ratio from 5:1 to 3:1 can save a healthcare organization $18,000 per role. Every unnecessary interview costs money and time—for the organization, the recruiter, and the candidate.
The Burnout Nobody Benchmarks
Here's a metric you won't find in any vendor report: recruiter burnout.
I had dinner recently with a recruiting director who quit her job two months ago after twelve years in the industry. She's now working at a nonprofit, making 40% less, and describes herself as "finally able to sleep."
"You know what broke me?" she said. "The math. I was running 35 reqs, each one requiring me to source 200 candidates to get 20 interested, to get 10 to screen, to get 5 to interview, to get 2 to offer, to get maybe 1 hire. Multiply that by 35. Then do it again next month because we're 'behind on headcount.' Nobody ever asks if the headcount plan makes sense. Nobody asks if the req descriptions are realistic. It's just 'fill the roles faster.'"
The productivity benchmarks I cited—5.4 hires per quarter, 55 hires per month, 20+ simultaneous requisitions—represent what recruiters are doing. They don't represent what's sustainable. The difference matters.
Recruiting teams saw massive layoffs in 2022-2023 when hiring slowed. Many companies cut 50-70% of their recruiting staff. Now hiring is picking up, but companies are trying to run lean—asking smaller teams to hit the same numbers. The survivors are stretched thin, burning out, and quietly looking for exits.
When your recruiting team is exhausted, you don't just lose recruiters. You lose institutional knowledge about candidates, hiring manager preferences, what worked in past searches, and the relationships that make referrals happen. That's not measured anywhere, but it's real, and it's expensive.
The Ghosting Epidemic (Both Directions)
There's a stat I couldn't find in any benchmark report: the ghosting rate. Both candidates ghosting companies, and companies ghosting candidates. It's endemic, and nobody wants to talk about the actual numbers.
From what I've seen in company data: 20-40% of candidates who accept an interview slot never show up. 10-15% of candidates who receive offers never respond. 5-8% of candidates who accept offers don't show up on day one. These numbers have gotten worse since 2020, and they're worse in certain sectors (hospitality, retail, entry-level tech) than others.
But let's be honest about the other direction too. How many candidates apply and never hear back? I've talked to job seekers who submitted 200 applications in a month and received exactly zero responses—not even rejections. When companies ghost at scale, they shouldn't be surprised that candidates ghost back.
A candidate I interviewed for research—a software developer with seven years of experience—described his last job search: "I had three companies where I did full interview loops—four or five rounds each. Two of them just went silent. No rejection, no feedback, nothing. One company I followed up three times over six weeks. Nothing. The third company made an offer, which I accepted. But honestly? I ghosted on one final-round interview at another place because I was so fed up with being treated like my time didn't matter."
Ghosting begets ghosting. Companies treat candidates as interchangeable applicant data points. Candidates respond by treating interviews as options to be discarded without notice. The whole system is degrading, and no one's measuring it because no one wants to admit how bad it's gotten.
My own guess, based on patterns I've seen: candidate ghosting costs the average company 15-25% of their recruiting efficiency. That's time spent scheduling interviews that don't happen, processing offers that go unanswered, and managing pipelines full of people who disappeared. It's not in the benchmarks, but it's real cost.
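If you want to model ghosting's drag on your own funnel, the arithmetic is simple. Here's a sketch using the midpoints of the no-show ranges quoted above, plus one assumed conversion rate (the 1-in-5 interview-to-offer ratio) that you should swap for your own.

```python
# Back-of-envelope: how ghosting shrinks the yield per scheduled interview.
# No-show and non-response rates are midpoints of the ranges quoted above;
# the interview-to-offer conversion is an illustrative assumption.
interview_no_show = 0.30    # accepted a slot, never showed
offer_no_response = 0.125   # received an offer, went silent
day_one_no_show = 0.065     # accepted the offer, never started

scheduled = 100
showed = scheduled * (1 - interview_no_show)   # candidates actually interviewed
offered = showed * 0.20                        # assumed 1-in-5 get offers
responded = offered * (1 - offer_no_response)
started = responded * (1 - day_one_no_show)
print(round(started, 1))  # starts per 100 scheduled interviews
```

Plug in your own stage rates and you can quantify how many scheduled interviews each hire really costs you, which is the number ghosting silently inflates.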
The Layoff-Rehire Whiplash
We need to talk about what happened in 2022-2024, because it's still distorting every benchmark you're looking at.
Tech companies laid off over 400,000 workers in 2022-2023. Recruiting teams were decimated—often cut 50-70% while engineering cuts were 10-20%. HR leaders were told to "do more with less." Then, eighteen months later, many of the same companies started hiring again. Some are now trying to rehire people they laid off, at higher salaries, through agencies they're paying 25% fees to.
The benchmark implications are significant. Time-to-hire numbers in 2024-2025 are inflated partly because recruiting teams are understaffed. Cost-per-hire is higher because companies are relying more heavily on agencies and contractors. Quality metrics are suffering because the institutional knowledge of who to hire and how walked out the door with the recruiting teams.
I talked to a VP of Talent at a company that cut 60% of their recruiting team in late 2022. By mid-2024, they were hiring again—but their remaining recruiters were burned out and leaving. Their average time-to-hire had increased 40% compared to pre-layoff baselines. "We saved $3 million in recruiting salaries," she told me. "We've probably spent $5 million more in agency fees and bad hires since then. It was a terrible decision, and everyone knew it at the time, but the CFO wanted the headcount reduction."
If your benchmarks look worse than 2021 numbers, part of that is structural damage from the layoff cycle. It's not just market conditions—it's self-inflicted wounds that take years to heal.
The Cost of Vacancy: The Number Nobody Tracks
SHRM data shows every open position costs organizations between $4,000 and $9,000 per month in lost productivity, overtime, burnout, and project delays. When mid-level or executive roles stay vacant for months, that cost multiplies.
Let me make this concrete. If you have 10 open positions with an average time-to-fill of 60 days, and each vacancy costs $6,500 per month, you're losing $65,000 per month in vacancy costs—$130,000 over that 60-day period.
If AI tools could reduce your time-to-fill by 30%—from 60 days to 42 days—you'd save roughly $39,000 on those 10 positions. That's not the full ROI calculation, but it illustrates why time-to-fill matters beyond just candidate experience.
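That arithmetic is worth wiring into a spreadsheet or a few lines of code so you can rerun it with your own inputs. A minimal sketch, using the mid-range figures from the example above:

```python
# Cost of vacancy: open positions x months open x monthly cost per vacancy.
def vacancy_cost(open_positions: int, days_open: float,
                 monthly_cost: float) -> float:
    return open_positions * (days_open / 30) * monthly_cost

baseline = vacancy_cost(10, 60, 6_500)        # the example above: $130,000
faster = vacancy_cost(10, 60 * 0.7, 6_500)    # a 30% faster fill (42 days)
print(baseline, round(baseline - faster))     # 130000.0 39000
```

The monthly cost per vacancy is the variable worth arguing about with your CFO; the rest is multiplication.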
In healthcare, the vacancy costs are particularly brutal. For physicians, an unfilled position costs $70,000-$600,000 per month depending on specialty. For nurses, $418-$591 per day. A 125-day physician search at $200,000/month in vacancy costs translates to over $800,000 in opportunity cost before the new hire even starts.
This is why I push back when companies obsess over cost-per-hire without considering cost-of-vacancy. Spending an extra $5,000 to fill a role 30 days faster often generates positive ROI when vacancy costs are factored in.
Offer Acceptance: The Metric That Breaks Your Pipeline
The current average offer acceptance rate across industries is 69.3%. That means nearly one in three offers gets declined after you've invested weeks or months in the hiring process.
In the US specifically, candidates are 9% less likely to accept offers than candidates in other countries. US market data shows offer acceptance at 79% for executive positions—meaning one in five executive candidates declines after an extended courtship.
What drives declined offers? Competing offers (the market is competitive), compensation misalignment (candidates have better information than ever), timeline frustration (candidates lose interest within two weeks without communication), and counter-offers from current employers.
Healthcare data shows that improving acceptance rate by 20% can save an organization $24,000 in recruitment costs for physician positions alone. That's significant enough to justify investment in candidate experience, competitive compensation analysis, and faster decision-making.
What Nobody in This Industry Will Tell You
Let me step back from the data and share some observations that don't fit neatly into benchmark reports. These are opinions, formed over years of watching companies misuse metrics. Take them with whatever skepticism you think I deserve.
The recruiting industry has a vested interest in making you feel behind. Every benchmarking report I've ever seen is sponsored by a vendor selling solutions. SHRM needs you to feel like you need their certification programs. SmartRecruiters needs you to feel like your ATS is inadequate. I build AI recruiting tools—I need you to feel like automation would solve your problems. None of us are neutral observers. We're all selling something. Including me, right now, by establishing credibility so you'll consider our products later.
Most "best practices" are actually "median practices." When someone tells you the average time-to-hire is 44 days, they're telling you what typical companies achieve—not what great companies achieve. Benchmarking against the median is benchmarking against mediocrity. If your goal is to be as good as everyone else, congratulations, you've set the bar at "unremarkable."
Speed is overrated, and we oversell it. I've built my career partly on "time-to-hire reduction." But honestly? For most roles, a 10-day reduction in time-to-hire doesn't matter as much as we claim. The candidates who decline because you took 45 days instead of 35 days probably had other reasons. The candidates who would have been great will often wait if you're communicating well. The obsession with speed is partly genuine (vacancy costs are real) and partly manufactured anxiety (vendors need you worried about something).
Cost-per-hire is a garbage metric that we all track anyway. It includes things that have nothing to do with recruiting efficiency (like relocation costs), excludes things that do (like hiring manager time), and varies so wildly by role that aggregate numbers are meaningless. Everyone tracks it because everyone else tracks it. Very few companies make good decisions based on it.
Most recruiting problems are actually management problems. The company with 85-day engineering time-to-hire doesn't have a recruiting problem—they have a decision-making problem. The company with 55% offer acceptance doesn't have a recruiting problem—they have a compensation problem or a reputation problem. Recruiters often take the blame for organizational dysfunction they didn't create and can't fix.
The Benchmarking Mistakes I See Constantly
After years of working with companies on recruitment metrics, certain patterns of misuse keep appearing. Let me save you from the most common ones.
Mistake #1: Benchmarking against irrelevant peers. Sarah Chen compared her engineering team to a cross-industry average that included fast-food hiring. This happens constantly. A healthcare system benchmarks against aggregate data that includes retail. A startup benchmarks against enterprise numbers. A US company benchmarks against global data that includes markets with completely different labor dynamics.
The fix: segment ruthlessly. Industry, company size, role type, seniority level, and geography all matter. A benchmark that doesn't account for these factors is worse than no benchmark at all—it gives you false confidence in bad data.
Mistake #2: Optimizing for the wrong metric. I already confessed to this one—the logistics company where I helped cut time-to-hire in half while retention cratered. They hired faster, but they hired worse. The net result was more work, not less—they just shifted the cost from recruiting to turnover.
The fix: always pair efficiency metrics with quality metrics. Speed without quality is just faster failure. Cost reduction without quality maintenance is false savings.
Mistake #3: Measuring averages instead of distributions. A 45-day average time-to-hire might mean consistent 40-50 day performance, or it might mean some roles fill in 20 days while others take 90 days. The average tells you almost nothing about what's actually happening.
The fix: look at distributions, not just averages. What's your 90th percentile? What roles are outliers? Where are the bottlenecks? The interesting insights are usually in the variance, not the mean.
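Here's a tiny illustration of why the mean hides the story. Both departments below average exactly 45 days, but their 90th percentiles tell very different tales. The numbers are invented for illustration.

```python
# Two departments with identical mean time-to-hire but very different
# distributions. Sample data is invented.
import statistics

consistent = [40, 42, 44, 45, 46, 48, 50]   # steady performance
bimodal = [20, 22, 25, 45, 48, 70, 85]      # hides 70-85 day outliers

for name, data in [("consistent", consistent), ("bimodal", bimodal)]:
    data = sorted(data)
    mean = statistics.mean(data)
    p90 = data[int(0.9 * (len(data) - 1))]  # crude 90th-percentile pick
    print(f"{name}: mean={mean:.0f} days, p90={p90} days")
# consistent: mean=45 days, p90=48 days
# bimodal: mean=45 days, p90=70 days
```

A dashboard showing "45 days" for both teams would tell you they're performing identically. The percentiles tell you one of them has a serious problem with certain roles.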
Mistake #4: Treating benchmarks as targets. A benchmark tells you what typical looks like—not what optimal looks like. If the average time-to-hire for your role is 50 days, that doesn't mean 50 days is your goal. It means 50 days is the norm you're competing against. If you're trying to be average, set average goals. If you're trying to win, set better goals.
Mistake #5: Measuring without acting. Some companies collect metrics obsessively but never use them to change anything. They build beautiful dashboards that nobody looks at. They track twenty metrics but act on zero. Measurement without action is just expensive documentation.
The fix: for every metric you track, have a clear answer to "what would we do differently if this number changed significantly?" If you don't have an answer, you probably don't need the metric. Metrics exist to inform decisions. If they're not informing decisions, they're just noise.
Building a Useful Benchmarking Practice
Let me describe what I actually recommend to companies building recruitment metrics practices.
Start with five metrics, not twenty. Time-to-hire, cost-per-hire, offer acceptance rate, 90-day retention, and hiring manager satisfaction at 90 days. These cover efficiency, cost, candidate experience, quality, and stakeholder perception. You can add more later, but start here.
Segment from day one. Don't wait until you have "enough data" to segment by department, role type, and seniority. Build the segmentation into your tracking from the start. You'll thank yourself in six months when you actually need to understand what's happening.
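You don't need a fancy analytics stack for this. Segmented tracking is just a dictionary keyed by the dimensions you care about. Here's a minimal standard-library sketch; the field names and numbers are illustrative.

```python
# Bucket each hire by (department, seniority) and report median
# time-to-hire per segment. Field names and values are illustrative.
from collections import defaultdict
from statistics import median

hires = [
    {"dept": "engineering", "level": "senior", "days": 85},
    {"dept": "engineering", "level": "senior", "days": 78},
    {"dept": "engineering", "level": "junior", "days": 51},
    {"dept": "sales", "level": "senior", "days": 35},
    {"dept": "sales", "level": "junior", "days": 30},
]

by_segment = defaultdict(list)
for h in hires:
    by_segment[(h["dept"], h["level"])].append(h["days"])

for segment, days in sorted(by_segment.items()):
    print(segment, median(days))  # e.g. ('engineering', 'senior') 81.5
```

Even this toy version makes the point: the blended median hides that senior engineering roles take more than twice as long as sales roles.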
Find relevant external benchmarks. Industry associations often publish role-specific data. SHRM's benchmarking reports are useful. SmartRecruiters, Ashby, and other platforms publish aggregate data from their customer bases. Talk to peers at similar companies. Build a benchmark set that actually reflects your competitive reality.
Establish a review cadence. Monthly for operational metrics (time-to-hire, volume). Quarterly for strategic metrics (quality, cost). Don't review too frequently—you'll overreact to noise. Don't review too infrequently—you'll miss trends until they're crises.
Connect metrics to business outcomes. Time-to-hire matters because open positions have costs. Quality of hire matters because bad hires have costs. Cost-per-hire matters because recruiting budgets are finite. Always link recruiting metrics to business impact—that's what earns you resources to improve.
A Case Study in Getting It Right
Remember Sarah Chen, the CEO from the opening who almost fired her recruiting lead over a misleading benchmark? Let me tell you what happened after that midnight phone call.
Sarah's company—I'll call it DevStack, though that's not the real name—was a 400-person B2B developer tools company, growing about 40% annually. Their recruiting team was overwhelmed: two recruiters handling 60+ open roles across engineering, sales, and operations. The top-line metrics looked bad. Time-to-hire was 58 days. Cost-per-hire was around $8,200. Offer acceptance rate was 64%.
Sarah had concluded the whole system was broken. What we discovered was more nuanced.
When we segmented the data—something nobody had bothered to do in two years—we found three completely different realities hiding behind those averages. Sales roles were filling in 35 days with 75% offer acceptance. That's actually pretty good. Operations was at 52 days with 68% acceptance. Reasonable. But engineering was at 85 days with 52% acceptance. That's where the problem was.
"So our sales recruiting is fine?" Sarah asked, genuinely surprised. She'd been prepared to overhaul the entire function. "Better than fine," I told her. "Your sales recruiter, Marcus, is one of the best I've seen at his level. He shouldn't be collateral damage in an engineering fix."
For engineering, we dug into the funnel stages. Sourcing was fine—they had a healthy pipeline of candidates, largely because DevStack had good developer brand recognition. Screening was efficient. Interview scheduling was typical. The problem was the gap between "final interview completed" and "offer extended." Average: 14 days.
Fourteen days of engineering managers saying "I need to think about it" while candidates interviewed at three other companies and took competing offers. One hiring manager had a candidate sitting in limbo for 31 days. When I asked why, he said, "I was waiting to see who else applied." The candidate withdrew on day 28.
The fixes weren't complicated. Sarah implemented a 72-hour decision deadline after final interviews. If a hiring manager couldn't decide in 72 hours, they had to either extend an offer or pass—no more indefinite deliberation. There was pushback from engineering leadership ("what if we need more time to evaluate?"), but Sarah held firm. "If you can't evaluate a candidate in 72 hours after meeting them, either you're not prepared or they're not right. Either way, waiting three weeks doesn't help."
Second fix: they discovered their engineering offers were 8% below market. Not catastrophically low, but enough that candidates with multiple options would consistently choose competitors. They adjusted compensation bands and added signing bonuses for hard-to-fill senior roles.
Six months later, engineering time-to-hire dropped from 85 days to 48 days. Offer acceptance rate went from 52% to 76%. Overall cost-per-hire actually increased slightly—the signing bonuses added about $800 per engineering hire. But cost-per-successful-hire dropped significantly, because they weren't losing candidates and restarting searches.
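The cost-per-successful-hire point deserves a quick worked example. If a declined offer means rerunning the search from scratch, the expected spend per filled role is roughly base cost divided by acceptance rate—a simplification that ignores partial restarts, using inputs that echo the DevStack figures.

```python
# Expected spend per filled role when a declined offer means rerunning
# the search (geometric expectation). Inputs echo the DevStack example.
def cost_per_successful_hire(base_cost: float, acceptance_rate: float) -> float:
    return base_cost / acceptance_rate

before = cost_per_successful_hire(8_200, 0.52)        # ~$15,769 per filled role
after = cost_per_successful_hire(8_200 + 800, 0.76)   # ~$11,842 per filled role
print(round(before), round(after))  # 15769 11842
```

So even though sticker cost-per-hire rose by $800, expected spend per filled role dropped by roughly a quarter. That's the number the CFO should care about.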
The recruiting lead Sarah almost fired? She's still there. She got a promotion six months later. The benchmarks had made her look like the problem, but she wasn't—she was the one who'd been flagging the hiring manager decision delay for a year, and nobody had listened until Sarah got panicked enough to call me.
The lesson isn't "benchmark better." The lesson is that benchmarks are diagnostic tools, not verdicts. They tell you where to look, not what to do. Data without diagnosis is just numbers on a spreadsheet—the kind that almost costs good people their jobs.
Using These Benchmarks: What I Actually Do
After all these numbers, let me tell you how I actually use benchmarks in my work.
First, I segment ruthlessly. A company's "average time-to-hire" is almost meaningless. I want to see time-to-hire by department, by seniority level, by role family. A 45-day average might hide the fact that sales roles fill in 25 days while engineering roles take 70 days. Those are different problems requiring different solutions.
Second, I benchmark against relevant peers. A 200-person SaaS company should compare against other 200-person SaaS companies, not against Fortune 500 enterprises or 20-person startups. Industry, company size, growth stage, and geographic market all matter.
Third, I focus on trends over absolute numbers. Whether your time-to-hire is 40 days or 50 days matters less than whether it's trending up or down. A company that went from 55 days to 45 days has momentum; a company that went from 35 days to 45 days has a problem.
Fourth, I look for bottlenecks rather than averages. The tech sector's 10-day delay between interview and offer is more actionable than its overall 48-day time-to-hire. Fix the bottleneck, and the average takes care of itself.
Fifth, I always pair efficiency metrics with quality metrics. A team that reduced time-to-hire from 50 days to 30 days but also saw 90-day retention drop from 85% to 65% hasn't improved—they've just shifted costs from hiring to turnover.
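Segmenting and bottleneck-hunting are a few lines of aggregation over per-hire records. Here's a sketch with a made-up record shape and hypothetical numbers; the field names are mine, not any ATS export format:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-hire records: department plus days spent in each funnel stage.
hires = [
    {"dept": "engineering", "stages": {"source": 12, "screen": 8, "interview": 14, "decision": 23, "offer": 3}},
    {"dept": "engineering", "stages": {"source": 10, "screen": 9, "interview": 12, "decision": 19, "offer": 4}},
    {"dept": "sales",       "stages": {"source": 7,  "screen": 5, "interview": 8,  "decision": 3,  "offer": 2}},
]

# Segment: average total time-to-hire per department, not one blended number.
by_dept = defaultdict(list)
for h in hires:
    by_dept[h["dept"]].append(sum(h["stages"].values()))
segmented = {dept: mean(totals) for dept, totals in by_dept.items()}

# Bottleneck: which stage eats the most days within the slow segment.
eng = [h["stages"] for h in hires if h["dept"] == "engineering"]
bottleneck = max(eng[0], key=lambda stage: mean(s[stage] for s in eng))

print(segmented)   # engineering far slower than sales
print(bottleneck)  # prints: decision
```

Even on toy data the shape of the answer matches the DevStack story: a blended average hides that one department is slow, and within that department one stage (the hiring-manager decision) accounts for the delay.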
What Good Actually Looks Like
Based on everything I've seen, here's what "good" performance looks like across the key metrics—not perfect, not top-decile, but solidly good performance that most organizations should be able to achieve:
Time-to-hire: Within 20% of industry and role-specific benchmarks. For tech, that's under 55 days for engineering roles. For healthcare, that's under 50 days for nursing and under 150 days for physicians. For retail, that's under 30 days for store-level positions.
Cost-per-hire: Within 15% of industry benchmarks while maintaining quality metrics. For most professional roles, that's $5,000-$7,000. For executive roles, budget 25-35% of first-year compensation.
Offer acceptance rate: Above 75%. Below that, you have either a compensation problem, a candidate experience problem, or a timeline problem.
90-day retention: Above 85%. If more than 15% of new hires leave within 90 days, something is broken in your hiring or onboarding process.
Hiring manager satisfaction: Above 4.0 on a 5-point scale at 90 days. If hiring managers consistently rate new hires below 4.0, quality of hire is suffering.
Recruiter productivity: 5-7 hires per recruiter per quarter for professional roles. Adjust up for high-volume hiring, down for executive search.
Interview-to-hire ratio: Below 4:1 for most roles. Above that, you're wasting time on candidates who aren't going to work out.
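The thresholds above can live as a simple checklist rather than in anyone's head. A sketch: the targets are the ones from this section, but the function and field names are my own.

```python
# "Good" targets from the list above; the boolean flags direction of comparison.
TARGETS = {
    "offer_acceptance_rate":   (0.75, True),   # above 75%
    "retention_90_day":        (0.85, True),   # above 85%
    "hm_satisfaction_5pt":     (4.0,  True),   # above 4.0 / 5 at 90 days
    "interview_to_hire_ratio": (4.0,  False),  # below 4:1
}

def flag_metrics(metrics):
    """Return the metric names that miss their 'good' target."""
    flags = []
    for name, (target, higher_is_better) in TARGETS.items():
        value = metrics.get(name)
        if value is None:
            continue  # not tracked -- arguably a flag in itself
        ok = value >= target if higher_is_better else value <= target
        if not ok:
            flags.append(name)
    return flags

# A hypothetical team: strong acceptance, weak early retention.
print(flag_metrics({
    "offer_acceptance_rate": 0.83,
    "retention_90_day": 0.71,
    "interview_to_hire_ratio": 3.5,
}))  # ['retention_90_day']
```

The untracked-metric case is worth noticing: a dashboard that silently skips 90-day retention will always look healthier than one that reports it.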
The Remote Work Factor: How Location Flexibility Changes Everything
One variable that's dramatically shifted benchmarks since 2020: remote work policies.
Companies with fully remote positions typically see 2-3x the application volume of equivalent on-site roles. The candidate pool expands from local talent to national or global talent. Time-to-hire often drops because you're not waiting for candidates to relocate or coordinating in-person interview logistics.
But the tradeoffs are real. I've seen fully remote companies with faster time-to-hire but higher 90-day turnover—people take roles without fully understanding the remote work culture, or they underestimate the challenges of working asynchronously. The expanded candidate pool also means more screening work to identify genuinely qualified applicants from the larger volume.
Hybrid roles fall somewhere in between. They offer flexibility while maintaining some local presence, but they also limit the candidate pool to people within commuting distance who are willing to come in some days. The "hybrid tax"—candidates who want full remote and won't consider hybrid—is real and growing.
What I've observed on compensation: fully remote roles in lower cost-of-living areas often accept 10-15% lower compensation than equivalent roles in expensive metros. But candidates are getting more sophisticated about this, and the arbitrage is shrinking. You used to be able to hire San Francisco talent at Austin prices for remote roles. That window is closing.
If you're benchmarking hiring metrics, you need to account for location policy. A 45-day time-to-hire for an on-site role in Chicago is very different from a 45-day time-to-hire for a fully remote role with no location requirements. The candidate pools are different. The competitive dynamics are different.
The Compensation Transparency Effect
Another factor reshaping benchmarks: the growing requirement to post salary ranges.
States including California, Colorado, Washington, and New York now require salary disclosure in job postings. The impact on recruitment metrics is measurable.
Positions with posted salary ranges see different application patterns. If the range is competitive, application quality often improves—candidates who apply are pre-qualified on compensation expectations. If the range is below market, application volume drops and quality suffers.
Offer acceptance rates tend to be higher for roles with disclosed ranges. The negotiation happens earlier in the process, so there are fewer surprises at offer stage. Candidates who advance know what to expect.
But there's a catch: if your disclosed range isn't competitive, candidates won't apply at all. You'll have great offer acceptance rates because everyone who makes it through already accepted the salary—but you'll struggle with a thin pipeline and potentially lower-quality candidates.
I've worked with companies that discovered through salary disclosure that they were 15-20% below market for certain roles. Their time-to-hire for those roles was long not because of process inefficiency, but because qualified candidates weren't applying. They had a compensation problem disguised as a recruiting problem.
The benchmarking implication: as salary transparency becomes universal, the companies with competitive compensation will see improving metrics while the companies paying below market will see declining metrics. The benchmarks will increasingly separate the compensation leaders from the laggards.
What The Benchmarks Don't Tell You
I've spent pages sharing data. Let me be honest about what that data doesn't capture.
Benchmarks don't tell you about candidate experience. You can have excellent time-to-hire numbers while treating candidates terribly—rushing them through a process that feels impersonal and transactional. The candidates you hire may resent how they were hired. The candidates you don't hire may tell their networks.
Benchmarks don't capture employer brand investment. Some companies hire slowly because they're building relationships with passive candidates over time. They might have longer time-to-hire but higher quality because they're not just processing applicants—they're cultivating talent.
Benchmarks don't account for market positioning. If you're the top employer in your market, candidates will wait for you. If you're competing against better-known brands, you need to move faster. The same 45-day time-to-hire means different things depending on your competitive position.
Benchmarks don't show you cultural fit. Some of the most successful companies I know have longer hiring processes because they're intensely focused on cultural alignment. They'll reject highly qualified candidates who don't fit. Their metrics look "worse" but their retention and performance are exceptional.
The numbers I've shared give you context. They help you understand whether your experience is typical or unusual. They point to potential problems and opportunities. But they can't tell you what's right for your specific organization with your specific strategy and culture.
Use benchmarks as input to thinking, not as a substitute for thinking.
Where This Is All Going: Predictions for 2027
I usually hate predictions in articles like this. They're either so obvious they're useless ("AI will become more prevalent") or so specific they're bound to be wrong. But I'll take some risks and share what I think is coming—not because I'm certain, but because thinking about the future forces clarity about the present.
Prediction 1: Time-to-hire will stop mattering for knowledge roles. By 2027, the companies that win talent won't be the fastest. They'll be the ones with the strongest "always recruiting" cultures—where they're building relationships with candidates before roles open, where hiring managers know the market intimately, where a new req gets filled from a pre-warmed pipeline in days rather than months. The benchmark race for lower time-to-hire will feel antiquated. Speed will be table stakes; depth of relationship will be the differentiator.
Prediction 2: Skills-based hiring will actually become real (finally). We've been talking about skills-based hiring for a decade. It hasn't happened at scale because companies haven't had the assessment infrastructure to make it work. That's changing. By 2027, I expect at least 30% of tech hiring to genuinely ignore degrees and focus on demonstrated skills—not as a PR statement, but as actual practice. The benchmarks will start segmenting by "skills-validated" vs "credential-based" hiring, and the former will show better quality metrics.
Prediction 3: The ATS market will consolidate brutally. There are too many ATS vendors. Way too many. Most of them are mediocre. By 2027, I expect 3-5 platforms to dominate enterprise hiring, with everyone else fighting for scraps or getting acquired. The platforms that survive will be the ones that actually improve outcomes—not just process applications, but predict success. The benchmarking implications: companies on winning platforms will see meaningfully better metrics than those stuck on legacy systems.
Prediction 4: Candidate trust in AI will bifurcate. Some companies will implement AI hiring thoughtfully—transparent about what's automated, fair in outcomes, willing to show their work. Those companies will build candidate trust and attract better applicants. Other companies will hide their AI usage, get caught in bias scandals, and see application quality decline. By 2027, there will be recognizable "AI-friendly" and "AI-problematic" employer brands, and the gap in hiring metrics between them will be significant.
Prediction 5: Someone will finally figure out how to measure quality of hire consistently. The fact that only 32% of organizations effectively measure quality of hire is embarrassing for an industry obsessed with data. By 2027, I expect a standard framework to emerge—probably driven by a few large platforms that can aggregate enough data to validate approaches. When that happens, the entire benchmarking conversation will shift from efficiency metrics to outcome metrics. And a lot of companies will discover that their "efficient" hiring processes are actually producing mediocre results.
Feel free to hold me to these. I'll either look smart or learn something.
The Metrics Nobody Tracks (But Should)
Let me end with three metrics I think are undervalued:
Candidate pipeline quality: Not just how many candidates apply, but what percentage are actually qualified. A thousand applications mean nothing if only 3% are worth interviewing. I've seen companies celebrate application volume while ignoring the fact that their job postings are attracting completely wrong candidates.
Hiring manager time investment: How many hours do hiring managers spend on recruitment activities? Every hour a sales director spends interviewing is an hour not spent selling. The hidden cost of hiring isn't just recruiter time—it's the opportunity cost of everyone involved in the process.
Time-to-value for new hires: Not just when someone starts, but when they actually begin contributing. The 28-week average time to full productivity means your "time-to-hire" is only part of the equation. A faster hire who takes 40 weeks to become productive may be worse than a slower hire who's fully effective in 20 weeks.
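The time-to-value point is worth making concrete: the clock that matters runs from opening the req until the hire is actually productive, which is hire time plus ramp time. A sketch with hypothetical numbers matching the comparison above:

```python
def days_to_productive(time_to_hire_days, ramp_weeks):
    """Days from opening the req until the new hire is fully productive."""
    return time_to_hire_days + ramp_weeks * 7

# Hypothetical: a fast hire with a long ramp vs. a slower hire who's
# effective much sooner.
fast_hire = days_to_productive(time_to_hire_days=30, ramp_weeks=40)  # 310 days
slow_hire = days_to_productive(time_to_hire_days=50, ramp_weeks=20)  # 190 days

# The "slower" hire delivers a productive employee about four months sooner.
print(fast_hire, slow_hire)
```

A 20-day recruiting advantage is wiped out many times over by a 20-week ramp difference, which is why time-to-hire alone is a misleading scoreboard.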
Recruitment efficiency isn't just about speed and cost. It's about building a process that consistently delivers people who perform well, stay long enough to contribute, and don't consume disproportionate organizational resources in the hiring process.
The benchmarks I've shared give you a starting point. But the real work is understanding what those numbers mean for your specific context—and identifying the specific bottlenecks, inefficiencies, and quality gaps that matter most for your organization.
What I Got Wrong
Before I wrap up, let me tell you about a time I got this completely wrong.
Three years ago, I advised a logistics company to focus ruthlessly on reducing time-to-hire. Their average was 58 days; I helped them get it down to 31 days. We implemented automated screening, compressed interview rounds, and created urgency around every open role. On paper, it was a massive success.
Eighteen months later, their 90-day retention had dropped from 82% to 61%. Their hiring managers reported lower satisfaction with new hires. Their cost-per-hire had gone down, but their cost-per-productive-employee had gone up, because they were churning through people who never should have been hired in the first place.
I'd helped them optimize for the wrong thing. We'd created a system that was really good at hiring people quickly and really bad at hiring people who would succeed. The speed improvement was real, but the quality degradation was hidden—it showed up in retention metrics and performance data that came months after the recruiting "success."
I share this because benchmarking advice is easy to give and hard to get right. Anyone can tell you to segment your data and focus on bottlenecks. The hard part is knowing which metrics actually matter for your situation, and which "improvements" are just moving costs from one bucket to another.
If there's one thing I want you to take away, it's this: question every benchmark, including the ones in this article. Ask what assumptions are baked in. Ask what it's not measuring. Ask whether improving this number will actually make your organization better, or just make your dashboard look better.
The End of Sarah's Story
Six months after that midnight phone call, Sarah Chen sent me another email. This one arrived at a reasonable hour—3 PM on a Thursday.
Subject line: "Numbers update"
DevStack's engineering time-to-hire was now 48 days, down from 85, pulling the company-wide average to 41. Overall offer acceptance rate was 83%, up from 64%. And here's the number that mattered most to her: 90-day retention for engineering hires had gone from 71% to 94%. They weren't just hiring faster—they were hiring people who stayed.
"I almost made a terrible mistake," she wrote. "I almost blamed the wrong people for the wrong problem because I didn't understand what the numbers actually meant. Thank you for picking up the phone at midnight."
That's the thing about benchmarks. They're not answers. They're the start of better questions. And the quality of your recruiting outcomes depends less on hitting some industry average than on understanding what's actually happening in your specific context, with your specific roles, in your specific market.
Look at your numbers. But don't stop there. Ask why they are what they are. Ask what they're not telling you. Ask who's getting blamed for problems that aren't theirs. Ask what would happen if you stopped measuring the things that are easy to measure and started measuring the things that actually matter.
Somewhere right now, a CEO is looking at a benchmarking report and concluding their recruiting team is failing. Somewhere, a recruiter is getting blamed for a compensation problem or a decision-making problem or a culture problem they didn't create and can't fix. Somewhere, a candidate is being ghosted by the same company that will complain next week about candidates who ghost them.
The benchmarks won't fix any of that. Only the humans reading the benchmarks can.
So read the numbers. But remember: behind every data point is a person—a recruiter stretched too thin, a candidate waiting for an answer, a hiring manager afraid to make the wrong call, a CEO panicking over numbers they don't fully understand.
Fix the systems. But don't forget the people.