Three weeks ago, our Series A lead called to say the term sheet was ready. Eight million dollars. Tonight I'm sitting in my home office at 2 AM, scrolling through LinkedIn posts and Reddit threads from job seekers sharing their experiences with AI screening systems.

The pattern in these accounts is consistent: experienced professionals, often over 40, report drastically different response rates depending on whether companies use AI screening. Hundreds of applications. A handful of callbacks. Stories about tracking which companies use which systems and finding correlations they can't prove but can't ignore.

I don't know if their observations are right. I don't know if our system is filtering out older workers. I've spent three years building this technology, and I still can't answer basic questions about whether it makes hiring fairer or just faster.

So here I am, on the last day of a year that reshaped everything about how humans find work, trying to make sense of what happened. This isn't the triumphant year-in-review my investors want me to write. It's also not the doom-and-gloom piece that would get me quoted in congressional hearings. It's something messier than either—an attempt to understand a year when AI recruiting grew up, screwed up, got sued, got regulated, got richer, and still left me wondering whether we're building something good.

The Numbers, and What They Hide

I've given this pitch maybe two hundred times. Global AI recruitment market: $660 million in 2025, up from $617 million the year before. Projected to hit $1.12 billion by 2030. Adoption at 87% of companies, basically 100% in Fortune 500. Efficiency gains of 30-50% on time-to-hire. PwC claims 340% ROI within eighteen months. Recruiters saving four hours per role.

I can recite these numbers in my sleep. I've put them on slides, in pitch decks, in investor updates. They're true. They're also—and I'm only admitting this now, at 2 AM on the last day of the year—deeply incomplete.

Those are just the dedicated platforms. They don't count the shadow deployments. The LLMs that engineering teams have quietly plugged into screening workflows without telling HR. The hiring managers using ChatGPT to filter resumes before anyone else sees them. One of our clients discovered their IT department had been running applicants through Claude for six months. Nobody had approved it. Nobody had audited it. It just... happened.

So when industry reports say 40% of applications get screened by AI before human review, I don't believe it. That's the disclosed rate. The real number at large employers is probably 90%. Maybe higher. And the gap between what we disclose and what we actually do? That's the gap where the problems live.

The Year As I Lived It

February 2nd. I was on a flight to Amsterdam when the EU AI Act went live. My phone blew up the moment we landed. Our European clients wanted to know if we did "emotion recognition." I had to google exactly what that meant. Turns out analyzing facial microexpressions in video interviews was now illegal. We didn't do that—but I realized I wasn't entirely sure what our video analysis module actually measured. I spent the taxi ride reading our own technical documentation.

March hit harder. I was mid-call with a prospect when I saw the notification—EEOC had finally filed charges against HireVue. Crystal's case. Deaf Indigenous woman, denied captioning for a video interview, denied accommodation when she asked. HireVue's response was the usual: "entirely without merit." I muted myself and read the filing. Crystal still doesn't have a job. I thought about our own accessibility features. At that point, we had exactly two: screen reader compatibility and adjustable font sizes. I wrote a note to our product lead. It's still in my drafts.

May changed everything. Mobley v. Workday got certified as a class action. The potential class: everyone over 40 who was rejected by Workday's screening since 2017. I pulled the filing. Workday's own disclosure mentioned "1.1 billion applications were rejected" using its software. More than a billion rejections. I called my lawyer that afternoon. "Should we be worried?" He said he'd get back to me. That was seven months ago.

July made it worse. Judge Rita Lin expanded the Workday case to include their HiredScore AI. Her reasoning kept me up that night: "Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era." Software as a decisionmaker. That's what we sell.

September brought the big announcements. LinkedIn's Hiring Assistant going global. OpenAI entering the jobs market. I remember sitting in my car after reading the OpenAI news, not driving anywhere, just thinking. We'd spent three years building this. Now the company with the best AI on the planet was coming for our market.

October: Mercor raised $350 million at a $10 billion valuation. Three kids younger than my junior developers became billionaires. My co-founder texted me: "We picked the wrong business model." I didn't respond.

November brought research I wish I hadn't read. University of Washington showed that when humans work alongside biased AI, they don't correct the bias—they absorb it. The AI infects human judgment. Awareness training reduced biased decisions by 13%. Thirteen percent. We'd just spent $40,000 on bias training for client recruiters.

December, the audit dropped. New York's Local Law 144—the supposed model for AI hiring regulation. Two years of enforcement. Two complaints received. When auditors tested the same companies the city had examined, they found 17 instances of potential non-compliance. The city had found one. The enforcement agency's complaint hotline misdirected calls 75% of the time. This was the regulation we were supposed to be worried about.

The Recruiter Paradox

The pattern has been documented in HR industry research and recruiter forums: senior recruiters who champion AI implementation become redundant because of AI implementation. It's not rare—it's predictable.

The typical story: mid-sized tech company implements AI screening. Internal champion helps select the vendor, runs the pilot, writes the training materials. By Q4, they're screening twice as many applicants with half the team. The champion gets promoted. Gets a raise. Writes a case study for the vendor.

Then the layoffs come. Not a downsizing—an elimination. Two people remain: a coordinator to handle logistics and a contractor to "supervise" the AI. The tool the champion built? It made them redundant.

What's telling is the nuance in these stories. The AI works fine for volume hiring—production roles, sales, support. But senior positions? The ones where fit matters, where the resume doesn't tell the whole story? The hiring quality drops. The AI optimizes for pattern matching. It can't see potential. It can't take a chance on someone unconventional.

The irony compounds: these former recruiters are now being rejected by AI screening tools they helped implement. They know exactly what the systems look for—they helped design the criteria—and they still can't get past the filter. If someone who built the system can't beat it, what chance does everyone else have?

Many end up consulting—teaching other companies how to implement the same tools that replaced them. "I'm in the room when they talk about what to automate next," one wrote on a recruiting forum. The irony isn't lost on anyone.

The LinkedIn Bet

When LinkedIn announced Hiring Assistant was going global in September, the strategic implications were clear: this wasn't just a feature release. It was a repositioning.

LinkedIn had watched startups chip away at its recruiting business for years. Sourcing tools. Assessment platforms. Scheduling bots. Interview automation. Each one took a slice of what LinkedIn used to own.

Hiring Assistant was meant to be the response. Not just a tool but an agent—something that doesn't just assist recruiters but acts on their behalf. The "plan-and-execute" architecture, the "cognitive memory" that learns recruiter preferences, the autonomous outreach capabilities. This was LinkedIn saying: whatever AI can do for recruiting, we'll do it in-house.

The early numbers are impressive. Recruiters claim to save four hours per role. 69% higher InMail acceptance rates. Some users report doubling their capacity. An Equinix recruiter supposedly doubled the number of roles she could handle.

But here's what the PR materials don't mention: LinkedIn's business model depends on recruiters buying subscriptions. If Hiring Assistant is so efficient that companies need fewer recruiters, what happens to subscription revenue? And if the AI is doing the outreach, what stops candidates from just talking to the AI instead of paying for premium accounts?

LinkedIn's public messaging emphasizes "expanding the market" and "raising the floor." Translation: they're betting that AI will increase total hiring activity enough to offset reduced per-recruiter spending.

Maybe they're right. Or maybe they're cannibalizing their own business while hoping the growth numbers hold. Either way, it's a bet the whole industry is watching closely: the AI clearly works, but nobody knows yet what it does to the business model underneath it.

The OpenAI Wildcard

Then there's OpenAI.

When Fidji Simo announced the jobs platform in September, my first reaction was: finally. My second was: oh shit.

Finally, because someone with real AI chops was taking a serious run at the space. OpenAI could build things that recruiting vendors, including mine, couldn't match. Real reasoning capabilities. Genuine natural language understanding. The ability to evaluate candidates based on demonstrated competencies rather than keyword matching.

Oh shit, because OpenAI has the best AI on the planet and unlimited capital. We'd built our entire company on the assumption that domain expertise mattered more than model capability. That assumption was about to get tested.

The partnerships they announced—Walmart, BCG—signal serious intent. Walmart alone employs 1.6 million people. If OpenAI becomes the platform through which Walmart hires, trains, and promotes workers, that's a massive installed base before they've even launched.

Here's what keeps me thinking about the OpenAI announcement: they're planning to certify 10 million Americans in AI skills by 2030. If they succeed, they'll have credentialing data that no one else has. They'll know not just what people claim on resumes, but what they actually demonstrated in structured assessments.

Combine that with their jobs platform, and you have a closed loop where OpenAI certifies skills, matches candidates to jobs, and learns from hiring outcomes to improve both sides of the market.

I don't know where I was going with this section. I started it wanting to make a point about competition, and I've ended up just... scared, I guess. We spent three years building something, and there's a real chance it becomes irrelevant before we've figured out whether it's good or bad. That's a weird thing to process. I'm still processing it.

Mercor and the Money Question

In October, three guys who founded Mercor became billionaires. I looked up their ages. Younger than some of the interns at companies I've worked for. Younger than the analyst who does our financial projections. I'm not saying this to be bitter—okay, maybe a little bitter—but it tells you something about where the money is flowing.

Brendan Foody, Adarsh Hiremath, and Surya Midha started out building an AI that assessed candidates by analyzing interview transcripts. Standard stuff. Then they pivoted. Instead of matching job seekers with employers, they started hiring people to train AI models.

Think about that for a second. Their AI recruiting platform became a platform for recruiting humans to improve AI. They built an algorithm to match workers with jobs, and the job turned out to be teaching other algorithms to do jobs.

The $350 million raise at a $10 billion valuation tells you something about investor sentiment. In a year when overall VC funding was constrained, AI companies captured nearly half of global startup investment. $202 billion flowed into AI in 2025—75% more than 2024.

Some of that money is chasing real value. Some of it is chasing hype. The Mercor valuation could be justified if they become the dominant labor marketplace for AI development work. It could also be a peak-of-the-hype indicator.

I don't know which. I'm not sure anyone does.

What I do know is that the money changes the game. Well-funded AI recruiting startups can acquire competitors, hire better engineers, take more risks. They can afford to operate at a loss while building market share. Smaller players—like mine—face a choice: find a niche, find an acquirer, or find a new business.

What Nobody Talks About

Here's something you won't hear at industry conferences. During a due diligence call for our Series A, an investor asked me point-blank: "If your system is as good at predicting job performance as you claim, why aren't you using it to hire your own team?"

I didn't have a good answer. We don't use our own product internally. Most AI recruiting companies don't. We tell clients our algorithms can identify top talent, but when we hire engineers and salespeople, we do it the old-fashioned way—referrals, interviews, gut feel. The cobbler's children go barefoot.

While I'm confessing: here's the pitch I give to enterprise clients. I've given it maybe fifty times this year. "Our AI reduces bias by removing subjective human judgment from initial screening. Every candidate gets evaluated on the same criteria. No resume gets overlooked because a recruiter was tired or distracted or unconsciously influenced by a name or a photo."

That pitch is not a lie, exactly. But it leaves out the part where our AI learned its criteria from historical hiring decisions made by those same biased humans. It leaves out the part where we can't fully explain why the model weights certain factors. It leaves out Judge Lin's concerns about proxies. I've closed deals worth hundreds of thousands of dollars by telling half the story. I don't know how to feel about that. I definitely know how I should feel about it.

And that's just the US market. I don't have reliable data on what's happening in Asia. Industry analysts report that Chinese tech giants operate AI hiring systems far more integrated than anything in the West—faster, trained on datasets Western companies can't access. Alibaba reportedly screens a million applications monthly. But nobody publishes research, nobody gets sued, and nobody knows what biases might be baked in. The EU can regulate emotion recognition all they want. It doesn't touch Shenzhen.

Then there's the question nobody wants to ask: what happens when candidates start gaming the system as effectively as the system games them? I've seen resumes that are clearly optimized for AI parsing—keyword-stuffed, formatted for machine reading, stripped of anything that might trigger a bias flag. Some candidates are using ChatGPT to rewrite their applications for each job. The arms race is already underway. At some point, we'll have AI-written resumes being screened by AI recruiters, with humans nowhere in the loop. I don't know what that world looks like, but we're heading there faster than anyone wants to admit.
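To see why the stuffing works, here's the crudest possible screener, a caricature I'm inventing for illustration (the keywords and resumes are made up), though the earliest ATS filters weren't far off:

```python
# A caricature of keyword screening, invented for illustration.
# The posting's keyword list becomes the entire evaluation.
JOB_KEYWORDS = {"python", "kubernetes", "agile", "leadership", "ml"}

def keyword_score(resume_text: str) -> float:
    """Fraction of the posting's keywords that appear in the resume."""
    words = set(resume_text.lower().split())
    return len(words & JOB_KEYWORDS) / len(JOB_KEYWORDS)

honest = "Led a small team building ML pipelines in Python"
stuffed = "python kubernetes agile leadership ml"  # no claims, just tokens

print(keyword_score(honest))   # 0.4
print(keyword_score(stuffed))  # 1.0
```

Modern systems use embeddings rather than literal token overlap, but the arms race is the same: candidates optimize the artifact the machine reads, not the work behind it.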

The Lawsuit That Changes Everything

I've read the Mobley v. Workday filings three times. They keep me up at night.

Derek Mobley is the named plaintiff. African-American. Over 40. Has a disability. Applied to more than 100 jobs at companies using Workday between 2017 and 2023. Rejected every time. Not a single interview.

The core allegation is simple: Workday's AI discriminates. Not intentionally, but effectively. The algorithm was trained on historical hiring data that reflects historical biases, and it perpetuates those biases at scale.

What makes this case different from previous AI bias complaints is the scope. Workday processes applications for thousands of companies. In their own filings, they mention 1.1 billion rejected applications during the relevant period. A billion.

If the plaintiffs win, we'll learn things about AI hiring that the industry has tried to keep quiet. Training data composition. Validation methodologies. Internal bias audits. The gap between marketing claims and operational reality.

Workday denies everything. They call the rulings "preliminary, procedural." They say the claims "rely on allegations, not evidence."

But here's what I can't shake: I run a company that does something similar to what Workday does. I've never been entirely sure our system is fair either. We do bias audits. We test for demographic parity. We follow best practices. But the research keeps showing that bias finds proxies. Zip codes. College names. Language patterns. The algorithm learns to discriminate even when you explicitly tell it not to.

If Workday loses, every AI recruiting company will need to answer the same questions. Including mine.

The Bias Problem We Haven't Solved

In March, we got the bias audit results back. Green across the board. Demographic parity on race, gender, age. I forwarded the report to the board with a note about our commitment to responsible AI.

The problem with passing these audits: the metrics are designed to let you pass. They measure whether protected groups advance at equal rates—not whether they should advance at equal rates.

Think about who applies. Candidates from underrepresented backgrounds have been screened out so many times that many don't bother applying unless they're overqualified. So the ones who do make it into the applicant pool are, on average, probably stronger than candidates who apply casually to everything. Equal advancement rates across a pool like that might actually mean the bar is set higher for them.
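For readers who haven't sat through one of these audits: the headline metric usually reduces to the EEOC's four-fifths rule. A toy sketch with invented numbers, nothing from our actual system:

```python
def selection_rate(outcomes):
    """Fraction of a group's applicants who advance (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Lower selection rate divided by higher. A ratio of at least
    0.8 'passes' under the four-fifths rule most audits reduce to."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

group_a = [1, 0] * 5                      # 50% advance
group_b = [1, 0, 1, 0, 1, 0, 1, 0, 0, 0]  # 40% advance

print(f"impact ratio: {impact_ratio(group_a, group_b):.2f}")  # 0.80: passes
```

The calculation never sees who was in the pool to begin with, which is exactly the problem: a green result certifies equal rates, not equal bars.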

The proxy problem is even more insidious. Remove race and gender from training, and the model learns to use school prestige as a proxy. For what? Quality? Socioeconomic status? Race? You literally can't tell.
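Here's a toy simulation of that dynamic, with entirely invented numbers, not data from any real model. The screener below never sees group membership, only a prestige score that historically correlates with it:

```python
import random

random.seed(0)

# Hypothetical population. `group` is the protected attribute the model
# never sees; `prestige` correlates with it because of who historically
# attended which schools. All numbers are invented for illustration.
population = []
for _ in range(10_000):
    group = random.random() < 0.5
    mean = 0.7 if group else 0.4
    population.append((group, random.gauss(mean, 0.15)))

def screen(prestige, cutoff=0.55):
    """A 'blind' screener: advances anyone above the prestige bar."""
    return prestige >= cutoff

advanced = [group for group, prestige in population if screen(prestige)]
share = sum(advanced) / len(advanced)
print(f"favored group's share of advanced candidates: {share:.0%}")
# Well above 80% on a typical run, even though group membership was
# never an input. The proxy did the filtering.
```

Swap prestige for zip code or vocabulary and the mechanism is identical; from inside the model you can't tell which of them a feature is standing in for.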

At my own company, we tried removing college prestige from the model. Accuracy dropped 8%. The product team killed the change. We still send the bias audit reports to the board.

University of Washington, late 2024: leading AI models preferred white-associated names 85% of the time when evaluating identical resumes. Black male names were preferred exactly zero percent of the time across thousands of comparisons. Zero.

This year's follow-up was worse. When humans work alongside biased AI, they don't correct it—they adopt the bias. The AI doesn't just make bad decisions. It infects the humans trying to supervise it.

The human cost shows up in job seeker forums and support communities. People keeping journals of rejections. Writing down what they're feeling so they remember they're not crazy—that they used to run teams, ship products, build things that mattered.

The recurring theme in these accounts: "The worst part isn't the rejection. It's not knowing why. I asked companies for feedback. They say they can't share that information. One recruiter told me, off the record, she doesn't even see applications until after the AI has screened them. She literally doesn't know why I was rejected. Nobody does. The machine made a decision about my life, and nobody can explain it."

The Regulation Gap

You'd think, with all these problems, that regulators would be stepping in. You'd be wrong.

The EU AI Act is real. As of February 2025, emotion recognition in job interviews is banned in Europe. Full compliance for high-risk AI (including recruiting tools) is required by August 2026. Fines can hit 35 million euros or 7% of global turnover.

But here's the thing about EU regulations: they force compliance from big companies operating in Europe. They don't touch the thousands of smaller companies using AI to screen applications in markets where nobody's watching.

And in the US? The regulation is a joke.

New York City's Local Law 144 was supposed to be the model. Independent bias audits. Annual publication. Notice to candidates before AI evaluation.

Two years of enforcement. Two complaints. The state comptroller's December audit found the enforcement agency's process "ineffective." When auditors reviewed the same companies the city had examined, they found 17 instances of potential non-compliance versus the city's finding of one. Test calls to the complaint hotline were misdirected 75% of the time.

The message from NYC: we made a law. We don't enforce it. Do whatever you want.

Illinois and Colorado have laws coming. But Colorado's has already been delayed, extended, and challenged. A December executive order created a DOJ task force specifically to undermine state AI regulations. Let that sink in. The federal government isn't just failing to regulate—it's actively obstructing states that try.

I'm not asking for heavy-handed regulation. I run a company that would be regulated. I have every incentive to want a light touch. But I'm also watching this industry get away with things that would be illegal if humans did them, and it's hard to see how that ends well for anyone. Right now, companies can discriminate at scale and face essentially no consequences. The only real check is the courts, and lawsuits take years. By the time Mobley v. Workday gets resolved, another billion applications will have been screened.

What Candidates Actually Experience

The industry talks about AI recruiting in terms of efficiency, cost savings, time-to-hire. We rarely talk about what it's like on the other side of the screen.

The name experiments are well-documented now—candidates testing whether "J. Williams" gets different results than "Jamal Williams." The academic research confirms what job seekers have suspected: response rates differ significantly based on name associations. Same resume. Same cover letter. Different outcomes.

One pattern from these informal experiments: the companies that respond differently based on name are disproportionately the ones using automated screening. Correlation isn't causation—but the pattern is hard to dismiss.

Other documented patterns: candidates over 45 who remove graduation years and immediately start getting callbacks. Candidates rejected by AI, then hired after the same company switches to human review. Deaf applicants who give up on video interviews after repeated rejections.

None of this is definitive proof. Any individual case could have an innocent explanation. But when you hear enough stories, the pattern gets harder to dismiss as coincidence. Survey data says 66% of US adults would avoid applying to jobs that use AI in hiring. Only 26% trust AI to evaluate them fairly.

There's something else going on here that I don't fully understand—and I've thought about it a lot. When a human rejects you, you can at least imagine appealing to them. Making a better case. Trying again. But when an algorithm says no, there's nothing to push back against. No face. No argument. Just silence. And the worst part: even if you demanded an explanation, the company using the tool probably couldn't give you one. They don't fully know either.

Several people described the same progression: after hundreds of rejections, they started to believe the problem was them. That they were unemployable. That something was fundamentally wrong with who they were. Only later—comparing notes with others, running their own name experiments—did they realize the system might be broken, not them.

So Why Am I Still Doing This

Reading back what I've written, I sound like someone who should quit. Sell the company, write a Medium post about my ethical awakening, move to Vermont. My co-founder has joked about this. I'm not sure he's always joking.

But here's the thing—and I've gone back and forth on whether to include this, because it sounds like cope—some of it actually works.

Skills-based hiring success stories exist. Career changers—nurses moving into healthcare tech, veterans transitioning to operations roles—who get flagged for transferable skills that traditional resume screening would miss. No CS degree, no tech experience on paper, but communication skills and pattern recognition that AI systems identify when human recruiters wouldn't have time to notice.

I don't know if these successes outweigh the failures. Probably not. But they're not nothing.

The skills-based hiring shift is real. Two-thirds of companies use it now to some degree. For people without elite credentials—which is most people—that matters. The efficiency gains are real too. Companies hire faster, which means candidates wait less, which means fewer people stuck in that awful limbo of not knowing.

I'm not trying to balance the ledger. I can't. One nurse getting a job she deserved doesn't cancel out the thousands the system wrongly filtered out. But I guess what I'm trying to say is—I don't actually think the technology is broken. I think we are. We moved too fast. We didn't ask hard enough questions. We took the money and figured we'd sort out the ethics later. And now "later" is here and we still haven't sorted it out.

What 2026 Looks Like From Here

Predictions are usually bullshit—I've been wrong about this industry more times than I can count—but here's what I think happens next.

The lawsuits decide everything. Mobley v. Workday probably doesn't go to trial until 2027, but discovery starts soon. When it does, we're going to learn things. Training data composition. Internal bias audits that never got published. The gap between what companies told customers and what they knew internally. I don't know if Workday specifically is hiding anything. I do know that the industry as a whole has a lot of closets with a lot of skeletons.

EU compliance deadline hits in August. That'll force some real changes—documentation, testing, deployment protocols—at least for companies that want European business. Whether it actually reduces bias or just creates more paperwork, I genuinely don't know. (My guess: mostly paperwork. But maybe I'm cynical.)

OpenAI changes the game. They have better models, more capital, and Walmart's 1.6 million employees as a testing ground. If they execute well, a lot of companies like mine become features rather than products. I've been telling my board we have eighteen months to find a defensible niche. That might be optimistic.

The bias problem doesn't get solved. It just gets managed differently. Better audits, better guardrails, better PR. But the fundamental issue—you can't train on biased history and expect unbiased outputs—doesn't have a technical fix. At some point we'll have to decide whether to keep pretending otherwise.

December 31st, 2:47 AM

I run an AI recruiting company. I've spent the last six hours writing about everything wrong with my industry while my Series A term sheet sits unsigned on my desk. The hypocrisy isn't lost on me.

We've made changes this year. Candidates can now request human review if they're rejected—most don't, but the option exists. We've expanded our bias audits beyond the metrics that are designed to be passed. We're building explainability into scoring, so recruiters can see which factors contributed and where human judgment should intervene.

Last month, a client wanted us to remove bias auditing from the contract. Seven figures in annual revenue. I said no. The conversation with my co-founder that followed was tense.

The question he asked still sits with me: "Do you actually want this company to succeed? Or are you looking for a way out that lets you feel good about yourself?"

We still haven't signed that client. We still need the money. And I still don't have a good answer to his question.

None of it is enough. We still use college prestige in the model because accuracy drops without it. We still pass bias audits that researchers say are meaningless. We still tell clients our system is "fair" when the honest answer is "fairer than humans, probably, but we're not sure."

Tomorrow someone will submit an application. Their resume will be parsed, tokenized, embedded in a vector space, compared against millions of data points, and scored. Under a second. Our system, or one like it.

If they're qualified, they might get a callback. If the algorithm learned the wrong patterns from biased history, they might not. They'll never know why, because we don't fully know why.

I keep going back to the job seeker accounts I read. Experienced professionals. Hundreds of applications. The system filtering them out before anyone saw their names.

I keep wanting to end this piece with something hopeful, something about how one person at a time we can make it better, but that feels like bullshit even as I type it. The truth is I don't know if anything we're doing changes the fundamental dynamics. Probably it doesn't. Probably the job seekers who write me get jobs through other channels and my company becomes a footnote, or they don't and we remain part of the problem.

But I'm going to keep building anyway. Because doing nothing feels worse. And because it's 3 AM now, and I've been writing this for seven hours, and I need to believe that something—anything—matters beyond the numbers we put in the pitch decks.

Ask me in a year whether that was naive.