Three weeks ago, our Series A lead called to say the term sheet was ready. Eight million dollars. I was sitting in my home office at 2 AM, scrolling through a spreadsheet of rejected job applications—not ours, but from someone who'd emailed me after reading one of my earlier pieces on hiring bias.
Marcus. Fifty-three years old. Former VP of Engineering. Laid off in March. Four hundred and twelve applications since then. Eleven callbacks. One offer—a contract role paying a third of his previous salary.
"I started tracking which companies use AI screening," he wrote. "The ones that do? Zero callbacks. The ones that don't? About a 15% response rate."
I don't know if he's right. I don't know if our system is filtering out older workers. I've spent three years building this technology, and I still can't answer basic questions about whether it makes hiring fairer or just faster.
So here I am, on the last day of a year that reshaped everything about how humans find work, trying to make sense of what happened. This isn't the triumphant year-in-review my investors want me to write. It's also not the doom-and-gloom piece that would get me quoted in congressional hearings. It's something messier than either—an attempt to understand a year when AI recruiting grew up, screwed up, got sued, got regulated, got richer, and still left me wondering whether we're building something good.
The Numbers, and What They Hide
I've given this pitch maybe two hundred times. Global AI recruitment market: $660 million in 2025, up from $617 million the year before. Projected to hit $1.12 billion by 2030. Adoption at 87% of companies, basically 100% in the Fortune 500. Time-to-hire cut by 30-50%. PwC claims 340% ROI within eighteen months. Recruiters saving four hours per role.
I can recite these numbers in my sleep. I've put them on slides, in pitch decks, in investor updates. They're true. They're also deeply incomplete, and I'm only admitting that now, on the last night of the year.
Those are just the dedicated platforms. They don't count the shadow deployments. The LLMs that engineering teams have quietly plugged into screening workflows without telling HR. The hiring managers using ChatGPT to filter resumes before anyone else sees them. One of our clients discovered their IT department had been running applicants through Claude for six months. Nobody had approved it. Nobody had audited it. It just... happened.
So when industry reports say 40% of applications get screened by AI before human review, I don't believe it. That's the disclosed rate. The real number at large employers is probably 90%. Maybe higher. And the gap between what we disclose and what we actually do? That's the gap where the problems live.
The Year As I Lived It
February 2nd. I was on a flight to Amsterdam when the EU AI Act went live. My phone blew up the moment we landed. Our European clients wanted to know if we did "emotion recognition." I had to google what that meant exactly. Turns out analyzing facial microexpressions in video interviews was now illegal. We didn't do that—but I realized I wasn't entirely sure what our video analysis module actually measured. I spent the taxi ride reading our own technical documentation.
March hit harder. I was mid-call with a prospect when I saw the notification—EEOC had finally filed charges against HireVue. Crystal's case. Deaf Indigenous woman, denied captioning for a video interview, denied accommodation when she asked. HireVue's response was the usual: "entirely without merit." I muted myself and read the filing. Crystal still doesn't have a job. I thought about our own accessibility features. At that point, we had exactly two: screen reader compatibility and adjustable font sizes. I wrote a note to our product lead. It's still in my drafts.
May changed everything. Mobley v. Workday got preliminary certification as a nationwide collective action, the age-discrimination law's version of a class action. The potential class: everyone over 40 rejected by Workday's screening since 2017. I pulled the filing. Workday's own disclosure mentioned "1.1 billion applications were rejected" using its software. More than a billion applications. I called my lawyer that afternoon. "Should we be worried?" He said he'd get back to me. That was seven months ago.
July made it worse. Judge Rita Lin expanded the Workday case to include their HiredScore AI. Her reasoning kept me up that night: "Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era." Software as a decisionmaker. That's what we sell.
September brought the big announcements. LinkedIn's Hiring Assistant going global. OpenAI entering the jobs market. I remember sitting in my car after reading the OpenAI news, not driving anywhere, just thinking. We'd spent three years building this. Now the company with the best AI on the planet was coming for our market.
October: Mercor raised $350 million at a $10 billion valuation. Three kids younger than my junior developers became billionaires. My co-founder texted me: "We picked the wrong business model." I didn't respond.
November brought research I wish I hadn't read. University of Washington showed that when humans work alongside biased AI, they don't correct the bias—they absorb it. The AI infects human judgment. Awareness training reduced biased decisions by 13%. Thirteen percent. We'd just spent $40,000 on bias training for client recruiters.
December: the audit dropped. New York City's Local Law 144, the supposed model for AI hiring regulation. Two years of enforcement. Two complaints received. When auditors tested the same companies the city had examined, they found 17 instances of potential non-compliance. The city had found one. The enforcement agency's complaint hotline misdirected calls 75% of the time. This was the regulation we were supposed to be worried about.
The Recruiter Who Lost Her Job to Her Own Tool
I met Sarah at a conference in Austin in June. She'd been a senior recruiter at a mid-sized tech company—the kind with a few thousand employees, enough to be serious about hiring but not enough for a dedicated AI team.
Her company had implemented an AI screening tool in early 2024. She'd been the internal champion. Helped select the vendor. Ran the pilot. Wrote the training materials.
"By Q4, we were screening twice as many applicants with half the team," she told me. We were at a hotel bar. She was on her third glass of wine. "I got promoted. Got a raise. Wrote a case study for the vendor."
In February 2025, her company laid off the recruiting team. Not downsized—eliminated. Two people remained: a coordinator to handle logistics and a contractor to "supervise" the AI.
"The tool I championed? It made me redundant."
I asked if she blamed the technology.
"I blame myself for not seeing it. But also—" she paused, swirled her wine "—the tool is fine for volume hiring. Production roles, sales, support. But the senior positions? The ones where fit matters, where the resume doesn't tell the whole story? They're making terrible hires now. The AI optimizes for pattern matching. It can't see potential. It can't take a chance on someone unconventional."
Sarah is job hunting now. She's been rejected by AI screening tools she helped select at three different companies.
"I know exactly what they're looking for," she said. "I helped design the criteria. And I still can't get past the filter. How's someone without my background supposed to have a chance?"
I emailed Sarah last week to ask if I could include her story. She'd found something—a consulting gig, helping companies implement AI recruiting tools. "I'm teaching them how to use the thing that replaced me," she wrote back. "The irony isn't lost on me. But at least now I'm in the room when they talk about what to automate next." She asked me not to use her real name. I'm not using the company's name either. They're a client.
The LinkedIn Bet
When LinkedIn announced Hiring Assistant was going global in September, I called my contact at Microsoft to get the inside story. She couldn't say much on record, but the broad strokes were clear: this wasn't just a feature release. It was a strategic repositioning.
LinkedIn had watched startups chip away at its recruiting business for years. Sourcing tools. Assessment platforms. Scheduling bots. Interview automation. Each one took a slice of what LinkedIn used to own.
Hiring Assistant was meant to be the response. Not just a tool but an agent—something that doesn't just assist recruiters but acts on their behalf. The "plan-and-execute" architecture, the "cognitive memory" that learns recruiter preferences, the autonomous outreach capabilities. This was LinkedIn saying: whatever AI can do for recruiting, we'll do it in-house.
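Strip away the branding and "plan-and-execute" is a familiar pattern: a planner breaks the recruiter's goal into steps, an executor works through them, and a memory store accumulates what the agent learns about that recruiter's preferences along the way. Here is a minimal sketch of that general pattern, just to make the term concrete. It's my illustration, not LinkedIn's code; every name in it is invented, and a real system would put model calls where this sketch hard-codes the plan.

```python
from dataclasses import dataclass, field

@dataclass
class RecruiterMemory:
    """Toy stand-in for 'cognitive memory': preferences learned per recruiter."""
    preferences: dict = field(default_factory=dict)

    def update(self, key: str, value: str) -> None:
        self.preferences[key] = value

def plan(goal: str, memory: RecruiterMemory) -> list[str]:
    # A production agent would ask an LLM to produce this plan.
    return [
        f"draft a search query for: {goal}",
        "rank candidates against remembered preferences",
        "draft outreach messages for the top candidates",
    ]

def execute(step: str, memory: RecruiterMemory) -> str:
    # A production agent would call models and tools here; this sketch just
    # echoes the step and records the outcome so later plans can use it.
    result = f"completed: {step}"
    memory.update(step, result)
    return result

def run_agent(goal: str) -> list[str]:
    memory = RecruiterMemory()
    return [execute(step, memory) for step in plan(goal, memory)]

if __name__ == "__main__":
    for line in run_agent("senior backend engineer, fintech, remote"):
        print(line)
```

The loop itself is the simple part; everything interesting lives in what the executor is allowed to do on its own.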
The early numbers are impressive. Recruiters claim to save four hours per role. InMail acceptance rates are up 69%. Some users report doubling their capacity; an Equinix recruiter supposedly doubled the number of roles she could handle.
But here's what the PR materials don't mention: LinkedIn's business model depends on recruiters buying subscriptions. If Hiring Assistant is so efficient that companies need fewer recruiters, what happens to subscription revenue? And if the AI is doing the outreach, what stops candidates from just talking to the AI instead of paying for premium accounts?
I asked a LinkedIn product manager about this at the September launch event. He gave me a carefully rehearsed answer about "expanding the market" and "raising the floor." Translation: they're betting that AI will increase total hiring activity enough to offset reduced per-recruiter spending.
Maybe they're right. Or maybe they're cannibalizing their own business while hoping the growth numbers hold. Either way, it's a bet I'm glad I don't have to make.
(Sidebar: After the launch event, I ran into a LinkedIn engineer at the hotel bar. Three drinks in, he admitted they'd been scrambling for months. "The AI works," he said. "But nobody knows what happens to the business model." He asked me not to quote him. I'm not using his name.)
The OpenAI Wildcard
Then there's OpenAI.
When Fidji Simo announced the jobs platform in September, my first reaction was: finally. My second was: oh shit.
Finally, because someone with real AI chops was taking a serious run at the space. OpenAI could build things that recruiting vendors, including mine, couldn't match. Real reasoning capabilities. Genuine natural language understanding. The ability to evaluate candidates based on demonstrated competencies rather than keyword matching.
Oh shit, because OpenAI has the best AI on the planet and unlimited capital. We'd built our entire company on the assumption that domain expertise mattered more than model capability. That assumption was about to get tested.
The partnerships they announced with Walmart and BCG signal serious intent. Walmart alone employs about 1.6 million people in the US. If OpenAI becomes the platform through which Walmart hires, trains, and promotes workers, that's a massive installed base before they've even launched.
Here's what keeps me thinking about the OpenAI announcement: they're planning to certify 10 million Americans in AI skills by 2030. If they succeed, they'll have credentialing data that no one else has. They'll know not just what people claim on resumes, but what they actually demonstrated in structured assessments.
Combine that with their jobs platform, and you have a closed loop where OpenAI certifies skills, matches candidates to jobs, and learns from hiring outcomes to improve both sides of the market.
I don't know where I was going with this section. I started it wanting to make a point about competition, and I've ended up just... scared, I guess. We spent three years building something, and there's a real chance it becomes irrelevant before we've figured out whether it's good or bad. That's a weird thing to process. I'm still processing it.
Mercor and the Money Question
In October, three guys who founded Mercor became billionaires. I looked up their ages. Younger than some of the interns at companies I've worked for. Younger than the analyst who does our financial projections. I'm not saying this to be bitter—okay, maybe a little bitter—but it tells you something about where the money is flowing.
Brendan Foody, Adarsh Hiremath, and Surya Midha started out building an AI that assessed candidates by analyzing interview transcripts. Standard stuff. Then they pivoted. Instead of matching job seekers with employers, they started hiring people to train AI models.
Think about that for a second. Their AI recruiting platform became a platform for recruiting humans to improve AI. They built an algorithm to match workers with jobs, and the job turned out to be teaching other algorithms to do jobs.
The $350 million raise at a $10 billion valuation tells you something about investor sentiment. In a year when overall VC funding was constrained, AI companies captured nearly half of global startup investment. $202 billion flowed into AI in 2025, 75% more than in 2024.
Some of that money is chasing real value. Some of it is chasing hype. The Mercor valuation could be justified if they become the dominant labor marketplace for AI development work. It could also be a peak-of-the-hype indicator.
I don't know which. I'm not sure anyone does.
What I do know is that the money changes the game. Well-funded AI recruiting startups can acquire competitors, hire better engineers, take more risks. They can afford to operate at a loss while building market share. Smaller players—like mine—face a choice: find a niche, find an acquirer, or find a new business.
What Nobody Talks About
Here's something you won't hear at industry conferences. During a due diligence call for our Series A, an investor asked me point-blank: "If your system is as good at predicting job performance as you claim, why aren't you using it to hire your own team?"
I didn't have a good answer. We don't use our own product internally. Most AI recruiting companies don't. We tell clients our algorithms can identify top talent, but when we hire engineers and salespeople, we do it the old-fashioned way—referrals, interviews, gut feel. The cobbler's children go barefoot.
While I'm confessing: here's the pitch I give to enterprise clients. I've given it maybe fifty times this year. "Our AI reduces bias by removing subjective human judgment from initial screening. Every candidate gets evaluated on the same criteria. No resume gets overlooked because a recruiter was tired or distracted or unconsciously influenced by a name or a photo."
That pitch is not a lie, exactly. But it leaves out the part where our AI learned its criteria from historical hiring decisions made by those same biased humans. It leaves out the part where we can't fully explain why the model weights certain factors. It leaves out the proxy problem our senior ML engineer keeps raising; I'll get to her. I've closed deals worth hundreds of thousands of dollars by telling half the story. I don't know how to feel about that. I definitely know how I should feel about it.
And everything I've described so far is the US and European picture. I don't have reliable data on what's happening in Asia. A contact at a Singapore-based recruiting firm told me ByteDance's internal hiring AI is years ahead of anything in the West: faster, more integrated, trained on datasets we can't access. Alibaba supposedly screens a million applications monthly. But nobody publishes research, nobody gets sued, and nobody knows what biases might be baked in. The EU can regulate emotion recognition all they want. It doesn't touch Shenzhen.
Then there's the question nobody wants to ask: what happens when candidates start gaming the system as effectively as the system games them? I've seen resumes that are clearly optimized for AI parsing—keyword-stuffed, formatted for machine reading, stripped of anything that might trigger a bias flag. Some candidates are using ChatGPT to rewrite their applications for each job. The arms race is already underway. At some point, we'll have AI-written resumes being screened by AI recruiters, with humans nowhere in the loop. I don't know what that world looks like, but we're heading there faster than anyone wants to admit.
The Lawsuit That Changes Everything
I've read the Mobley v. Workday filings three times. They keep me up at night.
Derek Mobley is the named plaintiff. African-American. Over 40. Has a disability. Applied to more than 100 jobs at companies using Workday between 2017 and 2023. Rejected every time. Not a single interview.
The core allegation is simple: Workday's AI discriminates. Not intentionally, but effectively. The algorithm was trained on historical hiring data that reflects historical biases, and it perpetuates those biases at scale.
What makes this case different from previous AI bias complaints is the scope. Workday processes applications for thousands of companies. In their own filings, they mention 1.1 billion rejected applications during the relevant period. A billion.
If the plaintiffs win, we'll learn things about AI hiring that the industry has tried to keep quiet. Training data composition. Validation methodologies. Internal bias audits. The gap between marketing claims and operational reality.
Workday denies everything. They call the rulings "preliminary, procedural." They say the claims "rely on allegations, not evidence."
But here's what I can't shake: I run a company that does something similar to what Workday does. I've never been entirely sure our system is fair either. We do bias audits. We test for demographic parity. We follow best practices. But the research keeps showing that bias finds proxies. Zip codes. College names. Language patterns. The algorithm learns to discriminate even when you explicitly tell it not to.
If Workday loses, every AI recruiting company will need to answer the same questions. Including mine.
The Bias Problem We Haven't Solved
In March, we got the bias audit results back. Green across the board. Demographic parity on race, gender, age. I forwarded the report to the board with a note about our commitment to responsible AI.
Lin caught me in the hallway afterward. She's our senior ML engineer—the one who actually builds these systems while I talk about them at conferences.
"Can we talk about the audit?"
I was already late for a sales call. "What about it? We passed."
"That's the problem." She didn't move. "Those metrics are designed to let you pass. They measure whether protected groups advance at equal rates. Not whether they should advance at equal rates."
I stopped walking.
"Think about who applies. Black candidates, women in tech—they've been screened out so many times that many don't bother applying unless they're overqualified. So the ones in our pool are, on average, probably stronger than the white male applicants who apply casually to everything. Equal advancement rates might actually mean we're setting a higher bar for them."
I didn't have a response to that.
"And there's another thing." She pulled out her laptop, right there in the hallway. "College prestige. We removed race and gender from training. But look at this correlation." She turned the screen toward me. "The model learned to use school tier as a proxy. For what? Quality? Socioeconomic status? Race? We literally can't tell."
"Can we remove college?"
"I tried. Accuracy drops 8%. I brought it to product last month. They killed it."
"Why didn't I hear about this?"
She closed the laptop. "You were busy with the Series A. And honestly? Nobody wanted to have this conversation. We're selling 'fair hiring.' Admitting the model might be discriminating by proxy—that's not a conversation anyone wants before a funding round."
I missed my sales call. I sat in my office for an hour reading the research Lin had cited. University of Washington, late 2024: leading AI models preferred white-associated names 85% of the time when evaluating identical resumes. Black male names were preferred exactly zero percent of the time across thousands of comparisons. Zero.
This year's follow-up was worse. When humans work alongside biased AI, they don't correct it—they adopt the bias. The AI doesn't just make bad decisions. It infects the humans trying to supervise it.
We still haven't removed college from the model. The sales team still says an 8% drop in accuracy is unacceptable. Lin is still at the company, as far as I know still frustrated. And we still send the audit reports to the board.
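For what it's worth, the checks Lin was describing are not exotic math. Selection-rate parity, the four-fifths rule most audits lean on, and a crude proxy screen are each a few lines of pandas. Here is a sketch of both, with invented column names and nothing from our actual pipeline; passing the first check while failing the second is exactly the situation she was pointing at.

```python
import pandas as pd

def selection_rate_parity(df: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Each group's pass rate divided by the highest group's pass rate.

    The common audit threshold (the 'four-fifths rule') flags ratios below 0.8.
    This is the metric that can come back green even when the bar differs by group.
    """
    rates = df.groupby(group_col)[passed_col].mean()
    return rates / rates.max()

def proxy_correlation(df: pd.DataFrame, feature_col: str, protected_col: str) -> float:
    """Crude proxy screen: Cramér's V between a model feature and a protected attribute.

    A high value means the feature (say, school tier) can stand in for the
    attribute you removed from training (say, race), even if you never tell it to.
    """
    table = pd.crosstab(df[feature_col], df[protected_col]).to_numpy().astype(float)
    n = table.sum()
    expected = table.sum(axis=1, keepdims=True) @ table.sum(axis=0, keepdims=True) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical usage; the file and column names are invented:
# outcomes = pd.read_csv("screening_outcomes.csv")
# print(selection_rate_parity(outcomes, "race", "advanced"))
# print(proxy_correlation(outcomes, "school_tier", "race"))
```

None of this answers her deeper point about who applies in the first place; it just makes the narrower failure visible.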
I thought about Marcus again that night. Pulled up his spreadsheet. Four hundred and twelve applications. I started mapping which companies used which ATS systems, which ones we knew used AI screening. The correlation wasn't perfect—it never is—but it was there. The companies most likely to have rejected him were the ones most likely to be using tools like ours.
I wrote him back. Asked if he'd be willing to share more details about his applications. He sent me a follow-up email the next morning, 6 AM his time.
"I've started keeping a journal," he wrote. "Every rejection, I write down what I'm feeling. It helps me remember that I'm not crazy. That I used to run engineering teams of forty people. That I shipped products that millions of people used."
Then this, which I've read maybe twenty times:
"My wife thinks I'm depressed. She's probably right. Last week I caught myself lying about having interviews so she wouldn't worry. I've been on LinkedIn since 6 AM every day, not because I think it helps, but because I don't know what else to do. My daughter asked me last month when I was going back to work. She's eleven. I told her soon. I don't know if that's true anymore.
"The worst part isn't the rejection. It's not knowing why. I've asked a few companies for feedback. They say they can't share that information. One recruiter told me, off the record, that she doesn't even see applications until after the AI has screened them. She literally doesn't know why I was rejected. Nobody does. Your machine made a decision about my life, and nobody can explain it."
He ended with: "I'm not trying to blame you personally. I know you're just trying to run a business. But someone should know what this feels like on the other end."
The Regulation Gap
You'd think, with all these problems, that regulators would be stepping in. You'd be wrong.
The EU AI Act is real. As of February 2025, emotion recognition in job interviews is banned in Europe. Full compliance for high-risk AI (including recruiting tools) is required by August 2026. Fines can hit 35 million euros or 7% of global turnover.
But here's the thing about EU regulations: they force compliance from big companies operating in Europe. They don't touch the thousands of smaller companies using AI to screen applications in markets where nobody's watching.
And in the US? The regulation is a joke.
New York City's Local Law 144 was supposed to be the model. Independent bias audits. Annual publication. Notice to candidates before AI evaluation.
Two years of enforcement. Two complaints. The state comptroller's December audit found the enforcement agency's process "ineffective." When auditors reviewed the same companies the city had examined, they found 17 instances of potential non-compliance versus the city's finding of one. Test calls to the complaint hotline were misdirected 75% of the time.
The message from NYC: we made a law. We don't enforce it. Do whatever you want.
Illinois and Colorado have laws coming. But Colorado's has already been delayed, extended, and challenged. A December executive order created a DOJ task force specifically to undermine state AI regulations. Let that sink in. The federal government isn't just failing to regulate; it's actively obstructing states that try.
I'm not asking for heavy-handed regulation. I run a company that would be regulated. I have every incentive to want a light touch. But I'm also watching this industry get away with things that would be illegal if humans did them, and it's hard to see how that ends well for anyone. Right now, companies can discriminate at scale and face essentially no consequences. The only real check is the courts, and lawsuits take years. By the time Mobley v. Workday gets resolved, another billion applications will have been screened.
What Candidates Actually Experience
The industry talks about AI recruiting in terms of efficiency, cost savings, time-to-hire. We rarely talk about what it's like on the other side of the screen.
A software engineer named Jamal emailed me in August. He'd been job hunting for five months. Eighty-seven applications, three callbacks. He'd started noticing a pattern in which companies responded and which didn't.
"I did an experiment," he wrote. "Same resume. Same cover letter. But I used 'J. Williams' instead of 'Jamal Williams.' Applied to thirty companies."
The response rate tripled.
"I can't prove it was the AI. Maybe all thirty companies just happened to have racist human recruiters. But the companies that responded to J. Williams and not Jamal? Every single one uses automated screening. I checked."
He asked me what to do. I didn't have an answer. I still don't.
Other emails told variations of the same story. A 47-year-old project manager who removed her graduation year and immediately started getting callbacks. A woman rejected by automated screening, then hired after the same company switched to human review. A deaf applicant who gave up on video interviews after seventeen rejections.
None of this is proof. Any individual case could have an innocent explanation. But when you hear enough stories, the pattern gets harder to dismiss as coincidence. Survey data says 66% of US adults would avoid applying to jobs that use AI in hiring. Only 26% trust AI to evaluate them fairly.
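Part of why I say it isn't proof: the arithmetic on any single experiment is brutal. Thirty applications per name is nowhere near enough to separate bias from luck, and a quick Fisher exact test shows it. The counts below are invented for illustration, not Jamal's actual numbers.

```python
from math import comb

def fisher_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def pmf(k: int) -> float:
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = pmf(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# Invented counts: 2 callbacks from 30 applications as "Jamal Williams",
# 6 callbacks from 30 as "J. Williams". The response rate tripled, and yet:
print(f"p-value: {fisher_two_sided(2, 28, 6, 24):.2f}")  # comes out around 0.25
# One person's experiment can't settle it either way. The pattern only means
# something in aggregate, which is also why each individual story is so easy to dismiss.
```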
There's something else going on here that I don't fully understand—and I've thought about it a lot. When a human rejects you, you can at least imagine appealing to them. Making a better case. Trying again. But when an algorithm says no, there's nothing to push back against. No face. No argument. Just silence. And the worst part: even if you demanded an explanation, the company using the tool probably couldn't give you one. They don't fully know either.
Several people described the same progression: after hundreds of rejections, they started to believe the problem was them. That they were unemployable. That something was fundamentally wrong with who they were. Only later—comparing notes with others, running their own experiments like Jamal—did they realize the system might be broken, not them.
So Why Am I Still Doing This
Reading back what I've written, I sound like someone who should quit. Sell the company, write a Medium post about my ethical awakening, move to Vermont. My co-founder has joked about this. I'm not sure he's always joking.
But here's the thing—and I've gone back and forth on whether to include this, because it sounds like cope—some of it actually works.
Last month a woman emailed me. She'd been a nurse for fifteen years, wanted to transition into healthcare tech. No CS degree, no tech experience on paper. Under the old system—resume screening by overworked recruiters—she'd have been filtered out immediately. Our system flagged her communication skills, her pattern recognition from patient triage, her ability to learn complex protocols. She got an interview. She got the job. She starts in January.
I don't know if that one hire outweighs the Marcus situation. Probably not. But it's not nothing.
The skills-based hiring shift is real. Two-thirds of companies use it now to some degree. For people without elite credentials—which is most people—that matters. The efficiency gains are real too. Companies hire faster, which means candidates wait less, which means fewer people stuck in that awful limbo of not knowing.
I'm not trying to balance the ledger. I can't. One nurse getting a job she deserved doesn't cancel out the Marcuses. But I guess what I'm trying to say is—I don't actually think the technology is broken. I think we are. We moved too fast. We didn't ask hard enough questions. We took the money and figured we'd sort out the ethics later. And now "later" is here and we still haven't sorted it out.
What 2026 Looks Like From Here
Predictions are usually bullshit—I've been wrong about this industry more times than I can count—but here's what I think happens next.
The lawsuits decide everything. Mobley v. Workday probably doesn't go to trial until 2027, but discovery starts soon. When it does, we're going to learn things. Training data composition. Internal bias audits that never got published. The gap between what companies told customers and what they knew internally. I don't know if Workday specifically is hiding anything. I do know that the industry as a whole has a lot of closets with a lot of skeletons.
EU compliance deadline hits in August. That'll force some real changes—documentation, testing, deployment protocols—at least for companies that want European business. Whether it actually reduces bias or just creates more paperwork, I genuinely don't know. (My guess: mostly paperwork. But maybe I'm cynical.)
OpenAI changes the game. They have better models, more capital, and Walmart's 1.6 million employees as a testing ground. If they execute well, a lot of companies like mine become features rather than products. I've been telling my board we have eighteen months to find a defensible niche. That might be optimistic.
The bias problem doesn't get solved. It just gets managed differently. Better audits, better guardrails, better PR. But the fundamental issue—you can't train on biased history and expect unbiased outputs—doesn't have a technical fix. At some point we'll have to decide whether to keep pretending otherwise.
December 31st, 11:47 PM
I run an AI recruiting company. I've spent the last four hours writing about everything wrong with my industry while my Series A term sheet sits unsigned on my desk. The hypocrisy isn't lost on me.
We've made changes this year. Candidates can now request human review if they're rejected—most don't, but the option exists. We've expanded our bias audits beyond the metrics that are designed to be passed. We're building explainability into scoring, so recruiters can see which factors contributed and where human judgment should intervene.
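Since "explainability" is one of those words that can mean anything, here is the modest version I'm actually talking about: show the recruiter how much each factor contributed to a candidate's score. A toy sketch with invented features and weights; a real model needs something like SHAP values rather than raw linear weights, but the product idea is the same.

```python
import numpy as np

# Invented feature names and weights, purely for illustration.
FEATURES = ["years_experience", "skills_match", "assessment_score", "school_tier"]
WEIGHTS = np.array([0.15, 0.45, 0.30, 0.10])

def score_with_explanation(candidate: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return the overall score plus each feature's contribution, largest first."""
    x = np.array([candidate[f] for f in FEATURES], dtype=float)
    contributions = WEIGHTS * x
    ranked = sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))
    return float(contributions.sum()), ranked

score, why = score_with_explanation(
    {"years_experience": 0.6, "skills_match": 0.9, "assessment_score": 0.8, "school_tier": 0.2}
)
print(f"score: {score:.2f}")
for name, value in why:
    print(f"  {name}: {value:+.2f}")
# The point of showing this to a recruiter: if school_tier is doing most of the
# work for a given candidate, that's the cue for a human to step in.
```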
Last month, a client wanted us to remove bias auditing from the contract. Seven figures in annual revenue. I said no. My co-founder—I'm going to call him David, which isn't his name—called me that night.
We don't yell at each other. We're not that kind of partnership. But this was close.
"You understand we have eighteen months of runway, right?"
"I understand."
"And you just walked away from seven figures because of—what? A blog post you want to write someday? Your conscience?"
I didn't say anything.
"Gene, I need to ask you something, and I need you to be honest." His voice got quieter, which was worse than the yelling. "Do you actually want this company to succeed? Or are you looking for a way out that lets you feel good about yourself?"
I hung up. First time I've ever done that to him. We didn't talk for three days. When we finally did, we pretended the conversation hadn't happened. We still haven't signed that client. We still need the money. And I still don't have a good answer to his question.
None of it is enough. We still use college prestige in the model because accuracy drops without it. We still pass bias audits that Lin says are meaningless. We still tell clients our system is "fair" when the honest answer is "fairer than humans, probably, but we're not sure."
Tomorrow someone will submit an application. Their resume will be parsed, tokenized, embedded in a vector space, compared against millions of data points, and scored. Under a second. Our system, or one like it.
If they're qualified, they might get a callback. If the algorithm learned the wrong patterns from biased history, they might not. They'll never know why, because we don't fully know why.
I keep going back to Marcus's email. Fifty-three years old. Four hundred and twelve applications. The system filtered him out before anyone saw his name.
I'm not going to pretend calling him tomorrow will fix anything structural. I can't undo the year. I can't make the industry suddenly accountable. I can't even promise our system didn't reject him—I'd have to check, and I'm not sure I want to know.
But I can pick up the phone. I can look at his resume myself. I can—
I don't know. I keep wanting to end this piece with something hopeful, something about how one person at a time we can make it better, but that feels like bullshit even as I type it. The truth is I don't know if calling Marcus changes anything. Probably it doesn't. Probably he gets a job through some other channel and my call becomes a footnote, or he doesn't and my call becomes a hollow gesture from someone who's part of the problem.
But I'm going to call him anyway. Because doing nothing feels worse. And because it's 3 AM now, and I've been writing this for seven hours, and I need to believe that something—anything—matters beyond the numbers we put in the pitch decks.
Ask me in a year whether that was naive.