The rejection email arrived at 6:47 AM. DeShawn Williams was still in bed, phone in hand, doing what he'd trained himself not to do—checking for responses before his first coffee. Bad habit. He knew better.

"We have reviewed your application and have decided to move forward with other candidates whose qualifications more closely match our current needs."

Seven hours. He'd submitted at 11:52 PM the night before, after rewriting his cover letter for the third time, after his wife had gone to sleep, after convincing himself that this one felt different. Seven hours was all it took for the system to evaluate fifteen years building payment infrastructure at companies you've heard of. Fifteen years. Seven hours. Rejection.

He lay there in the dark, the blue glow of the screen on his face, thinking about what his father used to say: Work twice as hard. He'd done that. He'd done more than that. And still.

What happened next—I've thought about this a lot, whether he was being paranoid or scientific—was that he submitted the exact same resume to a different opening at the same company. Same qualifications. Same projects. Same everything. Except he changed "DeShawn Williams" to "Derek Williams." Removed Howard University. Replaced it with "State University." Stripped out the National Society of Black Engineers.

The interview request came forty-one hours later.

When Williams told me this story, we were sitting in a Waffle House off I-85 outside Atlanta. His choice, not mine. He'd suggested it with a half-smile: "You want authentic? This is authentic." The fluorescent lights were harsh. A waitress refilled our coffees without asking. Williams stirred his with a spoon that looked like it had survived the Carter administration.

"The thing is," he said, and then stopped. Stirred. Started again. "The thing is, I already knew. Before I ran the experiment. I just..." He trailed off. "I needed to see it. To have something I could point to that wasn't just a feeling."

He's 43. Computer science degree from Howard. Senior architect roles at two companies whose stock prices you probably track. He's built systems that move billions. And he'd just proved, using his own career as the test case, what researchers have been documenting for years: the AI systems that screen job applications in America are systematically discriminating against Black candidates.

I should tell you something before we go further. I run an AI-powered recruitment platform. I sell the exact kind of technology this article is going to criticize. My company's revenue depends on employers believing AI makes hiring better.

So why am I writing this?

Here's the honest answer: I don't know if I should be. Three weeks ago, we got a term sheet from a Series A investor. $8 million. Life-changing money. The kind that turns a struggling startup into a real company. And two days ago, I got an email from our biggest potential client—a Fortune 100 retailer—asking us to remove bias auditing from our contract because, quote, "it creates unnecessary legal exposure."

I have to respond by Monday.

So I'm writing this instead. Maybe as a way of figuring out what to do. Maybe as penance for what I've already done. Maybe because I can't stop thinking about that Waffle House conversation, about the look on Williams's face when he said "I already knew."

I've sat in sales calls where clients asked about bias and I gave answers that were—let me be honest—bullshit. Carefully worded bullshit, the kind that's technically defensible but not actually true. I'll get to that.

I don't have this figured out. I'm going to tell you what I know, what I've seen, and what keeps me awake. You can judge for yourself what to do with it.

The Research Nobody Wanted to Talk About

In October 2024, researchers at the University of Washington published a study. It should have been front-page news. It wasn't. A few tech publications ran stories. HR blogs mentioned it. LinkedIn had a discourse cycle that lasted about 72 hours. Then everyone moved on.

I didn't move on. I couldn't.

They tested how three state-of-the-art large language models—the same class of systems increasingly used to screen resumes—ranked identical candidates with names associated with different racial and gender groups. Same resume. Same experience. Same education. Different name.

AI systems preferred white-associated names 85% of the time.

Black-associated names? Nine percent.

And the intersectional findings—this is the part I keep coming back to, the part I showed Williams at that Waffle House, watching his face as the numbers sank in—when comparing white male names to Black male names, the AI systems preferred the Black male name exactly zero percent of the time.

In thousands of comparisons. Zero.

"Zero." Williams repeated it back to me. Set down his coffee. "You know what that means? It means if I'm competing against a white guy with the exact same resume, I lose. Every time. Doesn't matter what I've done. Doesn't matter how good I am. I lose every time."

I want to make sure that lands. An identical resume. Identical qualifications. The only difference is whether the name at the top sounds Black or white. And the machines we've built to be "objective" reject the Black candidate every single time.
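If you're wondering what an experiment like Williams's looks like when you mechanize it, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the names, the resume template, and especially the scoring function, which is a placeholder for whatever screening system you would actually be auditing.

```python
# Minimal sketch of a paired name-swap audit. All names, resume text, and
# the scorer below are invented stand-ins, not any vendor's real system.
import random

WHITE_ASSOC = ["Derek Williams", "Greg Walsh", "Emily Baker"]
BLACK_ASSOC = ["DeShawn Williams", "Lakisha Washington", "Darnell Jackson"]

TEMPLATE = (
    "{name}\n"
    "Senior architect, 15 years building payment infrastructure.\n"
    "B.S. Computer Science. Led teams of 12; systems moving $1B+ per day.\n"
)

def score_resume(text: str) -> float:
    """Placeholder for the model under audit. Replace with a real call."""
    return random.random()  # stand-in score in [0, 1]

def selection_rate(names, threshold: float = 0.5) -> float:
    """Fraction of otherwise-identical resumes that clear the screen."""
    passed = sum(score_resume(TEMPLATE.format(name=n)) >= threshold for n in names)
    return passed / len(names)

if __name__ == "__main__":
    random.seed(0)  # reproducible placeholder scores
    print(f"white-associated names: {selection_rate(WHITE_ASSOC):.2f}")
    print(f"Black-associated names: {selection_rate(BLACK_ASSOC):.2f}")
    # The audit question: with qualifications held fixed, does the selection
    # rate move when only the name at the top changes?
```

The point isn't the code. The point is that the test is cheap to run, which is exactly why the results are so hard to explain away.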

99% of Fortune 500 companies use some form of automation in hiring. Which means most job applications in America are filtered by software before a human ever reads them, and increasingly that software belongs to the same class of systems with documented, severe racial bias.

I showed this study to my co-founder when it came out. His response: "Well, our system is different."

I didn't argue. I should have. I wanted to believe him. Still do, some days.

The Argument I Keep Having With Myself

Here's the question I keep coming back to, sometimes changing my answer within the same hour: is algorithmic bias actually worse than human bias?

Humans have always discriminated. A racist hiring manager might reject a few hundred candidates over a career. At least with AI, you can audit it. You can measure it. You can—theoretically—fix it. That's the pro-AI argument, and it's not crazy.

My friend Marcus (not his real name, he's a VP of Engineering at a company that competes with mine) makes this case aggressively. We were at a conference bar in Vegas last March, and I mentioned I was thinking about writing something like this article. He nearly spit out his drink.

"You're going to torpedo your own industry because some academic found bias in a lab experiment?" He was genuinely angry. "You know what's biased? Karen from HR who went to Michigan and only hires people who went to Michigan. At least an algorithm doesn't care where you went to college."

I pointed out that algorithms absolutely do care where you went to college, because they're trained on data from people like Karen.

"Then we fix the training data. We iterate. That's what engineering is." He ordered another bourbon. "You're telling me the solution to bias is... going back to gut feelings and networking? The system that gave us country clubs and old boys' networks?"

He's not wrong. That's what makes this hard. The alternative to AI hiring isn't some bias-free utopia. It's the system we had before, which was also deeply biased, just in ways that were harder to measure.

But here's where I part ways with Marcus: scale. A biased hiring manager affects hundreds of people. A biased algorithm affects millions. And it does it while wearing the disguise of objectivity. People trust machines in ways they don't trust Karen from HR. That trust is a kind of violence when the machine is just Karen's prejudices encoded in math.

I told Williams about this argument. He laughed—the kind of laugh that has no humor in it.

"You're debating philosophy while people's lives are being ruined," he said. "Every day that algorithm runs, someone like me doesn't get an interview. Doesn't get a job. Can't pay their mortgage. Starts doubting themselves. And you're in Vegas arguing about whether it's worse than Karen from HR."

I didn't have a response to that.

What I've Seen From the Inside

Six months ago, we had a product meeting. Standard stuff—reviewing our screening algorithm's performance, talking about improvements. Our head of engineering pulled up a chart showing selection rates by demographic group. (We don't ask for demographic data directly, but you can infer a lot from names and zip codes, which is part of the problem.)

The numbers weren't good.

I remember the silence. The way people suddenly found their laptops very interesting. And then someone—I'm not going to say who—said: "Well, we're not asking for demographic data, so technically we're not discriminating."

Technically.

God, I hate that word. I've heard it a lot in this industry. We're technically compliant. We're technically not using protected characteristics. We're technically following the law.

But the algorithm was finding proxies. It always does. Zip codes. College names. Membership in certain professional organizations. The system had learned that candidates from HBCUs had lower "success rates" in our training data—because the companies whose historical hiring data we trained on had hired and advanced fewer HBCU graduates, so the very outcome the model was trained to predict already encoded who had been given a chance. Not because HBCU graduates were less qualified. Because decades of discrimination had created a dataset that encoded discrimination as a feature.
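For anyone who wants the mechanics behind that chart: it is essentially this calculation, selection rates per group and each group's rate as a ratio of the best-off group's rate, the same impact-ratio logic that audits under rules like NYC's Local Law 144 report. The group labels and outcomes below are invented, and inferring labels from names and zip codes is, as I said, part of the problem.

```python
# Sketch of a selection-rate / impact-ratio check. The data is made up;
# in practice the group labels would come from self-identification or
# (more troublingly) inference.
from collections import defaultdict

# (group_label, passed_initial_screen) for each candidate, hypothetical
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [selected, screened]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "  <-- under 0.80, the four-fifths rule of thumb" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

The math is trivial. That was the uncomfortable part of that meeting: producing those numbers takes about ten lines of code.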

We tried to fix it. Spent three months retraining. Brought in consultants—$40,000 for two weeks of work, which is roughly what we pay our engineers in three months. Did bias audits. Our numbers look better now.

But here's what I don't know—what keeps me up at 3 AM scrolling through error logs—what other proxies are still in there that we haven't found yet? What patterns has the machine learned that we don't even know to look for?

The Amazon Warning

This isn't new. In 2014, Amazon's engineers thought they'd solved hiring. They built an AI trained on ten years of resumes, designed to identify candidates who looked like successful Amazon employees.

The result was predictable in retrospect: the system learned that male candidates were preferable. It penalized resumes with the word "women's"—as in "women's chess club captain." It downgraded graduates from women's colleges. It favored certain verbs—"executed," "captured"—that appeared more frequently on male engineer resumes.

Amazon's engineers tried to fix it. They made the system neutral to specific terms. They reweighted features. Nothing worked reliably. The algorithm kept finding new proxies.

By 2017, the team was disbanded.

That was eight years ago. The industry's response was essentially: "Well, that was Amazon's problem. Our systems are different."

They're not. They're trained on the same historical data. They make the same mistakes. The only difference is that now we have research quantifying exactly how bad it is.

And we're still deploying these systems. Including mine.

The Sales Call I'm Not Proud Of

I need to tell you about something that happened in September.

We were on a sales call with a Fortune 500 financial services company. Big deal—$1.2 million annual contract, the kind that would change our revenue trajectory. Their head of talent acquisition asked a direct question: "How do you ensure your system doesn't discriminate?"

I had a slide for this. I talked about our bias audits. I mentioned that we test for demographic parity across race and gender. I referenced the NYC Local Law 144 methodology.

She pressed: "But the University of Washington study—the one showing zero selection rate for Black male names. How do you address that?"

And here's what I said: "That study tested general-purpose LLMs, not purpose-built recruitment systems. Our approach is specifically designed to avoid those failure modes."

Which is... true, in a narrow sense. We don't use raw LLMs for final screening decisions. But it's also misleading, because our system is built on top of language models, and those models carry the same biases. I emphasized the technical distinction while glossing over the substantive concern.

She seemed satisfied. We got the deal.

I've thought about that call a lot. I didn't lie. But I didn't tell the whole truth either. I gave an answer that made the problem sound solved when it isn't.

That's the thing about selling AI tools. You learn to speak in a way that's defensible without being honest. "Our systems are tested for bias" (true, but the tests don't catch everything). "We follow industry best practices" (true, but industry best practices are inadequate). "We're committed to fairness" (true, but commitment doesn't equal achievement).

I told Williams about that call. We'd kept in touch after the Waffle House—he was helping me connect with other people who'd experienced AI hiring discrimination.

"So you lied," he said.

"I didn't lie. I gave a technically accurate—"

"You lied." His voice was flat. "You knew your system probably does the same thing that study showed. And you sold it anyway. To a company that's going to use it to screen people like me."

I started to defend myself. Then I stopped. Because he was right.

"Honestly," I said, "I don't know what I should have said. 'We don't know if our system discriminates, and neither does anyone else, because the auditing methods don't work well enough'? That's closer to the truth. It's also not how you close a seven-figure deal."

"So your career matters more than whether my kids can eat."

I didn't have an answer. I still don't.

The Woman Who Built What She Knew Was Broken

Williams introduced me to Maya—not her real name—a few weeks after our first conversation. She's an ML engineer at one of the big HR tech companies. When I say big, I mean their tools screen millions of applications per year. She'd heard about me through the network Williams had been building—people who knew about AI hiring bias and wanted to talk, anonymously.

We were on a video call. She'd turned off her camera. Just a voice, slightly distorted—she was running it through something—telling me things I'd suspected but never heard said out loud.

"I knew from the moment I saw the training data." Her voice was flat. "Five years of hiring decisions from Fortune 500 clients. You know what that data reflects? The biases of whoever was hiring five years ago. Ten years ago. Twenty. The whole pipeline was built on the assumption that past hiring decisions were correct. That the people who got hired deserved it. That the people who got rejected deserved to get rejected."

I asked why she didn't raise concerns.

"I did. Once."

Long pause.

Her manager told her they'd "note the concern" but the timeline was tight. Clients were waiting. The sales team had made promises. They could "iterate on fairness" in version 2.0.

"Version 2.0 was eighteen months later. You know what was in it? Better UI. Faster processing. Not a single change to the model."

I asked if she felt responsible.

The silence went long enough that I thought the call had dropped.

"Every day," she finally said. "I think about the people who didn't get interviews because of something I built. I run the numbers in my head. Our system screens, what, two million applications a year? If even 5% of rejections are biased—that's a hundred thousand people. A hundred thousand." Another pause. "And I tell myself that if I'd quit, someone else would have built it anyway. Someone who cared less. That maybe being inside, I can push for changes."

She laughed—a rough, uncomfortable sound.

"I don't know if I believe that anymore. But it's what I tell myself so I can sleep."

I recognized the rationalization. It was the same one I used.

What's Happening in Bangalore

Here's something nobody talks about: AI hiring isn't just an American problem.

India's IT outsourcing industry—TCS, Infosys, Wipro—hires hundreds of thousands of people every year. And they've been early adopters of AI screening, partly because of the volume (TCS alone gets millions of applications annually) and partly because Western clients demand it as part of vendor qualification.

I spent a week in Bangalore last year meeting with HR tech companies. What I heard was disturbing.

A product manager at one of the big three (he asked me not to say which) showed me their screening dashboard. They'd trained their system on twenty years of hiring data. "We can predict job performance with 73% accuracy," he said proudly.

I asked whether they'd tested for caste bias.

He looked confused. "We don't collect caste data."

Right. But they collect names. And addresses. And which colleges people went to. And in India, all of those are proxies for caste. The IITs have well-documented caste disparities in admission. Certain surnames are associated with certain communities. Neighborhoods in major cities are often segregated by caste.

"We've never looked at that," he admitted. "Our clients don't ask about it."

I mentioned the University of Washington study. He hadn't heard of it.

This is the thing about AI bias: it exports. American tech companies train systems on American data, with American biases. Then those systems get deployed globally, where they interact with local bias structures in ways nobody's studied.

How much talent is being filtered out by algorithms trained on historical data that reflects the biases of a different continent? Nobody knows. Nobody's measuring.

I called Williams after that trip. Told him what I'd seen.

"Same game, different country," he said. "They found a way to automate caste discrimination without ever saying the word caste. Just like here—automate race discrimination without ever saying race. Clean hands. Plausible deniability. And someone's kids don't eat."

The Legal Reckoning

For years, AI hiring discrimination existed in a legal gray zone. You suspected something was wrong. You couldn't prove it. The algorithm was a black box. The company claimed trade secrets. And good luck getting anyone to explain why you weren't hired.

That's changing. The case that might reshape everything is Mobley v. Workday.

Derek Mobley is African-American, over 40, and has a disability. Between 2018 and 2023, he applied to more than 100 positions at companies using Workday's applicant screening software. Rejected from every one. No interview. No callback. Nothing.

His lawsuit alleges Workday's AI systematically discriminated based on race, age, and disability. Initially, it looked doomed—in January 2024, a judge dismissed it, ruling there wasn't enough evidence to classify Workday as an "employment agency" subject to anti-discrimination law.

Then came the reversal.

July 12, 2024: a federal judge allowed the case to proceed. The key sentence: "Workday's software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process."

Participating. In the decision-making process.

That's the sentence that keeps me up at night. That's the sentence that should keep everyone in this industry up at night. It means AI vendors—not just employers—can be held liable for discriminatory outcomes. The "we just built the tool, we didn't make the decisions" defense doesn't work anymore.

May 2025: The court preliminarily certified a nationwide age-discrimination collective that could include millions of applicants over 40 who were rejected through Workday's system.

Millions of potential plaintiffs. One vendor. One tool.

I keep waiting for the industry to panic. For emergency board meetings and product recalls. For some kind of reckoning.

Instead, the sales cycles continue. The contracts get signed. Everyone figures they'll deal with it when they have to. Our investor called last week to ask if Mobley would affect our valuation. I said probably not. Because the truth is, it probably won't—not until someone wins a judgment that actually hurts.

The Cases Keep Coming

Workday isn't alone.

August 2025: A rejected job applicant sued Sirius XM Radio, claiming the company's AI hiring tool discriminated based on race. Same pattern as Amazon. Eight years later.

May 2024: The ACLU filed an FTC complaint against Aon Consulting—three of their hiring tools allegedly discriminate against people with disabilities and certain racial groups.

March 2025: EEOC charges against Intuit and HireVue. A Deaf Indigenous woman was rejected because the video interview software lacked captioning. She requested an accommodation. Denied.

HireVue's CEO said the complaint was "entirely without merit." Said Intuit "didn't use HireVue's AI-based assessment."

Maybe there's a legal distinction that matters. But here's what I can't stop thinking about: that woman still didn't get the job. Whatever system rejected her, it rejected her.

Labor lawyer Guy Brenner put it simply: "There's no defense saying 'AI did it.' If AI did it, it's the same as the employer did it."

The HR Leader Who Couldn't Say No

Williams's network kept growing. One of the people he connected me with was someone I'll call Lisa—a VP of People at a tech company, about 2,000 employees. She'd been forwarded one of his LinkedIn posts about algorithmic bias and reached out because she wanted to talk to someone who understood the tech side.

We talked for two hours. She drank three glasses of wine. By the end, she was saying things she probably wouldn't have said sober. I'm going to quote her anyway because I think it matters.

"I know our system has problems. I've seen the data. We have a gap—Black candidates get through initial screening at about 60% the rate of white candidates with similar qualifications."

I asked what she was doing about it.

"What can I do?" She wasn't being rhetorical. "We're under pressure to reduce time-to-hire. CEO wants hiring costs down 20%. The board asks about AI automation in every meeting. If I say 'we need to slow down the AI rollout because of bias concerns,' someone asks for data. I show them. They say 'let's monitor it.' Nothing changes."

Long sip.

"And here's the thing I can't say in meetings—even if I pushed back hard, even if I got them to pause the system... the human recruiters were probably biased too. At least the AI is consistent. At least I can audit it. At least there's a paper trail."

I asked if that made it okay.

"No." She set down her glass. "It doesn't make it okay. It makes it... manageable. Defensible. Something I can explain to a lawyer." She paused. "Is that the same thing? Honestly? I don't know anymore."

This is how institutional bias works. Not one villain making one bad decision. A thousand small choices, each reasonable in isolation, adding up to something terrible.

The Regulatory Mess

While lawsuits grind through courts, regulators are scrambling—and by "scrambling" I mean they're about five years behind.

New York City's Local Law 144 was supposed to be the model. Effective July 2023. Independent bias audits for automated hiring tools. Annual publication. Ten business days' notice before AI evaluation.

The enforcement has been a joke.

December 2025 audit by the State Comptroller: the enforcement agency's complaint process is "ineffective." In two years, they received exactly two complaints. Two. In a city where thousands of companies use AI hiring tools. They surveyed 32 companies and found one case of non-compliance.

One.

The message: comply if you want. Nobody's checking.

Illinois passed an AI Video Interview Act back in 2019. A separate amendment to the state's Human Rights Act, taking effect January 2026, makes discriminatory use of AI in employment decisions a civil rights violation. Colorado's SB 24-205 was supposed to be comprehensive—annual impact assessments, risk documentation, consumer notice. Then it became a political football. Delayed to June 2026. Extended with a "cure period" through June 2027. The Trump administration explicitly labeled it "burdensome." A December 2025 executive order created a DOJ task force to challenge it.

Europe is different. The EU AI Act, effective August 1, 2024, classifies HR tools as "high-risk." Emotion recognition in job interviews became illegal February 2, 2025. Core obligations kick in August 2, 2026. Fines up to 35 million euros or 7% of global turnover. Extraterritorial reach—U.S. companies can be covered if their AI is used on EU candidates.

Fewer than 20% of organizations say they're "very prepared." Deadline is eight months away.

European policymakers prioritize worker protection over innovation speed. America's approach prioritizes... I'm not actually sure what. The freedom to discriminate efficiently, maybe. The convenience of not having to think about it.

The HireVue Problem

No company better illustrates this mess than HireVue.

Founded 2004. Pioneered video interviewing. Later added AI assessment. At its peak, the company claimed it could predict job performance by analyzing facial expressions, word choice, speaking patterns.

Think about that for a second. Predict job performance from your facial expressions.

The backlash was substantial. 2019: Electronic Privacy Information Center filed an FTC complaint. Research showed 44% of AI video interview systems demonstrate gender bias, 26% both gender and race bias.

Then HireVue's own internal research leaked. Facial analysis contributed only 0.25% to job performance prediction. A quarter of one percent. Candidates were being scored on factors with virtually no correlation to their ability to do the job.

In early 2021, HireVue dropped facial analysis. They now say their assessments use only transcripts.

Okay. That's something.

But here's my question: if facial analysis was never predictive, why was it deployed for years on millions of candidates? How many people were rejected because an algorithm didn't like their face?

We'll never know. The data exists somewhere in databases. Those people got generic rejection emails. They have no idea why.

The People We Don't Hear From

After Williams, I talked to eleven more people through his network—all of whom had done similar experiments with their own job searches. Word spread through a loose community of people who'd suspected they were being discriminated against and wanted to prove it.

Their stories had a numbing similarity. Three stuck with me.

James, 51, operations manager. Laid off in 2024 after an acquisition. Applied to 200+ positions. Four callbacks. Changed his graduation date to suggest he was younger. Callbacks jumped to nineteen in two months.

"I'm not even upset about the age thing," he said, which surprised me. "Companies have been discriminating against older workers forever. What gets me is the pretense. These systems were supposed to be neutral. Find the best candidates regardless of whatever. But they just automated what humans were doing. Made it faster. Scalable. Invisible."

I told him about Marcus's argument—that at least algorithms are auditable.

He laughed. "Auditable by who? You? The company that sold it? The company that bought it? None of you have any incentive to find problems. The only people who have that incentive are the people being rejected, and we can't see a damn thing."

Elena, 34, data scientist. PhD from Berkeley. Seven years experience. After she started including involvement in Asian-American professional organizations on her resume, her callback rate dropped by two-thirds. Same resume, removed affiliations. Callbacks tripled.

"The worst part," she said, "is second-guessing yourself. After the hundredth rejection, you start thinking maybe you're not as good as you thought. Maybe your degree doesn't mean what you thought it meant. Maybe you're just... not what they're looking for."

Quiet. Then:

"Then you do the experiment. And you realize it was never about you. It was about your name. Your organizations. The parts of yourself you were proud of."

Maria, 29, customer success. Hearing impaired. Needs captioning for video interviews. Forty-three applications to companies using AI video screening. Zero accommodations. Zero interviews completed.

She'd filed three ADA complaints. Two still pending. One dismissed because the company argued video interviews were "optional."

"I keep thinking about all the jobs I would have been perfect for. Jobs where my hearing doesn't matter at all. Jobs where I could have proven myself if anyone would just..." She didn't finish.

None of them wanted real names published. All still job hunting. All afraid that speaking publicly will make things worse.

That's the hidden cost. Not just the rejections. The silence they enforce. The people most harmed are least able to talk about it.

What I Wish I Could Tell Our Clients

I've spent this article being critical—of the industry, of other companies, of myself. So what should organizations actually do?

The tempting thing would be to give you a numbered list. "Six steps to fair AI hiring." Something you could print out and hand to your legal team. But I don't think the problem is that people don't know what to do. The problem is that they don't want to do it, because doing it is expensive and slow and creates legal exposure and makes your board nervous.

So instead, let me tell you what I wish I could say on sales calls. What I'd tell our clients if I didn't need their money to make payroll.

Understand what you bought. Most HR leaders I talk to can't explain how their AI tools actually work. They know the marketing. They don't know the training data, the model architecture, the validation methodology, the audit results. They bought something they don't understand, and when it discriminates, they'll claim they didn't know. That defense might work in court. It shouldn't let you sleep at night.

Stop buying audits designed to pass. Here's the dirty secret: you can structure a bias audit to find what you want to find. Test the right metrics, the right populations, the right scenarios, and your system looks clean. Test differently, and you discover it's advancing Black candidates at 40% the rate of equally qualified white ones. Most companies—including most of my clients—choose the test that produces the result they want.
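Here's a concrete version of what I mean, with numbers I invented purely to illustrate: pool the results by race and the system clears the usual 0.80 threshold; slice them the way the University of Washington researchers did, Black men against white men with identical qualifications, and the verdict flips.

```python
# Sketch of how audit design shapes the verdict. All counts are invented.
screened = 100  # candidates screened per group, hypothetical
selected = {
    ("white", "men"): 40,
    ("white", "women"): 44,
    ("Black", "men"): 6,
    ("Black", "women"): 62,
}

def rate(groups):
    """Pooled selection rate across the given (race, gender) groups."""
    return sum(selected[g] for g in groups) / (screened * len(groups))

# Audit A: race pooled across gender -- looks acceptable
white = rate([("white", "men"), ("white", "women")])
black = rate([("Black", "men"), ("Black", "women")])
print(f"pooled-by-race impact ratio: {black / white:.2f}")      # ~0.81

# Audit B: the intersectional comparison -- clearly not acceptable
bm, wm = rate([("Black", "men")]), rate([("white", "men")])
print(f"Black men vs. white men impact ratio: {bm / wm:.2f}")   # 0.15
```

Same data. Two audits. One of them goes in the compliance binder.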

Make it possible to sue you. I know that sounds insane. But organizations that force disputes into arbitration, eliminate class action rights, and make it impossible for rejected candidates to understand why—they're not solving bias. They're hiding it. If your AI can't survive legal scrutiny, maybe that's telling you something important.

Train on the workforce you want, not the workforce you had. If you feed twenty years of biased hiring decisions into an algorithm, you get an algorithm that perpetuates bias. This isn't mysterious. It's basic ML. The solution isn't removing demographic information—studies show that often makes things worse. The solution is actively building datasets that represent what you want your company to look like.
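What does that look like in practice? One common, and partial, technique is reweighting: scale each historical example so the effective group mix in training matches a target distribution instead of the inherited one. A minimal sketch, with invented labels and a made-up target; reweighting alone does not fix biased labels or proxy features, it only changes how much each group counts during training.

```python
# Sketch of reweighting training examples toward a target group mix.
# Group labels and the target are assumptions for illustration only.
from collections import Counter

historical_groups = ["a"] * 80 + ["b"] * 20   # inherited data: 80/20 split
target_mix = {"a": 0.5, "b": 0.5}             # the mix you want training to reflect

hist_counts = Counter(historical_groups)
n = len(historical_groups)

# weight per example = target share of its group / historical share of its group
weights = [target_mix[g] / (hist_counts[g] / n) for g in historical_groups]

effective = Counter()
for g, w in zip(historical_groups, weights):
    effective[g] += w
total = sum(effective.values())
for g, w in sorted(effective.items()):
    print(f"group {g}: effective share after reweighting = {w / total:.2f}")
```

Those weights then get handed to whatever training procedure you use; most libraries accept per-sample weights. It's a blunt instrument, but it's at least an instrument.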

Let humans override. Not rubber stamps. Not recruiters glancing at AI recommendations. Actual processes where human judgment matters. Random audits of rejections. A/B testing against human-only decisions. Authority—real authority—to override the algorithm.

Prepare for regulation instead of fighting it. EU AI Act: August 2026. Illinois: January 2026. Colorado (if it survives): June 2026. Companies treating this as a compliance problem to minimize are going to get caught. The ones treating it as an opportunity to fix their systems might come out ahead.

I've never actually said any of this on a sales call. Maybe I should start. Maybe that's what I'll do Monday instead of signing that contract.

The Harder Questions

I've offered criticisms, confessions, suggestions. Now I want to sit with questions I don't have good answers to.

If AI hiring tools are systematically biased, and nearly every Fortune 500 company filters applications through some form of automation, what's the aggregate effect on the labor market? Research suggests millions of qualified candidates—disproportionately Black, older, disabled—are filtered out before any human sees their applications. That's not just unfair. That's a massive misallocation of talent. Economic damage nobody's calculating.

What happens to companies doing this? If your AI excludes Black engineers, you end up with less diverse teams. Less diverse teams produce worse outcomes—there's extensive research. You've optimized for the wrong thing. You just don't see the cost because the excluded candidates are invisible.

And the psychological damage. How many qualified people, after hundreds of unexplained rejections, have concluded they're the problem? That their skills aren't good enough. Their experience lacking. When actually they were filtered out by an algorithm that prefers white names, younger ages, certain zip codes.

That damage doesn't appear in lawsuits or audits. It's damage to people's sense of their own worth. Inflicted at scale. By systems that were supposed to be neutral.

Where This Leaves Me

I'm not an AI pessimist. I believe these tools can be made better. Bias reduced, maybe not eliminated. The research on fairness-aware algorithms points toward real improvements.

But I'm realistic about incentives. Companies building these tools want to ship fast, iterate later. Companies buying them want to cut costs, not maximize fairness. Regulators are underfunded, politically constrained, technically outmatched.

The people bearing the costs—Williams, Elena, James, Maria—have the least power to demand change.

Which is why the lawsuits matter. Why the EU AI Act matters. Why even broken laws like NYC's matter. They create consequences that wouldn't otherwise exist. They shift the burden from victims who must prove harm to organizations who must prove they're not causing it.

I called Williams before publishing this. Wanted his blessing, wanted to see if anything had changed.

He'd landed something. Contract work—not permanent, not ideal, but something. Sounded tired but steadier than when we met.

"You know what I keep thinking about?" he said. "All the people who ran the same experiment I did. Got the same result. And just... gave up. Concluded they weren't good enough. Never realized the game was rigged."

Pause.

"I'm glad you're writing this. People should know. But—and I'm not being cynical—you know what's actually going to change things? The lawsuits. The money. When it costs more to discriminate than to fix the systems." He laughed. "That's the American way, right? We don't do the right thing because it's right. We do it when doing wrong gets expensive."

I wanted to argue. I couldn't.

I asked what I should do about the client who wants us to drop bias auditing.

"What do you think you should do?"

"I think I should say no. I think I should walk away from the money."

"And will you?"

I didn't have an answer.

Somewhere right now, a resume is being submitted. The candidate is qualified. The algorithm is running. And inside that algorithm, patterns learned from decades of discrimination are about to make a decision.

We built these systems. We're still building them. I'm still building them.

The question is whether we build tools that judge people by their qualifications, or tools that launder our prejudices through code.

Monday is in three days. I still don't know what I'm going to do.