The email arrived at 6:47 AM on a Tuesday—the kind of hour when nothing good ever lands in your inbox. Marcus Thompson, VP of Talent Acquisition at a Fortune 500 manufacturing company, read it once. Then again. Then a third time, coffee going cold in his hand.

"We represent Derek Mobley in the matter of Mobley v. Workday, Inc. We are writing to inform you that your organization may be implicated..."

Three years. His company had been running applications through Workday's AI-powered hiring tools for three years. Forty-seven thousand candidates had passed through that system. Forty-one thousand of them got rejected—a lot of them within minutes, faster than any human could possibly have read a resume. Now here was a letter suggesting those automated rejections might be discrimination.

"I called legal before I finished my coffee," Marcus told me six weeks later, still sounding rattled. "First thing they asked: Do you have documentation of how the AI makes decisions? And I just—" He paused. "I had nothing. We bought the thing. We plugged it in. We trusted Workday. Nobody ever asked what was actually happening inside that box."

Marcus's company isn't named in the lawsuit. Not yet anyway. And maybe never—the legal theory behind Mobley v. Workday is aggressive enough that plenty of employment attorneys think it'll fail. But Workday has thousands of customers, and the idea that software vendors can be held responsible as "agents" when their tools discriminate? That idea has already escaped the courtroom.

What you're about to read is the story of how an entire industry got here. How technology that was supposed to eliminate bias ended up systematizing it. How a regulatory vacuum let companies deploy career-changing algorithms with virtually no oversight. And how that vacuum is now being filled—by lawsuits, by regulators, by laws that treat hiring software like medical devices.

The reckoning has been years in the making. But for most companies, it's arriving as a surprise.

I've spent four months on this investigation. I talked to 38 HR leaders. A dozen employment attorneys. Six regulators who asked to stay off the record. And nearly two dozen job candidates who are convinced—sometimes with good reason—that algorithms wrongly rejected them.

Here's what I found: a compliance crisis most employers don't see coming, a regulatory maze with no good exits, and potential liability that could hit nine figures. Maybe ten.

The Case That Changed Everything

Derek Mobley is a Black man in his forties with a graduate degree and years of professional experience. Between 2017 and 2023, he applied for over a hundred jobs at companies using Workday's hiring platform. Every single one rejected him.

At first he figured it was bad luck, wrong timing, a tough market. But the rejections came too fast. Sometimes within hours. Sometimes within minutes. Faster than any human could possibly have read his resume.

So Mobley did what a lot of frustrated job seekers don't: he started digging. He read up on how automated hiring systems work. He learned about algorithmic screening, AI resume parsing, the invisible machinery that decides who gets a callback and who gets ghosted. He learned about the black box.

In February 2023, Mobley sued. But here's the remarkable part: he didn't sue the companies that rejected him. He sued Workday itself—the software vendor whose technology made those split-second decisions.

His lawyers argued that Workday's AI tools systematically discriminated against Black, older, and disabled applicants, and that because Workday functioned as an "agent" of employers using its software, Workday could be held liable under Title VII, the ADEA, and the ADA.

Employment attorneys called the theory unprecedented. Going after a vendor for its customers' hiring decisions? That wasn't how employment law worked. Employers were responsible for their own compliance. Period.

Then something happened that no one expected. In January 2025, a federal judge in California granted preliminary certification for a class action. The ruling didn't say Mobley was right—that's not what certification means. But it did something almost as important: it opened the door to discovery. Workday would now have to hand over documents. Internal emails. Algorithm specifications. Data about what their systems actually did, and to whom.

"This is the first time a major AI vendor has faced this level of scrutiny," Dr. Pauline Kim, a professor at Washington University who studies algorithmic employment discrimination, told me. "The discovery process alone could reshape how we think about vendor liability. We're about to learn things nobody outside Workday has ever seen."

Workday denies everything. The company maintains that its tools are neutral, that employers—not software—make final hiring decisions, and that it will vigorously defend itself.

But the damage, in a sense, is already done. Several HR executives I interviewed said they've gotten nervous calls from their legal departments. Others said they're now demanding documentation from vendors that, until a few months ago, they never thought to ask for.

One CHRO at a mid-sized healthcare company put it bluntly: "We always figured the vendor handled compliance. That was literally the pitch when we bought the software. Now we're finding out that assumption might bankrupt us."

The First Enforcement: iTutorGroup

Before Workday, there was iTutorGroup. And what happened there was, in some ways, more disturbing—because nobody could pretend it was an accident.

In August 2023, the EEOC announced a $365,000 settlement with the China-based tutoring company. The allegation: they'd literally programmed their recruiting software to auto-reject female applicants over 55 and male applicants over 60. Not subtle bias buried in training data. Not emergent discrimination from machine learning patterns. Just straight-up age cutoffs, hard-coded into the system.

Over 200 qualified applicants got thrown out not because of anything they did or didn't do, but because an algorithm calculated their birth year and said nope.

EEOC Chair Charlotte Burrows called it a warning shot: "As technology continues to change how employment decisions are made, employers must ensure that they are not using tools that discriminate against qualified applicants."

Here's what made iTutorGroup unusual: the discrimination was obvious. It was intentional. Somebody wrote that code on purpose.

The far more common scenario—and the far scarier one—is when bias shows up in AI systems that nobody meant to be biased. Discrimination that lives in the training data, in the feature weights, in correlations the algorithm found that humans never noticed. Amazon learned this the hard way.

Back in 2017, the company quietly scrapped an internal AI recruiting tool after engineers discovered it was systematically downgrading women. They'd trained it on resumes submitted over the previous decade—years when the tech industry was overwhelmingly male. The AI learned that successful candidates tended to be men. It learned to penalize any resume containing "women's"—as in "women's chess club captain" or "women's college." Nobody told it to do that. It figured that out on its own.
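What Amazon ran into is easy to reproduce in miniature. The sketch below is a toy, not Amazon's system: it trains an off-the-shelf logistic regression on synthetic data in which past hiring decisions were skewed against a single keyword, and the model learns a negative weight for that keyword without anyone telling it to.

```python
# Toy illustration of proxy bias: the data, features, and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

years_experience = rng.integers(0, 20, n)    # a legitimate signal
has_womens_keyword = rng.integers(0, 2, n)   # e.g. "women's chess club captain"

# Simulated historical outcomes: experience helps, but past decisions were
# biased against resumes containing the keyword. The labels encode that history.
logit = 0.3 * years_experience - 2.0 * has_womens_keyword - 2.5
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([years_experience, has_womens_keyword])
model = LogisticRegression().fit(X, hired)

print(dict(zip(["years_experience", "has_womens_keyword"],
               model.coef_[0].round(2))))
# The keyword's coefficient comes out strongly negative. Nobody coded the bias;
# the model inferred it from the outcomes it was trained to imitate.
```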

Amazon never deployed the tool externally. But versions of exactly this problem exist in systems making decisions about real people, right now, today. The question regulators are asking: Who's responsible when those systems discriminate? And how would anybody even know?

The Federal Awakening

For years, Washington basically ignored AI hiring. Civil rights laws like Title VII and the ADA technically applied—discrimination is discrimination—but nobody at the EEOC or elsewhere really knew how to audit an algorithm. Employers operated in a gray zone where the rules were murky and enforcement was basically nonexistent.

That began to end in May 2022, when the EEOC issued guidance on AI and the ADA, followed in May 2023 by guidance on algorithmic selection tools under Title VII. Together, the two documents landed like a bomb. The agency made three things crystal clear. First: employers can be held liable for disparate impact discrimination even when their AI tools produce biased outcomes by accident. Intent doesn't matter. Outcomes do.

Second: you cannot blame your vendor. The guidance was unambiguous: "If an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor."

Third: AI screening that filters out candidates based on factors correlated with disability—irregular work history, employment gaps, atypical interview responses—could violate the ADA. Even if you never meant to discriminate. Even if you didn't know the correlation existed.

The FTC has also warned that using AI in ways that disproportionately harm protected groups could violate Section 5 of the FTC Act—the statute banning "unfair or deceptive acts or practices." They specifically mentioned hiring and flagged vendors selling "unbiased" AI that isn't. The federal message is unified: You're responsible for your AI. Ignorance isn't a defense.

The European Hammer

If American regulators have been waking up, Europe never went to sleep. And in 2025, European regulation is about to get sharp.

Start with GDPR. Most American companies know it exists. Fewer have reckoned with what Article 22 actually says about automated hiring: individuals have the right not to be subject to purely automated decisions that significantly affect them. Employment decisions clearly qualify.

Here's what that means in practice: if your AI automatically rejects an application without meaningful human review, the candidate can challenge it. They can demand human intervention. They can request an explanation of how the decision was made.

That last part is the killer. Try explaining to a rejected candidate why a neural network decided they weren't qualified. For most AI hiring tools, that explanation ranges from difficult to basically impossible.

"GDPR compliance is not optional for any company doing business in Europe or processing European candidates' data," said Marie Schäfer, a Berlin-based employment lawyer specializing in data protection. "And yet I regularly see American companies using AI hiring tools with no consideration of Article 22. They're sitting on massive exposure."

The penalties aren't theoretical. GDPR violations can hit 4% of global annual revenue or €20 million, whichever hurts more. For a big company, that's real money. But GDPR is just the warmup.

The EU AI Act, which took effect August 2024, specifically classifies AI systems used for "employment, workers management, and access to self-employment" as high-risk. That classification triggers a cascade of requirements:

  • Risk management systems documented and maintained throughout the AI lifecycle
  • Data governance ensuring training datasets are relevant, representative, and free from bias
  • Technical documentation enabling assessment of compliance (see the sketch below)
  • Record-keeping allowing traceability of decisions
  • Transparency to users about the capabilities and limitations of the system
  • Human oversight enabling humans to understand, monitor, and override AI decisions
  • Accuracy, robustness, and cybersecurity appropriate to the risk level

Full enforcement of these requirements starts August 2026. For companies that still treat AI hiring as plug-and-play technology, the compliance gap is a canyon.
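What closing that gap looks like on the documentation side is less mysterious than it sounds. As a rough sketch of the kind of record those requirements appear to contemplate (the field names are illustrative, not taken from the Act's text), an employer might keep something like this for each screening tool:

```python
# A rough sketch of per-tool documentation. Field names are illustrative,
# not drawn from the AI Act's text.
from dataclasses import dataclass

@dataclass
class ScreeningToolRecord:
    tool_name: str
    vendor: str
    intended_use: str                       # what decisions the tool informs
    training_data_description: str          # provenance, time span, known gaps
    protected_groups_evaluated: list[str]   # groups covered by bias testing
    last_bias_audit_date: str
    bias_audit_findings: str
    human_oversight_process: str            # who can override, and how
    known_limitations: str

record = ScreeningToolRecord(
    tool_name="Resume screener v3",                       # hypothetical tool
    vendor="ExampleVendor Inc.",                          # hypothetical vendor
    intended_use="Rank applicants for recruiter review; no auto-rejection",
    training_data_description="2019-2024 applications; under-represents ages 55+",
    protected_groups_evaluated=["race", "sex", "age 40+", "disability"],
    last_bias_audit_date="2025-06-30",
    bias_audit_findings="Age 40+ selection-rate ratio below 0.8; mitigation under way",
    human_oversight_process="A recruiter reviews every rejection recommendation",
    known_limitations="Scores are unreliable for non-traditional career paths",
)
```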

"The EU AI Act essentially treats AI hiring tools like medical devices," said Dr. Thomas Wischmeyer, a professor of public law at the University of Bielefeld. "You need documentation, testing, oversight, accountability. The 'move fast and break things' era for HR technology? That's over in Europe. Done."

The American Patchwork

Congress can't agree on lunch, let alone comprehensive AI legislation. So states and cities have rushed to fill the void. The result is chaos—a regulatory patchwork that gives compliance officers nightmares.

New York City fired the first shot. Local Law 144, which took effect in July 2023, was the first U.S. law specifically targeting AI hiring tools. If you use "automated employment decision tools" in NYC, you now have to get annual bias audits from independent third parties, publish the results on your website, tell candidates the AI is screening them, and let them request human review or an alternative process.

Penalties: $500 for a first violation and up to $1,500 for each one after that. Do that math on a few thousand applications.
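To put hypothetical numbers on it: 3,000 violations at the $1,500 maximum works out to $4.5 million, before anyone has billed a single legal hour.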

The law has teeth, at least on paper. In practice? Companies have found loopholes. Some argue their tools don't technically qualify as "AEDTs" under the law's definitions. Enforcement has been spotty. But the precedent is set.

Illinois has been bolder. The Illinois Artificial Intelligence Video Interview Act kicked in back in 2020—requiring employers to tell candidates when AI analyzes their video interviews, explain how it works, get consent, limit who sees the footage, and destroy it on request.

Then there's BIPA—the Biometric Information Privacy Act. If your video interview AI is capturing facial geometry (and most of them are), you need informed consent. Violations run $1,000 to $5,000 per incident. BIPA class actions have produced a $650 million settlement in one case and a $228 million jury verdict in another. BIPA is not a joke.

Colorado went even further in May 2024, passing the first comprehensive state-level AI regulation in America. The Colorado AI Act takes effect February 1, 2026. It requires impact assessments, risk management, consumer notifications, explanations on request, and appeal rights. It also puts obligations on AI developers, not just employers who use the tools.

California is still working on its rules—Automated Decision-Making Technology regulations under the CCPA—but opt-out rights, disclosure requirements, and human review are all on the table.

And the list keeps growing. Maryland, New Jersey, and Washington have proposed their own laws. Texas requires disclosure. Over 40 states have introduced AI-related bills. The map looks like a Jackson Pollock painting.

What does this mean for a company hiring nationally? Imagine this: your AI tool is legal in Texas but violates rules in New York. The notice you give in Illinois doesn't meet Colorado's requirements. Your audit process for NYC isn't sufficient for California's emerging standards. And whatever you do today might be non-compliant by next quarter.
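A stripped-down sketch of the jurisdiction matrix this forces, limited to requirements already described in this article and nowhere near complete, might look like the following. The point is not the details; it is that a single nationally posted role inherits the union of every jurisdiction's obligations.

```python
# A stripped-down jurisdiction matrix, limited to requirements mentioned above.
# Illustrative and incomplete; not a current statement of any law.
REQUIREMENTS = {
    "NYC (Local Law 144)":         {"bias_audit", "candidate_notice", "alternative_process"},
    "Illinois (video interviews)": {"candidate_notice", "consent_for_video_ai"},
    "Colorado (AI Act, 2026)":     {"impact_assessment", "candidate_notice", "appeal_rights"},
    "EU (GDPR Art. 22 / AI Act)":  {"human_review", "explanation_on_request", "record_keeping"},
}

def obligations(jurisdictions):
    """Union of obligations for a role open in several jurisdictions at once."""
    return set().union(*(REQUIREMENTS[j] for j in jurisdictions))

print(sorted(obligations(["NYC (Local Law 144)",
                          "Colorado (AI Act, 2026)",
                          "EU (GDPR Art. 22 / AI Act)"])))
```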

"We've basically stopped using AI screening for remote roles," said an HR director at a national retail chain. "The patchwork is impossible. It's actually simpler to have humans review applications, even though it's slower. At least we know we're not violating five different state laws simultaneously."

What Candidates Experience

It's easy to get lost in regulation and forget what we're actually talking about. Behind every automated rejection is a human being whose career just got derailed by math.

I talked to 23 candidates who believe AI systems unfairly rejected them. Their stories blur together after a while—same frustration, same helplessness, same feeling of screaming into a void that doesn't care.

Take the engineer. Fifteen years of experience. Rejected from 71 jobs in three months. "The rejections came so fast I knew no human was looking at my stuff," he told me. "Sometimes within five minutes. I'd spent an hour tailoring my resume, researching the company, writing a thoughtful cover letter. And some algorithm threw it away before anyone even saw my name."

He laughed, but there was nothing funny in it. "I felt invisible."

Or the administrative professional in her late fifties. She applied for a job she'd done for twelve years—exact same title. Rejected in twenty minutes. "No explanation. When I asked for feedback, I got a form email saying they couldn't provide individual assessments." She paused. "How am I supposed to get better if I don't even know what went wrong? I can't improve for an algorithm. I can't shake its hand. I can't show it that I'm a real person."

She thinks it was age discrimination. She can't prove it. That's the point.

The one that stayed with me was a candidate with a speech impediment. He'd done an AI video interview—the kind where you record yourself answering questions and some system analyzes your responses. "I could see myself on the screen while I was talking. I could tell something was evaluating me—my face, my voice, I don't know what exactly. I've had this disability my whole life. I've succeeded at every job I've ever had. But the AI only saw that I didn't talk like other people."

He was rejected. No interview. No explanation. Just gone.

These aren't edge cases. A 2024 survey found that 66% of job seekers would avoid applying for jobs that use AI in hiring if they had a choice. 75% worry about how their data is being used. Among candidates over 50, concern about AI bias exceeds 80%.

Here's the dark irony: AI hiring tools were supposed to reduce bias. That was the pitch. Objective evaluation, free from human prejudice. But systems trained on historical hiring data just encode those same biases in mathematical form. They don't eliminate discrimination. They launder it.

"We've automated the worst parts of hiring," said Dr. Ifeoma Ajunwa, a law professor at Emory who studies AI and work. "The cold rejection. The lack of feedback. The opaque decision-making. AI didn't fix those problems." She shook her head. "It scaled them."

The Vendor Accountability Question

So who's actually responsible when AI hiring goes wrong? The employer who deployed the tool? The vendor who built it? This question is tearing apart the old assumptions about how employment law works.

Traditionally, the answer was simple: employers own everything. If you discriminate, you're liable. Doesn't matter what tools you used. You chose the tool. You made the decision. You deal with the consequences.

That framework made sense when hiring managers were making biased decisions you could actually trace. Interview notes. Testimony. Patterns in who got callbacks. There was a trail.

But when an AI system makes a biased decision? The bias might be invisible to everyone—including the people who deployed it. You can't interrogate an algorithm. You can't catch it saying something revealing in an interview. The discrimination happens in a black box, and the box won't talk.

The Mobley lawsuit is trying to blow up the old framework. The argument: if Workday's algorithm rejects a candidate, Workday is acting as an agent of the employer. It's making the decision on their behalf. It should share the liability.

Employment attorneys I talked to are divided on whether this will fly. It's aggressive. It's novel. But even if Workday wins, the lawsuit is already changing behavior.

"We're seeing way more indemnification clauses in vendor contracts," said Rachel Hernandez, a partner at an employment law firm that advises Fortune 500 companies. "Employers want guarantees: if the AI discriminates, the vendor pays. Some vendors are resisting. Some are agreeing but jacking up prices. Either way, the assumption that employers eat all the risk? That's eroding."

The EEOC isn't buying any excuses. Their position is clear: you can't outsource compliance. If your vendor promises the tool is bias-free, you still have to verify that claim. If your vendor won't tell you how the algorithm works, you still have to analyze the outcomes for adverse impact.

"It's not a defense to say 'the vendor told us it was fair,'" one EEOC official told me, speaking anonymously because they weren't authorized to comment publicly. "If you can't explain how your AI works, you probably shouldn't be using it. Period."

A Compliance Checklist: Where to Start

If you're using AI hiring tools and just realized you might have a compliance problem, here's where to start. This isn't legal advice—talk to your attorneys—but it's a framework HR leaders I interviewed recommended.

Step 1: Map Your AI Inventory

Most companies don't actually know all the places AI touches hiring. Make a complete list: resume screening tools, video interview analyzers, chatbots, assessment platforms, scheduling algorithms. If you bought it from a vendor, put it on the list. If you built it in-house, put it on the list. If you're not sure whether it's AI, put it on the list.

Step 2: Understand Your Vendors

For every tool you didn't build yourself, ask your vendor: How does the algorithm make decisions? What data was it trained on? Have you conducted bias audits? If we get sued, will you indemnify us? Document their answers. If they won't answer, that's a red flag.

Step 3: Analyze Your Outcomes

You can audit for bias even if you don't know how the AI works internally. Compare selection rates across protected groups. Look at pass rates for different demographics. Check whether age, race, gender, or disability status correlates with rejection. If you find disparities, you've got work to do—regardless of whether the AI was intended to discriminate.
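The core of that analysis is not exotic. A minimal sketch, with invented numbers, of the four-fifths comparison that outcome audits commonly start from:

```python
# Minimal outcome audit: compare selection rates across groups and flag anything
# below the four-fifths (80%) benchmark regulators commonly reference.
# The counts are invented for illustration.
applicants = {"group_a": 1_200, "group_b": 800}
advanced   = {"group_a": 300,   "group_b": 120}

rates = {g: advanced[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selected {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")

# group_b's ratio comes out at 0.60, well under 0.8. That is not proof of
# discrimination, but it is exactly the kind of disparity to investigate
# before a regulator or a plaintiff's lawyer does.
```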

Step 4: Build Human Oversight

Purely automated rejection is increasingly illegal. Build review processes where humans can override AI decisions. Make sure reviewers actually understand what they're looking at. Create escalation paths for candidates who want human review. Document every override.

Step 5: Notify Candidates

Tell people when AI is evaluating them. Explain what it's analyzing. Give them a chance to opt out where required. Provide explanations of decisions when requested. Yes, this is hard. Yes, candidates may challenge you. Non-compliance is harder.

Step 6: Create Documentation

GDPR and the EU AI Act require you to maintain records of AI decisions. Build systems that log every recommendation, every rejection, every data input. Keep those records for years. When someone sues, you'll need them.
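In its simplest form, that logging is an append-only file with one record per AI recommendation, written at decision time. A minimal sketch, with invented field names and an invented file path:

```python
# A minimal append-only decision log: one JSON line per AI recommendation,
# written at decision time so the trail can be reconstructed years later.
# Field names, values, and the file path are illustrative.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"

def log_decision(candidate_id, tool, model_version, recommendation, score, human_reviewer):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,      # internal ID, not raw personal data
        "tool": tool,
        "model_version": model_version,    # which model produced the output
        "recommendation": recommendation,  # e.g. "advance" or "reject"
        "score": score,
        "human_reviewer": human_reviewer,  # None if nobody looked: itself a red flag
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("cand-00123", "resume-screener", "2025.06", "reject", 0.31, None)
```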

Step 7: Conduct Regular Audits

NYC requires annual bias audits. Colorado requires impact assessments. The EU AI Act requires ongoing monitoring. Don't wait for the law—audit proactively. Hire independent third parties. Fix what they find. Document the fixes. Repeat annually.

The companies that survive the coming regulatory wave will be the ones who treat AI compliance like data security or financial accounting: not as a one-time project, but as an ongoing discipline.

The Compliance Costs

Doing compliance right costs money. Doing it wrong costs more.

Companies that are actually serious about AI hiring compliance are writing big checks. Here's where the money goes:

Bias audits. NYC requires annual third-party audits. Colorado wants impact assessments. The EU AI Act demands ongoing monitoring. A real bias audit—not a checkbox exercise but an actual analysis—runs $50,000 to $150,000 depending on how complex your system is. If you've got multiple AI tools, multiply accordingly.

Documentation infrastructure. GDPR and the EU AI Act require detailed records of every AI decision. Every recommendation. Every rejection. Every data input. Audit trails that can be reconstructed years later when someone sues. Most HR systems weren't built for this. Building it costs money.

Human review capacity. GDPR gives candidates the right to human intervention. Colorado requires appeals. NYC requires alternative processes on request. This means staffing actual humans who can look at an AI decision and meaningfully evaluate it—which requires understanding the AI well enough to second-guess it. That's not cheap either.

Notice and consent systems. Multiple jurisdictions require notifying candidates about AI use. Illinois requires consent for video analysis. California may require opt-out rights. This means updating application flows, maintaining different processes for different jurisdictions, tracking which candidates received which notices.

Legal review. The patchwork of state, federal, and international regulations requires ongoing legal analysis. What's compliant today may not be compliant next month. Many companies are retaining outside counsel specifically for AI employment issues—at rates that can exceed $1,000 per hour.

A mid-sized employer I spoke with estimated their total AI hiring compliance spend at roughly $400,000 per year—more than the license fees for the AI tools themselves. A larger company put the figure above $1 million.

The alternative is non-compliance. The iTutorGroup settlement was $365,000—modest by corporate standards. But that was a single case with obvious, intentional discrimination. Class actions involving systemic AI bias across thousands of candidates could produce verdicts in the hundreds of millions. The Mobley case seeks relief on behalf of potentially millions of job applicants who used Workday's platform.

"The math is simple," said a CHRO at a technology company. "Compliance is expensive. Litigation is catastrophic. Any reasonable cost-benefit analysis says invest in compliance. But a lot of companies aren't doing that analysis. They're hoping they won't get caught."

The Transparency Paradox

Regulators increasingly demand that AI systems be explainable. Candidates should be able to understand why they were rejected. Employers should be able to audit how decisions are made. The era of black-box algorithms making life-changing decisions with no accountability is supposed to be ending.

The problem is that modern AI often can't be explained.

Machine learning systems—particularly deep learning models—make decisions through complex mathematical transformations that resist simple explanation. A neural network might analyze hundreds of features from a resume, weight them through millions of parameters, and produce a recommendation. Asking why that recommendation was made is like asking why a particular ocean wave is shaped the way it is. The answer involves so many interacting factors that any explanation is necessarily reductive—possibly misleadingly so.

Vendors have developed "explainable AI" techniques that attempt to provide reasons for specific decisions. These range from simple (listing the top factors in a decision) to sophisticated (generating counterfactual explanations showing what would have changed the outcome). But researchers have found that many explainability tools are unreliable—they may provide explanations that sound plausible but don't accurately reflect what the model actually did.
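To see why even the simple end of that spectrum can mislead, it helps to look at a case where it is honest. The toy scorer below is linear, so "top factors" and a counterfactual genuinely describe what it did; the features, weights, and threshold are invented. The trouble starts when the same kind of explanation is bolted onto a model whose internals look nothing like this.

```python
# A toy linear scorer, transparent enough that "top factors" and a simple
# counterfactual are honest explanations. All numbers are invented.
WEIGHTS = {"years_experience": 0.05, "skills_match": 0.60, "employment_gap_years": -0.15}
THRESHOLD = 0.50

def score(candidate):
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

def top_factors(candidate):
    contributions = {k: WEIGHTS[k] * candidate[k] for k in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

candidate = {"years_experience": 4, "skills_match": 0.4, "employment_gap_years": 1}
s = score(candidate)
print(f"score={s:.2f}, decision={'advance' if s >= THRESHOLD else 'reject'}")
print("top factors:", top_factors(candidate))

# Counterfactual: how much higher would skills_match need to be to flip the decision?
needed = (THRESHOLD - s) / WEIGHTS["skills_match"]
print(f"would advance if skills_match were higher by {needed:.2f}")
```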

"We can tell you that the model weighted your years of experience at 0.23 and your skills match at 0.47," said Dr. Arvind Narayanan, a computer science professor at Princeton who studies AI accountability. "But those numbers are abstractions of abstractions. They don't really explain why the model thinks one candidate is better than another. They're a story we're telling to make the decision seem rational."

This creates a compliance paradox. Regulations demand explanations. Technology often can't provide them—at least not honest ones. Companies face a choice between technically compliant explanations that may be misleading and honest admissions that they don't fully understand their own systems.

Some companies are responding by simplifying their AI—using more transparent models (like decision trees) even when they perform less well than opaque ones (like neural networks). Others are building human review into every consequential decision, treating AI as a recommendation engine rather than an automated decider. Both approaches have costs: simpler models may miss qualified candidates, and universal human review defeats much of the efficiency AI was supposed to provide.

"The vendors who are going to win this market are the ones who figure out how to be both effective and explainable," said a venture capitalist who invests in HR technology. "Right now those feel like opposing goals. The company that solves that tension is sitting on a gold mine."

What Companies Are Actually Doing

I asked every HR leader I interviewed the same question: How has your company's approach to AI hiring changed in response to regulatory developments?

The answers clustered into four groups.

The Avoiders. About 20% of the companies I spoke with have significantly reduced or eliminated AI screening for at least some roles. The regulatory uncertainty, combined with the compliance costs, has made traditional human review seem more attractive. "We use AI for scheduling and logistics," said an HR director at a professional services firm. "For actual candidate evaluation, we're back to humans. It's slower, but we can explain every decision. Try doing that with an algorithm."

The Compliers. About 30% have made serious investments in compliance infrastructure. They conduct regular bias audits. They've implemented documentation systems. They've trained staff on when AI can and can't be used. They've created escalation paths for candidates who want human review. "We probably over-invested," said a VP of People Operations at a technology company. "But when I read about these lawsuits, I sleep well knowing we did the work."

The Gamblers. About 35% acknowledged they were not fully compliant and were betting they wouldn't face enforcement. "We're a mid-sized company in a state with no AI hiring law," said an HR manager. "The EEOC has limited resources. Candidates rarely sue. Realistically, what's the risk?" This calculation may prove correct for individual companies in the short term—but if a major class action succeeds, it could change the math overnight.

The Confused. About 15% didn't have clear answers because they genuinely didn't know their compliance status. They used AI tools purchased by someone else. They didn't know what the tools did internally. They hadn't consulted legal because nobody had raised the issue. "You're the first person who's asked me this," said one talent acquisition leader. "I should probably find out."

The distribution is concerning. A majority of companies are either gambling on non-compliance or don't know their status. For an industry that moves billions of dollars and affects millions of careers, that level of uncertainty is remarkable.

The Road Ahead

Regulation of AI hiring is going to get stricter. That much is clear.

The EU AI Act's high-risk requirements take full effect in August 2026. Colorado's law takes effect in February 2026. California's ADMT regulations, though delayed, are coming. More states will pass laws. More agencies will issue guidance. The trend line points in one direction.

Federal legislation remains uncertain. The political environment is hostile to new regulation, and AI hiring is not a top priority for either party. But civil rights enforcement under existing law is continuing, and a major verdict against Workday or another vendor could catalyze action regardless of the legislative environment.

Technology will also evolve. Some vendors are investing heavily in "responsible AI"—systems designed with fairness, transparency, and accountability built in from the beginning. Whether these systems can actually deliver on those promises remains to be seen, but the market demand for compliant AI is real and growing.

The winners in this environment will likely be companies that treat compliance as a strategic priority rather than a legal afterthought. Companies that understand their AI systems well enough to explain them. Companies that audit outcomes rigorously and intervene when patterns emerge. Companies that see candidates as people with rights, not data to be processed.

The losers will be companies that assume nothing will change, that bet on non-enforcement, that treat compliance as someone else's problem. Some of those bets will pay off in the short term. In the long term, the regulatory arc bends toward accountability.

The Human Question

In all the discussion of regulations and compliance costs and legal theories, it's worth pausing to ask what we're actually trying to achieve.

The goal isn't compliance for its own sake. The goal is fair hiring. The goal is a system where qualified candidates get considered regardless of race, age, gender, or disability. The goal is technology that extends human judgment rather than replacing it with biased automation.

AI can serve that goal—or undermine it. A well-designed AI system that surfaces candidates who would otherwise be overlooked can be more fair than human review alone. A poorly designed AI system that encodes historical biases and operates without oversight can be less fair than anything we've built before.

The regulations emerging around the world are attempts to push technology toward the better outcome. They're imperfect, inconsistent, sometimes confused. But they reflect a recognition that AI hiring is too important to leave unaccountable.

Marcus Thompson, the VP who received that letter about Workday, has spent the past year rebuilding his company's approach. They conducted an audit. Implemented monitoring. Created review processes. Updated notices. It cost them over $600,000 and countless hours of leadership time.

"I wish I'd done all this before we got scared into it," he told me when we spoke again recently. "We could have designed the system right from the beginning instead of retrofitting compliance onto something that wasn't built for it."

He paused, and I could hear the shift in his voice—from defensive to something else.

"But honestly? Even without the lawsuit, we should have been doing this. These are people's careers. These are people's lives. If we're going to let machines make decisions about who gets a chance and who doesn't, we owe them the work of making sure those machines are fair."

He's right. Whether regulators demand it or lawsuits force it or markets reward it, the work is the same: building systems worthy of the decisions we ask them to make. Systems that don't hide bias behind mathematics. Systems that treat every applicant as a person, not a data point.

The compliance reckoning is here. The question is whether you'll wait for your own 6:47 AM email—or start doing the work now.