99% of Fortune 500 companies use AI in recruitment. Fewer than half have structured compliance programs. That math is about to hurt.
2019. I'm sitting in a conference room at Liepin watching our legal team argue with product managers about a new AI feature. The feature would analyze candidate behavior patterns to predict job-hopping risk. Marketing loved it. Legal was sweating.
"What if they ask how it works?" someone asked.
"They won't."
They were right. For years, they were right. Candidates didn't ask. Regulators didn't ask. Nobody asked. We shipped features that made decisions about people's careers using methods we couldn't fully explain, and the only questions we got were about whether it was faster than the old way.
That era is over.
August 2023. The EEOC announces its first-ever settlement in an AI hiring discrimination case. A tutoring company called iTutorGroup had programmed its recruitment software to automatically reject women 55 and older and men 60 and older. Not subtly. Not through proxy variables. Just: if age > X and gender = Y, reject. More than 200 qualified applicants turned away. Cost: $365,000.
That settlement was pocket change for a company that size. But it was the shot across the bow. The message: we're watching now.
The Compliance Chaos Nobody Prepared For
Here's the thing about AI hiring regulations. There isn't one. There are dozens. And they don't agree on anything.
The EU says your AI recruiting tool is "high-risk" the moment it touches hiring. Not might be. Is. Automatically. The regulation explicitly lists systems "intended to be used for the recruitment or selection of natural persons." If you're using AI to screen resumes, schedule interviews, or rank candidates in Europe, you're in the high-risk category. Period.
What does high-risk mean? By August 2026: rigorous risk assessments, detailed technical documentation that explains how the thing actually works, human oversight mechanisms, registration in an EU database, and CE marking. You know, like a toaster. Except the toaster doesn't decide whether someone gets a job.
Penalties? Up to 35 million euros or 7% of global turnover. For Workday, that's potential exposure approaching half a billion dollars. For Oracle, even more. These aren't theoretical numbers designed to scare people. GDPR fines hit 1.2 billion euros in 2024 alone. Cumulative penalties since 2018: 5.88 billion euros. The EU enforces.
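If you want to sanity-check that exposure figure, the arithmetic is simple. A rough sketch: the revenue number below is my assumption (on the order of Workday's reported annual revenue, not a figure cited anywhere in this piece), and the 35 million / 7% ceiling is the Act's top tier, applied whichever-is-higher, so treat the result as an upper bound.

```python
def eu_ai_act_fine_ceiling(global_turnover_eur: float,
                           fixed_cap_eur: float = 35_000_000,
                           turnover_pct: float = 0.07) -> float:
    """Upper bound on an EU AI Act fine at the top tier:
    the fixed cap or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Assumption: annual revenue on the order of 7 billion euros.
# 7% of that is roughly 490 million -- "approaching half a billion."
print(f"{eu_ai_act_fine_ceiling(7_000_000_000):,.0f} EUR")
```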
Meanwhile, in America? We have a patchwork that makes the tax code look elegant.
New York: The Law That Doesn't Work
NYC Local Law 144 was supposed to be the model. Passed in 2021, effective July 2023, it requires any employer using an "Automated Employment Decision Tool" to get an independent bias audit, publish the results, and give candidates at least 10 business days' notice before the AI evaluates them.
Sounds reasonable. Implementation has been a disaster.
December 2025: the New York State Comptroller audits the law's enforcement. The Department of Consumer and Worker Protection surveyed 32 companies and found exactly one instance of non-compliance. One. The Comptroller's own review of those same companies? At least 17 potential violations.
The problem is structural and kind of hilarious in a dark way. The law requires you to post bias audits if you determine you need to comply. But if you just... don't determine that? There's no easy way to catch you. Job applicants don't know AI is screening them. Regulators don't have the resources to investigate every company. The whole thing relies on voluntary compliance, which is another way of saying it relies on nothing.
Even the definitions are a mess. What exactly is an "Automated Employment Decision Tool"? What counts as "independent" for an auditor? The law doesn't say clearly. So vendors, auditors, and employers are basically making it up as they go. Some companies have interpreted the law so narrowly that almost nothing they use counts as an AEDT. Legal creativity at its finest.
Penalties exist: $1,500 per violation, $10,000 per week of continued violation. But enforcement requires detection. Detection requires resources. Resources require budget. And the budget isn't there.
Illinois: The State That Learned to Sue
If New York shows what happens when regulation has no teeth, Illinois shows what happens when it has too many.
The Biometric Information Privacy Act—BIPA—is a 2008 law that nobody thought much about until lawyers figured out it had a private right of action. Meaning: individuals can sue. Directly. $1,000 per negligent violation. $5,000 if it's intentional or reckless.
Per. Violation.
Since 2019, over 1,500 BIPA lawsuits have been filed. In 2025 alone, 107 new class actions. Settlement hall of fame: Facebook at $650 million, TikTok at $92 million, Google at $100 million, Clearview AI at $51.75 million. These aren't regulatory fines. These are private lawsuits. Plaintiffs' attorneys have discovered a gold mine.
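If the per-violation framing sounds abstract, multiply it by a class. The numbers below are hypothetical, and how violations accrue per person is itself contested in the courts, so treat this as back-of-the-envelope only.

```python
# Hypothetical class: 10,000 Illinois applicants, each counted as one
# violation (per-person vs. per-scan accrual is itself a litigated issue).
class_size = 10_000
per_negligent_violation = 1_000   # BIPA statutory damages, negligent
per_reckless_violation = 5_000    # intentional or reckless

print(f"negligent exposure: ${class_size * per_negligent_violation:,}")  # $10,000,000
print(f"reckless exposure:  ${class_size * per_reckless_violation:,}")   # $50,000,000
```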
For AI recruiting, this matters because video interviewing tools often collect biometric data. Facial geometry. Voiceprints. The stuff BIPA was written to protect.
In Deyerler v. HireVue, plaintiffs alleged that HireVue's facial expression analysis violated BIPA. HireVue tried two defenses. First: our facial scans aren't "biometric identifiers" because we don't use them to identify specific people. Court said no—BIPA explicitly covers "facial geometry." Second: the Illinois AI Video Interview Act preempts BIPA. Court said no again—the laws impose "different but concurrent obligations."
The case is ongoing. But here's what I keep thinking about: HireVue already dropped facial analysis in 2020. Their chief data scientist admitted it contributed about 0.25% to predictive accuracy. One quarter of one percent. They were collecting biometric data, assuming massive legal risk, for almost nothing. And they're still getting sued for the years they did use it.
If you're using video interviewing with any AI analysis in Illinois, talk to a lawyer. Yesterday.
California: Privacy as Employment Law
California took a different approach. Instead of regulating AI specifically, they extended consumer privacy rights to employees and job applicants. The California Privacy Rights Act (CPRA), effective 2023, means candidates can now ask what data you collected, demand you delete it, and opt out of its sale.
This sounds manageable until you think about what AI recruiting actually does. You scrape candidate data from LinkedIn. You store it in your ATS. You run it through matching algorithms. You share it with hiring managers. Maybe you use it to train your models.
Under CPRA, a candidate can show up and say: tell me everything you have on me, where you got it, who you shared it with, and then delete all of it. Within 45 days. For every California resident who applies. That's not hypothetical. That's the law.
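Operationally, that means you need an inventory of every system that touches candidate data before you can answer a single request. Here's a minimal sketch of the shape of the problem; every connector and field name is hypothetical, since real ATS and sourcing tools expose nothing this clean.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateDataReport:
    """What you'd need to answer a CPRA access or deletion request."""
    candidate_id: str
    categories_collected: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)       # e.g. "LinkedIn scrape", "direct application"
    disclosed_to: list[str] = field(default_factory=list)  # e.g. "hiring manager", "assessment vendor"
    used_for_training: bool = False

def handle_cpra_request(candidate_id: str, systems: list, delete: bool = False) -> CandidateDataReport:
    """Walk a (hypothetical) inventory of connectors -- ATS, sourcing CRM,
    model feature store -- each assumed to expose lookup() and purge()."""
    report = CandidateDataReport(candidate_id)
    for system in systems:
        record = system.lookup(candidate_id)
        if record is None:
            continue
        report.categories_collected += record.categories
        report.sources += record.sources
        report.disclosed_to += record.shared_with
        report.used_for_training |= record.in_training_set
        if delete:
            system.purge(candidate_id)
    return report
```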
Most ATS systems aren't built for this. They're built to hoard data, not purge it. Deletion requests mean manual work, which means cost, which means companies are going to start getting very selective about which California candidates they even want in their pipeline. The unintended consequence of privacy protection might be privacy discrimination.
Colorado: The Preview of Everything
Colorado's AI Act, effective February 2026, is probably the most comprehensive state-level AI regulation in the country. It's also the closest thing we have to a preview of where federal regulation might eventually go.
The framework requires "reasonable care" to protect consumers from algorithmic discrimination. For high-risk AI (which explicitly includes employment), that means: risk management programs, annual impact assessments, public disclosure of what AI systems you use and how you manage discrimination risks, and reporting any discovered discrimination to the Attorney General within 90 days.
The notice requirements are intense. You have to publish on your website what high-risk AI you use, how you manage risks, and what data you collect. You have to notify affected Colorado residents directly. If you discover your AI is discriminating, you have to tell the state.
There's a safe harbor if you follow the NIST AI Risk Management Framework, which is more guidance than most jurisdictions provide. There's an exemption for companies under 50 employees who don't train AI on their own data. But for any company of meaningful size, Colorado creates significant new obligations.
What I find interesting: legislators tried to soften the law in 2025 with Senate Bill 318. They proposed narrowing the definition of algorithmic discrimination, creating carve-outs for small businesses and open-source AI, clarifying appeal rights. The effort failed. The law stays as written.
Colorado is betting that strict AI regulation is good policy. We'll find out in 2026 whether they're right.
The Litigation Wave Nobody's Ready For
Forget regulation for a minute. The real action might be in the courts.
Mobley v. Workday is the case everyone's watching. Filed February 2023. The plaintiff, Derek Mobley, is African American, over 40, and disabled. He claims to have applied for over 80 jobs that used Workday's screening tool. Rejected every time. His theory: the AI discriminates based on race, age, and disability.
In May 2025, a federal judge allowed the age discrimination claims to proceed on a nationwide collective basis, preliminarily certifying a group believed to include millions of applicants. Millions. If Workday loses or settles for a significant amount, the floodgates open.
But here's the development that should terrify every AI vendor: a July 2024 ruling in the case held that AI vendors themselves can be held liable for discrimination. Not just the employers using the tools. The vendors who built them.
This changes everything. Previously, vendors could hide behind their customers. "We just provide the tool. How they use it is their responsibility." Now? Direct exposure. If your algorithm discriminates, you're on the hook even if you never made a single hiring decision.
The research fueling these cases is damning. Studies show AI systems favored white-associated names 85% of the time versus Black-associated names just 9% of the time. Male names preferred 52% of the time versus female names 11%. The systems never—never—preferred Black male names over white male names. Not once.
Under Title VII's disparate impact framework, that's potentially devastating evidence. You don't need to prove intent. You just need to show the effect.
What HireVue Taught Us (And What We Ignored)
The HireVue story is the one everyone in this industry should study. For years, they were the leader in AI video interviewing. Unilever, Goldman Sachs, Hilton—major employers used their tech. The pitch was seductive: AI analyzes candidates' facial expressions, body language, and word choice to predict job performance. Science! Efficiency! The future!
Then the criticism started. In 2019, EPIC filed an FTC complaint calling the technology "unfair and deceptive." AI researchers piled on. Meredith Whittaker from the AI Now Institute called it "profoundly disturbing" and "pseudoscience." Princeton's Arvind Narayanan said it was "AI whose only conceivable purpose is to perpetuate societal biases."
The scientific argument was specific: emotion recognition from facial expressions doesn't work. Emotions don't map reliably to facial movements. Cultural differences, disabilities, neurodivergence—the systems would systematically disadvantage anyone who didn't express emotions the way the training data expected.
In 2020, HireVue quietly dropped facial analysis. Their chief data scientist admitted it contributed about 0.25% to predictive accuracy. Zero point two five percent. For customer-facing roles where you'd expect nonverbal communication to matter most? Four percent.
CEO Kevin Parker's explanation was admirably honest: "When you put that in the context of the concerns people were having, it wasn't worth the incremental value."
Later, they also dropped vocal tone analysis. Same reason—minimal predictive value.
Here's my question: how many other AI recruiting tools are like this? Features that sound impressive, collect sensitive data, create legal risk, and barely move the needle on actual hiring outcomes? How many vendors know their fancy AI is mostly theater but keep selling it because customers don't ask hard questions?
I suspect the answer is: most of them.
What I've Seen From The Inside
Five years at BOSS Zhipin and Liepin gave me a particular perspective on this. We weren't dealing with GDPR or BIPA—China has its own regulatory framework that's different in important ways. But the underlying tensions are universal.
At BOSS Zhipin, during hypergrowth, we had 2 million people chatting with hiring managers daily. The AI features we built—matching, recommendations, ranking—they were optimized for engagement. Get candidates talking to employers. Get conversations started. The metric that mattered was whether users came back the next day.
Nobody asked whether the AI was fair. The concept barely existed in our discussions. We had data scientists looking at model performance, but "performance" meant prediction accuracy, not demographic parity. We weren't evil. We just weren't thinking about it. It wasn't part of the culture, the metrics, or the incentives.
At Liepin, running the online platform for a more enterprise-focused product, I started seeing the other side. Big clients asking questions. What data do you collect? How does the matching work? Can you explain why this candidate was recommended? Usually we could answer. Sometimes we couldn't. The explanations that satisfied engineers rarely satisfied HR directors or legal teams.
The companies asking the hardest questions were usually the multinationals—firms with European operations who already dealt with GDPR, or American companies with particularly active legal departments. Domestic Chinese companies rarely asked. The regulatory pressure wasn't there.
That's changing now. China's Personal Information Protection Law and the Algorithm Recommendation Regulations create new requirements. But the enforcement culture is different. So far, anyway.
The Real Compliance Gap
Let's talk numbers. 99% of Fortune 500 companies use AI in recruiting. Only 48% have structured compliance programs. 72% struggle with transparency requirements. 74% are uncertain about which regulations even apply to them.
This gap isn't ignorance. It's calculation.
Compliance is expensive. A proper AI governance program requires legal review, technical documentation, regular audits, monitoring systems, and trained staff. For a multinational, that's millions annually. The alternative—hoping you don't get caught—costs nothing unless you get caught.
And so far? Most companies haven't gotten caught. NYC Local Law 144 has been active for two years with minimal enforcement. GDPR fines hit big targets but haven't touched most HR AI users. The Workday lawsuit is ongoing, not resolved. From a pure risk-reward perspective, ignoring compliance has been the rational choice.
That calculation is changing. The EU AI Act's high-risk obligations take effect in August 2026. More states are following Colorado's model. Plaintiffs' attorneys have identified AI discrimination as a growth area. The infrastructure for enforcement is being built.
The question isn't whether the crackdown is coming. It's whether you want to be compliant before it arrives or after.
What Actually Works
I've seen companies do this well. The patterns are consistent:
They map their exposure first. Where are your candidates? What laws apply? If you hire in the EU, NYC, California, Illinois, and Colorado, you have five different compliance frameworks. You can't build a program without knowing what you're building it for.
They audit vendors aggressively. Most AI recruiting is third-party tools. Your vendor's compliance failures become your liability. Ask hard questions: What data do you collect? How is it used? Can you explain decisions? What bias testing have you done? If vendors can't answer, that's information.
They document everything. Not because documentation is fun, but because it's the only defense when regulators or plaintiffs come knocking. What decisions did your AI make? Why? What data informed them? Who reviewed? When? The company that can answer these questions has leverage. The company that can't is vulnerable.
They build in human oversight. Every major regulatory framework requires it. But beyond compliance, human oversight catches problems. AI systems drift. Data shifts. What was fair last year might not be fair this year. Regular human review is insurance against algorithmic decay.
They test for bias regularly. The EEOC's four-fifths rule provides a benchmark: if the selection rate for any group is less than 80% of the rate for the most-selected group, you have a potential problem. Run the numbers. Know what your AI is actually doing. It's harder to be sued for discrimination you've already identified and fixed than for discrimination you never looked for.
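The check itself is a few lines of code. A minimal sketch with made-up pipeline numbers, not a substitute for a proper adverse impact analysis:

```python
def four_fifths_ratios(applied: dict[str, int], selected: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 flag potential adverse impact under the four-fifths rule."""
    rates = {group: selected[group] / applied[group] for group in applied}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening funnel, not real data.
applied = {"group_a": 400, "group_b": 350}
selected = {"group_a": 120, "group_b": 60}
print(four_fifths_ratios(applied, selected))
# {'group_a': 1.0, 'group_b': 0.571...}  -> group_b is below 0.8: investigate
```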
What I'm Building Toward
At OpenJobs AI, we started with compliance as a core requirement, not an afterthought. That's partly because I'd seen what happens when you don't. But it's also a bet on where the market is going.
Every decision our agents make is traceable. Data sources documented. Processing steps logged. Decision rationales recorded. When a candidate asks why they weren't selected, we can actually explain. When a regulator audits us, we have the receipts.
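To make "traceable" concrete, here's the kind of record that makes those explanations possible. A generic sketch with illustrative field names, not our production schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    """One screening decision, captured with enough context to explain it
    later -- to a candidate, an auditor, or opposing counsel."""
    candidate_id: str
    decision: str               # e.g. "advance", "reject", "hold"
    model_version: str
    data_sources: list[str]     # where every input came from
    features_used: list[str]
    rationale: str              # human-readable reason for the outcome
    reviewed_by: str | None     # human reviewer, if any
    decided_at: datetime
```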
Is this more expensive than building a black box? Absolutely. Does it slow down development? Sometimes. But I believe explainable AI is better AI. The discipline of documentation forces clearer thinking. The requirement to justify decisions forces more justifiable decisions.
More practically: enterprise buyers are starting to ask about compliance. Not all of them, not yet. But the ones who've talked to lawyers, the ones who've seen the Workday lawsuit, the ones operating in regulated industries—they want to know. Being able to answer those questions is becoming a sales advantage.
The Question Nobody Wants to Ask
Here's what keeps me up: does any of this AI actually improve hiring?
Not make it faster. Not make it cheaper. Actually improve it. As in: do companies using AI recruiting tools hire better people who perform better and stay longer than companies that don't?
I don't know. And I've looked. The evidence is thin. Most studies measure efficiency: time-to-fill reduced by X%, cost-per-hire down Y%. But efficiency isn't quality. Hiring the wrong person faster is still hiring the wrong person.
The few studies that look at hiring quality show mixed results. Some AI tools appear to identify better candidates. Others appear to just identify candidates who interview well, which isn't the same thing. The connection between AI screening and long-term job performance is... fuzzy.
Meanwhile, we know AI systems can discriminate. The research is solid. Biased training data produces biased outputs. Historical hiring patterns encode historical biases. Systems optimized on the wrong outcomes optimize for the wrong candidates.
So we have: tools that might improve hiring, probably make it faster, definitely create legal risk, and occasionally discriminate. The risk-benefit calculation is genuinely complicated.
The regulatory response makes sense in this context. Governments aren't banning AI in hiring. They're saying: prove it works. Prove it's fair. Document what you're doing. Let people ask questions. That seems... reasonable, actually.
The Bottom Line
The regulatory landscape for AI recruitment is a mess. Overlapping jurisdictions, inconsistent definitions, uneven enforcement. Companies operating globally face EU AI Act requirements, GDPR obligations, NYC bias audits, California privacy rights, Illinois biometric lawsuits, and Colorado disclosure requirements. All at once. All different.
Most organizations are not ready. The gap between AI adoption (near universal at the enterprise level) and compliance program maturity (barely half have one) is staggering. That gap represents risk—financial, legal, reputational.
But it also represents opportunity. The companies that figure this out first will have advantages. Vendors that build compliant tools will win enterprise deals. Organizations that demonstrate responsible AI use will have stronger employer brands. Compliance is becoming competitive advantage.
The $365,000 iTutorGroup settlement was the warning shot. The Workday class action is the escalation. EU AI Act obligations for high-risk hiring systems hit in August 2026. State regulations are proliferating. The window for getting your house in order is narrowing.
The technology is powerful. The stakes are high. The regulations are real and multiplying.
Time to take it seriously. Or wait for the lawsuit. Your choice.