The email arrived at 4:47 AM. Nobody sends good news at 4:47 AM.
I was awake because my daughter had a fever. I was sitting on the edge of her bed, watching her breathe, when my phone buzzed. Marcus Thompson—a VP of Talent Acquisition I'd consulted for twice—had forwarded me a cease-and-desist letter.
Not from a competitor. Not from a disgruntled candidate. From Rosen Legal Group in Manhattan, representing what appeared to be 47 job applicants who claimed they had been screened out by an AI system and never told.
"We posted jobs in New York," Marcus wrote. "Our ATS uses AI screening. Apparently that's a problem now?"
Three question marks. I remember counting them. Like the extra punctuation could make the situation less real.
It was July 2023. NYC Local Law 144 had gone into enforcement exactly three weeks earlier. Marcus's company—a 340-person fintech in Austin that processed payroll for small businesses—had never heard of it. Neither had their ATS vendor, apparently. Or if they had, nobody thought to mention it to customers who were, at that exact moment, violating a law they didn't know existed.
That email cost the company $127,000 in legal fees. A scrambled audit that took four people away from their actual jobs for six weeks. A last-minute settlement before discovery that would have been far messier. Marcus's boss asked him, in a meeting I heard about secondhand, why his team hadn't "anticipated the regulatory risk."
They were lucky. The claims settled. But here's the thing that kept me awake for weeks after: I should have warned him. I knew about Local Law 144. I'd been tracking it for months. I just assumed... what? That everyone was tracking it? That his vendor would tell him? That compliance was someone else's job?
I was part of the problem. I'd spent years building HR technology products without thinking nearly hard enough about what happened when those products made mistakes about people's lives.
This article is my attempt to make up for that—not as a lawyer (I'm not one), but as someone who builds AI recruitment tools and has had to figure out compliance the hard way. Consider it the guide I should have written three years ago, before Marcus's 4:47 AM email, before I understood how badly this industry was about to be disrupted.
The Regulatory Earthquake: Why 2025 Changed Everything
Let me start with a number that should terrify anyone using AI in hiring: 88%.
According to the World Economic Forum's March 2025 report, roughly 88% of companies now use AI for initial candidate screening. That's not a future projection. That's right now. Nearly nine out of ten organizations are making algorithmic decisions about human beings' livelihoods.
Here's another number: 70%.
That's the percentage of companies—according to an October 2024 survey—that allow AI tools to reject candidates without any human oversight. Seven in ten. The machine says no, and no human being ever reviews that decision. The candidate never knows why. In many cases, the company doesn't know why either.
For years, this operated in a regulatory vacuum. Companies deployed AI screening tools with roughly the same due diligence they applied to buying office supplies. If the vendor said it worked, it worked. If the vendor said it was legal, it was legal. Nobody asked for bias audits because nobody required them. Nobody disclosed AI usage because nobody demanded it.
That era ended in 2023. It died completely in 2025.
The regulatory changes that took effect this year don't just tweak the rules. They fundamentally redefine what it means to use AI in employment decisions. And the penalties for getting it wrong have gone from theoretical to catastrophic.
The iTutorGroup Case: When the Algorithm Said You Were Too Old
On August 9, 2023, the EEOC announced its first-ever settlement involving discriminatory AI in hiring. The defendant was iTutorGroup, an online tutoring company. The amount was $365,000.
The number sounds modest. The story behind it isn't.
I've talked to people who work in EEOC enforcement. One of them described what investigators found: iTutorGroup had programmed their AI recruitment software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. Not flag them for review. Not generate a warning. Just... delete them. Silently. Before any human being ever saw the application.
Over 200 qualified applicants were rejected purely because of their age.
Think about that for a second. You're 56 years old. You've been teaching English for two decades. You apply for a remote tutoring job—exactly the kind of work you're qualified for. You get a rejection email, or maybe just silence. You assume you weren't good enough. Maybe your resume needed work. Maybe your cover letter was weak. Maybe you're just not competitive anymore.
You never know the real reason. The real reason was a single line of code that checked your birthdate and threw your application away.
One applicant figured it out by accident. After being rejected, she resubmitted an identical application—same qualifications, same experience, same everything—but with a different birth date. A younger birth date. Within 48 hours, she received an interview invitation.
I think about her sometimes. The anger she must have felt. The validation mixed with betrayal. All those months wondering what was wrong with her, and the answer was: nothing. The system was built to exclude her before anyone ever looked.
That discovery triggered the EEOC investigation. The investigation led to the settlement. And the settlement led to EEOC Chair Charlotte Burrows delivering a warning that should be tattooed on the forehead of every HR technology executive: "Even when technology automates the discrimination, the employer is still responsible."
I need you to really hear that sentence. Even when technology automates the discrimination, the employer is still responsible.
Not the vendor. Not the algorithm developer. Not the data scientist who trained the model. You.
You can outsource your screening. You cannot outsource your liability.
The Ghost of Amazon: Why Training Data Is a Legal Time Bomb
If iTutorGroup was intentional discrimination, what Amazon did was something scarier: discrimination that nobody meant to create.
In 2018, Reuters broke the story. Amazon had been building an AI recruiting tool since 2014—a system that rated candidates from one to five stars, just like product reviews on their retail site. By 2015, they realized it was systematically downgrading women.
The AI had been trained on ten years of Amazon's own hiring data. Because tech is male-dominated, that data was overwhelmingly male. The algorithm learned—from Amazon's own historical decisions—that male candidates were preferable.
It started penalizing resumes that mentioned "women's"—as in "women's chess club captain" or "women's studies." It downgraded graduates of all-women's colleges. It favored verbs like "executed" and "captured," language more commonly found on male engineers' resumes.
I think about what that might have meant for someone like Sarah Chen—not her real name, but someone I knew at another tech company. Sarah graduated from Smith College, one of the all-women's schools. She captained the chess club there. She was one of the best engineers I ever worked with. Under Amazon's system, her resume would have been automatically downgraded before anyone ever read it.
Amazon's engineers tried to fix the bias. They removed explicit gender indicators. They adjusted the training data. But the bias kept emerging through proxy variables—factors that correlated with gender even if they didn't directly identify it. By 2017, they gave up and killed the project.
The tool never launched publicly. But here's what haunts me: how many similar tools are out there right now, processing millions of candidates, with the same bias baked into their training data—and nobody has discovered it yet?
Under Title VII of the Civil Rights Act, you don't need discriminatory intent to create illegal discrimination. "Disparate impact"—practices that disproportionately affect protected classes, regardless of intent—has been prohibited since 1971. Most employers using AI screening tools have never asked their vendors what training data was used. Most vendors have never volunteered it. That mutual ignorance was never a legal defense.
NYC Local Law 144: The Blueprint That Scared Everyone
New York City's Local Law 144, which went into enforcement on July 5, 2023, was the first law in the United States to mandate bias audits for AI hiring tools. It created requirements that seemed radical at the time and now look like the minimum standard.
The law applies to "Automated Employment Decision Tools" (AEDTs)—any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output (a score, classification, or recommendation) used to substantially assist or replace discretionary decision-making in employment.
If you're using AI to screen resumes, rank candidates, or recommend hiring decisions in New York City, you're almost certainly covered.
The requirements are straightforward but demanding:
Annual Bias Audits. Before you use any AEDT, it must have undergone a bias audit by an independent auditor within the past year. The audit must calculate selection rates and impact ratios across sex categories, race/ethnicity categories, and intersectional categories (race/ethnicity broken down by sex).
Public Disclosure. The audit results must be publicly available on your website. Not behind a login. Not in a buried PDF. Publicly accessible to anyone who wants to see them.
Candidate Notice. NYC residents must receive notification at least 10 business days before an AEDT is used in their evaluation. The notice must explain that an automated tool will be used, describe what qualifications or characteristics it assesses, and provide instructions for requesting an alternative evaluation process.
The penalties seem modest: $500 for a first violation, up to $1,500 for subsequent violations. But here's the catch—each day you use an AEDT in violation, and each candidate you fail to notify, constitutes a separate violation. If you're hiring at scale in New York City without compliance, the fines accumulate fast.
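To make "accumulate fast" concrete, here is a toy calculation with purely illustrative numbers, assuming each day of non-compliant use and each missed candidate notice counts as a separate violation:

```python
# Hypothetical Local Law 144 exposure: 90 days of running a non-compliant AEDT
# plus 200 NYC candidates who never received the required notice.
# Assumes $500 for the first violation and $1,500 for each one after that.
days_in_violation = 90
candidates_not_notified = 200
violations = days_in_violation + candidates_not_notified

exposure = 500 + (violations - 1) * 1_500
print(f"{violations} violations -> up to ${exposure:,}")  # 290 violations -> up to $434,000
```

That's a toy model of the penalty structure, not legal math, but it shows why "only $500" is misleading.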
What's more striking is the enforcement gap revealed by a December 2025 audit from New York State Comptroller Thomas DiNapoli. The audit found that the Department of Consumer and Worker Protection (DCWP) "had trouble identifying non-compliance with the law, particularly when employers did not disclose AI use or post bias audits."
In other words: many employers aren't complying, and the enforcement mechanism can't easily catch them.
But "hard to enforce" and "safe to ignore" are very different things. Class action attorneys are already identifying non-compliant employers. The regulatory weakness creates litigation opportunity—which is often worse for defendants than regulatory enforcement.
The EU AI Act: High-Risk Classification and What It Actually Means
If NYC Local Law 144 was the first salvo, the EU AI Act is the nuclear option.
Under the AI Act—the first comprehensive AI regulatory framework in the world—any AI system used for recruitment or HR decision-making is automatically classified as "high-risk." This isn't a risk assessment you conduct. It's a predetermined classification based on the use case itself.
Annex III of the Act specifically lists "AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates" as high-risk systems subject to the most stringent requirements.
The timeline has already begun:
February 2, 2025: Prohibitions on "unacceptable risk" AI practices took effect. This includes emotion recognition in candidate interviews or video assessments. If you were using HireVue-style tools that analyzed facial expressions, tone of voice, or emotional cues to assess candidates—that's now prohibited in the EU.
August 2, 2025: Requirements for general-purpose AI models became applicable.
August 2, 2026: Full requirements for high-risk AI systems take effect. By this date, any AI system used in EU hiring must comply with extensive documentation, transparency, human oversight, and accuracy requirements.
The penalty structure makes GDPR look gentle. Administrative fines under the AI Act can reach EUR 35 million or 7% of annual worldwide turnover—whichever is higher. For a company with $500 million in revenue, that's a potential $35 million fine for AI Act violations alone.
And here's the jurisdictional reality that catches many US employers off guard: if you're hiring EU residents—even if your company is based in San Francisco—the AI Act applies to you. Like GDPR, its reach turns on the people it affects and where the system's output is used, not on where the company is located.
GDPR and AI Hiring: The Rules Nobody Followed (Including Me)
GDPR has been in effect since 2018. Its requirements for AI in hiring have been clear from the beginning. Most employers have been ignoring them for seven years.
I need to be honest here: I was one of them. When GDPR went into effect, I was at a company that processed candidate data for clients across Europe. We focused on the obvious stuff—cookie banners, privacy policies, data processing agreements. The AI-specific requirements? We kind of handwaved them. Nobody was enforcing them strictly. Our competitors weren't complying. The clients weren't asking.
That was a mistake. We were lucky not to get caught. And when I started my current company, I swore we'd do it differently.
Here's what GDPR actually requires—the parts most companies skip:
Legal Basis. You need a lawful basis for processing candidate data through AI systems. Consent is one option, but it must be freely given—and there are legitimate questions about whether job applicants can truly give "free" consent when their job prospects depend on it. Legitimate interest is another basis, but it requires demonstrating that your interest in AI screening doesn't override candidates' privacy rights.
Data Protection Impact Assessments (DPIAs). Before implementing any AI tool in recruitment, you must conduct a DPIA. This isn't optional. It's a documented assessment of the risks, the safeguards, and the necessity of the processing. Most companies using AI screening have never completed one.
Data Minimization. You can only collect data that's actually necessary for the evaluation. That ambitious AI tool that scrapes candidates' social media, analyzes their writing style from public posts, and correlates their network connections? Almost certainly a GDPR violation.
Right to Explanation. Under Article 22, individuals have the right not to be subject to decisions based solely on automated processing that significantly affect them—and the right to obtain human intervention, express their point of view, and contest the decision. AI-only screening with no human review violates this directly.
Data Subject Rights. Candidates can request access to their data, demand corrections, and request deletion. You must respond within one month. If your AI vendor can't tell you exactly what data they collected about a specific candidate, you can't comply with these requests.
The enforcement is getting more aggressive. The UK's Information Commissioner's Office published guidance in November 2024 specifically addressing AI in recruitment. Regulators are no longer treating this as a theoretical concern.
California: Where CCPA Meets the "No Robo Bosses" Future
California's regulatory approach to AI hiring has evolved from tentative to comprehensive.
The California Consumer Privacy Act (CCPA) originally exempted employment data. That exemption expired on January 1, 2023. Since then, California job applicants have had full privacy rights over their data—including data processed by AI systems.
In July 2025, the California Privacy Protection Agency finalized new regulations specifically addressing Automated Decision-Making Technology (ADMT). The definition is broad: any technology that processes personal information to replace or substantially replace human decision-making. If you're using AI to influence hiring decisions in California—even if humans make the final call—you're likely covered.
The requirements include mandatory risk assessments before deployment, notice to candidates and employees before using ADMT, and documentation requirements that will force companies to actually understand how their AI tools work.
Employers currently using ADMT have until January 1, 2027 to comply with the notice requirements. That sounds like a generous timeline until you realize how much work compliance actually requires.
But California didn't stop there. The legislature also passed the "No Robo Bosses" Act (SB 7), which goes further than any previous state law:
It prohibits employers from relying solely on AI in employment decisions. A human must be meaningfully involved—not just rubber-stamping algorithmic recommendations.
It requires employers to maintain a list of all automated decision systems in use and notify workers within 30 days of deployment. If AI will be used in hiring decisions, applicants must be informed.
It creates explicit liability under the California Fair Employment and Housing Act (FEHA) for employers whose AI systems result in discrimination—even if the employer didn't intend to discriminate.
California has always been a regulatory leader. When California moves, other states follow. The "No Robo Bosses" framework is likely a preview of where national regulation is heading.
Illinois BIPA: Four Words That Cost Companies Millions
"Facial geometry" and "without consent."
If you're using video interview tools with any form of facial analysis, Illinois should terrify you.
The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, requires explicit written consent before collecting biometric identifiers—including scans of face geometry. The law allows for statutory damages of $1,000-$5,000 per violation, and crucially, individuals can sue directly without waiting for regulatory action.
This created a litigation industry. BIPA and similar biometric privacy laws have produced some of the largest privacy settlements in history: Facebook paid $650 million to settle an Illinois BIPA class action over photo tagging, Meta paid $1.4 billion to Texas under that state's separate biometric statute, and Clearview AI agreed to a roughly $50 million settlement in March 2025 for scraping facial images from social media.
In hiring, the litigation target is video interview platforms.
HireVue—the dominant player in video interviewing—was hit with a class action in 2022 (Deyerler v. HireVue Inc.). The plaintiffs alleged that HireVue captured and analyzed facial geometry during video interviews without proper consent. In February 2024, a court largely denied HireVue's motion to dismiss, allowing most claims to proceed.
The court's reasoning was significant: it rejected HireVue's argument that facial scans aren't "biometric identifiers" because they're not used for identification purposes. The statute lists scans of face geometry as biometric identifiers—it doesn't require that the scan be used for any particular purpose.
HireVue had already dropped facial analysis features in 2021, partly in response to mounting criticism and legal risk. But the lawsuit concerns historical practices—and the precedent affects any employer who used similar tools.
Illinois also passed the Artificial Intelligence Video Interview Act (AIVIA), which requires employers to notify candidates when AI is used to analyze video interviews, explain how the AI works, and obtain consent before the interview. The Deyerler court ruled that AIVIA and BIPA impose "concurrent" obligations—complying with one doesn't exempt you from the other.
Colorado's AI Act: The Risk Management Mandate
Colorado's AI Act, set to take effect February 1, 2026, takes a different approach than the specific-requirement frameworks of NYC or California. Instead of prescribing exact procedures, it mandates "reasonable care" to protect against algorithmic discrimination.
What constitutes "reasonable care"? The law requires:
Risk management policies. Developers and deployers of high-risk AI systems must implement documented policies for managing algorithmic discrimination risk.
Annual impact assessments. Employers using AI for consequential decisions (including hiring, promotions, and terminations) must conduct regular impact assessments.
Consumer disclosure. Candidates must be notified when AI is being used to make consequential decisions about them.
The law imposes civil liability on employers who violate these requirements. It doesn't specify exact penalties—leaving that to the courts—but the exposure is significant.
Colorado's approach reflects an emerging trend: rather than trying to anticipate every AI use case, regulators are establishing general principles (transparency, accountability, non-discrimination) and holding companies responsible for meeting them. This flexibility may be harder to comply with than specific rules, because there's no checklist to follow. You have to actually think about whether your practices meet the standard.
The EEOC: Guidance Gone, Laws Remain
In January 2025, the new administration moved quickly to roll back federal AI guidance. The EEOC removed AI-related technical assistance documents from its website. The Biden administration's executive order on AI was revoked.
Some employers interpreted this as a green light. It wasn't.
Title VII of the Civil Rights Act didn't change. The Americans with Disabilities Act didn't change. The Age Discrimination in Employment Act didn't change. The laws prohibiting employment discrimination apply regardless of whether guidance documents exist to explain them.
What changed is certainty about how the EEOC will interpret those laws. But the underlying liability remains. An employer whose AI tool discriminates based on race, sex, age, or disability is still violating federal law—even if there's no guidance document explaining that.
The courts are filling the interpretive gap. In Mobley v. Workday Inc., a pending case that could reshape the industry, plaintiffs are arguing that AI vendors themselves can be held liable as "employment agencies" under Title VII. If that interpretation holds, it would dramatically expand who can be sued over discriminatory AI.
The EEOC's amicus brief in that case—filed before the administration change—argued that AI tool developers are indeed subject to Title VII. That brief remains on the record even if the EEOC website no longer hosts related guidance.
Mobley v. Workday: One Man's 100 Rejections
Derek Mobley graduated from Morehouse College—a historically Black college with alumni including Martin Luther King Jr. and Spike Lee. He's got a degree, work experience, professional credentials. He also has anxiety and depression, conditions that qualify as disabilities under the ADA.
Derek applied for over 100 jobs on platforms that used Workday's AI-powered applicant screening. He was rejected for all of them.
One hundred applications. One hundred rejections. Think about what that does to a person.
I don't know Derek personally. But I know people who've been through similar experiences—the mounting desperation, the self-doubt, the way you start second-guessing everything about yourself. Was my resume formatted wrong? Did I use the wrong keywords? Am I just not good enough?
Derek eventually concluded it wasn't him. It was the algorithm. And he sued—not just the employers, but Workday itself.
His argument is elegant: when Workday's AI screens and ranks candidates, it's performing a function traditionally done by employment agencies. It's matching people to jobs, deciding who gets considered and who gets filtered out. If a human staffing agency rejected candidates based on race, age, or disability, that agency would be liable under Title VII. Why should an algorithmic agency be different?
Workday argues it's just software—that employers make the actual decisions. But Derek points out that in many cases, employers rely entirely on algorithmic rankings. The hiring manager sees a pre-filtered, pre-sorted list. By the time a human looks, the decision has already been made.
Four more plaintiffs have joined the lawsuit. The case is working through federal court.
Here's where I'll be honest about my own uncertainty: I don't know if Derek is right. Maybe his qualifications genuinely didn't match the jobs. Maybe 100 rejections is just bad luck in a competitive market. I've seen excellent candidates get rejected repeatedly for reasons having nothing to do with discrimination.
But I also know that if Workday's AI had bias baked into its training data—like Amazon's did—Derek would never know. None of us would. The rejection would look like an individual failure, not a systematic one.
If Derek wins—if courts accept that AI vendors can be "employment agencies"—every company selling AI hiring tools could face direct lawsuits from rejected candidates. That would be an extinction-level event for this industry. And honestly? Maybe it should be.
Beyond the US and EU: The Global Patchwork
American employers often focus on US regulations. European employers focus on GDPR and the AI Act. But AI recruitment tools operate globally, and the regulatory landscape is fragmenting.
United Kingdom. Post-Brexit, the UK is developing its own AI regulatory framework separate from the EU AI Act. The UK's approach is more sector-specific and less prescriptive than the EU's, but the Information Commissioner's Office (ICO) has been increasingly active on AI in employment. Their November 2024 guidance specifically addresses AI in recruitment, emphasizing transparency, fairness, and the need for meaningful human oversight. UK GDPR—which mirrors EU GDPR but is enforced independently—continues to apply to candidate data processing.
Canada. Canada's proposed Artificial Intelligence and Data Act (AIDA) would create federal requirements for high-impact AI systems, including those used in employment. While the law hasn't passed yet, Canadian employers are already subject to PIPEDA (federal privacy law) and various provincial privacy laws that require consent and transparency for personal information processing—including algorithmic decision-making.
Australia. Australia has proposed an AI Ethics Framework and is considering mandatory obligations for high-risk AI applications. While current requirements are voluntary, the Privacy Act amendments expanding to employee records (expected in 2025) will increase compliance obligations for AI hiring tools.
Singapore. Singapore's Model AI Governance Framework provides detailed guidance on deploying AI responsibly, including in HR contexts. While not legally binding, it's considered best practice, and the Personal Data Protection Commission has enforcement powers over data processing activities.
Brazil. Brazil's General Data Protection Law (LGPD) includes provisions on automated decision-making similar to GDPR Article 22, giving individuals rights to request review of automated decisions and explanation of the logic involved.
The pattern is clear: every major jurisdiction is moving toward requiring transparency, accountability, and fairness in AI hiring. The specifics differ, but the direction is universal. Global employers can't pick the most permissive jurisdiction and hope it applies everywhere.
The Intersection Problem: When Multiple Laws Apply
Here's a scenario that actually happened to a client:
A company headquartered in Texas uses an ATS with AI screening features. They have offices in New York, California, and Berlin. They post jobs that candidates from all locations can apply to.
Which laws apply?
The answer: all of them, simultaneously.
NYC Local Law 144 applies to NYC resident candidates. CCPA applies to California resident candidates. GDPR applies to EU resident candidates. The EU AI Act's high-risk requirements will apply to any AI used to evaluate EU residents, regardless of where the employer is located. Federal anti-discrimination law applies to all US candidates. State-specific laws—Illinois BIPA for video interviews, Colorado's AI Act starting 2026—layer on top.
You can't run different compliance programs for different candidates. You need a unified approach that meets the strictest applicable standard.
In practice, this means (a rough sketch of this routing logic follows the list):
- If you hire in the EU, you effectively need EU AI Act compliance globally (it's too hard to segregate systems)
- If you hire in NYC, you need bias audits for your AI tools
- If you use video interviews with Illinois candidates, you need BIPA-compliant consent
- If you hire in California, you need CCPA-compliant data handling and upcoming ADMT notice
- If you hire anywhere in the US, federal anti-discrimination law applies to AI-driven decisions
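To make the "strictest applicable standard" idea concrete, here's a rough sketch of how I think about it: take the union of every obligation that any candidate's jurisdiction triggers. The jurisdiction names and requirement labels are illustrative shorthand, not a legal checklist.

```python
# Illustrative only: map candidate jurisdictions to the obligations they trigger,
# then take the union across everywhere a posting's candidates might live.
REQUIREMENTS = {
    "nyc": {"annual independent bias audit", "public audit results", "10-business-day AEDT notice"},
    "illinois": {"BIPA written consent for biometric/video analysis", "AIVIA notice and consent"},
    "california": {"CCPA data rights handling", "ADMT pre-use notice"},
    "colorado": {"risk management policy", "annual impact assessment", "AI-use disclosure"},
    "eu": {"GDPR Art. 22 human review", "DPIA", "EU AI Act high-risk documentation"},
    "us_federal": {"Title VII / ADA / ADEA disparate-impact exposure"},
}

def obligations(candidate_jurisdictions: set[str]) -> set[str]:
    """Union of every requirement triggered by any candidate location."""
    return set().union(*(REQUIREMENTS.get(j, set()) for j in candidate_jurisdictions))

# A US employer posting nationally plus Berlin, like the client in this scenario:
print(sorted(obligations({"us_federal", "nyc", "california", "eu"})))
```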
The complexity is daunting. But the alternative—trying to maintain different compliance standards for different jurisdictions—is operationally impossible and legally dangerous.
This is exactly what happened to Marcus, by the way—the guy from the opening whose 4:47 AM email started this whole article. His Austin-based fintech had posted jobs nationally without realizing that NYC candidates triggered different compliance requirements. A unified approach—meeting the strictest standard everywhere—would have cost less than the legal fees from the violation.
The Mistakes I Keep Seeing (Including Ones I've Made)
Let me tell you about a conversation I had last month with the head of HR at a Series B startup. She was proud of their compliance work. "We chose a vendor that's NYC 144 certified," she told me. "We're covered."
She wasn't covered. That's not how it works.
A vendor's compliance is their compliance—it doesn't transfer to you. The regulations place obligations on employers, not just tool providers. You still need your own bias audits, your own notice procedures, your own documentation. I've seen companies buy "compliant" tools and assume the work is done. Then they get a legal letter and realize their vendor's certification doesn't protect them from anything.
Another pattern I see constantly: the one-and-done audit. Company conducts a bias audit, checks the box, files it away, never thinks about it again. But AI systems drift. Training data evolves. The tool that was clean in January can develop problems by July. This happened to us at my current company—we passed an initial audit, then discovered six months later that a model update had introduced new disparities. We caught it because we were monitoring continuously. Most companies aren't.
Then there's the Amazon fallacy, which I still hear constantly: "Our AI doesn't know candidates' race or gender—how can it discriminate?" This ignores everything we learned from Amazon's failure. Bias emerges through proxy variables. Zip codes correlate with race. College names correlate with socioeconomic background. "Years of experience" correlates with age. The algorithm doesn't need to know your gender to discriminate against you.
Here's one that makes me genuinely angry: notice buried in privacy policies. "We tell candidates about AI in paragraph 47." Nobody reads paragraph 47. Burying disclosure in dense legalese satisfies the letter of the law while violating its spirit. And when litigation comes—and it will—a judge may not look kindly on that approach.
The most dangerous mistake is fake human oversight. I've seen companies claim they have "humans in the loop" because a recruiter glances at a dashboard for three seconds before AI-generated rejections go out. That's not oversight. That's a rubber stamp. Real oversight means humans can and do override the system, understand how it works, and have the time to actually review decisions. Anything less is theater.
And finally—this is embarrassing to admit—I've made the audit trail mistake myself. At a previous company, we built an AI screening tool without proper logging. When a candidate challenged their rejection, we couldn't explain why the system made the decision it made. We had to settle rather than fight, because we literally couldn't demonstrate what happened. Log everything. Trust me on this one.
What a Real Compliance Program Looks Like
Let me tell you what we built at my company, because frameworks are useless without examples.
First, we assigned someone to own this. Sounds obvious, but you'd be amazed how many companies treat AI compliance as "everyone's job," which means it's no one's job. We gave our head of legal explicit responsibility for tracking regulatory changes, and we gave her budget to hire outside counsel when needed. She reports to the board quarterly on compliance status. That accountability matters.
We created an AI committee—legal, HR, engineering, and our diversity lead—that reviews any new AI tool before we deploy it. This has killed some deals. We've walked away from vendors who couldn't answer basic questions about their training data. Our sales team hates it sometimes, but it's kept us out of trouble.
On the technical side, we maintain an inventory of every AI system in our hiring process. For each one, we document what data it touches, what decisions it influences, and who can override it. When we added a new resume parsing feature last year, it took three weeks to get through the approval process. That felt slow at the time. Now I'm glad we did it.
We log everything. Every algorithmic decision, every human override. When a candidate asks why they were rejected—and they do ask—we can actually explain it. This came from the lesson I mentioned earlier: we once couldn't explain a decision, and it cost us a settlement.
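Here's a minimal sketch of the kind of record we keep for every screening decision. The field names are mine and illustrative; the point is that the score, the threshold, and any human override get written down at decision time, not reconstructed later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecisionRecord:
    candidate_id: str
    requisition_id: str
    model_version: str                   # which model or ruleset produced the score
    inputs_used: dict                    # the features the model actually saw
    score: float
    threshold: float
    automated_outcome: str               # "advance", "reject", or "flag_for_review"
    human_reviewer: str | None = None    # None means no human ever looked
    human_override: str | None = None    # what the reviewer changed, if anything
    override_reason: str | None = None
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

We treat these records as append-only. When a candidate asks why they were rejected, the answer comes from this record, not from someone's memory of how the model probably behaved.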
We run formal bias audits annually, but we also monitor continuously. We have dashboards that track selection rates by demographic group in real time. When the numbers start drifting, we investigate before it becomes a problem. That's how we caught the model-update issue I mentioned earlier: the disparities showed up on the dashboards months before the next annual audit would have found them.
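The drift check behind those dashboards is not sophisticated. Here's a sketch of the core idea: compare each group's recent selection rate to a baseline window and flag anything that moves beyond a tolerance. The five-point tolerance is an arbitrary illustration, not a standard.

```python
# Each decision is assumed to be a dict like {"group": "...", "advanced": True}.
def selection_rate(decisions: list[dict], group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["advanced"] for d in rows) / len(rows) if rows else float("nan")

def drift_alerts(baseline: list[dict], current: list[dict], tolerance: float = 0.05):
    """Yield (group, baseline_rate, current_rate) where the rate moved more than `tolerance`."""
    groups = {d["group"] for d in baseline} | {d["group"] for d in current}
    for g in sorted(groups):
        before, after = selection_rate(baseline, g), selection_rate(current, g)
        # Groups missing from one window produce NaN and are skipped by this comparison.
        if abs(after - before) > tolerance:
            yield g, before, after
```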
Everyone who touches hiring gets trained. Not a one-hour webinar they can sleep through. Actual training on what the tools do, what bias looks like, what their oversight responsibilities are. The EU AI Act requires "AI literacy." We decided to build that before it was mandatory.
And we hold our vendors accountable. Our contracts require them to share bias audit results, notify us of algorithm changes, and give us audit access. We've dropped vendors who wouldn't agree to these terms. It's not worth the risk.
Is this expensive? Yes. Time-consuming? Absolutely. But it's cheaper than the alternative, and more importantly, it's the right way to build technology that makes decisions about people's lives.
Where to Start If You're Behind
I've spent pages describing the regulatory landscape. Now let me be practical about what to do if you're reading this and realizing your company isn't compliant.
First, a confession: I don't have this perfectly figured out. Nobody does. The regulations are new, enforcement is evolving, and best practices are still being developed. But here's what I've learned from trying to build compliant tools while navigating this landscape as an employer.
The first thing you need to know is what AI you're actually using. This sounds obvious. It isn't. I've asked HR leaders at major companies to list their AI hiring tools, and most can't do it completely. The ATS has built-in screening features they may not know about. The job boards use algorithmic matching. The assessment platform runs predictive models. The video interview tool analyzes... something. The resume parser extracts... somehow. Start by creating an inventory. Every tool that touches candidates. Every feature that involves automated processing. For each one, find out: What data does it collect? What outputs does it generate? How are those outputs used? Is there meaningful human oversight? What training data was it built on? Has it been audited for bias? Your vendors won't volunteer this information. You have to demand it.
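If it helps, here's the shape of the inventory entry I use, one per AI touchpoint in the funnel. The field names are mine; what matters is that every question above has a place to land.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolInventoryEntry:
    tool_name: str
    vendor: str
    function: str                        # e.g. "resume parsing", "candidate ranking"
    data_collected: list[str]
    outputs: list[str]                   # scores, rankings, recommendations
    how_outputs_are_used: str            # advisory only, or feeds automatic rejection?
    human_oversight: str                 # who can override, and how
    training_data_source: str            # whatever the vendor disclosed, or "unknown"
    last_bias_audit: str | None = None   # date of the most recent independent audit
    jurisdictions_in_scope: list[str] = field(default_factory=list)
```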
Next: conduct bias audits even if they're not legally required yet. NYC requires them for AEDTs. The EU AI Act will require them for high-risk systems. But more importantly, an audit is your primary defense against disparate impact claims. If you can demonstrate that you tested your tools, identified potential bias, and took steps to fix it, you're in a much stronger position than an employer who never looked. The EEOC's four-fifths rule—impact ratios below 0.80 are potentially significant—gives you a benchmark. Hire an independent auditor. Document everything. Fix what you find. Repeat annually.
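For what the audit math actually looks like, here's a back-of-the-envelope version of the impact-ratio calculation: each group's selection rate divided by the highest group's rate, with anything under 0.80 flagged. The sample numbers are invented, and a real NYC-style audit also breaks results out by intersectional categories.

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applied); returns each group's ratio vs. the best group."""
    rates = {g: selected / applied for g, (selected, applied) in outcomes.items() if applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

sample = {"group_a": (120, 400), "group_b": (60, 300), "group_c": (45, 250)}
for group, ratio in impact_ratios(sample).items():
    flag = "  <-- below 0.80, investigate" if ratio < 0.80 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```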
Then fix your human oversight. "Human in the loop" has become a compliance buzzword, but most implementations are meaningless. I said it earlier and it bears repeating: a recruiter glancing at a dashboard for three seconds before AI-generated rejections go out is not oversight. Real oversight means humans can and do override the system, they understand how it works, they review rejections (not just approvals), and rejected candidates have a path to request human review. The EU AI Act requires "appropriate human oversight." California's No Robo Bosses Act prohibits relying "solely" on AI. These aren't box-checking exercises.
Build proper notice into your application process. NYC requires 10 business days' notice before using AEDTs. Illinois requires notice and consent before AI-analyzed video interviews. GDPR requires transparency about automated processing. California will require notice before using ADMT. Don't bury this in paragraph 47 of your privacy policy. Make it clear, make it upfront, and document that candidates actually received it.
Fix your data retention. Most organizations have no idea how long they keep candidate data, where it's stored, or how to delete it comprehensively. AI systems make this worse—they create embeddings, derived data, model updates that persist even after source data is deleted. GDPR doesn't prescribe a fixed retention period, but common guidance for unsuccessful candidates is on the order of six months. CCPA gives candidates deletion rights. When someone asks you to delete their data, you need to actually be able to do it.
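Here's a sketch of the retention sweep we run. The 180-day window is our policy choice, not a statutory number, and the record shape is illustrative; the point is that derived artifacts get deleted alongside the source data.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # our policy choice for unsuccessful candidates

def records_due_for_deletion(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Candidate records past the retention window, unless the person opted into a talent pool.
    Each record is assumed to carry its derived artifacts (parsed features, embeddings) so
    deleting it removes those too, not just the resume file. 'last_activity' is a
    timezone-aware datetime."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if not r.get("consented_to_talent_pool")
        and now - r["last_activity"] > RETENTION
    ]
```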
And audit your vendors. Remember iTutorGroup: the employer is responsible even when technology automates the discrimination. Your contracts should require vendors to share bias audit results, notify you of algorithm changes, give you audit access, and provide indemnification for compliance failures they cause. Most standard vendor contracts don't include these provisions. Negotiate them in. If a vendor won't agree to basic transparency, ask yourself whether you should be using them at all.
The Cost of Inaction
Let me put some numbers on the risks:
GDPR violations: Up to EUR 20 million or 4% of annual worldwide turnover.
EU AI Act violations: Up to EUR 35 million or 7% of annual worldwide turnover.
CCPA violations: $2,500 per unintentional violation, $7,500 per intentional violation—per affected consumer.
BIPA violations: $1,000-$5,000 per violation, with private right of action.
NYC Local Law 144: $500-$1,500 per violation, per day, per candidate.
Settlements to date: Meta paid $1.4 billion to Texas. Google paid roughly $1.4 billion to Texas. Clearview paid about $50 million. iTutorGroup paid $365,000 for over 200 affected applicants—about $1,800 per person.
But the direct fines aren't the biggest cost. Consider:
Legal fees. Marcus Thompson's company spent $127,000 on lawyers before any settlement. Defending a class action costs millions even if you win.
Reputation damage. "Company's AI Discriminated Against Women" is not a headline your recruiting team wants to compete against.
Operational disruption. Scrambling to audit and replace non-compliant systems mid-hiring-cycle is chaotic and expensive.
Talent loss. The best candidates have options. They'll go to employers who haven't been publicly revealed to use discriminatory algorithms.
Compare those costs to the cost of proactive compliance: conducting audits, implementing proper notice, training your team, negotiating better vendor contracts. It's not free, but it's dramatically cheaper than the alternative.
What Most Vendors Won't Tell You (And What They'd Say in Their Defense)
I build AI recruitment tools for a living. I'm going to be blunt about what's wrong with this industry, including my own complicity in it.
Most vendors won't tell you they don't actually know if their AI is biased. They've never conducted rigorous bias testing. They assume their tools are fair because they don't explicitly use protected characteristics. But as Amazon learned, proxy discrimination doesn't require explicit variables—it emerges invisibly from correlations in training data.
Most vendors can't explain their algorithms. Deep learning models are black boxes. The vendor's data scientists may genuinely not understand why the system scores one candidate higher than another. That's a problem when a candidate asks for an explanation and you can't provide one—which GDPR requires.
Most vendors built their training data from historical hiring decisions—decisions made by humans with all their biases. The Amazon failure is famous. The pattern is everywhere.
And if you read the fine print in your vendor agreement, it almost certainly says that you, the employer, are responsible for compliance. The vendor provides technology; you provide liability.
But here's the part that's uncomfortable for me to admit: the vendors have a point, too.
I've talked to founders at competing HR tech companies. Off the record, they'll say things like: "We're a 30-person startup. We don't have a legal team. How are we supposed to track regulations in 50 states, the EU, and every country our customers hire in? By the time we understand one law, three more have been proposed."
They're not wrong. The regulatory landscape is genuinely overwhelming. NYC Local Law 144 requires one thing; California requires something different; the EU AI Act requires something else entirely. A startup that raises $5 million is supposed to become expert in employment law across multiple continents while also building product, hiring engineers, and not running out of money.
When I was at a previous company—a much larger one with actual legal resources—we still got compliance wrong. We shipped features that, in retrospect, probably violated GDPR's data minimization principles. Nobody caught it for two years. By the time we fixed it, hundreds of thousands of candidates had been processed through that system.
I tell you this not to excuse the industry, but to explain why the problem is hard. Most vendors aren't evil. They're overwhelmed. That doesn't make their products compliant. It just means the fix isn't as simple as "demand transparency"—the transparency often doesn't exist to give.
So ask your vendors hard questions. Demand documentation. But also understand that if they can't provide clear answers about bias testing, training data, and explainability, it's probably not because they're hiding something. It's because they never built the infrastructure to know.
That's worse, actually. But it's honest.
What's Coming: The 2026 Regulatory Wave
If you think the current regulatory landscape is complex, wait.
August 2026: Full EU AI Act enforcement. The complete requirements for high-risk AI systems take effect. By this date, any AI used in EU hiring must have comprehensive technical documentation, risk management systems, quality management systems, conformity assessments, and CE marking. The compliance burden will be substantial—and the penalties for non-compliance will be severe.
February 2026: Colorado AI Act takes effect. Colorado joins the states with comprehensive AI hiring requirements, adding another layer of compliance obligations for employers with Colorado operations or candidates.
January 2027: California ADMT notice requirements. The grace period for California's automated decision-making technology requirements ends. Every employer using AI in California hiring will need compliant notice procedures.
Throughout 2026: More state laws. Over 25 states introduced AI hiring legislation in 2025. Many of those bills will become law in 2026. The patchwork will get more complex, not simpler.
Ongoing: Litigation wave. Legal experts predict that 2025-2026 will see "the floodgates open" on AI hiring lawsuits. The Workday case will either establish or reject vendor liability. Class action attorneys are actively looking for BIPA violations, NYC 144 non-compliance, and disparate impact claims. Every month of delay in achieving compliance is another month of accumulating liability.
The companies that start building compliance programs now—before the 2026 deadlines—will have time to do it right. They'll have time to audit their tools, renegotiate vendor contracts, implement proper notice, train their teams.
The companies that wait until the last minute will scramble. They'll cut corners. They'll miss requirements. And they'll face both regulatory penalties and litigation exposure.
The regulatory trajectory is clear. The question isn't whether comprehensive AI hiring compliance will be required—it's whether you'll be ready when it is.
The Uncomfortable Question: Are Some Regulations Actually Counterproductive?
I've spent most of this article arguing for compliance. Now let me argue against myself for a minute.
Some of these regulations might be making things worse.
Take NYC Local Law 144's bias audit requirement. The audits measure impact ratios by demographic group—essentially, whether different groups are selected at similar rates. But what if an AI tool is correctly identifying that one group genuinely has better qualifications for specific roles? The audit would flag that as potential bias, and a published low ratio pressures employers to either abandon the tool or artificially adjust selection rates.
Or consider the EU AI Act's prohibition on emotion recognition in hiring. I actually agree with this one—analyzing facial expressions feels invasive and the science is dubious. But what about tools that analyze voice tone to identify nervousness in candidates who might need accommodations? Or systems that flag when a candidate seems confused by a question, so a human recruiter can follow up? Drawing the line between "helpful analysis" and "prohibited emotion recognition" isn't straightforward.
The compliance burden also creates perverse incentives. If AI tools require expensive audits and ongoing monitoring, small companies will avoid AI entirely—and go back to purely human screening. But human screening is demonstrably biased too. Studies consistently show that identical resumes get different responses based on candidate names. At least AI bias can be measured and audited. Human bias often can't.
I'm not saying the regulations are wrong. I'm saying reasonable people can disagree about specific provisions. The rush to regulate AI hiring is driven by genuine concerns about discrimination. But regulations designed in haste sometimes create unintended consequences.
This doesn't change my core advice: comply with the laws as written. But understand that compliance isn't the same as ethics, and following regulations doesn't mean you've solved the fairness problem. The real work is building systems that are genuinely fair—not just systems that pass audits.
A Note on "AI-Washing" and False Compliance
One trend I've observed that troubles me: companies claiming compliance they don't actually have.
"Our AI tool is fully compliant" often means "we added a checkbox to our application form." "We conduct regular bias audits" sometimes means "we ran our data through a free online tool once." "We have human oversight" frequently means "a human clicks 'approve' after the AI has already made the decision."
This is AI-washing—using the language of compliance without the substance. It's tempting because real compliance is expensive and time-consuming. But it's also dangerous, because cosmetic compliance doesn't provide legal protection.
When litigation happens—when the EEOC investigates, when a class action is filed, when a regulatory audit occurs—the lawyers won't just look at whether you have the right policies on paper. They'll look at what actually happened. Did humans actually review decisions? Were audits actually independent and rigorous? Did candidates actually receive meaningful notice?
Documentation matters, but reality matters more. Build programs that actually work, not just programs that look good in a policy manual.
How to Evaluate a Vendor (The Questions I Ask)
If you're shopping for AI hiring tools—or wondering whether to keep the ones you have—here are the questions I ask. If a vendor can't answer them clearly, that's a red flag.
Start with explainability: Can they tell you, in plain language, how their AI makes decisions? What factors does it consider? How are those factors weighted? Why does one candidate score higher than another? If the answer is "it's a black box" or "that's proprietary"—walk away. You can't explain decisions you don't understand, and GDPR requires you to explain them.
Ask about bias testing: Have they conducted audits? Will they share the results? Do they publish impact ratios by demographic group? Will they support your independent audit requirements? Some vendors will hedge here. If they resist transparency about bias testing, they're either hiding problems or they haven't looked for them. Neither is acceptable.
Demand documentation: Can they provide detailed technical docs about the AI? Training data sources, validation methodology, accuracy metrics, known limitations? The EU AI Act will require this documentation for high-risk systems. If your vendor can't provide it now, they probably can't build it quickly.
Check operational readiness: Does the tool support what compliance actually requires day-to-day? Can it generate candidate-level decision logs? Can it support opt-out workflows for candidates who request alternative processes? Can it integrate with your notice procedures? Compliance isn't just about the algorithm—it's about the operational infrastructure around it.
And get it in writing: Will they commit contractually to compliance support? Will they indemnify you for discriminatory outcomes? Will they notify you of algorithm changes? Will they support regulatory audits? If they won't put it in the contract, assume they won't do it. I've seen too many vendors make verbal promises that evaporated when problems emerged.
What Compliance Actually Costs (Honest Numbers)
I've avoided giving specific cost estimates because they vary wildly. But I know that's frustrating. So here are rough numbers from companies I've worked with, with appropriate caveats.
An independent bias audit for NYC Local Law 144 compliance costs between $15,000 and $75,000, depending on the complexity of your AI systems and the auditor. Annual recurring.
Building proper notice infrastructure—the actual technical work of notifying candidates, documenting consent, and providing opt-out pathways—cost one mid-size company I know roughly $40,000 in engineering time plus $8,000 in legal review. One-time, plus ongoing maintenance.
A Data Protection Impact Assessment for GDPR compliance, done properly with legal counsel, costs $20,000-$50,000. More if your data flows are complicated.
Training your team—recruiters, hiring managers, HR leadership—on AI compliance obligations: maybe $5,000-$15,000 depending on company size and whether you bring in outside trainers.
Renegotiating vendor contracts to include proper compliance provisions: mostly legal fees. $10,000-$30,000 if you have external counsel, less if you have in-house lawyers.
Total first-year compliance buildout for a mid-size company (500-2,000 employees) with multiple AI hiring tools: roughly $100,000-$250,000. Ongoing annual costs after that: $50,000-$100,000.
That sounds like a lot. And I'll be honest—for small companies, it's prohibitive. A 50-person startup cannot spend $150,000 on compliance infrastructure. They'll either use non-AI tools, take the risk, or choose vendors carefully and hope for the best.
But compare it to the alternative. Marcus Thompson's company spent $127,000 on legal fees alone, before settlement costs. A BIPA class action settlement runs millions. An EU AI Act fine could be 7% of global revenue.
Compliance is expensive. Non-compliance is more expensive. The math isn't close.
The Uncomfortable Truth
I've spent this entire article talking about compliance, risk, and liability. Let me end with something different.
The regulations exist because AI hiring tools have genuinely harmed people.
I think about that woman who applied to iTutorGroup. Fifty-six years old. Two decades of teaching experience. Rejected without explanation. Months of wondering what was wrong with her. And the answer was: nothing. The system was programmed to throw away her application before anyone ever looked at it.
I think about the women whose resumes were downgraded by Amazon's system because they mentioned "women's chess club" or graduated from a women's college. People who may have been more qualified than the men who got interviews.
I think about the candidate sitting through a video interview, trying to present their best self, not knowing that an algorithm is analyzing their facial expressions and downgrading them for seeming "nervous"—characteristics that might just reflect cultural differences in how people express emotion.
These aren't theoretical risks. They're documented harms that happened to real people.
I've been part of this industry for over a decade. I've built products that processed millions of candidates. For most of those years, I didn't think hard enough about what happened when those products got it wrong. I was focused on accuracy metrics, on conversion rates, on customer satisfaction. The candidates who got filtered out incorrectly were invisible—ghosts in the data.
The compliance requirements I've described in this article aren't bureaucratic obstacles. They're attempts to make those ghosts visible—to ensure that when AI makes decisions about human livelihoods, someone is accountable for those decisions.
If you're using AI in hiring, you have a choice. You can treat compliance as a cost center, do the minimum required, and hope you don't get caught. Or you can treat it as an opportunity to actually build fair, transparent, effective hiring processes.
I've tried to build my company around the second approach. It's harder. It's more expensive. We've turned down customers who wanted features we didn't think we could build compliantly. We've delayed launches to get bias testing right.
I don't know if it's the right business strategy. The market doesn't always reward doing things correctly. But I think it's the right way to build technology that affects people's lives.
Marcus Thompson called me last month. His company survived the compliance scare. They rebuilt their hiring technology stack from scratch, spent almost $200,000 on the overhaul. He said something that stuck with me: "We used to think of compliance as a tax. Now we think of it as product quality. If our AI can't explain why it rejected someone, it's a broken product—not a compliance problem."
I think he's right. And I think more companies are going to learn that lesson, one way or another, over the next few years.
The regulatory landscape will keep evolving. The lawsuits will keep coming. The fines will keep growing. But the core principle won't change: if you use technology to make decisions about people, you're responsible for those decisions.
Even when technology automates the discrimination, the employer is still responsible.
Budget accordingly. Build accordingly. And maybe, in the process, build hiring systems that are actually better—not just for employers, but for everyone.