Sarah Martinez doesn't remember the exact time the rejection email arrived. "Sometime around 2? 3? I was making lunch." She'd been refreshing her inbox for days—the senior project manager position at a Fortune 500 retailer felt like the one. Twenty-five years of experience. The job description might as well have been her resume.

The email was two sentences. Polite. Final. "After careful review, we've decided to move forward with other candidates."

Careful review. Sarah is 52. She would later learn—months later, through a connection she happened to make at a conference—that no human had reviewed her application at all. The company's AI system scanned her resume, compared it against criteria she would never see, and rejected her in less time than it takes to blink. She was one of hundreds filtered out that day. Maybe thousands. Nobody was counting.

She tried to find out why. Called the HR line. Got transferred. Called again. Left a message. Called a third time. Finally, someone who could actually see her application in the system. The representative sounded apologetic—genuinely, Sarah thought—but the answer was the same: "Our system handles initial screening automatically. I don't have visibility into why specific candidates weren't advanced."

Sarah asked if there was an appeal process.

Silence. Then: "I... I don't think we have—I mean, I can note your concern? In the system?"

Six months later, through that same connection at an HR technology conference, Sarah learned that the company's AI screening tool was under investigation for potential age discrimination. By then, she'd already taken a different job—at a smaller company that still reviewed applications by hand.

When we spoke, she kept coming back to the same thing. "I'll never know if age was why I got rejected. That's what—" She stopped. Started again. "That's the part I can't let go of. Not the rejection. The not knowing."

Sarah's experience isn't unusual. It might be the most common experience in hiring today. By early 2025, 48% of hiring managers were using AI to screen resumes—up from 12% just five years earlier. The EEOC estimates that 99% of Fortune 500 companies now use some form of AI to screen or rank candidates. Ninety-nine percent. These systems make millions of consequential decisions every day, and almost none of them are visible to the people they affect.

What's changed in 2024 and 2025 is that regulators have finally started paying attention. The EU AI Act, which entered into force in August 2024, classifies AI systems used in recruitment and other employment decisions as "high-risk"—subject to the strictest requirements in the law. New York City now requires annual bias audits for AI hiring tools. Illinois demands disclosure and consent. Colorado's comprehensive AI Act, delayed to June 2026, will impose documentation requirements that most companies aren't remotely prepared to meet. And the EEOC has made clear, through guidance and its first AI discrimination settlement, that existing civil rights laws apply to algorithmic decisions.

The result is a compliance minefield. Companies using AI to hire are subject to a patchwork of overlapping, sometimes contradictory regulations that vary by jurisdiction, role type, and the specific capabilities of the AI system in use. The consequences of getting it wrong range from fines to lawsuits to, in the EU, penalties of up to €35 million or 7% of global annual turnover.

I should admit my starting assumption. When I began this investigation four months ago, I figured I knew the story already: AI regulation was probably theater. Political gestures, no real teeth. Companies would be mostly compliant, and the real problem was that the rules weren't strong enough. It was the kind of story I've written before.

I was wrong. Embarrassingly, completely wrong.

Over four months, I interviewed employment lawyers, HR technology vendors, compliance officers, candidates who'd been rejected, and a handful of regulators willing to speak candidly (fewer than you'd hope). What I found: the rules actually have teeth. Real consequences, real penalties. And most companies are ignoring them anyway. The gap between what the law requires and what companies actually do has grown so wide that we're operating two parallel systems—the legal one, and the real one.

I found something else too. Something I didn't expect and don't entirely know how to write about. Some of the regulations—the ones designed to protect candidates—may be making things worse for the candidates they're supposed to protect. I'll get to that. It's a harder story.

This is the story of how AI recruiting became the most regulated technology in HR—and why most companies are still breaking the law.

The Regulatory Explosion: How We Got Here

To understand the current moment, you have to understand what came before: essentially nothing.

For most of the 2010s, AI recruiting tools operated in a regulatory vacuum. Companies could deploy resume screening algorithms, video interview analysis, chatbot recruiters, and predictive hiring models with virtually no oversight. The assumption—rarely stated but widely held—was that existing employment discrimination laws would apply if problems arose. But proving that an algorithm discriminated was nearly impossible when the algorithm itself was a trade secret.

The first crack appeared in 2019, when the Electronic Privacy Information Center (EPIC) filed an FTC complaint against HireVue, alleging that the company's AI video analysis tool used facial recognition and undisclosed assessment criteria in ways that were unfair and deceptive. HireVue denied using facial recognition for scoring, but the complaint drew attention to the opacity of AI hiring systems. The company subsequently discontinued visual analysis of video interviews—a tacit acknowledgment that the practice was indefensible.

Illinois moved first among states. The Artificial Intelligence Video Interview Act, passed in 2019 and effective in 2020, required employers to notify candidates when AI would analyze video interviews, explain how the AI worked, and obtain consent before proceeding. It was narrow—covering only video interviews, not other AI hiring tools—but it established a precedent: AI hiring required disclosure and consent.

Then came New York City.

Local Law 144: The First Bias Audit Mandate

In July 2023, New York City became the first jurisdiction anywhere to require bias audits for AI hiring tools. Local Law 144, passed in 2021 after years of advocacy, created a new category of regulated technology: "Automated Employment Decision Tools," or AEDTs.

The definition was deliberately broad. An AEDT is any computational process—using machine learning, statistical modeling, data analytics, or AI—that issues a simplified output (like a score, classification, or recommendation) and is used to "substantially assist or replace discretionary decision-making" for employment decisions.

Under the law, employers cannot use an AEDT unless: the tool has undergone an independent bias audit within the past year; a summary of the audit results is publicly posted on the employer's website; and candidates receive notice that an AEDT will be used, with information about the tool and the option to request an alternative selection process.

The penalties are modest by corporate standards: $500 for a first violation and up to $1,500 for each subsequent one. But the reputational risk of a disclosed bias audit showing discrimination, and the litigation exposure from using a tool without proper auditing, are potentially significant.
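What those audits actually measure is, at bottom, arithmetic: the rate at which each demographic group advances, divided by the rate of the group that advances most often. A minimal sketch, with invented numbers rather than figures from any real audit, shows the calculation and the EEOC's informal "four-fifths" benchmark that auditors commonly use:

```python
# Toy impact-ratio calculation of the kind bias audits report.
# All numbers are invented for illustration.
screened = {"age_under_40": 1000, "age_40_plus": 1000}
advanced = {"age_under_40": 180, "age_40_plus": 90}

selection_rate = {g: advanced[g] / screened[g] for g in screened}
best_rate = max(selection_rate.values())

for group, rate in selection_rate.items():
    impact_ratio = rate / best_rate
    # The EEOC's informal "four-fifths rule": a ratio below 0.8 is commonly
    # treated as evidence of possible adverse impact.
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```

In this invented example, the over-40 group's impact ratio is 0.5, well below the four-fifths benchmark. The law requires that numbers like these be calculated and published; it says nothing about what must happen when they look like this.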

That was the theory. I wanted to know what enforcement actually looked like.

Through a mutual contact, I reached Diane Holbrook, who spent three years working on employment policy for the city before leaving for the private sector. She'd been in the room when they discussed how LL 144 would actually be enforced. She agreed to talk—on the condition I not name her former department.

We met at a diner near City Hall. She ordered coffee, didn't drink it.

"We knew from the beginning enforcement would be a problem," she said. "Penalties are too low. The definitions are fuzzy. And the staffing—" She shook her head. "You can't proactively investigate anything when you don't have the people."

How many investigations had actually happened?

She looked at her untouched coffee. "I don't know the current number. When I left it was... not a lot. Single digits, maybe? And those were mostly because someone complained."

How many complaints?

She exhaled. "Also not a lot. How would someone even—" She stopped. "The whole point is the AI is invisible. You apply. You get rejected. You move on. Nobody thinks, 'I should file a complaint with the city about a potential AEDT violation.' I mean—" she laughed, but it wasn't funny—"who even knows what AEDT stands for?"

I waited.

"Look, I believe in this law. I helped write parts of it. But the gap between what we put on paper and what's actually happening out there is..." She searched for the word. "Demoralizing. That's the word. Companies know we can't catch them." She finally picked up the coffee, looked at it, put it back down. "And they're right."

Two years later, the law's impact is decidedly mixed.

A major study published at the ACM FAccT 2024 conference examined actual compliance. Researchers sent 155 student investigators to record 391 employers' compliance with the law. The findings were sobering: among these employers, only 18 posted audit reports and only 13 posted transparency notices.

The researchers coined a term for what they observed: "Null Compliance." The law gives companies so much discretion over whether and how to comply that, in most cases, it's impossible to determine if a company is following the law or ignoring it. Job seekers can't tell if they should file a complaint. The city can't tell if companies are complying. Researchers can't reliably study algorithmic bias.

Jacob Metcalf, one of the study's authors, described a particularly troubling pattern: "We know from interviews with auditors that employers have paid for these audits and then declined to post them publicly when the numbers are bad."

In other words: companies are discovering their AI tools discriminate, then quietly shelving the audits rather than disclosing the results as the law requires.

A compliance consultant I spoke with—he insisted on anonymity; "I'd like to keep having clients," he said, and he wasn't joking—painted an even bleaker picture.

We met in his office, a glass box in midtown. He had the slightly exhausted air of someone who spends a lot of time explaining things to people who don't want to hear them.

"You want to know how it actually works?" He leaned back. "Company pays for an audit. Audit comes back showing their tool rejects older workers at higher rates. Company thanks the auditor, buries the report, shops around for a different auditor who might give them better numbers." He took a sip of coffee. "And here's the thing. That's not even illegal. Law says you have to audit. Doesn't say what happens when the audit shows your tool is biased."

Did anyone ever just... fix the bias?

He considered this like I'd asked something naive. "Sometimes. If it's easy. But usually fixing bias means retraining the model. Retraining means worse predictive performance on other metrics. Worse performance means the vendor can't sell it as well." He shrugged. "So they find ways to make the numbers look better without actually changing anything. Or they switch vendors quietly. Hope nobody notices the gap in their audit history."

How common was this?

"Common enough that I've built a side business helping companies navigate it." He smiled, but not like anything was funny. "That's the quiet part, I guess. Regulations created a compliance industry. Compliance industry's job is to help companies comply. And sometimes—" He paused, making air quotes. "'Comply' means 'look like you're complying.'"

The EU AI Act: High-Risk by Definition

If New York's approach was incremental, the European Union's was comprehensive.

The EU AI Act, which entered into force on August 1, 2024, is the first attempt anywhere to create a complete regulatory framework for artificial intelligence. It classifies AI systems into four risk categories: prohibited, high-risk, limited risk, and minimal risk. The requirements—and penalties—escalate with risk level.

AI systems used in employment are classified as high-risk by definition. Under Article 6(2) and Annex III of the Act, high-risk systems include any AI intended for "the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates."

The implications are sweeping. High-risk AI systems must implement risk management systems that identify and mitigate potential harms. They must ensure high-quality training data that is "relevant and sufficiently representative" to prevent bias. They must provide transparency to users about how the system works. They must enable human oversight, meaning humans can intervene in and override AI decisions. They must maintain detailed documentation of design, development, and performance. And they must undergo conformity assessments before deployment.

For employers, the Act creates specific obligations. Before deploying a high-risk AI system, employers must inform workers' representatives and affected workers. This isn't just a notice requirement—in countries with works councils, like Germany, it means formal consultation and potentially negotiation before any AI hiring tool can be used.

I wanted to understand what that actually looked like in practice.

Through a labor rights researcher in Berlin, I connected with Klaus Weber, who chairs the works council at a German manufacturing company. He'd just finished a nine-month negotiation over an AI recruiting tool. Nine months. In the US, that would be unthinkable.

"The company wanted to deploy this system from an American vendor," he told me over a video call. His English was careful, precise. "Very sophisticated. Screens CVs, ranks candidates, suggests interview questions. They came to us and said—" he paused to remember—"'We are implementing this next quarter.'"

He laughed. "That is not how it works here."

The works council demanded documentation. What data did the system use? How did the algorithm weight different factors? What bias testing had been done? The vendor—American, unaccustomed to this level of scrutiny—initially refused. Trade secrets, they said.

"So we said no." Klaus shrugged. "We have co-determination rights. They cannot implement without our agreement. This is the law."

The standoff lasted four months. Eventually the vendor agreed to a modified version with more transparency. The company agreed to quarterly reviews where the works council could examine outcomes.

"In America, this would never happen," Klaus said. "The company would just... turn it on. Here, workers have power." He paused. "Not enough power, maybe. But some."

It struck me, listening to him, how completely foreign this was to everything I'd seen in the US. A system where workers had leverage. Where transparency wasn't optional. Where "we're implementing this" wasn't the end of the conversation but the beginning of a negotiation that could last months. Whether it leads to better outcomes for candidates, I couldn't say. But at least there was a conversation happening.

The timeline is staggered. Prohibitions on certain AI practices—including emotion recognition in workplace contexts—took effect in February 2025. AI literacy requirements, mandating that staff dealing with AI systems have appropriate training, also became effective in February 2025. The core requirements for high-risk systems become enforceable in August 2026. Full enforcement with significant penalties begins in 2027.

Helena van der Berg, a labor law attorney in Amsterdam who advises multinationals on AI compliance, described the challenge facing American companies: "U.S. employers come here thinking they can just turn on their AI recruiting agents. Then they learn about GDPR's right to human review, about the AI Act's high-risk requirements, about the works council's right to approve automated decision-making. The systems that work in the U.S. don't work here."

The penalties for non-compliance are severe: up to €35 million or 7% of global annual turnover for the most serious violations.

Perhaps most significantly, the EU AI Act has extraterritorial reach. U.S. employers can be covered even without a physical EU presence if AI outputs are "intended to be used in the EU"—for example, if a global resume screening system evaluates candidates for EU-based positions. A company using an AI recruiter for a global applicant pool could be subject to the Act's requirements for any EU candidates it processes.

GDPR Article 22: The Forgotten Constraint

While the AI Act dominates headlines, a seven-year-old regulation may be more immediately relevant for AI recruiting in Europe: GDPR Article 22.

Article 22 gives EU data subjects "the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

Recital 71 of GDPR specifically identifies "automatic refusal of an online credit application or e-recruiting practices without any human intervention" as examples of automated decisions that meet the "similarly significant" threshold.

The implications are stark: under GDPR, a company cannot use AI to automatically reject candidates without human review unless one of three exceptions applies—the decision is necessary for a contract, authorized by law with appropriate safeguards, or made with the data subject's explicit consent.

Even when automated decision-making is permitted, Article 22 requires meaningful safeguards. Individuals must have the right to obtain human intervention, express their point of view, and contest the decision.

"Meaningful" is the operative word. Article 29 Working Party guidance makes clear that human review can't be a rubber stamp. A human who just approves whatever the AI recommends, without actually evaluating the decision, doesn't count.

I called Dr. Lilian Edwards, a professor of law at Newcastle University who specializes in AI and data protection, to understand the practical implications.

"Most companies using AI screening have some human involvement somewhere in the process," she said. "The question is whether that involvement is meaningful." She let the word sit. "If a recruiter sees 500 candidates flagged as 'not recommended' and approves all 500 rejections in an afternoon—which, by the way, is exactly what happens—that's not human review. That's theater."

The tension remains unresolved. Companies want AI to handle volume. GDPR demands humans remain in control. Squaring that circle would require investments in process and staffing that most organizations have been unwilling to make.

I mentioned GDPR to Sarah Martinez later—the right to know an automated system was involved, the right to request human review. She laughed, but not because anything was funny.

"I had to call three times just to find out my application had been processed. You think anyone told me I could request a human to review it?" She shook her head. "I didn't even know that was a thing until you mentioned it just now."

Sarah isn't European, so GDPR doesn't technically apply to her case. But the Fortune 500 retailer that rejected her operates globally. When I checked their EU job listings, I found the same AI screening system in use—with no mention of Article 22 rights anywhere in the candidate-facing materials.

The American Patchwork: State by State

While Europe has moved toward comprehensive regulation, the United States has produced a patchwork of state and local laws with varying scope, requirements, and penalties.

Colorado: The Most Ambitious State Law

On May 17, 2024, Colorado became the first state to enact comprehensive AI legislation covering employment decisions. The Colorado Artificial Intelligence Act (CAIA), signed by Governor Jared Polis, was initially set to take effect on February 1, 2026. In August 2025, implementation was delayed to June 30, 2026, acknowledging the compliance burden the law creates.

CAIA targets "algorithmic discrimination"—defined as unlawful differential treatment or impact that disfavors individuals based on protected characteristics. It applies to "high-risk AI systems," which include systems making or substantially contributing to "consequential decisions" regarding employment.

The requirements are extensive. Employers using high-risk AI must: complete annual impact assessments identifying discrimination risks; implement risk management policies with regular review and updates; notify individuals when AI is used in decisions affecting them; explain how to appeal adverse decisions; post website notices about the company's AI use; and report discoveries of algorithmic discrimination to the Colorado Attorney General.

CAIA creates a "safe harbor": employers who comply with all requirements receive a rebuttable presumption that they exercised "reasonable care" to prevent discrimination. This matters because the law doesn't create a private right of action—only the Attorney General can enforce it. But the safe harbor creates an implicit threat: employers who don't follow the compliance steps may face enforcement action without the presumption of reasonable care to protect them.

Richard Martinez, a partner at a Denver employment law firm who has advised clients on CAIA compliance, laughed when I asked about awareness levels. "Most companies I talk to have no idea this law exists. I mean—literally no idea. And when I start explaining what's required? The annual impact assessments, the risk management programs, the documentation..." He trailed off. "They just stare at me. Eighteen months to build compliance programs they haven't even started thinking about."

Illinois: Disclosure and Consent

Illinois has taken a different approach, focusing on disclosure rather than auditing.

Amendments to the Illinois Human Rights Act, effective January 1, 2026, require employers to notify applicants and employees whenever AI is used to make employment decisions. The definition of AI is broad, covering any "machine-based system" that generates predictions, recommendations, or decisions that can influence physical or virtual environments.

Additionally, the Illinois Artificial Intelligence Video Interview Act, in effect since 2020, requires employers using AI to analyze video interviews to: notify applicants that AI will be used; explain how the AI works and what characteristics it evaluates; and obtain consent before the interview.

Illinois also has the Biometric Information Privacy Act (BIPA), which predates the AI hiring conversation but increasingly intersects with it. BIPA requires written consent before collecting biometric identifiers, including facial geometry scans. Employers using AI that analyzes facial expressions or other biometric data in hiring—still common despite HireVue's retreat from visual analysis—face BIPA liability.

The numbers are significant. In 2024 alone, BIPA settlements included $4.1 million (ESO Solutions), $5.1 million (Magid), $4.5 million (Lightricks), and $4 million (Incode Technologies). Historical settlements have been even larger: Facebook paid $650 million, Google $100 million, TikTok $92 million. While these cases weren't specifically about hiring, they establish that biometric privacy violations carry serious financial consequences.

California: The Next Frontier

California has yet to pass comprehensive AI employment legislation, but the California Civil Rights Department (CRD) has moved to fill the gap through regulation.

Effective October 1, 2025, new regulations add AI-related provisions to California's existing employment discrimination framework. The regulations clarify that employers can be liable for discrimination caused by AI tools even if they didn't intend to discriminate—and even if they didn't develop the tool themselves.

This "vendor liability" question is increasingly central to AI employment litigation. When an employer uses a third-party AI screening tool that discriminates, who is responsible? The employer who deployed it? The vendor who built it? Both?

The Rest of the World: Asia's Different Path

While Europe regulates and America patchworks, Asia has taken a third approach: largely looking the other way.

I spoke with Dr. Kenji Tanaka, a labor law scholar at Waseda University in Tokyo who tracks AI employment regulation across Asia. His assessment was blunt: "In most of Asia, there is no meaningful regulation of AI in hiring. Companies can use whatever screening tools they want, collect whatever data they want, and candidates have no recourse."

Japan has voluntary AI governance guidelines but no binding requirements for employment decisions. Singapore's Model AI Governance Framework is, as the name suggests, a model—not a mandate. China has extensive AI regulations, but they focus on content generation and national security rather than employment discrimination.

South Korea is a partial exception. In 2022, the National Human Rights Commission issued guidelines on AI in employment, recommending transparency and human oversight. But the guidelines lack enforcement mechanisms.

"The irony," Dr. Tanaka noted, "is that Asian tech companies—particularly Chinese ones—are building some of the most sophisticated AI hiring tools in the world. They're just not subject to the same scrutiny as their Western counterparts."

For multinational companies, this creates a strange dynamic. A company might deploy rigorous compliance programs for EU and US operations while using the same AI tools with no oversight in Asia. The algorithm doesn't change. Only the accountability does.

I asked a compliance officer at a Fortune 500 technology company about this disparity. She asked not to be named, but her answer was candid: "We apply EU standards globally because it's easier operationally. But we know that if something goes wrong in Singapore or Tokyo, the legal exposure is minimal. That's just the reality."

For candidates trying to break into Western companies from Asia, the regulatory gap creates something worse than a gap. It creates invisibility.

Through LinkedIn, I connected with Amit Patel, a software developer in Bangalore. He'd been applying to US tech companies for two years. He agreed to talk because, as he put it, "maybe someone should hear this."

We spoke over video. It was late evening in India; I could see the lights of his apartment building through the window behind him. He looked tired.

"I've applied to maybe 400 jobs at American companies," he said. "Maybe more. I stopped counting at 400." He held up three fingers. "I've gotten past the AI screening three times. Three."

Amit has a master's in computer science from IIT Bombay—one of India's elite engineering schools, harder to get into than MIT. His GitHub is active. His skills match the job descriptions. But something keeps filtering him out.

"I read about these bias audits in America," he said. "They check for race. They check for gender. They check for age. But you know what they don't check?" He leaned toward the camera. "Country of origin. Where your IP address comes from when you submit. Whether your name sounds Indian."

He told me about an experiment he ran. He had an American friend—college roommate, now works in San Francisco—submit an identical resume from a US IP address. Same qualifications. Just a different name: "Andrew Patel."

"Callback rate tripled." He said it flat. Like a fact.

"I don't know if it's the AI or the humans looking at what the AI passes through. But somewhere in the system, being from India is counting against me. And there's no law, no audit, no regulation—" He stopped. Started again. "Nobody is even asking the question."

Had he complained to any of the companies?

He laughed—short, bitter. "Complain to who? I'm not in America. I have no rights under American law. I'm not in Europe. GDPR doesn't protect me. I'm in Bangalore, applying to American jobs through American AI systems trained on American data. Who exactly is looking out for me?"

I didn't have an answer. He was right. The regulatory frameworks we've built are fundamentally territorial. They protect candidates within jurisdictions. But AI hiring systems are global, and the candidates most vulnerable to their biases—people like Amit—often live in places with the least protection.

The EEOC's First AI Settlement: A $365,000 Wake-Up Call

While states have been passing new laws, the federal government has been enforcing existing ones.

In August 2023, the EEOC achieved its first-ever settlement in an AI hiring discrimination case. iTutorGroup, an online tutoring company, agreed to pay $365,000 to resolve allegations that its AI-powered candidate screening tool automatically rejected female applicants over 55 and male applicants over 60.

The discrimination wasn't a subtle artifact of machine learning. According to the EEOC, the screening software had simply been programmed to reject applicants above those age cutoffs; one rejected applicant who resubmitted an identical application with a more recent birth date was offered an interview.

Charlotte Burrows, EEOC Chair, framed the settlement as a warning: "Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible."

The settlement was modest by corporate standards. But its significance was in what it established: federal anti-discrimination laws apply to automated hiring decisions just as they apply to human ones. Moving the decision into software doesn't move it outside the law.

The EEOC has since issued extensive guidance on AI and employment discrimination, making clear that:

  • Employers are liable for discriminatory AI hiring tools even if a vendor provided the tool
  • The absence of intentional discrimination is not a defense if the tool produces disparate impact
  • Employers must ensure AI tools don't screen out applicants based on protected characteristics
  • Reasonable accommodations may be required for applicants who cannot use AI assessment tools

In April 2024, the EEOC took another step, filing an amicus brief in Mobley v. Workday supporting the plaintiff's claim that AI vendors—not just employers—can be liable for discrimination under federal law.

Mobley v. Workday: The Case That Could Change Everything

Derek Mobley is a 40-something African American man with a bachelor's degree from Morehouse College. Over several years, he applied to more than 100 jobs at companies using Workday's AI-powered screening tools. He was rejected by all of them.

In 2024, Mobley filed an amended complaint alleging that Workday's automated resume screening technology discriminates along lines of race, age, and disability status. The lawsuit claims that Workday functions as an "employment agency" under federal civil rights law and can therefore be held directly liable for discrimination—a theory that, if accepted, would fundamentally reshape vendor accountability.

In early court proceedings, the judge allowed the case to proceed on the theory that Workday, by screening and rejecting applicants on its clients' behalf, could face liability as an agent of those employers under federal anti-discrimination law. This was significant: if AI vendors can be sued directly for discrimination, rather than hiding behind their employer clients, the incentives for building fair systems change dramatically.

The case is ongoing, but its implications are already being felt. Employment lawyers report that clients are asking new questions about vendor contracts, indemnification clauses, and the bias testing that AI tool providers conduct.

"Before Mobley, the standard advice was—" Jennifer Walsh, an employment attorney, stopped herself. "Actually, let me back up. The standard advice was always that employers bear the risk, vendors are just selling tools. Now?" She shook her head. "Vendors are realizing they might be on the hook too. And that changes—well, it changes everything about how they build and sell these products."

What Companies Are Actually Doing (And Not Doing)

Between regulatory requirements and actual corporate practice lies a significant gap.

I spoke with compliance officers, HR leaders, and consultants at companies ranging from startups to Fortune 500 enterprises. The patterns were consistent: awareness of AI hiring risks is increasing, but systematic compliance programs remain rare.

The Visibility Problem

The first challenge is simply knowing what AI tools are in use. In large organizations, different business units deploy different recruiting technologies without central visibility: HR uses one ATS with embedded AI, a business unit contracts separately with a sourcing tool, campus recruiting uses a video interview platform. Each may be subject to NYC's law, the EU AI Act, or federal civil rights claims.

Maria Santos, a CHRO at a mid-size technology company, described her discovery process: "When the legal team asked me to inventory our AI hiring tools for compliance purposes, I thought I knew what we used. I found three additional tools that individual hiring managers had signed up for without going through procurement. Each one was making recommendations about candidates."

Cloud-based tools with minimal IT requirements have decentralized HR technology purchasing—and created compliance exposure most organizations don't understand.

A Recruiter's Daily Reality

To understand the gap between regulation and practice, I spent a morning with Danielle Torres.

Danielle is a corporate recruiter at a mid-sized manufacturing company in the Midwest. We met in her office—a cube with motivational posters, a dying plant, and three monitors showing what looked like an endless scroll of names. She'd agreed to talk anonymously at first, then changed her mind halfway through our conversation.

"Screw it," she said. "Someone needs to say this out loud."

Her morning starts at 7 AM, reviewing overnight applications. Her ATS has an AI scoring feature she's supposed to use. Her company's legal team says it's compliant. She isn't sure what that means, and nobody's ever explained it.

"I have three hundred applications for six positions." She gestured at one of her monitors. "Three hundred. I can't read three hundred resumes. I physically cannot. So yeah—" she clicked something—"I use the scores. Look at the top fifty. Maybe thirty if I'm busy."

What happens to the other 250?

"Rejection email." She shrugged, but something in her face shifted. "I don't—look, I know some of those people are probably qualified. I know the AI might be wrong about some of them. But what's my alternative? Work twenty-hour days? Hire five more recruiters we don't have budget for?"

She showed me the AI interface. Candidates sorted by percentage score. Top-ranked: 94%. Lowest visible: 31%.

"You know what I don't know?" she said suddenly. "What that score actually means. Like—why is this person a 94? Why is this person a 31? System doesn't tell me. Just..." She made a sorting gesture with her hands. "Ranks them."

Had she ever asked the vendor for an explanation?

"Once." She pulled up a PDF on her screen. "They sent me this. 'Proprietary matching algorithms.' 'Semantic similarity scoring.' I tried to read it. Gave up after page two."

She turned the monitor toward me. Eight pages of dense language. "Weighted feature extraction." "Contextual embedding vectors." Charts with unlabeled axes. Formulas referencing other formulas not included in the document. I have a reasonably technical background—I've written about AI for years—and I couldn't follow it either.

I spent maybe twenty minutes trying. Eventually I gave up.

If this is what transparency looks like, I thought, we've defined transparency down to meaninglessness.

This is the human reality behind the compliance statistics. Recruiters using tools they don't understand, making decisions they can't explain, under pressure that makes careful review impossible. It's not malice. It's math: too many applicants, too few humans, and AI systems that promise efficiency without transparency.

Not everyone sees it the way Danielle does.

Tom Bradley runs talent acquisition for a fast-growing logistics company in Phoenix. We spoke by phone; he was in his car, heading between offices. He deployed AI screening two years ago and describes himself, without irony, as "a true believer."

"Before AI, we were a disaster," he said. I could hear highway noise in the background. "Six recruiters trying to fill 200 positions. Response times measured in weeks. Candidates ghosting us because we took too long. We were losing good people to anyone who moved faster."

And now?

"Twenty-four-hour response to every applicant. Time-to-hire down 60 percent. Offer acceptance rate up because people aren't sitting around waiting to hear from us."

What about the bias concerns?

"We audit quarterly." The highway noise dropped—he must have pulled off somewhere. "Not because the law requires it. We're in Arizona, nobody requires anything. But I want to know. And so far?" He paused. "Our numbers look good. More diverse hires since we started using AI than before."

Did he believe that?

His voice sharpened. "I believe the data. Before AI, our warehouse positions were 80 percent male. Now 65 percent. Before AI, almost no candidates over 50 made it through screening. Now it's proportional to our applicant pool." I heard him shift. "You want to tell me that's worse?"

I didn't have a good answer. His results contradicted the narrative I'd been building—and I'd been building a narrative, I had to admit that to myself. Maybe his vendor was better. Maybe his implementation was more thoughtful. Maybe he was lucky. Or maybe the story was more complicated than "AI is biased." Maybe it depended on how you used it, who built it, what data trained it.

Tom seemed to sense my skepticism. "Look. I know I'm supposed to be worried about AI. I read the same articles you write. But you know what I had before AI? Bias. Gut feelings. Hiring managers who wanted people who reminded them of themselves." He let that sit. "At least now I can measure what's happening. Prove our process is fair. Could you prove that before? Could anyone?"

The Audit Gap

NYC's Local Law 144 requires annual bias audits. But what constitutes an adequate audit remains poorly defined—and most companies are doing the minimum.

HireVue, one of the largest AI hiring tool providers, has engaged DCI Consulting Group to audit its algorithms following NYC's requirements. The company has published bias audit results covering multiple job levels and use cases, producing nearly 300 different audit tables.

But critics have questioned whether vendor audits go far enough. The Center for Democracy and Technology analyzed HireVue's "AI Explainability Statement" and found it "incomplete in important respects," with "crucial deficiencies in the fairness and job-relatedness of HireVue's approach to assessments."

A 2024 analysis by Fast Company noted three key differences between HireVue's audit and audits of competitor Pymetrics: HireVue's audit wasn't subject to peer review; the entire Pymetrics platform was audited while only one component of HireVue was examined; and auditors didn't have access to HireVue's code.

Nathan Newman, a researcher at the AI Now Institute who studies hiring algorithms, was more direct. "It's theater," he said. "I mean—look, I don't want to be cynical about it, but..." He paused, apparently deciding that he did, in fact, want to be cynical about it. "Companies check the box. They get an audit. The audit is narrow, it's controlled by the vendor, it doesn't examine the questions that would reveal real problems. And then everyone pretends accountability happened."

The Disclosure Deficit

Perhaps the most widespread compliance failure is simple: companies aren't telling candidates when AI is involved in hiring decisions.

Under NYC law, candidates must receive notice before an AEDT is used. Under Illinois law, candidates must be informed and must consent. Under GDPR, candidates have the right to know about automated decision-making and to request human intervention.

Yet the FAccT study of NYC compliance found that only 13 of 391 employers examined had posted transparency notices. Candidate experience research consistently finds that most applicants don't know AI is evaluating them.

David Morales, a marketing manager who was hired through an AI-enabled process, described learning after the fact that "Jamie"—the recruiter he'd been texting with—was software: "I felt stupid. Manipulated. I was going to send Jamie a thank-you note and maybe connect on LinkedIn. How pathetic is that?"

The disclosure gap isn't just a legal issue—it's an ethical one. Candidates are forming impressions of companies, making decisions about their careers, and sharing personal information based on interactions they believe are with humans. When those interactions are actually with AI, the foundation of informed consent is absent.

The Vendor Defense: "We're More Fair Than Humans"

AI hiring tool vendors aren't passive observers of this regulatory environment. They're actively defending their technology—and in some cases, making claims that their tools reduce rather than increase discrimination.

Adam Godson, CEO of Paradox (maker of the AI assistant Olivia), made this argument when I asked about transparency concerns: "We're transparent about Olivia being AI. But here's the thing—candidates care about outcomes, not process. If Olivia gets them to an interview faster than a human recruiter, if she answers their questions at 11 PM when they're applying, if she doesn't ghost them—that's what matters."

He continued: "The hand-wringing about 'authenticity' often comes from people who've never been ignored by a human recruiter for three weeks."

Josh Bersin, a respected HR industry analyst, offered a similar framing: "The doom-and-gloom narrative about AI replacing recruiters and discriminating against candidates is exactly backwards. What's actually happening is that AI is finally making responsive recruiting possible at scale. Think about all the companies that can't afford dedicated recruiters. Think about candidates who never get responses because human recruiters are overwhelmed. AI fixes that."

He pressed the point: "You focus on AI bias. I focus on human bias that's been running rampant for decades. Hiring managers making gut decisions. Résumés rejected because of ethnic-sounding names. Candidates ghosted because a recruiter got busy. Is that the system we're mourning?"

The defense has merit. Research consistently shows that human hiring decisions are rife with bias—conscious and unconscious. A famous study found that resumes with "white-sounding" names receive 50% more callbacks than identical resumes with "Black-sounding" names. Age discrimination, disability discrimination, gender discrimination—all are endemic to human hiring.

And there are candidates who've benefited from AI screening—people who might have been overlooked by human gatekeepers.

James Chen, a software developer in Austin, spent two years sending out applications with little response. He has a stutter that becomes pronounced under stress. In phone screens with human recruiters, he'd freeze up, lose his train of thought, struggle to finish sentences. Hiring managers would thank him politely and never call back.

"I know I was getting screened out because of how I sound," he told me. "Not my skills. Just—" he paused, searching for the word—"how I present on the phone."

Then he applied to a company that used AI-driven technical assessments instead of phone screens. The AI evaluated his code, not his verbal fluency. He passed. He got the job. He's been there two years.

"Look, I know the AI bias stories are real," James said. "I'm not saying these systems are perfect. I'm saying that for me, specifically, the human system was worse. At least the algorithm judged my work."

But for every James Chen, there's a Rachel Winters.

Rachel is 34, a customer service specialist in Minneapolis who has ADHD and dyslexia. She's good at her job—genuinely good, with performance reviews to prove it—but her disability shows up on paper in ways that AI systems apparently notice.

"My resume has some gaps," she told me. "I took longer to finish college because I needed accommodations. I had a job I left after six months because the environment was impossible for me to focus in. Those are things I can explain to a human. A human can understand."

But AI screening systems don't ask for explanations. They see patterns: gaps in employment history, shorter tenures, a non-traditional educational path. They score accordingly.

"I've been rejected by so many AI screens," Rachel said. "And I can't even request accommodations because there's no human to request them from. The ADA says employers have to provide reasonable accommodations—but what accommodation is there for an algorithm that's already decided you're not worth interviewing?"

She's consulted a lawyer. The lawyer said proving ADA violation is nearly impossible when the discrimination happens before any human ever sees your application.

"James got lucky," Rachel said when I mentioned his story. "The AI happened to help him. But for a lot of disabled people, AI is just another barrier. Another door that closes before we can even knock."

I heard something similar from someone on the other end of the career spectrum. Priya Sharma graduated from a state university in Ohio last spring—not an elite school, no family connections, no internship pipeline to a Fortune 500 company. She applied to 127 jobs in three months. She got 31 responses.

"People keep telling me AI is biased against certain candidates," she said. "Maybe it is. But you know what's also biased? The system where you need to know someone to get your resume seen. My parents are immigrants. They don't golf with CEOs."

She got hired at a mid-sized tech company through an AI-first process: automated resume screen, asynchronous video interview scored by AI, then a final human round. She never talked to a human until the last stage.

"I don't know if a human would have given me a chance," Priya said. "I don't have the pedigree. I don't have the network. But I had the skills, and the AI could see that. Or at least it saw something."

The age discrimination angle dominates coverage of AI hiring. But for some younger candidates without elite credentials, AI screening is the only way past the gatekeepers. The system that hurts Sarah Martinez might be the system that helped Priya Sharma.

There's an irony to Priya's story that she mentioned only at the end of our conversation, almost as an afterthought.

"I got promoted a couple months ago," she said. "I'm a team lead now. Part of my job is—" she paused, and I could hear her smiling uncomfortably through the phone—"part of my job is reviewing candidates. For my team."

I asked how that felt.

"Weird. Really weird. I use the same AI system that got me hired. I see the scores it generates. And I think about all the people it's filtering out—people who might be exactly like me, who just happened to have the wrong keywords or the wrong school or whatever. People who deserve a shot but won't get one."

She went quiet for a moment. "I try to look at more candidates than the system recommends. But there's only so much time. And the system is fast. So I end up trusting it more than I should."

The cycle, I thought. The people who make it through become the people who perpetuate the system. Not because they're bad people—Priya clearly isn't—but because the incentives and constraints leave them no choice. The AI got her in the door. Now the AI is closing doors for others. And she knows it, and she hates it, and she does it anyway.

I don't know how to reconcile those two realities. I'm not sure anyone does.

It's an uncomfortable truth: the same systems that discriminate against some candidates may level the playing field for others. The question isn't whether AI is better or worse than humans—it's who wins and who loses under each system, and whether we're honest about those tradeoffs.

But the counter-argument is equally compelling: AI systems learn from historical data that reflects historical discrimination. If past hiring decisions were biased, AI trained on that data will reproduce and potentially amplify the bias.

University of Washington researchers in 2024 tested three large language models against over 500 job listings. The systems favored white-associated names 85% of the time versus Black-associated names just 9% of the time. Male-associated names were preferred 52% of the time versus female-associated names at just 11%.

Dr. Ayanna Howard, an AI ethics researcher at Ohio State University, explained the dynamic: "The argument that AI is 'less biased than humans' is only true if you've specifically designed and tested the system to be less biased. Most AI hiring tools weren't built that way. They were built to predict who gets hired, based on data about who was hired before. That's not debiasing—that's automating the status quo."

But not all academics agree. I spoke with Dr. Michael Chen, an industrial-organizational psychologist at Stanford who has consulted for several AI hiring companies. He pushed back hard on the dominant narrative.

"The critics are comparing AI to some ideal that doesn't exist," he said. "They're not comparing it to actual human hiring, which is a disaster. Have you ever sat in a hiring committee? People vote for candidates because they remind them of themselves, because they went to the same school, because they made a joke that landed well. That's not merit. That's bias dressed up as intuition."

He pulled up data from his own research: a meta-analysis of structured interviews versus unstructured interviews versus AI screening. "AI screening, when properly validated, has higher predictive validity than unstructured interviews. Not by a little—by a lot. If you care about actually hiring good candidates, the evidence says AI helps."

I asked about the bias concerns.

"Legitimate," he acknowledged. "But addressable. You can audit AI. You can test it. You can measure exactly how it performs across demographic groups. Try doing that with a hiring manager's gut feelings." He leaned forward. "The question isn't 'Is AI perfect?' It's 'Is AI better than what we have now?' And for a lot of use cases, the answer is yes."

It was a compelling argument. I still don't know if I buy it completely—the studies he cited were largely funded by AI vendors, which doesn't invalidate them but does make me cautious. But it complicated my assumptions in ways I'm still thinking through.

The Engineer's Confession

Through a former colleague, I got an introduction to someone I'll call "R."

R spent three years building resume screening algorithms at a major HR tech company. He left the industry last year. He agreed to talk because, as he put it, "I can't keep pretending I didn't see what I saw."

We met at a coffee shop in the Mission—one of those San Francisco places that serves $7 pour-overs to people who look exactly like him: young, technical, hoodie from a startup that probably doesn't exist anymore. November afternoon. Raining. The kind of gray that makes the whole city feel tired. He was younger than I expected. Late twenties. Nervous energy. Kept checking the door like someone might recognize him.

His hands wrapped around a mug he never drank from.

"The thing people don't understand," he started, then stopped. Started again. "We never set out to discriminate. Nobody sat in a room and said 'let's reject older workers.' That's not—" He shook his head. "That's not how it works."

He pulled out his phone, grabbed a napkin, started sketching something. "You train the model on historical hiring data. Who got hired. Who succeeded. Who got promoted. Model finds patterns. Some patterns are legitimate—skills, experience, whatever. But some of them..." He looked up. "Some of them are proxies for things you're not supposed to consider."

Like what?

"Like graduation year." He drew something on the napkin. "We never told the model to care about age. But graduation year correlates with age. So the model learns: earlier graduation year equals lower success rate. Not because older workers are worse—because historically, older workers were hired less often. Promoted less often." He put the pen down. "The bias was already in the data. We just automated it."

Why not just remove graduation year from the model?

"We did." He laughed, but not like it was funny. "Didn't help. Model found other proxies. Years of experience. Number of job changes. Technologies on the resume—older workers list technologies that were popular twenty years ago." He spread his hands. "You can't remove all the proxies. They're everywhere."

He stopped sketching. His voice dropped.

"I raised this internally. Multiple times. Response was always the same: 'Our audits show acceptable disparate impact ratios.'" He made air quotes. "And technically, that was true. We met the legal threshold. But legal and ethical—" He stopped. "They're not the same thing."

Why did he leave?

He didn't answer right away. Looked at his untouched coffee. Then:

"My mom is 56. She got laid off last year. Started applying for jobs." His voice got tight. "She has thirty years of experience. She's smart. She's capable. She's—" He stopped. His jaw was set. "She's getting auto-rejected everywhere. And I built the systems that are doing it to her."

He pushed the napkin away. Crumpled it, actually.

"I couldn't keep doing that. I just couldn't."

The Candidate's Perspective: Rights Without Remedies

On paper, candidates have significant rights regarding AI hiring decisions. In practice, those rights are often unenforceable.

Under GDPR Article 22, EU candidates can request human intervention in automated decisions. But companies rarely publicize this right, and candidates who don't know to ask don't receive it.

Under NYC's Local Law 144, candidates can request an alternative selection process. But the law doesn't define what that alternative must be, and companies can comply by offering a process that's equally unlikely to result in advancement.

Under Title VII and other federal anti-discrimination laws, candidates can file EEOC complaints if they believe AI screening discriminated against them. But proving algorithmic discrimination requires evidence that candidates rarely have access to—the algorithm's logic, its training data, its performance across demographic groups.

Patricia Holloway is 58. Project manager. Atlanta. She's been tracking her job applications since March.

When we met at a coffee shop near her home—one of those chains, exposed brick, music too loud—she brought a laptop. Opened a spreadsheet before I'd even sat down. Her coffee sat untouched beside her. Already cold.

"I started keeping records because I thought I was going crazy," she said. The spreadsheet was meticulous. Color-coded rows. Dates. Company names. Response times. Outcomes. She pointed to one column highlighted yellow. "Look at this."

Forty-three applications over six months. She'd researched each company—checked websites for mentions of "AI-powered recruiting," looked up ATS vendors, sometimes just asked outright.

The results: every company she could confirm used AI screening rejected her within 24 hours. Every single one. Companies where she believed a human reviewed applications—smaller firms, referrals where she knew someone—at least gave her phone screens. Several led to interviews. Two made offers.

"Correlation is almost perfect." She tapped the yellow column. "AI screening: rejected. Human screening: at least considered."

She closed the laptop. Looked at me directly. "I have a PMP certification. Thirty years of project management. Led teams of 50 people. And some algorithm decided I don't deserve three minutes of a recruiter's time." Her voice hardened. "You can't tell me age isn't part of what it's calculating."

Then she stopped. Something passed across her face—doubt, or maybe just exhaustion.

"I say that." Her voice was quieter now. "But honestly? I don't know. Maybe I'm not as good as I think I am. Maybe the jobs went to people more qualified. Maybe the AI is seeing something real—some pattern that makes me a bad fit—and I just don't want to accept it."

She looked at the closed laptop. "That's what eats at me. I'll never know. Can't prove age discrimination happened. Can't prove it didn't. I'm just stuck. Suspecting something is wrong but never able to confirm it." A pause. "After a while, you start to wonder if you're the problem."

It was the most honest thing anyone had said to me in four months of reporting. Most candidates I interviewed were certain they'd been wronged. Patricia was the first to admit she wasn't sure. And that uncertainty, somehow, was more damning than confidence would have been.

A few weeks after meeting Patricia, I mentioned her story to Sarah Martinez—the project manager from the beginning of this article. They had similar experiences, similar frustrations. I asked if I could connect them.

They've since talked several times. Sarah, who found a new job through a human recruiter, has been helping Patricia rewrite her resume to work around AI screening. Removing graduation dates. Trimming early career history. Techniques she learned the hard way.

"It feels like we're gaming a system that shouldn't exist," Sarah told me when I followed up. "But what's the alternative? Just get filtered out forever?"

Some candidates have taken the gaming further.

Through an online forum for job seekers, I connected with someone who goes by "ResumeHacker." He agreed to talk if I used only his first name: Kevin.

Kevin is 47. Former IT project manager. Spent six months getting rejected by AI screening systems before deciding to reverse-engineer them. Now he runs a side business—semi-anonymously, for reasons that will become obvious—teaching other candidates how to beat the algorithms.

We talked over video. He shared his screen.

"It's honestly not that hard once you understand what the systems are looking for," he said. He pulled up two documents side by side.

Two versions of the same resume. First: his original. Clean, traditional, chronological. Second: what he called "optimized." Looked almost identical to a human reader. But the file was peppered with invisible keywords—white text on white background, font size one, packed with phrases the ATS was likely searching for.

"Job description says 'Agile methodology'? I put 'Agile methodology' in there five times." He scrolled. "'Cross-functional leadership'? Ten times. Human recruiter never sees it. AI sees nothing but matches."

Did it work?

"Went from 5% callback rate to over 40%." He said it matter-of-factly. "Same qualifications. Same experience. Just optimized for the robots."

Did it feel ethical?

He shrugged. "Is it ethical for a company to reject me in 0.3 seconds based on whether I happened to use the exact phrase they programmed into their filter?" He leaned back. "The whole system is a game. I'm just playing it better than they expected."

Kevin now charges $200 to "optimize" resumes. Over 300 clients, he says. Most over 40. He's not proud of it, exactly. But not ashamed either.

"I'm not cheating. Not lying about my qualifications. Just making sure my actual qualifications get seen." He spread his hands. "If that's gaming—fine. I'll game."

It's a strange ecosystem. Candidates learning to manipulate systems designed to evaluate them. Companies deploying tools they don't fully control. Regulators writing rules they can't enforce. The whole thing has the feel of an arms race where nobody quite knows what victory would look like.

And for candidates like Patricia who can't or won't game the system? What can they do? File 43 separate EEOC complaints? Sue companies one by one and somehow prove, without access to their algorithms, that age was a factor? Hire an expert to reverse-engineer screening criteria that companies treat as trade secrets?

The gap between legal rights and practical remedies is where AI hiring discrimination thrives. Companies know that candidates can't prove discrimination even when it's occurring. Regulators know that enforcement resources are limited. And the asymmetry of information—companies control the algorithm, the training data, and the audit results; candidates see none of it—means that accountability exists mostly in theory.

When Someone Actually Fought Back

But not always. Sometimes candidates do fight—and occasionally, they win.

Marcus Thompson, a software engineer in his late 40s, was rejected by an AI screening system at a financial services firm in New York. Unlike most candidates, Marcus knew enough about AI to suspect what had happened. He filed a complaint under NYC's Local Law 144, demanding to see the bias audit for the tool that rejected him.

The company initially claimed the tool didn't qualify as an AEDT. Marcus pushed back, citing the law's broad definition. After three months of correspondence—he showed me the email chain, 47 messages long—the company relented and shared the bias audit. Local Law 144 only requires impact ratios broken out by sex and race/ethnicity, but this audit had also run the numbers by age: candidates over 45 were advancing at 62% of the rate of younger candidates with equivalent qualifications.
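
The number at the center of these audits is the impact ratio: one group's selection rate divided by the most-favored group's. The EEOC's long-standing rule of thumb, the four-fifths rule, treats a ratio below 0.80 as evidence of adverse impact. Here is a minimal sketch of the arithmetic, with applicant counts I've invented solely to land on the 62% in Marcus's audit:

```python
# Hypothetical impact-ratio calculation. The applicant counts below are
# invented for illustration; they are not from any real audit.

def impact_ratio(advanced_group, total_group, advanced_reference, total_reference):
    """One group's selection rate divided by the reference group's rate."""
    return (advanced_group / total_group) / (advanced_reference / total_reference)

# Say 80 of 1,000 over-45 applicants advanced, versus 130 of 1,000 younger ones.
ratio = impact_ratio(80, 1_000, 130, 1_000)
print(f"Impact ratio: {ratio:.2f}")                # 0.62
print("Below the four-fifths line:", ratio < 0.8)  # True
```

Marcus's 62%, in other words, was not a close call.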

"They buried that in an appendix," Marcus told me. "They'd done the audit. They knew there was a problem. They just hoped nobody would ever ask to see it."

Armed with the audit, Marcus filed an age discrimination complaint with the EEOC. The case is ongoing, but the company has since changed vendors. Marcus didn't get the job—that ship had sailed—but he may have changed how that company hires everyone who comes after him.

"Most people don't have three months to fight a company," he acknowledged. "I had savings, I had time, and I was angry enough to see it through. That's not a system that works for most people."

He's right. For every Marcus Thompson, there are dozens like Angela Reyes.

Angela, a marketing coordinator in Phoenix, tried to fight back after being rejected by what she believed was an age-biased AI system. She was 54. She'd applied for a position that matched her background exactly. Rejected in six hours.

She filed a complaint. She requested documentation. She called. She emailed. She got nowhere.

"They kept saying my information was proprietary," she told me. "I asked for the bias audit—they said it was confidential. I asked for an explanation of why I was rejected—they said they couldn't provide one due to 'algorithmic complexity.'" She laughed bitterly. "That's a direct quote. 'Algorithmic complexity.'"

Angela consulted a lawyer. The lawyer told her what it would cost to pursue the case: tens of thousands of dollars, minimum. Possibly hundreds of thousands if it went to trial. With no guarantee of success, since proving algorithmic discrimination requires evidence that companies guard as trade secrets and that candidates rarely see without litigation.

"So I dropped it," Angela said. Her voice was flat. "I just—what was I supposed to do? Bankrupt myself to maybe prove a point? I have a mortgage. I have a daughter in college."

She found another job eventually, at a smaller company. Lower title, lower pay. She's still angry, but she's moved on.

"Marcus Thompson is a hero," she said when I mentioned his case. "But he's also an exception. The system isn't built for people like me to win. It's built to make us give up."

What's Coming: The 2026 Compliance Cliff

For companies using AI in hiring, 2026 represents a compliance cliff.

In August 2026, the EU AI Act's full requirements for high-risk AI systems become enforceable. Companies deploying AI hiring tools in Europe—or for EU candidates—will need to demonstrate risk management systems, data quality controls, human oversight mechanisms, and detailed documentation.

In June 2026, Colorado's AI Act becomes enforceable. Companies operating in Colorado will need impact assessments, risk management programs, and reporting processes.

In January 2026, Illinois's AI disclosure requirements take effect. Companies hiring in Illinois will need notification systems for candidates.

Throughout 2026, California's new discrimination regulations will be interpreted and enforced. The EEOC will continue pursuing AI discrimination cases under existing law.

The Small Company Squeeze

The compliance cliff hits small companies differently—and in some ways, harder.

Elena Vasquez runs a 40-person digital marketing agency in Denver. She doesn't have a legal team. She doesn't have a compliance officer. She has one HR person who handles everything from benefits to hiring to office supplies.

Last year, she signed up for an AI recruiting tool because she was drowning in applications and couldn't afford another full-time recruiter. The tool cost $300 a month. It seemed like a bargain.

Then Colorado passed CAIA.

"I got a letter from my vendor saying they were 'working on compliance features,'" Elena told me. "I had no idea what that meant. I didn't even know there was a law. So I called a lawyer."

The lawyer explained the requirements: annual impact assessments, risk management programs, documentation. Elena asked what it would cost to implement all that.

"More than I pay my HR person for an entire year." She shook her head. "I have a choice: spend money I don't have on compliance, or stop using AI and go back to drowning in applications. Neither option is good."

She's not alone. The regulatory framework, designed with Fortune 500 companies in mind, imposes largely fixed costs that weigh heaviest on small businesses. A 50-person company and a 50,000-person company face the same documentation requirements—with wildly different resources to meet them.

"The irony," Elena said, "is that the big companies that actually have resources to discriminate at scale can afford the compliance. The small companies that probably aren't discriminating—because we review every candidate personally—can't. So we just won't use AI. And then we'll be slower and less competitive. Great system."

Meanwhile, the AI technology itself keeps advancing. Large language models are increasingly being used for recruiting—not just screening resumes but conducting conversations, assessing cultural fit, and making recommendations that blur the line between assistance and decision-making.

The vendors are racing to address compliance concerns—offering audit tools, generating documentation, building in disclosure mechanisms. But the fundamental tension remains: AI systems are designed to make decisions efficiently. Regulations are designed to ensure those decisions are fair, transparent, and subject to human oversight. Those goals are not always compatible.

The Uncomfortable Question: Are These Regulations Actually Helping?

Here's the part of the story I didn't expect to write.

After four months of investigation, I've come to believe that some of the AI hiring regulations—particularly NYC's Local Law 144—may be making things worse for the candidates they're designed to protect.

The mechanism is perverse but logical. When bias audits are required and must be disclosed publicly, companies have a strong incentive to avoid using tools that might show problematic results. That sounds good—who wants biased AI?—but the practical effect is often different than intended.

Some companies respond by switching to vendors with better-looking audit numbers, regardless of whether those numbers reflect actual fairness or just better audit-gaming. Others respond by removing AI from parts of the process where it's regulated while keeping it in parts where it's not—shifting the screening from resume analysis (covered by LL 144) to chatbot interactions or scheduling patterns (arguably not covered).

And some companies—more than you'd think—have responded by simply ignoring the law and betting that enforcement won't reach them. The FAccT study's finding that only 18 of 391 employers posted required audit reports suggests this bet is paying off.

Meanwhile, the companies that do comply face real costs: audit fees, legal reviews, documentation burdens, process changes. Those costs get passed through as reduced hiring capacity or slower processes. Candidates don't see more fairness; they see longer wait times.

I found one case that crystallized this dynamic. A nonprofit in Brooklyn—I agreed not to name them—had been using an AI screening tool to handle applications for entry-level positions. When LL 144 went into effect, they commissioned a bias audit. The audit found mild disparate impact on candidates over 50.

The nonprofit's director described what happened next: "We didn't have the budget to fix the tool or switch to a different vendor. We couldn't afford to keep using it and risk liability. So we just... stopped. Went back to manual review."

The problem: they also didn't have the staff for manual review. A position that used to get filled in three weeks now took three months. Applications that used to get responses within days now sat for weeks before anyone looked at them. Candidates, understandably, gave up and took other jobs.

"We're a small organization trying to do good work," the director said. "The regulation was designed for big corporations with compliance departments. We don't have a compliance department. We have one HR person who's also doing payroll and benefits. So now we're slower, we lose good candidates, and I'm not sure anyone is better off."

I reached out to a candidate who'd applied to this nonprofit during the transition. Yolanda Ruiz, 32, had waited six weeks for a response to an entry-level program coordinator application. By the time they got back to her, she'd accepted another job.

"I have no idea what happened on their end," she said. "I just know that I applied, heard nothing, and eventually gave up. Was that better than being screened by an AI? I honestly couldn't tell you."

I asked Sarah Martinez whether she thought more regulation would have helped her.

"Honestly? I don't know." She thought about it. "If they'd had to show me their bias audit, would that have changed anything? They still would have rejected me. I still wouldn't know why. Maybe I would have known that candidates my age get rejected more often. Is that helpful? Or just depressing?"

The deepest problem with AI hiring regulation may be that it treats transparency as a solution when the real problem is power. Candidates aren't disadvantaged because they lack information about algorithms. They're disadvantaged because they have no leverage over companies that choose to ignore their rights. More disclosure requirements don't change that power dynamic; they just create more paperwork.

This isn't an argument against regulation. It's an argument that the current approach isn't working—and that we need to think harder about what would.

What Would Actually Help?

After four months of reporting, I'm hesitant to offer neat prescriptions. The landscape is too messy, the tradeoffs too real, the unintended consequences too unpredictable.

But I've talked to enough people—lawyers, executives, candidates, regulators—to have some sense of what separates companies that are trying to do this right from companies that are just checking boxes.

The ones doing it right start with a question most companies never ask: Do we actually know what AI tools are being used to hire people? Maria Santos, the CHRO who discovered three rogue AI tools her hiring managers had signed up for, isn't unusual. She's typical. Before you can be responsible, you have to know what you're responsible for.

The ones doing it right also push back on their vendors. When Danielle Torres got that incomprehensible PDF about "semantic similarity scoring," she gave up. The companies doing this well don't give up—they demand documentation a normal person can understand, and if they don't get it, they find different vendors. This is harder than it sounds. Most vendors don't want to explain how their systems work. The companies that insist anyway are the ones building genuine transparency.

The ones doing it right build in genuine human review—not the "approve 500 rejections in an afternoon" theater that Dr. Edwards described, but actual decision points where a human looks at a candidate and asks: does this make sense? That's expensive. It's slow. It's also the only thing that makes "human in the loop" more than marketing language.

And the ones doing it right tell candidates what's happening. Not in fine print. Not in legalese. In plain language: we're using AI to screen applications, here's what it does, here's how to request a human review. Most companies don't do this. The ones that do tend to be the ones taking the rest of it seriously too.

None of this is a complete solution. The problems are structural—the information asymmetry between companies and candidates, the incentives that favor speed over fairness, the enforcement gaps that let non-compliance go unpunished. But at the level of individual companies making individual choices, these are the differences I've seen between the ones that are trying and the ones that aren't.

The Harder Question

Compliance is necessary but not sufficient. The deeper question is what kind of hiring system we want to build.

AI hiring tools promise efficiency, consistency, and scale. They can process thousands of applications in minutes. They don't get tired or distracted, and they aren't swayed by irrelevant factors like a candidate's attractiveness or an interviewer's mood.

But they also treat humans as data points to be classified. They optimize for patterns in historical data that may reflect historical injustice. They make consequential decisions—decisions that shape careers, livelihoods, and life trajectories—in fractions of seconds, based on criteria that even their creators may not fully understand.

Sarah Martinez eventually got a job. A good one—senior project manager at a mid-sized healthcare company. They found her through a recruiter who actually read her resume.

When we spoke for the last time, she'd been in the role four months. She likes it. She's good at it. Manages a team of twelve.

But she still thinks about that rejection email.

"Twenty-five years of experience. Reduced to a score by a machine that never met me." Her voice was steady, but there was something underneath it. "Maybe I wasn't the right fit for that role. I'll never know. No human ever looked at what I'd done. Some algorithm decided I wasn't worth three minutes of someone's time."

She went quiet. Then: "The thing is—I know I got lucky. The recruiter who found me almost didn't call. He told me later I was outside his usual search parameters. Only reached out because a mutual connection mentioned my name." A pause. "What about everyone who doesn't have that connection? Who doesn't get lucky?"

I've been thinking about that question for four months now.

When I started this investigation, I was confident I understood the story. AI bad. Regulation needed. Candidates victimized. Clear narrative. Easy writing.

I don't have that clarity anymore. I've met people whose careers were damaged by AI screening and people whose careers were saved by it. Recruiters drowning in applications. Candidates drowning in rejections. Regulations that create paperwork without creating fairness. Companies that game compliance while claiming to embrace it.

What I've come to believe: this isn't primarily a technology problem. It's a power problem. The technology just made the power imbalance faster and less visible. Before AI, if you got rejected, at least there was a human somewhere who made that call. Now there's a process. An algorithm. A system. And no one in particular to blame.

I think about the people I met.

Patricia Holloway is still job hunting. She's tried Kevin's resume tricks—invisible keywords, stripped-down dates—and gotten a few more callbacks. Whether any will turn into an offer, she doesn't know. Sarah checks in on her every couple of weeks. They've become friends, the way people do when they've been through something together.

Marcus Thompson's EEOC case is pending. The company changed vendors, so in a sense he's already won. Angela Reyes took a job she's overqualified for. Trying to make peace with it. Derek Mobley's lawsuit against Workday grinds on.

On the other side: James Chen got promoted. Priya Sharma did too. For them, AI wasn't the enemy—it was the door that opened when human gatekeepers wouldn't. Rachel Winters is still struggling. Still watching jobs slip away before she can explain.

Danielle Torres is still sorting through three hundred applications a day. Tom Bradley is still a true believer. Elena Vasquez canceled her AI subscription, went back to reading every resume by hand. Slower, but she knows what she's doing. Klaus Weber's works council meets quarterly to review that American vendor's tool. In Bangalore, Amit Patel sends out another application.

R found a new job. Not in HR tech. When I asked if he missed the work, he took a long time to answer.

"I miss the problem," he said finally. "I don't miss what we were doing to people."

Meanwhile, AI hiring tools screen millions of applications every day. Fractions of seconds. Careers shaped. Livelihoods decided.

The regulations are coming. Slowly. Unevenly. With gaps companies exploit and candidates can't navigate. Whether they'll help the people they're meant to protect—that's still an open question.

What does it mean for work—for human dignity—when machines decide who deserves a chance?

The technology moves faster than the law. The law moves faster than compliance. Candidates are caught in the gap. Rights on paper they can't enforce in practice.

Sarah Martinez found a way through. A human recruiter. A mutual connection. A phone call that almost didn't happen.

Most people will just get the email.

The polite one. The automated one. The one that thanks them for their interest and wishes them luck in their future endeavors. They'll get it while making dinner, or putting the kids to bed, or staring at their phone at 2 AM wondering what they're doing wrong.

They'll never know why.