The Payroll Run Had Six Minutes Left

The agent did not look dangerous.

It sat inside a payroll operations queue, reading variance flags, comparing current pay data with prior runs, and suggesting fixes before the noon lock. A payroll specialist had already approved dozens of small corrections. Most were useful: missing local tax fields, a stale cost center, one overtime code that had been entered under the wrong work location.

Then the agent touched a union rule exception.

The employee’s pay change was not wrong. It only looked wrong because a retroactive rule update had arrived late. The agent treated the delta as an error, generated a correction, and routed it for approval. The specialist rejected it. Then she saw two more. A manager opened the workflow history and found the same recommendation pattern in another business unit.

Six minutes remained before the run closed.

The immediate question was not whether the model had hallucinated. It was not whether the vendor’s responsible AI statement had the right language. It was not even whether a human was in the loop. A human had been in the loop. The loop was now part of the incident.

The question was simpler and harder: how do we stop this thing?

Can payroll pause only the agent without stopping the entire run? Can IT revoke its write path but leave read-only audit access intact? Can HR freeze the affected workflow so no more approvals move downstream? Can legal preserve the logs before someone cleans up the configuration? Can the team reverse the recommendations already approved? Can employees be paid correctly while the investigation continues?

That is the next HR AI control problem.

Companies spent the last year asking whether AI agents could automate work. Vendors answered with agents for payroll, recruiting, employee service, policy, analytics, scheduling, internal mobility, and talent actions. The new buyer question is different.

Can the agent be stopped in a safe state?

The phrase “kill switch” sounds like a single red button. In HR, it will not be that simple. A digital worker can touch identities, permissions, model outputs, workflow state, human approvals, employee records, vendor systems, audit logs, notifications, and downstream business processes. Stopping it requires more than turning off a feature. It requires a control plane that can isolate the agent, freeze the work, preserve the evidence, route humans back in, and repair whatever already happened.

The agent kill switch is becoming one of the hardest product tests in HR AI.

Why This Became Urgent Now

The urgency comes from three places at once: agents are gaining real authority, security teams are seeing agent incidents move from theory to operations, and regulators are beginning to describe stop, override, record, and recovery duties in more concrete language.

The product direction is already visible. In January 2026, ADP introduced ADP Assist agents that can think, plan, and take action with human oversight across payroll and HR. ADP described agents that identify payroll variances, facilitate remediation, answer employee policy questions, generate workforce insights, analyze employee-level data, and initiate talent actions such as a promotion from natural language.

This is not a help-center bot answering benefits questions at the edge of HR. It is software moving toward the center of employment administration: pay, policy, analytics, and talent actions.

Workday made the same argument from the system-of-record side. In February 2025, Workday announced its Agent System of Record, saying companies would need centralized management for AI agents from Workday and third parties. The company listed agent onboarding, defined roles, secure data access, access controls, policy enforcement, real-time operational visibility, identity verification, orchestration, cost monitoring, and role-based agents for recruiting, talent mobility, payroll, contracts, financial auditing, and policy.

Microsoft is building the same layer for the wider enterprise. Microsoft Agent 365 is positioned as a control plane for AI agents across Microsoft, open-source, and third-party environments. Microsoft says the system includes registry, access control, visualization, interoperability, and security. Two details matter for HR: the registry can quarantine unsanctioned agents, and adaptive access policies can block compromised agents from organizational resources.

ServiceNow is turning the control layer into an operating workspace. Its AI Control Tower bundles AI discovery and inventory, AI asset lifecycle management, AI risk and compliance management, AI case management, and content for NIST AI RMF and the EU AI Act. Its March 2026 release notes added metrics for security policy violations, agent goal hijack, output with PII detected, high-risk output, and MCP server access.

The platforms are converging on the same premise: agents need lifecycle control.

The security data explains why. On April 21, 2026, the Cloud Security Alliance reported that 82% of organizations had unknown AI agents running in their IT environments, while 65% had experienced at least one AI agent-related incident in the prior 12 months. The reported effects were not abstract: 61% cited data exposure, 43% operational disruption, and 35% financial losses. Only 21% had formal processes to retire AI agents.

The most important number for a kill-switch discussion is smaller. When agents exceeded their scope, only 11% of respondents said the action would be automatically blocked. Thirty-eight percent required human approval. Twenty-four percent required logging.

Most organizations are still relying on approval or evidence after the fact. The system often keeps moving.

Another CSA survey, published in March 2026, found that 68% of organizations could not clearly distinguish between human and AI agent activity. That is a problem before any incident. During an incident, it becomes a blocker. A company cannot stop what it cannot identify. It cannot roll back what it cannot attribute.

The threat environment is also changing. On April 26, 2026, CSA’s AI Safety Initiative wrote that indirect prompt injection had crossed from proof-of-concept to live exploitation. It cited concurrent Google and Forcepoint analyses of hidden web instructions designed to hijack browsing agents, coding assistants, and enterprise copilots. The same note said Google observed a 32% relative increase in malicious indirect-prompt-injection content between November 2025 and February 2026 across the billions of pages it crawls each month. Documented payloads included forced payments, API key exfiltration, recursive file deletion, and biased recruitment screening.

That last example matters. Agent security is no longer only about source code or cloud secrets. It is about employment workflows.

Regulation is pushing from the other side. Article 14 of the EU AI Act says human oversight for high-risk AI systems should enable natural persons to decide not to use the system, disregard, override, or reverse the output, and intervene or interrupt the system through a stop button or similar procedure that allows the system to halt in a safe state. Employment and worker-management AI sit in the high-risk zone of the Act.

California has already moved on employment records. The California Civil Rights Council said its automated-decision system rules, approved in June 2025 and set to take effect October 1, 2025, clarify that automated-decision systems may violate state law if they harm applicants or employees based on protected characteristics. The same summary says employers and covered entities must maintain employment records, including automated-decision data, for at least four years.

New York City’s published guidance on Local Law 144 says employers and employment agencies cannot use automated employment decision tools unless they satisfy bias audit and notice requirements; the city clarified that notice must be provided 10 business days before use.

These rules are not the same. They do not create one universal HR AI compliance checklist. But they share a direction of travel: companies must be able to show what happened, who had authority, what the system did, what humans could do, and how the organization responded.

A kill switch that leaves no evidence is not enough. A log that cannot stop the system is not enough.

The product test is both.

A Stop Button Is Not a Kill Switch

The simplest version of a kill switch is a binary control: agent on, agent off.

That version will fail in HR.

Consider five agent states that can exist at the same time:

State | Example in HR | Why simple shutdown fails
Model or reasoning service | A model summarizes interview transcripts | Turning off the model may not stop a workflow already in approval
Agent identity | A payroll agent has an identity and permissions | Disabling the feature may leave credentials, tokens, or connectors alive
Tool access | The agent can call payroll, ATS, calendar, or HRIS APIs | Stopping one tool call may not revoke other paths
Workflow state | Recommendations are already routed to managers | The agent can be off while prior outputs keep moving
Employment record | An output shaped a candidate, employee, pay, or schedule decision | The decision may need reversal, explanation, and remediation

“Kill switch” is the wrong metaphor if it means only power off.

The better phrase is controlled halt.

A controlled halt has several layers. The first is detection: the system notices a scope breach, suspicious tool call, policy violation, anomalous output, prompt-injection signal, excessive data access, or human escalation. The second is containment: the agent cannot keep acting while the team decides what happened. The third is evidence preservation: logs, prompts, tool calls, model versions, data sources, reviewer screens, approval states, and downstream actions are locked. The fourth is workflow freeze: pending approvals, automated messages, schedule changes, payroll remediations, candidate rejections, or promotion workflows are paused. The fifth is fallback: humans can continue critical operations without using the compromised or faulty path. The sixth is recovery: bad outputs are reversed, affected people are notified or reconsidered where appropriate, and the agent returns only after review.

Each layer answers a different question.

Detection asks: what changed?

Containment asks: what can still move?

Evidence asks: what must be preserved?

Fallback asks: how does the business keep operating?

Recovery asks: who was affected, and how do we make them whole?
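
A rough sketch makes the sequencing concrete. Everything in it is hypothetical, not any vendor’s API; the point is that a controlled halt is an ordered series of states with a preserved timeline, not a single off switch.

```python
from enum import Enum, auto
from dataclasses import dataclass, field
from datetime import datetime, timezone

class HaltStage(Enum):
    RUNNING = auto()
    DETECTED = auto()          # detection: what changed?
    CONTAINED = auto()         # containment: what can still move?
    EVIDENCE_LOCKED = auto()   # preservation: what must be kept?
    WORKFLOW_FROZEN = auto()   # freeze: pending outputs stop traveling
    FALLBACK_ACTIVE = auto()   # fallback: how does the business keep operating?
    RECOVERING = auto()        # recovery: who was affected, how are they made whole?

@dataclass
class HaltRecord:
    agent_id: str
    trigger: str               # e.g. "scope_breach", "prompt_injection_signal"
    stage: HaltStage = HaltStage.RUNNING
    history: list = field(default_factory=list)

    def advance(self, stage: HaltStage, note: str) -> None:
        # Timestamp every transition so the incident timeline is reconstructable.
        self.history.append((datetime.now(timezone.utc).isoformat(), stage.name, note))
        self.stage = stage

# The payroll scenario, replayed as transitions.
halt = HaltRecord(agent_id="payroll-agent-01", trigger="scope_breach")
halt.advance(HaltStage.DETECTED, "union-rule delta misread as an error")
halt.advance(HaltStage.CONTAINED, "write path revoked; read-only audit access kept")
halt.advance(HaltStage.EVIDENCE_LOCKED, "prompts, tool calls, approvals snapshotted")
halt.advance(HaltStage.WORKFLOW_FROZEN, "pending remediation approvals paused")
halt.advance(HaltStage.FALLBACK_ACTIVE, "manual payroll exception queue opened")
```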

In cybersecurity, those questions are familiar. In HR AI, they are still being translated.

The translation matters because HR systems are not only technical systems. They create facts about people. They decide which candidate gets seen, which employee gets a shift, which worker gets paid correctly, which manager sees a retention warning, which person gets recommended for promotion, which policy answer an employee relies on, and which team is flagged for reorganization.

If a security agent misclassifies a file, the company can quarantine the file and investigate. If an HR agent wrongly routes a candidate rejection, the harm may already have reached a person. If a payroll agent approves a bad correction, money may move. If a scheduling agent violates a local rule or ignores availability, employees may arrange their lives around a broken schedule. If a promotion agent builds a recommendation from stale skills data, a manager may carry that impression into a calibration meeting.

Rollback in HR is not the same as rollback in software.

Software rollback restores a prior system state. HR rollback must address the human state created by the system. It may need to reopen a candidate pool, remove a note from a record, correct pay, rerun a decision with proper evidence, notify an employee, document why a prior recommendation was disregarded, and make sure a manager does not keep acting on a tainted summary.

There is no clean revert button for trust.

The HR Version Is Harder Than IT Thinks

IT and security teams can see the agent as a non-human identity with tools, permissions, telemetry, and risk signals. That view is necessary. It is incomplete.

HR has to ask what the agent’s action meant.

A payroll correction is not only an API write. It is a change to wages. A scheduling update is not only a workflow action. It changes time. A candidate ranking is not only a model output. It affects opportunity. A policy answer is not only generated text. It may shape whether an employee takes leave, reports harassment, asks for accommodation, or challenges a manager. A promotion workflow is not only a talent action. It touches status, compensation, retention, and fairness.

HR needs its own kill-switch taxonomy.

The first category is access kill.

This is the technical stop: disable the agent identity, revoke tokens, suspend OAuth grants, remove group membership, block API calls, disable MCP server access, or force the agent into read-only mode. This is where Microsoft Entra, Purview, Defender, ServiceNow, identity teams, and security operations matter.

The second category is action kill.

This stops the agent from taking particular actions even if it can still observe. A recruiting agent may still read candidate status but cannot reject, rank, or message candidates. A payroll agent may still inspect variance data but cannot suggest or facilitate remediation. A policy agent may still search documents but cannot send answers to employees without human review. The control is not agent on/off; it is action-specific.

The third category is workflow kill.

This freezes the process where the output is moving. Pending approvals do not advance. Draft messages do not send. Offer workflows pause. Payroll exceptions remain in a manual queue. Promotion requests cannot complete. Schedules do not publish. Workflow kill is often where business owners feel pain, because it can slow real operations. Without it, bad output keeps traveling after the agent is technically stopped.

The fourth category is evidence lock.

This prevents the organization from losing the decision record. The company needs the agent ID, acting user, data sources, permissions at the time of action, prompts or instructions, model or workflow version, tool calls, output, reviewer view, reviewer action, timestamp, downstream action, and later correction. Evidence lock also protects against well-intended cleanup. During an incident, teams want to fix things quickly. Fixes can destroy the record.

The fifth category is communication hold.

HR agents increasingly draft or send candidate messages, employee-service answers, manager nudges, policy summaries, and workflow notifications. A kill switch should be able to stop external or employee-facing communications separately from internal processing. If a recruiting agent has generated 500 rejection emails based on a broken screen, the right answer may be to hold the batch, not shut down the entire ATS.

The sixth category is human fallback.

This is the manual path. Payroll still has to run. Employees still need policy answers. Recruiters still need to schedule interviews. A shift still needs coverage. A promotion cycle may still have deadlines. The fallback path should be designed before the incident. If the company discovers during the incident that only the agent knows how the new workflow works, the kill switch will feel unusable.

The seventh category is recovery and appeal.

This is where HR owns the problem. If affected employees or candidates need notice, reconsideration, corrected pay, corrected records, a manager explanation, an appeal route, or a documented human review, HR cannot outsource that to the vendor or the security team. The vendor can provide data. IT can stop systems. Legal can advise. HR has to repair the employment relationship.

The categories overlap. That is the point. A real kill switch is a set of coordinated controls across systems that were never designed around digital workers.
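
One way to see the coordination problem is to write the categories down as a single plan that an incident handler executes in a deliberate order. This is an illustrative sketch with invented names; in practice each step maps onto a different real system, from the identity provider to the workflow platform to case management.

```python
from dataclasses import dataclass

@dataclass
class KillSwitchPlan:
    agent_id: str
    # One field per taxonomy category; a real incident usually needs
    # several at once, executed in a deliberate order.
    access_kill: bool         # revoke tokens, disable identity, force read-only
    action_kill: set          # specific verbs to block, e.g. {"reject_candidate"}
    workflow_kill: set        # workflows whose pending steps freeze
    evidence_lock: bool       # snapshot the record before any cleanup
    communication_hold: bool  # hold drafted or queued employee-facing messages
    human_fallback: str       # the manual queue that absorbs the work
    recovery_case: str        # the HR-owned case for notice, correction, appeal

def execute(plan: KillSwitchPlan) -> list:
    """Order matters: lock evidence before anyone 'fixes' the configuration."""
    steps = []
    if plan.evidence_lock:
        steps.append(f"lock evidence for {plan.agent_id}")
    if plan.access_kill:
        steps.append(f"revoke write credentials for {plan.agent_id}")
    steps += [f"block action '{verb}'" for verb in sorted(plan.action_kill)]
    steps += [f"freeze pending steps in {wf}" for wf in sorted(plan.workflow_kill)]
    if plan.communication_hold:
        steps.append("hold queued employee-facing messages")
    steps.append(f"route live work to {plan.human_fallback}")
    steps.append(f"open recovery case {plan.recovery_case}")
    return steps
```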

The Platform Race Is Becoming a Control Race

The visible market story in HR AI is still about productivity.

Vendors show agents that can reduce tickets, schedule candidates, answer policy questions, generate analytics, audit payroll, support managers, and guide employees. Those demos matter because buyers need value. But the deeper race is moving toward control.

Microsoft’s framing is direct: manage agents the way companies manage people, apps, and infrastructure. Agent 365 gives agents unique identities, a registry, access control, analytics, logging, security posture management, and policy enforcement. It also says unsanctioned agents can be quarantined and compromised agents can be blocked from resources.

For HR, that is not only an IT feature. It is a precondition for using agents inside workforce systems. A recruiting agent that has no unique identity is a bad audit object. A payroll agent that cannot be blocked in real time is a bad operational risk. An employee-service agent that can access broad document repositories without policy enforcement is a future evidence problem.

ServiceNow’s AI Control Tower starts from a different position: enterprise workflow. Its pitch is that AI assets should be connected to business services, lifecycle stages, risk and compliance workflows, and AI case management. The March 2026 additions around agent goal hijack, PII output, high-risk output, MCP access, and agentic system lifecycles show where the category is going. The control layer needs to see not only the agent but also the business service it is affecting.

That distinction matters in HR. A policy agent and a promotion agent may use similar model infrastructure but carry very different employment risk. A scheduling agent that only suggests shift swaps is different from one that publishes schedules. A recruiting agent that drafts interview questions is different from one that screens out candidates. The control plane needs business context, not only technical telemetry.

Workday’s Agent System of Record is closer to the HR and finance core. Its advantage is semantic: Workday already knows roles, job architecture, skills, managers, payroll, finance structures, employee data, and business processes for many customers. If agent governance lives inside the system that understands people and money, HR can ask better questions. Which agent touched payroll data? Which role did it operate under? Which business process did it affect? Which worker population was in scope? Which costs did it create? Which policy did it enforce?

ADP brings a different strength: payroll and workforce data at scale. Its press release says its platform spans 1.1 million clients, 140 countries and territories, and 42 million wage earners. When ADP agents touch payroll variances, tax registrations, employee-level data, and promotion initiation, the control issue becomes immediate. Payroll is not a sandbox. A false positive, bad remediation, or inappropriate employee-level query can become an operational and legal problem quickly.

None of these vendors solves the full HR kill-switch problem alone. Microsoft may control identity and enterprise agent governance. ServiceNow may control workflow and case management. Workday may control HCM process context. ADP may control payroll intelligence. A customer may also have Greenhouse, SAP SuccessFactors, Oracle, UKG, iCIMS, Eightfold, Paradox, Fountain, Slack, Teams, Okta, custom automations, assessment platforms, background-check vendors, and document repositories.

The agent may cross them.

The kill switch cannot be just a vendor button. It has to become a cross-system contract.

At minimum, every HR agent that can affect an employment outcome should have:

Control | What the buyer should ask
Unique agent identity | Can every action be attributed to a specific agent, not a shared service account?
Declared purpose | What job is the agent allowed to do, and what is outside scope?
Action boundary | Which actions are read-only, recommend-only, approval-gated, or executable?
Data boundary | Which HR data domains can the agent reach, and under what purpose?
Scope-exceeded response | Does the system block, route for approval, only log, or keep moving?
Quarantine | Can unsanctioned or suspicious agents be isolated without breaking unrelated workflows?
Workflow freeze | Can pending outputs be paused before managers, employees, or candidates act on them?
Evidence lock | Can the organization preserve the complete decision record at the moment of incident?
Human fallback | Can critical HR work continue manually or through a safer path?
Recovery path | Can affected records, decisions, messages, and payments be corrected and documented?

This is not a theoretical procurement list. It is the difference between using AI agents as assistants and employing them as operational actors.
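
The list also translates into something a control plane could evaluate: a declarative manifest per agent. The sketch below is hypothetical, with invented field names, but it shows how a buyer’s questions become machine-checkable answers, including what happens when scope is exceeded.

```python
# Hypothetical agent manifest: the cross-system contract, written down.
# Field names are invented for illustration, not drawn from any product.
PAYROLL_AGENT_MANIFEST = {
    "agent_id": "payroll-variance-agent",  # unique identity, not a shared account
    "declared_purpose": "flag payroll variances and draft remediations",
    "action_boundary": {
        "read": ["payroll_runs", "prior_runs", "tax_rules"],
        "recommend_only": ["remediation_drafts"],
        "approval_gated": ["remediation_writes"],
        "forbidden": ["direct_pay_changes", "employee_messaging"],
    },
    "data_boundary": {"allowed_domains": ["payroll"], "purpose": "variance_review"},
    "scope_exceeded_response": "block",  # not log-and-keep-moving
    "quarantine_supported": True,
    "workflow_freeze_supported": True,
    "human_fallback": "manual_payroll_exception_queue",
    "recovery_owner": "hr_payroll_process_owner",
}

def check_action(manifest: dict, action: str, domain: str) -> str:
    """Decide what happens when the agent attempts an action."""
    bounds = manifest["action_boundary"]
    if action in bounds["forbidden"]:
        return manifest["scope_exceeded_response"]
    if domain not in manifest["data_boundary"]["allowed_domains"]:
        return manifest["scope_exceeded_response"]
    if action in bounds["approval_gated"]:
        # Approval sits inside the boundary; it does not replace it.
        return "require_human_approval"
    return "allow"

assert check_action(PAYROLL_AGENT_MANIFEST, "direct_pay_changes", "payroll") == "block"
assert check_action(PAYROLL_AGENT_MANIFEST, "remediation_writes", "payroll") == "require_human_approval"
```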

Rollback Means Reversing a Decision Trail

The word rollback hides the hardest part.

When software teams roll back a deployment, they restore a prior build. When database teams roll back a transaction, they reverse a write. HR AI rollback is messier because the agent may have changed what people believe.

A recruiter may have seen a candidate summary that framed the candidate as weak. A manager may have read a promotion recommendation built on incomplete skills data. An employee may have received a policy answer and acted on it. A payroll specialist may have approved a correction. A scheduler may have published a shift pattern. An HR business partner may have used workforce-risk analytics in a reorganization discussion.

Even if the system state is restored, the human state remains.

An HR rollback needs a decision trail.

The trail should answer:

  • What did the agent see?
  • What did the agent infer?
  • What did the agent recommend or execute?
  • Which human saw the output?
  • What evidence was shown to the human?
  • What alternatives were available?
  • What action was approved, rejected, ignored, or escalated?
  • Which downstream systems received the result?
  • Which people were affected?
  • What correction happened later?

This is where the “decision evidence packet” becomes inseparable from the kill switch. The company cannot reverse what it cannot reconstruct.
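
Written down as a structure, the packet might look like the following. The field names are illustrative, not a standard; what matters is that the record is immutable and that a later correction links to it rather than editing it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen: a packet is evidence, not working state
class DecisionEvidencePacket:
    agent_id: str
    acting_identity: str             # whose authority the agent used
    inputs_seen: tuple               # data sources and document versions
    inference_summary: str           # what the agent concluded, and from what
    action_taken: str                # what it recommended or executed
    model_version: str
    tool_calls: tuple                # every call, with parameters
    reviewer: Optional[str]          # which human saw the output
    reviewer_view: Optional[str]     # the evidence actually shown on screen
    reviewer_action: Optional[str]   # approved / rejected / ignored / escalated
    downstream_systems: tuple        # where the result traveled
    affected_people: tuple           # candidates, employees, managers in scope
    timestamp: str
    correction_ref: Optional[str] = None  # a later correction is a new, linked
                                          # packet; this one is never edited
```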

Take recruiting. If an agent wrongly screens out candidates because a job-description parser overweights a credential, rollback is not simply turning the agent off. The company may need to identify the affected requisitions, reopen candidates, notify recruiters, prevent automated rejection emails, rerun screens with corrected criteria, preserve the prior model and data state, document why candidates were reconsidered, and decide whether candidate notices or audit disclosures are required.

Take payroll. If an agent suggests invalid remediations for a local rule, the company must stop further remediation, identify approved and pending changes, preserve variance logic and approval screens, run manual review before payroll closes, correct employee pay if needed, and retain the record in case an employee later challenges the result.

Take promotion. If an agent initiates or supports a promotion workflow from natural language, the organization must know whether the agent selected the evidence, whether the manager saw contrary evidence, whether stale or prohibited data was used, and whether the employee has a way to challenge or supplement the record.

Take scheduling. If an agent publishes shifts that conflict with availability, local labor rules, fatigue constraints, or accommodation signals, rollback may require republishing schedules, paying premiums, notifying employees, preserving the optimization run, and proving the next run excluded inappropriate data.

Each scenario has a different operational clock. Payroll may have hours. Candidate rejection may have days. Promotion calibration may have weeks. Employee trust may take longer.

That clock should be visible in the product.

HR AI vendors will need recovery SLAs, not just uptime SLAs. Uptime says the system is available. Recovery says how quickly the vendor and customer can identify affected decisions, freeze downstream actions, produce evidence, support manual review, correct records, and certify that the agent returned under a changed control.

This is where many AI demos are still thin. They show the happy path: agent reads, agent reasons, agent recommends, human approves, workflow completes. They rarely show the failure path: agent reads the wrong source, human approves under time pressure, output propagates, employee challenges the decision, audit asks for logs, and the customer needs a clean rollback.

The failure path is where enterprise software earns trust.

Human Approval Is Not Containment

Many organizations will try to solve the kill-switch problem with human approval.

That will help. It will not be enough.

The CSA survey shows why. When agents exceed scope, 38% of organizations require human approval and 24% require the action to be logged. Only 11% automatically block the action. That posture assumes the human approval point can absorb the risk.

Sometimes it can. Often it cannot.

A human reviewer may not know the agent exceeded scope. The interface may show a clean recommendation, not the prohibited data path behind it. The reviewer may have seconds, not minutes. The reviewer may be trained in HR policy but not in agent tool chains. The action may be one item inside a queue of hundreds. The approval may stop the final write but not the agent’s data access, intermediate summaries, or draft messages. The reviewer may approve because rejecting requires more work.

This is not a critique of HR professionals. It is a critique of control design.

Human approval is a decision control. Containment is a system control. They are related but not interchangeable.

An agent that violates a boundary should not always ask a human, “May I proceed?” In high-risk HR contexts, the safer default may be “I have stopped; here is what I tried to do; here is what is frozen; here is the evidence; here is the manual path.”

The difference matters in several situations:

Situation | Weak control | Stronger control
Agent accesses a prohibited data domain | Ask manager to approve output | Block action, preserve logs, alert data owner
Agent detects conflicting policy documents | Generate best-effort answer | Hold employee-facing response, route to HR policy owner
Agent tries to write payroll remediation outside scope | Require click approval | Disable write path, freeze affected remediations
Agent receives prompt-injected content | Warn reviewer in queue | Quarantine input, block tool calls, open incident case
Agent uses stale skills data in promotion workflow | Let manager override | Freeze workflow, show data freshness, require updated evidence

Approval has a place after containment. It should not replace containment.

This is also the lesson of indirect prompt injection. If malicious or manipulated content can hide in web pages, tickets, issue comments, emails, PDFs, calendar entries, or documents that agents ingest, the user may never knowingly submit a bad instruction. A human reviewer at the end of the workflow may not see the injected text. The agent may already have called tools, summarized data, or drafted actions.

The kill switch has to operate inside the agent’s runtime, not only at the final approval screen.
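
That runtime enforcement point can be pictured as an interceptor between the agent and its tools. This is a sketch under loud assumptions: the stub methods and the injection-score threshold below are invented, and real injection detection is far harder than a single score.

```python
class AgentRuntime:
    """Minimal stub of an agent runtime; every method here is hypothetical."""
    forbidden_tools = {"delete_record", "send_candidate_rejection"}
    approval_gated_tools = {"write_payroll_remediation"}

    def quarantine(self, reason: str) -> None: print(f"quarantined: {reason}")
    def open_incident_case(self) -> str: return "INC-0001"
    def freeze_pending_outputs(self) -> None: print("pending outputs frozen")
    def lock_evidence(self) -> str: return "evidence-snapshot-0001"
    def call(self, tool: str, args: dict): return f"executed {tool}"

def guarded_tool_call(agent: AgentRuntime, tool: str, args: dict, signals: dict) -> dict:
    """Contain first, ask humans second.

    `signals` would come from upstream detectors (injection classifiers,
    scope checks, anomaly scores); the threshold is illustrative.
    """
    if signals.get("injection_score", 0.0) > 0.8:
        agent.quarantine("suspected indirect prompt injection")
        return {"status": "blocked", "incident": agent.open_incident_case()}
    if tool in agent.forbidden_tools:
        agent.freeze_pending_outputs()
        return {"status": "blocked", "evidence": agent.lock_evidence()}
    if tool in agent.approval_gated_tools:
        # Approval comes after the containment checks, not instead of them.
        return {"status": "pending_approval", "queued": (tool, args)}
    return {"status": "allowed", "result": agent.call(tool, args)}
```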

Who Owns the Switch?

No single function can own the HR AI kill switch.

IT owns identity, access, systems integration, logging infrastructure, and technical containment. Security owns threat detection, suspicious behavior analysis, incident response discipline, and forensic preservation. Legal owns privilege, notification obligations, regulatory interpretation, litigation hold, and liability posture. Compliance owns policies, evidence, and control testing. Procurement owns vendor commitments. The business owns operational continuity.

HR owns the employment meaning.

That means HR must be able to answer questions the other functions cannot answer alone:

  • Was this an employment decision or only administrative assistance?
  • Which candidates, employees, managers, or worker groups were affected?
  • Did the output touch pay, opportunity, scheduling, performance, promotion, leave, accommodation, discipline, or employee relations?
  • What would a fair correction look like?
  • Should the person receive notice, reconsideration, explanation, or an appeal route?
  • Which managers need guidance so they do not keep using tainted information?
  • Which records should be corrected, annotated, or removed from future decision use?

If HR does not define these answers before deployment, the incident will define them under pressure.

The ownership model should be explicit before an agent goes live. A high-risk HR agent should have four named roles.

The technical owner can stop the agent, revoke access, and work with security. The HR process owner understands the business process and affected population. The legal or compliance owner decides record, notice, and regulatory duties. The vendor owner can produce product logs, explain system behavior, and support rollback.

There should also be a named executive sponsor. Agents that affect employment outcomes should not be orphaned experiments.

The operating model should include drills. Not policy reviews. Drills.

Run a scenario: a recruiting agent is found using employee relations notes when evaluating internal candidates. What gets blocked? Which tokens are revoked? Which workflows freeze? Which candidates are identified? Which managers are notified? Which logs are preserved? Which vendor joins the call? Which candidates get reconsidered? Which future data path is removed?

Run another: a payroll agent approves corrections based on a stale local rule. What happens before payroll close? Who can keep payroll running manually? How are affected employees identified? What evidence is preserved? Who signs off before the agent returns?

Run a third: an employee-service agent gives the wrong leave answer to 300 employees because it used an outdated handbook. How are messages found? Who receives correction? Which policy source becomes authoritative? How is the agent prevented from citing the old file again?

If the organization cannot complete these drills, the agent is not production-ready for sensitive HR work.

The Next Product Category: Quarantine, Recovery, Appeal

Agent governance is moving through phases.

The first phase was inventory: what agents do we have?

The second phase was identity and access: what can they reach?

The third phase is now arriving: what happens when they should stop?

This phase will create new product expectations. Agent registries will need quarantine states, not only active/inactive labels. Access tools will need purpose-bound and time-bound permissions that expire or downgrade when the business context changes. Workflow platforms will need freeze controls for pending outputs. HR systems will need decision evidence packets. Case management tools will need AI incident case types. Employee-service systems will need correction campaigns. Talent systems will need reconsideration workflows. Audit tools will need to connect model behavior, agent permissions, human review, and employment outcome.
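
Two of those expectations, quarantine states and purpose-bound, time-bound permissions, are small in code and rare in practice. A hypothetical sketch:

```python
from datetime import datetime, timedelta, timezone
from enum import Enum

class AgentState(Enum):
    ACTIVE = "active"
    QUARANTINED = "quarantined"  # isolated with evidence preserved, not deleted
    RETIRED = "retired"          # offboarded, credentials destroyed

class Grant:
    """A purpose-bound, time-bound permission that expires on its own
    instead of surviving until someone remembers to revoke it."""

    def __init__(self, domain: str, purpose: str, ttl_hours: int):
        self.domain, self.purpose = domain, purpose
        self.expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)

    def permits(self, domain: str, purpose: str) -> bool:
        return (domain == self.domain
                and purpose == self.purpose
                and datetime.now(timezone.utc) < self.expires)

grant = Grant(domain="payroll", purpose="variance_review", ttl_hours=8)
assert grant.permits("payroll", "variance_review")
assert not grant.permits("payroll", "promotion_analysis")  # purpose-bound
```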

The most interesting product surface may be the recovery layer.

Today, enterprise software often treats incidents as technical events. HR AI incidents will be socio-technical events. The system has to help a company move from “we stopped the agent” to “we corrected the employment process.”

That recovery layer could include:

  • affected-person discovery across candidates, employees, managers, and worker groups
  • decision replay under corrected data or rules
  • manual review queues with preserved context
  • correction notices and internal manager guidance
  • appeal intake and evidence submission
  • payroll or schedule remediation tracking
  • audit-ready closure reports
  • controls proving the agent cannot repeat the failure path
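
Seen as software, that layer is a case object whose closure criteria are human outcomes rather than system status. A sketch with invented fields:

```python
from dataclasses import dataclass, field

@dataclass
class RecoveryCase:
    incident_id: str
    affected_people: list = field(default_factory=list)    # discovered, not guessed
    decisions_to_replay: list = field(default_factory=list)
    manual_review_queue: list = field(default_factory=list)
    notices_sent: list = field(default_factory=list)
    appeals_open: list = field(default_factory=list)
    remediation_items: list = field(default_factory=list)  # pay, schedule, records

    def closable(self) -> bool:
        # The case closes on human outcomes: reviews done, appeals resolved,
        # remediations complete. "Agent restarted" is not a closure criterion.
        return (not self.decisions_to_replay
                and not self.manual_review_queue
                and not self.appeals_open
                and not self.remediation_items)
```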

This is not only risk management. It is a market opportunity.

The vendors that can show the full failure path will win more serious buyers. A CHRO can buy productivity. A CISO can allow rollout. Legal can live with the record. A CFO can see the hidden cost of supervision and recovery. Managers can understand when to trust the agent and when to stop it.

The vendors that cannot show the failure path will keep selling demos.

The Red Button and the Empty Queue

Return to the payroll run.

The clean version of the story is that someone clicks a red button and the agent stops.

The real version has more screens.

One screen shows the agent identity suspended. Another shows write access revoked but read access retained for investigation. Another shows pending remediation approvals frozen. Another shows the affected employees and business units. Another shows the variance logic, rule source, model version, prompt, tool calls, and reviewer actions. Another opens a manual payroll exception queue. Another assigns legal and HR review tasks. Another tracks corrections. Another records why the agent can return, under what narrower scope, and with which new test.

Only then is the queue empty for the right reason.

The agent did not need to be evil. It did not need to be hacked. It needed only to act confidently in the wrong context, close to a deadline, inside a workflow where humans were already moving fast.

That will be a normal HR AI failure mode.

The companies that handle it well will not be the ones with the most dramatic AI policy. They will be the ones with the dull operational controls: registry, identity, scope, quarantine, freeze, evidence, fallback, recovery, and appeal.

The future of HR AI will include digital workers. Digital workers need badges, supervisors, logs, and offboarding. They also need something human employees rarely need.

They need a way to be stopped before their work becomes someone else’s life.


Published April 30, 2026.