The Agent Quarantine Layer Is HR AI's First Minute of Control
The Unknown Agent Was Still Connected
The first sign was not a failed login.
It was a candidate export.
A security analyst reviewing OAuth grants saw a browser-based automation agent connected to a recruiting workspace. The agent had been created by a recruiter who left the company three weeks earlier. It was not in the official vendor list. It was not registered in the HR technology inventory. Its name looked harmless: “req-cleanup-helper.” It had access to a shared drive folder with interview notes, a spreadsheet of hard-to-fill roles, a calendar integration used for panel scheduling, and a sandbox copy of candidate profiles that still contained names, emails, locations, school fields, work histories, and recruiter comments.
No one could say whether the agent had made a decision.
That was the problem.
The recruiting operations lead said the agent had originally been used to clean duplicate candidate records. IT said the OAuth grant allowed broader read access than the original task required. Security saw recent network calls. Legal asked whether any rejected candidates had been touched. The hiring manager for one role remembered receiving an AI-drafted shortlist two weeks earlier, but did not know which assistant had generated it.
The agent was not obviously malicious. It was worse than that.
It was ownerless, over-scoped, and still connected.
The first question in the room was not whether to delete it. Deleting it could destroy evidence. It was not whether to write a new AI policy. The policy already said unauthorized AI tools were prohibited. It was not even whether to tell the vendor. There was no clear vendor.
The question was what to do in the first minute.
Can the company isolate the agent without erasing its logs? Can it revoke write and action permissions while retaining read-only forensic access? Can it freeze candidate outputs that may have been generated by the agent? Can it map which requisitions, candidates, files, APIs, and users were within reach? Can it prove the agent did not continue acting after containment? Can recruiting keep operating while the investigation continues?
This is the agent quarantine problem.
For the last month, the HR AI conversation has moved through a sequence: access drift, kill switch, rollback, decision evidence packet, audit room. Each layer answered a different failure mode. Access drift explained how a digital worker’s permissions expand beyond its original purpose. The kill switch explained how to stop an agent that is actively misbehaving. The evidence packet explained how to reconstruct a decision. The audit room explained how HR, IT, legal, security, procurement, and vendors must prove the same story.
Quarantine sits before all of them.
It is the moment between discovery and diagnosis. The company knows enough to mistrust the agent, but not enough to delete it. It must contain the risk, preserve the record, and keep the business from making more decisions on top of a possibly contaminated workflow.
In HR, that minute matters because digital workers do not only touch data.
They touch people.
Why Quarantine Moved Ahead of Recovery
The reason agent quarantine has become urgent is simple: enterprises are finding agents faster than they can govern them.
On April 21, 2026, the Cloud Security Alliance reported that 82% of surveyed organizations had unknown AI agents running in their IT environments. Sixty-five percent had experienced at least one AI agent-related incident in the prior 12 months. Among those incidents, 61% involved data exposure, 43% caused operational disruption, and 35% produced financial losses. Only 21% of respondents had a formal process for decommissioning agents.
The most revealing number was smaller. When an agent exceeded its scope, only 11% of organizations said the action would be automatically blocked. Thirty-eight percent required human approval. Twenty-four percent required logging.
That means most organizations are still built for review or recordkeeping after the fact. They are not yet built for immediate containment.
Microsoft’s May 1, 2026 move made the issue more concrete. Microsoft Agent 365 became generally available with a broader control-plane pitch: discover, observe, govern, and secure agents across Microsoft and partner environments. Microsoft emphasized shadow AI discovery, agents operating with their own credentials, local agents running on Windows devices, Intune policies that block unmanaged local agents, Defender runtime blocking for malicious coding-agent behavior, and registry synchronization with AWS Bedrock and Google Cloud connections.
This is not a side feature. It is the new administrative surface.
The Agent Registry documentation shows the same direction in product language. Administrators can see total agents, ownerless agents, unmanaged agents, high-severity risk counts, and actions such as block or delete. The risk column aggregates signals across Microsoft Entra, Defender, and Purview and routes administrators to mitigation steps when needed.
ServiceNow is moving from the workflow side. Its AI Control Tower release notes describe managed and unmanaged AI assets, AI models, AI systems, prompts, datasets, MCP servers, risk classification, security and privacy metrics, agent goal hijack, output with PII, high-risk output, MCP access monitoring, and lifecycle management for agentic AI systems. A few days earlier, ServiceNow and Google Cloud described a unified governed registry where AI agents and MCP servers across both platforms appear in a continuously updated view of what agents are running, what they access, and how they behave.
Workday is approaching the same problem through the system of record. Its Agent System of Record is now generally available. Workday says agent interactions are recorded and tracked, whether the agent acts on behalf of a user or as itself, with access backed by Workday’s security model. It also says third-party agents can be governed and that telemetry can be captured across the agent estate.
The identity vendors are arriving at the same conclusion. Okta for AI Agents treats agents as first-class identities, supports shadow agent discovery, assigns human owners, issues short-lived credentials, enforces least privilege, and provides governance workflows to revoke access when needed. Ping Identity’s April 28, 2026 research announcement framed the failure mode clearly: access grants permission, but it does not enforce control. Agents can combine individually legitimate permissions in unintended ways at runtime.
The HR evidence is no softer.
iCIMS and Aptitude Research reported on April 30, 2026 that 69% of companies use AI somewhere in talent acquisition, while only 18% use it broadly across hiring processes. Screening is the leading use case at 58%. Candidate communication follows at 54%. Nearly half of companies, 45%, lack a formal AI governance framework. Forty-six percent are using or planning to use agentic AI for talent acquisition.
The hiring workflow is already under pressure. Greenhouse’s 2026 benchmark analyzed more than 6,000 companies and 640 million applications from 2022 to 2025. Annual applications per recruiter rose 412%, from 146 to 746. Recruiters per organization fell 56%, from 10.43 to 4.62. Time to fill rose 37%, from 43.64 days to 59.67 days.
That is the operating environment where teams are adopting automation.
Recruiters are overloaded. Candidates are using AI. Hiring managers want faster shortlists. HR operations teams want agents to clean records, schedule interviews, answer policy questions, summarize notes, and move work across systems. The more agents help, the more they need access. The more access they have, the more quarantine becomes a real control, not a security metaphor.
The most important HR AI question in early May 2026 is not whether companies should recover from agent harm.
It is whether they can stop new harm from forming while they find out what happened.
Quarantine Is a State, Not a Shutdown
Deleting an agent is not quarantine.
Blocking an agent is not always quarantine either.
Quarantine is a temporary operating state that isolates a suspected agent, limits its authority, preserves evidence, and prevents dependent workflows from advancing blindly. It exists because a company often needs several hours or days to determine whether an unknown or over-scoped agent actually caused harm.
That distinction matters in HR. A company may discover a suspicious recruiting agent at 10 a.m. while interviews are scheduled for the afternoon. It may find a payroll agent with unexpected permissions two hours before a pay run. It may discover that an employee service agent used a policy source that was not approved for leave and accommodation questions. It may find a performance assistant connected to private manager notes during a calibration cycle.
In each case, the wrong response can create a second incident.
If the agent is deleted, logs may disappear. If every related workflow is stopped, payroll, hiring, and employee service may break. If nothing is stopped, the agent may keep influencing decisions. If only the vendor is notified, internal teams may not preserve the employment record. If only security acts, HR may not know which candidates or employees were affected.
A quarantine state has to be more precise.
| Control state | What it means | HR example |
|---|---|---|
| Active | Agent can run under approved scope | An interview scheduling agent books panels from approved candidate stages |
| Watch | Agent remains active but is monitored for a specific risk | A sourcing agent is flagged for unusual profile exports |
| Quarantined | Agent cannot execute new actions, but evidence is preserved | A recruiter-created assistant is blocked from candidate files while logs remain available |
| Output hold | Prior outputs are frozen from downstream use | Shortlists, rejection rationales, or performance summaries cannot advance until reviewed |
| Access downgrade | Write or action rights are revoked, but read-only forensic access remains | Payroll agent can no longer submit corrections, but investigators can inspect prior tool calls |
| Manual fallback | Human process replaces the agent path temporarily | Recruiters manually schedule interviews for affected requisitions |
| Decommissioned | Agent is retired after evidence and dependencies are resolved | Ownerless agent is removed after logs, tokens, outputs, and affected records are closed |
The difference between quarantine and decommissioning is timing.
Quarantine is what the company does when it does not yet know enough.
Decommissioning is what the company does after it knows enough to retire the agent without damaging the investigation or the business process.
This is why quarantine belongs between the kill switch and the audit room. A kill switch can stop an actively harmful agent. An audit room can review the full record. Quarantine keeps the middle from collapsing. It prevents new agent action, stops suspect outputs from moving downstream, and preserves enough context for the evidence packet that comes later.
The state must also be reversible under governance. If an agent is cleared, it may return with narrower permissions, a new owner, better logging, a patched connector, a lower risk tier, or a fresh impact assessment. If it is not cleared, it moves to decommissioning and remediation.
The product surface is not a red button.
It is a status model.
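As a sketch, the status model above can be expressed as a small state machine. The state names mirror the table; the transition set is an assumption for illustration, not any vendor's API. The point it encodes is the one the prose makes: quarantine is reversible under governance, decommissioning is terminal.

```python
from enum import Enum

class AgentState(Enum):
    ACTIVE = "active"
    WATCH = "watch"
    QUARANTINED = "quarantined"
    OUTPUT_HOLD = "output_hold"          # applies to work products, not the agent
    ACCESS_DOWNGRADE = "access_downgrade"
    MANUAL_FALLBACK = "manual_fallback"  # applies to the workflow, not the agent
    DECOMMISSIONED = "decommissioned"

# Governance-approved transitions (illustrative). Quarantine can be reversed;
# decommissioning cannot.
ALLOWED = {
    AgentState.ACTIVE: {AgentState.WATCH, AgentState.QUARANTINED},
    AgentState.WATCH: {AgentState.ACTIVE, AgentState.QUARANTINED},
    AgentState.QUARANTINED: {AgentState.ACTIVE, AgentState.ACCESS_DOWNGRADE,
                             AgentState.DECOMMISSIONED},
    AgentState.ACCESS_DOWNGRADE: {AgentState.ACTIVE, AgentState.DECOMMISSIONED},
    AgentState.DECOMMISSIONED: set(),
}

def transition(current: AgentState, target: AgentState) -> AgentState:
    """Refuse any state change the governance model does not allow."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

A control plane built this way cannot accidentally resurrect a decommissioned agent: the transition function fails closed.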
The First-Minute Playbook
An HR AI quarantine playbook should start before the incident.
The first minute cannot be invented during the first minute.
A workable playbook has eight steps.
| Minute-one action | The question it answers | Evidence to preserve |
|---|---|---|
| Identify the agent | What is this digital worker, and where is it running? | Agent ID, owner, creator, platform, runtime instance, device or cloud location |
| Classify the workflow | Which HR process could it affect? | Recruiting, payroll, performance, scheduling, employee service, learning, mobility |
| Contain authority | What can it still do right now? | Tokens, OAuth grants, service accounts, API scopes, tool permissions, delegated user rights |
| Freeze suspect outputs | Which decisions could keep moving? | Shortlists, rankings, summaries, messages, pay corrections, schedules, reviews |
| Preserve evidence | What must not be altered or deleted? | Prompts, model version, connector calls, logs, data sources, reviewer actions, workflow state |
| Map blast radius | Which candidates, employees, managers, vendors, and systems were within reach? | Affected records, file access, API calls, requisitions, employee groups, time window |
| Route fallback | How does HR keep operating without the agent? | Manual queues, approvers, exception paths, communication templates |
| Assign ownership | Who can clear, narrow, retire, or escalate the agent? | HR owner, security owner, legal owner, vendor contact, business approver |
Each step is simple on paper. Each is difficult in a real HR stack.
The agent identity may not be clean. It may be a custom script, a Copilot agent, a browser extension, an OAuth-connected SaaS agent, a workflow automation built by an operations analyst, an MCP client, a vendor feature, or a local coding agent that can read files and call APIs. It may act under its own identity. It may act on behalf of a human. It may switch between both.
The workflow may not be labeled as high risk. A “candidate communication” agent may draft rejection notices. A “data cleanup” agent may change candidate profiles. A “manager assistant” may summarize performance notes. A “policy helper” may answer leave questions. A “shift optimizer” may shape who receives hours.
Containment also has to be scoped. Blocking all access may interrupt business-critical workflows. Leaving read access open may expose sensitive records. Removing all tokens may make it impossible to reconstruct the agent’s actual reach. The better control is a tiered containment action: suspend execution, revoke write and action rights, freeze credential renewal, retain runtime evidence, and keep only the forensic path open to authorized investigators.
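A minimal sketch of that tiered containment action, assuming dictionary-shaped agent and grant records; the field names and scope prefixes are hypothetical, not a real platform API. Execution stops, credentials stop renewing, write and action scopes are revoked, and only named investigators keep a read-only path.

```python
def contain_agent(agent: dict, grants: list, investigators: list) -> dict:
    """Tiered containment sketch: stop new actions, keep the forensic path open."""
    agent["execution_suspended"] = True        # no new runs or tool calls
    agent["credential_renewal_frozen"] = True  # existing tokens expire, none reissued
    for grant in grants:
        # Revoke write/action scopes; retain read-only scopes for investigators.
        grant["scopes"] = [s for s in grant["scopes"] if s.startswith("read:")]
    agent["forensic_access"] = sorted(investigators)  # named reviewers only
    return agent
```

The design choice worth noting is what the function does not do: it never deletes the agent record or its grants, because those are evidence.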
The hardest part is output freeze.
An agent can be stopped while its prior work keeps moving. A candidate shortlist may already sit in a hiring manager’s inbox. A generated rejection rationale may already be attached to an ATS record. A payroll correction may wait in an approval queue. A performance summary may already be part of a calibration packet. An employee service answer may have been sent through Teams.
Quarantine must therefore apply not only to the agent, but to the work products that came from it.
That is a different design requirement. HR systems need provenance tags that can mark outputs as agent-generated, agent-assisted, human-edited, or under review. Workflow systems need hold states that prevent downstream action until a human clears the item. Case management systems need quarantine cases with evidence links. Communication systems need recall or correction paths where appropriate.
This is where generic AI governance fails.
It can say “review AI outputs.”
The first-minute playbook has to say which outputs stop moving now.
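A sketch of what "which outputs stop moving now" looks like as data, assuming a provenance-tagged work product record; the class and field names are illustrative. Every artifact the suspect agent touched is placed on hold until a human clears it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkProduct:
    artifact_id: str
    kind: str                 # "shortlist", "rejection_rationale", "pay_correction", ...
    provenance: str           # "agent_generated" | "agent_assisted" | "human_edited"
    source_agent: Optional[str] = None
    on_hold: bool = False

def freeze_outputs(items: list, suspect_agent: str) -> list:
    """Hold every artifact the suspect agent produced; a human must clear each one."""
    frozen = []
    for item in items:
        if item.source_agent == suspect_agent:
            item.on_hold = True
            frozen.append(item)
    return frozen
```

Usage: run the freeze, then hand the returned list to the manual review queue described in the playbook.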
Identity Becomes the Control Surface
Agent quarantine is ultimately an identity problem.
The company can write policies about AI behavior. It can run model evaluations. It can require vendor attestations. Those controls matter. But in the first minute, the organization needs to answer operational questions: who or what is acting, under whose authority, with which permissions, against which systems, and through which path?
Classic IAM was not built for that level of agent behavior.
The March 2026 CoSAI paper on Agentic Identity and Access Management describes the shift directly. AI agents are autonomous, composable, short-lived, and able to act across sensitive data and APIs. The paper argues for first-class agent identities, short-lived and context-bounded entitlements, visible delegation chains, code and model binding for higher assurance, enforcement at every hop, and immutable logs that can reconstruct which agents existed, what they were allowed to do, and what actions they performed.
That is the technical foundation for quarantine.
If an agent has no identity, quarantine becomes guesswork. If an agent shares a human account, the company may have to suspend the human to stop the software. If an agent uses long-lived tokens, revocation becomes messy. If permissions are inherited through a chain of sub-agents, the organization may stop one node while another keeps acting. If logs only show final API calls, investigators cannot tell whether the agent, the human, or another automation path caused the event.
The new identity pattern has five requirements.
First, each agent needs a distinct identity. A recruiting assistant should not disappear inside a recruiter’s account. A payroll variance agent should not look like a generic service account. A performance summary agent should not be indistinguishable from a manager’s copilot session.
Second, each agent needs a human owner. Ownerless agents are not just administrative clutter. They create accountability gaps. Microsoft now surfaces agents without owners in the Agent Registry. Okta’s product language makes human ownership part of agent registration. Workday frames agents as part of a blended workforce that needs visibility and accountability.
Third, permissions must be purpose-bound and time-bound. An agent created to clean duplicate candidate records should not keep broad access to interview notes after the cleanup project ends. A policy agent should not retain access to payroll correction tools. A manager assistant should not read employee relations files unless the use case is approved and logged.
Fourth, delegation must be visible. If an agent acts on behalf of a recruiter, the record should show the recruiter, the agent, the approved scope, and the downstream tool call. If a sub-agent is spawned, the chain must remain traceable. Ping’s warning about delegation opacity and sub-agent spawning is not theoretical. It is exactly the pattern that breaks HR accountability when an output becomes an employment record.
Fifth, revocation must be fast and specific. A serious quarantine layer should allow the company to suspend an agent, revoke one connector, block one action type, freeze one workflow, disable one token family, or downgrade permissions from action to read-only. It should not force an all-or-nothing shutdown unless the risk requires it.
This is why identity vendors, HCM platforms, workflow platforms, and security platforms are converging.
Agent security is not only about defending endpoints. It is about proving authority over digital workers.
HR will feel this earlier than many other functions because people data is dense. Candidate files, pay records, performance feedback, leave information, accommodation notes, manager comments, skills data, scheduling history, and employee relations records are all high-context data. An agent does not need administrator access to cause harm. It may only need enough context to produce a confident but wrong recommendation.
The quarantine layer starts with a badge.
No badge, no control.
The HR Workflow Problem
Security teams often think in assets, identities, tokens, vulnerabilities, and blast radius.
HR thinks in decisions.
That difference is why agent quarantine cannot stay inside security tooling.
A security team may decide that an agent is contained when its token is revoked. HR’s problem may just be starting. Which candidate records did it read? Which recommendations did it write? Which managers saw its summaries? Which employees relied on its policy answers? Which pay corrections, schedules, reviews, or mobility suggestions were shaped by it? Which vendor system still stores the output? Which messages already left the company?
Quarantine has to follow the employment workflow.
In recruiting, the quarantine object may be a requisition, a candidate stage, a shortlist, an assessment summary, a rejection reason, an interview note, or a candidate communication. If an unknown agent touched a high-volume role, the affected population may be hundreds or thousands of applicants. If it touched executive search, the population may be small but sensitive.
In performance management, the quarantine object may be a generated review draft, a calibration packet, a peer-feedback summary, a skills inference, a promotion recommendation, or a risk flag shown to a manager. The harm may not be visible as a system action. It may be an impression carried into a meeting.
In payroll, the quarantine object may be a variance recommendation, a retroactive pay correction, an overtime code, a tax field, a location rule, or an approval record. The business cannot simply wait forever. People need to be paid.
In scheduling, the quarantine object may be a shift allocation, labor-law exception, availability conflict, overtime forecast, or call-off recommendation. Employees may already have arranged childcare, transport, or second jobs around the schedule.
In employee service, the quarantine object may be a policy answer. That sounds low risk until the answer affects leave, benefits, accommodation, discipline, pay, or complaint escalation.
This is why the quarantine layer needs three sublayers:
| Sublayer | What it contains | Example |
|---|---|---|
| Access quarantine | Agent execution, credentials, connectors, scopes, tool calls | Suspend an agent’s ability to write to the ATS |
| Output quarantine | Agent-generated artifacts and downstream workflow items | Hold shortlists, summaries, messages, pay changes, schedules |
| Record quarantine | Evidence and employment records preserved for review | Lock logs, prompts, source data, reviewer actions, version context |
Access quarantine stops the agent.
Output quarantine stops its work from becoming more consequential.
Record quarantine prevents the organization from losing the story.
Most companies will build the first layer before the second. That is understandable. Access controls live in existing security and identity systems. Output controls require product changes inside ATS, HCM, payroll, scheduling, performance, employee service, messaging, and workflow tools. Record controls require retention and legal hold rules across vendor and internal systems.
But HR AI risk lives in the second and third layers.
If the agent is stopped but the candidate rejection still sends, the quarantine failed. If the agent is blocked but the performance summary remains in the calibration deck without a warning, the quarantine failed. If the agent is deleted and logs vanish, the quarantine failed. If HR cannot identify affected people, the quarantine failed.
The first minute is technical.
The next hour is operational.
Who Owns the Quarantine Button
The quarantine button cannot belong to one function.
If security owns it alone, the response may stop the agent but miss the employment consequence. If HR owns it alone, the response may protect workflow continuity but miss identity exposure. If legal owns it alone, the response may preserve evidence but move too slowly. If IT owns it alone, the response may become a ticket queue. If procurement owns it alone, the vendor may answer contract questions while the agent keeps acting.
The ownership model should be explicit before rollout.
| Function | Quarantine responsibility | Decision rights |
|---|---|---|
| HR process owner | Identifies employment workflow impact and affected populations | Freeze or release HR workflow outputs |
| Security | Assesses agent behavior, exposure, threat signals, and containment | Suspend execution, block runtime behavior, open incident response |
| IT / IAM | Revokes credentials, changes scopes, controls devices and integrations | Downgrade, revoke, expire, or reissue access |
| Legal / privacy | Preserves evidence, applies legal hold, guides notice and rights duties | Decide retention, disclosure, and investigation boundaries |
| Procurement / vendor management | Activates vendor obligations and evidence requests | Require vendor logs, documentation, support, and attestations |
| Business manager | Provides operational context and manual fallback capacity | Continue, pause, or re-run business process under human review |
| AI governance owner | Coordinates risk tier, control testing, and return-to-service criteria | Clear agent for restricted return or require decommissioning |
This looks heavy. It has to be.
HR AI agents touch workflows where mistakes become claims, complaints, missed pay, lost opportunities, unfair schedules, or damaged trust. A quarantine decision can be urgent and legally meaningful at the same time.
The practical answer is not to convene seven executives every time an agent is suspicious. It is to define risk tiers.
Low-risk agents can have automated containment and routine review. Medium-risk agents can trigger HR operations and security review within a defined service level. High-risk agents touching hiring decisions, pay, performance, promotion, discipline, scheduling, leave, accommodation, or termination should trigger cross-functional quarantine authority immediately.
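The tiering rule above can be sketched as a routing function; the workflow list and response names are taken from the text, but the function itself is an illustrative assumption, not a product feature. Note that a high-risk workflow escalates regardless of the agent's own tier.

```python
# Workflows treated as high risk regardless of the agent's own tier.
HIGH_RISK_WORKFLOWS = {
    "hiring", "pay", "performance", "promotion", "discipline",
    "scheduling", "leave", "accommodation", "termination",
}

def quarantine_route(risk_tier: str, workflow: str) -> str:
    """Route a suspicious agent to a response path (names are illustrative)."""
    if workflow in HIGH_RISK_WORKFLOWS or risk_tier == "high":
        return "cross-functional quarantine, immediately"
    if risk_tier == "medium":
        return "HR operations and security review within SLA"
    return "automated containment with routine review"
```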
Return-to-service criteria should also be predefined. An agent should not leave quarantine because a manager says the workflow is inconvenient. It should leave quarantine because the team has answered specific questions:
- Was the agent authorized for the workflow?
- Was the owner current and accountable?
- Were permissions consistent with purpose?
- Were all connected tools known?
- Were prior outputs reviewed or cleared?
- Were affected people identified?
- Were logs preserved?
- Was any vendor issue resolved?
- Were new controls tested?
- Is the return limited, monitored, and documented?
Those questions turn quarantine from an emergency reaction into an operating discipline.
What Buyers Should Ask Vendors
The quarantine layer will become a procurement test.
HR buyers should stop asking only whether a vendor has responsible AI principles, model documentation, bias audits, or human-in-the-loop design. Those matter, but they do not answer the first-minute question.
The first-minute questions are more concrete:
| Buyer question | Why it matters |
|---|---|
| Can every agent and agent-assisted feature be inventoried with owner, purpose, risk tier, version, and connected tools? | Unknown agents cannot be quarantined reliably |
| Can the customer distinguish human action, delegated agent action, and autonomous agent action? | Attribution drives evidence and accountability |
| Can the customer suspend an agent without deleting logs? | Containment should not destroy the investigation |
| Can write/action permissions be revoked while preserving forensic read access? | HR may need evidence without allowing further workflow changes |
| Can generated outputs be placed on hold across ATS, HCM, payroll, scheduling, messaging, and case systems? | Stopping the agent is not enough if its work keeps moving |
| Can the vendor export prompts, model version, tool calls, source data references, reviewer actions, and downstream workflow state? | The audit room needs decision-level evidence |
| Can the system map affected candidates, employees, managers, roles, requisitions, and time windows? | Blast radius in HR is people-centered |
| Can the customer define automatic blocking conditions for scope-exceeded actions? | CSA data suggests most organizations still do not auto-block |
| Can vendor-side model, prompt, connector, or feature changes trigger re-review? | Quarantine may be caused by changes outside the customer’s tenant |
| Can decommissioning prove tokens, connectors, embedded files, prompts, and downstream outputs are closed? | Retirement debt is a long-term risk |
These questions will separate three kinds of vendors.
The first kind sells automation but treats containment as an admin setting. Those products may work in pilots, but they will struggle in regulated employment workflows.
The second kind sells governance dashboards but cannot freeze workflow outputs. Those products will help the audit committee and still leave HR exposed when a bad recommendation moves downstream.
The third kind connects identity, workflow, evidence, and recovery. Those vendors will be more expensive to implement, but they will be easier for CHROs, CISOs, legal teams, and procurement to defend.
Microsoft, ServiceNow, Workday, Okta, and Ping are not building the same product. Their center of gravity differs: productivity suite, workflow platform, HCM system of record, identity layer, runtime authorization. But their current direction points to the same buyer expectation. Agents need to be visible, owned, scoped, monitored, paused, blocked, logged, reviewed, and retired.
That expectation will reach every HR tech vendor.
An ATS with AI screening will need output hold and decision records. A payroll platform with AI remediation will need action quarantine and manual fallback. A performance platform with generated reviews will need provenance tags and calibration holds. A scheduling tool with optimization agents will need shift-impact review. An employee service product will need policy-answer correction and case escalation.
The next procurement worksheet will not ask “Do you use AI?”
It will ask “How do we quarantine it?”
The Queue After Quarantine
Return to the unknown recruiting agent.
The clean ending would be simple. Security blocks it. IT revokes the token. HR declares the matter closed.
That is not how the queue ends.
The first screen shows the agent in quarantine: execution disabled, write permissions revoked, evidence preserved. The second screen shows the OAuth grants that created the problem. The third shows candidate files and requisitions reached during the suspicious time window. The fourth shows generated artifacts that cannot move downstream until reviewed. The fifth opens a manual review queue for shortlists and rejections. The sixth assigns legal hold. The seventh asks a manager to confirm whether a shortlist was used. The eighth schedules vendor and internal post-incident review.
The agent is contained.
The work is not finished.
That is the reality of HR AI. Digital workers will become part of recruiting, payroll, performance, employee service, scheduling, learning, and mobility. Some will be official. Some will be improvised. Some will be abandoned by employees who leave. Some will use credentials that outlive their purpose. Some will act correctly until a policy changes, a prompt is injected, a connector exposes more than intended, or a human grants a scope that looked reasonable at the time.
The winning organizations will not be the ones that pretend this can be avoided entirely.
They will be the ones that can discover the agent, isolate it, preserve the record, freeze its outputs, keep HR operating, and prove the system did not continue acting after the first minute.
The future HR AI stack needs a quarantine layer for the same reason offices need locked doors and badge readers. Most days, nothing happens. The control still matters.
One day an unknown digital worker will appear in the registry with access to people data and no living owner.
Someone will ask whether it is still running.
The answer has to be more than a meeting.
This article provides a deep analysis of agent quarantine, digital worker identity, and HR AI control planes. Published May 3, 2026.