The Exit Meeting

The renewal meeting was supposed to be short.

The employer had already decided to leave. The recruiting AI pilot had helped with scheduling, candidate summaries, and interview packet drafts, but the product had not made the final buying round. A larger HCM vendor had bundled similar features into a broader contract. Procurement wanted a clean exit. Legal wanted the data returned. Recruiting operations wanted the open jobs migrated without breaking the pipeline.

Then employee relations asked a harder question.

Six months earlier, a candidate had disputed an AI-assisted rejection. The company had reopened the file, corrected the record, and kept the person in process. The issue looked closed. But the vendor still held the original model output, the prompt template, the ranking configuration, the connector trace, the reviewer log, and the support ticket that showed when the disputed output was corrected.

The employer asked for the evidence package.

The vendor could export candidate records. It could export reports. It could provide a PDF summary of the incident. It could not provide a portable, signed, machine-readable evidence file that the employer could keep after termination and use later in an audit, a regulator inquiry, a discovery request, or a candidate appeal.

That was the real lock-in.

The buyer was not trapped because it could not move resumes. It was trapped because the evidence of how employment-impacting AI outputs were produced, corrected, reviewed, and retained remained inside the vendor’s operating environment.

That is where HR AI procurement is headed next.

The last month of HR AI governance has built a control vocabulary: recovery SLAs, vendor remediation warranties, evidence escrow, decision recall, correction propagation, subprocessor chain of custody, and runtime tool approval ledgers. Each answered a necessary question. Can the employer respond when an agent causes harm? Can it make the vendor help? Can it prove the evidence existed before the dispute? Can it find downstream copies? Can it show which subprocessor, model, tool, identity, and approval path touched the output?

Evidence portability asks the next question.

Can the employer take the evidence with it?

If the answer is no, HR AI governance becomes dependent on the vendor relationship staying healthy. That is a fragile place to put employment records. Vendors get acquired. Products are sunset. Contracts are terminated. Customers migrate. Support tiers change. Retention periods expire. Litigation arrives after the people who managed the system have left. A regulator asks for an explanation two years after the workflow was replaced.

In ordinary SaaS, exit rights focus on customer data. In HR AI, the exit right must include decision evidence.

That evidence is not a static archive. It is a chain of prompts, configurations, model routes, retrieval sources, tool calls, human review actions, correction receipts, downstream propagation records, vendor support artifacts, and retention metadata. It is the material that lets an employer answer the most basic question after the system is gone:

What happened to this person, and can we prove it?

Why Portability Became the Next Contract Fight

HR teams are adopting AI before their governance machinery is ready.

On April 30, 2026, iCIMS and Aptitude Research released survey data from more than 400 U.S. talent acquisition leaders and practitioners. Sixty-nine percent of companies said they were already using AI in some capacity in talent acquisition. Only 18% said they were using it broadly across hiring processes. Candidates were moving faster: 74% of companies said candidates were using AI in the job search.

The use cases were not peripheral. Screening led at 58%, followed by candidate communication at 54%, assessments at 50%, and sourcing at 46%. Nearly half, 46%, said they were using or planning to use agentic AI in talent acquisition.

That combination creates an evidence problem. Screening creates explanations. Candidate communication creates messages. Assessments create scores. Sourcing creates lists. Agentic workflows create actions across systems. Each artifact may become relevant if a candidate, employee, manager, auditor, or regulator later challenges the process.

The operating pressure is just as important as the adoption curve. Greenhouse’s 2026 benchmark report, based on more than 6,000 companies and more than 640 million applications from 2022 to 2025, found that annual applications per recruiter rose 412%, from 146 to 746. Applications per job rose 111%, from 116 to 244. Recruiters per organization fell 56%, from 10.43 to 4.62. Time to fill rose from 43.64 days to 59.67 days.

This is the environment in which AI records spread. Recruiters do not wait. Managers download packets. Interviewers copy summaries into notes. Scheduling tools move candidates through panels. HR service agents generate answers. People analytics teams snapshot funnel data. Vendors retain telemetry. A model output becomes part of the company’s working memory before anyone has decided how long that memory must live or how it will survive vendor exit.

SHRM’s 2026 HR AI research shows the governance gap from inside the function. SHRM surveyed 1,908 HR professionals in December 2025 and found that 39% had AI adopted in HR functions, with another 7% intending to launch AI in HR during the year. Across organizations, 62% were using AI somewhere. More than half of HR professionals, 56%, said they did not formally measure the success of AI investments at all.

That matters because portability is measurement under stress. A company that cannot measure its AI use while the vendor is live will struggle to prove decisions after the vendor is gone.

Grant Thornton gives the broader enterprise version of the same problem. In its 2026 AI Impact Survey, based on 950 C-suite and senior business leaders, 78% lacked strong confidence that they could pass an independent AI governance audit within 90 days. The report describes organizations scaling systems they cannot explain, measure, or defend.

HR cannot treat that as a generic AI problem. Employment records have long tails. A hiring decision, pay decision, performance review, promotion, schedule assignment, leave decision, or employee relations record can be challenged long after the system that produced the first output has changed.

The buyer’s problem is not only whether the vendor can produce evidence during an active subscription. The problem is whether the evidence remains usable when the commercial relationship ends.

In traditional HR software, portability meant moving structured records: candidate profiles, requisitions, employee records, payroll files, performance documents, learning completions, case records, and reports. That work was painful but familiar. The data had schemas. Migration teams could map fields. Contracts could specify export formats. Archives could hold PDFs or CSVs.

AI evidence is stranger.

It includes the runtime context that gave a record meaning. A candidate summary is not enough without the input sources, prompt version, model route, retrieval materials, tool approvals, human reviewer action, and correction history. A performance recommendation is not enough without the policy documents retrieved, manager notes used, calibration settings applied, and downstream packet sent. A payroll anomaly recommendation is not enough without the timekeeping data, exception rule, model output, analyst override, and correction receipt.

The old export file tells the new system what the record says.

The evidence file tells the company why the record exists.

What Must Move

The simplest mistake is to treat evidence portability as “download the logs.”

Logs are raw material. They are not always complete, readable, signed, linked to employment context, or useful to a non-engineering reviewer. A useful portability package must preserve the chain from business event to AI output to human action to downstream record.

For HR AI, the portable evidence set should include at least eight layers.

Evidence layer | What must be portable | Why it matters after exit
Business event | Candidate, employee, job, case, payroll, performance, or scheduling event ID | Ties technical evidence to the employment action
Input record | Source fields, attachments, retrieved documents, policies, profile data, and timestamps | Shows what the system saw at the time
Configuration state | Prompt template, workflow rule, ranking setting, feature flag, model route, fallback rule | Shows how the output was shaped
Model and tool trace | Model provider, model version, MCP server, tool call, token audience, identity, and subprocessor path | Shows which systems touched the output
Human review | Reviewer identity, role, decision, override, comments, elapsed time, approval scope | Shows whether oversight was meaningful
Output record | Generated summary, score, recommendation, email, packet, case answer, or queue action | Preserves the disputed artifact
Correction and recall | Dispute flag, hold, corrected record, downstream receipts, manager acknowledgement | Shows how the company repaired the error
Retention and integrity | Hash, signature, export timestamp, custody owner, schema version, retention period | Makes the evidence usable outside the vendor
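One way to make the eight layers concrete is a machine-readable record that carries all of them together. The sketch below is illustrative only — the field names, IDs, and values are hypothetical, not any vendor's schema — but it shows the basic completeness check a buyer would want: no layer may be absent from a portable record.

```python
# Illustrative sketch: one portable evidence record spanning the eight
# layers above. All field names and values are hypothetical examples,
# not a standard or a vendor format.
REQUIRED_LAYERS = [
    "business_event", "input_record", "configuration_state",
    "model_and_tool_trace", "human_review", "output_record",
    "correction_and_recall", "retention_and_integrity",
]

record = {
    "business_event": {"type": "candidate_screening", "candidate_id": "C-1042", "job_id": "R-2210"},
    "input_record": {"sources": ["resume.pdf", "application_form"], "captured_at": "2026-01-14T10:22:00Z"},
    "configuration_state": {"prompt_template": "screening-v7", "ranking_setting": "skills-weighted"},
    "model_and_tool_trace": {"provider": "example-llm", "model_version": "2025-12", "tool_calls": []},
    "human_review": {"reviewer": "recruiter-88", "decision": "advance", "elapsed_seconds": 340},
    "output_record": {"kind": "summary", "disputed": True},
    "correction_and_recall": {"dispute_flag": True, "corrected_at": "2026-02-02T09:00:00Z"},
    "retention_and_integrity": {"schema_version": "1.0", "retention_years": 4},
}

def missing_layers(rec: dict) -> list[str]:
    """Return any required evidence layers absent from a record."""
    return [layer for layer in REQUIRED_LAYERS if layer not in rec]

assert missing_layers(record) == []
```

A record that fails this check is not evidence in the sense this article uses the word; it is a fragment waiting for context that may no longer exist.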

This list is larger than most vendor exports because AI decisions are composite events. They are not just database rows. They are business decisions assembled by models, tools, policies, integrations, and humans.

Portability also has to preserve relationships. A model output without the prompt template is weak. A prompt template without the retrieved policy is incomplete. A tool call without the approving identity is risky. A reviewer note without elapsed time does not show whether the person had a real chance to intervene. A correction without downstream receipts does not prove the old output stopped working.

The export has to be both human-readable and machine-readable. Legal may need a readable packet for a dispute. Security may need event logs. HR operations may need a case timeline. People analytics may need to backfill metrics. A new vendor may need to import open appeal states. An auditor may need integrity proof. A regulator may ask for the main elements of a decision in plain language.

One file will not satisfy all of those jobs.

A serious portability package would look more like a small evidence room:

  • A case timeline in plain language.
  • A structured event file in JSON or another documented schema.
  • Original and corrected outputs.
  • Source data references and retention status.
  • Model, prompt, workflow, and tool metadata.
  • Human review and approval records.
  • Downstream propagation receipts.
  • Vendor support and remediation records.
  • Hashes or signatures that show the package was not edited after export.
  • A data dictionary that lets another system interpret the fields later.

The last item matters more than buyers think. Without a data dictionary, evidence becomes a museum object. People can store it, but they cannot use it.

HR already learned this lesson from old ATS migrations. Companies still hold archives of requisitions, applicants, email templates, and disposition codes that nobody fully understands because the fields were exported without enough context. AI will make that failure more expensive. A cryptic model trace is not evidence if no one can connect it to a decision.
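The hashes and signatures in the evidence-room list are the piece most often hand-waved, so here is a minimal sketch of how the integrity layer works in practice. A manifest of SHA-256 digests, computed at export time, lets anyone later confirm the package was not edited. File names and manifest layout are illustrative; a real package would also sign the manifest itself.

```python
# Minimal sketch of the integrity layer: a hash manifest computed at
# export time detects any later edits to the evidence package.
import hashlib
import json

def build_manifest(files: dict[str, bytes]) -> dict:
    """Map each package file to its SHA-256 digest."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files: dict[str, bytes], manifest: dict) -> list[str]:
    """Return the names of files that changed (or vanished) since export."""
    return [name for name, digest in manifest.items()
            if hashlib.sha256(files.get(name, b"")).hexdigest() != digest]

package = {
    "timeline.txt": b"2026-01-14 screening output generated\n",
    "events.json": json.dumps({"schema_version": "1.0"}).encode(),
}
manifest = build_manifest(package)

assert verify(package, manifest) == []                 # untouched package verifies
package["timeline.txt"] += b"edited later\n"
assert verify(package, manifest) == ["timeline.txt"]   # tampering is detected
```

The point of the sketch is the asymmetry: the manifest is cheap to produce at export time and nearly impossible to reconstruct honestly afterward, which is exactly what a legal hold needs.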

Regulation Is Making Evidence Long-Lived

The law is not yet using the phrase “AI evidence portability.” It is building the pressure around it.

The EU AI Act starts with traceability. Article 12 says high-risk AI systems must technically allow automatic recording of events over the lifetime of the system, with logging capabilities appropriate to the intended purpose. Article 86 gives affected persons a right to obtain clear and meaningful explanations of the role of certain high-risk AI systems in decisions that produce legal or similarly significant effects.

Employment and worker management are part of the high-risk map under Annex III. That pulls recruitment, candidate evaluation, worker management, task allocation, performance, and related employment decisions into a stricter evidence regime.

The practical implication is simple. If an employer has to explain an AI-assisted employment decision after migrating away from the system that produced it, the employer still needs the evidence. A vendor portal that disappears at termination does not satisfy that operational need.

California adds retention pressure. The California Civil Rights Council’s final employment automated-decision system regulations require employers and covered entities to preserve employment records for four years, and the final text includes automated-decision system data within the record set. The rule does not solve export format. It makes the absence of export format harder to defend.

The EU Data Act adds a broader software-market signal. It has applied since September 12, 2025, and the European Commission says it enables cloud users to switch between cloud providers or use several providers in parallel. The Data Act explainer frames Chapter VI around switching between data processing services. The official Data Act text defines exportable data for switching purposes as input and output data, including metadata, directly or indirectly generated or co-generated by the customer’s use of a data processing service, with certain exclusions.

That is not an HR AI rule. It is still a sign of the regulatory direction. Switching rights are moving beyond raw customer files. They are starting to include the metadata generated through use of a service.

For HR AI, metadata is not incidental. It may be the evidence.

NIST’s Generative AI Profile gives procurement teams a contract vocabulary. It tells organizations to document third-party GAI incidents, establish third-party GAI incident response plans, define ownership, rehearse response plans, monitor third-party systems, address redundancy for model weights and system artifacts, and review vendor contracts for incident response, liability, serious incident notification, response times, and critical support.

Those actions imply an evidence lifecycle. A company cannot document third-party incidents, define ownership, rehearse response, or preserve redundancy if the evidence sits only in the vendor’s current production console.

Portability is the missing verb.

The vendor must not only create evidence. It must release it in a form that survives renewal failure, migration, investigation, bankruptcy, acquisition, product sunset, or customer termination.

That is not a compliance nicety. It is the difference between having a right on paper and having proof when the system is gone.

Control Planes Can See More Than They Can Release

The platform race is moving quickly toward agent visibility.

Microsoft made Agent 365 generally available on May 1, 2026. Starting in June 2026, Microsoft Defender will provide context mapping for each agent, including devices, configured MCP servers, associated identities, and reachable cloud resources. Its Bring Your Own MCP server preview routes registered MCP servers through the Agent 365 Tooling Gateway so IT admins can review server details and declared tools, approve or reject requests, grant Entra permissions, and monitor tool invocations.

That is close to the trace HR buyers need. It can show where an agent ran, what tools were available, what identities were involved, and what resources could be reached.

Workday starts from a different place. Its Agent System of Record is now generally available, and more than 65 global partners are connecting their agents to Workday’s ASOR. Workday says ASOR and Agent Gateway support MCP, A2A interactions, and OpenTelemetry so agents can work across systems while customers keep visibility into metrics in one place.

For HR, Workday has an obvious advantage: many employment records already live there. If an agent touches worker data, skills, compensation, performance, onboarding, payroll, finance, or job architecture, Workday can become a natural place to record the event.

ServiceNow approaches the problem through action. At Knowledge 2026, ServiceNow expanded AI Control Tower across discover, observe, govern, secure, and measure, with 30 new enterprise integrations and runtime controls for agentic workloads. It also opened Action Fabric, whose generally available MCP Server spans IT, HR, customer service, security, risk and compliance, and app development. ServiceNow says actions run through AI Control Tower with identity verification, permission scoping, audit trails, session management, and role-based tool packages.

These products are important because they make the invisible execution graph visible.

Visibility is not portability.

A control plane can show a trace and still make it hard to export a legally useful evidence package. A vendor can offer dashboards, search, alerts, and audit logs while limiting retention, redacting fields, withholding model internals, using a proprietary schema, restricting API access, charging high support fees, or refusing to export after termination. A platform can govern an action in production but fail to produce a standalone case file two years later.

That gap will become a buying issue.

Procurement will ask one set of questions:

  • Can we export all AI decision evidence at termination?
  • In what format?
  • How long after termination will the export be available?
  • Does the export include prompts, configurations, model routes, tool traces, reviewer logs, and correction receipts?
  • Is it signed or hash-verifiable?
  • Can our new vendor ingest it?
  • Can we preserve it under legal hold?
  • What support SLA applies if a regulator asks for explanation after the contract ends?

Security will ask another:

  • Does the export preserve identity and token context?
  • Are service accounts mapped to human sponsors?
  • Are MCP server calls tied to approved tool definitions?
  • Are downstream systems and cloud resources included?
  • Can the evidence be searched without rehydrating the old vendor environment?

HR will ask the simplest question:

Can we still answer the employee or candidate?

The answer cannot depend on which vendor won the next renewal.

The Portability Exhibit

Evidence portability needs to become a contract exhibit, not a support courtesy.

The exhibit should define what moves, when it moves, how it moves, who pays, who validates it, and what happens after termination. Generic language about “customer data export” is not enough. AI evidence contains derived outputs, metadata, system artifacts, approval records, and vendor-created operational records that vendors may not treat as customer data by default.

The exhibit should start with scope.

For employment-impacting workflows, the export should cover candidate screening, candidate communication, assessments, sourcing, interview scheduling, offer workflows, onboarding, employee service, payroll support, workforce management, leave, accommodation, performance, promotion, internal mobility, learning recommendations, employee relations, and any analytics output used to affect an employment benefit.

Then it should define artifact classes:

Artifact class | Required portability term
Input and output records | Export original, corrected, and superseded versions with timestamps
Prompt and workflow configuration | Export versioned templates, feature flags, routing rules, and policy settings used for each decision
Model metadata | Export provider, model name, version or release identifier, region, fallback route, and material update history
Tool and connector trace | Export MCP server, tool schema, invoked tool, permission scope, identity, token audience, returned data class, and error state
Human review | Export reviewer identity, role, decision, comment, override, elapsed time, and approval basis
Chain of custody | Export subprocessors, downstream systems, retention boundary, and evidence owner for each handoff
Correction record | Export dispute status, hold, recall, propagation receipts, manager or employee acknowledgements, and unresolved stale copies
Integrity record | Export hash, signature, schema version, export manifest, and validation report

The exhibit should also define timing.

There are at least four moments when portability matters:

  • Routine periodic export for internal evidence escrow.
  • Incident export after a dispute, audit request, regulatory inquiry, or litigation hold.
  • Pre-termination export before a vendor relationship ends.
  • Post-termination assistance when a later inquiry reaches back into historical AI use.

Each needs a clock. Routine export may run monthly or quarterly. Incident export may need to arrive within 24 to 72 hours. Pre-termination export should arrive before system access is cut off. Post-termination assistance may need to survive for the record retention period, not merely the support wind-down period.

The exhibit should define acceptance criteria. A vendor should not be able to satisfy the clause by dropping a CSV into a portal. The employer should be able to validate that the package covers the required time range, contains the required artifact classes, opens without proprietary tooling, matches a published schema, and includes integrity checks.
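Those acceptance criteria are mechanical enough to automate. The sketch below is a hedged illustration of what such a check could look like — the class names, schema versions, and field names are invented for the example, not drawn from any real export format — but it captures the four tests named above: time-range coverage, artifact-class completeness, a documented schema, and a present integrity report.

```python
# Hedged sketch of an export acceptance check. All names, schema
# versions, and field layouts here are illustrative assumptions.
from datetime import date

REQUIRED_CLASSES = {
    "input_output", "prompt_config", "model_metadata", "tool_trace",
    "human_review", "chain_of_custody", "correction_record", "integrity_record",
}
KNOWN_SCHEMAS = {"1.0", "1.1"}  # schemas the buyer has a data dictionary for

def acceptance_issues(export: dict, required_start: date, required_end: date) -> list[str]:
    """Return every reason this export should be rejected (empty list = accept)."""
    issues = []
    if date.fromisoformat(export["coverage_start"]) > required_start:
        issues.append("coverage starts too late")
    if date.fromisoformat(export["coverage_end"]) < required_end:
        issues.append("coverage ends too early")
    missing = REQUIRED_CLASSES - set(export["artifact_classes"])
    if missing:
        issues.append(f"missing artifact classes: {sorted(missing)}")
    if export["schema_version"] not in KNOWN_SCHEMAS:
        issues.append("undocumented schema version")
    if not export.get("integrity_validated"):
        issues.append("no integrity validation report")
    return issues

export = {
    "coverage_start": "2024-01-01", "coverage_end": "2026-05-01",
    "artifact_classes": sorted(REQUIRED_CLASSES - {"tool_trace"}),
    "schema_version": "1.0", "integrity_validated": True,
}
issues = acceptance_issues(export, date(2024, 6, 1), date(2026, 4, 30))
assert issues == ["missing artifact classes: ['tool_trace']"]
```

An export that drops one artifact class — here, the tool trace — fails cleanly, with a reason the employer can put in a rejection notice rather than a support ticket.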

There should also be a “no dark archive” rule. If the vendor keeps evidence after the customer has exported it, the contract must say what remains, why it remains, how long it remains, who can access it, whether it can be used for model improvement, and how it will be deleted or anonymized when retention ends.

This is not adversarial drafting for its own sake. It reflects the operating reality of HR AI. The evidence that protects the employer, candidate, employee, and vendor is often produced by several parties at once. If the contract does not define ownership and export rights before an incident, the parties will negotiate under pressure after trust has already failed.

Why Vendors Will Resist

Evidence portability changes the vendor’s leverage.

SaaS businesses know that data export is one of the last moments of power in a customer relationship. If migration is painful, renewal is easier. If historical evidence stays behind, leaving is riskier. If audit support requires premium services, compliance becomes a revenue line.

AI makes the leverage stronger because the most valuable evidence may not look like customer data. Vendors can argue that prompt templates are proprietary. Model routing rules reveal architecture. Evaluation artifacts include trade secrets. Tool traces include security-sensitive metadata. Telemetry contains multi-tenant operational signals. Support records include privileged analysis. Subprocessor paths change dynamically. Full exports may be expensive. Long retention may create privacy and security risk.

Some of those arguments are legitimate.

They do not defeat portability. They shape it.

The buyer does not need a vendor’s entire source code or model weights to understand an employment decision. It does need enough context to reconstruct the decision path. The export can redact proprietary internals while preserving the artifact class. It can describe a model route without exposing all routing logic. It can identify a prompt version and material instructions without handing over unrelated product templates. It can preserve hashes of artifacts that remain under escrow. It can separate full legal-hold export from routine operational export. It can use neutral third-party custody for sensitive materials.

The harder issue is business model.

If evidence portability becomes standard, buyers can switch vendors with less fear. New vendors can compete on better governance imports. Auditors can compare systems across time. Employers can separate record retention from vendor renewal. Incumbents lose some lock-in.

That is why the first strong portability terms may appear in large enterprise contracts, regulated industries, public sector procurement, and multinational employers exposed to the EU AI Act or California employment ADS rules. Smaller employers may get standard exports for records but not full AI evidence. Vendors will offer better packages as paid governance tiers. Platform vendors will claim their control plane reduces the need for separate exports. Point solutions will argue that deeper exports are technically hard.

Buyers should expect all of that.

They should still ask.

The purchasing logic is straightforward. If a vendor’s AI output can affect a hiring, pay, promotion, scheduling, performance, leave, or employee relations outcome, the evidence cannot be held hostage by the subscription.

The renewal date should not decide whether an employer can explain a decision.

The New Migration Workstream

HR technology migrations used to have familiar workstreams: data mapping, integrations, security, payroll parallel run, change management, training, reporting, and cutover.

AI adds another one.

Evidence migration.

Before a company leaves an HR AI vendor, it should inventory every employment-impacting AI use case and classify the record duty attached to it. Candidate screening has one evidence pattern. Payroll correction has another. Performance summary has another. Employee service answer has another. Scheduling recommendation has another.

The migration team should then decide what happens to four types of evidence:

Evidence type | What it covers
Open evidence | Active disputes, appeals, holds, unresolved corrections, pending manager acknowledgements
Closed evidence | Completed decisions with no active dispute but within retention window
Derived evidence | Analytics snapshots, model evaluation data, ROI dashboards, quality reports
Vendor-held evidence | Support tickets, model traces, prompt versions, subprocessor records, telemetry

This is operationally boring. That is the point. Boring controls are the ones that survive incidents.

The migration should also include a portability test. Pick five records: one candidate screening decision, one candidate communication, one assessment output, one employee service case, and one corrected or disputed AI output. Ask the vendor to produce the full evidence package. Ask a new system or internal archive to ingest it. Ask legal, HR operations, security, and an auditor to read it without vendor assistance.

If the company cannot explain those five records after export, it does not have evidence portability. It has a data dump.

The test will reveal gaps quickly. Prompt versions may not map to outputs. Model names may be missing. Tool approvals may sit in a separate admin system. Human review logs may show only final approval, not elapsed time. Correction receipts may not include downstream systems. Vendor support tickets may not link to employment record IDs. Some logs may be retained for 30 days while employment records must be kept for years.
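The five-record spot check can itself be scripted against each sampled record. The sketch below assumes hypothetical field names — no real vendor export is implied — and encodes the linkage failures listed above: an unmapped prompt version, a missing model name, a review log without elapsed time, a correction without downstream receipts, and log retention shorter than the employment record duty.

```python
# Illustrative spot check for one sampled record. Field names are
# assumptions for the example, not a real export schema.
def record_gaps(rec: dict, prompt_versions: set[str], record_retention_days: int) -> list[str]:
    """Return the evidence linkages that are broken for this record."""
    gaps = []
    if rec.get("prompt_version") not in prompt_versions:
        gaps.append("prompt version does not map to a known template")
    if not rec.get("model_name"):
        gaps.append("model name missing")
    if "elapsed_seconds" not in rec.get("human_review", {}):
        gaps.append("review log lacks elapsed time")
    if rec.get("corrected") and not rec.get("downstream_receipts"):
        gaps.append("correction has no downstream receipts")
    if rec.get("log_retention_days", 0) < record_retention_days:
        gaps.append("logs expire before the employment record does")
    return gaps

sample = {
    "prompt_version": "screening-v7", "model_name": "example-llm-2025-12",
    "human_review": {"decision": "advance"},       # no elapsed time recorded
    "corrected": True, "downstream_receipts": [],  # correction never propagated
    "log_retention_days": 30,                      # vs. a four-year record duty
}
gaps = record_gaps(sample, {"screening-v7"}, 4 * 365)
assert gaps == [
    "review log lacks elapsed time",
    "correction has no downstream receipts",
    "logs expire before the employment record does",
]
```

Running this against five records takes an afternoon. Negotiating the same answers after termination takes a legal team.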

Those gaps are easier to fix before termination.

They are much harder to fix after the vendor has lost the renewal.

The Evidence That Outlives the Product

Every HR AI buyer wants to believe the product decision and the evidence decision are the same. Choose a trusted vendor, keep the contract current, and the records will be there when needed.

That was never fully true. It is less true with agents.

Agentic HR workflows are distributed by design. A single output may involve an ATS, HRIS, model provider, MCP server, identity layer, document store, workflow platform, email system, analytics warehouse, implementation partner, and vendor support console. The company may later replace one piece while keeping others. The evidence cannot live only where the first output was generated.

The next mature HR AI stack will separate three things:

  • The system that performs the work.
  • The system that governs the work.
  • The archive that preserves evidence after the work changes hands.

Sometimes one platform will own all three. More often, they will be split. Microsoft may see productivity and identity traces. Workday may hold people and finance context. ServiceNow may govern cross-functional actions. The ATS may hold candidate disposition. A vendor may hold model telemetry. Legal may hold preservation. People analytics may hold derived reports.

Evidence portability is the discipline that lets those layers change without losing the ability to answer for past decisions.

It will not feel urgent during a product demo. Demos show what the agent can do today. Portability asks what the employer can prove after the agent is gone.

That is why it belongs in procurement.

The employer leaving the vendor in the opening meeting did not need a better dashboard. It needed an exit file that could stand on its own: the disputed output, the input record, the model and tool trace, the human review, the correction, the downstream receipts, the integrity proof, and the retention commitment.

The vendor could still lose the renewal.

The evidence had to remain.


This article provides a deep analysis of HR AI evidence portability and vendor lock-in. Published May 11, 2026.