A Request That No Longer Starts in Workday

The employee did not open Workday.

She was in Outlook, finishing a thread about a team member’s leave plan, when she asked Microsoft 365 Copilot how many vacation days she had left. The answer did not come from a generic HR chatbot. It came through Workday’s Sana Self-Service Agent, which had been embedded inside Copilot one day earlier.

The employee asked the next question in the same window. Could she request three days off next month?

That small sequence is the new shape of HR software. The work began in a productivity app. The question passed through an AI assistant. The action was executed against the HR system of record. The approval rules, permissions, and transaction logic stayed inside Workday. The user never entered the old portal.

For two decades, HR technology was built around destination software. Employees went to the HCM portal for pay, benefits, time off, policies, performance tasks, and employee service cases. Recruiters went to the ATS. Managers went to dashboards. HR operations lived in queues. Vendors competed to own the workflow by owning the screen.

In May 2026, that assumption is breaking.

On May 13, Workday announced that the Sana Self-Service Agent for HR and finance is generally available inside Microsoft 365 Copilot. Employees can check time-off balances, update personal information, view payslips, review tax withholding, request leave, check expense status, and look up policies from Microsoft 365. Managers can approve timesheets in bulk, review goals, start performance reviews, and submit payroll input. Workday says there is no separate login, no new deployment project, and no additional licensing requirement for eligible customers.

That is not a feature note. It is a map of the next platform war.

The question is no longer whether HR AI can answer employee questions. That layer is becoming ordinary. The harder question is where the question begins, which platform is allowed to turn it into an action, which system records the evidence, and which vendor gets paid when the action runs.

The hotter fight is upstream. Workday, Microsoft, ServiceNow, OpenAI, Google, SAP, ATS vendors, employee service platforms, and model companies are converging on one control point: the front door of work.

Whoever owns that front door will not own all HR data. Workday, SAP SuccessFactors, Oracle, ADP, UKG, Greenhouse, iCIMS, and others will still hold systems of record. But the front door may decide which system is called, which context is retrieved, which approval rule is shown, which action is metered, which evidence is kept, and which interface employees remember.

Why May 2026 Changed the Question

The timing matters because the product signals arrived together.

Microsoft had already set the enterprise frame in March. In its announcement of the First Frontier Suite, Microsoft said Microsoft Agent 365 would be generally available on May 1, priced at $15 per user, as a control plane to observe, govern, manage, and secure agents across an organization. The same post cited an IDC forecast of 1.3 billion agents in circulation by 2028 and said 80% of the Fortune 500 were already using Microsoft agents. It also said tens of millions of agents had appeared in the Agent 365 Registry during a two-month preview, while Microsoft itself had visibility into more than 500,000 internal agents producing more than 65,000 employee responses a day over the prior 28 days.

The numbers are partly marketing. They still reveal the strategic move. Microsoft does not want Copilot to be one assistant among many. It wants the Microsoft 365 security, identity, productivity, data, and admin stack to become the operating environment for agents that touch everyday work.

Google made a similar move from the Workspace side. On May 4, Google Workspace began rolling out an AI control center in the Admin console for Enterprise Standard and Plus customers. Google described it as a central place for security and governance settings for generative AI and agent actions. Its first modules cover AI access, product security settings, foundational controls such as classification labels and data protection rules, and privacy or compliance standards. At launch, the dashboard covers usage across Gmail, Drive, Docs, Sheets, Slides, Meet, Calendar, Chat, and the Gemini app.

OpenAI attacked the same surface from the model and app layer. On April 22, OpenAI introduced workspace agents in ChatGPT, shared agents that can handle long-running workflows inside organizational permissions. The release notes for ChatGPT Business then added two more clues in early May: ChatGPT for Excel and Google Sheets on May 5, and new Analytics and Agents areas in the global admin console on May 6. The Business release notes describe agent views that let admins inspect connected apps, memory files, schedules, activity, and agent analytics.

ServiceNow made the most explicit “system of action” argument. At Knowledge 2026 on May 5, ServiceNow expanded AI Control Tower across five dimensions: discover, observe, govern, secure, and measure. Discovery now extends to 30 new enterprise integrations, including AWS, Google Cloud, Microsoft Azure, SAP, Oracle, and Workday. ServiceNow said it processes 100 billion workflows and 7 trillion workflow transactions annually. It also added an AI Gateway for Model Context Protocol transactions.

In a separate Knowledge 2026 post, ServiceNow opened Action Fabric through a generally available MCP Server. The company framed ServiceNow not only as a record platform but as a governed action layer. A record inside ServiceNow can trigger workflows, playbooks, business rules, assignment flows, and SLA timers. The company wants external agents from Claude, Copilot, or a customer’s own environment to call those actions headlessly.

Then Workday put the HR version into the flow of work.

Workday had already introduced Sana from Workday on March 17 with more than 300 skills and connectors to systems such as Gmail, Google Drive, Microsoft Outlook, Salesforce, ServiceNow, SharePoint, Slack, and Zoom. Workday’s argument was that Sana does not merely suggest. It acts across systems using Workday’s security model, configuration, policies, and audit framework.

The May 13 Microsoft 365 Copilot integration turned that architecture into a visible distribution move.

The inference from these launches is straightforward: enterprise AI is moving from browser tabs and vendor portals into the tools where employees already live. Once that happens, HR software competes on whether its data and actions can be safely exposed through other front doors without losing control.

Front Door, System of Record

The front door and the system of record used to be the same place. In HR, that was convenient for vendors and painful for users.

An employee who wanted time off opened Workday, SAP SuccessFactors, Oracle, UKG, ADP, or another HR portal. A candidate went through an ATS workflow. A manager approved a timesheet in the HCM system. An HR service request entered ServiceNow or a case management tool. The user experience was fragmented, but the ownership question was simple. The vendor that held the transaction usually owned the screen.

AI breaks that link.

A single employee request can now start in Teams, Outlook, Gmail, Slack, ChatGPT, Copilot, Gemini, ServiceNow, an internal portal, a spreadsheet, or an agent running on a schedule. The front door can be a conversation, a sidebar, a voice request, a mobile push, a spreadsheet cell, or a background task. The system of record may never be opened by the human.

That does not make the system of record less important. It may make it more important.

The front door needs context, permissions, business rules, and a trusted place to write the result. A vacation request still has to know balances, policies, blackout windows, manager approval rules, payroll cutoffs, and local labor rules. A performance-review action still has to respect role permissions, calibration cycles, employee visibility, legal hold, and retention rules. A recruiting action still has to preserve candidate notices, assessment data, recruiter notes, interview packets, disposition codes, and audit history.

The front door cannot invent those controls. It must call them.

That is Workday’s bet. Let Copilot own the surface if necessary, but make Workday the governed backend for people and money actions. The transaction still flows through Workday’s rules. Workday keeps the sensitive data in its trusted system. Copilot becomes the interface, not the source of truth.

ServiceNow’s bet is adjacent but broader. It wants to be the action layer across departments. If an external agent wants to reset a password, provision access, onboard an employee, route a case, update a procurement step, or escalate a risk workflow, ServiceNow wants that action to pass through its governed workflows. The record triggers the work. The SLA timer starts. The policy is enforced. The audit trail follows.

Microsoft’s bet is to make the productivity suite the command surface and Agent 365 the control plane. If work starts in Office, Outlook, Teams, SharePoint, Excel, or Copilot, Microsoft can observe and govern the agent layer even when the transaction touches Workday, ServiceNow, Salesforce, SAP, or custom systems. That gives Microsoft leverage without forcing it to replace every system of record.

Google is making the Workspace version of the same move. It may not own HR transactions, but it owns Gmail, Drive, Docs, Sheets, Meet, Calendar, and Chat for many enterprises. If agents operate on top of Workspace data, the Admin console becomes a governance surface.

OpenAI is the least anchored to a single enterprise application stack and the most ambitious about becoming a horizontal agent platform. Frontier and workspace agents point to a layer where companies create shared agents, connect apps, add custom MCP servers, run scheduled tasks, and inspect agent behavior. If ChatGPT becomes the workbench for cross-system work, OpenAI can sit above HR tools, finance tools, code tools, and documents.

The front door, then, is not one thing.

It is a control position.

It decides which request gets translated into which workflow. It decides which app is called first. It decides how much context the agent sees. It decides whether the user notices the underlying system. It decides where the usage meter sits. It decides which log becomes the first version of the truth.

In HR, that matters because employment actions are not ordinary productivity events. They affect people.

Workday’s Narrow But Powerful Bet

Workday’s position is clearest in HR and finance because it already owns high-trust operational data.

The company says Workday is used by more than 11,500 organizations and more than 65% of the Fortune 500. Those customers do not treat Workday as a lightweight collaboration app. They use it to manage worker records, organizational structures, payroll inputs, benefits, time, absence, performance cycles, compensation, procurement, and finance workflows.

That creates a specific kind of AI advantage. Workday does not need to win by being the most open-ended assistant. It can win by making sensitive actions feel safe enough to automate.

The March launch of Sana from Workday made that strategy explicit. Workday described four capabilities: find, act, build, and automate. “Find” returns cited answers from company knowledge and Workday data. “Act” executes tasks across connected systems. “Build” creates dashboards, summaries, and documents. “Automate” sets up no-code, multi-step workflows. The examples were not abstract. They included updating a home address, checking the tax and benefits impact, changing a contract value, generating a recruiting pipeline dashboard, and reviewing email receipts against policy before submitting an expense.

The important phrase in Workday’s positioning is not “agentic.” It is “deterministic rails.”

HR and finance workflows can tolerate probabilistic interfaces only if the final action is constrained by deterministic policy, permission, and process logic. A model can interpret the request. It can draft a response. It can propose the next action. But the system needs to know who the user is, what they are allowed to see, which field may be changed, whether a manager approval is required, and how the transaction should be recorded.
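The split between probabilistic interpretation and deterministic execution can be sketched in a few lines. This is an illustrative sketch, not Workday's implementation; the policy table, roles, and field names are invented for the example. The point is that the model may propose any action, but a fixed policy layer decides what actually runs.

```python
from dataclasses import dataclass

# Hypothetical policy tables: which fields each role may change,
# and which changes require a manager approval step.
EDITABLE_FIELDS = {
    "employee": {"home_address", "phone"},
    "hr_admin": {"home_address", "phone", "cost_center"},
}
NEEDS_APPROVAL = {"cost_center"}

@dataclass
class ProposedAction:
    actor_role: str
    field: str
    new_value: str

def gate(action: ProposedAction) -> str:
    """Deterministic rail: the model proposes, but policy decides."""
    allowed = EDITABLE_FIELDS.get(action.actor_role, set())
    if action.field not in allowed:
        return "rejected: field not editable by role"
    if action.field in NEEDS_APPROVAL:
        return "queued: manager approval required"
    return "executed"

print(gate(ProposedAction("employee", "home_address", "12 Elm St")))  # executed
print(gate(ProposedAction("employee", "cost_center", "CC-9")))
```

However fluent the assistant's interpretation, the outcome set is closed: execute, queue for approval, or reject. That is what makes a probabilistic front end tolerable for payroll-adjacent work.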

That is why the Microsoft 365 Copilot integration is strategically useful for Workday. It lets Workday accept that employees may prefer to begin in Microsoft 365. Workday does not have to force every interaction back through its own UI. It can become the trusted execution layer behind another front door.

There is a risk in that concession.

When the user’s memory of the task is “I asked Copilot,” the front-end assistant gets the credit. Workday becomes infrastructure. That can weaken the emotional attachment to the HCM portal over time. It can also give Microsoft better visibility into work intent, cross-app behavior, and employee-level AI adoption.

Workday appears willing to take that risk because the alternative is worse. If employees and managers are going to use Copilot anyway, Workday wants those interactions to resolve through Workday’s business rules rather than through unsupported scraping, exports, custom connectors, or shadow agents. Better to be the official backend than the bypassed portal.

This is the new HR software bargain.

The system of record may lose the screen. It must not lose the action.

For Workday, the question becomes how far this model extends. Time off and payslips are safe starting points. Performance reviews, promotion packets, compensation planning, succession, employee relations, recruiting disposition, and workforce planning are harder. They involve judgment, legal exposure, manager discretion, employee trust, and sometimes privileged information.

If Workday can expose those workflows through outside assistants while keeping control of permissions, approvals, evidence, and audit trails, it strengthens its role as the people-and-money control layer. If it cannot, productivity-suite agents will still push toward those workflows through weaker paths.

That is the front door dilemma.

If the official door is too limited, users will find another one.

ServiceNow’s Broader Bet: The System of Action

ServiceNow is not trying to be the HCM system of record. It is trying to be the place where enterprise work gets routed, governed, and completed.

That makes its HR AI strategy different from Workday’s. Workday starts from sensitive people and finance data. ServiceNow starts from workflow execution across departments. Employee service is one domain among many, but it is a natural fit because many HR requests already cross IT, security, legal, procurement, finance, workplace services, and payroll.

Onboarding is the cleanest example.

In an HCM system, a new employee is a record. In a service workflow, that record triggers work: equipment provisioning, access grants, background-check follow-ups, payroll setup, building access, mandatory training, manager tasks, and policy acknowledgments. The value is not the row. The value is the coordinated chain of actions that makes the person ready on day one.

ServiceNow’s Action Fabric is built for that claim. The company says AI agents can call governed enterprise actions headlessly through its MCP Server. A Claude, Copilot, or internal agent does not need to present a traditional ServiceNow UI to run a workflow. It can submit the action, and ServiceNow can apply its workflow logic, playbooks, business rules, approvals, SLA timers, and audit trail.
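A headless governed action can be sketched as an endpoint that skips the UI but not the controls. This is a generic illustration, not ServiceNow's API; the function, rule name, and ticket format are invented. What it shows is that submission alone is enough to start the SLA clock, apply a business rule, and write the audit record.

```python
import time

audit_trail = []  # stand-in for the platform's audit store

def governed_action(agent: str, action: str, payload: dict) -> dict:
    """Hypothetical headless action endpoint: no UI, but the platform's
    rules, SLA timer, and audit trail still apply to every call."""
    record = {
        "agent": agent,
        "action": action,
        "payload": payload,
        "sla_started": time.time(),      # SLA timer starts at submission
        "rule": "route_to_owner_queue",  # business rule applied server-side
    }
    audit_trail.append(record)
    return {"status": "accepted", "ticket": f"REQ-{len(audit_trail):04d}"}

result = governed_action(
    "external-copilot-agent", "provision_access", {"system": "vpn", "user": "e777"}
)
print(result["ticket"])  # REQ-0001
```

The calling agent never sees a form or a queue view; it only sees the ticket. The governance lives entirely below the interface.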

This is a more aggressive front door strategy than it first appears.

ServiceNow is saying that if agents become the interface for work, the action platform underneath becomes more valuable, not less. The UI may fragment. The governed workflow layer can consolidate.

Its AI Control Tower reinforces the same position. The May 5 expansion covers AI assets outside ServiceNow, 30 new enterprise integrations, runtime observability, risk frameworks aligned with NIST and the EU AI Act, identity access governance, real-time shutdown, and cost measurement. ServiceNow also says its Evaluation Suite has already been used by more than 150 customers across about 1 million AI interactions.

The platform message is clear: AI sprawl creates demand for a control layer that can see across agents, models, prompts, data sets, workflows, identities, and costs.

HR is a high-value test case because employee service is repetitive, sensitive, and cross-functional. A leave request may touch policy, manager approval, payroll, scheduling, workforce planning, and local compliance. A relocation question may touch immigration, tax, finance, legal, and IT access. An employee relations inquiry may require careful routing and restricted visibility. A new-hire request may involve a dozen systems before the person starts.

The front door for those requests may be Copilot, ChatGPT, Slack, Teams, Gemini, a voice agent, or a mobile app. ServiceNow’s bet is that the enterprise will still want a single governed work engine underneath.

That creates a competitive tension with Workday.

Workday wants HR and finance actions grounded in Workday. ServiceNow wants cross-enterprise actions grounded in ServiceNow. Microsoft wants the employee to start in Copilot. OpenAI wants the agent to start in ChatGPT. Google wants Workspace agents governed in Workspace. The ATS wants recruiting actions in the hiring workflow. The payroll vendor wants pay-related actions on its rails.

No single vendor can fully own all of it.

The fight is over which platform becomes the default orchestrator when a request crosses boundaries.

OpenAI and Google Move the Admin Console Into the Agent Era

OpenAI and Google are not HR vendors, but they are changing the distribution layer around HR work.

OpenAI’s workspace agents are important because they give organizations a way to create shared agents for repeatable tasks: reports, code, messages, workflows across connected apps, scheduled runs, Slack channels, custom MCP servers, version history, analytics, and admin controls. HR teams will not be the only users, but they are obvious users. A recruiting operations team can check requisition hygiene and missing feedback. An HR business partner can prepare for manager meetings. A people analytics team can reconcile workforce files in a spreadsheet-native interface.

Those workflows may begin outside the HCM system. They may also touch sensitive data.

That is why OpenAI’s admin console changes matter. When admins can see agents, connected apps, memory files, schedules, activity, tool interactions, connector interactions, and analytics, the enterprise agent platform starts to look less like a chat product and more like a managed work environment. The agent directory becomes a governance object.

Google’s move is similar from the collaboration-suite side. Workspace data already contains HR-relevant material: policy documents, manager notes, employee communications, interview packets, onboarding docs, meeting recordings, compensation spreadsheets, performance calibration decks, and employee service attachments. If Gemini or third-party agents can act on that data, Workspace needs controls that look different from ordinary document sharing.

The AI control center’s first modules are simple, but the direction is clear. Admins need to see who uses AI, which services are enabled, what data protection rules apply, how oversharing is prevented, and how agent access to Workspace data is governed.

This creates a quiet shift in HR tech buying.

The CHRO may still buy HCM. The TA leader may still buy ATS. The HR operations leader may still buy ServiceNow HR Service Delivery. But the CIO and CISO will increasingly decide which AI front doors are allowed to touch HR data. The productivity suite’s admin console becomes part of the HR AI governance stack.

SHRM’s 2026 research helps explain why this shift is likely. SHRM surveyed 1,908 HR professionals in December 2025 and found that 39% had already adopted AI in HR functions, another 7% intended to launch AI in HR during the year, and 62% were using AI somewhere in the organization. But 56% of HR professionals said they did not formally measure the success of AI investments at all.

That gap leaves room for IT, security, legal, and procurement to shape the front door.

The same pattern appears in talent acquisition. iCIMS and Aptitude Research found in April 2026 that 69% of companies were using AI in some capacity in talent acquisition, but only 18% were using it broadly across hiring processes. Candidates were moving faster: 74% of companies said candidates were already using AI in the job search. The same research reported that 46% of companies were using or planning to use agentic AI in talent acquisition, while 45% lacked a formal AI governance framework.

That is the market condition in which front doors multiply.

Recruiters are overloaded. Candidates are automating. Managers want faster answers. HR teams lack measurement. Vendors are racing to embed agents into existing surfaces. IT wants control. Legal wants evidence. Security wants visibility.

The front door war is not a vendor fantasy. It is an organizational vacuum.

Recruiting Will Feel the War First

Recruiting is usually where HR AI pressure shows up early because volume is visible and delays hurt. Screening, candidate communication, assessments, and sourcing are already among the top AI use cases in talent acquisition. Those are exactly the tasks that can move out of the ATS interface.

A recruiter may ask Copilot to summarize hiring-manager feedback from email and meeting notes. A ChatGPT workspace agent may reconcile a weekly requisition report in Google Sheets. A Gemini-enabled Workspace flow may pull interview packets from Drive. A ServiceNow workflow may provision system access for a new recruiting coordinator. A Workday or ATS agent may update the candidate record. A scheduling agent may touch calendars. A sourcing agent may operate in a CRM. A candidate communication agent may send follow-ups.

The hiring decision still has to be recorded somewhere. But the work around the decision may happen across five front doors.

That creates three problems.

The first is context fragmentation. A recruiter summary in Copilot may include information from Outlook and Teams but miss a note inside the ATS. A ChatGPT agent may clean a pipeline spreadsheet but miss the latest disposition code. A Workspace agent may rely on a Drive document that has not been updated after a policy change. The front door is only as good as its context contract.

The second is action ambiguity. If an agent drafts a candidate email, updates a CRM field, moves a candidate stage, or schedules an interview, who authorized the action? Was it the recruiter, the agent owner, the workspace admin, the ATS administrator, or the vendor workflow rule? The more front doors exist, the more important runtime tool approval becomes.

The third is evidence split. A challenged rejection may require the ATS record, the email thread, the meeting note, the assessment result, the agent prompt, the retrieval source, the model route, the tool call, the reviewer action, and the final disposition. No single system naturally holds that full chain when work begins in one platform and completes in another.

This is why the HR AI front door war cannot be separated from governance.

California’s employment automated-decision regulations, effective October 1, 2025, require employers and covered entities to preserve employment records including automated-decision data for at least four years, according to Mayer Brown’s summary. Colorado’s 2026 automated decision-making bill advanced in May, with the official fiscal note describing covered domains that include employment and employment opportunities. The EU AI Act treats recruitment, selection, worker management, promotion, termination, task allocation, and performance monitoring as high-risk contexts.

The legal direction is not subtle. Employment-impacting AI needs records, explanations, review, correction, and accountability.

The product direction is also not subtle. Employment work is moving into agents, admin consoles, productivity suites, and systems of action.

The conflict between those two facts is the opportunity.

The winner in recruiting will not be the vendor with the best chatbot demo. It will be the vendor, or coalition of vendors, that can let recruiters work from the natural front door while preserving enough context, permission, and evidence to defend the process later.

That is harder than answering “How many candidates are in stage three?”

It means proving why they got there.

From Seats to Actions

The front door war is also a pricing war.

Enterprise software has long relied on seats, modules, records, transactions, or usage tiers. AI agents strain each model. A human seat can generate only so many clicks. A scheduled agent can run hundreds of actions across systems. A multi-agent workflow can query, summarize, update, notify, and escalate without a person opening the application.

That changes the economics of SaaS.

If a company pays for Workday, Microsoft 365, ServiceNow, an ATS, a payroll system, and ChatGPT Business, what happens when a ChatGPT or Copilot agent calls Workday, triggers a ServiceNow workflow, updates a spreadsheet, drafts an email, and writes back to an ATS? Which vendor gets to meter the value? Which vendor pays for model inference? Which vendor absorbs support risk? Which vendor owns the audit evidence?

ServiceNow is already explicit that actions matter. Action Fabric exposes governed actions to external agents. AI Control Tower measures cost and ROI. The platform’s value claim is not that it stores a record but that it lets AI act safely across enterprise workflows.

Workday uses Flex Credits for Sana and agentic capabilities. It also emphasizes that the Copilot integration has no additional licensing requirement for eligible customers. That framing reduces adoption friction, but it does not remove the long-term pricing question. If more employee and manager work shifts through agentic interfaces, HR software vendors will need a way to price the work being done, not only the humans logging in.

Microsoft’s Agent 365 price gives another signal. A control plane for agents can be a separately priced enterprise layer. If agents become another workforce class, governing them becomes a billable product.

OpenAI’s Business release notes for ChatGPT for Excel and Google Sheets mention plan credits after a preview period. Spreadsheet-native work is not only a feature. It is a route into everyday operational usage that can be metered.

The economic fight will be messy because the same action may create value for multiple platforms. A time-off request from Copilot that executes in Workday and notifies a manager in Outlook consumes Microsoft distribution, Workday transaction logic, model inference, identity checks, audit logs, and compliance retention. Each layer can argue it created the outcome.

For HR buyers, the near-term risk is not only higher cost. It is cost opacity.

An HR leader may see lower case volume because agents answer routine questions. A finance leader may see rising AI credits, API calls, workflow actions, and premium control-plane licenses. A recruiter may save time on summaries while the company pays more for connectors, agent governance, evidence retention, and external action calls.

The old ROI question asked whether AI saved recruiter or HR service time.

The new ROI question asks whether agentic work merely moved cost from seats to actions, controls, and evidence.

This is why the “front door” matters commercially. The platform that owns the entry point can shape the meter. It can define what counts as a run, an action, a transaction, a workflow, an interaction, a governed call, or an automation success. It can also show the dashboard that claims the productivity gain.

Control the front door, and you influence the ROI story.

Evidence Under the Front Door

The front door war will eventually become an evidence schema war, but not in the narrow sense of an export format.

An employment-impacting agent action now needs a multi-platform record. Consider a simple employee service case:

Layer | Example artifact | Why it matters
Front door | Copilot, ChatGPT, Gemini, Slack, Teams, or ServiceNow interaction | Shows where the request began and what the user saw
Identity | User, role, manager relationship, agent identity, delegated access | Shows whether the requester and agent had permission
Context | Policy documents, HR records, email, case history, calendar, spreadsheet | Shows what information shaped the output
Action | Tool call, workflow trigger, field update, message, approval request | Shows what the agent did
System of record | Workday, SAP, Oracle, ADP, ATS, payroll, ServiceNow record | Shows the authoritative transaction
Human review | Approval, override, rejection, escalation, comments | Shows where judgment entered
Evidence | Prompt, model route, retrieval source, response, version, log, receipt | Shows how the output can be reconstructed
Retention | Legal hold, export, deletion rule, downstream correction | Shows whether the company can answer later
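The layers above can be sketched as a single structured record. This is a hypothetical schema for illustration; none of the field names come from any vendor, and real systems would hold these layers in separate stores. The point is that one trace ID threads through every layer so the chain can be reassembled later.

```python
import json

# Hypothetical multi-platform evidence record mirroring the layers above.
evidence = {
    "trace_id": "tr-7f3a",
    "front_door": {"surface": "copilot", "utterance": "Request 3 days off next month"},
    "identity": {"user": "e12345", "role": "employee", "agent": "hr-self-service"},
    "context": ["policy:pto-2026", "balance:worker/e12345"],
    "action": {"tool": "submit_time_off", "days": 3},
    "system_of_record": {"system": "hcm", "transaction_id": "to-99812"},
    "human_review": {"approver": "m55501", "decision": "approved"},
    "retention": {"legal_hold": False, "keep_until": "2030-06-01"},
}

# Serializable end to end: Legal can read it without any vendor UI.
print(json.dumps(evidence, indent=2))
```

In practice each layer lives in a different platform's log; the schema only works if every platform agrees to carry the same trace ID.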

No one vendor owns every layer.

That is the core problem. Microsoft may know the front-end Copilot interaction. Workday may know the HR transaction. ServiceNow may know the workflow. OpenAI may know the agent configuration. Google may know the Workspace document context. The ATS may know the candidate disposition. The security platform may know the identity and risk event. Legal may need the full chain.

If these systems cannot produce a coherent evidence trail, the front door becomes a liability. It may make work faster while making later explanation harder.

The last month of HR AI governance coverage has been building toward this. Decision evidence packets asked what must be preserved. Audit rooms asked what HR, IT, Legal, Security, Procurement, and vendors must bring to the table. Runtime tool approval ledgers asked who approved each sensitive tool call. Evidence portability asked whether evidence can leave a vendor. Post-termination support asked whether that evidence remains usable after the contract ends.

The front door war adds one more requirement.

Evidence must follow the action across entry points.

That means HR AI buyers should not accept front-door integrations that only promise convenience. They need answers to dull but decisive questions:

  • Does the system log the original user request and the exact action shown to the user?
  • Does it record which assistant, agent, model, connector, MCP server, and workflow executed the request?
  • Does it preserve the permission state and policy version at the time of action?
  • Does the HCM, ATS, payroll, or service system receive a trace ID from the front door?
  • Can the company reconstruct the same event across Copilot, ChatGPT, Workspace, ServiceNow, Workday, and downstream records?
  • Can the evidence be exported in a form Legal and Security can read without the original vendor UI?
  • Can a correction propagate back to every front door that displayed or cached the old output?

These operational questions are strategic.

The vendor that answers them best will become more than an assistant provider. It will become the proof layer for AI-mediated work.

What Buyers Should Watch Now

The practical buying question is no longer “Which AI assistant has the best HR answers?”

That question is too small.

A better buyer checklist starts with five control points.

First, identify the true front doors. Employees may say they use Workday, but their first HR questions may start in Outlook, Teams, Slack, Google Chat, ChatGPT, or a manager’s spreadsheet. Recruiters may say they work in the ATS, but their summaries, reports, and candidate follow-ups may be generated elsewhere. Map where work begins, not only where records end.

Second, separate answer rights from action rights. It is one thing for an agent to answer “What is our leave policy?” It is another to submit leave, change payroll input, update a candidate status, send a rejection email, approve a timesheet, open an employee relations case, or create a performance review. Read-only access, draft creation, workflow triggering, and system writes need different controls.

Third, require trace continuity. A front-door agent should pass a trace ID, action ID, or evidence reference into the system of record. Without that bridge, audit reconstruction becomes manual. Manual reconstruction works for a demo. It fails in litigation, regulator response, high-volume hiring, and large employee service operations.
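A minimal sketch of trace continuity, assuming nothing about any real API: the front door mints the trace ID once, and the system of record stores that same ID alongside its own transaction ID, so reconstruction becomes a join instead of a manual hunt.

```python
import uuid

def front_door_request(user_request: str) -> dict:
    """The front door (e.g., an assistant surface) mints the trace ID once."""
    return {"trace_id": str(uuid.uuid4()), "request": user_request}

def system_of_record_write(envelope: dict) -> dict:
    """The system of record keeps its own transaction ID but stores the
    incoming trace ID with it, preserving the audit bridge."""
    return {
        "transaction_id": str(uuid.uuid4()),
        "trace_id": envelope["trace_id"],  # the bridge the article describes
        "payload": envelope["request"],
    }

envelope = front_door_request("Approve this week's timesheets for my team")
txn = system_of_record_write(envelope)
assert txn["trace_id"] == envelope["trace_id"]
```

Production systems would carry this in a standard header rather than a dict, but the contract is the same: one ID, minted at the entry point, present in every downstream record.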

Fourth, ask how actions are priced. If the vendor cannot explain how agent runs, workflow actions, model calls, connector use, API events, evidence retention, and control-plane licenses add up, the buyer cannot judge ROI. The first wave of HR AI promised efficiency. The second wave will need cost accounting.
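What that cost accounting could look like, as a sketch with invented meter names and unit prices: if a vendor cannot produce a rollup like this per action, the buyer cannot price a workflow.

```python
# Illustrative only: meter names and unit prices are invented for the sketch.
METER_PRICES = {
    "agent_run": 0.05,           # per agent invocation
    "model_call": 0.002,         # per underlying model call
    "connector_event": 0.001,    # per connector/API event
    "evidence_retention_mb": 0.0005,  # per MB of evidence retained
}

def action_cost(usage: dict) -> float:
    """Roll one action's metered usage up into a single cost figure."""
    return sum(METER_PRICES[meter] * count for meter, count in usage.items())

# One leave request might touch several meters at once.
leave_request = {"agent_run": 1, "model_call": 4, "connector_event": 3, "evidence_retention_mb": 2}
print(round(action_cost(leave_request), 4))
```

The arithmetic is trivial; the discipline is not. The hard part is getting a vendor to commit to the meter list in the first place.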

Fifth, test correction. Do not only ask how the agent completes a task. Ask how it unwinds one. If the wrong policy was used, the wrong candidate summary was written, the wrong manager packet was generated, or the wrong payroll input was submitted, can the organization recall, correct, notify, and prove propagation across every front door and downstream system?
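The propagation half of that test can be made concrete. As a hedged sketch (the front-door structures here are hypothetical), a recall routine should return which surfaces acknowledged the correction, so propagation can be proven rather than assumed:

```python
def recall_and_correct(front_doors: list, trace_id: str, corrected_output: str) -> dict:
    """Push a correction to every front door that displayed or cached the
    old output, and record per-surface acknowledgement."""
    acknowledged = {}
    for door in front_doors:
        cache = door.get("cache", {})
        if trace_id in cache:
            cache[trace_id] = corrected_output
            acknowledged[door["name"]] = True
        else:
            acknowledged[door["name"]] = False  # nothing cached on this surface
    return acknowledged

doors = [
    {"name": "Copilot", "cache": {"t-1": "old policy answer"}},
    {"name": "Slack", "cache": {}},
    {"name": "ServiceNow", "cache": {"t-1": "old policy answer"}},
]
result = recall_and_correct(doors, "t-1", "corrected policy answer")
print(result)
```

Note what the sketch makes visible: an unwind is only complete when every surface has either been corrected or has affirmatively reported that it held nothing to correct.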

This is where vertical HR vendors still have leverage.

They understand the employment context. They know which actions are sensitive, which records must be retained, which workflows require approvals, which fields carry legal meaning, and which interactions candidates and employees may later challenge. Productivity-suite and model vendors can provide distribution, interface, and horizontal agent infrastructure. They do not automatically understand HR’s decision semantics.

But HR vendors cannot hide behind that expertise.

If their workflows remain trapped in portals, users will route around them. If their data cannot be safely exposed to Copilot, ChatGPT, Gemini, Slack, ServiceNow, or internal agents, buyers will build unofficial paths. If their evidence cannot connect to cross-platform agent logs, Legal and Security will treat them as incomplete.

The strongest HR AI products will look less like standalone destinations and more like governed capability layers. They will expose actions safely, carry evidence across systems, support multiple front doors, meter work transparently, and preserve the system of record’s authority without requiring every user to live inside its UI.

That is a different product discipline.

It is also a different sales motion. The buyer is no longer only HR. It is HR plus IT, Security, Legal, Procurement, Finance, and sometimes the enterprise AI center of excellence. The vendor has to prove not only that the feature works but that the action can be governed in the company’s broader agent environment.

When the Portal Fades

The old HR portal will not disappear quickly.

There will still be pages, forms, dashboards, records, approvals, and administrative screens. HR operations teams need them. Power users need them. Compliance teams need them. Some actions are too complex or sensitive to fit a conversational surface.

But the portal is losing its monopoly on everyday work.

The employee asking about vacation in Copilot is not making a philosophical choice about enterprise architecture. She is using the window already open. The manager approving timesheets from a Microsoft 365 surface is not trying to weaken the HCM vendor. He is reducing friction. The recruiter running a pipeline cleanup agent in a spreadsheet is not rejecting the ATS. She is trying to survive volume.

That is how platform shifts usually begin. Not with a formal replacement, but with a change in where small actions start.

Workday’s Sana integration with Microsoft 365 Copilot shows one response: keep the transaction on trusted rails while letting the front door move. ServiceNow’s Action Fabric shows another: make governed workflow callable by any agent. OpenAI’s workspace agents show a third: let teams create shared, scheduled, cross-app agents. Google’s AI control center shows a fourth: bring agent governance into the collaboration suite. Microsoft’s Agent 365 shows the bundle strategy: manage agents with the same seriousness once reserved for users, devices, and data.

The HR tech market will absorb all of these moves unevenly. Some vendors will become backends behind bigger front doors. Some will become action gateways. Some will become evidence layers. Some will try to own the whole experience and find that users have already moved.

At the end of the leave request, the employee may see only a short confirmation in Copilot.

Behind that sentence, the real contest has already happened: Microsoft held the surface, Workday executed the transaction, identity controls decided access, audit systems wrote the evidence, and HR will be accountable if the answer was wrong.

The front door looked simple.

It was not.


This article provides a deep analysis of the emerging HR AI front door war. Published May 14, 2026.