The Steering Committee Meeting That Did Not Include HR

At 7:32 a.m. on a Tuesday, a CHRO at a large employer opened a calendar invite with an unremarkable title: “AI risk and workflow review.”

The attendees told the real story.

The CIO was there. The general counsel was there. The chief compliance officer was there. So was the leader responsible for shared services. The HR systems director had been copied. The head of talent acquisition had not. Neither had the leader who owned employee relations, workforce planning, or learning.

That would have been easy to dismiss a year ago as one odd internal meeting. In 2026 it is becoming the normal shape of AI decision-making in many companies.

The reason is not that HR stopped caring about AI. In fact, HR is using it. Recruiters are screening with it, sourcers are searching with it, employee service teams are piloting it, and CHROs are telling every conference audience that AI will reshape hiring, skills, and workforce planning. The problem is subtler and more consequential. HR is often arriving after the architecture decisions have already been made.

That is what the latest evidence now shows.

SHRM’s State of AI in HR 2026 report says fewer than half of organizations will use AI in HR this year. Among those that do, the most common uses are still concentrated in recruiting, HR technology, learning, and employee experience. More revealing than the adoption rate is the ownership pattern. SHRM found that legal and compliance functions primarily lead AI governance and oversight in 37% of organizations. And over half, 52%, say HR is not involved in overall AI strategy and vision, either directly or collaboratively.

That is not a small organizational detail. It is the clearest signal yet that the center of gravity in HR tech is moving away from feature buying and toward governance, orchestration, policy, identity, and auditability.

Once that happens, the buyer changes.

A recruiter copilot can be bought by talent acquisition. A chatbot that resolves simple HR questions can be bought by an employee service team. But an agent that touches policy, identity, approvals, compliance, knowledge retrieval, and system execution across multiple tools stops being “just HR software.” It becomes governed infrastructure. That is where IT, legal, and compliance move in first.

This helps explain why the most important recent product announcements in enterprise software do not sound like old HR tech launches. Workday talks about an Agent System of Record. ServiceNow talks about AI Control Tower and Autonomous Workforce. Salesforce talks about Agentforce HR Service, but it packages it inside employee service and a broader agent platform that already sells into IT, service, and operations.

For a decade, HR tech wanted to be bought like software.

Agentic AI is forcing it to be bought like a control system.

That puts HR’s seat at the decision table under pressure. Not because HR should control every model, every runtime, or every security policy. It should not. But if HR does not define the workforce rules that those systems enforce, someone else will.

The Data Says HR Is Using AI but Not Governing It

The cleanest way to understand the moment is to separate AI usage from AI ownership.

Usage is rising. Ownership is consolidating somewhere else.

SHRM’s 2026 data makes that split difficult to ignore. The report says AI use in HR is still relatively narrow. Recruiting is the most common practice area at 27%. HR technology follows at 21%. Learning and development is at 17%, and employee experience is at 14%. At the low end are inclusion and diversity, C-suite and board relations, and ESG, ethics, and compliance, each at 2% or less.

That alone says something important. HR is mostly applying AI where productivity wins are easy to explain: search, scheduling, matching, drafting, knowledge retrieval, service response. It is not yet deeply operationalizing AI inside the governance-heavy parts of the function.

Then comes the harder finding.

SHRM says HR functions are “rarely the primary drivers of AI implementation.” Legal and compliance lead AI governance in 37% of organizations. When developing or updating AI-related guidelines and policies, HR and IT collaborate occasionally in 42% of organizations and frequently in 27%. And when it comes to overall AI strategy and vision, 52% of organizations do not involve HR directly or in a collaborative cross-functional way.

That combination creates a structural gap:

  • HR is close enough to feel AI’s workforce impact.
  • Legal and compliance are close enough to define the guardrails.
  • IT is close enough to own the systems and integration logic.
  • But HR is often not close enough to encode the operating rules early.

This is the gap that will matter most over the next two years.

It already shows up in the labor market. LinkedIn said in January 2026 that U.S. applicants per open role have doubled since spring 2022. Two-thirds of recruiters say it is harder to find qualified talent. At the same time, 93% say they plan to increase AI use in 2026, and 66% plan to use more AI for pre-screening interviews. Early adopters of LinkedIn’s Hiring Assistant are already saving more than four hours per role, reviewing 62% fewer profiles, and seeing a 69% improvement in InMail acceptance rates.

That is a meaningful operational shift. It is also exactly the point where governance questions stop being theoretical.

If recruiters are using AI to pre-screen candidates, prioritize profiles, generate outreach, or route people into different parts of a funnel, then the system is no longer just drafting copy. It is participating in employment-related decisions. If employees are using an AI agent to ask about leave policy, relocation benefits, pay changes, or internal mobility, the system is no longer just a search box. It is becoming a front door to policy execution.

At that point, adoption and governance can no longer be separated.

LinkedIn’s Davos 2026 labor market release shows the same pressure from the demand side. Jobs requiring AI literacy skills in the U.S. grew 70% year over year. Over the last two years, LinkedIn says 1.3 million new AI-enabled jobs have emerged globally. In other words, organizations are not just experimenting with AI inside HR. They are rebuilding job requirements, workflow expectations, and capability models around it.

Yet the governance layer is still forming.

That is what makes today’s ownership patterns look so unstable. HR can feel the urgency first because the workforce changes show up in hiring, onboarding, learning, service, and performance. But legal, compliance, and IT often move earlier because they are the functions built to manage risk, policy, system access, and control frameworks.

The result is a familiar but dangerous split.

HR sees the use case.

Someone else defines the operating model.

This shift is not mainly political. It is architectural.

Legal and IT are moving first because the modern AI workflow touches three things that HR alone usually does not own: regulation, identity, and execution across systems.

Start with regulation.

The compliance perimeter around employment AI is no longer hypothetical. Orrick’s U.S. AI Law Tracker notes that Illinois’ AI in Employment amendments took effect January 1, 2026, making it a civil rights violation to use AI in employment decisions without notice or in discriminatory ways. On March 6, 2026, Connecticut Attorney General William Tong issued guidance explaining how existing anti-discrimination, privacy, and unfair trade practices laws apply to AI, including in employment contexts. In Europe, the EU AI Act already bans some workplace uses such as emotion evaluation, and high-risk obligations tied to systems used for recruitment, promotion, termination, task assignment, and performance evaluation begin applying from August 2, 2026 or August 2, 2027 depending on system type.

This is exactly the kind of terrain where legal and compliance teams will insist on leading.

Then comes operational strain.

Mitratech’s State of HR Compliance 2026 report says 75% of respondents believe their compliance needs have changed, 54% say those needs have increased over the last two years, and 51% rank AI and automated decision-making compliance as the top emerging compliance trend for the next 12 to 18 months. The report’s core message is blunt: organizations are scaling automation faster than their governance structures are maturing.

Now add identity and system access.

A recruiting assistant that only summarizes resumes is relatively easy to pilot. An agent that opens cases, routes requests, updates records, triggers approvals, surfaces policy, or makes recommendations across HRIS, collaboration tools, service systems, and workflow engines is different. That agent needs to know what data it can access, what actions it can take, which system is authoritative, and how its output is logged.

That is not a classic HR software question.

It is an enterprise control question.

Workday said this explicitly when it announced the Agent System of Record in February 2025. The company described a system that would onboard new AI agents, define their roles and responsibilities, track their impact, budget and forecast their costs, support compliance, and provide transparency and control for IT and business leaders. Outside allies used the same frame. Constellation Research chief executive R “Ray” Wang called the rise of enterprise agents an urgent reason to build a centralized system of record. AWS executive Swami Sivasubramanian said organizations would need a way to ethically manage these agents. Those are infrastructure arguments, not classic HR software talking points.

ServiceNow used almost the same logic in March 2026 when it argued that enterprises need deterministic outcomes from workflows, not just probabilistic model outputs, and said every action in its autonomous workflows is traceable and governed by policies embedded in the workflow layer itself.

The governance logic moves upstream from there.

The moment an AI system can act across systems, not just advise inside one screen, the conversation is no longer about user experience alone. It becomes a discussion about:

  • what the agent is allowed to do,
  • whose policy it follows,
  • how exceptions are handled,
  • where accountability sits,
  • and what evidence remains after the action is taken.

Legal and IT have decades of muscle memory for those questions.

HR often does not, at least not in a form the platform can execute.

That does not mean HR is unimportant. It means HR’s knowledge has to be translated into policy, permissions, escalation paths, and evaluation rules earlier than before. The old HR tech buying model let HR define the process and ask a vendor to support it. The new model increasingly asks whether the process is formalized enough to automate, govern, and audit in the first place.
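What “formalized enough to automate, govern, and audit” can mean in practice is easier to see as code than as prose. The sketch below is purely illustrative, assuming invented names (`AgentManifest`, `AuditLog`, `execute`) rather than any vendor’s actual API: an agent carries a manifest of allowed actions, a named policy owner, and an escalation path, and every action or refusal leaves an audit entry.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentManifest:
    agent_id: str
    allowed_actions: set     # what the agent is allowed to do
    policy_owner: str        # whose policy it follows
    escalation_queue: str    # how exceptions are handled
    accountable_role: str    # where accountability sits

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent_id: str, action: str, outcome: str) -> None:
        # The evidence that remains after the action is taken.
        self.entries.append({
            "agent": agent_id,
            "action": action,
            "outcome": outcome,
            "at": datetime.now(timezone.utc).isoformat(),
        })

def execute(manifest: AgentManifest, action: str, log: AuditLog) -> str:
    # Out-of-scope requests are escalated to a named queue,
    # never silently performed and never silently dropped.
    if action not in manifest.allowed_actions:
        log.record(manifest.agent_id, action,
                   f"escalated:{manifest.escalation_queue}")
        return "escalated"
    log.record(manifest.agent_id, action, "executed")
    return "executed"

manifest = AgentManifest(
    agent_id="hr-leave-agent",
    allowed_actions={"answer_policy_question", "open_case"},
    policy_owner="CHRO",
    escalation_queue="hr-shared-services",
    accountable_role="HR Ops Director",
)
log = AuditLog()
print(execute(manifest, "open_case", log))            # executed
print(execute(manifest, "approve_termination", log))  # escalated
print(len(log.entries))                               # 2
```

The design choice to notice is that refusal produces evidence too: the audit log records what the agent declined to do and where the exception went, which is exactly the control behavior legal and IT will ask about first.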

That changes who gets the first meeting.

It also changes the type of vendor that wins it.

The Control Plane Is Becoming the Product

This is the part many HR teams still underestimate.

The most powerful enterprise AI products are not being sold as better assistants. They are being sold as control layers for governed work.

The strongest signals in the market are coming from companies that already sit above or across the workflow.

Workday is the clearest example from the HR side. When it launched the Agent System of Record on February 11, 2025, it did not frame the product as another recruiter tool or help desk enhancement. It framed it as a secure system to manage an organization’s “entire fleet of AI agents” from Workday and third parties alike. It said the platform would define roles, track impact, support compliance, and budget agent costs. It also linked that control layer directly to role-based agents in recruiting, talent mobility, succession, policy, payroll, and financial auditing.

That product language matters. Workday is no longer just saying, “Here are AI features for HR.” It is saying, “Here is the system that governs digital labor across people and money.” That is a much bigger budget claim.

ServiceNow is making a parallel move from the workflow side. Its AI Control Tower was launched as a centralized command center to govern, manage, secure, and realize value from any AI agent, model, and workflow. In the same announcement, UKG chief product officer Suresh Vittal said the joint opportunity with ServiceNow could help mutual customers self-service 80% of their most requested HR tickets.

Then in March 2026, ServiceNow pushed the idea further with Autonomous Workforce and EmployeeWorks. It said its own autonomous workforce was already handling more than 90% of employee IT requests and that the newest AI specialist was resolving assigned IT cases 99% faster than human agents. Bhavin Shah, who leads Moveworks and AI for ServiceNow, described EmployeeWorks as an AI front door that “doesn’t just summarize, it completes the work.” More important than those headline numbers was the operating logic behind them: the system understands organizational structure, approvals, and authorization, and executes tasks across systems while maintaining governance and audit trails.

Salesforce is doing something similar from the employee service side. Its May 2025 Agentforce HR Service launch described AI capabilities embedded directly into HR Service, with employees able to manage common tasks conversationally in Slack or the employee portal. Salesforce said its own HR team used the system to manage nearly 10 million searches and resolve 96% of employee inquiries without HR intervention. President and chief people officer Nathalie Scardino called Salesforce “customer zero” for the model and said its teams were already working alongside agents daily. Again, the critical point is not the chatbot wrapper. It is that the agent is grounded in company data, knowledge articles, policies, and integrations into HRIS and HCM systems.

These moves are different in product packaging, but they point to the same market structure.

| Vendor | Control layer being sold | Workforce angle | Why the buyer changes |
| --- | --- | --- | --- |
| Workday | Agent System of Record | Recruiting, talent mobility, policy, payroll, succession | Once agents are managed like workforce assets, HR buying merges with finance, IT, and compliance oversight |
| ServiceNow | AI Control Tower plus workflow orchestration | Employee service, IT service, cross-system approvals and execution | The value sits in governed execution across systems, which pulls in CIO, shared services, and risk leaders |
| Salesforce | Agentforce platform plus HR Service | Employee self-service, case resolution, HR and IT support | HR capability is packaged inside a broader employee and enterprise service layer that already spans multiple buyers |

This is the real product shift in HR tech.

The winning layer is not necessarily the screen where the employee or recruiter interacts with AI. The winning layer is the one that decides what the AI can access, what it can do, how its actions are logged, and where escalation happens when the model is uncertain or wrong.

That is what people mean when they talk about a control plane, even if the term gets overused.

And control planes change power.

The vendor that owns the control plane does not just sell efficiency. It shapes standards. It becomes harder to replace. It accumulates more telemetry. It can bundle governance, orchestration, and execution into adjacent workflows. That is why Workday wants the system of record for agents, why ServiceNow wants the tower, and why Salesforce wants the employee interaction layer attached to its agent platform.

The control plane is becoming the product because AI has started to act, not just answer.

The Budget Is Moving With the Control Plane

Every software market eventually reveals its real buyer.

In HR tech, that buyer used to be relatively clear. Talent acquisition tools were sold to TA leaders. Learning systems were sold to L&D. Core HR was sold to HRIS and HR operations. The integration work was painful, but the budget lines were legible.

Agentic AI is blurring those lines fast.

Consider the two most common enterprise use cases where AI is already proving concrete value: hiring workflows and employee support. LinkedIn says recruiters are using AI to discover hidden-gem candidates faster, screen more efficiently, and reduce profile review time. Salesforce says HR inquiries can be resolved through employee self-service at a 96% rate. ServiceNow says governed autonomous workflows can resolve internal support issues at machine speed. Workday says agents should be onboarded, role-defined, monitored, and cost-managed like part of the workforce itself.

None of those benefits live neatly inside one function.

Hiring touches talent acquisition, compliance, identity, assessments, data privacy, and increasingly workforce planning. Employee support touches HR, IT, payroll, security, facilities, and service operations. The old budget logic breaks because the AI workflow crosses too many boundaries to be governed locally.

Budget authority is moving upstream in three distinct ways.

First, IT gets leverage because system execution, identity, and integration matter more than ever. When an agent can update records, route approvals, or retrieve policy across systems, the CIO’s organization becomes hard to bypass.

Second, legal and compliance get leverage because the cost of being wrong is rising. Employment AI is no longer just a fairness talking point. It is a live risk surface with notice, documentation, accountability, and anti-discrimination implications. If the system makes or meaningfully shapes employment-related decisions, legal will demand a seat.

Third, shared-services and employee-service leaders get leverage because AI is compressing service categories that used to be separate. The same employee who asks about leave policy may ask about laptop replacement, payroll changes, or relocation support through the same conversational front door. That naturally favors the platforms already positioned around enterprise service, not only pure-play HR tools.

The implications for independent HR tech vendors are obvious but uncomfortable.

If the differentiator stays at the level of UI convenience, drafting quality, or narrow task automation, the vendor risks getting absorbed into a larger platform layer. If the differentiator moves into governance, auditability, or workflow control, the vendor must sell into buyers who may sit outside traditional HR. Either way, the market gets harder for products that were designed around a single departmental budget.

Which is why the phrase “who pays” matters so much now.

Here is a more realistic map of the buyer than most vendor decks show:

| AI workflow | Old likely buyer | Emerging buyer mix in 2026 | What tips the decision |
| --- | --- | --- | --- |
| Candidate search, screening, interview flow | TA leader | TA plus HR ops plus legal plus IT | Whether the system affects candidate ranking, identity, notice, and audit requirements |
| Employee policy and support | HR shared services | HR plus CIO plus employee service owner | Whether the agent executes actions across systems or only answers questions |
| Internal mobility and skills matching | Talent management | CHRO plus CIO plus workforce planning plus data owners | Whether the use case depends on enterprise skills data and cross-system permissions |
| Agent governance and observability | Rarely a separate line item | CIO plus risk plus compliance plus business sponsor | Whether the company sees agents as governed digital workers rather than point features |

This is also why the “HR should just own AI” argument misses the point.

HR is not going to win by demanding sole ownership of every AI decision. SHRM’s own data suggests HR professionals do not even believe that is the right model. The report says few HR professionals think HR should lead any aspect of AI implementation on its own. The highest support is for active contribution in change management and employee adoption, not sole control.

That is sensible.

The issue is not whether HR should dominate the governance stack. It is whether HR can remain a downstream stakeholder while others encode workforce rules into upstream systems. If HR is absent when access, policy logic, escalation, and evidence requirements are defined, then the workflow will still ship. It will just reflect someone else’s priorities.

And those priorities will often default to one of three things:

  • reduce service cost,
  • reduce risk,
  • or standardize execution.

All three matter. None is enough by itself.

Workforce systems also need to reflect hiring quality, employee trust, skills development, mobility, manager burden, and the messy exceptions that make people policy different from pure ticket routing. That is the part HR must bring.

What HR Must Own Before It Loses the Workflow

The practical question is not whether HR should lead AI.

It is what HR must define, now, before the workflow hardens around someone else’s assumptions.

The answer starts with a more disciplined view of ownership.

HR should not try to own infrastructure, model security, or enterprise architecture. Those are real disciplines and other functions are better built for them. But HR does need to own the workforce logic that the system is increasingly executing. That means four things.

1. Define where employment judgment ends and automation begins

Too many HR teams still discuss AI at the level of principles. Fairness. Transparency. Human oversight. Those matter, but the platform needs operational rules, not only values statements.

HR has to answer concrete questions:

  • Which hiring or employee-service decisions can be automated?
  • Which can be recommended but not executed?
  • Which must always stay human?
  • What evidence must be stored when the system influences an employment outcome?
  • What counts as an acceptable override?

If HR does not answer these, legal will answer them in the narrowest defensible way, and product teams will encode that logic into the workflow.
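One way to make those answers concrete is to express decision rights as data an agent platform could enforce rather than as principles in a slide deck. This is a hypothetical sketch: the tier names and the decision list are invented for illustration, not drawn from any product.

```python
# Hypothetical decision-rights table. Tiers: fully automatable,
# recommend-only (agent drafts, human executes), and human-only.
DECISION_RIGHTS = {
    "schedule_interview":   "automate",        # may run end to end
    "rank_candidate_slate": "recommend_only",  # agent suggests, human acts
    "reject_candidate":     "human_only",      # always a human decision
    "change_compensation":  "human_only",
}

def may_execute(decision: str, actor: str) -> bool:
    # Unknown decisions default to the safest tier.
    tier = DECISION_RIGHTS.get(decision, "human_only")
    return tier == "automate" or actor == "human"

def may_recommend(decision: str, actor: str) -> bool:
    # On human-only decisions, agents do not even surface rankings.
    tier = DECISION_RIGHTS.get(decision, "human_only")
    return tier != "human_only" or actor == "human"

print(may_execute("schedule_interview", "agent"))      # True
print(may_execute("reject_candidate", "agent"))        # False
print(may_recommend("rank_candidate_slate", "agent"))  # True
```

The useful property of this shape is the default: anything HR has not explicitly classified falls to the most restrictive tier, so silence from HR cannot be read as permission.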

2. Turn policy into executable standards

The biggest hidden weakness in many HR functions is that policy often exists as prose, not as system logic.

A leave policy may be clear to an experienced HR business partner but ambiguous to an agent. A promotion rubric may be written in principle language but not in decision-ready criteria. A mobility policy may exist in slide decks and manager folklore but not in a format that can govern recommendations or approvals.

That gap becomes expensive in an agentic environment.

Workday’s Policy Agent, ServiceNow’s governed workflows, and Salesforce’s policy-grounded HR Service all assume the same thing: enterprise knowledge can be formalized enough to power action. If HR cannot structure policy that way, the control layer will still be built, but it will rely more heavily on legal simplification, IT process logic, or vendor defaults.
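As a purely illustrative contrast between prose policy and system logic, here is a leave-eligibility fragment expressed as decision-ready criteria. The 12-month and 1,250-hour thresholds loosely echo common U.S. leave rules but are placeholders, and the function name is invented; no real policy is this simple.

```python
def leave_eligibility(tenure_months: int, hours_last_year: int) -> dict:
    """Return an auditable decision, not just an answer.

    Illustrative only: thresholds are placeholders, not anyone's
    actual policy.
    """
    reasons = []
    if tenure_months < 12:
        reasons.append("tenure under 12 months")
    if hours_last_year < 1250:
        reasons.append("fewer than 1,250 hours worked in the last year")
    return {
        "eligible": not reasons,
        # An agent grounded in this logic can explain itself, which a
        # prose handbook paragraph cannot.
        "reasons": reasons or ["meets tenure and hours thresholds"],
    }

print(leave_eligibility(18, 1600))
# {'eligible': True, 'reasons': ['meets tenure and hours thresholds']}
print(leave_eligibility(6, 1600))
# {'eligible': False, 'reasons': ['tenure under 12 months']}
```

The point is not the code itself but what writing it forces: every ambiguity an experienced HR business partner resolves from context has to be resolved explicitly before an agent can act on the policy.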

3. Build outcome metrics that go beyond deflection and speed

This is where HR most often gets outmaneuvered.

The first AI dashboards in enterprise service environments almost always celebrate response times, ticket deflection, labor savings, and automation rates. Those are useful metrics. They are also incomplete.

A hiring agent should also be measured on quality of slate, downstream interview yield, protected-class impact, candidate confidence, and appeal rates when relevant. An employee support agent should be measured not only on case deflection but on policy accuracy, employee trust, resolution durability, escalation quality, and whether it reduces or increases manager confusion downstream.

If HR does not insist on those measures, the system will optimize for the easiest ones to count.
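A hedged sketch of what a broader scorecard could look like, with invented field names and placeholder weights rather than any real vendor schema:

```python
from dataclasses import dataclass

@dataclass
class SupportAgentScorecard:
    deflection_rate: float     # the metric dashboards already celebrate
    policy_accuracy: float     # sampled audit of answers against policy
    reopen_rate: float         # resolution durability (lower is better)
    escalation_quality: float  # share of escalations with usable context

    def balanced_score(self) -> float:
        # Equal weights are a placeholder; the point is that speed and
        # deflection are not the only terms in the objective.
        return round(
            0.25 * self.deflection_rate
            + 0.25 * self.policy_accuracy
            + 0.25 * (1 - self.reopen_rate)
            + 0.25 * self.escalation_quality,
            3,
        )

card = SupportAgentScorecard(
    deflection_rate=0.80,
    policy_accuracy=0.90,
    reopen_rate=0.10,
    escalation_quality=0.70,
)
print(card.balanced_score())  # 0.825
```

Whatever the actual weights, putting accuracy, durability, and escalation quality into the same objective as deflection is what keeps the system from optimizing only for the easiest number to count.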

4. Claim the workforce side of AI governance without trying to own all of it

This is the subtle governance move that many CHROs still have not made.

The goal is not to chair every AI committee. The goal is to make sure there is a clearly owned workforce workstream inside the governance model, with decision rights that cannot be treated as optional consultation.

That workstream should cover:

  • hiring and mobility policy,
  • employee-facing knowledge quality,
  • job architecture and skills data,
  • workforce impact assessments,
  • manager and employee adoption,
  • escalation rules for high-risk employment scenarios,
  • and ongoing review of business outcomes, not just technical performance.

SHRM’s January 2026 guidance for CHROs was directionally right on this point. CHROs should not sit back and hope the enterprise AI mandate resolves itself. They need to claim a leadership role in translating business technology into workforce operating choices.

That does not guarantee HR will own the budget.

It does make HR much harder to route around.

The Seat Is Still Open, for Now

The most dangerous mistake in this market is to think HR has already lost.

It has not.

But the window is narrower than many teams assume.

The control layers are being built now. Product packaging has already changed. Workday is telling buyers to manage digital workers like a governed workforce. ServiceNow is telling buyers that workflows, not models, are what make AI enterprise-ready. Salesforce is telling buyers that employee service can become an agentic front door grounded in company policy and integrated system actions. Regulators are moving from abstract debate to actual enforcement posture. Recruiters are scaling AI usage because applicant volume and speed pressure leave them little choice.

The systems will not wait for HR to become comfortable.

At the next steering committee meeting, the CIO will still be there. So will legal. So will compliance. They belong there. The question is whether HR shows up only to react to a nearly finished architecture or whether it arrives with a view of what the workforce workflow should optimize for, what the guardrails should be, and what evidence should remain when the system acts.

That is the real seat at the table.

Not a symbolic invitation.

The right to define what a good decision looks like before the platform defines it for you.


This article provides a deep analysis of why AI governance in HR is shifting toward legal, compliance, and IT, and what HR leaders must still define before governed agent workflows harden into the next enterprise control layer. Published April 19, 2026.