The Manager Who Had to Onboard a Nonemployee

At 9:18 a.m. on a Monday, a manager at a large services company opened a request that looked almost like a normal onboarding ticket.

The new contributor needed access to a customer knowledge base, a case-management queue, a reporting dashboard, a shared mailbox, and a collaboration channel. It needed a business owner. It needed an escalation path. It needed permission to draft replies, route cases, summarize policy, and flag exceptions. It also needed limits. No compensation records. No disciplinary notes. No unsupervised action on a sensitive employee case.

The only strange part was that the contributor was not a person.

The IT team wanted to know which systems the agent could touch. Legal wanted to know who would be accountable if it made a bad recommendation. HR wanted to know whether it belonged in the workforce plan, the skills architecture, or some new inventory no one had named yet. Finance wanted to know where the cost should sit. The business manager mostly wanted to know whether the agent would actually reduce the work or simply create another thing to supervise.

That is where the “AI agent manager” conversation becomes real.

The title itself is already in the market. Microsoft said in its 2025 Work Trend Index that 28% of managers were considering hiring AI workforce managers to lead hybrid teams of people and agents, while 32% planned to hire AI agent specialists over the next 12 to 18 months. The same report said leaders expected their teams, within five years, to redesign business processes with AI, build multi-agent systems, train agents, and manage them.

Mercer’s 2026 Global Talent Trends report pushed the idea into HR’s own operating model. It said 82% of C-suite leaders believe the future of HR lies in managing human talent and digital agents side by side. That is not a small statement. It means the agent question is no longer just an IT architecture problem. It is becoming a workforce management problem.

But the most useful conclusion is not that every company will soon post a role called AI Agent Manager.

Some will. Many will not.

The deeper change is that management itself is being rewritten. If agents can take action inside workflows, the manager’s job expands from supervising people to designing, assigning, auditing, and correcting mixed teams of humans and software. That affects job architecture, skills, performance reviews, workforce planning, compliance, and trust. It also makes a weak assumption visible: companies want the capacity of digital labor, but they still depend on human judgment to make that labor legitimate.

The next HR technology fight is therefore not only about which platform has the best agents. That argument has already started. The more durable fight is about who defines the human side of the agent operating model.

If HR does not do it, IT and legal will.

The Job Title Is Smaller Than the Operating Change

New job titles are usually the loudest but least stable sign of a market transition.

The cloud era produced cloud architects, DevOps engineers, site reliability engineers, FinOps leads, and platform teams. Some titles became durable. Others were transitional names for responsibilities that later spread across engineering, finance, security, and operations. The important change was not the wording on LinkedIn. It was that companies learned to run infrastructure differently.

The same thing is likely to happen with AI agent management.

There will be explicit jobs. Large companies will hire people to govern agent fleets, design automations, tune workflows, measure ROI, and enforce policy. Consultancies will package the work. Vendors will sell role-based tools. Internal centers of excellence will produce templates, standards, scorecards, and approval rules. For a period, “AI agent manager” will sound like a new occupation.

But the title will not contain the full shift.

Microsoft’s Work Trend Index is useful because it separates the specialist role from the broader management change. It says some leaders are considering AI workforce managers and agent specialists, but it also says 36% of leaders expect managing agents to become part of their team scope within five years. That second number matters more. If agent management becomes ordinary team work, it stops being a niche title and becomes a management capability.

That is how the work will spread.

A customer support manager will decide which cases can be resolved by an agent and which require a person. A recruiter will decide when an interview scheduling agent can contact a candidate directly and when it must wait for human review. A payroll leader will decide which variances an agent can remediate and which need approval. A sales operations manager will decide whether a forecasting agent can update a field in the CRM or only recommend an action.

In each case, the manager is not simply “using AI.” The manager is shaping a new production system.

That is why the phrase “agent boss” is catchy but incomplete. A boss assigns goals, evaluates work, coaches, corrects, escalates, and carries responsibility. An agent boss must do all of that while also understanding model limits, data permissions, workflow boundaries, cost behavior, and failure modes. The work sits between management, systems design, and risk control.

It also changes what counts as management skill.

For decades, white-collar management was built around the coordination of people. Managers translated goals into plans, allocated work, handled exceptions, assessed performance, mediated conflict, and created enough context for teams to function. Software helped track the work, but it did not usually perform the work with partial autonomy.

Agents change that balance. They can draft, search, classify, route, recommend, reconcile, and sometimes act. Once that happens, the manager’s central task is no longer only to make people productive. It is to decide where judgment should remain human, where machine execution is acceptable, and how the handoff is documented.

That is an HR problem because it changes jobs.

It is an IT problem because it changes systems.

It is a legal problem because it changes accountability.

The companies that treat it as only one of those will build brittle operating models.

Human-Agent Teams Have Moved From Demo to Workforce Plan

The reason this topic has moved from speculation to reporting is that several signals have converged.

Microsoft’s April 2025 Work Trend Index was early and explicit. It analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor-market patterns, and Microsoft 365 productivity signals. The report said 82% of leaders considered 2025 a pivotal year to rethink strategy and operations, and 81% expected agents to be moderately or extensively integrated into their AI strategy over the next 12 to 18 months. It also said 24% of leaders had already deployed AI organization-wide, while only 12% remained in pilot mode.

That matters because the language moved from experimentation to operating model.

The report introduced the idea of a human-agent ratio: how many agents are needed for which roles and tasks, and how many humans are needed to guide them. It also argued that companies will need new ways to allocate and manage intelligence resources, possibly blending HR and IT or creating new leadership structures around human and digital labor.

That is not a product feature. It is workforce planning.
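
One way to make the ratio concrete is a simple capacity calculation. The sketch below is illustrative only; the function shape and every number are assumptions for the example, not figures from the report.

```python
# Illustrative arithmetic behind a human-agent ratio.
# Every number here is an assumption for the example, not a benchmark.

def human_agent_ratio(agents: int,
                      review_hours_per_agent: float,
                      oversight_hours_per_manager: float) -> float:
    """Return humans required per agent, given weekly oversight time."""
    managers_needed = (agents * review_hours_per_agent) / oversight_hours_per_manager
    return managers_needed / agents

# A team runs 12 agents, each needing ~2 hours of sampled review a week;
# each manager can spend 10 hours a week on oversight.
ratio = human_agent_ratio(agents=12,
                          review_hours_per_agent=2.0,
                          oversight_hours_per_manager=10.0)
print(f"Humans per agent: {ratio:.2f}")  # 0.20, i.e. one manager per five agents
```

The point of even a toy version is that the ratio is driven by review time, not by how many agents a license allows.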

Mercer’s February 2026 research makes the same point from a different angle. Its Global Talent Trends 2026 report, based on nearly 12,000 executives, HR leaders, employees, and investors across 16 geographies and 16 industries, found that 98% of executives plan organizational design changes over the next two years. Sixty-five percent expect 11% to 30% of their workforce to be redeployed or reskilled because of AI in that period. Sixty-three percent of C-suite leaders said they need to move toward skills-powered talent practices.

Those numbers describe a company that cannot treat AI as a side tool.

If a meaningful share of the workforce will be redeployed, reskilled, or reorganized because AI changes work, then every agent deployment creates second-order people questions. Which tasks disappear from a role? Which tasks move to agents? Which human skills become more valuable? Which managers can redesign work without breaking trust? Which employees get access to agents first? Which teams get measured against agent-amplified output before they have the training to deliver it?

This is where HR technology gets pulled into a harder layer.

S&P Global Market Intelligence’s 2026 HR technology forecast describes talent intelligence, people analytics, and employee experience as high-growth strategic layers. It puts talent intelligence at a 17.9% compound annual growth rate, people analytics at 12.4%, and employee experience at 10.2%. Its explanation is important: HR technology is moving toward data-driven workforce decisions, predictive planning, skills intelligence, and personalized engagement.

That is the machinery companies will need if human-agent teams become normal.

You cannot manage a human-agent ratio with an org chart alone. You need to understand skills, workflows, task exposure, employee readiness, policy constraints, performance signals, and the real cost of supervision. You need to know whether an agent reduced work or moved work into review queues. You need to know whether a manager gained leverage or inherited invisible risk.

The old headcount plan did not have a column for that.

The new one will.

Workers Want Agents as Teammates, Not Managers

The labor-market signal is not only coming from executives.

Workday’s August 2025 global research landed on a sharper tension: employees are more open to working with agents than being managed by them. Workday said 75% of workers were comfortable teaming with AI agents, but only 30% were comfortable being managed by one. Eighty-two percent of organizations were expanding their use of agents, but workers wanted clear boundaries. Only 24% were comfortable with agents operating in the background without human knowledge.

That distinction matters.

The easiest mistake in agent strategy is to assume that more autonomy always means more value. In HR and workforce systems, autonomy has a social cost. Employees may accept an agent that summarizes a policy, recommends a skill, drafts a response, or helps a manager find a pattern. They react differently when the agent appears to evaluate, monitor, discipline, rank, or silently shape a decision about them.

Workday’s data also shows that trust depends on exposure and task sensitivity. Among people exploring AI agents, only 36% trusted their organization to use them responsibly. Among those further along, the number rose to 95%. Trust was highest in areas such as IT support and skills development, and lower in sensitive areas such as hiring, finance, and legal.

That creates a simple but difficult rule for HR leaders.

Agent management cannot be designed only for efficiency. It has to be designed for legitimacy.

Capgemini’s research points in the same direction. In July 2025, the Capgemini Research Institute said agentic AI could unlock up to $450 billion in economic value by 2028, yet only 2% of organizations had fully scaled deployment. It also found that confidence in fully autonomous AI agents fell from 43% to 27% in a year. Nearly all executives saw competitive advantage in scaling agents, but nearly half of organizations still lacked a strategy for implementing them.

That is the operating contradiction.

Executives see a capacity lever. Workers see a boundary question. Vendors see a product cycle. Risk teams see uncontrolled execution. Managers sit in the middle and inherit the actual work.

This is why the AI agent manager cannot be just a technical role. A purely technical agent manager can define access scopes, logs, and model routes. That is necessary. It is not enough. The human side includes consent, disclosure, feedback, escalation, work redesign, and psychological safety. If employees believe agents are being deployed around them rather than with them, adoption will be shallow even when the tool is powerful.

There is also a responsibility trap.

An April 2026 arXiv paper on AI-human teams, based on four experiments with 1,801 participants, found that people attributed more responsibility to a human decision maker when the human was paired with AI than when paired with another human. The authors call this AI-Induced Human Responsibility. Their explanation is intuitive: when AI is seen as a constrained implementer, the human becomes the default place to locate discretion and blame.

That finding should bother every manager being told to “use agents.”

If an agent makes work faster, the company may count the productivity gain. If the agent makes a morally or legally consequential mistake, the manager may still be treated as the responsible actor. The tool does not erase accountability. In some settings, it concentrates it.

That makes training, policy, and workflow design more than adoption work. They are protection for the humans who will be asked to supervise systems they did not build.

What an Agent Manager Actually Manages

The agent manager role becomes clearer if it is broken into operational responsibilities rather than job-title language.

The first responsibility is scope. Someone has to decide what the agent is allowed to do. Can it only retrieve information? Can it draft? Can it recommend? Can it update a record? Can it trigger an approval? Can it contact an employee, candidate, customer, or vendor? A vague scope turns every deployment into a hidden policy decision.

The second is context. Agents perform differently depending on the data they can access and the instructions that frame the task. Too little context makes them weak. Too much context creates privacy, security, and overreach risks. The manager does not need to be a model engineer, but they do need to understand which business context is necessary and which data should remain off limits.

The third is handoff design. Human-agent work fails most often at the boundary. The agent handles 80% of a task, then drops an exception into a queue no one owns. It drafts a recommendation without explaining uncertainty. It escalates too late. It routes a case to the wrong team. The hard work is not making the agent look smart in a demo. It is making the handoff boring, visible, and recoverable.
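
What "boring, visible, and recoverable" can mean in practice is easier to show than describe. The sketch below is one hypothetical shape for a handoff record, with a named owner and an escalation deadline; the field names are assumptions, not any product's schema.

```python
# Hypothetical agent-to-human handoff record: every exception gets a
# named owner and an escalation deadline, never an unowned queue.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Handoff:
    case_id: str
    reason: str               # why the agent stopped, e.g. "low confidence"
    agent_summary: str        # what the agent did and how certain it was
    owner: str                # a named human accountable for the exception
    escalate_after: datetime  # when an untouched handoff escalates upward

def hand_off(case_id: str, reason: str, summary: str, owner: str) -> Handoff:
    return Handoff(case_id=case_id, reason=reason, agent_summary=summary,
                   owner=owner,
                   escalate_after=datetime.now() + timedelta(hours=4))

h = hand_off("CASE-1042", "policy boundary: compensation data",
             "Drafted a reply but stopped at a pay question (confidence 0.41).",
             owner="tier2.lead")
```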

The fourth is quality control. Managers need a way to inspect agent output, sample decisions, track errors, measure drift, and compare performance across teams. If an agent saves time but quietly lowers decision quality, the manager needs to know before the quarterly review, the compliance audit, or the customer complaint.
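
Even the sampling step can be made explicit. The sketch below assumes a flat sampling rate for simplicity; a real program would oversample sensitive or novel cases.

```python
# Minimal output-sampling sketch for agent quality control.
# A flat rate is an assumption; sensitive cases would be oversampled.
import random

def sample_for_review(outputs: list, rate: float = 0.10) -> list:
    """Pick a random share of agent outputs for human inspection."""
    k = max(1, round(len(outputs) * rate))
    return random.sample(outputs, k)

weekly_outputs = [f"case-{i}" for i in range(200)]
to_review = sample_for_review(weekly_outputs, rate=0.10)  # 20 items
```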

The fifth is economics. Digital labor is not free labor. It has licensing costs, compute costs, integration costs, oversight costs, and failure costs. Gartner warned in June 2025 that more than 40% of agentic AI projects could be canceled by the end of 2027 because of costs, unclear value, or inadequate risk controls. It also said many offerings were being sold as agentic without substantial agent capabilities. That warning is not anti-agent. It is a reminder that the unit economics of agents have to be managed like any other operating system.
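
A back-of-envelope version of that unit-economics check might look like this. Every figure below is an assumption chosen for illustration, not a benchmark.

```python
# Illustrative agent unit economics: leverage versus supervision cost.
# All figures are assumptions for the example.
license_cost = 1500.0   # monthly agent licensing
compute_cost = 400.0    # monthly inference and integration run cost
hours_saved = 120.0     # human hours the agent removes per month
review_hours = 30.0     # human hours spent checking its output
hourly_cost = 45.0      # loaded hourly cost of the humans involved

gross_saving = hours_saved * hourly_cost                               # 5400.0
total_cost = license_cost + compute_cost + review_hours * hourly_cost  # 3250.0
print(f"Net monthly value: {gross_saving - total_cost:.0f}")           # 2150
# The lever is review_hours: double it and most of the net value disappears.
```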

The table below summarizes the work, adding a sixth responsibility that the previous section made unavoidable: trust.

Agent-management responsibility | Manager question | HR technology implication
--- | --- | ---
Scope | What can this agent decide, draft, update, or trigger? | Role design must include agent permissions and human approval points
Context | Which data and policies does the agent need? | Skills, job, policy, and employee data need clean governance
Handoff | When does work move from agent to human? | Workflow tools need visible exception paths and ownership
Quality | How do we know the output is good enough? | Performance systems need agent outcome metrics, not only human KPIs
Trust | Do employees know where agents are involved? | Employee experience tools need disclosure, feedback, and escalation channels
Economics | Is this agent creating leverage or new supervision cost? | Workforce planning must compare human effort, agent cost, and risk exposure
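
Read one way, the table is the minimum schema of an agent inventory entry. The dataclass below is a hypothetical sketch of such an entry, with fields mapped to the six rows above; it is not any vendor's actual data model.

```python
# Hypothetical agent inventory entry mapping to the six rows above.
# Field names are assumptions, not a vendor schema.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    business_owner: str           # the accountable human
    allowed_actions: list[str]    # Scope: draft, recommend, update, trigger
    data_scopes: list[str]        # Context: what the agent may read
    handoff_owner: str            # Handoff: who owns exceptions
    sample_rate: float            # Quality: share of outputs reviewed
    disclosed: bool               # Trust: do employees know it is involved?
    monthly_cost: float           # Economics: license + compute + review

support_agent = AgentRecord(
    name="case-summary-agent",
    business_owner="ops.manager",
    allowed_actions=["draft_summary", "recommend_routing"],
    data_scopes=["knowledge_base", "case_queue"],
    handoff_owner="tier2.lead",
    sample_rate=0.10,
    disclosed=True,
    monthly_cost=1900.0,
)
```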

This is why a company cannot solve agent management by creating one central team and telling everyone else to file requests.

A central team can define standards. It can approve vendors, create risk tiers, design control templates, and maintain the agent inventory. But the people closest to the work still have to decide whether the agent is helping. They know when a candidate question is sensitive, when a payroll exception is unusual, when a performance summary misses context, or when a customer case needs a human voice.

Agent management therefore becomes a distributed capability with a central control layer.

That pattern already exists in other domains. Security teams define policy, but every manager handles access requests. Finance sets budget rules, but teams manage spend. HR defines performance frameworks, but managers conduct reviews. Agent governance will work the same way. The center sets the rules. The edge does the real supervision.

The mistake will be pretending the edge can do that supervision without training.

HR Cannot Leave This to IT

The reason HR has to care is not that agents are fashionable.

It is that agents will change the evidence used to judge people.

ADP’s January 2026 launch of new ADP Assist agents shows how close this is getting to ordinary HR operations. ADP said the agents can think, plan, and take action with human oversight across payroll and HR functions. The company described persona-based agents for employees, managers, HR, and payroll practitioners, built on a data platform spanning 1.1 million clients, more than 140 countries and territories, and 42 million wage earners. The use cases include payroll variance audits, tax registration gaps, policy guidance, custom reports, employee-level workforce dashboards, and talent actions initiated through natural language, including promotions.

That is not abstract automation. It is AI entering the moments where workforce records become decisions.

If a manager can type “promote Jordan Smith” and an HR agent starts a guided process, the organization needs to know what evidence appears, what evidence is missing, which rules apply, and what human review is required. If an analytics agent can answer which direct reports earn less than a threshold, HR needs to know how pay equity, access control, and manager interpretation are handled. If a payroll agent suggests remediation, payroll needs human oversight, but HR also needs to understand how trust in pay accuracy is preserved.

IT can secure the system. It cannot define the employment meaning of those actions by itself.

SHRM’s 2026 State of AI in HR report shows why this gap is dangerous. Ninety-two percent of CHROs expect AI to be further integrated into the workforce this year, and 87% expect greater adoption inside HR processes. Yet SHRM’s survey of 1,908 HR professionals found that only 39% currently have AI adopted in HR functions, while another 23% have AI elsewhere in the organization. Recruiting remains the most common HR use case, at 27%, followed by HR technology, learning and development, and employee experience.

That pattern means many organizations are still using AI in narrow HR pockets while broader AI adoption runs ahead elsewhere in the business.

The human-agent team problem will not wait for HR maturity. Business units will deploy agents because they need capacity. IT will approve tools because the architecture is feasible. Vendors will package agents into suites. Managers will experiment because the work is overloaded. If HR arrives late, it will be asked to clean up roles, skills, morale, performance questions, and employee trust after the operating model has already hardened.

There is a practical way to avoid that.

HR should treat agent management as part of job architecture. Every role should eventually have an AI work profile: tasks that remain human, tasks that are agent-augmented, tasks that can be automated, required oversight skills, sensitive-data boundaries, and new performance expectations. This does not need to be perfect on day one. It does need to exist.
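
A minimal sketch of what an AI work profile might contain for a single role, assuming a plain dictionary shape; the categories come from the paragraph above, and the specific entries are invented for illustration.

```python
# Illustrative AI work profile for a single role (a recruiter).
# Categories follow the paragraph above; entries are assumptions.
recruiter_profile = {
    "role": "Recruiter",
    "human_tasks": ["final candidate assessment", "offer negotiation"],
    "agent_augmented_tasks": ["interview scheduling", "job-description drafts"],
    "automatable_tasks": ["status updates", "calendar reconciliation"],
    "oversight_skills": ["output sampling", "escalation judgment"],
    "sensitive_data_boundaries": ["compensation records", "disciplinary notes"],
    "performance_expectations": ["can explain agent-assisted decisions"],
}
```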

The AI literacy article in this series argued that companies cannot keep adding “AI literacy required” to job descriptions without building a proof system. The same logic applies here. Companies cannot keep saying managers will supervise agents without defining what good supervision looks like.

That definition should include at least five capabilities:

  • knowing when to delegate to an agent and when not to,
  • writing instructions with enough business context,
  • checking outputs without redoing all the work,
  • recognizing when an agent has crossed a policy or trust boundary,
  • and explaining agent-assisted decisions to employees, candidates, customers, or auditors.

Those are management skills.

They should show up in learning systems, manager training, promotion criteria, performance expectations, and succession planning. Otherwise agent management will become another hidden competency that some employees learn through access and trial, while others fall behind because no one made the skill explicit.

That would reproduce the same problem now visible in AI literacy: the market changes the requirement before companies change the measurement system.

The Entry-Level Job Becomes a Management Job Earlier

One of the stranger consequences of agents is that management may move downward.

Microsoft’s report makes this point directly. It says that in “Frontier Firms,” even entry-level employees can become managers from day one because they manage AI. It also says 83% of global leaders believe AI will let employees take on more complex, strategic work earlier in their careers.

That sounds optimistic. It may be true for some roles. It also creates a training problem.

Early-career work has historically been a way to build judgment through repetition. Analysts cleaned data, drafted first versions, prepared decks, summarized calls, reconciled spreadsheets, checked documents, and watched how more experienced people edited the work. Much of that labor was inefficient. It was also apprenticeship.

When agents take more of the first draft, the early-career employee may get leverage sooner, but they may also lose some of the slow exposure that built taste and judgment. A junior marketer managing campaign agents may produce more options in a week than their predecessor could have produced in a month. The question is whether they can tell which options are good, why one failed, when to ignore the model, and how to learn from the work rather than merely route it.

This is where the agent manager story connects to human skills.

Workday’s January 2025 research said human-centric skills become more vital as AI adoption rises, with many workers expecting the desire for human interaction to intensify. Microsoft’s report points to a similar split. Employees turn to AI for availability, speed, and ideas. They do not primarily turn to it because they dislike human judgment or collaboration. That suggests the future manager is not less human. The future manager has to be more deliberate about where human judgment enters the system.

The Stanford-linked arXiv paper on the future of work with AI agents, revised in February 2026, also points away from a simple automate-or-not view. The researchers gathered preferences from 1,500 domain workers and expert assessments across 844 tasks in 104 occupations. Their framework maps where workers want automation, where they prefer augmentation, and where capabilities and desires do not align. The paper’s most useful idea for HR is the Human Agency Scale: a way to think about how much human involvement workers want in different tasks.

That may become one of the missing tools in workforce planning.

Companies are good at asking whether a task can be automated. They are worse at asking whether it should be automated from the worker’s point of view, the customer’s point of view, or the organization’s trust model. Human-agent teams need both questions. Capability without acceptance creates resistance. Acceptance without capability creates disappointment. Capability plus acceptance without accountability creates risk.

Managers will live inside those tradeoffs.

That is why the first generation of agent managers may not be the people with the most technical vocabulary. They may be the people who understand work deeply enough to redesign it without lying to themselves about what has been lost. They know when the agent is doing real work and when it is producing a polished approximation. They know when a human review is meaningful and when it is a rubber stamp. They know which tasks are boring but formative. They know which exceptions should never be automated away.

That kind of judgment is difficult to buy as a job title.

It has to be developed as a management discipline.

The Product Market Is Selling Control, Not Just Labor

The vendor market has already noticed that managing agents may become as valuable as deploying them.

Workday is the clearest HR example. In February 2025, it announced an Agent System of Record to manage an organization’s fleet of AI agents from Workday and third parties in one place. The company framed the system around onboarding agents, defining roles and responsibilities, tracking impact, budgeting costs, supporting compliance, and maintaining oversight. In June 2025, Workday added an Agent Partner Network and Agent Gateway, with partners including Accenture, AWS, Google Cloud, Microsoft, PwC, Paradox, and others connecting agents to Workday’s system.

The language was revealing. Workday said organizations would need to hire, onboard, assign responsibility, and manage agent outcomes in much the same way they manage people. It also said the system would define what data agents can access, control what actions they take, and track performance.

That is a workforce-management claim.

ADP is coming from payroll and workforce data. ServiceNow and Salesforce are coming from workflow, service, and agent runtime. Microsoft is coming from productivity, identity, collaboration, and Copilot distribution. Each company will describe the category differently because each starts from a different control point. But the direction is similar: agents need identity, permissions, policies, metrics, cost controls, and human ownership.

This is why the agent manager discussion should not be separated from procurement.

When a buyer evaluates an agent product in 2026, the surface demo matters less than the management model behind it. Can the organization see who owns the agent? Can it assign a role? Can it restrict data? Can it log actions? Can it measure outcomes? Can it pause or roll back a workflow? Can it distinguish between a recommendation and a decision? Can employees know when an agent is involved? Can managers inspect work without drowning in review tasks?

Those questions will decide which agent deployments survive the first wave of hype.

Gartner’s warning about project cancellations should be read in that light. The problem is not that agents are useless. The problem is that agent projects sold as magic labor can fail once they meet integration cost, unclear ROI, weak governance, and messy workflows. A chatbot can be impressive in isolation. A digital worker inside payroll, HR, finance, recruiting, employee service, or customer operations has to survive the real organization.

That is much harder.

It also explains why HR tech buyers should be skeptical of tools that make the human role disappear from the story. If a product cannot explain what the manager does, when the manager intervenes, how the manager learns, and how the manager is protected, the product has not solved agentic work. It has pushed the hard part downstream.

The winning products will make human supervision visible without making it unbearable.

That is the design problem.

The New Management Contract

The AI agent manager may begin as a title. It will not stay only a title.

In the short term, companies will appoint specialists because they need someone to clean up the first wave. These people will write playbooks, build agent inventories, classify risks, help teams redesign workflows, and decide which pilots are worth scaling. Some will sit in IT. Some will sit in HR. Some will sit in operations, transformation, or risk. The best ones will speak all four languages.

Over time, the capability will spread into ordinary management.

A manager who cannot work with agents may start to look like a manager who could not work with spreadsheets, shared documents, or dashboards in earlier eras. Not every manager will build agents. Most will have to understand how agentic work changes delegation, evidence, accountability, coaching, and employee trust.

That creates a new management contract.

The company gets leverage. The manager gets more capacity, more data, and faster execution. The employee gets new tools, new expectations, and sometimes new anxiety. The agent gets a role inside the workflow. The missing piece is an explicit agreement about boundaries: what the agent can do, what the human must own, what gets disclosed, what gets measured, and what happens when the system is wrong.

Without that agreement, agent management will become a quiet tax on managers.

They will be asked to deliver AI productivity while absorbing AI risk. They will be told to move faster while checking more outputs. They will be responsible for employee trust in systems they did not select. They will inherit accountability for hybrid decisions that the organization still describes as “automation.”

That is not a sustainable operating model.

The better version is more demanding but more honest. HR defines the role and skills implications. IT defines architecture and access. Legal and compliance define risk boundaries. Finance tracks cost and value. Managers own the workflow reality. Employees get a voice in where human agency matters. Vendors provide controls that match the seriousness of the work.

This is why the agent manager is less interesting as a job posting than as a test of organizational maturity.

Any company can name someone the AI agent manager. Fewer can explain which decisions agents are allowed to influence, which humans remain accountable, how worker trust is protected, and how the human-agent ratio changes the business.

At the end of the Monday onboarding ticket, the manager approved only part of the agent’s access. The agent could draft case summaries and recommend routing. It could not send final replies, touch compensation records, or open a performance action. The manager added a weekly review for the first month and named two human owners for exceptions.

It was not a dramatic decision.

That was the point.

The future of work will not be built only in keynote demos or new job titles. It will be built in small permission choices, review queues, escalation rules, training sessions, and the everyday discipline of deciding where machine speed ends and human responsibility begins.


Published April 22, 2026.