The GenAI Decision CHROs Can’t Defer (Anymore)

The GenAI question facing CHROs has shifted. It’s no longer ‘Should we act?’ but ‘What’s already happening while we wait to decide?’ The answer: employees are reshaping work daily—drafting communications, summarizing policies, preparing materials—well ahead of strategy or governance. 

For a while, delay felt prudent. The tools were moving fast. Legal wanted clarity. IT wanted standards. HR wanted a framework before committing. All familiar. All understandable. But that pause no longer buys safety. It buys inconsistency. 

This reality emerged clearly during a recent CHRO roundtable: those same small, everyday examples surfaced again and again as evidence that GenAI is already being used informally, well ahead of strategy or governance.


The decision you’re already making

Recent workforce data shows AI use at work is rising, but not evenly. Gallup’s latest research found that nearly half of U.S. employees have experimented with AI at work, and roughly a quarter use it weekly or more—even as many organizations remain unaware of this informal adoption (Gallup, State of the Global Workplace: AI Edition, 2025). This uneven adoption creates the coordination problem that will land on HR’s desk: different teams applying different standards to the same work, managers approving outputs they didn’t shape, employees guessing what’s acceptable. 

Consider what this looks like in practice: At a mid-sized professional services firm, project managers might use ChatGPT to draft client proposals. If disclosure practices vary by manager, clients notice—and governance questions land with the CHRO. 

That pattern is widespread. You don’t get to decide whether GenAI shows up in your organization. That decision has already been made by employees. What you do get to decide is whether HR helps shape how it shows up—or whether HR gets pulled in later to explain, contain, and correct. 


Why “we’re not ready yet” is no longer neutral

Many HR leaders still talk about GenAI as something that can be sequenced: 
readiness → pilots → governance → scale.
What’s happening on the ground looks different: 
use → uncertainty → uneven outcomes → retroactive controls & rework. 

Once use comes before shared understanding, everything else becomes harder. Governance turns restrictive. Managers hesitate. Employees either stop experimenting—or do so out of sight. This dynamic was visible in the workshop data as well: every participating organization reported some level of experimentation, but maturity varied widely, and most placed themselves in the early or emerging phase. 

Gartner has been blunt about where this breaks down. In late 2025, Gartner reported that only a small minority of HR leaders believe their people managers are currently equipped to use AI effectively in their management work, even as employee experimentation continues to grow (Gartner HR Symposium, October 2025). 

That gap—between employee behavior and managerial capability—is where most GenAI efforts stall. Not because the tools fail, but because the organization never decided how work was supposed to change. And the risk is not hypothetical. 

Gartner reports that only 14% of organizations support managers in integrating GenAI into daily tasks, while many managers are already experimenting. Without training on what to validate versus accept, managers using AI for performance reviews could generate language that violates employment law—turning what should be a scaling opportunity into weeks of damage control. 

This is usually the moment when CHROs say, “We need governance.” They’re right—but governance alone won’t fix a lack of clarity about how work itself should change.

The alternative approach: a structured pilot that builds capability. For example, HR Business Partners use AI to draft initial responses to employee questions, with clear requirements for review and disclosure. This approach generates concrete data on where AI helps, where it confuses, and what training managers need—turning a pilot into a roadmap. 


What’s actually shifting underneath all of this

GenAI doesn’t replace jobs. It reshapes parts of them. Specific tasks. Drafting steps. Analysis cycles. Decision points where judgment still matters—and points where it doesn’t.

Ethan Mollick has described this as the “jagged frontier” of AI: systems perform extremely well in some slices of work and unreliably in others (One Useful Thing, 2024–2025). Very few organizations have mapped that frontier. So employees do it themselves. Some thoughtfully. Some sloppily. Managers sign off on outputs without knowing how they were produced. HR policies fall behind lived practice.

In the workshop, this surfaced not as resistance but as urgency: participants consistently named the need for clearer use cases, manager guidance, and shared understanding of where AI helps—and where it doesn’t. At that point, the issue isn’t misuse. It’s an unspoken redesign of work.


Why this lands with HR—whether HR wants it or not

IT can standardize tools. Legal can define risk thresholds. Finance can ask for returns. But only HR sits at the intersection of:

  • How work is defined,
  • How managers lead,
  • How performance is judged,
  • How trust is sustained.

GenAI pushes on all four at once. This is why, by the end of the workshop, multiple CHROs described a shift in how they saw their role: not as policy owners or gatekeepers, but as the ones responsible for shaping how AI is actually experienced across the workforce. That doesn’t make HR “the owner of AI.” It makes HR responsible for whether AI becomes usable, legible, and safe inside real work. Most CEOs already sense this. They just don’t have the language yet. That’s where the CHRO role shifts.


The funding conversation you’re heading toward

Sooner rather than later, CHROs are being asked some version of: “What exactly are we funding here?” A credible answer doesn’t start with platforms; it starts with capability.

“We’re funding the organization’s ability to use GenAI in ways that improve decision quality, reduce rework, and protect trust—without slowing the business down.”

That framing mirrors what participants in the roundtable committed to next: team workshops, shared prompting exercises, explicit AI use goals, and performance-management pilots—not as side projects, but as part of how work gets done.

You don’t need to promise transformation. You need control with momentum.


What to avoid next

At this stage, the most damaging moves tend to look sensible:

  • Freezing progress until things are better defined
  • Letting a small set of experiments run without guidance
  • Treating governance as a static policy
  • Measuring productivity without asking what happens to the time saved. When AI creates capacity, most organizations haven’t defined what that capacity should produce. The result: employees either fill the time with more of the same work, or productivity gains evaporate entirely.

Each of these moves is incomplete because it assumes GenAI is something you roll out. It isn’t. It’s something work absorbs—or rejects—based on how clearly leaders shape it. And the risk is tangible: many organizations are discovering problems reactively, in the form of AI-generated content that requires legal review, outputs that violate policy, or tools that create more rework than value. The pattern is consistent: deployment without preparation leads to damage control, not scale.


The decision that actually matters

The GenAI decision CHROs can’t defer isn’t whether to adopt, but whether HR will actively translate GenAI into work—clearly, pragmatically, and credibly—or whether that translation happens informally and HR is asked to make it safe after the fact.

That choice—whether to actively translate GenAI into work or wait for problems to emerge—determines whether this year unfolds reactively or intentionally. The coordination problem is already here. The question is whether HR shapes it or inherits it.
