FlexHRBook

AI in fintech HR: the 80% who haven't acted

7 min read

91% of CHROs cite AI as their top concern. Only 1 in 5 organizations has rebuilt processes around AI. In regulated fintech, the gap is wider — and the opportunity is sharper.

The data

Two numbers from SHRM's 2026 State of AI in HR report tell most of the story:

  • 91% of CHROs cite AI and digitization as their top concern.
  • 20% of organizations have rebuilt their work processes around AI. The other 80% have deployed AI in at least one function but haven't rebuilt anything.

The penetration is uneven across the HR surface. Recruiting is at 27%. HR technology is at 21%. Learning and development is at 17%. Employee experience is at 14%. The remaining areas — compensation, performance, org design, change management, employee relations — are at near-zero AI penetration.

In regulated fintech, those numbers skew lower. The model risk management overlay, fair-lending compliance, customer-data privacy constraints, and state-by-state employment law all create friction that delays AI deployment. A B2B SaaS company can pilot an AI screening tool in two weeks. A fintech operating in California, New York, and the EU has to navigate adverse-impact testing, candidate-disclosure requirements, and DPIA paperwork before the same pilot can run.

The result: more pressure to act, more constraints on acting, less progress to show.

What "rebuilt processes around AI" actually means

The 20% figure is doing a lot of work. Most companies that count themselves in it have done one or more of the following:

Recruiting. Replaced the inbound application review with an AI-assisted screening layer that scores against a structured rubric. The score isn't dispositive; it ranks candidates so the manager spends time on the right ten instead of the wrong fifty. Done well, this halves time-to-hire for high-volume roles. Done poorly, it codifies the past hiring biases of the company into a model that scales them.
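
Mechanically, "scores against a structured rubric" can be as simple as a weighted sum over criteria, used to rank rather than reject. A minimal sketch — the criteria, weights, and candidate data below are hypothetical, not from any specific screening tool:

```python
# Hypothetical structured rubric: each criterion gets a weight and a 0-5 rating.
# The score ranks candidates for human review; it does not auto-reject anyone.
RUBRIC = {
    "relevant_experience": 0.40,
    "domain_knowledge":    0.30,
    "writing_quality":     0.20,
    "referral_signal":     0.10,
}

def score_candidate(ratings: dict) -> float:
    """Weighted average of per-criterion ratings (each 0-5)."""
    return sum(RUBRIC[k] * ratings.get(k, 0) for k in RUBRIC)

def rank_candidates(candidates: dict) -> list:
    """Return candidate names, highest score first, for reviewer attention."""
    return sorted(candidates, key=lambda name: score_candidate(candidates[name]), reverse=True)

pool = {
    "cand_a": {"relevant_experience": 4, "domain_knowledge": 5, "writing_quality": 3},
    "cand_b": {"relevant_experience": 2, "domain_knowledge": 2, "writing_quality": 4, "referral_signal": 5},
}
print(rank_candidates(pool))  # cand_a first: 3.7 vs 2.7
```

The bias risk named above lives in the rubric and the training data behind the ratings, which is why the scores need periodic adverse-impact testing, not just deployment.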

Performance. Used LLMs to summarize 1:1 notes, manager observations, and peer feedback into a draft review document that the manager edits. Saves manager time, reduces avoidance ("I don't have time to write this"), and creates more consistent inputs into calibration. Risk: lazy use produces shallow reviews that the employee can tell were AI-generated.

Comp drift detection. Statistical models that flag comp anomalies by role, level, geo, and tenure. Surfaces pay-equity gaps before they become litigation exposure. Useful precisely because the manual version (annual audit) misses 80% of what continuous monitoring catches.
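
As a sketch of what "flag comp anomalies by peer group" means in practice, here is a minimal z-score version. The field names, 2-sigma threshold, and data are illustrative assumptions; production models typically control for tenure and use regression rather than raw peer-group statistics:

```python
from statistics import mean, stdev

def flag_comp_anomalies(employees, threshold=2.0):
    """Flag employees whose pay deviates more than `threshold` standard
    deviations from their (role, level, geo) peer group."""
    groups = {}
    for e in employees:
        groups.setdefault((e["role"], e["level"], e["geo"]), []).append(e)
    flagged = []
    for peers in groups.values():
        if len(peers) < 3:
            continue  # peer group too small to estimate spread
        pays = [e["pay"] for e in peers]
        mu, sigma = mean(pays), stdev(pays)
        if sigma == 0:
            continue  # identical pay, nothing to flag
        for e in peers:
            z = (e["pay"] - mu) / sigma
            if abs(z) > threshold:
                flagged.append((e["id"], round(z, 2)))
    return flagged

staff = [
    {"id": f"e{i}", "role": "analyst", "level": "L3", "geo": "NYC", "pay": p}
    for i, p in enumerate([100_000, 101_000, 99_000, 100_000, 101_000, 150_000])
]
print(flag_comp_anomalies(staff))  # [('e5', 2.04)]
```

Run continuously (per pay cycle, not per annual audit), this is the monitoring that catches the gaps the manual version misses.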

Onboarding. AI-assisted personalization of the 30/60/90 plan based on role, manager, and company stage. Reduces the manager's "I keep forgetting to schedule the security training" problem.

Manager coaching. Conversational AI that simulates difficult conversations (PIPs, comp conversations, terminations) so first-time managers practice before doing it live. Highest-leverage application for most companies. Almost no one is doing it.

The 80% gap isn't "we should use AI." It's "we should rebuild specific processes around AI capabilities." Most CHROs cite AI as a concern; few have specified which processes they're going to rebuild and in what order.

What the 80% are getting wrong

Three patterns we see in fintech specifically:

Pattern 1: AI-as-a-tool, not AI-as-an-architecture. A fintech buys an AI sourcing tool, plugs it in, and considers itself "AI-enabled in recruiting." The sourcing tool surfaces more candidates. The downstream pipeline (interview loops, scorecards, calibration) hasn't changed. So the bottleneck moves from sourcing to interviewing, and time-to-hire doesn't improve. The company has bought a tool without rebuilding the process. The 80% do this constantly.

Pattern 2: model risk management as a stop-sign, not a guardrail. Compliance teams in fintech (rightly) flag AI deployment as a model-risk-management issue. The CHRO sees "MRM" on the email and assumes deployment is blocked. It isn't blocked; it's gated. The companies that move on AI in HR have a documented MRM-lite process for HR-AI tools — input data review, bias testing, override logging — that lets the company deploy with controls. The companies that don't move treat MRM as a wall.
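
One of those controls, override logging, is cheap to stand up. A sketch — the schema and CSV format here are illustrative, not a prescribed MRM standard:

```python
import csv
import datetime

# MRM-lite control: log every AI recommendation next to the human decision,
# so the override rate per tool can be audited quarterly.
LOG_FIELDS = ["timestamp", "tool", "subject_id", "ai_recommendation", "human_decision", "overridden"]

def log_decision(path, tool, subject_id, ai_rec, human_decision):
    """Append one decision row; returns True if the human overrode the AI."""
    overridden = ai_rec != human_decision
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:
            writer.writeheader()  # new file: write the header once
        writer.writerow({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": tool,
            "subject_id": subject_id,
            "ai_recommendation": ai_rec,
            "human_decision": human_decision,
            "overridden": overridden,
        })
    return overridden

def override_rate(path, tool):
    """Share of logged decisions for `tool` where the human overrode the AI."""
    with open(path) as f:
        rows = [r for r in csv.DictReader(f) if r["tool"] == tool]
    return sum(r["overridden"] == "True" for r in rows) / len(rows) if rows else 0.0
```

An override rate near zero is itself a finding: either the model is excellent or nobody is actually reviewing its output.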

Pattern 3: missing the AI-policy conversation entirely. The AI use policy at the company is being written by ops, security, or marketing. The CHRO isn't in the room. So when a manager asks "can I use ChatGPT to write a perf review draft?" — there's no answer. The lack of answer becomes its own answer: managers either use AI silently and unevenly, or they don't use it at all and fall behind their peers at companies that have written the policy.

In all three patterns, the company has the AI capabilities. What's missing is the architecture that makes those capabilities operational.

What the 20% have done

The companies that make up the 20% have typically done four things:

One. Documented an AI use policy specifically for the people function. Who can use what, for what, with what guardrails. Reviewed by counsel. Posted in the handbook. Updated quarterly.

Two. Picked one high-leverage process to rebuild around AI, not five. Most often, this is recruiting (highest volume, most data). The CHRO has been deliberate about the order: pick the process where the AI capability is mature, the constraints are manageable, and the ROI is measurable.

Three. Built a feedback loop. The AI-assisted process gets measured. Comp-drift models get tuned. Recruiting screens get bias-tested. The 1:1 summary tool gets manager-rated. The companies that succeed treat AI deployment in HR as an operational practice, not a one-time deployment.

Four. Trained the managers to use it. The capability isn't useful if the manager doesn't know it exists or how to use it. The 20% have built short, hands-on training (often inside their first-time-manager curriculum) on how to use AI in their daily work.

How to move from 80% to 20%

Three steps in order. Each takes weeks, not months.

Step 1: AI use policy. Write it. Not the company-wide one — the people-function one. What managers can use AI for. What HR can use AI for. What recruiting can use AI for. What's gated and how. What's prohibited. Two pages. Counsel-reviewed. Posted internally.

Step 2: pick one process. Recruiting is the most common starting point. Performance summarization is the highest-leverage if your manager cohort is sophisticated. Comp drift detection is the highest-leverage if your company is in a pay-transparency-active state and you have the data. Pick one. Rebuild it.

Step 3: measure and tune. After ninety days, measure: time saved, error rate, manager adoption, candidate experience. Use the data to either expand the deployment to the next process or fix what isn't working in the first one. Don't deploy a second process before the first one is calibrated.
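
The ninety-day checkpoint is easier to enforce if the expand-or-fix decision is written down as a rule before the pilot starts. A sketch with illustrative thresholds and field names (pick your own numbers up front, not after the fact):

```python
from dataclasses import dataclass

@dataclass
class PilotReview:
    """Ninety-day checkpoint for one AI-assisted process.
    Thresholds below are illustrative assumptions, not benchmarks."""
    hours_saved_per_manager_week: float
    error_rate: float        # e.g. share of AI drafts needing major rework
    manager_adoption: float  # share of eligible managers using the tool
    candidate_experience_delta: float  # vs. pre-pilot baseline survey

    def expand_or_fix(self) -> str:
        # Don't deploy the next process until this one is calibrated.
        if self.manager_adoption < 0.5 or self.error_rate > 0.2:
            return "fix"
        return "expand"

print(PilotReview(2.5, 0.10, 0.70, 4.0).expand_or_fix())  # prints "expand"
```

The point is not the specific thresholds; it is that "expand" has preconditions that were agreed before anyone was invested in the tool.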

The 80% who haven't acted are mostly waiting for clarity that's never going to land. The technology is faster than the certainty. The companies that move now — with documented policies, deliberate process selection, and measurement — pull ahead within twelve months. By eighteen months out, the gap between the 20% and the 80% will be measurable in retention, hiring throughput, and pay-equity defensibility.

That's the opportunity. It's open right now. The 80% won't be the 80% in two years; the question is whether you're in the cohort that closed the gap or the cohort that fell further behind.

Want this applied to your company? Take the Manager Operating System Diagnostic — or book a call.