Devi B October 15, 2025
Topics: AI

The Trust Gap in AI Hiring: What 6,000 Voices Reveal About the Future of Recruitment

Recruiters and candidates embrace AI in hiring. Both demand transparency. Yet they fundamentally disagree on what constitutes fraud in the recruitment process.

This paradox sits at the heart of Checkr's newly released Alignment Advantage report, which surveyed 6,000 respondents who completed the hiring process within the last 6 months. While the research set out to examine AI adoption and fraud concerns, it uncovered something more interesting: a trust gap that challenges the hiring relationship that AI promises to reshape.

On a recent episode of Talent Experience Live, host Devin Foster sat down with Ilan Frank, Chief Product Officer at Checkr, to unpack the report's findings. Their conversation reveals why transparency matters more than technological capability, and how leading organizations are building trust while some struggle with adoption.

Check out the full episode or scroll down for can't-miss insights!

The AI Adoption Gap: Why Candidates Are Already Ahead

Walk the floor at any HR technology conference and the message is clear: AI will accelerate hiring, expand talent pools, and reimagine recruitment. What's less discussed is that candidates have already gotten the memo, and they're using AI more extensively than the organizations trying to hire them.

Candidates are refining resumes, preparing for interviews, and completing take-home assignments with AI assistance. Meanwhile, more than a quarter of HR teams lack clear goals for AI implementation, creating an asymmetry where job seekers leverage tools strategically while employers adopt reactively.

The hesitation isn't simply about newness. AI's non-deterministic nature (its ability to produce varied outputs from the same input) creates unique challenges in regulated HR environments where lawsuits and compliance concerns loom large. Frank, who has navigated technology shifts from client-server through mobile and cloud computing, notes a familiar pattern: "Some departments usually get mandates later. The sales team, the R&D team, and then finally the HR."

This creates a critical gap. While organizations deliberate on governance frameworks and vendor selection, candidates are already using AI to optimize every touchpoint. "Candidates are ahead of us in their AI usage. Most of it is honest, productive usage. We shouldn't view candidates negatively for leveraging these tools," Frank explains.

Without defined objectives, HR teams struggle to:

  • Measure whether AI investments deliver actual hiring improvements

  • Justify continued budget allocation or request additional resources

  • Determine when AI should assist decisions versus when humans must lead

  • Build trust with candidates who expect transparency about AI use

"I don't think it's fair to make a decision purely [with AI]. It certainly assists workflows in the hiring process, but it needs a human element. That's how we see ourselves: enabling humans to make decisions," Frank emphasizes.

This approach addresses the core tension CHROs face. AI promises efficiency and scale, but hiring remains, at its core, a trust-building exercise between organizations and people. Technology that removes human judgment entirely risks commoditizing candidates and creating experiences that feel transactional rather than relational. The question isn't whether to adopt AI, but how to close the gap while building rather than breaking trust.

Building Transparent AI Frameworks That Work

Transparency, the practice of openly disclosing when, where, and how AI influences hiring decisions, emerged as the one area where employers and candidates aligned perfectly in the report. Yet agreement on principle doesn't translate to consistent practice.

"It's perfectly acceptable to use AI during the hiring process; just let us know when you've used it, whether you're refining your resume or completing a take-home assignment with AI assistance," Frank explains. This reframes AI from a potential source of suspicion into a tool that both parties openly use. Candidates who will be expected to use AI extensively in their roles can demonstrate proficiency during the interview itself.

Related Watch: AI vs. AI: Using Agentic Intelligence to Protect the Hiring Process

The framework distinguishes helpful AI usage from fraudulent misrepresentation: using AI to fabricate qualifications, create fake identities, or have someone else complete interviews on one's behalf. "Everyone agrees that a total deep fake of a fraudulent candidate, an AI avatar taking [interviews], is not right," Frank notes. The line isn't about whether candidates use AI, but whether they hide its use completely.

This mutual transparency model addresses the report's finding: employers and candidates disagree on what constitutes fraud. By establishing clear expectations upfront, organizations create shared understanding rather than discovering misalignment after hiring.

Balancing AI Efficiency with Human Oversight

While AI automation continues to gain traction, the report reveals that 75% of candidates prefer that the majority of their hiring interactions remain human-based. This preference reflects the personal significance of hiring decisions rather than skepticism toward new technology.

Frank sees this preference as a critical guardrail. "When AI starts to break trust instead of building it, we've gone too far. Trust fundamentally requires human-to-human interaction."

The question isn't whether to use AI, but where human judgment and connection remain essential. Frank recommends technology that expands candidate pools and reduces bias in initial screening, then ensures human interaction at critical trust-building moments. "I don't think human interaction needs to be 100% of the process, but you need to know the team that you're working with. The connection is important."

Used this way, technology doesn't replace human judgment; it expands the pool of qualified candidates who receive human consideration. This addresses one of recruitment's persistent challenges: qualified candidates eliminated by resume screening never get the opportunity to demonstrate capabilities that don't appear neatly on paper.

The Future of Hiring: From Resumes to Soft Skills

AI fundamentally changes what organizations should evaluate in candidates. As AI equalizes access to knowledge and information, the differentiators shift dramatically from what people know to how they think, collaborate, and adapt.


“AI equalizes access to knowledge. Resumes effectively capture someone's knowledge and experience, but they don't reveal soft skills — the behaviors and interpersonal qualities demonstrated in previous roles. Ironically, these are exactly what we need to prioritize in an AI-enabled world, because they're what AI can't replicate.” - Ilan Frank, Chief Product Officer, Checkr


Agentic AI that conducts conversational initial interviews can surface what Frank calls "diamonds in the rough" by asking questions that reveal soft skills rather than merely credentials. This enables a human-to-agent-to-human workflow. AI expands and screens the candidate pool through conversation, while qualified candidates move forward to human interaction for final evaluation and team fit assessment. 

The report projects that within two to three years, organizations will finally move beyond the resume as the primary screening mechanism. Applications will become conversational, allowing candidates to demonstrate capabilities that don't appear on traditional documents.

Building Trust as AI Adoption Grows

The Alignment Advantage report establishes a baseline that Checkr plans to track year-over-year, revealing trends as the industry matures. Frank anticipates that transparency will increasingly become table stakes, fraud definitions will converge as practices normalize, and the trust gap will close as both sides develop shared expectations.

Most importantly, he emphasizes that AI adoption at scale doesn't require eliminating roles. His department's 90% AI usage goal has been achieved while hiring, not cutting positions. "We're doing more with less," he notes, but the productivity gains enable growth rather than replacement.

The jobs will change. Interview scheduling will become more automated, resume screening will give way to conversational assessment, and administrative tasks will shift to AI assistants. But the core hiring decision remains fundamentally human. "I don't see a world, certainly not anytime soon, where AI makes the final hiring decision. Candidates won't want to join a company that removes human judgment from something so personal," Frank concludes.

Organizations that leverage AI to expand pools, reduce bias, and eliminate manual work while preserving human judgment at critical moments will build the trust that drives successful hiring outcomes. Those who automate for automation's sake will discover that efficiency without trust creates new problems more challenging than the ones technology promised to solve.

Take a proactive approach to interview compliance. Download the Interview Intelligence Guide to learn how leading organizations reduce bias and build trust in hiring.

Devi B

Devi is a content marketing writer who is passionate about crafting content that informs and engages. Outside of work, you'll find her watching films or listening to NFAK.

Get the latest talent experience insights delivered to your inbox.

Sign up to the Phenom email list for weekly updates!


© 2025 Phenom People, Inc. All Rights Reserved.
