I have been researching, speaking, and writing about the impact of AI in hiring for years, long before large language models entered the mainstream. AI’s deep penetration in recruitment was always likely. People spend much of their lives online, including while working (or pretending to), while firms have invested trillions in digital systems designed to capture, store, and analyze the resulting data. A technology like AI, capable of translating this ocean of data into insight, was therefore inevitable.
The opportunity to use AI to make hiring more data-driven and more meritocratic is substantial. Traditional science-based tools are often costly and cumbersome to deploy. AI changes this by enabling new digital talent signals (from virtual interviews and passive data scraping to game-based assessments and objective performance records) and by converting these signals into predictive indicators of future performance.
Although I’m as enthusiastic as ever about the potential power of AI to transform hiring, we must be realistic about what has happened so far. For all the talk about AI supercharging talent, we should start with a simpler and far less glamorous reality check. First, talent markets remain as inefficient as ever, with employers struggling to find the right person for the right job and employees remaining disenchanted with their jobs and careers. Second, thanks to AI, hiring has become a noisy, crowded arms race of automation, often more inhumane for both job seekers and hiring managers.
Advances in recruitment technologies have historically centered around new talent tools deployed by employers and recruiters. Today, innovations in AI hiring technologies are causing even bigger changes to how job seekers approach the process, with most candidates deploying AI to optimize their search, perfect their applications, and refine their interview tactics. With generative AI, applicants can produce immaculate CVs, perfectly crafted cover letters, and highly polished interview responses. They can complete assessments with algorithmic coaching and apply to hundreds of roles in a single afternoon.
The result is an ecosystem where both sides are inundated, sometimes fooled, occasionally impressed, and mostly exhausted, with a rising crisis of trust.
The Reality Behind the Hype
Over the past decade, I have argued that AI would create new behavioral signals that would replace traditional assessment tools. That is exactly what has happened. Structured AI-led interviews, textual analyses, coding evaluations, conversational chatbots, and immersive metaverse job simulations, not to mention passive scraping of internal and external candidate data, can now generate abundant behavioral data at a scale that traditional human-driven processes could never match. In theory, this ought to enable deeper insight into human talent and potential, creating a marketplace of already-assessed candidates who could be automatically matched to existing and potential jobs, both within and between organizations.
Much of this is in fact a reality, albeit not yet the norm, in some of the best known and most successful organizations today. According to a recent World Economic Forum report, 90% of employers use some form of automated or algorithmic system to prioritize, rank, or deselect candidates (even higher than the 70% figure that experts predicted last year). Chipotle uses conversational AI to speed up hiring for its restaurants by an estimated 75%. Amazon develops its own proprietary AI algorithms to scan, analyze, vet, and match internal and external candidates based on their hard skills, soft skills, and resumes (instead of relying on human recruiters for this manual, time-consuming job). Goldman Sachs and Unilever have used AI-enabled digital interviews to recruit candidates for high-volume positions, like graduate intakes. Siemens, E.On, and Walmart have experimented with game-based assessments (typically scored with AI). With the permission of job applicants, Nvidia uses AI to mine public data on candidates on LinkedIn and social media. ManpowerGroup’s recruiters rely on agentic AI to automate standardized tasks and components of its high-volume hiring, freeing up recruiters to spend more human-hours with candidates. Major job platforms like LinkedIn, Monster, and ZipRecruiter rely widely on AI-powered algorithms to recommend candidates to employers—so, by extension, any candidate and employer using these platforms is using AI, whether they know it or not.
There are myriad other examples showcasing the many applications of AI in hiring. And AI recruitment is still very much a work in progress; we are probably in the equivalent of the dial-up phase of this innovation, with the WiFi phase still over the horizon.
Even the most fanatical AI enthusiast must admit that progress to date has been incremental rather than exponential, with a number of non-trivial new problems added to what was already an ineffective, sub-optimal field.
How AI Has Negatively Impacted Hiring So Far
Here are just a few of the ways that AI has made finding a new job, or a new job candidate, more difficult:
Trust has deteriorated.
Rather than improving the identification of high performers, AI has enabled the mass production of artificially polished candidates who merely look great. Employers know this, and as a result increasingly distrust the very signals they collect. Even well-designed screening systems are being gamed: large language models tailor résumés to job descriptions, mimic professional tone, and fabricate achievements that appear legitimate. In more extreme cases, AI-generated avatars can sit through remote interviews, projecting competence while concealing the person behind the screen.
When everything can be faked at scale, organizations are forced to question not just candidates, but their own evaluation tools. The predictable response is a retreat to the familiar: face-to-face meetings, referrals, and trusted networks. This regression into what might be called “medieval hiring” undermines the potential value of AI. The irony is that technologies meant to democratize opportunity end up reinforcing the very inequities they promised to dismantle.
While speed has increased, accuracy has decreased.
AI delivers clear efficiency gains, enabling recruiters to process larger candidate pools at lower cost. This matters. Scale can improve quality, and lower barriers to entry can widen the funnel to include more diverse and non-traditional candidates. Yet there is still no convincing independent evidence that AI outperforms established, science-based assessment tools on accuracy, predictive validity, or quality of match. Nor, outside a narrow set of high-volume, low-skill, highly standardized roles, is it realistic to remove humans entirely from the hiring process. For most jobs that matter, human judgment remains essential, not as a bottleneck, but as a safeguard.
Objective outcome data remains scarce.
A central limitation of predictive AI in hiring is the absence of rigorous, objective criteria. Training models to predict who will impress interviewers or secure promotion does not mean identifying the best performers. More often, it means learning who aligns with human preferences, and humans are biased by design. AI is therefore optimized to detect political skill and the performative aspects of job performance, rather than genuine value creation. As I argue in my latest book, this is why impression management so often outperforms authenticity at work. Without better data on what employees actually deliver, produce, and sustain over time, AI will remain better at predicting visibility than performance.
Ethical issues and reputational risks loom large.
When organizations deploy AI without proper oversight, they risk augmenting pre-existing human biases rather than eliminating them. Models trained on historical hiring or promotion data will inadvertently learn patterns of inequality, rewarding candidates who look like yesterday’s workforce while penalizing those who deviate from legacy norms.
In this sense, AI does not merely automate decisions; it can automate inequality, scaling biased judgments at speed. And when these systems fail, the reputational consequences for organizations can be severe. From discriminatory filtering and wrongful rejections to privacy breaches and algorithmic opacity, the ethical missteps of poorly-governed AI systems tend to generate negative publicity, regulatory scrutiny, and public backlash, undermining the very employer brand companies hope to enhance through innovation. Far from de-risking decisions, irresponsible AI adoption introduces a new category of strategic risk.
Where AI Actually Helps Hiring Managers (When Used Properly)
Despite its many shortcomings, AI can play a constructive role in hiring when deployed with scientific rigor and clear intent. Its real value lies not in replacing human judgment, but in disciplining it. Used well, AI reduces bias, improves measurement consistency, and acts as a structured filter before subjective evaluations take over.
Enforcing structure and consistency.
AI performs best when it imposes structure where humans are prone to improvisation. It asks all candidates the same questions, avoids interviewer drift, and applies consistent scoring rules. This standardization matters because structure is one of the strongest predictors of validity in selection. AI-led interviews, when properly designed, tend to be clearer, more job-relevant, and more comparable than their human counterparts.
Evidence from a Stanford–USC field experiment illustrates this point. Candidates screened through structured, AI-led interviews evaluating both technical and soft skills subsequently performed better in human interviews (with 20% more advancing) than those filtered via traditional résumé screening. Crucially, the gain did not come from AI “finding better people,” but from enforcing discipline and relevance early in the process.
Reducing noise rather than adding intelligence.
AI’s comparative advantage is not superior insight, but lower noise. Algorithms do not get tired, distracted, or swayed by charisma, accents, or familiarity. By applying the same criteria to everyone, they create a fairer baseline for evaluation. When these systems are validated against real performance data from incumbents, they can meaningfully raise the floor of hiring quality. When they are not, they merely automate inconsistency at scale.
Freeing humans to do human work.
The biggest opportunity lies in combining AI’s efficiency with the validation standards of industrial-organizational psychology and the judgment and empathy of experienced recruiters. Automation should be used to reclaim time, not to remove humans from the loop. If algorithms handle screening and triage, people can focus on what technology still cannot do well: understanding motivation, coaching candidates, conveying culture, and helping individuals navigate complex career decisions.
The most effective model looks less like full automation and more like the real estate market. Platforms (e.g., Zillow and Idealista) shortlist options, but trusted professionals guide the final decision. Data does the heavy lifting, but humans remain accountable for judgment, context, and trust.
A Better Path Forward
To improve hiring, leaders must resist the temptation to treat AI as a cure-all. Sure, the technology is powerful, but its value depends entirely on how it is trained and deployed. At its best, AI reduces noise, enforces consistency, and boosts meritocracy. At its worst, it accelerates depersonalization, exacerbates bias, and automates poor judgment.
In most cases, the most predictive hiring protocol still comprises a well-designed job analysis, a structured interview, science-based assessments, and meaningful performance data tied to real job outcomes, interpreted by humans with sufficient expertise to understand context and interpret profiles holistically. AI can no doubt strengthen this architecture by handling what it does best: screening large applicant pools, standardizing early interviews, detecting patterns in behavioral data, and flagging risk or mismatch. What it cannot replace are the distinctly human tasks that matter most: sense-making, motivation assessment, ethical judgment, cultural interpretation, and the ability to build trust with candidates.
The goal, then, is not to remove humans from hiring, but to redeploy them more intelligently. The time AI saves should be reinvested in deeper conversations, better evaluation, and more accountable decisions. Used this way, AI does not make hiring more automated. It makes it more human.