26 Feb 2026

When AI marks its own homework: The hidden risk in modern hiring

AI is now deciding who gets hired, and most organisations think this is progress. But AI isn't fixing hiring; it's exposing how flawed hiring already was.


Written by Jonathan Evans from Discovery ADR Group Limited

Artificial intelligence has fundamentally changed recruitment.

What once took weeks now takes hours. Systems can analyse thousands of applications instantly, identify patterns invisible to human recruiters, and match candidates to roles with remarkable efficiency.

In an era defined by speed, scale, and global competition for talent, this capability is transformative.

Yet embedded within this progress is a paradox few organisations have fully recognised.

Candidates are now using artificial intelligence to write their CVs. Employers are using artificial intelligence to assess them.

AI is, in effect, marking its own homework.

This moment represents one of the most profound shifts in the history of hiring, not simply because of the efficiency AI brings, but because of what it changes about how human capability is recognised.

The advantages of AI are undeniable. Research consistently shows reductions in cost-per-hire and time-to-hire when AI is deployed effectively.

Organisations can process far greater volumes of candidates without increasing recruitment resources. Tasks that once consumed enormous amounts of human time (screening CVs, scheduling interviews, responding to applicants) are now automated.

This allows recruiters to focus on higher-value activities. It enables organisations to compete in talent markets that would otherwise be unmanageable. It brings consistency and scalability to a process that has historically been constrained by human bandwidth.

In many ways, AI is doing exactly what it was designed to do: optimise.

But optimisation and understanding are not the same thing.

The rise of AI-generated CVs illustrates this distinction clearly. Candidates can now produce applications that are structurally perfect, optimised for keywords, aligned precisely to job descriptions, and designed to perform well in algorithmic screening systems.

These CVs are not necessarily dishonest. They simply reflect the candidate’s ability to use modern tools effectively.

Yet they introduce a subtle but significant distortion. AI systems are highly effective at recognising patterns they have been trained to value. When candidates use AI to present themselves in ways that align with those patterns, the system begins to reward optimisation rather than capability.

The candidate who understands how to optimise for the algorithm may appear more suitable than the candidate with greater actual potential.

The signal becomes increasingly difficult to interpret.
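The distortion described above can be made concrete with a deliberately simplified sketch: a screener that scores CVs purely by keyword overlap with the job description will rank a CV that mirrors the job advert's vocabulary above one describing genuine achievement in different words. The keyword set and CV texts below are invented for illustration, not drawn from any real screening system.

```python
# Deliberately simplified sketch of keyword-based CV screening.
# The job keywords and CV snippets below are invented for illustration.

JOB_KEYWORDS = {"stakeholder", "agile", "leadership", "delivery", "strategy"}

def keyword_score(cv_text: str) -> int:
    """Count how many job keywords appear in the CV (case-insensitive)."""
    words = set(cv_text.lower().split())
    return len(JOB_KEYWORDS & words)

# A CV optimised to mirror the job description's vocabulary...
optimised_cv = "Agile leadership of stakeholder delivery strategy workshops"
# ...versus one describing real achievement in different words.
substantive_cv = "Turned around a failing programme and rebuilt the team"

print(keyword_score(optimised_cv))    # 5 - matches every keyword
print(keyword_score(substantive_cv))  # 0 - matches none of them
```

The screener rewards vocabulary alignment, not capability: a candidate who knows which words the algorithm values will outscore one with a stronger track record described in plainer language. Real screening systems are more sophisticated, but the same optimisation pressure applies.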

At the same time, AI introduces risks that extend beyond optimisation. Research has demonstrated that automated assessment systems can disadvantage candidates with non-standard communication styles, regional accents, or disabilities. These systems rely on pattern recognition, not contextual understanding. They measure what is visible, not necessarily what is meaningful.

This reflects a fundamental limitation. Artificial intelligence can identify correlations. It cannot fully understand people.

Human capability is rarely linear. Potential does not always present itself in predictable ways. Some of the most effective leaders, innovators, and performers would not have appeared exceptional based purely on structured data early in their careers. Their potential was recognised because someone saw beyond what was immediately visible.

This act of recognition is not purely analytical. It requires judgement.

But judgement alone is not enough. It must be anchored in evidence.

The fundamental purpose of good recruitment is not to compare people to one another. It is to evaluate evidence of capability, suitability, alignment, and potential against the requirements of the role and the strategic needs of the organisation. The role exists for a reason. It exists to deliver outcomes that matter. Its importance is defined not by its title, but by its contribution to organisational success.

This evidence must be gathered deliberately and systematically throughout the process, not inferred at the end.

Yet in reality, many hiring decisions are made by individuals who have never been formally trained in how to assess capability. Hiring managers are typically experts in their functional domain, not in the discipline of hiring itself. They are asked to make one of the most consequential decisions for organisational performance without the structured frameworks required to do so effectively.

As a result, interviews often drift away from assessment and towards advocacy. Hiring managers spend significant portions of the conversation selling the organisation, selling the role, or validating their own thinking, rather than rigorously evaluating the candidate. Others gravitate towards individuals who think like them, communicate like them, or reflect their own experiences, confusing familiarity with suitability.

This is not a failure of intent. It is a consequence of unstructured evaluation.

Without a disciplined, evidence-based approach, hiring decisions become shaped by subjective comfort rather than objective capability. Candidates are compared to one another, rather than evaluated against the role itself. Confidence is mistaken for competence. Similarity is mistaken for fit.

The strongest candidate in a poorly matched group is not necessarily the right candidate.

It is simply the best of what happened to be available.

In effect, organisations risk selecting the one-eyed individual in a field of the blind.

Artificial intelligence does not solve this problem. In some cases, it can amplify it. If AI systems optimise for patterns derived from imperfect historical hiring decisions, they may reinforce the very limitations organisations are trying to overcome.

The solution is not to remove human judgement, but to strengthen it through evidence.

Every stage of the hiring process should contribute to building a clear, structured understanding of the candidate’s capability in relation to the role and its strategic purpose. Assessment should not be a single event at the end of the process. It should be a continuous accumulation of evidence. Each interaction, each evaluation, and each insight should contribute to answering a single question: to what extent does this individual demonstrate the capability, alignment, and potential required to succeed in this role within this organisation?

Artificial intelligence can support this process. It can accelerate it. It can surface relevant information and reduce administrative burden.

But it cannot replace the human responsibility to interpret that evidence.

Artificial intelligence excels at processing information. It excels at identifying patterns. It excels at optimisation. But it does not possess judgement. It does not understand organisational context. It does not recognise the difference between surface alignment and genuine capability.

The risk organisations face is not that AI will fail. It is that AI will succeed in optimising processes while gradually removing the structured, evidence-based interpretation necessary to ensure those processes remain meaningful.

Recruitment risks becoming a closed loop. Candidates optimise themselves for algorithms. Algorithms reward optimisation. Hiring managers compare candidates to one another rather than to the role itself. Decisions become faster, but not necessarily better.

This does not diminish the importance of AI. It elevates the importance of human oversight and disciplined, evidence-based hiring methodology.

The organisations that will benefit most from artificial intelligence will not be those that seek to replace human judgement, but those that strengthen it. They will use AI to enhance visibility, not define outcomes. They will ensure that decisions remain anchored in evidence of capability, suitability, alignment, and potential, not simply optimisation.

Artificial intelligence has transformed recruitment. It has made it faster, more scalable, and more efficient than ever before.

But recruitment has never been solely about efficiency.

It has always been about recognising human potential and demonstrating, through evidence, the capability required to deliver organisational success.

Artificial intelligence can support that process. It can enhance it. It can accelerate it.

But it cannot replace the uniquely human responsibility to judge wisely.

The future of recruitment will not be defined by artificial intelligence alone.

It will be defined by the quality of the judgement that surrounds it.

Are your hiring decisions driven by evidence of capability… or by optimisation?

Jonathan Evans is the founder and CEO of Discovery ADR Group, specialising in behavioural assessment, leadership capability, and evidence-based hiring. He helps organisations improve hiring decisions by aligning capability directly to organisational strategy.