Frequently Asked Questions

Quick answers to questions you may have about AI in hiring.

What AI Does in Hiring

What does AI do in a talent assessment, and what does it not do? 

AI in talent assessment analyses candidate responses, scores them against predefined criteria, evaluates performance on tasks, and generates structured outputs, such as competency scores or shortlists, for human review.

What it does not do is make the hiring decision. The role of AI is to surface evidence. The role of the human is to act on it. The most credible assessments are grounded in what candidates actually say, write, or do in response to structured, job-relevant tasks, not in inferences drawn from how they look or sound.

What is a "multi-agent" AI system, and why does it matter in hiring?

Most AI hiring tools perform a single function: a chatbot that screens, an engine that scores, a tool that schedules. A multi-agent system is different: it involves several distinct AI components, each with a defined role, working in coordination.

In assessment, this means different agents can handle different parts of a candidate's experience (an initial exchange, a task or scenario, a structured debrief), with context flowing between them. The result is more layered behavioural evidence, and a more coherent experience for the candidate.

A candidate moving through a series of coordinated scenarios yields evidence closer to how they would perform in the role. A candidate completing isolated tasks at separate funnel stages yields fragmented data points.

What's the difference between an AI interviewer, an AI scorer, and an AI simulation?

An AI interviewer conducts a structured conversation with a candidate, asking questions and capturing responses. The candidate knows they are being evaluated.

An AI scorer works behind the scenes. It processes assessment outputs (written responses, task data, spoken answers) to generate scores against a defined framework. It is not candidate-facing.

An AI simulation does something different from both. Rather than asking candidates questions about a job, it places them inside a scenario that replicates the job. The candidates interact with AI characters, handle realistic tasks, and respond to situations they would encounter in the role. The distinction is between being assessed about a job and being assessed doing it.

Bias and Fairness

Can AI assessments discriminate?

AI assessments can reflect bias, but with the right safeguards, that risk is manageable and significantly lower than the bias inherent in unstructured human hiring.

Organisations evaluating AI assessment tools should ask the assessment providers directly: has this tool been tested for bias across gender, ethnicity, age and disability? Is that testing independently verified? And what happens if a disparity is found post-deployment?

Is AI fairer than a human interviewer, or just differently biased?

Human interviews carry well-documented bias risks: affinity bias, halo effects, and inconsistent questioning across candidates. The risk with AI is different in nature: bias learned from historical data can be applied systematically, at scale.

The practical implication for organisations is that the choice is not between biased AI and unbiased humans. It is between different types of bias, and the question is which type is more visible, more auditable, and more correctable. AI bias, when monitored properly, can be identified, measured and fixed. Unconscious human bias, in an unstructured process, typically cannot.

The standard to look for in any AI assessment is independent bias auditing, transparent scoring criteria, and ongoing post-deployment monitoring, not a claim of zero bias.
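One common measurement behind this kind of post-deployment monitoring is the "four-fifths rule" of thumb: compare each group's selection rate to the highest group's rate, and review any ratio below 0.8. A minimal sketch of that check, using hypothetical applicant and shortlist counts purely for illustration:

```python
# Illustrative adverse-impact check (four-fifths rule of thumb).
# All group names and counts below are hypothetical examples.

def adverse_impact_ratio(selected, applicants):
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}  # hypothetical applicant pools
selected = {"group_a": 60, "group_b": 30}      # hypothetical shortlist counts

ratios = adverse_impact_ratio(selected, applicants)
for group, ratio in ratios.items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

Here group_a is shortlisted at 30% and group_b at 20%, giving group_b an impact ratio of 0.67, which falls below the 0.8 threshold and would trigger review. A ratio alone does not prove discrimination; it flags a disparity for investigation.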

Science and Validity

What is the difference between IO psychology-backed scoring and a purely AI-generated score?

IO psychology-backed scoring starts with the job. A qualified occupational psychologist defines the competencies that matter for a specific role, constructs the criteria against which candidates are assessed, and validates that the scoring framework predicts real-world performance. The AI applies and scales that framework; it does not generate it.

A purely AI-generated score works differently. The model learns patterns from data, often historical hiring decisions or performance outcomes, and produces a score based on those patterns. The risk is that the model may be optimising for correlations that are not genuinely job-relevant, or that reflect historical bias rather than actual capability.

The distinction matters because only one of these approaches can be meaningfully explained, defended, or audited. If a hiring manager asks why a candidate scored the way they did, IO-grounded scoring can answer that question. A black-box AI score often cannot.

Candidate Experience

What accessibility considerations should AI assessments account for?

Any AI assessment deployed at scale will be completed by candidates with a wide range of needs, and accessibility cannot be an afterthought.

The practical baseline includes: compatibility with screen readers and assistive technologies; support for speech-to-text and text-to-speech; mobile and desktop accessibility; sufficient time allowances that do not disadvantage candidates with processing differences; and clear, jargon-free instructions that do not create unnecessary cognitive load.