Podcasts
20 min
December 18, 2025

Season 1 Recap: AI, Trust, and Better Hiring Decisions

Show Notes

01:20 – Adoption starts with hiring managers (Simon Defoe)
Why assessments only work when managers see value, feel ownership, and experience them as a useful product rather than an imposed process.

04:15 – What good science looks like in a commercial setting (Darren Jaffrey)
Balancing rigour, relevance, dignity, and scalability without being blown off course by short-term pressure.

06:55 – AI, cheating, and early careers assessments (Janine Larkin)
Designing assessments so candidates don’t need to cheat, and why authenticity and potential matter more than polished answers.

10:32 – Building trust in AI (Jarret Hardie)
Why transparency, consistency, and human oversight are essential if AI is to act as a smart assistant, not a black box.

12:47 – Rethinking ROI in hiring (Kraig Payne)
Moving beyond time-to-hire and cost metrics towards productivity, outcomes, and business impact.

16:01 – Red flags in assessment implementations (Nick Brown)
Why poor change management kills adoption, and how communication and engagement make or break rollouts.

19:28 – Season close
Looking ahead to Season 2 and the conversations still to come.

Transcript

Caroline Fry (00:00)
This episode is a bit of a special one. We’re wrapping up our first season, and instead of a single guest, we’ve pulled together some of our favourite moments from the conversations we’ve had since May. We just want to say a huge thank you to everyone who came on the show.

Through the episodes, a few themes keep coming up. One is AI. Not as a magic fix, but as a tool. The people doing this well aren’t trying to automate judgement out of hiring. They’re using AI to support better decisions, not replace them. Another is how we define success. Rather than moving faster or hiring more people, our guests told us they define success as making decisions you still feel good about months later, and building processes hiring managers can actually trust. And the last one is candidate experience, as a signal of culture, priorities, and how much the organisation genuinely cares about the people going through the process.

We’re really excited to come back in the new year with more conversations like these. Honest, curious, and grounded in what hiring really looks like in practice. Thanks for listening, and enjoy the episode.

Nicola Tatham (01:20)
So going back to the very beginning of your career, and I didn’t know this about you, but before moving into talent acquisition, you worked in store where you managed a retail team. How would you say that this hands-on experience has shaped the way you think about assessments today, especially when it comes to driving adoption amongst hiring managers who perhaps don’t live and breathe TA science in the way that you might do every day?

Simon Defoe (01:45)
Yeah, reflecting on that, I think the first point I would make is that managers are very time-poor in those environments. If I go back to my time there, you’re very much thinking, “I’ve got 20 pages of delivery, I’ve got these tasks to do throughout the day.” Your mind is typically on what needs to be done by a certain point in the day, and what your team needs to do.

So when you’re taking time out to interview people, do a role play with them, or whatever it might be in store, that quite often feels like a distraction from what you really need to do that day, and it adds to your workload. I think there’s an element there around how you embed the process, make it feel slick, make it feel valuable, so managers are thinking, “Actually, I’m seeing enough of this person to know if they’re right for my role or not.”

Simon Defoe (02:48)
I think the other bit, and I wouldn’t say I’ve learned this only from those environments, but also more broadly, is that we’ve got some great teams in the business here that do this really well.

Just thinking about it as a great product. If you’ve got a great product that managers can relate to, where they can say, “I can really see the value in how I’m understanding this person, their skills, and what they’re going to bring to the role,” people are going to want to use it. Quite often in HR, we take the approach of saying, “We’ve decided this is the process and this is the standard everybody needs to follow,” and sometimes that becomes a bit of a stick approach. But if you translate that and think about it from a product perspective, if you’ve got a great product, people are going to want to use it. That’s one for me.

The other point is managers still having ownership of the process from an adoption point of view, particularly that final gatekeeping step. We might do all the slick bits upfront and say, “We’ve got three candidates, they’re all great, and you could hire any of them,” but you still need to make the decision on which one. The reason I reflect on the ownership piece is that a lot of the tech we’re seeing now can take some of that ownership away from managers.

When that happens, you feel less responsible for your hire. Whereas if you’ve made the decision, you feel a sense of ownership to say, “I’ve hired this person, and I’m responsible for making sure they get the right upskilling and the right onboarding journey,” rather than saying, “That team gave us this person, their assessment tool doesn’t work.” You end up feeling less ownership of that decision. That would be my third point.

Nicola Tatham (04:15)
What does good science look like in a commercial setting, and how do you protect that in the face of competing pressures? For example, candidate experience is one of them.

Darren Jaffrey (04:24)
I think you’ve got to set some rules for yourself and your organisation about how you want to behave. I’ve used these inside my organisation in the past, so my team will be familiar with them. I really like them because they get very quickly to the heart of the balance you need to strike between assessment science and the commercial reality of a business.

It is a trade-off. I get that, because everything is a trade-off in life, but particularly in technology you are always offsetting those two things. The three things I always ask the team to focus on are rigour with relevance, data with dignity, and science for scalability. If you can get those three things in balance, you’re well on the way to serving both constituencies properly, in terms of good science and the commercial balance you want to achieve.

Nicola Tatham (05:18)
Rigour with relevance, data with dignity, and science for scalability. All of those feel like foundational elements of what we do. Is there any tension in achieving them, particularly for your team when they’re trying to deliver that with clients? I’m thinking about things like speed of delivery. Do we have to make compromises to hit a client deadline, say when they need something ready by September?

Darren Jaffrey (05:46)
The one thing I won’t allow to happen in an organisation I’m leading is for it to be blown off course by today’s wind. If you set a course for where you want to get to, you might deviate a little bit along the way. There will be nuances and things you need to take into account, but ultimately don’t zigzag your way there. You know where you’re going. Go there.

For the decisions along the way that could distract you and blow you off course, 99 times out of 100 the answer is just no. No, I don’t want to do that, because I still want to get to here. It’s that balance when you’re looking at science and scalability, for example. There might be things that are really cool to do, but they just don’t allow the organisation to scale. They might feel like a great short-term win, but you know long term they’re going to impede the organisation’s ability to scale to where it deserves to be.

Caroline Fry (06:55)
We’re finding this with a feature we’ve fairly recently brought out called Integrity Guard, around cheating and AI tools. AI tools are completely ubiquitous now. Candidates can use them. Cheating is a contentious topic. Sometimes you think, where’s the line between being well-prepared and using AI? How can TA teams design their early careers processes to minimise cheating, or to some extent even define what cheating is? Whether it’s how questions are structured, the mix of assessments you use, or how you verify responses, what have you found works to make sure you’re getting genuine insight into a candidate?

Janine Larkin (07:36)
This was one where my team and I really had to scratch our heads a couple of years ago, because when ChatGPT and other AI tools started to rise, it felt like they came out of nowhere. We definitely saw an increase in people using them, and rightly so. Outside of work, inside work, I use them too. We can’t shy away from it. I want people who think outside the box and help themselves make their lives easier.

For me, the short answer is: make it so they don’t need to cheat. We redesigned the questions to make candidates pause and reflect a little before answering. Some candidates even said in their recorded responses, “That’s really made me think,” which I think is brilliant. We moved away from competency-based questions like, “Give me an example of a time when…”, because you could just put that into ChatGPT. We could clearly see candidates reading from another screen. It wasn’t getting the best out of people.

When we really looked at what we’re looking for in people, it came down to authenticity, which is an easy word to throw around. But when people have just graduated or are leaving school or college, it’s really all about potential. We need to understand what excites them and what’s important to them as a person.

We’ve had real success with that. It started as an idea I had where I said, “Let’s try it and see how it works.” We’re now doing it for the second year running. We’ve tweaked the questions slightly, but we now ask things like lessons-learned questions or questions about people who inspire them, so we can really dig into what candidates value and what matters to them. For me, that’s far more important and gives you a truer reflection of the person than a very scripted answer.

Caroline, you mentioned earlier the difference between being over-prepared and being prepared enough. With JCB, you can do Google searches and you’ll get the same first page every time, and we were seeing that. So we had to do something quite drastic to change it. What we tend to do now is still dig into candidates’ motivations for wanting to come and work at JCB, of course, but we do that in person, where we can have an assessor who’s worked here for some time talk about their experiences and bring that to life on a more personal level. It’s worked really well.

To summarise, it’s about being really upfront at the front end of the process about the parts where we’d recommend not using AI. We can’t be surprised by people using it, and we can’t reject them for using it, if we haven’t told them it’s not okay. In every other part of their life, they probably assume it is. I think we have a responsibility, as businesses mapping out these processes, to be fair to people. I don’t want to reject someone for showing initiative, because that will serve them well in a career at JCB.

Nicola Tatham (10:32)
So how do we build AI that people, candidates and clients, can actually trust? How do we get to that stage?

Jarret Hardie (10:40)
I think people are becoming increasingly comfortable with the idea of AI. It certainly helped when ChatGPT entered the mainstream. People are now accustomed to working with AI, and that helps build trust. I’ll start with the last thing I mentioned, which is transparency. Gone are the days when you could just say, “Trust me, the algorithm knows best.” That doesn’t work anymore. Candidates want to know what’s being measured and why. It’s a bit like showing your working when you did maths at school.

For hiring managers, the important thing is being able to drill down and see why someone was flagged, for example, as a strong fit. So no black boxes. Over time, that builds trust in the system, because you have the ability to fully examine what it’s doing.

Consistency is also a key element in building trust. Traditional interviews vary widely. We know that. Well-trained interviewers can be more consistent than most, but nevertheless there is still variation. AI in recruitment, if it’s done right, evaluates every candidate against the same criteria every time, and that builds a certain amount of fairness over time.

The other thing is giving people agency. Maybe not the best choice of words, given the conversation around agentic AI, but it’s about empowerment. Things like candidates being able to retake assessments if there was a technical issue, or hiring managers being able to override the AI.

Jarret Hardie (12:18)
AI really should feel like a smart assistant, not a dictator. One of the key ways organisations can support that is through their policies, recommendations, and guidelines, making it clear that AI is a smart assistant, not a crutch. We don’t want people to become lazy or blindly accept everything the AI does. It’s a balancing act.

Caroline Fry (12:47)
If we fast forward a couple of years, what do you think ROI and assessments will look like? What will organisations be measuring that maybe they aren’t today? Or do you think there are some core fundamentals that persist, things that are evergreen?

Kraig Payne (13:02)
I do think there are core fundamentals. Regardless of what I’ve just said, time and cost per hire and quality of hire still matter, even if that’s not always the language we use with leadership. When you boil those down, what do they relate to? Generally, efficiency and cost. As business drivers, they’re not going anywhere.

There will always be a requirement to talk about efficiency and cost, and the next one for me is productivity. We don’t talk about productivity very much in talent acquisition. It’s a hard thing to nail down from a data perspective, but productivity is king in a business context. The more we can link into productivity measures or returns, the better, because that starts to move the needle for leadership in terms of talent and how you assess and bring people through.

For example, if we can show correlations between better assessment and better hires, and those hires drive higher productivity and output, we’re no longer speaking talent language. We’re speaking business language, production language. That drives a very different conversation. It’s very tricky, given where we sit in the talent process.

To the question, I don’t think we’re losing cost, time, quality, experience, and efficiency. I think they’re wrapped into cost and efficiency. But if we can move the conversation into productivity, that’s where things get really interesting.

Nicola Tatham (14:46)
And have you seen that done well?

Kraig Payne (14:54)
No. Don’t get me wrong, there are a lot of people now heading in that direction and thinking this way. From a hiring perspective, there’s more talk about hiring on time, hiring when needed, rather than reducing time to hire.

I’ve got a bit of a bugbear with time to hire. I think it’s a vanity metric that was created years ago to help TA teams say, “We’re doing better, time to hire is going down.” But it doesn’t matter whether your time to hire is 30 days or 300 days. What matters is hiring someone when the business needs them to deliver output, whether that’s for a project, growth, or something else.

Assessment plays a part in that. The question for us is how we tell that story through our part of the process chain, so organisations can hire people when they actually need them.

Caroline Fry (16:01)
Considering your experience, let’s talk red flags. You’ve probably noticed certain things during discovery calls where your ears prick up and you think, “This project might be heading down a tricky path,” or “There are things we’ll need to navigate carefully.” Can you share what some of those red flags are?

Nick Brown (16:19)
Usually it’s when someone can’t articulate the value. Why do they want an assessment? I remember an early example when I was working in-house for an RPO. We were consulting with clients and acting as that middle layer between the vendor and the client. A recruitment manager once said to me, “We’re implementing a video interview assessment because I just want the team to be happier.”

Okay, but what does that mean? How do you define “happier”? Not losing recruiters, people not leaving, morale being better? It was very vague. That’s a red flag. Another one is a disconnect between the buying team and the delivery or operational team. It’s something you can overcome, but it’s a concern when a very isolated team is buying something in a top-down way.

Nick Brown (17:20)
If you’re talking enterprise and global, the CoE (centre of excellence) model isn’t a red flag in itself. But if you’re rolling something out globally, you really need to make sure change management is thought through. As a former recruiter, the worst thing is feeling like something is being done to you.

Nick Brown (17:45)
You don’t want to feel like something is being done to you. Suddenly being told, “You have to use this tool now,” and the first time you hear about it is on go-live day. So thinking about the comms planning around a change is really important. If that hasn’t been thought through, that’s okay. It’s an amber flag, because we can turn it into a green flag by consulting and guiding clients through that process.

But going back to the earlier point, if it’s all about pace, we need to make sure we’ve got some leeway to do the comms plan properly. Think about a roadshow, or what I used to call a movie trailer before the training. Go and demo the assessments with all of your recruiters globally. Get all the questions out. Identify any detractors who might be cynical about assessments before you get to training, because otherwise you just won’t get engagement in the training itself.

Those are a couple of key pieces. I’ve worked with plenty of clients who just want logins. “When can I get logins? When can I get logins?” But we haven’t configured anything yet. We haven’t done anything. There’s zero value in giving you a login if you don’t know how to use the tool. I’m not going to give you the keys to the car before we’ve done some driving lessons.

Those are red flags as well: clients who don’t appreciate that, even though we’re not doing a two-year Workday-style HR implementation, a two-week implementation still needs a discovery phase. You might kick some of it off in a 15-minute kickoff call, but we still need those steps. It means everyone will be happier when it’s implemented, and you’re going to be more successful.

Caroline Fry (19:28)
Thanks so much for listening this season. We’ll see you in the new year with new episodes and plenty more conversations worth having.

Key Takeaways

Hiring tools only work when hiring managers want to use them.
Assessments shouldn’t feel like an imposed HR process. When designed and positioned as a useful product that helps managers understand candidates better, they encourage ownership of hiring decisions and lead to stronger adoption and accountability.

Simon Defoe, Senior Talent Assessment Manager, Vodafone

Strong assessment science requires consistency of direction, not constant compromise.
Balancing rigour, relevance, dignity, and scalability means resisting short-term pressures that distract from long-term goals. Organisations need clarity on where they’re going and the discipline to say no to decisions that undermine sustainable impact.

Darren Jaffrey, CEO, Sova

Authentic insight comes from thoughtful design, not tighter controls.
Rather than trying to police cheating, assessments should be designed to encourage reflection and reveal motivation, values, and potential. This approach is particularly effective in early careers hiring, where authenticity matters more than polished answers.

Janine Larkin, Resourcing Manager, JCB

Trust in AI depends on transparency and human oversight.
AI earns trust when candidates and hiring managers understand what’s being measured, why decisions are made, and when they retain the ability to challenge or override recommendations. Used well, AI should support judgement, not replace it.

Jarret Hardie, CTO, Sova

The future of hiring ROI lies in productivity, not vanity metrics.
Time to hire and cost per hire still matter, but they don’t tell the full story. The real value of assessment emerges when organisations link better hiring decisions to improved performance, output, and long-term business outcomes.

Kraig Payne, Customer Success Director, Sova

Adoption fails when change feels imposed rather than understood.
Successful assessment rollouts prioritise communication, education, and engagement well before go-live. When recruiters understand the purpose and value of a tool, resistance drops and implementation outcomes improve significantly.

Nick Brown, Global Professional Services Director, Sova

What is Sova?

Sova is a talent assessment platform that provides the right tools to evaluate candidates faster, fairer and more accurately than ever.

Start your journey to faster, fairer, and more accurate hiring