Podcasts
32 min
November 20, 2025

Balancing Tech and Human Judgment to Build Hiring Processes Everyone Can Trust

Show Notes

00:00 – Welcome and introductions

01:16 – Icebreaker

03:36 – How frontline retail experience shapes assessment design

07:36 – The biggest red flags when AI pushes too far

10:52 – Balancing objectivity, bias, and human judgement

14:24 – Where AI tools go wrong: transparency, rigour, and “black box” scoring

17:52 – The challenge of measuring skills that evolve fast

22:45 – Candidate experience vs. process depth in high-volume hiring

26:05 – Why realistic previews matter for retention

26:50 – Science or Fiction: rapid-fire myths about hiring and assessment

31:20 – Final takeaway: great product design drives real adoption

Transcript

Caroline Fry (00:00)
Welcome to The Score, where we make sense of hiring trends, sort science from fiction, and find out what's new in talent acquisition from top experts in the field.

Nicola Tatham (00:08)
Hi, I'm Nicola Tatham. I'm the Chief IO Psychologist here at Sova. I've got two decades of experience designing fair, predictive, and science-backed talent assessments. I'm here to cut through the noise and talk about what actually works in hiring.

Caroline Fry (00:24)
And I'm Caroline Fry, Head of Product. I spend my time turning smart assessment ideas from Nic into tools that work. Scalable, inclusive, and ready for whatever AI throws at us next.

Nicola Tatham (00:34)
Each episode we're joined by a guest to unpack a big question in hiring, because talent deserves better than guesswork. And today we're joined by Simon.

Hi Simon, good to have you here. So Simon is the Senior Talent Assessment Manager at Vodafone, where he manages the recruitment of thousands of candidates each year across contact centers, retail, early careers, and tech. Simon's got over a decade of experience designing large-scale assessment programs at John Lewis, Waitrose, and now at Vodafone. So Simon's seen firsthand how hiring, skills, technology, and candidate expectations have evolved. Thank you for being with us today, Simon.

Simon Defoe (01:16)
Thanks for inviting me. I feel like I need to use that intro for myself, actually. You sell me better than I do myself.


Nicola Tatham (01:22)
You can use it any time.

Simon Defoe (01:23)
We'll keep that one. That's good. I should take a note.

Caroline Fry (01:27)
To ease you in, Simon, we will ask you at least one gentle question to start. What is the most surprising or memorable interview experience you've seen in your career, whether it made you laugh or cringe?

Simon Defoe (01:40)
I feel like I could give you half an hour worth of content just on that question. I won't include all the usual brain-teaser type questions.

One that is definitely cringe, and I'm careful not to name the organization because it's an experience I heard about: a couple of candidates rejected the job offer for a role, and their feedback as to why was that the organization's values didn't align with what they were expecting from the organization.

When the people responsible probed further, "What do you mean by that? I don't quite understand," the feedback was that the people doing the hiring at the time—it was the business area's first birthday, so it had been open for a year—thought it would be appropriate to make all of the candidates sing "Happy Birthday" to the shop as part of the process. Definitely cringe, but memorable, I suppose.

Caroline Fry (02:44)
Yeah.

Nicola Tatham (02:46)
Yeah, certainly memorable.

Caroline Fry (02:48)
Hope it hasn't marred "Happy Birthday" for them forevermore.

Simon Defoe (02:52)
I can totally get the rationale for rejecting that job offer.

Caroline Fry (02:56)
Feeding that back to the teams must have been fun as well.

Simon Defoe (02:59)
Yeah. You're just like, "What do I do with that information?" That's such a bad... At what point did you think, "This is a really appropriate thing for us to do with these candidates?" At no point? Nothing in your mind stopped you from doing that.

Nicola Tatham (03:16)
I think that's one of the winners so far. At the end of the year we'll actually have to rank-order the top 10 answers, but I think that's definitely in the top three of the ones we've heard so far. You could probably write a book on them, by the sounds of it, but we'll come back to that. It's the next career.

Simon Defoe (03:26)
That's got to be in the top three, surely. I was thinking of lots of bad ones, that was up there.

Caroline Fry (03:26)
Yeah, that's true. I'd say it's up there. Incredible.

Nicola Tatham (03:36)
Going back to the very beginning of your career, I didn't know this about you, but before moving into talent acquisition, you worked in-store where you managed retail teams. How would you say that hands-on experience has shaped the way that you think about assessments today, especially when it comes to driving adoption amongst hiring managers who perhaps don't live and breathe TA science in the way that you might every day?

Simon Defoe (03:45)
Reflecting on that, the first point I would make is that managers are very time-poor in those environments. If I go back to my time there, you're very much thinking, "I've got 20 cages of delivery, I've got these tasks to do throughout the day." Your mind is typically on the things you and your team need to get done by a certain point of the day. When you're taking time out to go, "I need to interview people, I need to do this role-play with them," or whatever it is you might be doing in-store, that quite often feels like a distraction from what you really need to do that day, and it adds to your day.

I think there's an element for me there about how do you embed the process, make it feel slick, make it feel valuable, where they're going, "Okay, actually, I'm seeing enough of this person to know if they're right for my role or not."

The other bit that I think applies, and I'd say I've learned this partly from those areas but also more broadly (we've got some great teams in the business here that do this), is thinking about it as a great product. If you've got a great product that managers can relate to, in terms of, "I can really see the value of how I'm understanding this person, their skills, what they're going to bring to the role," people are going to want to use it.

Quite often in HR, we kind of take this perception of going, "Well, we've decided this is a process and this is a standard that everybody needs to follow." We take a bit of a stick approach sometimes. But translating that more from a product perspective: if you've got a great product, people are going to want to use it, right? I think that's one for me.

Then the other bit is managers still having ownership of the process from an adoption point of view as well, particularly that final gatekeeping step. In TA, we might do all the slick bits upfront to go, "We want to get you three candidates who are all great, and you could hire any of them," but you need to make the decision on which one.

The reason I was reflecting on the ownership piece is I think a lot of the tech and a lot of what we're seeing now is almost taking some of that ownership away from managers, and then you feel less responsible for your hire. Whereas actually, I think if you've made that decision, you feel the sense of ownership to go, "Okay, I've hired this person, I'm also responsible for making sure they've got the right upskilling and they have the right onboarding journey," rather than going, "That team gave us that person because their assessment tool doesn't work." And you feel less ownership of that decision. So that'd be my third point, really.

Nicola Tatham (06:19)
Mm. Yeah. Yeah, that makes a lot of sense, thank you.

Simon Defoe (06:27)
Yeah, because I think sometimes we start flipping it. From a psychologist's point of view, we start going, "Let's talk about the validity, the reliability, the fairness piece." But actually, if I'm in that environment, that's the bread and butter we should be thinking about so other people don't have to. Their critique should be, "Yeah, I can see how this helps me identify this person for a retail store or this person for a contact center," or whatever the role is.

Nicola Tatham (06:33)
Yeah. Yeah. Yeah, so as the psychologists and as the HR teams, we don't need to be bothering them with a reliability of 0.35 or 0.3. They just need to know that this thing works and it talks to them in their language in a way that they can harness and utilize with their own recruits.

Simon Defoe (06:59)
Exactly. Exactly. Yeah. It's the same when we go into their environment, right? If we go into a retail store, we just want to know: have you got the stuff I need in stock? Can I pick it up? Is it in date? And can I get through the till quickly? All of the background stuff the shop is doing, and there's loads of it, shouldn't matter to us. I think it's the same from an assessment point of view.

Nicola Tatham (07:20)
Yeah. Yeah, it's a useful way to flip it actually, to see it from a different perspective. Thank you. I think Caroline's going to ask you the next question.

Caroline Fry (07:36)
I am. I am. Probably unsurprisingly, since it's everywhere generally, but also everywhere in hiring. You mentioned AI, and you mentioned decision-making and feeling ownership over what those teams in-store are doing, for example. We know AI is becoming pretty ubiquitous in hiring, so what's your biggest red flag when you see tech creeping too far into the decision-making process?

Simon Defoe (08:01)
I think I'm like AI'd out in terms of the amount of conversation. Biggest red flag? That's a... I probably have multiple. Probably the two that come to my mind the most:

One is that kind of lack of explainability. Particularly when we're going into high-stakes environments like talent acquisition, or talent decision-making in terms of progressions, it's the lack of explainability, the black box side of things. That's probably been one of my frustrations with the assessment industry: other parts of HR, and other industries and functions, are much quicker at adopting and implementing AI technology. But obviously, sometimes that's because it's less high-stakes. You've got really great coaching practice tools with AI simulations (although they can be risky as well) that aren't making decisions about people, or simulations of contact-center-type conversations and things like that. So that lack of explainability is definitely one. And you see it quite a lot where companies are going, "Yeah, we can definitely do that from a talent acquisition perspective," and you're going, "I'm not quite sure how the scoring's going to work. You're not really explaining that particularly well."

The second one, and I'd be interested to hear your reflections on this, probably predates AI a little. Maybe it's an occupational psychologist thing that we do: almost outsourcing that manager judgment to an algorithm or a machine. Us going, "Well, they've got score X on this assessment or that interview," almost saying, "We have to distill that person down to a score to make a fair decision," which I totally get and agree with in principle. But there's so much more to it than that, which we kind of disregard: "How do you think that person will do in your team? What skills will they bring? What would a conflict look like? How will you work with them?" All of that may be harder to turn into a number; not impossible, but harder. We've tried to say, "Don't think about those things, because that's bias, and you can't turn it into a score." But actually, I think there's something in going, "Acknowledge where those feelings and thoughts are coming from, and take that into consideration as well."

Nicola Tatham (10:08)
And I guess it's just trying to create some objectivity around that, isn't it? It's trying to avoid those unconscious biases that slip in. "I think they'll do well because they grew up in the same area that I grew up in," as opposed to, "I think they'll fit in well because I've seen their Team Stars report, for example, and I can see the elements that they're more likely to bring to this retail unit will fit really well with the way that the team operates."

Nicola Tatham (10:52)
So it's sort of combining that more objective data with the line manager thought processes, but in a way that is always fair and objective and not biased, which is really difficult. It is really difficult. We're all human.

Simon Defoe (11:05)
Yeah, exactly. And just really acknowledging where that comes from. Sales roles are always a good example, because for a sales role, likability is such an important thing: building rapport really quickly, being likable, building that trust. I know we can say, "Did they build rapport quickly? Did they build trust?" and score it. But that's a really subjective thing, because me, you, and Caroline could all sit on an interview panel interviewing someone, and I'm like, "Yeah, I really liked them, they were great. I could see them selling loads of stuff." And Caroline might go, "They were terrible. I don't know who you were listening to, Simon, but they really grated on me." Although we might have both turned it into a data point, is it objective?

Nicola Tatham (11:45)
Yeah, absolutely fair enough. I guess... sorry Caroline, go on.

Caroline Fry (11:48)
Sorry Nic, I was just gonna say, yeah, there's a lot that you touched on there, Simon, and I agree. There's the scalability issue that technology has created, and that AI has then exacerbated. If the idea was that every candidate would have human interaction and be humanly assessed, even at the scale that I know you guys work to when you have your requisitions going through (and there are individual issues with bias there as well), then maybe there's a future world in which AI can really get down to processing all of those softer skills: the likability, the culture fit. But working with it right now, the way that we do, there's quite a way to go, from what we've seen, what we've experienced, and what we're trying to build.

I'd also say, going back to something you said right at the start about us all being a bit exhausted with AI and the lack of transparency: I've seen a lot of AI-first HR tech startups coming through that don't have the discipline that you guys as Occ Psychs bring to assessment. And that's where it falls down, because you still have to have both of those things. And your comment about the assessment industry, assessment in HR tech specifically, being slower to adopt: I agree. It's precisely because you're making decisions about people, and the Occ Psychs in the industry are completely committed to doing it right. Which means it takes longer. You can't have a quick win with AI. It has to go through all the same checks and balances. AI isn't a shortcut; it has to operate on all the same principles that I know Nic applies to everything we put out as content.

I think there's still so much more that AI could do, but we're at a point where everyone's doing everything, and some of those things are the right things and some are not. As a vendor, it's our responsibility to make it clear that we've done the work to build trustworthy assessments that use AI. And that's not happening everywhere. So I wondered if you've seen a tool or approach, you don't have to name any names, that's made you think, "No, that's not what good hiring looks like."

Simon Defoe (14:04)
I've seen a lot of those. I think what I was reflecting on actually, as you were talking, was what I found fascinating probably in the past 18 months to two years compared to previously, is I think we're all figuring out what it means for our industry and our areas at the same time. Actually, having conversations with you and your teams and others, we are kind of figuring out: How does assessment work in the world of AI? What's the right approach to your point around Nic's work around fairness and how can we be innovative whilst also doing the right thing? But that's one of the opportunities, right? It's a risk, but it's also a real great opportunity for the industry as well.

As you're asking about something, have I seen that? Yeah, maybe triggered me. Let's say I think that there's a lot of what I've seen generally in the industry going, "Hey, you can use that for talent acquisition," but to the point around how would the scoring, how is the fairness check... that's just clearly not there in a lot of suppliers. Which I think we'll just start to see being used more with teams that don't understand the rigor behind it. So unfortunately, I think we will see that. Something that I went through a while back... well, I suppose one of the bits I love about my role is you always have people reaching out and, "Hey, can we do a product demo for you? Do you want to try this tool?" And I'm like, "Yeah, great." Cause I'm inherently really curious to see what's out there and what people are doing... it was just a standard psychometric but used from, I would say, the design process was probably using more of a clinical approach, let's say from a psychologist perspective. And it was very much questions along the lines of: "How many siblings did you have? What was the relationship with your siblings? What was the relationship with your parents?" And very much the personal childhood stuff—you're analyzing. Depending on what's going on in my family at that moment in time or previously, I was like, "I'm feeling quite triggered by this assessment just from completing it." I have no idea now, I can't remember if it might have been able to predict things. I don't know if it could or couldn't; there's probably a logic to... So, can some of those things predict how you're going to behave in the future? And then the report output was beautiful. It's a beautiful report. It looked great. I was like, "If you just gave me the report, this looks wonderful." But actually, the process of getting there... But I suppose that's the reason I raised that one, because I was like, "Actually, there were certain bits." I was like, "Oh yeah, if we gave this to our candidates, this is a really pretty report, looks really good, looks really on brand, but the whole approach to getting there I found really triggering." And a few of my colleagues that did the same tool said exactly the same thing. They were like, "I found this really intrusive as to what it was collecting."


Nicola Tatham (17:04)
Yeah, I thought we'd moved on from that style of question decades ago. Some of the original personality tests (none of the specific questions are springing to mind right now) were very personal, not necessarily work-related, and it felt like we'd moved on from that. But it'll be interesting to see where that goes. So thank you. I'm going to shift gear a little bit now. Everyone's talking about AI. Everyone's also talking about skills.


Nicola Tatham (17:52)
And as the pace of change in the world of work is so quick, we're seeing more and more data showing that the shelf life of a skill keeps shrinking. So if, let's say, every two years our definition or our library of key skills becomes outdated, how do you future-proof what you measure?


Simon Defoe (17:52)
Hmm. How do we future-proof the skills that we measure? That's an interesting one, because particularly from a hard skills perspective, I don't even think they're changing over years like they were a few years ago; it's every few months that what you actually mean by a skill shifts. You hear it when you think about your OpenAIs and Anthropics: when they talk about racing to get something out, they're like, "We need to be a month before them, otherwise we're too late." So the pace of things is moving along a lot more quickly than it used to. I think we still have to measure skills, though. If I'm reflecting, not just for our organization but more broadly, I always think there are three lenses to look at things through.

There's the behaviors: whatever your organization's behaviors are that are critical to success. There are the skills relevant to the role specifically, which might be hard or soft skills; where someone is in their career defines what level of proficiency you want to measure them at. And then the final bit for us is looking at the person's potential or their learning agility, however you want to define it: the ability to pick things up quickly, learn, and apply in different scenarios. I don't buy in... I feel like I've just got a critique of the assessment industry now.

I don't necessarily buy into what you hear a lot: "Don't assess skills; assess their potential, their curiosity, their willingness to learn," whatever different companies in the industry call them. Yes, I agree with that to some extent; it might be the right approach with graduates or early careers people, or with people re-skilling or switching roles mid-career. But fundamentally, I think that's maybe something we've done as psychologists because measuring skills feels too difficult. Yeah.

Nicola Tatham (20:10)
Yeah, I think our ethos, having spent time thinking about this, is that they don't need to be mutually exclusive. I know every minute counts when it comes to assessments and candidate experience, but there's a way of asking: can you deliver what we need now, in terms of the soft skills we need you to bring to the role? Then, if there are some hard skills that are more specific, not necessarily from our library but from some of the hard skills test providers, we can look at that. And alongside that sits what you've just described, particularly in early careers roles: you want the person who's willing to roll their sleeves up, learn something new, move into a different team, pick up some new software and run with it. You can't test everything across everybody. But if you can test "Can you function well in the job right now?" and also find the people with the potential to learn, I think there's a holistic way to achieve that. I don't think they need to be mutually exclusive.

Simon Defoe (21:21)
Yeah, absolutely. And I obviously put the telco hat on where half of our employees are in digital and IT and those types of roles where there are quite a lot of hard skills for the different functions within that.

Nicola Tatham (21:28)
Yeah, absolutely. But I guess there are also lots of those hard skills that probably change. The technologies that you use are regularly changing as well. So you're looking for people that have got that now to go and learn those new ones.

Simon Defoe (21:42)
Yeah. And I would say also, the last few years from my perspective and my role, it's become a little bit easier. Assessment providers at least are becoming a bit more like, "Okay, cool, I understand that actually this is our niche in terms of what we assess," and we might be really focused on skills, or we might be really focused on the behavioral sides and the potential side of things. And I think actually, as an industry, that helps us create better products and better services for our customers, in terms of managers, recruiters, and candidates as well.

Nicola Tatham (22:19)
Yeah, having those options, absolutely.

Caroline Fry (22:21)
Okay, I've got one more serious question for you, Simon. So picking up on something that I think you both touched on there around candidate experience and the length of candidate experience: You lead high-volume hiring, where every extra minute in the process can add up fast. So how do you provide a great candidate experience without making the process shallow?

Simon Defoe (22:45)
Good question.

Caroline Fry (22:45)
Eternal question.


Simon Defoe (22:47)
So I think a couple of things. I think typically, what does the hiring process take for a typical frontline role where maybe it's fairly standardized? It might take a week if it's a company that's super, super efficient. Maybe up to a month as an end-to-end journey. So my mind kind of goes a little bit to: is an extra 20 minutes or 30 minutes in the process really that painful as a step?

I actually think where the really slick process matters is when we've got poor integrations and poor setups, in terms of how these things get configured and how they get triggered. Caroline, your team have probably heard enough of me, and are probably sick of me going on about, "That's four clicks for recruiters to do that. It needs to be one click." Because four clicks means they're not going to set it up, and they're going to complain to me about the process.

Caroline Fry (23:37)
No, not at all. That click mantra follows through every piece of work we do as well. Because yeah, at scale, it adds up, right? You can't underestimate that.

Simon Defoe (23:55)
Exactly. Exactly that. If you're having to run training demos on how to set something up, it'd be like Amazon saying, "We'll give you some training on how to find the buy button when you find an item." It's a terrible process. So that's one bit, I think, in terms of your tech stack: how can you make it super efficient?

Caroline Fry (24:13)
Yeah, it's not the specifics of the assessment time. It's all of the movement transitions between those, the communication that happens, any automation that you can add in. Yeah, makes a lot of sense.

Simon Defoe (24:28)
Exactly. And I think quite often we conflate those two things. We go, "Our assessment takes 20 minutes and therefore the interview process is long," or, "You've given me a structured guide, and actually I used to just have a conversation with them." So I think there are those bits about slickness of the process and myth-busting.


Simon Defoe (24:57)
I've also found, and this is probably something I've learned over the last few years, that recruitment and assessment journeys are as much a branding exercise, bringing the company and the role to life, as they are an assessment exercise in deciding who we're going to hire. And you have to take the time to do that, right? That means showing: "This is typically what the day looks like. This is what our culture is about and what you're going to experience." If you can do that through the process, it allows people to select out if they're like, "Actually, this really isn't for me," now that you've explained the role. If we were only worried about speed and efficiency, we wouldn't do those things, and all you end up with is someone a month or three months into the role going, "Oh, I'm going to leave, because I didn't know that's what the role was going to be like."

Caroline Fry (25:38)
It's actually inefficient in the end.


Simon Defoe (25:52)
And we found that particularly with our frontline contact center and retail roles, really bringing those roles to life and the types of interactions you're going to have is just super, super beneficial for both candidates and for managers.

Nicola Tatham (26:05)
Yeah, when you compare it to how people used to go into the store armed with their CV and have an interview, they would literally live the experience and know what it felt like to work in that store. We don't have that sort of 'luxury' nowadays. So it's about trying to bring it to life for candidates so that they know what they're getting themselves into; basically, that it's going to be right for them.

Nicola Tatham (26:29)
Yeah. We've given you the grilling that we were going to give you now. So now we're going to move on to a little game that we call Science or Fiction. So what we do is we're going to throw a few statements your way. Some are grounded in science and others not so much. So your job is to tell us which is which from your perspective and just to help set the record straight on some of these statements. So I'll start with the first one, which is: "The best predictor of future performance is past performance." Is that science or fiction in your view?


Simon Defoe (26:56)
I would say fiction. I think typically we'd be saying reasoning ability is a better predictor, even though I've definitely heard that loads of times, and I've almost definitely said it to people as well. But we'll ignore that.

Nicola Tatham (26:58)
Fiction. Yes. More to it than that, absolutely.

Caroline Fry (27:11)
Okay, ready for the next one? "Candidates trust AI decisions more than human ones, as long as they're explained clearly."

Simon Defoe (27:17)
I think the "as long as it's explained clearly" makes it true. Otherwise I would have said false.

Nicola Tatham (27:23)
It's probably still emerging as a research area, isn't it? It'll be interesting to see where we get to on that. The next one is: "Making assessments shorter always improves candidate experience."

Simon Defoe (27:35)
Definitely fiction.

Caroline Fry (27:36)
We did just spend some time discussing that, right? I think we could have guessed that one. Okay, next one for me: "Soft skills are becoming more important than technical ones."

Nicola Tatham (27:36)
I thought you were going to say that.


Simon Defoe (27:38)
Yeah. I would say I don't know if there's any research to back that up, but my view would be true. Science.


Nicola Tatham (27:51)
Mm-hmm. Yeah. Which is kind of aligned with what we were talking about earlier.


Simon Defoe (27:55)
Do you actually know the answers to these, by the way? Have you got like...

Caroline Fry (27:58)
I think they're somewhat subjective, frankly.

Simon Defoe (28:00)
Yeah, I was thinking, depends what journal or whatever paper you read.

Caroline Fry (28:06)
You've got every single one right so far, Simon, of course.

Simon Defoe (28:09)
Great. I think the last one, on the soft skills... yeah, we're just talking about it more and more. As I was saying, you're maybe using AI to do some of that harder stuff, but how you interact, how you influence, how you learn and stay curious: that feels more important.

Nicola Tatham (28:10)
Yeah. I think that's the ethos behind this really. And we've talked about this on the podcast before as well: the technology is doing a lot of that heavy-lifting work for us. Being able to do a calculation, being detail-conscious; a lot of that work is done for us now. So it really is about lifting up to, like you've just said, the influencing skills. Do you get on with people? Can you take people with you? Do you listen to others? Can you demonstrate empathy? Those are the things coming to the fore, because the robots are doing a lot of the other stuff for us, so it's the human capabilities that really matter.

Simon Defoe (29:05)
That's just made me realise how much I appreciate ChatGPT for making sure I don't have to proofread my own work. When you talk about the robots doing that stuff... yeah, I'd forgotten about the terrible days of proofreading my own work.

Nicola Tatham (29:10)
Everything you do. Do you remember those days back in the olden days?

Caroline Fry (29:17)
You have to keep remembering to thank the robots though for when they take over. I'm always careful to do that.

Nicola Tatham (29:17)
Yeah. Yeah. We'll sort the Christmas cards out from us all. I think the final one is: "Hiring managers don't need to understand assessment science to use it well."

Simon Defoe (29:21)
Yeah. Yeah. I think true, I would say. They have to understand the process and understand the approach. But if I started explaining, I don't know, Cronbach's alpha to a hiring manager, they'd be like, "Please go away." Yeah.

Nicola Tatham (29:47)
Yeah, yeah, we sort of dipped into that earlier. It's your job to make sure it has a good Cronbach's alpha, so that they know they can just use it. And I guess it's not solely about understanding assessment science per se; it's understanding the logic for using the assessment and why it's important. It's that buy-in, I guess. And when it gets to things like those final interviews, it's understanding the concepts of unconscious bias and why to use behaviorally anchored rating scales and all that fun stuff. So yes, there's an element of the science at the top end of it, as opposed to a deep dive into the heavy psychometrics and the like.
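
For anyone curious about the statistic Simon and Nicola keep mentioning: Cronbach's alpha is a standard internal-consistency measure, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores) for a k-item assessment. Below is a minimal Python sketch of that textbook formula, with made-up scores purely for illustration; nothing here comes from the episode or from Sova's platform.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of item scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: five candidates answering a four-item scale (1-5 ratings)
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # by convention, roughly 0.7+ is considered acceptable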

Simon Defoe (30:24)
Yeah, it goes back to that product design. It's just built in by design of the process, so actually they don't need to. Exactly. It's the same thing: I can pick it up, look at it, and go, "Cool, this makes sense. I understand the logic of it and the value of it without being an expert."

Nicola Tatham (30:27)
Yeah. Yeah. I don't need to know how an iPhone works to use it. Or like a firework: you don't need to know how it works, but you do need to understand the rules on how you deploy it so that you're using it safely. Well, thank you, Simon. This has been really insightful. Thank you for your time and for the expertise you've brought to the podcast. If there's one thing you'd like people to remember from this episode, what would it be?

Simon Defoe (30:50)
I feel like I've mentioned product design loads of times, so I'll go with using that as a basis for anything, decision-wise, that you're working on. If you're making a great product that people want to use, you don't need to worry too much about adoption or forcing it, because people will want to use it.

Caroline Fry (31:32)
Thanks for hanging out with us on The Score. If you enjoyed this deep dive into candidate experience, fairness, and making assessment rollouts actually work, don't miss what's coming next. New episodes drop every two weeks on YouTube, Spotify, or wherever you get your talent assessment insights. Thank you, Simon. That was great.

Nicola Tatham (31:48)
Thank you, thanks Simon.

Simon Defoe (31:49)
Thank you both.

Nicola Tatham (31:49)
That was really good.

Key Takeaways

Getting hiring managers to adopt new assessment tools is often a struggle. They frequently view the process as a bottleneck rather than a benefit. Simon Defoe, Senior Talent Assessment Manager at Vodafone, argues that the problem usually isn't the managers. It is the design of the process itself. He joined us to explain why we need to stop treating assessments like a compliance exercise and start treating them like a product.

Treat Your Assessment Process Like a Product

We often try to force hiring managers to follow a process by focusing on compliance or explaining the complex science behind it. Simon argues that this approach usually fails. Hiring managers are time-poor. They do not care about reliability statistics or validation studies. They just want to know if the tool solves their problem.

You need to view your internal process through the lens of product design. If you build a system that is slick, saves time, and delivers great candidates, managers will want to use it naturally. Adoption solves itself when the product is actually good.

Don't Let Managers Outsource Responsibility

There is a temptation to let AI or algorithms make the final decision to remove bias. However, Simon warns that this creates a lack of ownership. When a manager feels that a machine simply handed them a recruit, they feel less responsible for that person's success.

When the manager is the one making the final decision, they are much more invested in the onboarding and coaching journey required to make that hire successful. Use data to guide the decision, but keep the manager in the driver's seat.

Fix Friction Before You Fix Length

A common belief is that shorter assessments always equal a better candidate experience. This is not necessarily true. Candidates are often willing to invest twenty or thirty minutes if the assessment gives them a realistic preview of the role. It helps them decide if the job is right for them.

The real killer of candidate experience is technical friction. If a recruiter has to click four times just to send an invite, they won't do it. If a candidate has to navigate poor integrations or confusing login screens, they will drop out. Focus on slick integration and ease of use before you worry about shaving minutes off the test time.

Hire for Agility Over Static Skills

The shelf life of a technical skill is shrinking rapidly. In the past, a hard skill might stay relevant for years. Now, with tools like ChatGPT and rapidly evolving software, requirements change in months.

Focusing too heavily on a checklist of current hard skills is a short-term strategy. Since AI is handling more of the technical heavy lifting, the real differentiators are soft skills like empathy, influence, and curiosity. You need to assess whether a candidate has the learning agility to pick up the next tool, rather than just checking if they know the current one.

Remember Assessment is Branding

Every interaction in the hiring process tells the candidate who you are as a company.

Your assessment process is a branding exercise. If the test feels relevant and fair, it builds trust. If it feels like a black box or asks about their childhood, it damages your reputation. Make sure the assessment reflects the culture you actually want to promote.

What is Sova?

Sova is a talent assessment platform that provides the right tools to evaluate candidates faster, more fairly, and more accurately than ever.

Start your journey to faster, fairer, and more accurate hiring