26 min
October 23, 2025

AI, Juggling, and Job Ads Gone Wild

Show Notes

00:00 – Introduction

00:54 – Ice-breaker
Nicola shares the time she claimed she’d be a Formula 1 driver; Caroline admits her first job required her to learn how to juggle.

03:31 – Skills inflation and the madness of modern job ads
Why job descriptions keep growing longer, how hiring managers overcompensate, and what “must-have skills” really mean.

07:37 – The future of skills
AI, remote work, and rapid change: which human skills are becoming more valuable, and which ones are fading out?

12:13 – How AI is really used in assessment design
Caroline unpacks what’s real and what’s “AI washing” in hiring tech, and where generative AI genuinely helps recruiters and candidates.

17:33 – Designing for three audiences
How product teams balance the needs of candidates, recruiters, and decision-makers without watering down the experience.

20:34 – Fairness vs. engagement
Nicola and Caroline talk psychometric rigour, accessibility, and why great assessment design doesn’t have to be boring.

22:48 – The time-pressure trap
Why a five-minute test might feel good but won’t predict performance — and how to find the sweet spot between accuracy and experience.

25:11 – Closing reflections
A candid moment on collaboration between science and product — and why the best assessments are built when both sides push each other.

Transcript

Caroline Fry (00:00)
Welcome to The Score, where we make sense of hiring trends, sort science from fiction, and find out what's new in talent acquisition from top experts in the field.

Nicola Tatham (00:08)
Hello, I'm Nicola Tatham. I'm the Chief IO Psychologist here at Sova. I've got two decades of experience designing fair, predictive, and science-backed assessments. I'm here to cut through the noise and talk about what actually works in hiring.

Caroline Fry (00:22)
And I'm Caroline Fry, Head of Product. I spend my time turning smart assessment ideas from Nic into tools that work — scalable, inclusive, and ready for whatever AI throws at us next.

Nicola Tatham (00:32)
Usually we’re joined by a guest to help us unpack a big question in hiring. But today’s a bit different. There’s no guest, because Caroline and I have decided we want to interview each other.

Caroline Fry (00:42)
Exactly. We’ve each come armed with a few questions for the other — no prep and no notes — so you’ll get our most honest answers. I’ve got some things I’ve been meaning to ask Nic.

Nicola Tatham (00:51)
And I’ve got a few questions lined up for Caroline too. Let’s see where this takes us.

Caroline Fry (00:54)
So, as usual, before we start, I think it’s only fair that we both share the strangest question we’ve ever been asked during an interview. So Nic, what’s the weirdest thing you’ve ever been asked?

Nicola Tatham (01:03)
I’ve got two examples. One is something I was actually asked, and I think it’s my response that will be more shocking to anyone who knows me than the question itself.
The question — asked by another psychologist — was: “If you weren’t a psychologist, what would you be?” And my answer was: “A Formula One racing driver.”
Now, anyone who knows me knows that is certainly never going to be true.

During the feedback session afterwards, the psychologist who’d interviewed me asked if that was something I was genuinely interested in. The truth is, I don’t know why I said it — and I did not get that job.

The other example isn’t something I was asked personally, but it was an anecdote I heard at a social event. One of the attendees was particularly proud of his own achievements and those of his children, which is quite normal. But I was very shocked when he told me that when his son graduated, he’d applied for many jobs and had been put through “these damn psychometric tests,” never getting anywhere. Then he added, “And do you know, when he applied to Organisation X, he was simply asked where his father worked and what school he went to — and he got the job.”

I don’t think that person fully understood what my role was or why I do what I do, but that conversation really stuck with me. Early in my career, it made me realise that what we’re doing really matters — and that change is necessary. I do think we’ve made massive progress since then, but it was still quite a shock to hear someone share that story so proudly.

Anyway, what about you? Have you had any strange interview questions?

Caroline Fry (02:52)
Yeah, so mine aren’t as good as yours. Mine are just a little, you know, slightly odd. My first job was at a theme park when I was sixteen, and during the interview process they asked me, “Can you juggle?” I wasn’t going in for any kind of performance job. I was going to be manning a retail till, selling ride photos and various other things, climbing frames and so on. And they asked if I could juggle.

The answer was no. I still got the job. And they tried to teach us to juggle in our induction training. And I still failed.

Nicola Tatham (03:27)
But perhaps we’ll come back to the juggling when we get into today’s topics, I suspect.

Caroline Fry (03:31)
Let’s see, let’s see if we make our way back there. Okay. Well, on to some real questions then, Nic, if I get to start and give you a little grilling.

So first thing — about skills, which we’ve already talked about a few times on this podcast. I wanted to ask you about skills inflation. We hear a lot about that — like job ads asking for more and more skills, even for entry-level roles. So from your perspective, what’s driving that trend, and how should organisations rethink what skills actually matter for success?

Nicola Tatham (04:01)
Yeah, great question. And I think this is where the juggling comes in, as an element of skills inflation: they didn’t necessarily need you to juggle, but maybe they were looking for potential or growth. So, as we’ve talked and written about before, skills inflation refers to that rising average level of knowledge and ability that’s expected from workers over time.

In terms of the why, I think it’s probably pretty multifaceted — lots of different elements to it. One is around hiring managers perhaps trying to reduce risk, or perceived risk. So by listing a long list of desired skills, they’re basically trying to cover all bases, to make sure that nothing is lost and there’s no risk of somebody turning up without everything.

Possibly also around trying to anticipate change — thinking, “We don’t need juggling now, but we might need it in six months’ time,” so sort of adding that into the mix as well, when perhaps that’s not necessarily the right thing to be measuring at that point in time.

I think often we don’t have a very clear and empirically grounded view of what actually predicts success in the job. So what does excellent performance really look like? What do their star performers actually do? Do we know that? Do people make the time and the effort to understand that? Or do they just, as we said, cover all their bases?

I think there’s another element of this that probably relates to the world that we live in, in terms of technology and search optimisation and all of that good stuff. When we put a job ad out, we want to catch a wide candidate pool. We want to impress candidates. We want to make the job sound strategic. We want to use flashy job titles. So I think there’s that element to it as well — trying to impress people. But in doing that, you can also be putting people off from applying.

There’s a lot of research around that sort of thing. So yeah, I think, you know, there’s pressure to look strategic and to impress. You get these bloated wish lists, essentially, and then they put these unrealistic expectations on the hires that we are making.

I think you also asked me what we can do about this — how can we fix it?

I guess it relates back to most of the points that I’ve made. So we need to understand what matters. What are the skills that we truly need for this role? What are those must-haves? So not continuing to ask for a degree in engineering just because that’s what we’ve always done, but unpack that role in a more systematic manner. Do we need a degree? Don’t we? Are all these things that we’re asking for actually relevant?

We can do that using techniques such as job analysis, statistical validation — they can form a really great foundation for understanding the role. What are those soft skills? What are the hard skills that we need? What are the must-haves? What are the nice-to-haves?

We can also be speaking to high-performing job holders, understanding the challenges and demands of the role from their perspective. But then also, we’re trying to balance this sort of skills inflation while also acknowledging that there is a direction — what’s the strategic direction of the organisation? What are the organisation’s priorities and challenges, and what are the implications of those on the role as well? What will the role holders need to be successful in the longer term?

I guess the key thing is we have to just admit defeat and accept that change is the only constant in life, and we need to look for those people with the potential to learn new skills as well.

So not only looking at what they need now, but also looking for those people who are likely to be able to operate in a fast-changing, dynamic environment where they’re going to need to re-skill, up-skill, relearn, unlearn — to be able to succeed and thrive going forward.

Caroline Fry (07:37)
That’s a really full, complex answer. It makes me think the load on those TA teams to do that kind of analysis ahead of any hiring — especially when they’re under pressure to hire so regularly — is a lot. And I guess that’s where, you know, IO psych expertise comes in to help with that kind of analysis, and the tools and techniques to really drill down to what the skills are that matter. That’s the support that, you know, colleagues like you can offer.

Nicola Tatham (08:00)
Yeah, we can offer that. And it can feel very painful, you know — time-consuming at the outset — but there’s literally no point assessing somebody unless you know what you’re assessing for, because otherwise your hires just aren’t going to perform well in the job. It’s going to cost you more time and more money in the longer term if you don’t know how to build a good assessment.

Caroline Fry (08:14)
Yeah.

Nicola Tatham (08:23)
I can’t stress enough how important it is to understand what the role requirements are and to build your assessment around that, rather than rinse and repeat — despite the fact it might feel really painful at the point when you’re having to do that exercise.

Caroline Fry (08:37)
I think that rinse and repeat idea leads me into my next question about the shifting skills landscape as well. You talked about what future skills are — because I know we’ve often talked about, even this morning when we were discussing something else, you said, “What my role looked like eighteen months ago is different now,” with things like AI coming in.

So you often talk about how the skills employers value today can look very different, obviously, from the past. And then flipping that forward, what shifts are the most significant right now? I mentioned AI — we’ll probably talk about that — but how do you see assessment science keeping up with the pace of that change? Because obviously it really fundamentally… well, I guess there are some fundamentals we’ve talked about before that underpin what you look for and how the assessment science works. But there are probably several strands in play at all times.

Nicola Tatham (09:22)
Yeah. Yeah. So I guess the first part of that question was around what changes we’re seeing — what are the shifts that we’re seeing in what really matters. Obviously, we’ve talked about AI at length and new technology, so that’s part of setting the scene for why change is so prevalent. Also, things like working remotely — geography becomes less of a factor.

Caroline Fry (09:35)
Mm-hmm.

Nicola Tatham (09:45)
But then that also means that cultural diversity is more important. The pace of change is so much quicker in business and other organisations now than it ever was. So I think with all that in mind, generally, we’re looking for those people who will stay up to date with technology and will adapt to the different challenges that are going to be thrown at them — the people who learn quickly, who want to learn, and who thrive in that fast-paced environment.

Caroline Fry (09:45)
Mm-hmm.

Nicola Tatham (10:12)
They favour variety over a very narrow remit and are happy to turn their hand to other things.

So as technology picks up a lot of the tasks that humans used to do, we see a shift in focus — more emphasis on those soft skills, the things that computers and technology can’t do. We’re talking about things like communication and teamwork, genuine human empathy, influencing skills.

AI is not going anywhere, so we need to be comfortable integrating AI into our daily tasks — being open to doing that, learning about what it can do for us, but also understanding the risks and challenges that AI brings as well. It’s not as simple as some might think — you don’t just start using it; there’s a lot more to it.

Yeah, technology-related skills are definitely on the rise. Conversely, the research is showing us that things like reading, writing, maths, manual dexterity, precision, and attention to detail are seeing a decline over time, because a lot of those checks and balances are now being built into technology for us.

So yes, certainly an interesting time to be alive — that’s for sure.

Caroline Fry (11:21)
Yeah, I think we’ve said it before on previous podcasts — it’s the pace of change that’s so real and visceral now, I think for all of us. Things that might once have changed over five to ten years now shift in six or twelve months — and that can feel quite overwhelming sometimes.

But yeah, for me, outside the IO psych expertise, the concept of skills is quite helpful for that. It feels more tangible somehow, probably because it has the parallel in hard skills — everyone sort of knows what that kind of skill is — and then you can make the cognitive leap to soft skills and understand a bit more. It breaks things down in a way that works for me; not being a psych, competency has always felt a little more conceptual or sophisticated, even though there are a lot of similarities.

We’ve talked about that too. So I think skills help navigate that — they feel a bit more broken down into something that you can organise and reorganise as the landscape around you changes, I suppose.

Nicola Tatham (12:13)
Is it my turn now? Can I take the helm and ask you some questions?

Caroline Fry (12:30)
I think so — you can take the wheel and turn the tables.

Nicola Tatham (12:34)
Okay, so we’ve talked about AI in this podcast before, and again today. It’s a regular theme, and we’ve said time and again that it’s become something of a buzzword in assessments. In practice, though, how are you seeing it enhance the assessment experience — both in terms of efficiency for recruiters and engagement for candidates?

Caroline Fry (12:38)
Mm-hmm. Okay, well, I’ll take the buzzword part first. I think, probably regardless of the industry you’re in, AI used intentionally can definitely add value — but the buzzword accusation comes in when it’s AI for AI’s sake.

Something I know you’ve said before, Nic, about the founding principles of psychometric assessment design also holds true on my side of the house. If we stick to product principles, we’re not trying to fit an AI solution to a problem. It’s about asking: what’s the problem we’re looking to solve? Then you ideate conceptually and technically, and AI may be part of that solution — or it may not.

Speaking of the technical side, the buzzword accusation also comes into play with what I’d call AI washing. You’ll find a lot of products claiming that something is AI when actually it’s just algorithms, calculations, or automation in some way. Those claims are really using the term “AI” in the loosest possible way.

So that’s the buzzword thing — there are a few layers going on there. But when it’s used intentionally, to answer the other part of your question, AI has a lot of genuinely helpful utility in the assessment experience — like productivity gains around administrative processes for recruiters and hiring managers.

It can take on tasks within pre-hire screening, candidate correspondence, interview scheduling, note-taking, guidance and support for candidates — bots, instant-response tools — as well as data analysis and more. A lot of the regular desk work that’s fairly universal, like planning, organising, analysing data, and reporting.

Obviously, again, with all the usual checks and balances you need when using AI — don’t just blindly trust an analysis. Check it, dig into the data, don’t just give it a document to analyse and then run with it. But from that administrative recruiter or hiring manager side, that’s one thing.

Then, with the candidate experience specifically — that’s where it becomes a bit more exciting and innovative. Generative AI can start to be explored in terms of immersive and engaging experiences that are truly novel per candidate. And we’re experimenting with that ourselves, as you know.

It’s not easy, because the AI landscape is changing at pace, and there are very real constraints in terms of viability and, of course, scientific rigour that we need to maintain as part of an assessment process — especially in our industry.

But it’s a fun challenge. Like you were saying earlier, the technology has unlocked a lot of opportunities for doing things we couldn’t before. From a multimedia perspective, GenAI brings a whole new layer of lower-effort, higher-impact production values, which — done right — can really enhance candidate engagement while maintaining authenticity.

That should ultimately result in higher validity, like you were saying, because the engagements feel more realistic. And I think, again, the scalability of the technology — the speed with which you can create multimedia, visuals, etc. — in a way that wasn’t possible even five years ago, that’s really exciting. It feels innovative, and it allows us to evolve the assessment experience faster than we used to be able to.

Nicola Tatham (16:17)
It feels like we’re having a different conversation to the one we were having twelve to eighteen months ago, which was very much, “How do we protect the assessments in light of AI?” But now it’s, “How do we improve the assessments in light of AI?” — which is a very different conversation.

I don’t think we’ve necessarily solved the first one as an industry. I think we’ve made some great steps at Sova, for example, and you see other people doing exactly the same. So yeah, not necessarily solved all of that, but it does feel really exciting to be harnessing AI to improve an assessment rather than worrying about what impact it’s going to have on the candidate test-taking experience.

Caroline Fry (16:56)
Yeah, and like you say, we haven’t solved that, and that hasn’t gone away — and that will continue to be the case. But as we always used to say to each other at the time, there have always been concerns around integrity. Well before AI entered the picture, candidates would always find ways to cheat — that’s just a fact of life.

So I think it’s a really interesting dichotomy: the opportunities AI gives and the ever-evolving challenges as it becomes more sophisticated. Things that used to work as safeguards no longer work the same way. Keeps us on our toes, like you said.

Nicola Tatham (17:29)
So you’ve talked there about both recruiters and candidates, but you’ve mentioned that when you’re working on assessment platforms, you’re not designing just for one user — not even for two, but for three users. You’ve got your candidates, your recruitment teams, but also your buyers and decision makers — the people who are actually making the purchase.

How do you balance their needs without compromising the experience for any of those groups?

Caroline Fry (17:57)
Well, this is a perpetual challenge for product teams where there are multiple users in the mix. But going back to product principles, it’s about truly understanding your user and buyer personas — that’s how we separate them.

The dream is not to compromise for any group, but the reality is — and we’ve talked about trade-offs already — there are always trade-offs. Knowing the personas intimately and speaking to real users, customers, and prospects regularly helps us understand which are the most impactful problems to solve for each group, and which things are merely nice-to-have.

In an ideal world, you’d have what in product terms we call delighters — the stuff that really raises your game and becomes essential for a certain group. But as with any multitasking, you have to keep making progress incrementally across user groups so that no group is left behind.

And yes, sometimes you do get those great features that have something for everyone. But the main thing is considering all of those personas at the outset and being clear about what a feature is delivering for each group — which problems it’s solving — so that you can articulate that value to users. Clarity in how you position whatever you’re bringing out in the platform is essential.

And since all those groups interrelate in our instance, a feature that’s really focused on candidates can still hold interest or value for the recruitment side or the buyer stakeholders — that’s actually a good thing. So even if something isn’t directly impactful for a certain persona or user group, as long as you’re clear about the value it brings, that goes a long way as well.

Nicola Tatham (19:37)
Yes, I guess it’s unusual for a new feature to be detrimental to any one of those groups. It might just be that it doesn’t enhance their experience of the platform. Yeah, makes sense.

Caroline Fry (19:46)
Exactly. Yeah. In fact, you might be doing something wrong if it’s going to negatively impact any of your key user groups. But yeah, like you say, it could just be a sort of neutral, unimpactful experience — but if it’s articulated and positioned well, because you’ve understood the challenges each group has, then they should understand the benefit it brings to their colleagues, if not to themselves.

Nicola Tatham (19:53)
So yeah — and why one thing’s being prioritised. Yeah.

Okay, the next question I’ve got relates more to the candidate experience specifically. Sometimes candidate experience might clash with psychometric rigour or with operational constraints. What are some practical design choices your team has had to make so that assessments feel engaging and transparent without watering down the science? It’s a balance — how do you balance those things?

Caroline Fry (20:39)
Yeah — it’s when we come to fisticuffs in all those meetings. So, as you know, Nic, we have to be highly collaborative when working on new assessments and assessment experiences — now more than ever, as we were saying earlier.

And I’ve heard you say in previous podcasts that good assessment design does not work against UX or UI principles. In fact, I’m pretty sure we had a Science or Fiction question about IO psychs being the original UX designers — if you remember that one. I can’t remember who we asked, but I remember it well. And I think we maybe came to the conclusion that they were — that usually there’s a lot of alignment. So we do sing from the same hymn sheet.

And I’d say, with assistive technology enhancements and guidelines like WCAG, it’s possibly becoming easier rather than harder. We don’t have to sacrifice design quality due to the science, because things like accessibility toolbars give us flexibility that we didn’t used to have. We don’t have to strip back a design because a user with an assistive toolbar can be in control of what they need — how it needs to be displayed for their own requirements.

We don’t have to try and do this one-size-fits-all approach, because we have something that allows them to customise it the way they need. And as long as we bake in that kind of compatibility, we’re in good shape — we’re already collectively focused on making assessments feel fair, engaging, transparent, and scientifically sound. They’re founding principles for us.

And we do that from both angles — from psych and product. I think other advancements in technology also mean we can easily do things like A/B testing and get smart about how we standardise and re-standardise assessments and do other kinds of validation studies too. So I’m not sure we need to compromise in the way we once did.

And in terms of practicality and a practical approach, I’d say I simply, personally, think it’s right to design with accessibility in mind because that enhances the experience for everyone. And I’m pretty sure that both of our teams are very conscious of that — and aligned, as I said.

Nicola Tatham (22:43)
Yeah, yeah. We’re all pushing in the same direction, but perhaps just coming at it by a different road.

Caroline Fry (22:48)
Yeah, exactly. Again, it’s a bit like that multidisciplinary approach, where we can also fill in gaps for each other — because you’re focused on things from your side, and I’m… you know.

But I think, practically and operationally — which you asked about as well — maybe it’s a different matter, and maybe that’s one more for you. We’ve talked before about time pressure and everyone buying into stereotypes about Gen Z — that they don’t read, or they want everything in video format, or they can’t pay attention for more than a few minutes.

And while I’m probably the first to say to you, “Well, we’ve got three screens of instructions — could that be one?” or “Could we make that a video?” or “Okay, this is thirty minutes — could it be twenty or fifteen?” — I know you’re already acutely aware of that and already thinking along those lines.

But equally, I know that you always advocate for making sure we get data from our assessment experiences that is accurate and meaningful — and you can’t do that in three minutes. In fact, a new client recently told us, having had their heads turned by a competitor promising just that, that they found out it was too good to be true.

So I think some of those practical challenges are not actually internal, but external — around expectation-setting with prospects.

Nicola Tatham (23:52)
Yeah. I think there’s an element of face validity with that as well. You know, if you’ve got a candidate that comes along and they’re given an assessment that takes three, four, five minutes, there’s a nervousness around that. I’m hearing that anecdotally — “Are they really getting to the bottom of who I am and what I’m all about?”

But at the same time, we don’t want — you know, back in my past life — we’d have people sat in front of a battery of tests for hours and hours and hours. Even just a verbal, a logical, and a personality questionnaire could take someone one to two hours to complete.

So I think we have to make the most of new technology and new analytical approaches to really fine-tune our assessments, so that accuracy and a good candidate experience don’t have to be mutually exclusive. It’s about finding the sweet spot between all of those factors that really matter.

Caroline Fry (24:47)
Mm-hmm. Yeah, that’s a fair balance, I think.

Nicola Tatham (24:54)
Yes — still on the new technology theme. Obviously, product teams — people like yourself and your team — do keep innovating, but we know from experience that sometimes hiring processes can be slower to change.

Caroline Fry (25:00)
Mm-hmm. We try.

Nicola Tatham (25:11)
Okay, Caroline, thank you. I’ve found today really insightful, interesting — and a bit fun as well. A little bit different to what we normally do. So thanks for sharing your thoughts. Is there anything that you’d like our listeners to take away from today’s session? Maybe it’s just that we should get guests back in again.

Caroline Fry (25:17)
I think, again, what I really enjoy about working with you, Nic, is the fact that you can apply those psychological principles. I think I learn from you and apply some of that in what we’re doing in UI and UX design — even things like operant conditioning. There are principles there that you can apply to good design and user experience.

And I just think it’s a real privilege to be able to work with psychs on the daily because everyone’s fascinated by psychology and the human condition. I think it’s a real value add to our little world of SaaS tech to have that additional set of experts in the business — which you don’t get in every company, certainly not all tech startups.

So I’ve just been reflecting on that as you’ve talked about all of your psychometric knowledge and skill. But yeah — just, you know, shout out to all the psychs in our company. Thank you.

Nicola Tatham (26:19)
Yeah, I agree. I think the teams learn from one another, don’t they? We’ve spent hours labouring over some very small tweaks to images or the system, and it’s worth it — it’s worth the effort. And we’re all doing it for the right reasons, which just makes it easier to have those conversations.

Caroline Fry (26:30)
Mm-hmm. Yeah, I agree.

Thanks for hanging out with us on The Score. If you enjoyed this conversation on all things psychology and product, don’t miss what’s coming next. New episodes drop every two weeks on YouTube, Spotify, or wherever you get your talent acquisition insights.

Key Takeaways

Hiring is getting harder to make sense of. Job ads now list impossible wish lists, everyone’s trying to “use AI,” and candidates are caught between wanting quick experiences and wanting to be understood. In this episode of The Score, Chief IO Psychologist Nicola Tatham and Head of Product Caroline Fry discuss trending topics in the world of hiring, from skills inflation to AI hype.

Skills inflation is real, and mostly self-inflicted

“Skills inflation” isn’t necessarily about people suddenly needing to know more; it’s about managing risk. Hiring managers try to reduce risk by asking for everything at once. The longer the list of requirements, the safer it feels, at least on paper.

The irony is that this approach often narrows the talent pool instead of improving quality. By asking for ten skills when three would do, organisations screen out great candidates who might have learned the rest quickly on the job.

The fix starts with clarity. Rather than copying and pasting old job descriptions, teams should take the time to analyse what actually predicts success. That means understanding what top performers really do, speaking to current role holders, and focusing on potential, not just past experience.

The future belongs to learners

The pace of change means today’s “must-have” skill might be obsolete in six months. AI, automation, and new technology are reshaping how work gets done, and the skills that matter most are no longer purely technical.

Empathy, communication, and adaptability are becoming the real differentiators. As Nic puts it, “We need people who can re-skill, up-skill, unlearn and relearn.” It’s less about what candidates know now, and more about whether they can evolve.

For TA teams, that means identifying and measuring potential, something assessments can do well when grounded in behavioural science.

AI isn’t magic, but it can make hiring better

Caroline’s take on AI is refreshingly honest: not every “AI-powered” assessment actually uses AI. There’s a lot of “AI washing” in the market: tools that pass off algorithms and automation as intelligence.

Used intentionally, though, AI can genuinely enhance hiring. From faster candidate communication to better data analysis and more engaging assessment formats, it can streamline processes without losing rigour.

The key is intent. AI should solve a clear problem, not be added just for show. As Caroline says, “We’re not trying to fit a solution to a buzzword; we’re trying to make assessments feel more human while keeping the science solid.”

Fairness and experience can coexist

There’s a common misconception that scientific assessments and great candidate experience are at odds. In reality, as Caroline points out, the two often reinforce each other.

The good design principles of clarity, accessibility, and transparency are the same foundations that make assessments more valid and fair. With assistive technologies and accessible UX now standard, it’s possible to design processes that feel inclusive and engaging without compromising the data.

When candidates feel respected and informed, they engage more authentically, and that improves predictive accuracy. Fairness and experience are mutually reinforcing.

Finding the balance between speed and rigour

One of the most frequent challenges TA teams face is the pressure to shorten assessments. Competitors promise “five-minute” tests that claim to measure everything, but science says otherwise.

Shorter assessments may feel convenient, but they often fail to capture meaningful insight. The sweet spot lies in designing experiences that are concise, engaging, and genuinely predictive. Nic puts it simply: “There’s no point assessing someone unless you know what you’re assessing for.”

Final thought

Modern hiring sits at the crossroads of psychology and technology. The temptation is to choose sides: efficiency or fairness, AI or human judgement, science or experience. But as Caroline and Nic show, the real progress happens when those worlds meet.

Fair assessments shouldn’t feel clinical, and engaging ones shouldn’t lose their validity.
Done right, they can do both, helping organisations make better decisions and helping candidates show who they really are.

What is Sova?

Sova is a talent assessment platform that provides the right tools to make candidate evaluation faster, fairer, and more accurate than ever.

Start your journey to faster, fairer, and more accurate hiring