How AI can help address Ethics, Equality and Empowerment in Assessment

The continued rise of AI is certainly one of the hottest leadership topics for 2019. Consulting firm DDI’s list of top trends for 2019 includes many that are linked to the application of AI – diversity, the explosion of data and robotics among them.

But AI has received some negative press recently. Last year Amazon scrapped its ‘sexist AI tool’, and data scientist Cathy O’Neil, in a TED Talk on the era of blind faith in big data, called out algorithms as ‘weapons of math destruction’: secret and harmful, sorting the winners from the losers.

Using technology for good

At Sova, we believe that AI in assessment has the potential to do great good. In our own experience it helps organisations find more right-fit employees, eliminate bias in hiring and make the whole recruitment journey more efficient and better for the candidate. We’ve used it with clients to improve their organisational agility and to create fair and robust development processes.

But organisations do need to approach AI with the right mindset if they’re to get the best from it. Here we explain the risks and how to mitigate them.

To read more on the topic download your copy of our new white paper: Ethics, Equality and Empowerment: Assessment in the age of AI

Approaching AI with the right mindset

To get the full benefits of AI in assessment, it’s important to be aware of the risks and how they can be managed by adopting some good practices, as is the case with any offline assessment. Risks that we explore in the white paper include:

  • Automating inequality: AI used badly can inadvertently embed stereotypes and bias in organisations and in society.

  • Predicting the past: We need to be able to use AI to develop a workforce for the future, so it’s important not to base assessment on frameworks rooted in the past.
  • ‘Computer says no’: Candidates feel disempowered when there’s no explanation, transparency or communication about the outcome of an assessment.
  • Ethical concerns: The risk of amplifying adverse impact by applying powerful tools to a process that isn’t right in the first place.
  • Overcomplication: An algorithm needs to be simple enough to explain to a candidate who appeals against a decision or to defend a position from a legal standpoint.
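To make the idea of adverse impact concrete, here is a minimal sketch (our own illustration with hypothetical numbers, not a Sova tool) of the widely used “four-fifths rule”, which compares selection rates between groups: if one group’s rate falls below 80% of the highest group’s rate, the process is flagged for review.

```python
# Sketch: screening a selection process for adverse impact
# using the "four-fifths rule". Numbers are hypothetical.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return hired / applied

def adverse_impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Ratio of the focal group's selection rate to the reference
    (highest) group's rate; values below 0.8 suggest possible
    adverse impact under the four-fifths rule."""
    return focal_rate / reference_rate

# Example: an algorithm selects 30 of 100 male applicants
# and 18 of 100 female applicants.
men = selection_rate(30, 100)      # 0.30
women = selection_rate(18, 100)    # 0.18

ratio = adverse_impact_ratio(women, men)
print(f"Impact ratio: {ratio:.2f}")  # 0.60, below 0.8, so flag for review
```

A check like this is deliberately simple enough to explain to a candidate or a regulator, which is exactly the point made above about overcomplication.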

The human touch

We can manage these risks by applying psychology with as much rigour as we apply technology. The true value of AI is to be found in what we teach the models, what we put into the algorithms and the insight that humans bring.

“With the rise of AI in the workplace, there has been an increased attention and value for what is uniquely human,” says DDI in its 10 Hot Leadership Topics for 2019. People are being “challenged to manage new technologies and exercise greater critical thinking and judgment”.

This is certainly the case for AI and assessment, and as a business we’re putting our energies into applying advanced psychology to advanced automation. When used at its best – in partnership with human expertise, thoroughly evidenced and ethically applied – AI has the potential to revolutionise the way we attract, assess and develop our greatest business asset: our people.

To read more about AI in assessment and how to make it work for your organisation, download your copy of Sova’s latest white paper: Ethics, Equality and Empowerment: Assessment in the age of AI.