Event overview: Assessment in the age of the algorithm, 8th May 2019
In a morning dedicated to understanding the age of the algorithm, ‘humanity’ won out as the way to ensure that we manage the risks and make the most of the opportunities of AI.
Our speakers were introduced by Sir David Walker (Chairman, Sova Assessment): Lord Clement-Jones (former Chair of the House of Lords Select Committee on Artificial Intelligence); Hema Bakhshi (Future of Work Strategist); Dr Alan Bourne (CEO Sova Assessment); and Jarret Hardie (CTO Sova Assessment).
We often hear that the qualities that make us human are those that will differentiate us from machines, and that our ability to adapt and think critically will preserve and evolve our purpose and our jobs. Our speakers shared valuable ideas and insights on how we might incorporate something as nebulous as ‘humanity’ into AI and machine learning, to make it not only efficient and future-proof but, most importantly, ethical.
HR professionals have a unique perspective on organisational issues stemming from the introduction of AI, including the loss of jobs.
Lord Clement-Jones discussed an organisational change that replaced parts of a customer complaints process with AI decision-making. The change removed 200 of the organisation’s 750 roles, but the company did not want simply to make 200 people redundant. He suggests reducing the number of employees who actually leave an organisation by ensuring that HR and legal professionals are part of the initial scoping and introduction of the technology. HR and Legal should be working alongside the technologists, he says, rather than sweeping up after them.
Speaking more generally, Lord Clement-Jones says organisations and governments should invest more in re-skilling and in particular in further education, not necessarily to protect people from losing jobs that are being done more efficiently by machines, but instead to enable them to move to jobs that can’t be done by machines, and that can be augmented and assisted by AI technology.
Hema Bakhshi explained that as the workforce adapts to the demands of five generations of workers and a longer life, our ability to tap into people’s capacity for learning, ‘unlearning’ and re-learning is more important than ever before.
Alan Bourne added that by knowing your workforce and the skills your organisation already possesses, you will be able to adapt quickly and efficiently. Data collected by talent assessment systems that use more than a simple competency framework can offer insight into how teams can be moulded, while ensuring that adaptability is part of the assessment in the first place.
AI must be guided by an ethical framework and overseen by processes not directly linked to the technology being used; that could be an internal human resource, an external advisory board, and/or indeed AI used to audit other forms of AI. The key, our speakers said, is ensuring that humanity, law, regulation and ethics underpin these processes.
Lord Clement-Jones believes this combination of ethics and innovation gives the UK a huge competitive advantage in the development of AI worldwide, with data monopolies also monitored to ensure big companies are not misusing algorithms in the pursuit of profit.
Hema explained that as leaders, organisationally and even societally, we now have a responsibility in the digital age to carefully curate the work of the future, and we can do this by adhering to strong principles.
Alan and Jarret, in a practical example, showed how tailoring an AI-led assessment programme to an organisation’s specific requirements not only lets you lean on the experience of both psychologists and technologists, who bring a broader ethical code, but also ensures that the ethics and values of your organisation are built into the technology, as well as assessed in the candidate.
A strong code of ethics leads directly to trust. Lord Clement-Jones stressed the importance of building this public trust in any AI development: without it, there is no ability to innovate. Again, this is where human intervention is key, ensuring that questions are answered and accountability is built in. Hiring people who can use and explain AI and its applications, not just build them, will be key to successful organisations. Hema took this further, arguing that organisations will need to develop people with the creativity and critical thinking to develop and apply AI in the future. All speakers agreed that AI needs to be introduced far earlier in the education system: schools should not only be teaching about AI but also incorporating it, leading to better understanding and more trust.
A key takeaway for HR managers was translating public trust into the trust they need to engender from their applicants and workforce. Lord Clement-Jones called this “explainability”: it is possible to build transparency directly into the specification and the algorithm from the outset. HR managers, for example, need to be able to explain how their technology made its decisions in a discrimination case in court or at a tribunal – a “black box” answer is not acceptable. Jarret gave the example of his team, which builds the assessment systems, working hand in hand with psychologists to ensure that the process is transparent and explainable. They use machine learning to optimise the process, constantly refining the algorithm throughout a campaign to increase fairness, visibility of process and accuracy.
What about where our humanity can actually be a hindrance? Lord Clement-Jones explained that humans are inherently biased, and machines relying solely on humans to learn will inherit this bias. In order for organisations to truly embrace and incorporate diversity, AI needs to be built to spot this bias and eradicate it.
(By ‘diversity’ we are not just talking about cognitive diversity, which is not, as Silicon Valley might think, a replacement for cultural and ethnic diversity.)
There are ways in which humans can help with this: for example, Lord Clement-Jones recommends that for graduate recruitment, a diverse range of universities is chosen as partners. Sova’s approach is “responsibly innovative AI” – finding the balance between the opportunity to increase ambition and the need to ensure ethical and scientific rigour.
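To make the idea of software that “spots bias” a little more concrete, here is a minimal, illustrative sketch of one widely known fairness check used in selection contexts: the “four-fifths rule” for adverse impact, which compares selection rates between two applicant groups. This is a hypothetical example for illustration only; it is not a description of Sova’s actual methodology, and the function names and numbers are assumptions.

```python
# Illustrative only: a simple adverse-impact check (the "four-fifths rule").
# Not Sova's method; group names and figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(group_a: tuple[int, int], group_b: tuple[int, int]) -> float:
    """Ratio of the lower selection rate to the higher one.

    Each group is (selected, applicants). A ratio below 0.8 is
    conventionally treated as a flag for possible adverse impact,
    warranting human review of the selection process.
    """
    rate_a = selection_rate(*group_a)
    rate_b = selection_rate(*group_b)
    if max(rate_a, rate_b) == 0:
        return 1.0  # nobody selected from either group: nothing to compare
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical campaign: group A, 30 hired of 100; group B, 18 hired of 100.
ratio = adverse_impact_ratio((30, 100), (18, 100))
print(round(ratio, 2))   # 0.6 -- below the 0.8 threshold, so flag for review
```

A check like this is deliberately simple and explainable, which is exactly the property the speakers emphasised: an HR manager can walk a tribunal through it line by line, with no “black box” involved.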
Full write-ups and videos for each of the three speakers at this event will be available shortly. To request an alert when they are available, and for more information about how to incorporate AI into your organisation’s assessment systems, please contact email@example.com.