AI in recruitment: are the rewards worth the risks?

AI is now being used to automate recruitment more than ever before. AI tools can scan CVs, analyse video interviews, and compare them with successful past applications to spot personality traits that fit the role, allegedly eliminating human bias.

A recent survey by Ceridian showed that 42% of executives worldwide are already using AI in recruitment, and a further 46% plan to do so. Between them, that’s nearly nine in ten.

However, AI recruitment tools are not hypercompetent robot HR managers who can understand abstract hiring criteria the way humans can. All they can do is find patterns: they draw statistical links between features of past CVs and interviews and the personality traits recruiters want, then look for the same features in new applicants.
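To make that concrete, here’s a minimal sketch of the pattern-matching at work. It isn’t any vendor’s real pipeline: the features, the tiny dataset, and the choice of scikit-learn’s logistic regression are all illustrative assumptions.

```python
# Toy illustration of pattern-matching recruitment scoring.
# Feature names and data are invented; real tools extract far richer
# signals from CVs and video, but the principle is the same:
# fit a model to past hiring outcomes, then score new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past applicant: [years_experience, mentions_leadership, cv_word_count / 100]
past_applicants = np.array([
    [5, 1, 4.2],
    [2, 0, 3.1],
    [8, 1, 5.0],
    [1, 0, 2.5],
    [6, 1, 4.8],
    [3, 0, 2.9],
])
# 1 = was hired / rated a "good fit", 0 = was not
past_outcomes = np.array([1, 0, 1, 0, 1, 0])

# Learn statistical links between CV features and past hiring decisions
model = LogisticRegression()
model.fit(past_applicants, past_outcomes)

# Score a new applicant by how closely they resemble past successes
new_applicant = np.array([[4, 1, 3.8]])
print(f"Predicted 'fit' probability: {model.predict_proba(new_applicant)[0, 1]:.2f}")
```

The model has no notion of what makes a good hire; it simply scores new applicants by how closely they resemble whoever was hired before.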

This may not be as helpful for recruiters as it seems. The link between behavioural cues and personality is still debated by psychologists. The software also raises ethical concerns about privacy and consent. What’s more, it may actually perpetuate bias: if the successful past applicants are mostly white men and you tell an AI to find people with similar behavioural cues, there are no prizes for guessing what you’ll get.

Back in 2018, it emerged that Amazon had scrapped an AI CV-scanner for exactly that reason: trained on applications that came mostly from men, the AI had become sexist. It downgraded CVs that mentioned “women’s” activities and favoured words more commonly used by men, like “captured” and “executed”. Meanwhile, a German study found that AI video analysis could be picking people based on “personality traits” like their video background, their hairstyle, and whether they wore glasses.
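A toy simulation shows how easily this happens. Nothing below is Amazon’s actual system: the word counts, the gender skew in past hiring decisions, and the model are all invented for illustration.

```python
# Toy simulation of how biased training data leaks into a CV model.
# The data is entirely synthetic: word counts and the skew in past
# hiring decisions are made up to mirror the pattern described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_male = rng.integers(0, 2, n)

# In the synthetic history, word usage correlates with gender
count_executed = rng.poisson(2.0 * is_male + 0.5)        # used more by men
count_womens = rng.poisson(1.5 * (1 - is_male) + 0.1)    # e.g. "women's chess club"

# Historical hiring decisions were biased: men were hired far more often
hired = rng.random(n) < (0.7 * is_male + 0.2 * (1 - is_male))

# Gender itself is NOT given to the model -- only the word counts
X = np.column_stack([count_executed, count_womens])
model = LogisticRegression().fit(X, hired)

print("Coefficient for 'executed':", round(model.coef_[0][0], 2))  # positive: rewarded
print("Coefficient for 'women's':", round(model.coef_[0][1], 2))   # negative: penalised
```

Even though gender is never shown to the model, it rediscovers it through correlated word choices and learns to penalise “women’s” — exactly the failure mode described above.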

It’s not yet clear whether AIs can be taught to overcome this kind of bias. Training them to disregard appearance and intonation would defeat the purpose. In fact, since the AI runs on exactly the kind of superficial differences that lead to bias, it may be more useful as a tool for spotting the biases recruiters should avoid.
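As a rough sketch of what that auditing role could look like, the snippet below checks hiring outcomes against the widely cited “four-fifths rule”; the groups, column names, and data are hypothetical.

```python
# Rough sketch of auditing hiring outcomes for disparate impact,
# using the four-fifths rule as a flag threshold.
# Column names and data are hypothetical.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "advanced": [1,   1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group: fraction of applicants who advanced
rates = outcomes.groupby("group")["advanced"].mean()

# Adverse impact ratio: lowest selection rate relative to the highest
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")

if impact_ratio < 0.8:
    print("Flag: selection rates differ enough to warrant review.")
```

A check like this scrutinises the hiring process in aggregate rather than judging individual candidates.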

Privacy raises thorny problems of its own. Not all candidates want their personality analysed by a computer, but giving them the choice to opt out, or to select which results are shared with the employer, could reintroduce bias: candidates might share only flattering results, and interviewers could be less willing to hire those who withheld them.

Ethical AI recruitment demands highly controlled use by specialists who understand the tools. At this point, it’s worth asking whether the time and effort saved are worth the time and effort it will take to make AI recruitment truly ethical. AI analysis of hiring practices, rather than of the people being hired, might be a better route to eliminating bias.
