Cartoon of the recruitment process: five candidates line up holding masks against their faces to conceal their identities.
Image: © aleutie/Stock.adobe.com

Unconscious bias: The trouble with using AI in the recruitment process

22 Jul 2022

Natalie Cramp, CEO of data science consultancy Profusion, warns that AI should not be seen as ‘infallible’ in the recruitment process.

Last week, the UK’s data watchdog revealed plans to investigate whether the use of AI in recruitment leads to issues of bias.

The Information Commissioner’s Office (ICO) said it would conduct the probe following accusations that automated recruitment software was discriminating against minority groups by discounting them from the hiring process.

UK information commissioner John Edwards said his office would look at the impact AI tools for screening job applicants could be having “on groups of people who might not have been part of the development of the tool, such as neurodiverse people or people from ethnic minorities”.

AI can be used by companies and recruiters to take some of the hassle out of the hiring process. However, there have long been concerns that some people could be overlooked due to in-built biases in this tech.

In 2018, it was revealed that Amazon had scrapped its AI hiring tool after the software was found to discriminate against candidates based on their gender.

The recruitment tool rated prospective candidates out of five stars, much as products are rated on Amazon. However, because men have long dominated the tech industry, the tool was trained mostly on job applications from men. As a result, it taught itself to prefer male candidates and penalised applications containing words such as ‘woman’ and ‘women’.

Amazon maintained that the tool wasn’t used to hire for roles at the company, but admitted that recruiters looked at its recommendations. The company ultimately got rid of the tool.
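This failure mode is easy to reproduce in miniature. The sketch below is purely illustrative – hypothetical data and a toy classifier, not Amazon’s actual system – and shows how a model trained on hiring decisions skewed against CVs mentioning women learns a negative weight for that word without anyone programming it in:

```python
# Illustrative sketch (not Amazon's system): a toy text classifier
# trained on skewed historical hiring data learns to penalise
# gendered words. Hypothetical data throughout.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training examples: past CVs and whether the (mostly male)
# applicants were hired. The skew in the labels encodes the bias.
cvs = [
    "captain of the chess club, java developer",           # hired
    "java developer, led the engineering society",         # hired
    "python developer, captain of the debate team",        # hired
    "captain of the women's chess club, java developer",   # rejected
    "python developer, women's coding society member",     # rejected
]
hired = [1, 1, 1, 0, 0]

# Turn each CV into word counts and fit a simple classifier.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token 'women': it is negative,
# i.e. the model has taught itself to mark down CVs containing it.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(weights.get("women"))
```

Because the bias is already present in the training labels, the model inherits it automatically – which is exactly the dynamic Cramp describes.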

According to Natalie Cramp, CEO of data science consultancy Profusion, the way to prevent this kind of failure is to improve public understanding of the root causes of these biases.

“What needs to happen is a better understanding of how the data that is used for algorithms can itself be biased and the danger of poorly designed algorithms magnifying these biases. Ultimately an algorithm is a subjective view in code, not objective.”


Cramp described the ICO’s decision to investigate potentially discriminatory algorithms as both “welcome and overdue”.

She said that organisations need more training and education to verify the data they use and to challenge the results of any algorithms. “There should be industry-wide best practice guidelines that ensure that human oversight remains a key component of AI. There should be absolute transparency in how algorithms are being used.”

She also recommended that companies keep their teams diverse and not “rely on one team or individual to create and manage these algorithms”.

“If the data scientists who create these algorithms and monitor the data that is used come from more diverse backgrounds and experiences, they are much more likely to be able to identify biases at the design stage.”

There are researchers investigating how AI could be leveraged for bias-free recruiting. Kolawole Adebayo, a researcher at Science Foundation Ireland’s Adapt centre for digital content, is looking into eliminating bias across different HR workflows using natural language processing techniques.

His project aims to implement AI models that can understand the contents of HR documents to extract and remove information that can lead to unconscious bias and discrimination at the attraction and selection phases of hiring.
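As a rough illustration of the selection-phase idea – hypothetical code, not the Adapt project’s actual models, which use trained NLP techniques rather than hand-written rules – a de-biasing pipeline might redact attributes that can trigger unconscious bias before a CV is ever screened:

```python
# Crude, hypothetical sketch of CV de-biasing: strip attributes that
# can reveal protected characteristics before screening. A real system
# would use trained NLP models, not a hand-written wordlist like this.
import re

# Words that reveal gender or honorifics.
GENDERED = r"\b(he|she|his|her|mr|mrs|ms|male|female|woman|women|man|men)\b"
# Four-digit years (e.g. graduation dates) can reveal a candidate's age.
DATES = r"\b(19|20)\d{2}\b"

def redact(cv_text: str) -> str:
    """Replace bias-indicative tokens with neutral placeholders."""
    text = re.sub(GENDERED, "[REDACTED]", cv_text, flags=re.IGNORECASE)
    text = re.sub(DATES, "[YEAR]", text)
    return text

print(redact("Mrs Jane Doe, graduated 2004. She captained the women's chess club."))
# -> [REDACTED] Jane Doe, graduated [YEAR]. [REDACTED] captained the
#    [REDACTED]'s chess club.
```

Real systems go further – detecting names, addresses and affiliations with named-entity recognition, for instance – but the principle is the same: the screening stage never sees the attributes it could discriminate on.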

Earlier this year, Adebayo said the project would use AI to assess a candidate’s suitability based on their skills. “Hiring bias can lead to undue discrimination of quality candidates from the disadvantaged or minority groups such as women, people of colour, and those in the LGBTIQ community,” he cautioned.

According to Cramp, the ICO investigation alone will not tackle the societal issues that lead to unequal hiring practices – problems that cannot be blamed on the technology alone.

“We need people to have a more fundamental understanding of AI. Principally, it is not infallible – its outputs are only as good as the data it uses and the people who create it,” she said.

“Mandatory safeguards, standards of design, human oversight and the right to challenge and interrogate results are all essential to the future of AI. Without this safety net, people will quickly lose confidence in AI and with that will go the huge potential for it to revolutionise and better all our lives.”


By Blathnaid O’Dea

Blathnaid O’Dea worked as a Careers reporter until 2024, coming from a background in the Humanities. She likes people, pranking, pictures of puffins – and apparently alliteration.
