An AI-powered robotic arm selects a candidate, illustrating the potential for biased and poor decision-making by AI.
Image: ©AlexanderLimbach/

How to tackle AI bias in the hiring process

29 May 2024

AI tools are combing over the digital history of potential hires to determine who exemplifies ‘the ideal candidate’, but the factors that lead to that decision may be biased.

Humans have given AI the power to exclude candidates based on details such as their past social media usage, their address and even their name. 

The conversation around AI and the inherent bias that plagues these systems needs to be had. We can’t ignore how bias against candidates permeates areas such as ethnicity, age, gender, social class and disability. 

But rather than side-lining a valuable resource, innovators in the field of AI can acknowledge their own potential for bias and incorporate structures within the research and development stage that would help to negate those foundational biases.

Dr Matthew Neale, vice-president of assessment products at Criteria Corp, has worked at the intersection of technology, AI and psychology for more than 25 years and is confident about the role that psychologists can play in “ensuring that AI is used in a fair, equitable and effective manner in organisations”. 

Building bias

AI technology has potential to help with the hiring process for everyone involved, from the innovators and candidates to the recruiters and employers. But as Neale states, AI is only useful when it is developed and deployed correctly, and as it stands, “some are and some are not”. 

“What’s critical is that the AI tool is fit for purpose,” says Neale. Developers should be equipped with core knowledge of how the tool was developed, the data it processes and was trained on, how that data was verified, and any risks involved in using the tool, such as biased decision-making. 

The issue will always be the capacity for human error, whether deliberate or accidental. Neale suggests imagining an AI model that has been trained to replicate the choices of a human recruiter who already holds a bias against marginalised groups.

He explains: “If the recruiter is biased in their decision making, then the AI will learn that bias and those decisions will be replicated by the AI when it decides which resumes to forward and which resumes to reject.”
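The mechanism Neale describes can be sketched in a few lines: a model fitted to biased historical accept/reject decisions will score two otherwise-identical CVs differently. The data and the simple keyword-scoring model below are entirely hypothetical, purely for illustration.

```python
# Illustration only: a scoring model trained on biased historical
# recruiter decisions reproduces that bias. All data is synthetic.
from collections import defaultdict

# Past decisions: CVs mentioning women's organisations were rejected,
# echoing the 2018 Amazon case described above. (1 = accept, 0 = reject)
history = [
    ({"python", "sql"}, 1),
    ({"python", "women's chess club"}, 0),
    ({"java", "sql"}, 1),
    ({"java", "women's coding society"}, 0),
]

def train(history):
    """Learn a per-keyword score: the historical acceptance rate of
    CVs containing that keyword."""
    counts = defaultdict(lambda: [0, 0])  # keyword -> [accepts, total]
    for keywords, accepted in history:
        for kw in keywords:
            counts[kw][0] += accepted
            counts[kw][1] += 1
    return {kw: a / t for kw, (a, t) in counts.items()}

def score(model, keywords):
    """Average the learned keyword scores; unseen keywords are neutral."""
    seen = [model[kw] for kw in keywords if kw in model]
    return sum(seen) / len(seen) if seen else 0.5

model = train(history)
# Two candidates with identical skills; one extra (irrelevant) keyword
# drags the second score down, because the model learned the bias.
print(score(model, {"python", "sql"}))                        # 0.75
print(score(model, {"python", "sql", "women's chess club"}))  # 0.5
```

Nothing in the model "knows" about gender; it simply inherits whatever pattern sits in the historical labels, which is exactly why biased training data produces biased screening.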

He cites real-life examples, including an instance in 2018 when it was discovered that an AI tool used by retail giant Amazon discarded CVs from women because it had been trained to replicate ten years of hiring patterns that dramatically favoured men. 

He also notes a recent Bloomberg article that claimed OpenAI’s ChatGPT ranked CVs from people with Asian- and white-sounding names significantly higher than those seemingly from people with names that sounded more stereotypically Black. 

For Neale, this is an area of clear-cut bias that companies should take note of. “Biased AI tools can lead to a lack of diversity within a given industry, ultimately decreasing profitability, creativity and productivity,” says Neale. “When there is a lack of diversity in a workplace, the culture suffers.”

Identifying relevancy

Part of the problem is that AI is often allowed to analyse data that is irrelevant to the task at hand, and for Neale, “it’s not relevant for the AI to have access to the names of candidates”. He also notes issues with the scanning of “gaps” in CVs, where a person may not have disclosed the private and personal reasons for taking an extended break from work. 
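One straightforward safeguard, in line with Neale’s point that candidate names are irrelevant, is to strip such fields from candidate records before any automated scoring takes place. A minimal sketch (the field names are hypothetical):

```python
# Drop fields that are irrelevant to job performance before a CV
# reaches any automated screening step (hypothetical field names).
IRRELEVANT_FIELDS = {"name", "address", "photo", "date_of_birth"}

def redact(candidate: dict) -> dict:
    """Return a copy of the candidate record without irrelevant fields."""
    return {k: v for k, v in candidate.items() if k not in IRRELEVANT_FIELDS}

candidate = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "skills": ["python", "sql"],
    "experience_years": 5,
}
print(redact(candidate))  # only 'skills' and 'experience_years' remain
```

Redaction alone does not remove proxies for protected attributes (an address can hint at social class, for instance), which is why Neale also stresses human oversight of AI decisions.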

To mitigate these issues, he says, “it’s highly important to ensure diverse representation among the teams designing and implementing the AI tools” and to be transparent with candidates about how AI is being deployed. Employers should also “incorporate human oversight and intervention into decisions about candidates”.

AI is a powerful resource that, when used correctly, can greatly benefit the world of work and the people within it, especially in areas of accessibility and work-life balance. “AI can make more consistent and reliable decisions than humans,” says Neale. 

As for how job applicants can limit the effects of AI bias in the pre-interview stages, Neale advises including only relevant information in your documentation, including any social media accounts linked to your application. You should also check that your CV can be easily read by AI systems and does not include unnecessary graphics or decorative fonts. 

Legally, AI usage has left both its users and those subject to it in a precarious position. “Some laws provide applicants with the ability to opt out of automated processing,” explains Neale. However, opting out of the process can also affect your chances of securing the job. 

“Anti-discrimination laws apply to employment decisions made using AI just as much as they apply to selection processes that don’t use AI. So, in relation to employment discrimination, there are existing laws that can be enforced to address biased AI decision making,” he says. 

But AI technologies still pose risks, for example in the lack of transparency around how decisions are made and in the ability of AI to convincingly imitate a human. The introduction of laws such as the EU’s AI Act will hopefully “help users of AI tools to make more informed decisions about the risks and benefits of the AI tools”. 

At the end of the day, says Neale, “you’re much more likely to be discriminated against by a human than by an AI”. However, from a developer and a user perspective, there is always a “responsibility to monitor the tool to ensure that it is performing as expected”.


By Laura Varley

Laura Varley is a Careers reporter at Silicon Republic. She has a background in technology PR and journalism and is borderline obsessed with film and television, the theatre, Marvel and Mayo GAA. She is currently trying to learn how to knit.
