Amazon had to scrap its AI hiring tool because it was ‘sexist’ and discriminated against female applicants, a report from Reuters has found.
Amazon’s hopes of creating the perfect AI hiring tool were dashed when it realised that the algorithm had one huge, glaring flaw: it was “sexist”, according to a report released by Reuters yesterday (10 October).
Reuters reporter Jeffrey Dastin spoke to five machine-learning specialists, all of whom elected to remain anonymous. In 2014, these programmers began building algorithms for a recruitment tool that would sift through job applications. The tool used AI technology such as machine learning to rate each applicant on a scale of one to five stars, much in the same way Amazon products are rated.
Amazon had hoped the tool would be a “holy grail”, according to one of the programmers. “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five and we’ll hire those.”
However, by 2015, it became apparent that the algorithm was discriminating based on gender. The computer models had been trained to rate candidates by analysing patterns in résumés submitted to Amazon over the preceding 10 years. Given that the tech industry has historically been, and continues to be, male-dominated, most of the applications the computer observed came from men.
The system then taught itself that male candidates were preferred, and penalised applications that included the word ‘women’s’, such as in ‘women’s basketball team’ or ‘women’s chess club’. The programmers interviewed by Reuters said that the AI also downgraded graduates of two all-women colleges, though they did not specify which colleges these were.
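The mechanism behind this kind of learned bias is straightforward to demonstrate. The sketch below is purely illustrative and has nothing to do with Amazon's actual system: it trains a tiny bag-of-words logistic regression on invented toy data in which résumés mentioning "women's" activities rarely appear among past hires, and shows that the model then assigns that token a negative weight.

```python
# Illustrative sketch only (NOT Amazon's system): a tiny bag-of-words
# logistic regression trained on historically biased labels.
# All resume snippets and hire/reject labels below are invented toy data.

import math
from collections import defaultdict

# Toy historical outcomes: 1 = hired, 0 = rejected. Because most past
# hires were men, resumes mentioning "women's" activities appear only
# among the negative examples.
data = [
    ("software engineer chess club", 1),
    ("software engineer basketball team", 1),
    ("software engineer women's chess club", 0),
    ("software engineer women's basketball team", 0),
    ("data scientist chess club", 1),
    ("data scientist women's chess club", 0),
]

w = defaultdict(float)  # one weight per token

def predict(text):
    """Probability of 'hire' under the logistic model."""
    z = sum(w[tok] for tok in text.split())
    return 1 / (1 + math.exp(-z))

# Simple full-batch gradient descent on the log-loss.
for _ in range(500):
    for text, label in data:
        err = label - predict(text)
        for tok in text.split():
            w[tok] += 0.1 * err

# The model reproduces the bias in its training data: the token
# "women's" ends up with a negative weight, so any resume containing
# it is scored lower.
print(w["women's"] < 0)
print(predict("software engineer women's chess club") < 0.5)
```

The point of the sketch is that nothing in the code mentions gender explicitly: the negative weight on "women's" emerges entirely from the skew in the historical labels, which is the same dynamic the Reuters report describes.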
While the company did attempt to edit the programs so that they treated these terms neutrally, it couldn’t guarantee that the tool wouldn’t find other ways to discriminate against women. The team was disbanded in 2015 after executives “lost hope” for the project, Reuters reported.
Amazon said that the tool “was never used by Amazon recruiters to evaluate candidates”, but did not dispute that recruiters looked at the recommendations it generated while hiring for positions at the company. The company declined to comment further on the matter.
The HR community is becoming increasingly interested in the potential of AI to speed up the recruitment process. In early 2018, a report on emerging HR trends released by LinkedIn found that 76pc of HR professionals believed AI would be at least somewhat significant to recruitment work in the coming years. AI was rated as most helpful at sourcing, screening and scheduling candidates.
This recent development, however, may deflate AI enthusiasts hoping to roll out these technologies in their enterprises in the near future. Because an AI is only as good as the data it is trained on, it will likely continue to reflect the diversity problems that exist in the real working world.
Yet some would still argue that AI is the solution to human-generated biases. An HR study released by IBM found that recruiters spend as little as six seconds looking at a résumé that comes across their desk. IBM argues that when making decisions based purely on instant impressions, HR professionals are more inclined to fall back on their own implicit bias, something an AI could correct by helping to expedite the initial hiring stages.
In any case, many HR workers who fear that an AI may replace them will likely breathe a sigh of relief knowing that the technology still has a long way to go before it can fully supplant human workers.