For AI to be effective in healthcare, gender bias must be addressed

8 Mar 2021


Researcher Orna Fennelly discusses the benefits that AI can bring to the healthcare sector – once gender bias is tackled first.

AI has become a part of our day-to-day lives, from recommendations on Netflix and online advertising on social media to facial recognition on our phones and fitness trackers.

While many applications of AI are making our lives more efficient by (sometimes) speeding up our choice of the next TV programme to binge, there is huge potential for AI to revolutionise the provision of healthcare.

AI can be used to quickly identify abnormalities on CT scans, such as strokes, and immediately notify the relevant healthcare professionals. It can also help healthcare professionals to make more accurate decisions, alert them to changes in a patient’s condition and even help them document patient care more efficiently, allowing them to spend more time with patients.

Although we have seen some healthcare applications of AI, unlike in other fields, it is not yet widespread in the delivery of healthcare.

One reason for the relatively measured adoption of AI in healthcare is the need to avoid the risk of reproducing gender-based and other types of discrimination within the algorithms, with consequent misdiagnosis.

While decision-making by machines has the potential to mitigate stereotypical thinking, our societal conscious and unconscious biases are often present in the training data or they can be coded into the algorithm by biased developers.

This not only replicates existing biases and discrimination, including historic imbalances in areas such as hiring practices and the gender pay gap, but can exacerbate them and increase the negative economic and social consequences. The ‘black box’ nature of AI also means that the bias can be hidden from view.

This is exactly what Amazon uncovered in its job recruitment engine, which rated male candidates higher than female candidates for technical positions because the algorithm had learned from a male-dominated pattern of employment.

The risk of reproducing these biases is very concerning as women only represent 25pc of the STEM workforce in Ireland, 18pc of Irish businesses have no women in senior management roles, and women working in scientific research and development positions in Ireland earn on average 30pc less than men.

Similarly, a study found that Google Translate exhibited a strong tendency toward male defaults, in particular for STEM fields. As well as replicating gender-based biases within the workforce, training AI algorithms on data which is not diverse results in inferior technology.

A recent study found that facial recognition software was 99pc accurate for light-skinned male faces but as low as 65pc accurate for darker-skinned female faces. Similarly, voice recognition systems have been shown to be up to 13pc more accurate for male voices than female voices.
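The practice behind findings like these is a disaggregated accuracy audit: instead of reporting one overall score, a model is evaluated separately for each demographic group. A minimal sketch of such an audit in Python, using invented toy data and a deliberately crude stand-in model (the groups, counts and labels here are all hypothetical, not figures from the studies above):

```python
from collections import Counter

# Hypothetical toy data: (group, label) pairs. Group "A" dominates the
# training set and is mostly labelled 1, so a naive model absorbs that skew.
train = [("A", 1)] * 80 + [("A", 0)] * 5 + [("B", 0)] * 10 + [("B", 1)] * 5

# A deliberately crude "model": always predict the overall majority label.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

# Test data in which the true label rates differ between the two groups.
test = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 40 + [("B", 0)] * 60

def accuracy_for(group):
    """Audit step: measure accuracy separately for one demographic group."""
    labels = [label for g, label in test if g == group]
    return sum(1 for label in labels if label == majority_label) / len(labels)

print(accuracy_for("A"))  # high: the model fits the over-represented group
print(accuracy_for("B"))  # low: the under-represented group is poorly served
```

A single overall accuracy figure would hide this gap entirely, which is why auditing performance per group, rather than in aggregate, is what exposes the kind of disparities the studies above reported.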

These inaccuracies pose huge societal challenges. In 2017, an Irish female vet said she failed her computerised oral English test required to stay in Australia, alleging there was a flaw in the software. Pearson, the online test provider, denied there was anything wrong with its computer-based test.

AI and gender bias in healthcare

The importance of the technology recognising the differences between sexes and genders, as well as ethnicity, race, geographic location and socioeconomic status, is further strengthened in healthcare.

Diseases manifest differently in men and women and can occur more commonly in certain populations. However, according to a recent study, gender-based differences are often not considered or recognised in the development of algorithms for healthcare.

This risks a health condition being missed or misinterpreted and poses major patient safety concerns. According to the Irish Heart Foundation, heart attacks and strokes are among the biggest killers of women in Ireland, but they are often missed because the symptoms experienced by women, for example nausea and back pain, can differ from those of men, who are more likely to experience the crushing chest pain that shoots down the arm.

Similarly, chronic pain affects a higher proportion of women but, according to the International Association for the Study of Pain, women are less likely to receive treatment due to a lack of awareness of how women experience pain differently. Though often unconscious, these gender-based biases can exist within AI training data and be reproduced in the healthcare setting.

Despite these concerns, AI can also contribute to the reduction of gender inequalities and biases and provide huge benefits to healthcare. This requires improvements in the diversity, inclusivity and equality of the training data, the AI developers and the testers of these technologies, as well as the involvement of key stakeholders.

AI developers also need to be trained to recognise bias and to use ethical and transparent approaches, with ongoing monitoring and greater regulation of data aggregation, quality validation and the algorithms themselves.

It is now widely recognised that the lack of diversity in the AI developer community contributes to the biases in the resulting products. To avoid perpetuating these biases in the digital world, we need to tackle gender inequalities in the real world, where women currently make up only 12pc of AI researchers.

Gender balance in tech is no longer a ‘nice to have’ but a necessity that must be prioritised in order to avoid developing products that perpetuate existing biases and exacerbate healthcare disparities.

By Orna Fennelly

Dr Orna Fennelly is an e-health researcher at the Irish Centre for High-End Computing (ICHEC) at NUI Galway. Fennelly is also a physiotherapist and has a PhD in health sciences from University College Dublin.