Dr Amir Atapour-Abarghouei’s research aims to remove data biases to ‘pave the way for AI systems that are fairer and more equitable’.
Dr Amir Atapour-Abarghouei’s PhD work focused on computer vision and scene understanding, using artificial intelligence (AI) and machine learning technologies.
One component of his project involved the task of monocular depth estimation.
“This challenging task involves estimating how far the objects in an image are from the camera by predicting a depth value (distance relative to the camera) for each pixel, given a single RGB image,” Atapour-Abarghouei explains.
“Obtaining accurate depth images can be very time-consuming, expensive or even practically intractable, which requires thinking outside the box.”
To solve this problem, Atapour-Abarghouei found a novel data source.
“I turned to the popular video game Grand Theft Auto V, which turns out to be a great source of nearly photorealistic synthetic images accompanied by pixel-perfect depth information from complex, if sometimes a bit overly exciting, driving scenarios.”
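Framed concretely, monocular depth estimation is per-pixel regression: the model maps an RGB image to a same-sized grid of distances, and synthetic renders supply the pixel-perfect ground truth. The toy sketch below illustrates that supervised framing only — random arrays stand in for game-engine imagery, the "scene" weights are invented, and a linear least-squares fit replaces the deep convolutional networks used in real systems.

```python
import numpy as np

# Toy illustration of supervised monocular depth estimation: a model maps
# each RGB pixel to a scalar depth value, trained on synthetic images with
# pixel-perfect ground-truth depth. Random data and a linear fit stand in
# for game-engine renders and a deep network.

rng = np.random.default_rng(0)

H, W = 32, 48                        # image height and width
rgb = rng.random((H, W, 3))          # synthetic RGB image in [0, 1]
true_w = np.array([4.0, 1.0, 0.5])   # hypothetical "scene" weights
depth_gt = rgb @ true_w + 2.0        # pixel-perfect synthetic depth map

# Fit a per-pixel linear regressor (RGB -> depth) by least squares.
X = rgb.reshape(-1, 3)
X = np.hstack([X, np.ones((X.shape[0], 1))])   # bias term
w, *_ = np.linalg.lstsq(X, depth_gt.reshape(-1), rcond=None)

depth_pred = (X @ w).reshape(H, W)   # predicted depth, one value per pixel
print(depth_pred.shape)              # same spatial size as the input image
```

Because the synthetic depth here is exactly linear in the RGB values, the fit recovers it perfectly; the point is the supervision pattern — dense ground-truth depth paired with every training image — which is exactly what synthetic data makes cheap to obtain.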
Nowadays, Atapour-Abarghouei is an assistant professor in the Department of Computer Science at Durham University in the UK where he continues to conduct exciting research using machine learning and AI.
Tell us about your current research.
My current research primarily explores the exciting realm of machine vision and cognition, where fast-paced, cutting-edge disciplines such as machine learning, deep learning and computer vision converge to revolutionise our understanding of the world.
One of my recent areas of research involves removing algorithmic bias from future automated skin lesion classification systems.
AI systems for melanoma detection already demonstrate dermatologist-level performance and are transforming the way skin lesions are diagnosed and treated.
It is crucial to recognise that these AI systems are not immune to the biases present in the data they are trained on. Such biases can manifest in various forms, such as underrepresentation of certain demographic groups or overrepresentation of specific skin types, leading to prediction irregularities that may disproportionately affect certain individuals or communities.
The bias can also present in the form of surgical markings and rulers placed on the skin by clinicians for diagnostic purposes.
Suggesting that dermatologists avoid using these aids in the future is highly unrealistic and could potentially be detrimental to their performance.
Another form of bias relates to the imaging instrument used to capture lesion images. This means a typical AI model that has been trained on data captured in a specific clinic, under certain environmental conditions and using a particular sensor, cannot be reliably deployed in other clinics under different conditions.
Failing to address these biases could result in misdiagnoses, reduced trust in the technology and perpetuation of healthcare disparities.
My research involves using automatically generated labels to robustly remove skin-type bias from the melanoma classification pipeline through 'bias unlearning'.
Such a technique forces the machine learning model to learn the useful cues that can lead to the correct classification of the lesion while intentionally disregarding, or ‘unlearning’, any knowledge of skin tone.
This improves the accuracy of the system beyond the performance of an experienced dermatologist. The approach also generalises to images of individuals from differing ethnic origins, reducing the disparity between melanoma detection performance on lighter and darker skin tones, even when the training dataset is dominated by individuals with lighter skin tones.
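One standard way to implement this kind of unlearning is with a confusion loss in an adversarial set-up, and the sketch below assumes that formulation; the data, dimensions and architecture are hypothetical toy stand-ins, not the actual pipeline. A shared feature extractor feeds a task head (lesion label) and a bias head (skin tone). The bias head trains normally as a probe, while the shared features are simultaneously pushed to make the probe's predictions maximally uncertain, so skin-tone cues are discarded.

```python
import numpy as np

# Toy sketch of bias 'unlearning' with a confusion loss: the shared
# features learn the task while driving a skin-tone probe toward 50/50
# uncertainty. All data and dimensions are hypothetical stand-ins.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

n, d, k = 512, 6, 4                    # samples, input dims, feature dims
X = rng.normal(size=(n, d))
y_task = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # 'lesion' label
y_tone = (X[:, 2] > 0).astype(float)                  # 'skin tone' label

W = 0.1 * rng.normal(size=(d, k))      # shared feature extractor
wm = np.zeros(k)                       # task (melanoma) head
wa = np.zeros(k)                       # bias (skin tone) probe head
lr, lam = 0.1, 1.0                     # learning rate, unlearning strength

for _ in range(2000):
    Z = X @ W                          # shared features
    gm = sigmoid(Z @ wm) - y_task      # task-loss gradient at the logits
    ga = sigmoid(Z @ wa) - y_tone      # tone-loss gradient (probe head)
    gc = sigmoid(Z @ wa) - 0.5         # confusion gradient: push tone
                                       # predictions toward 50/50
    # The feature extractor minimises task loss plus confusion loss, while
    # the tone head itself still trains to *detect* tone as an adversary.
    gW = (X.T @ np.outer(gm, wm) + lam * X.T @ np.outer(gc, wa)) / n
    wm -= lr * Z.T @ gm / n
    wa -= lr * Z.T @ ga / n
    W -= lr * gW

Z = X @ W
acc_task = np.mean((sigmoid(Z @ wm) > 0.5) == (y_task > 0.5))
acc_tone = np.mean((sigmoid(Z @ wa) > 0.5) == (y_tone > 0.5))
print(f"task accuracy: {acc_task:.2f}, tone-probe accuracy: {acc_tone:.2f}")
```

The intended behaviour is that the task head stays accurate while the tone probe is driven toward chance-level confidence; in published work the same principle is applied to deep networks and real dermoscopic images rather than this linear toy.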
In your opinion, why is your research important?
My research, like that of any other researcher following the scientific method, is inherently important as it drives knowledge expansion, innovation and technological advancement – in my case, in the areas of computer vision and machine learning.
It plays a pivotal role in finding solutions to pressing issues currently hindering the impact of intelligent AI systems in various aspects of daily life.
The current research I discussed earlier focuses on automated diagnostic systems for skin cancer using AI. An automatic diagnosis system for skin lesions or any other domain within health and medicine can provide an accessible and cost-effective solution, ensuring that essential healthcare reaches individuals who may have limited access to specialist medical practitioners.
As such, research on medical diagnostic tools that takes advantage of AI systems can have significant impact on the early detection of life-threatening conditions, improve patient outcomes and alleviate the strain on healthcare resources.
It is important to state also that algorithmic bias can lead to unfair or prejudiced decisions, often exacerbating existing societal inequalities. By eliminating bias, we pave the way for AI systems that are fairer and more equitable in their decision-making processes. This means that AI can be effectively used to improve our daily lives and provide a more level playing field for everyone, irrespective of their background.
Ultimately, striving to eliminate bias from machine learning systems holds the promise of creating a fairer, more innovative and harmonious future.
What inspired you to become a researcher?
This is an interesting question. Thinking back over my life to find out what led me down the path of becoming a researcher, especially in machine learning and computer vision, I have realised my interest in this type of research is a result of many small, seemingly insignificant events during my developmental years: from the first time I saw a Commodore 64 (those can only be found in museums now), my first coding experience in Visual Basic (it was not very nice!), the first computer game I ever played (Prince of Persia ported to MS-DOS, installed via a 3.5-inch floppy disk) and the first time I entered a robotics competition at school (building a useless robot that could not even see anything), to my first hackathon in high school (my team lost, badly!).
However, if I were to pick the most vivid memory that sparked my interest in cutting-edge research, it would be reading about the Navlab experiments when I was around 14 years old. Navlab 1 was among the first self-driving vehicles to be controlled by a neural network.
The vehicle was an old Chevrolet panel van equipped with cameras, sensors and computers. It used neural networks to process visual information from the cameras and make driving decisions. The neural network was trained to recognise road markings, signs and obstacles, allowing the vehicle to steer, accelerate and brake based on the detected information.
This was, of course, a very early example of using neural networks for autonomous driving. The technology and available computing power of the time were very limited, and the self-driving capabilities of Navlab 1 were modest, to the point of being funny, compared to today's standards.
But reading, in a gossip magazine, about the hype around a car that could drive itself, at an age when I was still defining myself, led me to question the boundaries of what I thought was possible. This was one of the moments that made me think 'I want to do that!'
What are some of the biggest challenges or misconceptions you face as a researcher in your field?
My research comes with a lot of day-to-day operational challenges as well as more inherent long-term difficulties, but of course since the goal of research is to solve problems, additional constraints only add to the fun of finding solutions.
Challenges that any AI researcher needs to deal with include the scarcity of high-quality data needed to train machine learning models effectively and the need for extensive computational resources, which can be expensive and limit the accessibility of research.
Keeping up with the rapid advancements of the field can also be a challenge. Even though it is very exciting to follow new advances in machine learning and AI, the field evolves very rapidly, and researchers must invest a significant amount of time to stay up to date with the latest techniques, frameworks and research papers. That said, this rapid advancement also means it is a great time to be an AI researcher.
There are also many common misconceptions within the current 'frenzy' of machine learning and AI that one needs to be aware of. For instance, many people think that the recent AI systems receiving so much public attention have human-like general understanding, or that machine learning models are infallible and don't make mistakes, which of course is not true at all.
There are many ethical concerns regarding the recent fast-paced development of AI systems that need to be addressed to guide the direction of research for the good of humanity, such as bias and fairness, privacy concerns, inequality and access, transparency and explainability, accountability and liability, misinformation and threats to democracy, and many others.
However, many of these valid concerns are often drowned out by the sensational overhyped misconceptions regarding the “imminently superintelligent, self-aware AI that can rebel and operate beyond its programming to take our jobs”.
Addressing these misconceptions and promoting realistic expectations is important, which brings forth another important challenge for researchers in my field – effective communication with non-technical audiences.
Explaining complex machine learning concepts to non-technical stakeholders, such as policymakers, industries, businesses and the general public can be challenging but crucial for the healthy advancement of science and technology.
Do you think public engagement with science has changed in recent years?
Yes, I think public engagement with science has indeed undergone profound changes in recent years, and events like the Covid-19 pandemic have substantially influenced these changes.
The pandemic heightened interest in scientific topics, leading to a surge in members of the public seeking information about viruses, vaccines, epidemiology and public health measures.
I, myself, as someone with no scientific knowledge of viruses and vaccines, experienced this first-hand.
This newly invigorated public interest extends beyond the pandemic and can be seen in areas such as AI and machine learning, quantum computing, space exploration and astronomy, climate change and environmental sciences, and renewable energy technologies, among others. This increased public engagement emphasises the importance of accessible and accurate science communication.
On the other hand, the Covid-19 pandemic also highlighted the prevalence of misinformation and pseudoscience. It underscored the importance of countering false narratives and the need for effective and objective involvement of the scientific community within ‘the marketplace of ideas’ to combat misinformation.
My goal has always been to disseminate my scientific findings and engage the public by means of clear and accessible communication to explain complex concepts, though this is not always easy.
Avoiding jargon and technical terms that might alienate non-experts can make the field more approachable and drive public engagement. I am always happy to collaborate with schools, libraries and community centres to offer workshops, talks or exhibits that introduce AI to diverse age groups and people from different backgrounds.
I also follow the philosophy of community contributions via open-source projects. The source code of all the software produced as part of my research projects, and even my teaching demonstrations, is open source and can be freely accessed by everyone. This is a great practice and underscores the collaborative nature of the field, which facilitates public engagement.