How machine learning can help support cancer survivors


26 Nov 2024

Image: Philip O'Brien

Philip O’Brien from SETU’s Walton Institute says the hype around AI has led to many misconceptions about whether and when to use it.

Using federated (decentralised) machine learning, Philip O’Brien’s current research aims to predict mental health outcomes of cancer patients with the ultimate goal of improving early interventions for this vulnerable group.
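As a rough illustration of what federated (decentralised) machine learning means in practice, the sketch below shows federated averaging: each site trains a model on its own data and only the model parameters, never the raw patient data, are shared and combined. The toy linear model, the two "hospital" datasets and all names here are invented for illustration, not the project's actual implementation.

```python
# Minimal sketch of federated averaging (FedAvg): sites train locally,
# a server averages the resulting parameters weighted by dataset size.
# Everything below is a toy example, not FAITH project code.

def local_update(weights, data, lr=0.01, epochs=5):
    """One site's local training: SGD on a toy model y ~ w0 + w1*x."""
    w0, w1 = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w0 + w1 * x) - y
            w0 -= lr * err
            w1 -= lr * err * x
    return [w0, w1]

def federated_average(updates, sizes):
    """Server step: average each parameter, weighted by local data size."""
    total = sum(sizes)
    return [
        sum(w[i] * n for w, n in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

# Two hypothetical sites, each with private (x, y) observations.
site_a = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
site_b = [(1.5, 3.0), (2.5, 5.1)]

weights = [0.0, 0.0]       # global model shared with every site
for _ in range(20):        # communication rounds
    updates = [local_update(weights, site_a),
               local_update(weights, site_b)]
    weights = federated_average(updates, [len(site_a), len(site_b)])
```

The key property is that `site_a` and `site_b` never leave their owners; only the small parameter lists do, which is what makes the approach attractive for sensitive clinical data.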

With a degree and a master’s in electronic engineering, O’Brien is technical lead with the Mobile Ecosystem and Pervasive Sensing (MEPS) Division in the Walton Institute at South East Technological University. He specialises in artificial intelligence (AI) and machine learning to drive innovation and solve complex problems.

‘Some companies approach AI with misconceptions – not just about what AI can do for them, but whether they actually need it’

Tell us about your current research.

My current focus is the FAITH project, whose overarching goal is to identify trends in the mental health of cancer patients who have undergone therapy. This project brings together a diverse consortium of nine partners across five EU member states. We’re conducting trials with cancer survivors in hospitals in Madrid and Lisbon, collecting a wide range of data including activity levels, sleep patterns, nutrition and even changes in vocal features.


Throughout the trials, we align this data with clinically validated depression questionnaires conducted through the hospitals. Our aim is to find markers in the dataset that correspond with changes in these questionnaires, with the hope of using this data to predict mental health trends.
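The kind of marker search described here can be sketched as a simple correlation check: does a daily-life signal move with changes in a validated questionnaire score? The sleep-hours signal, the score changes and the threshold below are all invented for illustration; a real analysis would use the trial's actual features and far more rigorous statistics.

```python
# Hedged sketch: testing whether a hypothetical daily-life signal
# (average sleep hours) correlates with changes in a depression
# questionnaire score. Data here is made up, not FAITH trial data.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-patient averages over a trial period.
sleep_hours  = [7.5, 6.0, 8.0, 5.5, 6.5, 7.0]
score_change = [-2,   3,  -4,   5,   2,  -1]   # positive = worsening

r = pearson_r(sleep_hours, score_change)
is_candidate_marker = abs(r) > 0.5   # illustrative screening threshold
```

A strong negative `r` here would flag sleep as a candidate marker worth deeper clinical investigation; on its own, of course, correlation says nothing about causation.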

In your opinion, why is your research important?

I believe this research is important because we’re aiming to uncover new insights into why cancer survivors experience high rates of depression and anxiety. By working closely with hospitals in Madrid and Lisbon and leveraging their expertise in patient care during our trials, we’re able to cast a wide net over the data people generate in their daily lives. The goal is to identify markers in the data that might otherwise go unnoticed, and then use those markers to predict the onset of depression or anxiety before it becomes a serious issue.

The significance of this work lies in the potential to enable early identification and prediction of negative mental health trends. For cancer survivors who’ve already endured so much, this kind of early intervention could be truly empowering. If we can play even a small role in reducing the incidence and severity of these mental health challenges, it would lead to an overall improvement in wellbeing for this vulnerable group. That’s the fundamental impact we’re striving for.

What inspired you to become a researcher?

There wasn’t a single moment that sparked my desire to become a researcher; it was more of a gradual journey. I initially studied electronic engineering and worked as a research assistant in that field. After completing my master’s in electronics, I found myself gradually shifting towards software engineering. Over time, I realised that what truly fulfils me is using these skills to tackle complex problems. The work is rewarding because not only do you collaborate with talented people, but you also contribute, in some small way, to advancing solutions to important challenges. That’s a deeply satisfying way to work.

What are some of the biggest challenges or misconceptions you face as a researcher in your field?

That’s an interesting question because there are several challenges and misconceptions we often encounter. At Walton, we’re in a unique position. We work on a variety of research projects – European initiatives, SFI projects, and more – while also engaging with start-ups, SMEs and larger companies through our innovation gateway. Each of these touchpoints presents its own set of challenges.

Having worked in AI for nearly 10 years, one recurring issue I’ve observed is that many people are exposed to AI at a very surface level, often through media snapshots. This can lead to a lot of hype in the wrong areas because it makes for an intriguing soundbite. Consequently, some companies approach AI with misconceptions – not just about what AI can do for them, but whether they actually need it.

I’ve encountered businesses, for instance, that felt they had to adopt AI because they kept hearing about it, or because their CEO wanted it integrated into the product roadmap. However, the real challenge they faced wasn’t a lack of AI, but rather fundamental issues like not having the right data infrastructure in place. Before considering AI, they needed to ensure they were collecting enough of the right kind of data over a sufficient period. This is a misconception we often have to address.

Do you think public engagement with science and data has changed in recent years?

Public engagement with science has definitely changed in recent years. A major shift occurred during Covid, when scientific concepts that were once confined to academic circles suddenly became part of everyday conversation.

The internet, social media, and platforms like TikTok have made scientific information more accessible than ever, which on the surface seems positive. However, when that information is manipulated to support certain political or ideological agendas, it can be very harmful. This manipulation has contributed to a growing erosion of trust in the scientific community and the scientific process itself.

Even when scientists do their best to communicate, the polarisation we see today makes it harder for people to trust the process.

Overall, while there’s greater public awareness and interest in science, there’s also more potential for scepticism and misinterpretation. The scientific community could certainly improve how it communicates with the public to address these challenges.
