If you are going to own a smart speaker that is always listening in on you, you may as well let it help detect a sudden cardiac arrest.
By their nature, smart speakers such as the Amazon Echo or Google Home are designed to always be listening to what’s going on around them. While they are only supposed to activate in response to an agreed wake phrase – such as ‘Hey, Google’ – what if they could be trained to respond to other sounds?
One such way comes from a team of researchers at the University of Washington, who have revealed a new tool that works on most smart speakers and smartphones to detect when someone is experiencing a cardiac arrest. The idea is that when someone is asleep or just walking around their home and suffers a sudden cardiac arrest, this new tool could detect the gasping sound of agonal breathing and call for help.
The algorithm used to detect distress was trained using real agonal breathing captured from calls made to emergency services. Publishing its findings in npj Digital Medicine, the team said that the proof-of-concept tool detected agonal breathing events 97pc of the time from up to six metres away.
“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said co-corresponding author Shyam Gollakota.
“We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”
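The escalation Gollakota describes – alert a bystander first, then fall back to an emergency call – can be sketched in Python. This is an illustrative outline only, not the team’s code: the function names, the 30-second response window and the callback interfaces are all assumptions.

```python
# Illustrative sketch of the escalation flow described above: alert anyone
# nearby to provide CPR, and if no one responds within a timeout, fall back
# to an automatic emergency call. The 30-second window is an assumed value.

RESPONSE_TIMEOUT_S = 30


def handle_agonal_breathing(play_alert, wait_for_response, call_emergency):
    """Run the alert-then-call escalation once agonal breathing is confirmed.

    play_alert(message)        -- announce the alert on the speaker
    wait_for_response(timeout) -- True if a bystander acknowledged in time
    call_emergency(number)     -- place the automatic emergency call
    """
    play_alert("Possible cardiac arrest detected - please provide CPR")
    if not wait_for_response(RESPONSE_TIMEOUT_S):
        call_emergency("911")
```

Passing the speaker’s audio, input and telephony hooks in as callbacks keeps the escalation logic testable without real hardware.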
How it was trained
Agonal breathing appears in around 50pc of cardiac arrest cases, and patients who take these breaths often have a better chance of survival. Dr Jacob Sunshine, another co-corresponding author, said it usually occurs when a patient is experiencing critically low oxygen levels.
“It’s sort of a guttural gasping noise, and its uniqueness makes it a good audio biomarker to use to identify if someone is experiencing a cardiac arrest,” he said.
In addition to the emergency services calls used to train the tool, the team used 83 hours of audio data recorded during sleep studies, resulting in 7,305 sound samples, many of which contained snoring or sleep apnoea.
Trained on these sounds, the algorithm learned to differentiate between normal sleeping sounds and agonal breathing. During testing, it incorrectly classified a sound as agonal breathing only 0.14pc of the time, and this fell to 0pc when the team required the tool to detect two distinct events at least 10 seconds apart before classifying them as agonal breathing.
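The two-event rule that eliminated false positives behaves like a simple debounce filter. The sketch below is illustrative only: the class name, the timestamp interface and the exact confirmation logic are assumptions, not the team’s implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed minimum gap between two detections before an alarm is confirmed,
# matching the 10-second rule reported by the researchers.
MIN_GAP_SECONDS = 10.0


@dataclass
class AgonalBreathFilter:
    """Confirm an alarm only after two detections >= 10 seconds apart."""

    pending: Optional[float] = None  # timestamp of first unconfirmed detection

    def on_detection(self, timestamp: float) -> bool:
        """Feed in a detection timestamp; return True when an alarm is confirmed."""
        if self.pending is None:
            self.pending = timestamp
            return False
        if timestamp - self.pending >= MIN_GAP_SECONDS:
            self.pending = None  # reset after confirming the alarm
            return True
        # Detections too close together are treated as one event; keep waiting.
        return False
```

For example, detections at 0 s and 5 s would not raise an alarm, but a further detection at 12 s would, since it falls at least 10 seconds after the first.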
Looking to the future, the team hopes the technology could run passively as an app on smart speakers or smartphones.