Why context-aware computing is the future of AI tech

19 Oct 2022

Lama Nachman. Image: Intel

Lama Nachman and her team at Intel are working on adding context into computing with sensor technology to better support people in the real world.


AI and smart computing are already advancing at breakneck speed. And now with emerging tech in the areas of sensor networks, computer architecture, embedded systems and wireless technology, the applications could be endless.

This is what Lama Nachman is working on at Intel. With more than 20 years of experience, Nachman is an Intel fellow and director of the Intelligent Systems Research Lab in Intel Labs.

Her research is focused on creating contextually aware experiences that understand users through sensing and sense-making, anticipate their needs and act on their behalf.

“If you really want to support people, you really need to be aware of the context that they’re in,” she told SiliconRepublic.com.

“So, if I’m trying to help somebody in the [fabrication plant], depending on what specific action they’re taking, and what is in front of them, and where they’re struggling, you need to be able to comprehend that, see that, process that and then provide them with the right assistance, knowing all of these things.”

‘A camera can tell you what somebody is doing. But sometimes a camera is very intrusive’
– LAMA NACHMAN

Nachman also said computers that can ‘see’ you are more likely to understand the context of the problem. For example, children struggling with a maths problem may not say that in so many words, but they may shake their heads instead.

This, she explained, is the bones of context-aware computing.

“If you’re thinking about location-based technologies, you’re trying to search for something. Well, if you actually search within the area that somebody is in, that’s essentially using the context of their location to help make the system a bit smarter. So now, take that on steroids on all of these things.”
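To make that concrete, here is a minimal sketch in Python of location-scoped search. The coordinates, place list and radius are illustrative assumptions, not any particular product: the distance filter simply narrows the search to the area the person is in.

```python
# A minimal sketch of location-scoped search: results are filtered to places
# near the user. All names, coordinates and the radius are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def search_nearby(query, places, user_lat, user_lon, radius_km=2.0):
    """Return places matching the query, restricted to the user's vicinity."""
    return [p for p in places
            if query.lower() in p["name"].lower()
            and haversine_km(user_lat, user_lon, p["lat"], p["lon"]) <= radius_km]

# Hypothetical data: the search space shrinks to what is relevant where the user is.
places = [
    {"name": "Coffee Corner", "lat": 53.3441, "lon": -6.2675},
    {"name": "Coffee Dock",   "lat": 53.4200, "lon": -6.1000},
]
print(search_nearby("coffee", places, 53.3438, -6.2546))
```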

Sensor and wireless tech

In order to bring this kind of computing into the real world, a lot of sensing technology is required to provide that context piece.

One area Nachman and her team have worked on is in the healthcare space, where it’s not just information about the environment and what people are doing that is needed, but also data on heart rates, galvanic skin responses or other data points that are important to a person’s health.

This is where due diligence on responsible AI comes into play.

“One of the areas that we’ve been looking at specifically is if you really want to improve privacy, you want to limit in some sense the amount of data that’s being collected to what you’re trying to extract out of it,” she said.

“For example, a camera can tell you what somebody is doing. But sometimes a camera is very intrusive.”

In elder care, for example, a common goal would be to keep people comfortable in their homes for longer. However, using cameras to make sure they’re OK can feel invasive and may lead them to turn the cameras off altogether.

“So, one of the areas that we’ll be looking at is, if we want to understand their activities, you could actually push other sensors and that could be cameras that don’t have actually RGB information, but just thermal images so that you don’t see exactly what’s happening.”
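As a rough illustration of that trade-off, the sketch below assumes a coarse 8x8 thermal grid (a resolution typical of low-cost thermal sensors, assumed here) rather than an RGB camera: it can tell that a warm body is present and roughly where it is, without capturing an identifiable image. The temperature thresholds are also assumptions.

```python
# A hedged sketch of the idea above: an 8x8 thermal grid is enough to tell
# that a warm body is present and roughly where, without an identifiable image.
import numpy as np

AMBIENT_C = 22.0      # assumed room temperature
BODY_DELTA_C = 6.0    # pixels this far above ambient are treated as a person

def detect_presence(thermal_frame):
    """Return (present, (row, col) of the warmest region) from a coarse thermal frame."""
    warm = thermal_frame > AMBIENT_C + BODY_DELTA_C
    if not warm.any():
        return False, None
    row, col = np.unravel_index(np.argmax(thermal_frame), thermal_frame.shape)
    return True, (int(row), int(col))

# Simulated frame: background at ambient, a warm blob where a person stands.
frame = np.full((8, 8), AMBIENT_C)
frame[3:5, 5:7] = 31.0
print(detect_presence(frame))   # -> (True, (3, 5))
```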

Nachman also said wireless signals could be used to sense a person without being as invasive as transmitting full images.

“When people walk around an environment, they actually disrupt the wireless signals. So, you could actually look at the disruption in the wireless signals and try to extract movement in the environment from it. And you can start to map that movement into activity,” she said.

“So you know that somebody just walked around, went to the kitchen, did their usual things over time, and then you can say that person is actually engaged in their own activities of daily living, versus ‘Oh my God, they’re not doing anything today. What’s going on?’”

Once you can sense and track these wireless signals, machine learning can then be used to extract knowledge and apply it to specific use cases. This could include Nachman’s elder care example, but also other applications.
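A minimal sketch of that idea, assuming a stream of received-signal-strength (RSSI) samples from a wireless link rather than Intel’s actual pipeline: movement in the room perturbs the signal, so a rolling variance above a threshold acts as a crude motion flag that later stages could map to activities.

```python
# A minimal sketch: movement in a room perturbs a wireless link, so the
# variance of RSSI samples (in dBm, sampled here at an assumed 10 Hz) rises
# when someone walks by. Thresholds and window size are illustrative.
import numpy as np

def movement_windows(rssi, window=20, threshold_db2=4.0):
    """Return one boolean per window: True where signal variance suggests motion."""
    flags = []
    for i in range(len(rssi) // window):
        chunk = rssi[i * window:(i + 1) * window]
        flags.append(float(np.var(chunk)) > threshold_db2)
    return flags

# Simulated trace: a quiet room, then someone walking past the link.
rng = np.random.default_rng(0)
quiet = -55 + rng.normal(0, 0.5, 200)      # stable signal
walking = -55 + rng.normal(0, 3.0, 100)    # disturbed signal
print(movement_windows(np.concatenate([quiet, walking])))
```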

One simple example she gave was waking and locking a computer depending on where the user is.

“You can leverage that same mechanism and say, ‘OK, somebody’s approaching, let’s bring up the PC so it’s ready for interaction.’ Or, ‘Someone is leaving, let me actually lock the machine.’ So that’s just a very basic usage in terms of making the PC app smarter,” she said.
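A small sketch of that wake/lock behaviour is below. The presence values and the wake and lock calls are placeholders standing in for the sensing layer and the operating system, not a real API.

```python
# Placeholder sketch of presence-driven wake/lock. wake_pc/lock_pc stand in
# for real OS calls; the presence values stand in for the sensing layer.
def wake_pc():
    print("Waking display -- someone is approaching")

def lock_pc():
    print("Locking machine -- user has walked away")

def handle_presence_change(present, was_present):
    """React to arrivals and departures; return the new presence state."""
    if present and not was_present:
        wake_pc()
    elif was_present and not present:
        lock_pc()
    return present

# Simulated poll sequence: user arrives, stays a while, then walks away.
state = False
for present in [False, True, True, False]:
    state = handle_presence_change(present, state)
```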

“The other thing that we’ve been looking at with wireless is actually even extracting breathing signals out of wireless. And somebody can think that that’s insane. In some sense, yes, the movement of the chest is very small, so it’s hard to actually understand that movement. But it’s very periodic, so you could actually make it easier to actually pick up.”

Nachman said using wireless tech in this way could be used to manage certain health conditions or monitor stress levels without the use of physical wearables on your chest. “It’s almost like a radar.”
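The periodicity is what makes this feasible: even a tiny chest displacement produces a clear peak in the spectrum of the reflected signal. The sketch below illustrates the principle on a simulated trace (the signal, noise level and 20Hz sampling rate are assumptions), searching only the plausible respiration band of roughly 0.1-0.5Hz, or 6 to 30 breaths per minute.

```python
# A hedged sketch of why periodicity helps: the chest movement is tiny, but it
# is periodic, so it shows up as a clear spectral peak. The trace, noise level
# and 20 Hz sampling rate are simulated assumptions, not measured data.
import numpy as np

FS = 20.0  # samples per second

def breathing_rate_bpm(signal, fs=FS, lo_hz=0.1, hi_hz=0.5):
    """Estimate breaths per minute from the dominant peak in the respiration band."""
    signal = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz

# Simulated 60-second trace: 0.25 Hz breathing (15 bpm) buried in noise.
t = np.arange(0, 60, 1.0 / FS)
rng = np.random.default_rng(1)
trace = 0.02 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.05, t.size)
print(round(breathing_rate_bpm(trace), 1))   # ~15.0
```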

Creating context responsibly

Replacing cameras with things like wireless tech and sensors that limit the data collected is already a step in the right direction when it comes to privacy, but Nachman said responsible AI remains a key challenge.

“You really want to extract the information that the user wants you to extract, but in the least intrusive way possible. And least intrusive could be for some people, ‘I don’t want anything on my body.’ It could be that ‘I don’t want something that can take a picture of me somewhere’ and so on,” she said.

“One of the challenges is how do you do these things in a privacy-preserving way and ensure that you’re being mindful of all of the possible worries or constraints that people have.”

She also said it’s important that users have informed consent when it comes to how these systems work and the data they are gathering. “It’s not enough to say, ‘I’m going to put like 1,000 things somewhere and you’re going to consent to it’, because then they can’t make sense out of it.”

‘It’s essentially technology that can adapt over time to different needs’
– LAMA NACHMAN

Another challenge is around making the AI learning system resilient and sustainable. Nachman said that AI systems tend to fail when faced with things they have not seen before. Therefore, any change to the tasks or the environment the AI system is in reduces its resilience.

“So typically, what people will do is go retrain the systems over and over again for the different settings. That is not very sustainable,” she said.

“That’s not how people learn. You don’t learn everything from scratch every single time you need to learn something. So, one of the things that we’ve been really looking at is, since you’re actually trying to assist the human to do something, the human can give you a lot of feedback so that you can make the system learn from the user’s feedback.

“This way you reduce the amount of effort that’s needed to train the system in the first place, but then you make it more resilient over time as it encounters new things.”
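One way to picture that learn-from-feedback loop, using an off-the-shelf incremental classifier as a stand-in for whatever model such a system actually uses: rather than retraining from scratch for each new setting, the model is nudged whenever the user corrects a prediction. The labels, features and seed data here are all illustrative.

```python
# A minimal sketch of learning from user feedback with an incremental model
# (scikit-learn's SGDClassifier as a stand-in): corrections update the model
# in place instead of triggering a full retrain. All data here is illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])                 # e.g. 0 = "idle", 1 = "active" (assumed labels)
model = SGDClassifier(random_state=0)

def predict_with_feedback(model, features, ask_user):
    """Predict; if the user corrects the prediction, update the model in place."""
    features = features.reshape(1, -1)
    guess = model.predict(features)[0]
    true_label = ask_user(guess)           # user confirms or corrects the guess
    if true_label != guess:
        model.partial_fit(features, [true_label])
    return true_label

# Small assumed seed set up front, then incremental updates thereafter.
seed_X = np.array([[0.1, 0.2], [0.9, 0.8]])
seed_y = np.array([0, 1])
model.partial_fit(seed_X, seed_y, classes=CLASSES)
print(predict_with_feedback(model, np.array([0.85, 0.9]), ask_user=lambda g: 1))
```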

Nachman said one of the elements of this technology she is most excited about is how it can adapt to people with different constraints.

“Then you can start to think about accessibility as a spectrum. So it’s not ‘technology for the disabled and technology for the people who are able’, right? It’s essentially technology that can adapt over time to different needs, that’s more contextually aware, that can help bridge that gap,” she said.

“My hope is that that starts to really translate into accessible by design.”


Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com