How scientists are trying to make autonomous tech safer

15 Apr 2022


New guidance that aims to help businesses make machine learning-based autonomous products safer has been developed in the UK.

The rise in automation is clear in self-driving cars, delivery drones and robots, and ensuring that the technology behind them is safe can prevent serious harm to human life.

But for a long time, there has been no standardised approach to safety when it comes to autonomous technologies. Now, a team of UK scientists is taking on the challenge to develop a process that it hopes will become a standard of safety for most things automated.

The new guidance has been developed by researchers working for the Assuring Autonomy International Programme (AAIP) at the University of York in the UK. The aim is to help engineers build a ‘safety case’ that boosts confidence in technologies based on machine learning – before the tech reaches the market.

“The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place,” said Dr Richard Hawkins, senior research fellow at the University of York and one of the authors of the new guidance.

Hawkins thinks that most sectors using autonomous systems are struggling to develop new guidelines fast enough to ensure people can trust robotics and similar technologies. “If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident,” he added.

The methodology, known as Assurance of Machine Learning for use in Autonomous Systems (AMLAS), has already been used in applications across the healthcare and transport sectors, with clients such as NHS Digital, the British Standards Institution and Human Factors Everywhere using it in their machine learning-based tools.

“Although there are many standards related to digital health technology, there is no published standard addressing specific safety assurance considerations,” said Dr Ibrahim Habli, a reader at the University of York and another author of the guidance. “There is little published literature supporting the adequate assurance of AI-enabled healthcare products.”

Habli argues that AMLAS bridges a gap between existing healthcare regulations, which predate AI and machine learning, and the proliferation of these new technologies in the domain.

The AAIP pitches itself as an independent and neutral broker that connects businesses with academic research, regulators, insurance and legal experts to write new guidelines on safe autonomous systems.

Hawkins said that AMLAS can help businesses and individuals with new autonomous products to “systematically integrate safety assurance” into their machine learning-based components.

“Our research helps us understand the risks and limits to which autonomous technologies can be shown to perform safely,” he added.


Vish Gain is a journalist with Silicon Republic
