7 things to consider before adding speed-limiting technology to cars


29 Mar 2019


Prof John McDermid of the University of York outlines the key considerations before cars across the EU are fitted with speed-limiting technology.

A version of this article was originally published by The Conversation (CC BY-ND 4.0)

The recent announcement of EU rules to fit speed-limiting technology to new cars from 2022 was welcomed by many, including the European Transport Safety Council, as a move that will save lives. However, not everyone is convinced by this ‘guardian angel’ technology.

The AA pointed out that there are times – when overtaking, for example – when temporarily exceeding the speed limit may be safer. Others have said that proposed ‘black boxes’ that would record a vehicle’s speed, among other things, amount to Big Brother surveillance. So is this surveillance and intrusion justified given the potential benefits?

Speed assistance technology uses GPS to establish a car’s location and then sends the car a message about the road’s speed limit. Cars can also be fitted with a camera that reads speed limit signs at the roadside. The car – rather than the driver – would use these two inputs to keep below the speed limit. The technology the EU proposes would still leave drivers discretion, however: they would have the option to override the reduction in speed by pressing harder on the accelerator.
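To make that two-input logic concrete, here is a minimal sketch in Python. The function names and the rule of taking the lower of the two reported limits are illustrative assumptions, not the EU specification or any carmaker’s implementation:

```python
from typing import Optional

def effective_limit_kph(map_limit: Optional[int],
                        camera_limit: Optional[int]) -> Optional[int]:
    """Combine the GPS/map limit with a camera-read sign.

    If the sources disagree, take the lower (more cautious) value; if only
    one is available, use it; if neither is, return None and leave the
    driver in full control.
    """
    available = [v for v in (map_limit, camera_limit) if v is not None]
    return min(available) if available else None

def throttle_request(driver_throttle: float, speed_kph: float,
                     limit_kph: Optional[int], kickdown: bool) -> float:
    """Cap engine power at the limit unless the driver deliberately pushes
    through the accelerator (the override the EU proposal allows)."""
    if limit_kph is None or kickdown or speed_kph < limit_kph:
        return driver_throttle
    return 0.0  # at or above the limit: stop adding power
```

Even in this toy form, preferring the lower of the two limits is itself a safety policy decision that would need justification in a real system.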

This technology, which is already available in some cars, can be seen as a step towards autonomous vehicles (AVs), which will need to respect speed limits. But there are still a number of more detailed issues we need to address to resolve the question of whether the pros outweigh the cons.

1. Driver behaviour

First, how would such technology influence driver behaviour and what impact would this have on overall safety and driving skills?

A study of Volvo employees showed that drivers of AVs build trust in the autonomy of a car. In experiments, the drivers were told that they had responsibility for emergency braking, but when faced with an emergency situation only one-third applied the brakes promptly, one-third applied them late and the final third didn’t apply them at all.

Before introducing speed assistance, then, we need to understand how drivers will respond to such technology. Will they always drive at the speed limit, relying on the car’s autonomy – even where lower speeds would be more appropriate, for example in more difficult road conditions such as ice or snow?

2. System safety

Second, how do we show the safety of the system before it is launched, or even used in trials?

Technologically, this does not seem too challenging, but the critical task will be to identify scenarios that might confuse the system and could lead to inappropriate decisions.

For example, what would happen in a contraflow on a motorway, where one lane goes in the opposite direction to the rest of the traffic? Hopefully, the system wouldn’t think it should reverse.
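One way to probe for such confusions is scenario-based testing: enumerate the situations that might mislead the system and check its response against an expected safe behaviour. The sketch below is hypothetical – the scenarios, the decide() stub and the ‘prefer the lower limit’ fallback are all assumptions for the example:

```python
SCENARIOS = [
    # (description,                    map_kph, camera_kph, expected)
    ("ordinary urban road",            30,      30,         "limit_30"),
    ("contraflow, opposing-lane sign", 70,      30,         "limit_30"),
    ("sign obscured by a lorry",       50,      None,       "limit_50"),
    ("map outdated after roadworks",   None,    40,         "limit_40"),
]

def decide(map_kph, camera_kph):
    """Stub decision rule: use the lower of whichever limits are available."""
    available = [v for v in (map_kph, camera_kph) if v is not None]
    return f"limit_{min(available)}" if available else "driver_control"

for description, m, c, expected in SCENARIOS:
    assert decide(m, c) == expected, description
```

The hard part is not coding the rule but compiling a sufficiently complete scenario list – exactly the challenge the contraflow example illustrates.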

3. The SoS problem

Third is the issue of safe interaction between speed assistance and other systems – often referred to as a system of systems (SoS) problem.

This is true for a single car – how will the speed limiter interact with cruise control, autonomous emergency braking and so on? It is also true between vehicles – what is a sensible deceleration profile on moving from, say, a 50kph to a 30kph limit? Should that profile be different if the car is being followed very closely by a heavy goods vehicle? If so, how will the car sense this?

These problems are already being addressed by developers of AVs – for example, by fitting sensors that detect the proximity of vehicles behind the car as well as in front of it.
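To see why the deceleration profile is a genuine design question, consider the 50kph-to-30kph transition mentioned above. The sketch below uses the standard constant-deceleration formula v² = u² − 2as; the comfort and tailgating thresholds are invented for illustration, not drawn from any standard:

```python
def braking_distance_m(v_from_kph: float, v_to_kph: float,
                       decel_ms2: float) -> float:
    """Distance to slow from v_from to v_to at constant deceleration,
    from v^2 = u^2 - 2*a*s."""
    u = v_from_kph / 3.6   # kph -> m/s
    v = v_to_kph / 3.6
    return (u * u - v * v) / (2.0 * decel_ms2)

def choose_decel_ms2(rear_gap_m: float) -> float:
    """Ease off the braking when a vehicle (say, an HGV picked up by a
    rear-facing sensor) is following closely. Thresholds are illustrative."""
    COMFORT = 1.5        # m/s^2, a comfortable service-braking level
    GENTLE = 0.8         # m/s^2, gentler profile for a tailgater
    TAILGATE_GAP = 20.0  # metres
    return GENTLE if rear_gap_m < TAILGATE_GAP else COMFORT

# Clear road behind: slowing from 50kph to 30kph takes about 41m.
print(braking_distance_m(50, 30, choose_decel_ms2(rear_gap_m=100.0)))
# HGV 15m behind: the gentler profile stretches that to about 77m.
print(braking_distance_m(50, 30, choose_decel_ms2(rear_gap_m=15.0)))
```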

4. Responsibility gaps

Then there is the issue of ethics or moral responsibility. A possible unintended consequence of introducing such technology is the creation of what are called ‘responsibility gaps’, characterised simply as situations where nobody has enough control over a system’s actions to assume responsibility for them.

For example, if the speed limiter is slowing the car down (braking) as it approaches a lower speed limit, and the driver presses the accelerator to get past an obstacle quickly, will the driver or the system ‘win’? And what if the driver is right, but the system hasn’t left a big enough margin of error for the driver to make the correction? This might be seen as a responsibility gap.
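One way to frame the ‘who wins?’ question is as an explicit arbitration policy. The toy policy below hands control back to the driver only when a deliberate kickdown coincides with enough remaining margin for a human correction; the thresholds are invented, and the ‘contested’ outcome marks exactly the gap in responsibility described above:

```python
def arbitrate(driver_pedal: float, margin_to_hazard_s: float) -> str:
    """Decide who controls the car when the limiter is braking and the
    driver presses the accelerator. Thresholds are illustrative."""
    KICKDOWN = 0.9            # pedal fraction treated as deliberate override
    MIN_HUMAN_MARGIN_S = 2.0  # time a human plausibly needs to act

    if driver_pedal >= KICKDOWN and margin_to_hazard_s >= MIN_HUMAN_MARGIN_S:
        return "driver"       # deliberate override with room to act
    if driver_pedal >= KICKDOWN:
        return "contested"    # driver may be right, but no margin remains
    return "system"           # no clear override: the limiter keeps braking

assert arbitrate(1.0, 3.0) == "driver"
assert arbitrate(1.0, 0.5) == "contested"
assert arbitrate(0.3, 3.0) == "system"
```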

5. The law

The Law Commission, the organisation that reviews laws in England and Wales, is running a study into the law relating to AVs. The study asks whether AVs should be allowed to break the speed limit.

My view is that all the manoeuvres the car plans to undertake should be defined to stay within limits, but an AV should be allowed to exceed the limit where this minimises risk, such as completing an overtaking manoeuvre where road conditions have changed unpredictably since starting to overtake.

Drivers will have discretion over this – but we would still need to know how they would respond to the technology.

6. Black boxes for cars

While some see the proposed black boxes as a Big Brother device, they are likely to facilitate better accident and incident investigation and potentially be very beneficial to overall road safety.

Analysis of data from flight recorders in the aerospace industry has been a major factor in improving flight safety. But a key point here is defining what is recorded. Consider the SoS problem – what do we need to know about other systems in the car and would we need data from the black boxes in other vehicles to get a full understanding of an accident?
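Even a rough sketch shows how quickly the ‘what to record’ question grows. The record below is one plausible shape for an entry, with fields chosen to reflect the SoS concerns above; it is an assumption for illustration, not a field list from the EU proposal or any recorder standard:

```python
from dataclasses import dataclass

@dataclass
class BlackBoxRecord:
    timestamp_utc: str         # when the sample was taken
    speed_kph: float           # actual vehicle speed
    posted_limit_kph: int      # limit as the speed limiter understood it
    limiter_active: bool       # was the limiter restraining the car?
    driver_override: bool      # had the driver pushed through the limiter?
    brake_pressure_pct: float  # state of one interacting system (the SoS problem)
    cruise_control_on: bool    # ...and of another
    rear_gap_m: float          # needed to judge deceleration decisions after the fact
```

Note that even a list like this cannot answer every question on its own: reconstructing a multi-vehicle incident would still need the corresponding records from the other vehicles involved.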

Nonetheless, black boxes in cars could make a much greater contribution to road safety in the long run than speed limiters, especially as we move towards AVs.

7. Theory v reality

It seems likely that this technology can be net beneficial – a guardian angel that is also a fairly benevolent Big Brother. But studies will need to confirm this and to resolve some of the other issues that speed assistance technology raises.

As it currently stands, the proposals are ‘driving as imagined’ not ‘driving as done’. We need to be as sure as possible that these proposals are beneficial in the real world, with real drivers on real roads, not just in theory.


By Prof John McDermid

John McDermid is professor of software engineering at the University of York and director of the Lloyd’s Register Foundation-funded Assuring Autonomy International Programme, which focuses on the safety of robotics and autonomous systems. His research covers a broad range of issues in systems, software and safety engineering.