Driverless cars: Are the myths we’ve all heard as true as they seem?

By David Emm

The Department for Transport recently claimed it wants to see fully autonomous cars tested on UK roads by 2021. Astonishingly, this expectation was set out after several fatal crashes in Arizona.

The communications infrastructure used in cars, known as a Controller Area Network (CAN), was designed in the 1980s and is still in use today. It was developed for exchanging information between different microcontrollers. Essentially, what we have is a peer-to-peer network – and an old one at that.

The troubling issue here is that these networks, created some 40 years ago, weren’t built with security in mind. As time has gone on, modern-day functionality has been layered on top of existing functions, all connected to the CAN. This gives criminals a greater opportunity to access cars, while owners are left with little or no control over the security features of their vehicles.
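To make that design point concrete, here is a minimal Python sketch of what a classic CAN data frame actually carries. The field layout reflects the CAN 2.0A specification, but the IDs and payloads below are purely illustrative, not taken from any real vehicle. Notice what is missing: there is no sender address and no authentication – any node on the bus can transmit any message.

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    # A classic CAN 2.0A data frame is essentially just these two fields
    # (plus a CRC and framing bits added by the controller). There is no
    # field identifying or authenticating the sender.
    arbitration_id: int   # 11-bit identifier; a LOWER value wins bus arbitration
    data: bytes           # payload of at most 8 bytes

    def __post_init__(self):
        assert 0 <= self.arbitration_id < 0x800, "ID must fit in 11 bits"
        assert len(self.data) <= 8, "classic CAN payload is at most 8 bytes"

# Hypothetical frames: a safety-critical command and an infotainment message.
brake_cmd = CanFrame(arbitration_id=0x0A0, data=bytes([0x01]))
infotainment = CanFrame(arbitration_id=0x6F0, data=bytes([0xFF]))

# When two nodes transmit at once, the frame with the numerically lower
# arbitration ID wins -- so a compromised node can claim priority on the
# bus simply by choosing a low ID.
winner = min([brake_cmd, infotainment], key=lambda f: f.arbitration_id)
print(hex(winner.arbitration_id))  # prints 0xa0
```

The takeaway is that every receiver trusts the arbitration ID at face value: a single compromised component on the bus can impersonate any other, which is why layering connected features onto the CAN widens the attack surface.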

Although no real-world hacks have yet been executed in this way, it is a proven possibility. In 2015, two researchers, with a journalist at the wheel, used wireless technology to drive a Jeep Cherokee off the road – a demonstration that resulted in some 1.4 million vehicles being recalled to fix the fault.

The layering of these emerging technologies on top of an old infrastructure can have serious security implications that are not being fully considered by manufacturers. And potential security issues don’t just reside in the underlying communications network of the car itself. The apps that allow owners to remotely control functions on the car, including locking and unlocking, can also be compromised by hijackers.

Driverless car myths: Humans? Who needs them?

Following a fatal incident in Arizona last year that resulted in the death of a pedestrian, it was predicted that it would be many years before autonomous cars replace human drivers. But the truth of the matter is, I don’t think driverless cars will, or should, ever replace human drivers in the way we presume they will: although the majority of people will continue to occupy a private car, it will be self-driving. Whether removing the human aspect of driving takes the form of private vehicles or a co-ordinated public transport system, the question of how this technology is implemented in society remains.

Apprehension surrounding the rise of driverless cars is steadily increasing, and people are developing a greater awareness of just how safe, or not so safe, these vehicles really are. The idea of watching a film, or sleeping, while a car transports us feels understandably ‘wrong’ to many people, and the human control aspect of driving remains as important as it always has.

When it comes to autonomous vehicles, there are a range of add-on features available from something as well-known as parking assistance through to completely driverless cars. A ‘grey area’ lies between the two, where the driver has very little to do, but has responsibility for the vehicle and might need to take control at some point.

In the latter scenario there’s a danger that the driver may switch off because they don’t feel required to be in full control and might therefore be unable to regain control in an emergency.

The very tangible danger of autonomous vehicles is evident in the number of driverless car fatalities since the emergence of the technology, and it is therefore reasonable to question whether it is wise to resume testing so quickly after such an incident.

Making safety a priority

As recent stories featuring autonomous car testing have demonstrated, there are real safety concerns surrounding pedestrian and driver wellbeing if driverless cars are to be successfully launched in society.

It is not just safety that is an issue for self-driving cars; moral and ethical issues are also at the forefront of the debate. Christian Wolmar raised the issue of ‘the Holborn problem’: if driverless cars are programmed to stop when they sense a pedestrian, what happens when they are confronted with a mass of people milling across a busy road? Will they wait all day? Or will they be programmed to operate with a lower safety bar? And if the car must choose between harming pedestrians or the passenger in the lead-up to an accident, how, and whom, will it choose? A car cannot make moral decisions on its own; the need for human judgement remains a constant.

Removing ethical arguments from the picture and focusing solely on the cybersecurity of self-driving cars, it is important to remember that nothing can be 100% secure. As with cleaning your house, security is never ‘done’ – you need to keep vacuuming and dusting, because the dirt will be back next week. The same logic applies to securing the increasingly advanced technology in modern cars. Before autonomous cars can make a regular appearance on our roads, there are still many questions to be answered and scenarios to be considered. And if this is to happen before 2021, we had better get ourselves into gear!


Read more: “Maximising safety through innovation”: How the UK is tackling driverless car regulation