Most drivers and vehicle owners understand the need for safety. Driving a car is an inherently dangerous activity and a moment’s inattention can lead to serious injury or worse. Thus the potential for fully autonomous vehicles to reduce accidents and injuries is very appealing, but we need to make sure that these vehicles operate in a safe and secure way.
Cars as mobile computers
Modern cars can come with over 80 electronic control units (ECUs), kilometres of cable, several hundred MB of software and half a dozen in-vehicle networks. In other words, your car is a mobile computer that just happens to carry you and your family. In the future (maybe by 2030/2035?) when fully self-driving cars are delivered, they will require even smarter processing power to cope with all the scenarios that can occur during a dynamic, fully automated car journey. These scenarios include longitudinal challenges, such as keeping a safe distance from the car in front, and lateral challenges, such as pedestrians, other cars and objects moving into your vehicle's path. All of this will require split-second decisions from reliable software and hardware, which can only be achieved if the vehicle has been designed to be both safe and secure.
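The longitudinal challenge above can be illustrated with a toy calculation. This is a hedged sketch assuming simple constant-deceleration physics; the function names, reaction time and braking figures are illustrative assumptions, not a real autonomous-driving stack, which would use far more sophisticated safety models.

```python
def min_safe_gap(v_ego: float, v_lead: float,
                 reaction_time: float = 0.5,
                 a_brake: float = 6.0) -> float:
    """Minimum following gap (metres) so the ego vehicle can stop without
    hitting the lead vehicle, assuming both brake at a_brake (m/s^2) and
    the ego vehicle has a sensing/actuation delay of reaction_time (s).
    All values here are illustrative assumptions."""
    # Distance the ego vehicle covers: reaction distance + braking distance.
    d_ego = v_ego * reaction_time + v_ego ** 2 / (2 * a_brake)
    # Distance the lead vehicle covers while braking at the same rate.
    d_lead = v_lead ** 2 / (2 * a_brake)
    return max(d_ego - d_lead, 0.0)

def gap_is_safe(gap: float, v_ego: float, v_lead: float) -> bool:
    """True if the current gap (metres) is at least the minimum safe gap."""
    return gap >= min_safe_gap(v_ego, v_lead)

# Closing on a slower car at motorway speed, a 10 m gap is not enough:
print(gap_is_safe(10.0, v_ego=30.0, v_lead=25.0))  # → False
print(gap_is_safe(50.0, v_ego=30.0, v_lead=25.0))  # → True
```

Even this toy model shows why the software's reaction delay matters: at 30 m/s, every extra tenth of a second of latency adds three metres to the required gap.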
The "trolley problem": there is no right answer
It is interesting to consider society's appetite for autonomous vehicles. The decision-making ethics of autonomous vehicles will likely be challenged, probably in the law courts. Why? Consider the following thought experiment, commonly known as the "trolley problem": a vehicle gets into a situation where there is no way out and a collision cannot be avoided. The vehicle can avoid either the baby stroller that rolls in front of the car or the elderly lady walking beside it, but not both. No matter how the vehicle reacts, someone will be harmed. Does it injure the toddler or the elderly person? And what does the vehicle do if the alternative is to injure either two or five people? There is no right answer, only perhaps a less bad one.
Manipulation by hackers possible
There are also enormous cybersecurity challenges, especially if hackers could influence this decision making to bias the decision engine towards a particular outcome. Where previously the artificial intelligence system might have chosen the less bad answer, it could be altered to become more destructive for a particular section of society. If this hacking is subtle, such an attack may not be recognised until a lot of statistical data has been assessed – probably far too late.
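The danger described above can be made concrete with a deliberately simplified illustration. This is not how a real autonomous-vehicle decision engine works; it is a hedged, hypothetical sketch of a cost-minimising chooser, showing how a single silently altered weight can flip its outcome without any visible malfunction. All option names and weights are invented for the example.

```python
def choose_outcome(options: dict, weights: dict) -> str:
    """Return the option with the lowest weighted harm score.
    options maps an action name to its harm features;
    weights maps a feature name to a multiplier (default 1.0)."""
    def cost(features: dict) -> float:
        return sum(weights.get(k, 1.0) * v for k, v in features.items())
    return min(options, key=lambda name: cost(options[name]))

# Hypothetical emergency scenario with two bad outcomes.
options = {
    "brake_hard": {"occupant_risk": 2, "pedestrian_risk": 0},
    "swerve":     {"occupant_risk": 0, "pedestrian_risk": 1},
}

honest_weights = {"occupant_risk": 1.0, "pedestrian_risk": 1.0}
# A subtle attack: one coefficient quietly inflated, nothing else changed.
tampered_weights = {"occupant_risk": 1.0, "pedestrian_risk": 3.0}

print(choose_outcome(options, honest_weights))    # → swerve
print(choose_outcome(options, tampered_weights))  # → brake_hard
```

The tampered system still runs, still "minimises harm" by its own arithmetic, and still produces plausible-looking decisions – which is exactly why such manipulation could go undetected until enough incident statistics accumulate.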
Secure software for safe cars
Writing secure software is immensely difficult. The millions of lines of code in an autonomous vehicle will contain bugs that will need to be patched and managed over the lifetime of the car. As we transition up the stages of vehicle automation, each new level could demand an order-of-magnitude improvement in computing power, software and hardware. Cybersecurity risks will increase similarly as the attack surface expands.
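Patching a vehicle over its lifetime implies a secure update channel, and one of its basic building blocks is verifying that a firmware image is authentic before installing it. The sketch below uses HMAC-SHA-256 with a shared key purely for brevity; a production system would use asymmetric signatures and a hardware root of trust. All names and values here are illustrative assumptions.

```python
import hashlib
import hmac

def verify_update(firmware: bytes, tag: bytes, key: bytes) -> bool:
    """Accept a firmware image only if its authentication tag checks out.
    Illustrative sketch: real ECUs would verify an asymmetric signature
    against a key anchored in hardware, not a shared secret."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

# Hypothetical update: the manufacturer tags the image at build time...
key = b"demo-key-not-for-production"
firmware = b"ecu-firmware-v2.1"
tag = hmac.new(key, firmware, hashlib.sha256).digest()

# ...and the vehicle refuses anything that has been tampered with.
print(verify_update(firmware, tag, key))         # → True  (genuine image)
print(verify_update(firmware + b"X", tag, key))  # → False (tampered image)
```

The point is not the particular primitive but the design discipline: an update path that skips this check is exactly the kind of "safe but not secure" gap the next paragraph warns about.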
Manufacturers in particular need to understand the inherent cybersecurity risk in their products and technologies. They need to ensure these products are secure by design as well as being safe by design. Manufacturers should not wait for regulators to force them to consider product cybersecurity risk as these concerns should be part and parcel of automotive design and development as much as engines, batteries and transmission units.
Your motor vehicle can no longer be safe if it is not secure. So the next time you go to buy a new car, ask the salesperson how cyber secure the car is. Unfortunately I think I know the answer…