Connected vehicles: Driving to an insecure future?

Published on 21/02/2018 | Written by Kevin Bocek


The machines can’t be trusted, says security strategist Kevin Bocek…

From accessing a Jeep’s on-board computer and taking control of the vehicle, to the security flaws that allowed anyone to unlock and drive Hyundai vehicles through a mobile app, the ‘smash and grab’ mentality of car thieves has developed into a more sophisticated ‘hack and grab’.

Given that connected vehicles already present a range of security vulnerabilities, it is no surprise that the next step for transport – autonomous vehicles – is a cause of concern. With road tests from the likes of Google, Tesla and Uber moving into production, the development of driverless cars has stepped up a gear recently. With both the public and insurers worried about safety, manufacturers and their operators must ensure the cyber security of these vehicles is a priority, and not simply an afterthought.

Whether it’s cars held for ransom or a fleet of vehicles targeted by terrorists, new threats will emerge. For example, a repeat of the Uber hack could next time target the identities of thousands of vehicles – and gain control over them – rather than the identities of millions of customers.

A blind spot in the making
As global lawmakers prepare for autonomous vehicles to hit the roads, they need to acknowledge that cyber security standards must be set sooner rather than later. The governments of the UK and the US have each developed their own guidelines for self-driving cars. The guidance covers a range of security issues for the vehicles, from having a back-up when sensors fail to managing personally identifiable data, with a focus on cybersecurity. However, in their current form, these documents are only broad security guidelines for best practices.

In the same way the government identifies a driver with a license number, or a car with a registration plate, both autonomous vehicles and their ‘drivers’ will need to be certified.

It’s now up to transportation regulators to become more agile and intelligently regulate the entire ecosystem of autonomous vehicles, including the machine identities that enable a vehicle to know its components are safe, and to know it can trust the cloud it is taking commands from.

There is still much work to be done on how vehicles take instructions, receive updates, and communicate with applications, particularly in the cloud. The plans outlined by the UK government and US congress omit much of the nuance that is vital for cybersecurity – including validating and protecting the machine identities involved in autonomous cars, from the vehicle’s components to the cloud which shares data to control the car.

Used to authenticate, communicate, protect privacy by encrypting data, and establish trust for executing code, machine identities are already vital in today’s global economy, but they are poorly understood and badly protected, if protected at all.
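
To make the idea concrete, the sketch below shows one way a machine identity – an X.509 certificate and its private key – might be used to set up a mutually authenticated, encrypted connection between a vehicle component and a backend service. It is a minimal illustration only: the hostname, file paths and the use of Python’s standard ssl module are assumptions, not details of any particular manufacturer’s system.

```python
import socket
import ssl

TELEMATICS_HOST = "telematics.example-oem.com"  # hypothetical backend
TELEMATICS_PORT = 443

# Trust only the manufacturer's own CA, not the whole public web PKI.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile="oem_root_ca.pem")

# Present the vehicle's own machine identity so the backend can
# authenticate the car in return (mutual TLS).
context.load_cert_chain(certfile="vehicle_identity.pem",
                        keyfile="vehicle_identity.key")

with socket.create_connection((TELEMATICS_HOST, TELEMATICS_PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=TELEMATICS_HOST) as tls:
        # Both machine identities are now verified and the channel is
        # encrypted; only at this point should commands or telemetry flow.
        print("connected to:", tls.getpeercert()["subject"])
```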

Many companies do not comprehensively discover, monitor, revoke and replace these machine identities – yet these are a critical part of securing almost every connected device. The incredible power of machine identities makes them attractive to hackers, nation states, and terrorists.

So, securing machine identities will be even more vital in a world of driverless cars, where autonomous functions run in the cloud, commands are sent remotely, and thousands of pieces of software must be trusted to safely operate a single vehicle at any given second.

Where we’re going, we need security
Autonomous vehicles, at their most basic, are essentially a collection of machines in constant communication with other machines. These ‘machines’ send and receive information from sensors and systems that control the vehicle’s internal operation and movements, and share data back and forth with the cloud for directions, diagnostics, software updates, and commands.

Before a vehicle acts on any information it receives, it should validate the identity of the machine it is communicating with. Only then can it be sure the data being received is secure and has not been tampered with. If the machines sharing information – be they the sensors, the dashboard computer or a driverless car operator like Uber – are not validated, then the communication could be hijacked and used to send false information, misdirect a vehicle or manipulate its controls.
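
As a rough illustration of this ‘validate before acting’ principle, the sketch below accepts a remote command only when its digital signature verifies against a pinned public key for the operator’s cloud. The key file, message format and the apply_command() handler are hypothetical placeholders, and the Ed25519 scheme is simply one plausible choice.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Public half of the operator's machine identity, provisioned into the
# vehicle at build time (the file path is a placeholder).
with open("operator_identity.pub", "rb") as key_file:
    OPERATOR_PUBLIC_KEY = Ed25519PublicKey.from_public_bytes(key_file.read())

def apply_command(payload: bytes) -> None:
    # Stand-in for the vehicle-side logic that would act on the command.
    print("executing verified command:", payload)

def handle_command(payload: bytes, signature: bytes) -> None:
    """Act on a cloud command only if its origin can be proven."""
    try:
        OPERATOR_PUBLIC_KEY.verify(signature, payload)
    except InvalidSignature:
        # Unverified sender: refuse to act rather than risk a hijacked command.
        raise RuntimeError("command rejected: unknown or tampered sender")
    apply_command(payload)
```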

In the future, hackers could take over the identities of one or many vehicles, or the identity of the vehicle company’s cloud – sending instructions, software updates and commands to create chaos, hold vehicles to ransom, or drive terror and destruction.

Researchers have already proved it possible to exploit the absence of machine identity checks and remotely authenticate a software update. The definition of ‘machines’ should therefore include not just the hardware and software in vehicles, but the algorithms that control the vehicle’s actions, and the connected systems that will ultimately drive the vehicle.
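
By way of example, here is a hedged sketch of what checking an update against the manufacturer’s machine identity could look like: the image is installed only if its detached signature verifies against the maker’s signing certificate. The file names and the RSA/SHA-256 signature scheme are assumptions for illustration, using the Python cryptography library.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def update_is_authentic(image_path: str, sig_path: str, cert_path: str) -> bool:
    """Return True only if the update image was signed by the trusted certificate."""
    with open(cert_path, "rb") as f:
        signing_cert = x509.load_pem_x509_certificate(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Assumes an RSA signing key; other schemes would verify differently.
        signing_cert.public_key().verify(
            signature, image, padding.PKCS1v15(), hashes.SHA256()
        )
        return True
    except InvalidSignature:
        return False

# Only a verified image should ever be flashed to a vehicle component.
if update_is_authentic("firmware.bin", "firmware.sig", "oem_signing_cert.pem"):
    print("update verified; safe to install")
else:
    print("update rejected: signature does not match the manufacturer identity")
```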

Licence to kill
In the same way we have come to expect our phones or laptops to have at least a minimal level of trust built in to identify us, such as the need to enter a password or approve a software update, we must apply the same logic to machines. The government and the public must insist on the security of every machine involved in controlling driverless vehicles, from the dashboard computer, to the sensors measuring weather conditions and the algorithms and software controlling the movements of the vehicle from the cloud.

Only trusted applications should be allowed to share information with autonomous vehicles. This is vital to securing driverless cars and keeping future roads safe.

Governments validate a person’s fitness to drive via a driving test and subsequently track their adherence to the ‘rules of the road’. Fitness to be on the roads will still be an issue with autonomous cars.

We’ll need MOTs to measure the fitness of the algorithm ‘behind the wheel’, and constantly re-test new algorithms and the software updates which are deployed. To ensure driverless vehicles are safe for both passengers and pedestrians, we cannot neglect the process of identifying, safeguarding and even regulating the algorithms and other components of these vehicles.

Machine identities will be even more important in a world where an entire network of cars could be held for ransom or used as weapons. Giving free rein to a machine which can be manipulated really could be a licence to kill.

About Kevin Bocek
Kevin Bocek is chief security strategist at Silicon Valley cybersecurity software vendor Venafi.
