Published on 31/05/2018 | Written by Jonathan Cotton
Self-driving Ubers may be a hazard to pedestrians but the wheels are still turning for the driverless future – and that’s a good thing…
At the time, it read like a sick joke: During a test run, one of Uber’s driverless vehicles (a Volvo XC90 with a safety driver onboard) failed to auto-brake at a critical moment, striking and killing 49-year-old pedestrian Elaine Herzberg.
The report into the incident has now been released. Investigators found that the self-driving Uber did, in fact, detect the pedestrian – a full six seconds before impact – and determined, 1.3 seconds before impact, that emergency braking was needed. In a tragic oversight, however, the automatic emergency braking system – along with several other driver-assist features – had been disabled by Uber “to prevent erratic driving”.
Despite fighting words from the company, the tragedy seems to have scuttled Uber’s plans for self-driving pioneering, at least in the short term. The 500 staff at the company’s self-driving car operation in Arizona certainly weren’t laughing when they were handed their walking papers a week ago, with the company confirming that it is shutting down its Arizona operation while it awaits the outcome of an official investigation into the pedestrian’s death.
The whole sorry mess is a sobering reminder that, cutting edge or not, self-driving technology is still very much in its experimental stages.
While Uber is certainly smarting from its failure, it’s unlikely that this piece of spectacularly bad PR will dampen enthusiastic investment in the technology. Indeed, seemingly moments after Herzberg was struck down, the US government pledged US$100 million for further automated vehicle R&D, including US$60 million to fund projects that test the feasibility and safety of self-driving vehicles, along with underwriting a study into the long-term employment impact of self-driving cars.
PR disaster or not, new research from Juniper confirms that the self-driving market is well and truly set to become a reality in the US by 2026.
With the market bolstered by competition from the likes of Google and heavy investment from Volvo, Audi, Daimler and GM, the research company predicts that within a decade one in four cars sold will be driverless. It estimates that 45 million on-road vehicles will have some form of Advanced Driver Assistance Systems (ADAS) functionality by the end of this year, with adoption reaching 100 million by 2020.
So business is good, but the question on the lips of even a casual observer surely is: Just how safe should a driverless car be before it’s allowed on the road?
Rand researchers Nidhi Kalra and David G. Groves set out to answer just that question last year. To do so, they developed a computer model that compares road fatalities over time under two policies: one that allows highly automated vehicles (HAVs) to be deployed for consumer use as soon as their safety performance is just 10 percent better than that of the average human driver, and one that waits to deploy HAVs only once their safety performance is 75 or 90 percent better than that of an average human driver.
The results?
“Waiting for the cars to perform flawlessly is a clear example of the perfect being the enemy of the good,” Kalra says.
Their research found that, in the short term, more lives are cumulatively saved under a more permissive policy than under stricter policies requiring greater safety advancements in nearly all conditions – and those savings can be significant, to the tune of hundreds of thousands of lives.
“There is good reason to believe that reaching significant safety improvements may take a long time and may be difficult prior to deployment.”
“Therefore, the number of lives lost while waiting for significant improvements prior to deployment may be large.”
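To make that logic concrete, here is a minimal back-of-the-envelope sketch in Python. It is not the Rand model itself: the baseline fatality figure, adoption ramp, improvement rate and deployment dates below are all illustrative assumptions, chosen only to show how deploying “merely better” vehicles early can reduce cumulative fatalities compared with waiting for near-perfect ones.

```python
# Toy comparison of cumulative road fatalities under two hypothetical
# HAV deployment policies. All figures are illustrative assumptions,
# not data from the Rand study.

BASELINE_FATALITIES = 37_000   # assumed annual road deaths with human drivers only
YEARS = 30                     # assumed time horizon

def cumulative_fatalities(deploy_year, safety_gain_at_deploy):
    """Total fatalities over the horizon for a policy that deploys HAVs in
    `deploy_year`, once they are `safety_gain_at_deploy` (e.g. 0.10 = 10%)
    safer than the average human driver."""
    total = 0.0
    for year in range(YEARS):
        if year < deploy_year:
            # Before deployment, all travel is by human drivers.
            total += BASELINE_FATALITIES
        else:
            years_in = year - deploy_year
            # Assumed adoption ramp: HAV share grows 5 points a year, capped at 80%.
            hav_share = min(0.05 * (years_in + 1), 0.80)
            # Assumed on-road learning: safety gain improves 3 points a year, capped at 90%.
            safety_gain = min(safety_gain_at_deploy + 0.03 * years_in, 0.90)
            hav_fatalities = BASELINE_FATALITIES * hav_share * (1 - safety_gain)
            human_fatalities = BASELINE_FATALITIES * (1 - hav_share)
            total += hav_fatalities + human_fatalities
    return total

# Permissive policy: deploy immediately at 10% better than human drivers.
permissive = cumulative_fatalities(deploy_year=0, safety_gain_at_deploy=0.10)
# Strict policy: wait 15 years until vehicles are 90% better.
strict = cumulative_fatalities(deploy_year=15, safety_gain_at_deploy=0.90)

print(f"Permissive policy (deploy at +10%): {permissive:,.0f} cumulative fatalities")
print(f"Strict policy (wait for +90%):      {strict:,.0f} cumulative fatalities")
print(f"Lives saved by deploying early:     {strict - permissive:,.0f}")
```

Under these assumed numbers the permissive policy comes out well ahead, for the simple reason the researchers point to: every year spent waiting for near-perfect vehicles is another year of unmitigated human-driver fatalities.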
The conclusions make sense. After all, people die every day on the roads, and we persist regardless. Why do we have so little tolerance, comparatively speaking, when machines make human-like mistakes? Isn’t there a moral imperative to get over our fear of driverless tech in the interest of saving lives?
Don’t answer yet, because there might be one more psychological barrier to be overcome: As driverless technology enters the mainstream, expect similar changes to impact the aviation industry. Yup, in all likelihood, your pilot will soon be a drone.
“A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have – ever,” says Jeremy Straub, Assistant Professor of Computer Science, North Dakota State University. “By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.”
“Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds or thousands more hours of flight experience than any human pilot.”
For all intents and purposes, we already have the technology in place for fully automated commercial flights. The barrier is psychological, not technical.
And it’s that barrier that is currently making it difficult to gather public support.
“A major backlash against a crash caused by even a relatively safe autonomous vehicle could grind the industry to a halt,” Kalra said, almost prophetically, late last year, “resulting in potentially the greatest loss of life over time.”
“The right answer is probably somewhere in between introducing cars that are just better than average and waiting for them to be nearly perfect.”
View the full Rand research here.