From the ITEE College Board Chair, Geoff Sizer – Friday, 12 August 2016

Autonomous vehicles – promises with risks

Autonomous vehicles offer great promise for improved transportation utility and efficiency. But the current push, which is seeing growing numbers of vehicles with increasing levels of autonomy on (or soon to be on) our roads, comes with major risks.

An autonomous vehicle is not “driven by a computer” – it is driven by an algorithm, which in turn was designed and implemented by engineers. The computer and the software running on it are merely the servants of the designers. The hopefully infrequent but critical life-and-death driving decisions made by an autonomous vehicle are ultimately those of the system designers. This also means that when situations occur which are beyond the bounds of the scenarios considered by the designers, the system will be on its own, and is unlikely to react appropriately.
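
To make the point concrete, here is a minimal sketch of how design-time decisions are baked into such a system; every name and branch below is hypothetical, not taken from any real driving stack, but it shows where the undesigned-for situations fall through:

    from enum import Enum, auto

    class Scenario(Enum):
        LANE_KEEPING = auto()
        OBSTACLE_AHEAD = auto()
        PEDESTRIAN_CROSSING = auto()
        UNRECOGNISED = auto()  # everything the designers did not foresee

    def plan_action(scenario: Scenario) -> str:
        # Each mapping below is a design-time decision by the engineers,
        # not a run-time judgement by the vehicle.
        if scenario is Scenario.LANE_KEEPING:
            return "hold lane and speed"
        if scenario is Scenario.OBSTACLE_AHEAD:
            return "brake and steer within lane"
        if scenario is Scenario.PEDESTRIAN_CROSSING:
            return "emergency brake"
        # Outside the designed envelope the system is on its own; whatever
        # default the engineers chose here may well be inappropriate.
        return "fall back to a default the designers hoped would be safe"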

This brings us to a moral dilemma. When faced with a life-and-death situation, what action have the designers decided is appropriate? If an accident is unavoidable, have the designers decided that it is better to run over a pedestrian than to risk a skid which endangers the vehicle’s occupants? Or to kill an elderly pedestrian whilst avoiding a child? In such situations, humans react consciously or instinctively, based on societal, cultural and life experience. What if the cultural background of the development team places a different emphasis on the value of self-preservation versus the safety of others?
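
A deliberately simplified sketch shows how such a “moral” trade-off ends up as engineering constants. The weights and probabilities below are invented for illustration, but whoever sets them has effectively answered the dilemma in advance:

    OCCUPANT_WEIGHT = 1.0    # value placed on occupant safety
    PEDESTRIAN_WEIGHT = 1.0  # value placed on pedestrian safety; a team with a
                             # different emphasis might choose a different ratio

    def choose_manoeuvre(options):
        # options: list of (name, p_occupant_harm, p_pedestrian_harm) tuples
        def cost(option):
            _, p_occupant, p_pedestrian = option
            return OCCUPANT_WEIGHT * p_occupant + PEDESTRIAN_WEIGHT * p_pedestrian
        return min(options, key=cost)[0]

    # With equal weights, a risky swerve that spares the pedestrian wins:
    # choose_manoeuvre([("swerve", 0.3, 0.0), ("brake straight", 0.0, 0.4)])
    # returns "swerve". Raise OCCUPANT_WEIGHT to 2.0 and the same call
    # returns "brake straight", and the pedestrian is hit instead.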

Let’s assume that the algorithms built into the autonomous vehicle consider all plausible scenarios, and are finely tuned to the culture and attitudes of the society where they are used so as to best match appropriate human behaviour. The system remains subject to the impact of software bugs and to the effects of failure of sensors and other hardware elements. This compounds the difficulty of foreseeing and dealing with all of the “what if” scenarios which need to be considered.
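
A rough sketch of sensor cross-checking illustrates why hardware failure multiplies the “what if” space. The interfaces and tolerance here are assumptions for illustration only; the point is that every degraded mode is another branch the designers must foresee, and the fallback in each branch is itself a design decision:

    def fused_distance(lidar_m, radar_m, tolerance_m=2.0):
        # None models a failed or absent sensor reading.
        readings = [r for r in (lidar_m, radar_m) if r is not None]
        if not readings:
            # Total sensing failure: yet another scenario needing a designed response.
            raise RuntimeError("no usable sensors")
        if len(readings) == 2 and abs(lidar_m - radar_m) > tolerance_m:
            # The sensors disagree: is one faulty, and if so, which? Taking the
            # nearer reading is a conservative guess, and itself a design decision.
            return min(readings)
        return sum(readings) / len(readings)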

Another issue with semi-autonomous vehicle operation is the handover from autonomous to manual mode. The type of situation which precipitates a handover is perhaps the worst possible time for an autonomous driving algorithm to say to the driver “it’s too hard for me – back to you”. Even worse is when the autonomous algorithm gets the vehicle into trouble, then leaves it to the human driver to sort out the mess!
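
A toy sketch of the handover logic makes the bind explicit. The timings are invented, but the structure of the problem is real: a safe handover needs lead time, and the situations that trigger one often have none to give:

    MIN_WARNING_S = 10.0       # assumed time an inattentive driver needs to re-engage
    ATTENTIVE_WARNING_S = 2.0  # assumed time an attentive driver needs

    def request_handover(time_to_hazard_s, driver_attentive):
        needed = ATTENTIVE_WARNING_S if driver_attentive else MIN_WARNING_S
        if time_to_hazard_s >= needed:
            return "alert driver and hand over control"
        # The algorithm has recognised trouble too late to hand over safely;
        # whatever it does next, the human inherits the mess mid-incident.
        return "attempt a minimal-risk stop while alerting the driver"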

We are attuned to, and accept, mistakes made by human drivers leading to accidents which cause damage, injury and death. Long-term exposure reduces our sensitivity to this risk. But is society ready to accept a similar level of frailty from imperfectly conceived and implemented algorithms running in software on computers? Even when faced with evidence that autonomous vehicles are substantially safer overall than human drivers, I doubt that society is ready to be forgiving.

It is up to engineers to ensure that driver-augmentation and autonomous vehicle systems are as effective as they can practically be, and, along with other decision-makers, to avoid over-reaching or getting ahead of society’s acceptance of the new technology, thereby risking its rejection.