The first fatality involving the use of Tesla’s Autopilot* occurred last May. The Guardian reported that the Autopilot sensors on the Model S failed to distinguish a white tractor-trailer crossing the highway against a bright sky, and the car promptly tried to drive under the trailer, with decapitating results. What’s since emerged is that the driver had a history of driving at speed and of using the automation beyond the maker’s intent, e.g. operating the vehicle hands-off rather than hands-on, as the screen grab above indicates. Indeed, recent reports indicate that immediately prior to the accident he was travelling fast (maybe too fast) whilst watching a Harry Potter DVD. There also appears to be a community of like-minded spirits out there intent on seeing how far they can push the automation… sigh.

We could dismiss this as just another story about idiots and how they ruin it for everyone, roll our eyes and move on. But there are some actual lessons to be learned here, or more accurately relearned. When we introduce automation into a situation there is always the question of how the users will actually use it. Tesla’s designers can assume that their automation is for driver assistance and that the driver is still, well, ‘driving’. But that’s just an assumption. In practice, as we found when automating the flight decks of aircraft, operators need to be rigorously trained to use new automation appropriately and, even with all that training, they still occasionally use it inappropriately.

Another lesson is that once you move people from the task of ‘doing’ to that of ‘monitoring’ there are all sorts of interesting psychological effects, including people being functionally asleep while still responding to vigilance tests**, as the railways have known for years. A final ‘lesson’, if you will, from aviation is that once you introduce a shared control mode between people and automation there is always the opportunity for conflict.
The problem Tesla faces is that (unlike Airbus or Boeing) they’re not dealing with a small homogeneous cadre of professionals but with the general public, i.e. users of widely varying abilities and motivations who receive no such rigorous training. As a result you can reasonably expect more than the occasional violation of procedure. And as there are going to be a lot of idiots, sorry, consumers, driving Teslas in the near future, this becomes a ‘big numbers’ problem for Tesla. Personally I’m in favour of Tesla making their software smart enough to recognise a pattern of abuse by the user, at which point their ‘privileges’ are withdrawn.
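To make the privilege-withdrawal idea concrete, here is a minimal sketch of how such an abuse monitor might work. Everything here is hypothetical, not Tesla’s actual software: the window size, thresholds, and the notion of a per-trip ‘hands-off violation’ flag are all assumptions for illustration.

```python
from collections import deque
from enum import Enum

class Privilege(Enum):
    FULL = "autopilot available"
    RESTRICTED = "autopilot speed-limited"
    REVOKED = "autopilot disabled"

class AbuseMonitor:
    """Hypothetical sketch: track recent hands-off violations in a
    sliding window of trips and step the driver's automation
    privileges down as the violation count grows."""

    def __init__(self, window=20, restrict_at=3, revoke_at=6):
        # Thresholds are illustrative assumptions, not real policy.
        self.events = deque(maxlen=window)  # 1 = violation, 0 = clean trip
        self.restrict_at = restrict_at
        self.revoke_at = revoke_at

    def record_trip(self, hands_off_violation: bool):
        self.events.append(1 if hands_off_violation else 0)

    def privilege(self) -> Privilege:
        violations = sum(self.events)
        if violations >= self.revoke_at:
            return Privilege.REVOKED
        if violations >= self.restrict_at:
            return Privilege.RESTRICTED
        return Privilege.FULL
```

The sliding window matters: a driver with a long-past lapse eventually regains full privileges, while a sustained pattern of abuse keeps them restricted.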
Well, yes. Note that the reports indicate the Autopilot couldn’t discriminate the white truck against the bright sky. Now this is a tough ask for any sensor, including Eyeball Mk I; it’s why in the old days of dogfighting fighter pilots worried about ‘the Hun in the sun’, and why old-school pilots would fly into the aforementioned sun to defeat early heat-seeking missiles. So sensor degradation is not a new thing. However, we also know that you need to account for such failures in your automation. If you don’t, the operator will likely continue on, unaware that the situation is worsening, until a crisis point is reached, at which point the automation either hands off to the operator with a jaunty Hail Mary or, as was the case in this instance, continues blithely on to destruction.
So what to do? Here there may actually be a straightforward technological answer. The failure modes of the sensor, or sensors, including when the environment exceeds their specification, can be characterised, and the Autopilot designed to respond by reducing speed, degrading the services provided or, in the worst case, handing authority back to the operator. That the Autopilot didn’t do this indicates that the engineers over at Tesla (or at the sensor’s design house) overlooked this particular failure mode, and at least in this case the Autopilot exhibits all the hallmarks of what’s called ‘strong but silent’ automation. I’d be very interested to see what sort of system safety program Tesla operates, given that this is exactly the sort of error such programs are intended to prevent. I’d also be interested to know whether this was the only such omission; again, a system safety program is intended to ensure such lessons are applied more generally.
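The degrade-or-hand-back response described above can be sketched as a simple mode selector. This is a toy illustration under stated assumptions: the confidence value, the in-spec flag, and the 0.3/0.7 thresholds are all invented for the example, not anything from Tesla’s design.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "full autopilot"
    REDUCED = "reduced speed / limited services"
    HANDBACK = "alert driver and hand back authority"

def autopilot_mode(confidence: float, in_spec: bool) -> Mode:
    """Illustrative sketch: map characterised sensor health onto a
    degraded operating mode instead of failing silently.

    confidence -- hypothetical sensor-fusion confidence in [0, 1]
    in_spec    -- False when the environment (e.g. white truck
                  against glare) exceeds the sensor's characterised
                  envelope
    """
    if not in_spec or confidence < 0.3:
        # Worst case: don't continue blithely on; hand back control
        # with an alert, rather than a jaunty Hail Mary at the last moment.
        return Mode.HANDBACK
    if confidence < 0.7:
        # Degrade gracefully: slow down, limit services, warn the driver.
        return Mode.REDUCED
    return Mode.NORMAL
```

The point of the sketch is the shape of the logic, not the numbers: once the failure modes are characterised, ‘silent’ continuation stops being a reachable state.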
*A possibly rather unfortunate turn of phrase, given that it’s not actually an autopilot…
**The problem with most ‘naive’ vigilance systems is that they require only a simple response to an audible/tactile alarm. Unfortunately this sort of simple challenge-response behaviour is actually very resistant to fatigue effects, unlike the higher-level cognitive functions which are what’s actually critical. As a result, right now, given the number of Tesla cars on the road, there’s probably a Tesla driver out there who’s functionally asleep at the wheel, vigilance systems notwithstanding.