Tesla and automation surprises

02/07/2016 — 2 Comments

Joshua Brown screen grab

Keep your eyes on the road, and your hands upon the wheel…

The first fatality involving the use of Tesla's autopilot* occurred last May. The Guardian reported that the autopilot sensors on the Model S failed to distinguish a white tractor-trailer crossing the highway against a bright sky, and the car promptly tried to drive under the trailer, with decapitating results. What's emerged since is that the driver had a history of driving at speed, and also of using the automation beyond the maker's intent, e.g. operating the vehicle hands-off rather than hands-on, as the screen grab above indicates. Indeed, recent reports indicate that immediately prior to the accident he was travelling fast (maybe too fast) whilst watching a Harry Potter DVD. There also appears to be a community of like-minded spirits out there intent on seeing how far they can push the automation… sigh.

We could just dismiss this as another of those stories about idiots and how they ruin it for everyone, roll our eyes and move on. But there are some actual lessons to be learned here, or more accurately relearned. When we introduce automation into a situation there is always the question of how the users will actually use it. Now Tesla's designers can assume that their automation is for driver assistance and that the driver is still, well, 'driving'. But that's just an assumption. In practice, as we've found when automating the flight decks of aircraft, operators need to be rigorously trained to use new automation appropriately, and even with all that training they still occasionally use it inappropriately. Another lesson is that once you move people from the task of 'doing' to that of 'monitoring' there are all sorts of interesting psychological effects, including people being functionally asleep while still responding to vigilance tests**, as the railways have known for years. A final 'lesson', if you will, from aviation is that once you introduce a shared control mode between people and automation there is always the opportunity for conflict.

The problem Tesla faces is that (unlike Airbus or Boeing) you're not dealing with a small homogeneous cadre of professionals but with the general public, i.e. users of widely varying abilities and motivations who receive no such rigorous training. As a result you can reasonably expect to get more than the occasional violation of procedure. And as there are going to be a lot of consumers driving Teslas in the near future, this becomes a 'big numbers' problem for Tesla. Personally I'm in favour of Tesla making their software smart enough to recognise a pattern of abuse by the user, at which point their 'privileges' are withdrawn.
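By way of illustration, here's a minimal sketch of what such a privilege-withdrawal policy might look like. Everything in it, the class names, the thresholds, and the notion of a discrete 'violation' event, is my own invention for the purposes of the example, not anything Tesla has published:

```python
from collections import deque
from enum import Enum
import time

class AutopilotPrivilege(Enum):
    FULL = "full"              # all assistance features available
    RESTRICTED = "restricted"  # e.g. speed-capped, nagged more often
    REVOKED = "revoked"        # assistance withdrawn until reviewed

class AbuseMonitor:
    """Track misuse events (e.g. sustained hands-off operation) over a
    sliding window, stepping privileges down as a pattern emerges."""

    def __init__(self, window_s=3600.0, restrict_after=3, revoke_after=6):
        self.window_s = window_s          # look-back window in seconds
        self.restrict_after = restrict_after
        self.revoke_after = revoke_after
        self._events = deque()            # timestamps of recent violations

    def record_violation(self, now=None):
        """Log a violation and return the resulting privilege level."""
        now = time.time() if now is None else now
        self._events.append(now)
        # drop violations that have aged out of the window
        while self._events and now - self._events[0] > self.window_s:
            self._events.popleft()
        return self.privilege()

    def privilege(self):
        n = len(self._events)
        if n >= self.revoke_after:
            return AutopilotPrivilege.REVOKED
        if n >= self.restrict_after:
            return AutopilotPrivilege.RESTRICTED
        return AutopilotPrivilege.FULL
```

A real policy would need to be far more careful about false positives (a sensor glitch reading as 'hands off'), but the principle, automation that notices when it's being gamed, is simple enough.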

Tesla-camera (Image source: Angelos Lakrintis)

So the interplay between automation and human, and the human's inevitable quest to 'optimise' and 'play', is one issue. Are there any others?

Well yes. Note that the reports indicate the autopilot couldn't discriminate the white truck against the bright sky. Now this is a tough ask for any sensor, including Eyeball Mk I; it's why, in the old days of dog fighting, fighter pilots worried about 'the Hun in the sun', and why old-school pilots would fly into the aforementioned sun to defeat early heat-seeking missiles. So sensor degradation is not a new thing. However, we also know that you need to account for such failures in your automation. If you don't, the operator will likely continue on unaware that the situation is worsening until a crisis point is reached, at which point the automation either hands off to the operator with a jaunty Hail Mary or, as was the case in this instance, continues blithely on to destruction.

So what to do? Here there may actually be a straightforward technological answer. The failure modes of the sensor, or sensors, including when the environment exceeds their specification, can be characterised, and the Autopilot designed to respond by reducing speed, degrading the services provided or, in the worst case, handing authority back to the operator. That the Autopilot didn't do this indicates that the engineers over at Tesla overlooked this particular failure mode, and (at least in this case) the autopilot exhibits all the hallmarks of what we call 'strong but silent automation'. I'd be very interested to see what sort of system safety program Tesla operates, given that this is exactly the sort of error such programs are intended to prevent. I'd also be interested to know whether this was the only such omission; again, a system safety program is intended to ensure such lessons are applied more generally.
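As a sketch of the idea, assuming the sensor stack can report a confidence figure and an in/out-of-specification flag (both assumptions of mine, not a description of Tesla's actual architecture), the supervisory logic might look something like this:

```python
from enum import Enum

class AutopilotMode(Enum):
    NORMAL = 1         # full automation, sensing within specification
    REDUCED_SPEED = 2  # sensing degraded: slow down, widen margins
    HAND_BACK = 3      # sensing outside spec: alert driver, return authority

def select_mode(sensor_confidence, within_spec, confidence_floor=0.6):
    """Pick an operating mode from a characterised sensor state.

    sensor_confidence: 0.0-1.0 self-assessed detection confidence
    within_spec: False when the environment (e.g. a white trailer
                 against a bright sky) exceeds what the sensor can
                 reliably discriminate
    """
    if not within_spec:
        # never continue blithely past the limits of the senses:
        # alarm and hand authority back, with time for the takeover
        return AutopilotMode.HAND_BACK
    if sensor_confidence < confidence_floor:
        return AutopilotMode.REDUCED_SPEED
    return AutopilotMode.NORMAL
```

The particular thresholds don't matter; what matters is that an explicit 'outside specification' state exists at all, so the automation can never be strong but silent about the limits of its senses.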

Way back when I was a very young officer in the navy we had to undergo weapons safety training. I remember one particular film which looked at an accidental shooting; at the end of the film the investigating officer noted tersely that 'no new lessons had been learned'. In the case of Tesla's autopilot I think the conclusion may well be the same.

Footnotes

*A really unfortunate turn of phrase, given that it's not actually an autopilot…

**The problem with most 'naive' vigilance systems is that they require a simple response to an audible/tactile alarm. Unfortunately this sort of simple challenge-response behaviour is remarkably resistant to fatigue effects, unlike the higher-level cognitive functions that are actually critical. As a result, right now, given the number of Tesla cars on the road, there's probably a Tesla driver out there who's functionally asleep at the wheel, vigilance systems notwithstanding. Disturbed yet? 🙂
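To make the flaw concrete, here is a toy model of such a naive system (the sound_alarm and acknowledged hooks are hypothetical placeholders, not any real vehicle's API):

```python
import time

def naive_vigilance_loop(sound_alarm, acknowledged, interval_s=60, grace_s=5):
    """Toy model of a naive deadman-style vigilance system: challenge the
    operator periodically and intervene only on a *missed* response.

    The flaw: acknowledging an alarm is a near-reflexive act that a
    fatigued, functionally asleep operator can still perform, so a
    timely response proves nothing about the higher-level cognition
    that monitoring actually demands.
    """
    while True:
        time.sleep(interval_s)
        sound_alarm()                        # audible/tactile challenge
        if not acknowledged(timeout=grace_s):
            return "intervene"               # e.g. brake to a safe stop
```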

2 responses to Tesla and automation surprises

  1. 

    The future’s uncertain and the end is always near. The world’s first automobile accident occurred in Ohio City, Ohio in 1891. Inventor James William Lambert was driving the first single-cylinder gasoline automobile, with passenger James Swoveland, when he hit a tree root, causing the car to careen out of control and smash into a hitching post. A few million fatalities and 125 years later, as we attempt to reduce our reliance on human operators of uncertain intellect, we should remember that even a man who went on to patent over six hundred inventions, mostly affiliated with the automobile industry, had accidents.

