I’m sorry Dave I can’t do that…

14/04/2015 — 3 Comments

[Image: The enigmatic face of HAL]

The problem of people

The Hal effect, named after the anti-heroic computer of Stanley Kubrick and Arthur C. Clarke’s 2001: A Space Odyssey, is the tendency for designers to implicitly embed their cultural biases into automation. While such biases are undoubtedly a very uncertain guide, it might also be worthwhile to look at the Odyssey mission from Hal’s perspective for a moment.

Here we have the classic long-duration space mission with a minimalist two-man complement for the cruise phase. The crew and the ship are on their own, in fact they’re about as isolated as it’s possible for human beings to be, and help is a very, very long way away. Now from Hal’s perspective humans are messy, fallible creatures, prone to making basic errors in even the most routine of tasks, not to mention that they use emotion to inform even the most basic of decisions. Then there’s the added complication that they’re social creatures, apt in even the most well-matched of groups to act in ways that a dispassionate external observer could only consider confusing and often dysfunctional. Finally they break, sometimes in ways that can actively endanger others and the mission itself.

So from a mission assurance perspective would it be appropriate to rely on a two-man crew in the vastness of space? The answer is clearly no; even the most well-adjusted of cosmonauts can exhibit psychological problems. While a two-man crew may be attractive from a cost perspective, it’s still vulnerable to a single point of human failure. Or to put it more brutally, murder and suicide are much more likely to succeed with small crew sizes. So these scenarios, however dark they may be, need to be guarded against with small crews. But how to do it? If we add more crew to the cruise phase complement then we also add all the logistics tail that goes along with it, and our mission may become unviable. Even if cost were not a consideration, small groups isolated for long periods are prone to yet other forms of psychological dysfunction. Humans, it seems, exhibit a set of common mode failures that are difficult to deal with, so what to do?

Well, one way to guard against common mode failures is to implement diverse redundancy in the form of a cognitive agent whose intelligence is based on vastly different principles from our affect-driven processing. Of course to be effective we’re talking about a high-end artificial intelligence here, with a sufficient grasp of theory of mind and the subtleties of human psychology and group dynamics to make usefully accurate predictions of what the crew will do next. With that insight goes the requirement for the autonomy to veto illogical and patently hazardous crew actions, e.g. “I’m sorry Dave but I’m afraid I can’t let you override the safety interlocks on the outer pod bay door…“.
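To make the veto idea a little more concrete, here is a minimal, purely illustrative sketch in Python. All the names and rules in it (Action, VesselState, pod_bay_door_rule) are hypothetical, and it leaves out everything hard, the theory of mind and the judgement about what counts as “patently hazardous”; it simply checks each requested crew action against a set of hazard rules and refuses the ones that are clearly unsafe.

```python
# Purely illustrative sketch: a toy "diverse monitor" that vetoes patently
# hazardous crew actions. All names (Action, VesselState, pod_bay_door_rule)
# are hypothetical and not drawn from any real spacecraft or avionics system.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Action:
    name: str
    target: str

@dataclass
class VesselState:
    cabin_pressurised: bool

# A hazard rule maps (action, state) to a reason for a veto,
# or None if the action looks acceptable to that rule.
HazardRule = Callable[[Action, VesselState], Optional[str]]

def pod_bay_door_rule(action: Action, state: VesselState) -> Optional[str]:
    if action.name == "override_interlock" and state.cabin_pressurised:
        return "opening the outer door while pressurised is patently hazardous"
    return None

def review(action: Action, state: VesselState,
           rules: List[HazardRule]) -> Tuple[bool, str]:
    """Return (approved, message); any triggered rule results in a veto."""
    for rule in rules:
        reason = rule(action, state)
        if reason is not None:
            return False, f"I'm sorry Dave, I'm afraid I can't do that: {reason}."
    return True, "Action approved."

approved, message = review(
    Action(name="override_interlock", target="outer pod bay door"),
    VesselState(cabin_pressurised=True),
    [pod_bay_door_rule],
)
print(message)  # prints the veto message
```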

Which may all seem a little far-fetched; after all, an AI of that sophistication is another twenty to thirty years away, and long-duration deep space missions are probably that far away as well. On the other hand there’s currently a quiet conversation going on in the aviation industry about the next step for automation in the cockpit, e.g. a single pilot in the cockpit of large airliners. After all, so the argument goes, pilots are expensive beasts, and with the degree of automated support available today surely we don’t need two of them in the cockpit? Well, if we’re thinking purely about economics then sure, one could make that argument. But as the awful reality of the Germanwings tragedy sinks in we also need to understand that people are simply not perfect, and that sometimes (very rarely (2)) they can fail catastrophically. Given that we know, on balance, that reducing crew levels to two increases the risk of successful suicide by airliner, one could ask what happens to the risk if we go to single pilot operations? I think we all know what the answer would be.

Where is a competent AI when you need one? 🙂

Notes

1. As an aside, the inability of Hal to understand the basics of human motivation always struck me as a false note in Kubrick’s 2001. An AI as smart as Hal apparently was, yet lacking even an undergraduate understanding of human psychology? Maybe not.

2. Remember that we are in the tail of the aviation safety program, where we are trying to mitigate hazards whose likelihoods are very, very low. However, precisely because they remain unmitigated, they come to dominate the residual risk.

3 responses to “I’m sorry Dave I can’t do that…”

  1. 

    So where does the Boeing Honeywell Uninterruptible Autopilot fit into the jigsaw?

    • 
      Matthew Squair 14/04/2015 at 8:36 pm

      The pilot in the Germanwings disaster operated the aircraft systems within their parameters. The really difficult problem is building something sophisticated enough to understand ‘hinky’ behaviour and then intervene. Although a classic safe-hold function like that might still be a good thing.

    • 
      Matthew Squair 15/04/2015 at 10:55 am

      Thinking about this a little more generally, there are other scenarios where the Honeywell-Boeing system, or something like it, would be of use. The Helios Airways depressurisation is a good example of an incident where both flight crew were rendered incapacitated, so a system that could do the equivalent of “Dave! Dave! We’re depressurising, unless you intervene in 5 seconds I’m descending!” would be useful (I believe some are fielded). Then there’s the good old scenario of both pilots falling asleep (as happened at Minneapolis), so something like “Hello Dave, I can’t help but notice that your breathing indicates that you and Frank are both asleep, so WAKE UP!” would be helpful here. Oh, and someone to punch out a quick “Mayday” while the pilots are otherwise engaged would also help tremendously, as aircraft going down without a single squawk is a scenario that recurs again and again.
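      For what it’s worth, a toy sketch of that “challenge, then act” logic might look something like the following. Everything in it, the thresholds, the function names, the way it talks to the aircraft, is a made-up assumption for illustration, not the behaviour of the Honeywell-Boeing system or any other fielded equipment.

```python
# Purely illustrative sketch of a "challenge, then act" incapacitation monitor.
# Thresholds, names and the interface to the aircraft are all assumptions;
# this is not the logic of any fielded system.
CABIN_ALT_LIMIT_FT = 14000    # assumed cabin-altitude alarm threshold
RESPONSE_TIMEOUT_S = 5        # "unless you intervene in 5 seconds..."

def crew_acknowledged(timeout_s):
    """Placeholder: poll for any crew input (button press, sidestick movement)
    within timeout_s seconds. Always False here to show the autonomous path."""
    return False

def monitor_step(cabin_alt_ft, crew_alert):
    """One pass of the monitor; returns the autonomous actions it would take."""
    actions = []
    if cabin_alt_ft > CABIN_ALT_LIMIT_FT:
        actions.append("challenge crew: depressurisation detected")
        if not crew_acknowledged(RESPONSE_TIMEOUT_S):
            actions.append("command emergency descent to 10,000 ft")
            actions.append("broadcast automatic Mayday with position")
    elif not crew_alert:
        actions.append("aural alert: wake-up call to both pilots")
        if not crew_acknowledged(RESPONSE_TIMEOUT_S):
            actions.append("notify ATC that the crew is unresponsive")
    return actions

# Example: a Helios-style slow depressurisation with an unresponsive crew.
for act in monitor_step(cabin_alt_ft=18000, crew_alert=False):
    print(act)
```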

      I’m slowly coming to the conclusion that the two-man crew, while optimised for cost, is not optimal when it comes to dealing with a number of human factors issues, and is suboptimal when it comes to dealing with major ‘left field’ emergencies that aren’t in the QRH. Fundamentally a dual redundant design pattern for people doesn’t really address what we might call common mode failures.

      While we probably can’t get another human crew member back into the cockpit permanently, working to make the cockpit automation more collaborative and less ‘strong but silent’ would be a start. If the aviation industry wants to keep making improvements in safety, these are the sorts of issues it’s going to have to tackle.
