Understanding unreliable airspeed…


Occasional readers of this blog might have noticed my preoccupation with unreliable airspeed and the human factors and system design issues that attend it. So it was with some interest that I read the recent paper by Sathy Silva of MIT and Roger Nicholson of Boeing on aviation accidents involving unreliable airspeed.

Part of their analysis involved using Rasmussen’s SRK model to categorise the types of crew error occurring during the detection, understanding, selection and execution phases of the crew’s response.

The authors concluded that the earlier the error occurs in that sequence, the more severe the accident. In fact, they found that all accidents with a fatal outcome were categorised as involving an error in detection or understanding, with the majority being errors of understanding.

This shouldn’t really be a surprise, because ‘understanding’, that high-level ability to comprehend and make sense of a new situation, is unfortunately also the ability that’s the most mentally demanding, the slowest and the most inherently error prone.

What is interesting is the ‘double whammy’ that this study identifies: errors in understanding were both far more likely to occur than other error types and, when they did occur, much more likely to end in a fatal accident. The recent Air France AF 447 disaster is a case in point.

Further reinforcing this is the BEA AF447 investigation’s side-study into aircrew responses to thirteen icing events, in which they found no evidence that aircrew had applied the immediate action memory items of the unreliable airspeed procedure. In four of the nine instances where a stall warning sounded the BEA also found that aircrew decided not to initiate a stall recovery, again contrary to procedure.

Now, traditionally the aviation industry has treated unreliable airspeed as a problem to which the response could be fundamentally procedural in nature. But what the work of Silva and Nicholson and of the BEA tells us is that such a belief may well be inherently flawed, and that any solution based on “it’s all a training/procedural compliance problem” may not really work.

In classical safety engineering, critical functions that are also irreversible once initiated receive particular attention; perhaps we should treat tasks that demand high levels of understanding under uncertainty and time pressure in a similar fashion.
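To make the detection end of that sequence concrete, here is a minimal illustrative sketch (not any certified avionics design, and the threshold value is an assumption chosen purely for illustration) of the classic redundancy approach: mid-value selection across three airspeed sources, plus a disagreement monitor that flags a suspect probe.

```python
def mid_value_select(readings):
    """Return the median of three redundant airspeed readings (knots)."""
    return sorted(readings)[1]

def airspeed_monitor(readings, disagree_threshold=20.0):
    """Flag 'unreliable airspeed' when any sensor disagrees with the
    mid-value by more than the threshold (e.g. an iced-up pitot probe).

    Returns the selected airspeed and a list of suspect sensor indices.
    """
    selected = mid_value_select(readings)
    suspect = [i for i, r in enumerate(readings)
               if abs(r - selected) > disagree_threshold]
    return selected, suspect

# Illustrative case: probe 2 has iced over and under-reads badly.
speed, suspect = airspeed_monitor([252.0, 251.0, 110.0])
# speed -> 251.0, suspect -> [2]
```

Note what this sketch automates and what it doesn’t: a voting monitor can assist *detection*, but *understanding* (why the sensors disagree, and what the correct response is, particularly when a common-cause failure such as icing takes out multiple probes at once) remains exactly the high-level, error-prone cognitive task the paper identifies.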

5 responses to Understanding unreliable airspeed…

      Matthew Squair 05/04/2013 at 12:04 pm

      I’m just re-reading Jerry Lederer’s 1942 article on procedures in accident reporting and his doctrine of the last clear chance in determining the proximate (and probable) cause. He also pointed out that there’d been argument about causes since 1935, and it’s good to see that we’re still having these sort of arguments in 2013…

      So for me the ‘last clear chance’ was the point at which the pilots should have, but did not, recognise unreliable airspeed. No recognition, no application of memory items. Once they’d failed to recognise it, all the rest follows, and all the discussion about automation, HMI, stall recognition and recovery etc, etc is arguing about contributing factors.


    Mark Boardman 22/02/2015 at 4:15 am

    Sorry to see the links in these comments have gone.

    I agree that correct detection is the key to dealing with this issue. However the “high level ability to comprehend” that you talk about becomes progressively more difficult when also faced with:

    1) Confusion caused by alarms. Depending on what has happened (e.g. iced up probes or failed instrument) sometimes you will get different kinds of audio warnings and visual EICAS annunciations. For example, you could have simultaneous stall and overspeed warnings … so which one is right? There are other possibilities.

    2) The noise of loud warning alarms which is distracting.

    3) Suddenly transitioning from a relatively low state of arousal to a relatively high one.

    4) Possibly being in a fatigued state as well.

    5) Possible exterior distractions such as lightning, turbulence, and the rather loud sounds of heavy rain or hail on the windscreen.

    As you say, and the authors Silva and Nicholson found, in order to perform the correct recovery actions, you first have to make the correct diagnosis. That should lead you to the correct recall items and allow you to control the situation. However, what you don’t mention is that, having made a decision, the next thing that should be asked is: “Is it working?” If the answer is “NO,” then the next question should be: “Have we correctly diagnosed the problem?” While it is easy to sit here and write this, it becomes progressively more difficult to actually do this when being distracted by some of the items listed above. I believe this is why the aviation industry treats unreliable airspeed as a procedural issue since, initially at least, it de-emphasises thinking in favour of a set of actions which will get you on the road to recovery … if you’ve made the right diagnosis …

    Maybe the way forward is not so much de-emphasising procedural, but improving the ability to assess actions to see if they are producing the desired output. In this sense I totally agree with your last statement about how classical safety engineering treats critical functions which are also irreversible once initiated.


      Matthew Squair 22/02/2015 at 5:46 pm

      I think we have underestimated the effect that stress can have on even trained professionals; see ‘When the fire’s too hot‘ for an illustrative case study. What I think has been missed in the stall recovery scenarios we’ve been discussing is that you need to wait for the control inputs to have an effect, more so if you’re at high altitude where you lose control authority. The problem is that we know people’s time sense can appear to dilate significantly under extreme stress, which means they may not give the aircraft enough time to respond. IMO we should actually design the user interface to take account of these sorts of effects.