Occasional readers of this blog might have noticed my preoccupation with unreliable airspeed and the human factors and system design issues that attend it. So it was with some interest that I read the recent paper by Sathy Silva of MIT and Roger Nicholson of Boeing on aviation accidents involving unreliable airspeed.
Part of their analysis involved using Rasmussen’s SRK model to categorise the types of crew error occurring during the detection, understanding, response selection and response execution phases of crew performance.
The authors concluded that the earlier the error occurs in that sequence, the more severe the accident. In fact, they found that every accident with a fatal outcome involved an error of detection or understanding, with the majority being errors of understanding.
This shouldn’t really be a surprise, because ‘understanding’, that high-level ability to comprehend and make sense of a new situation, is unfortunately also the ability that is the most mentally demanding, the slowest and the most inherently error prone.
What is interesting is the ‘double whammy’ that this study identifies: errors of understanding were both far more likely to occur than other error types and, when they did occur, much more likely to end in a fatal accident. The recent Air France AF 447 disaster is a case in point.
Further reinforcing this is the BEA AF447 investigation’s side-study into aircrew responses to thirteen icing events, which found no evidence that aircrew had applied the immediate action memory items of the unreliable airspeed procedure. In four of the nine instances in which a stall warning sounded, the BEA also found that aircrew decided not to initiate a stall recovery, again contrary to procedure.
Now traditionally the aviation industry has treated unreliable airspeed as a problem to which the response could be fundamentally procedural in nature. But what the work of Silva and Nicholson and of the BEA tells us is that such a belief may well be inherently flawed, and that any solution based on “it’s all a training/procedural compliance problem” may not really work.
In classical safety engineering, critical functions that are irreversible once initiated receive particular attention; perhaps we should treat tasks that demand high levels of understanding under uncertainty and time pressure in a similar fashion.