Out of the loop, aircrew and unreliable airspeed at high altitude
The BEA’s third interim report on AF 447 highlights the vulnerability of aircrew when their usually reliable automation fails in the challenging operational environment of high altitude flight.
This post is part of the Airbus aircraft family and system safety thread.
This vulnerability has traditionally been summarized by the term Out Of The Loop and Unaware (OOTLU) (Wickens, Hollands 2000). OOTLU is an adaptive behavioral syndrome in which the operators of a reliable system become progressively more complacent: they trust the automation more and monitor its operation less, and as a result are:
- slower to detect failures or to intervene when necessary (vigilance decrement),
- less aware of the system's status or of the raw data processed by the automation (lower situation awareness), and
- less proficient in the manual skills required to take over the failed automation's duties (deskilled).
Unfortunately the crew of AF 447 appear to have been victims of this syndrome. The CVR transcript released with the third BEA report shows a crew having difficulty identifying the system failure they faced. Their degraded situational awareness hampered their ability to correlate the available data into a coherent model, one that could have told them the aircraft was deeply stalled. And all the while they were also struggling with the challenge of manually controlling an aircraft at high altitude.
All this represents the unstated human cost of automation: the more reliable the automation, the greater the attentional and cognitive 'tunnelling' operators exhibit. Human beings are adaptable creatures, and they adapt their monitoring and interaction strategies to their perception of a system's reliability. The problem, of course, is that this adaptation also leaves operators ill prepared for when the 'reliable' system fails.
So what now must we do? I believe that for aviation the clear lesson is that we need to focus firstly on the design of the aircrew's primary flight displays, secondly on aircrew training and thirdly on the process of deciding what to automate.
In primary flight displays more attention should be given to providing aircrew with immediate access to the greater body of data available (1). For example, stall warnings could be presented visually, with the aircraft's alpha and speed provided alongside them in a 'low lighted' format. Improved protection status information could indicate when and why key protections are removed (such as the suppression of the stall warning at low airspeed). Articulated pitch ladders could provide more data without additional display clutter.
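To make the idea concrete, here is a minimal sketch of the annunciation logic being argued for. It is purely illustrative: the function name, thresholds and message strings are my own assumptions, not Airbus logic, and the only grounded element is the principle that a warning inhibited by invalid data should be announced, with its cause, rather than silently dropped.

```python
# Illustrative sketch only: thresholds and names are hypothetical,
# not actual Airbus flight warning computer logic.

STALL_AOA_DEG = 10.0       # illustrative stall-warning alpha threshold
AOA_VALID_MIN_KTS = 60.0   # below this, angle-of-attack data is treated as invalid

def stall_annunciation(alpha_deg: float, airspeed_kts: float) -> str:
    """Return the message a primary flight display would show."""
    if airspeed_kts < AOA_VALID_MIN_KTS:
        # Silently removing the warning is what degrades crew awareness;
        # instead, announce the inhibition and the reason for it.
        return "STALL WARNING INHIBITED: AOA INVALID (LOW AIRSPEED)"
    if alpha_deg >= STALL_AOA_DEG:
        # Low-lighted raw data accompanies the warning for context.
        return f"STALL (alpha {alpha_deg:.1f} deg, {airspeed_kts:.0f} kt)"
    return "NORMAL"
```

The point of the first branch is the one at issue in AF 447: a deeply stalled aircraft with unreliable airspeed should tell the crew that the protection has been withdrawn and why, rather than fall silent.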
But improving the interface is only a partial solution at best. The other half of the equation is to train and educate aircrew in the imprecision inherent in their systems, as well as in the potential effects of OOTLU upon their performance. A specific outcome of such training should be learned attentional 'defocusing'. In addition, aircrew should be trained in both the aircraft handling skills and the psychological preparedness necessary to fly manually at high altitude.
Finally, the standards governing the design of the human-machine interface for aviation should be amended to require that, when deciding what to automate, we explicitly consider all the costs and benefits of automation.
1. Wickens, C. D., Hollands, J., Engineering Psychology and Human Performance (3rd ed.), Prentice Hall, NY, 2000.
2. Wickens, C. D., Technical Report ARL-00-10/NASA-00-2, prepared for NASA Ames Research Center, Moffett Field, CA, August 2000.
Note 1. Albeit this data should be presented with somewhat lower salience, to allow selective filtering.