Out of the Loop

14/08/2011

Out of the loop, aircrew and unreliable airspeed at high altitude

The BEA’s third interim report on AF 447 highlights the vulnerability of aircrew when their usually reliable automation fails in the challenging operational environment of high altitude flight.

This post is part of the Airbus aircraft family and system safety thread.

This vulnerability has traditionally been summarised by the term Out Of The Loop & Unaware (OOTLU) (Wickens & Hollands 2000). OOTLU is an adaptive behavioural syndrome in which the operators of a reliable system progressively become more complacent, trust the automation more, monitor its operation less and, as a result, are:

  1. slower to detect failure or intervene when necessary (vigilance decrement),
  2. less aware of the system's status or the raw data processed by the automation (lower situation awareness), and
  3. less proficient in the manual skills required to take over the failed automation's duties (deskilling).

Unfortunately the crew of AF447 appear to have been victims of this syndrome. The CVR transcript released with the BEA's third report shows a crew having difficulty identifying the system failure they faced. Their poor situation awareness hampered their ability to correlate data into a coherent model that could in turn have told them the aircraft was deeply stalled. And all the while they were also struggling with the challenge of manually controlling an aircraft at high altitude.

All this represents the unstated human cost of automation: the more reliable the automation, the greater the attentional and cognitive 'tunnelling' operators exhibit. Human beings are adaptable creatures, and they will adapt their monitoring and interaction strategies to their perception of a system's reliability. The problem with this adaptation, of course, is that it leaves operators ill prepared for when the 'reliable' system fails.

So what must we do now? I believe that for aviation the clear lesson is that we need to focus first on the design of aircrew primary flight displays, second on aircrew training and third on the process of deciding what to automate.

In primary flight displays more attention should be given to providing aircrew with immediate access to the greater body of data available (1). For example, stall warnings could be presented visually, with the aircraft's alpha and airspeed provided alongside them in a 'low-lighted' format. Improved protection status information could indicate when and why key protections are removed (such as the inhibition of the stall warning at low airspeed). Articulated pitch ladders could provide more data without additional display clutter.
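To make the protection status idea concrete, here is a minimal sketch of such an annunciation. It is written in Python purely for illustration: the 60 kt cut-off echoes the speed below which AF447's stall warning ceased, but the names, thresholds and display format are my assumptions, not any certified avionics logic.

    from dataclasses import dataclass

    # Assumed cut-off below which alpha data is treated as invalid and the
    # stall warning inhibited (on AF447 the warning ceased below 60 kt CAS).
    STALL_WARNING_MIN_CAS_KT = 60.0

    @dataclass
    class AirData:
        cas_kt: float       # computed airspeed, knots
        alpha_deg: float    # angle of attack, degrees
        alpha_valid: bool   # air data validity flag

    def stall_annunciation(data: AirData, stall_alpha_deg: float = 10.0) -> str:
        """Return a display string rather than a bare on/off warning."""
        if not data.alpha_valid or data.cas_kt < STALL_WARNING_MIN_CAS_KT:
            # The key change: annunciate that the protection is inhibited,
            # and why, instead of simply falling silent.
            return "STALL WARNING INHIBITED - ALPHA INVALID (LOW CAS)"
        if data.alpha_deg >= stall_alpha_deg:
            # Warning shown together with the raw data behind it.
            return f"STALL | alpha {data.alpha_deg:.1f} deg, CAS {data.cas_kt:.0f} kt"
        # Normal case: alpha and speed shown as low-salience supporting data.
        return f"alpha {data.alpha_deg:.1f} deg, CAS {data.cas_kt:.0f} kt"

    # A deeply stalled, very slow aircraft: the display now says why the
    # warning has gone, rather than leaving the crew to infer it.
    print(stall_annunciation(AirData(cas_kt=55.0, alpha_deg=40.0, alpha_valid=True)))

The point is not this particular format but that the inhibition state becomes a displayed datum in its own right, which is precisely the information the AF447 crew lacked when their stall warning fell silent at low airspeed.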

But improving the interface is only a partial solution at best. The other half of the equation is to train and educate aircrew in the imprecision inherent in their systems, as well as the potential effects of OOTLU on their performance. A specific outcome of this training should be learned attentional 'defocusing'. In addition, aircrew should be trained in both the handling skills and the psychological preparedness necessary to fly manually at high altitude.

Finally, the standards governing the design of aviation human-machine interfaces should be amended to require that, when making design decisions about what to automate, we explicitly consider all the costs and benefits of automation.

References

1. Wickens, C. D., Hollands, J., Engineering psychology and human performance (3rd ed.), Prentice Hall, NY, 2000.

2. Wickens, C. D., Technical Report ARL-00-10/NASA-00-2, prepared for NASA Ames Research Center, Moffett Field, CA, August 2000.

Notes

1. Albeit this data should also be presented with somewhat lower salience to allow selective filtering.

2 responses to Out of the Loop

    I agree totally (as a current airline pilot) that more training is needed in recognising and handling these types of problems. Only a single, cursory event of this type is demonstrated to pilots during their initial A330 type course, and that has been my experience on Boeing types as well.

    More concerning is that the response of the AF447 crew is eerily similar to that of the crew of Birgenair 301, who were faced with a malfunctioning airspeed indication, eventually stalled the aircraft and predominantly held nose-up control inputs until impact. In that case the aircraft was a Boeing 757.

    Unfortunately, Airbus (and Boeing) have done very little to make recovery from pitot/static faults easier, despite loudly trumpeting fly-by-wire systems as the answer to 'loss of control' accidents.

      Matthew Squair, 01/10/2011 at 6:02 pm

      Yes, I noted the similarity of behaviour between Birgenair and AF447, as well as a number of other unreliable airspeed incidents that led to a stall event. There are so many similarities that I think we need to take time out to consider the psychology of these events. Maybe my next post. 🙂 And I agree that manufacturers and regulators have not addressed the worst case effectively; that is driven in part by a predilection for 'probabilistic' risk thinking rather than 'possibilistic' thinking.
