The myth of the perfect automatic man

20/09/2009

Recent incidents involving Airbus aircraft have again focused attention on the company's approach to cockpit automation and its interaction with the crew.

Underlying the current debate is perhaps a general view that automation should somehow be ‘perfect’, and that a failure of automation is also a form of moral failing (1). While this weltanschauung undoubtedly serves certain social and psychological needs, the debate it engenders doesn’t really further productive discussion of what could, or indeed should, be done to improve cockpit automation. So let’s take a closer look at the Airbus protection laws implemented in the flight control automation and compare them with how experienced aircrew actually make decisions in the cockpit.

Why do we automate?

The high-level control laws programmed into the flight control computers of the Airbus fleet are intended to ensure efficient operations while protecting the aircraft from the risk of pilots unintentionally flying it outside the flight envelope. Under normal law the aircraft’s flight control automation protects against pilot control inputs that might cause excessive structural load factors or take the aircraft outside the flight envelope (2). Breaking this requirement apart, we are asking the automation to do two things: first to assess the situation, and then to decide whether to intervene.

Natural decisions

As Klein (2002) points out, natural human decision making, especially that of experts, is not a formal and analytical process. Instead it is based on rapid situation assessment (including context), a serial matching of remembered patterns to the current situation, and the selection of the first course of action that satisfices the need (3). Once an action is selected it is mentally simulated by the operator to determine what adverse outcomes there may be.

In naturalistic human decision making the emphasis is not upon analysis and comparison of options but upon situation assessment. Experienced operators develop a sophisticated sense of what the system is doing, and can use it to predict future states (termed expectancy) and adjust the relative salience of various cues. In aviation this ability is known as ‘flying ahead of the aircraft’. This expectancy also allows operators to accept or reject data based on their internal model of the system, or to update and modify their model should the data call its validity into question.
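To make the contrast with the automation concrete, here is a toy sketch of that recognition-primed style of decision making (illustrative Python only; the data structures, the ‘windshear’ pattern and its thresholds are invented for illustration, not drawn from Klein or from any real system): assess the situation, serially match it against remembered patterns, take the first action that survives a quick mental simulation.

```python
# Illustrative sketch of a recognition-primed decision (RPD) loop.
# All names, patterns and thresholds are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Pattern:
    name: str
    matches: Callable[[dict], bool]      # does the remembered pattern fit the current cues?
    action: str                          # the associated course of action
    simulate: Callable[[dict], bool]     # mental simulation: does the action work out here?

def recognition_primed_decision(situation: dict, experience: list[Pattern]) -> Optional[str]:
    """Serially match remembered patterns and take the first satisficing action."""
    for pattern in experience:
        if not pattern.matches(situation):
            continue                      # situation assessment: this pattern does not fit
        if pattern.simulate(situation):
            return pattern.action         # first action that survives mental simulation
        # otherwise reject this option and keep searching (satisficing, not optimising)
    return None                           # no recognised pattern: fall back to analysis

# Example: a crude, invented 'windshear on final' pattern
windshear = Pattern(
    name="windshear on final",
    matches=lambda s: s["phase"] == "final" and s["airspeed_fluctuation_kt"] > 15,
    action="go around",
    simulate=lambda s: s["altitude_ft"] > 50,   # enough height left to fly the manoeuvre?
)

print(recognition_primed_decision(
    {"phase": "final", "airspeed_fluctuation_kt": 20, "altitude_ft": 200},
    [windshear],
))  # -> 'go around'
```

The point of the sketch is the ordering: assessment and expectancy come first, and options are never exhaustively compared, only accepted or rejected one at a time.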

Automated protection decisions and their vulnerability

The decision making embodied in the Airbus flight protection laws is quite different from the way in which human aircrew would make such decisions. The first stage uses a set of sampling statistics (mean and median values) and rate limiting to eliminate erroneous data. Having eliminated erroneous inputs, the second stage decision algorithm then considers a fixed set of parameters and initiates the protection action if required.
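A minimal sketch of that two-stage structure is given below (illustrative Python; the voting window, rate limit and angle-of-attack threshold are invented numbers, not Airbus values, and the real implementation is certainly more elaborate): stage one discards samples that disagree with the channel median or change faster than a rate limit, stage two compares the surviving value against a fixed envelope limit.

```python
# Illustrative two-stage protection decision: (1) filter out 'erroneous' samples
# using simple sampling statistics and a rate limit, (2) trigger the protection
# if the surviving value exceeds a fixed limit. All thresholds are invented.

from statistics import median

MAX_RATE = 5.0        # max credible change per sample (units/s), assumed
MAX_DEVIATION = 3.0   # max credible deviation from the channel median, assumed
ALPHA_LIMIT = 15.0    # angle-of-attack protection threshold, assumed

def filter_sample(channels: list[float], previous: float) -> float:
    """Stage 1: discard samples that deviate from the median or violate the rate limit."""
    mid = median(channels)
    credible = [x for x in channels
                if abs(x - mid) <= MAX_DEVIATION and abs(x - previous) <= MAX_RATE]
    return median(credible) if credible else previous   # fall back to the last good value

def protection_active(alpha_channels: list[float], previous_alpha: float) -> bool:
    """Stage 2: fixed-parameter decision on the filtered value only."""
    alpha = filter_sample(alpha_channels, previous_alpha)
    return alpha > ALPHA_LIMIT            # no wider context (pilot inputs, history) is consulted

# A single random spike on one channel is voted out as expected:
print(protection_active([4.8, 60.0, 5.1], previous_alpha=5.0))   # False
```

Note what the second stage does not see: the history of rejected samples, the pilots’ current commands, or anything about where the aircraft is heading.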

There are a couple of philosophical problems with this approach. By using sampling statistics and filtering we are essentially removing information from the control loop. The result is a set of automated protection laws able to cope with aleatory uncertainty (e.g. the random distribution of noise or component failures) but vulnerable to epistemic uncertainty (e.g. events such as failures or noise that violate the assumed distribution). The QF72 accident is a good example of this type of system vulnerability (ATSB 2008). In contrast, a human operator directly monitoring a process would integrate the presence of noise or unexpected values into their understanding (and model) of the system, and this would inform their decision as to the advisability of initiating a control action.

Vulnerability is also introduced by the narrow context of data upon which the decision is made. For example, the alpha-protection law considers only angle of attack and altitude (both air data) but not the presence of pilot command inputs, even though the law is putatively there to prevent aircrew flying the aircraft outside the envelope. This makes such laws vulnerable to being triggered in the wrong system context, as was the case in the Iberia flight 1456 accident (4).

In a broader sense the Airbus protection laws are vulnerable because their sense of ‘expectancy’ is extremely weak; that is, there is no strong internal model of system behaviour that can be used to check input values and predict future behaviour. For example, in the Airbus QF72 incident there was a persistent time history of ‘spike’ values on the ADIRU 2 channel, yet this deep history was not considered in determining the validity of that input.
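One hedged illustration of what even a weak form of ‘expectancy’ could look like is sketched below (illustrative Python; the window size, spike threshold and channel name are invented, and this is not a claim about how any ADIRU or flight control computer actually works): keep a running history of how often each channel has produced implausible jumps, and treat a channel with a persistent spike history as suspect rather than re-admitting each new sample on its own merits.

```python
# Illustrative sketch of a crude 'internal model': track how often each input
# channel has produced implausible spikes, and distrust channels with a
# persistent spike history. Window size and thresholds are invented.

from collections import defaultdict, deque

SPIKE_THRESHOLD = 10.0   # jump (units/sample) treated as a spike, assumed
WINDOW = 200             # samples of history retained per channel, assumed
MAX_SPIKE_RATE = 0.05    # fraction of spiky samples before a channel is suspect, assumed

history = defaultdict(lambda: deque(maxlen=WINDOW))
last_value = {}

def channel_is_credible(channel: str, value: float) -> bool:
    """Update the channel's spike history and report whether it is still trusted."""
    previous = last_value.get(channel, value)
    spike = abs(value - previous) > SPIKE_THRESHOLD
    history[channel].append(spike)
    last_value[channel] = value
    spike_rate = sum(history[channel]) / len(history[channel])
    return spike_rate <= MAX_SPIKE_RATE   # a persistently spiky channel loses credibility

# A channel that keeps producing spikes is eventually flagged, even though any
# individual spike might pass a memoryless sample-by-sample test.
for i in range(100):
    ok = channel_is_credible("alpha_channel_2", 5.0 if i % 2 else 50.0)
print(ok)   # False once the spike rate exceeds the assumed limit
```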

A further limitation of the protection laws is the lack of forward projection or simulation to predict the results of the action. In the case of Iberia flight 1456 the projection of the continued flight path into impact with the ground was assuredly not considered by the automation. Perhaps a ‘meta protection law’ should be introduced: no application of a protection law may cause the aircraft’s flight path to intersect the ground.
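A sketch of what such a ‘meta protection law’ check might look like is given below (illustrative Python; simple point-mass kinematics, a fixed flight-path angle and invented numbers, not a real terrain-clearance algorithm): project the flight path that would result from the protection action a few seconds ahead, and veto the action if the projection intersects the ground.

```python
# Illustrative 'meta protection law': before applying a protection action,
# project the resulting flight path forward and veto the action if it would
# drive the trajectory into the ground. Point-mass kinematics, invented numbers.

import math

def projected_min_altitude(alt_ft: float, speed_kt: float, fpa_deg: float,
                           horizon_s: float = 10.0, dt: float = 0.5) -> float:
    """Crude forward simulation of altitude along a constant flight-path angle."""
    fps = speed_kt * 1.68781                      # knots -> feet per second
    sink = fps * math.sin(math.radians(fpa_deg))  # vertical speed (negative = descending)
    min_alt = alt_ft
    t = 0.0
    while t <= horizon_s:
        min_alt = min(min_alt, alt_ft + sink * t)
        t += dt
    return min_alt

def protection_permitted(alt_ft: float, speed_kt: float, fpa_after_protection_deg: float) -> bool:
    """Veto any protection action whose projected flight path intersects the ground."""
    return projected_min_altitude(alt_ft, speed_kt, fpa_after_protection_deg) > 0.0

# A hypothetical pitch-limiting intervention at 100 ft that would hold a 3 degree
# descent is vetoed, because the projected path reaches the ground within 10 s:
print(protection_permitted(alt_ft=100.0, speed_kt=140.0, fpa_after_protection_deg=-3.0))  # False
```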

Are we addressing a real risk?

A final question to be asked is why these flight protection laws were automated in the first place. Protection laws are intended to ensure the safe operation of the aircraft, but what evidence is there that this was a significant risk in the first instance, i.e. were aircrew regularly flying outside the performance envelope and endangering the aircraft?

If we’re going to automate human decision making then perhaps we need to understand the complex way in which humans really make decisions and ensure that the automated version provides all of those capabilities. We should also be realistic about the ability of automation to make correct decisions under conditions of uncertainty. If skilled human operators can make mistakes, why do we think that simplistic automated decision making can or will do better? It seems instead that as we have moved away from the myth of the perfectibility of the human we have in turn created a myth of the perfectibility of automation.

This post is part of the Airbus aircraft family and system safety thread.

Notes

(1) In some ways this reflects the person-based approach to human error, i.e. a focus on the direct unsafe act, with causation (design error) deemed to be an aberrant behaviour: ‘inattention’, ‘negligence’ and so on. The underlying view is one of error as a moral issue, i.e. the ‘just world hypothesis’.

(2) Such as speed overshoots, extreme pitch or bank, and stall conditions.

(3) In fact a study of aircrew (Mosier 1991) found that they did not even wait for a complete situation assessment but used a set of key critical triggering parameters to initiate a response, then monitored the results using continuous situation assessment.

(4) In the Iberia flight 1456 accident the decision was whether to land or to go around (TOGA, Take-Off/Go-Around). The pilots had decided to go around because of wind shear, but the automation overrode that decision on the basis of a pitch rate limit exceedance (5) and committed the aircraft to continuing the landing.

(5) Alpha rate limits are intended to address pilot induced oscillation (PIO), which can quickly lead to a divergent, and hazardous, phugoid motion. Old timers will remember this effect as the early swept wing fighters’ ‘sabre dance’.

Further Reading

1. Klein, G., Intuition at Work, Doubleday, New York, 2002.

2. ATSB Transport Safety Report, Aviation Occurrence Investigation AO-2008-070, Interim Factual Report, 2008.

3. Mosier, K.L., Expert decision making strategies, Proc. of the Sixth International Symposium on Aviation Psychology, Ohio State University, Columbus, Ohio, 1991.
