In an earlier post I commented that in the QF72 incident the use of a geometric mean (1) instead of the arithmetic mean when calculating the aircraft's angle of attack would have reduced the severity of the subsequent pitch over. This leads into the more general subject of what to do when the real world departs from our assumptions about the statistical ‘well-formedness’ of data. The problem, in the case of measuring angle of attack on commercial aircraft, is that the left and right alpha sensors are not truly independent measures of the same parameter (2). With sideslip we cannot directly obtain a true angle of attack (AoA) from any single sensor, so we need to take the average (mean) of the measured AoA on either side of the fuselage (Gracey 1958) to determine the true AoA. Because of this variance between left and right we cannot use a median voting approach, as we can expect the two sensors on the right side to differ from the one sensor on the left. As a result we end up having to use the mean of two sensor values (one from each side) as an estimate of the resultant central tendency.
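The left/right averaging step can be sketched as follows (the sensor readings are hypothetical; the mean-of-both-sides method is per Gracey 1958):

```python
def estimated_aoa(aoa_left: float, aoa_right: float) -> float:
    """Estimate the true AoA as the arithmetic mean of one left and one
    right fuselage sensor, cancelling the sideslip-induced difference."""
    return (aoa_left + aoa_right) / 2.0

# With sideslip one side reads high and the other low (illustrative
# values); their mean recovers the true AoA.
print(estimated_aoa(3.0, 2.0))  # -> 2.5
```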

Unfortunately, in the case of the mean statistic a single erroneous data value can adversely affect the estimate of central tendency; to put it another way, the statistic is not robust in the presence of outliers. This is where *robust* statistical techniques (3) come in: a branch of statistics developed to deal with data that does not conform well to the assumptions underlying classical statistics. If we had N > 2 values we could substitute the median as a more robust statistic (4). Unfortunately, in a two-sensor-values-per-computational-cycle scenario we can't apply the median and are stuck with using an arithmetic mean. If we are stuck with the mean, one robust statistical technique is trimming; for example, a 5% trimmed mean is obtained by taking the mean of the values between the 2.5th and 97.5th percentiles. Again, with only two values per cycle to work with this is not really an option, and neither are techniques such as weighted means, distance weighted estimators or Winsorising. Another technique is to apply some rule that screens for outliers (based on a predefined set of failure modes) (5), but such rules have the disadvantage of only covering those failure modes that we can imagine (our fault hypothesis) and may therefore be incomplete.
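To make the contrast concrete, here is a minimal sketch (with made-up sensor readings, and N > 2 values, which we don't have in practice) of how the mean, the median and a trimmed mean each respond to a single wild outlier. A 40% trim is used here because with only five samples a 5% trim would discard nothing:

```python
import statistics

def trimmed_mean(values, proportion):
    """Mean after discarding the lowest and highest proportion/2 of values."""
    data = sorted(values)
    k = int(len(data) * proportion / 2)
    return statistics.fmean(data[k:len(data) - k] if k else data)

# Four plausible readings plus one hard-over outlier (illustrative values).
readings = [2.4, 2.5, 2.5, 2.6, 50.6]
print(round(statistics.fmean(readings), 2))   # mean:    12.12 (dragged up)
print(statistics.median(readings))            # median:   2.5  (unaffected)
print(round(trimmed_mean(readings, 0.4), 2))  # trimmed:  2.53 (outlier dropped)
```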

So if we are dubious about the completeness of such screening, and wish to minimise our exposure to unidentified failure modes, we could consider the use of the geometric rather than the arithmetic mean, which is a more robust statistic in the presence of large outlier values. As Fig. 1 illustrates, as the error in AoA increases the difference between the geometric and arithmetic means proportionally widens. And if all we're worried about are high-value outliers, as occurred for QF72, then we could stop right there…
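The effect can be sketched with a two-value geometric mean (the 50° failure value is illustrative, in the spirit of the QF72 AoA spikes, not a figure from the report):

```python
import math

def arithmetic_mean(a: float, b: float) -> float:
    return (a + b) / 2.0

def geometric_mean(a: float, b: float) -> float:
    return math.sqrt(a * b)

# Correct side reads 2.5 deg; the failed sensor spikes high.
good, bad = 2.5, 50.0
print(arithmetic_mean(good, bad))           # -> 26.25, dragged far from 2.5
print(round(geometric_mean(good, bad), 2))  # -> 11.18, much less affected
```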

However, if we consider arbitrarily *small* values we find that the geometric mean performs worse (it is pulled down arbitrarily). As the magnitude of the correct AoA 2 increases, the geometric mean's error also proportionally worsens for small-value AoA 1 errors. So if our sensor failure can produce arbitrarily small or close-to-zero values then the geometric mean cannot be considered a robust estimator, and this worsens as the true AoA increases. Figure 1 illustrates this as AoA 1 drops below the correct AoA value of 2.5. But that's not quite the end of the story, because when AoA 1 and AoA 2 are close to each other and small in magnitude the difference between the geometric and arithmetic means is still quite small. For example, if the error is AoA 1 = 0.5 and the true AoA 2 = 2.5 then the geometric mean is 1.1 and the arithmetic mean 1.5. So despite its poorer estimation performance at small AoA values (6), the statistic may still be of utility if we are willing to trade precision against robustness.
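The worked example above, together with a more extreme near-zero failure value (the 0.1 figure is an illustrative assumption), can be checked directly:

```python
import math

true_aoa = 2.5  # the correct AoA 2 value from the example
for erroneous in (0.5, 0.1):
    g = math.sqrt(erroneous * true_aoa)   # geometric mean
    a = (erroneous + true_aoa) / 2.0      # arithmetic mean
    print(erroneous, round(g, 2), round(a, 2))
# 0.5 -> geometric 1.12, arithmetic 1.5 (the example in the text)
# 0.1 -> geometric 0.5,  arithmetic 1.3: as the erroneous value heads to
#        zero the geometric mean follows it, while the arithmetic mean
#        never falls below half the true AoA (1.25)
```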

The conclusion from all this is that the mean (whether geometric or arithmetic) is inherently non-robust as an estimator of central tendency, and using it in any safety-critical application requires careful consideration of both the range of inputs and the potential failure modes.

**Notes**

1. Of course, if your data is not skewed (1a), or is symmetrically distributed (1b), then a geometric mean is not a good choice.

1a. As a rule of thumb, the largest value should be at least 3× the smallest value for the geometric mean to be applicable.

1b. Davies' (1929) coefficient of skewness test may be used to establish whether the data is symmetrically distributed or not.

2. As a result there is the possibility that the AoA 2 and AoA 3 values could differ significantly from the AoA 1 value due to sideslip.

3. That is, a statistic not unduly affected by outlier values or other small departures from model assumptions.

4. One measure of robustness is the breakdown point, that is the proportion of incorrect (i.e. arbitrarily large) observations an estimator can handle before giving an arbitrarily large (incorrect) result. The higher the breakdown point the more robust the statistic. The median (a robust statistic) has a breakdown point of 50%. In comparison the mean has a breakdown point of 0%, i.e. a single outlier can throw it off.

5. This is what the Airbus designers elected to do, if somewhat indirectly. If either the AoA 1 or AoA 2 sensor value deviated from the median of all three sensors by more than a threshold value, the most recently valid AoA input to the Flight Control Primary Computer (FCPC) was used and the deviating current AoA 1 and AoA 2 values were discarded. The FCPC's AoA inputs were also rate limited to ensure that rapid changes did not have a significant effect on its computational outputs.
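That screening logic can be sketched as follows. The threshold and rate-limit constants here are hypothetical placeholders, not the certified values, and the structure is a simplification of the actual FCPC behaviour described in the ATSB report:

```python
import statistics

AOA_THRESHOLD = 1.0  # deg; hypothetical median-deviation threshold
RATE_LIMIT = 0.5     # deg per cycle; hypothetical rate limit

def screened_aoa(aoa1: float, aoa2: float, aoa3: float,
                 last_valid: float) -> float:
    """Sketch of median-deviation screening plus rate limiting."""
    med = statistics.median([aoa1, aoa2, aoa3])
    if abs(aoa1 - med) > AOA_THRESHOLD or abs(aoa2 - med) > AOA_THRESHOLD:
        candidate = last_valid           # discard AoA 1 & 2, reuse last valid
    else:
        candidate = (aoa1 + aoa2) / 2.0  # normal case: mean of AoA 1 and 2
    # Rate-limit the change relative to the last value used.
    delta = max(-RATE_LIMIT, min(RATE_LIMIT, candidate - last_valid))
    return last_valid + delta

print(screened_aoa(2.4, 2.6, 2.5, last_valid=2.5))   # normal case -> 2.5
print(screened_aoa(50.6, 2.6, 2.5, last_valid=2.5))  # spike screened out -> 2.5
```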

6. For an A330, the typical operational range of AoA is 1 to 10° during all phases of flight. During normal cruise flight, AoA is typically about 2 to 3° (ATSB 2011).

**References**

ATSB 2011, In-flight upset, 154 km west of Learmonth, WA, 7 October 2008, VH-QPA, Airbus A330-303, Aviation Occurrence Investigation AO-2008-070, Final, Australian Transport Safety Bureau.

Butler, RW 2008, A primer on architectural level fault tolerance, Technical Memorandum TM-2008-215108, NASA Langley Research Centre, Hampton, Virginia.

Davies, G. 1929, Journal of the American Statistical Association, pp. 349–366.

Gracey, W., Summary of Methods for Measuring Angle of Attack, NACA Technical Note 4351, Washington 1958.