**The QF 72 accident illustrates the significant effects that ‘small field’ decisions can have on overall system safety**

In an earlier post on the QF 72 accident I discussed the implications the Airbus voting algorithm had for safety in the presence of ‘bursty’, non-independent noise in a sensor channel. In that post I noted that when comparing redundant data the median statistic is less sensitive to outlier values than the arithmetic mean, which is why median voting algorithms are preferred for critical control applications.
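As a minimal sketch of the idea (the values and function names are mine for illustration, not Airbus’s actual implementation), median-of-three voting masks a single spiking channel where a simple average does not:

```python
import statistics

def vote_median(readings):
    """Median voting: with three redundant sensors, a single
    faulty channel cannot drag the voted value away from the
    two healthy ones."""
    return statistics.median(readings)

def vote_mean(readings):
    """Arithmetic mean: every channel, faulty or not,
    contributes equally to the voted value."""
    return sum(readings) / len(readings)

# Two healthy AoA channels near 1.2 degrees, one spiking high.
readings = [1.2, 1.3, 50.6]

print(vote_median(readings))  # 1.3 -- the spike is masked
print(vote_mean(readings))    # 17.7 -- the spike pulls the average up
```

With three channels the median simply discards the extreme value; the mean has no such protection, which is the crux of the two-sensor problem discussed next.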

Of course you may still be left with the problem of trying to derive a valid estimate from only two values (AoA 1 and 2 for the A330), in which case the median algorithm (which needs a minimum of three values) is not much help. In these circumstances we might be tempted to fall back on the arithmetic mean (summing the two and dividing by two). The problem is that this statistic will be unduly influenced by high-end values; in other words, we’re back where we started. Well, not necessarily…

If we instead used the geometric mean of the two values, where we multiply them together and take the square root, we can generate a statistic that is much less sensitive to outliers.

**Example.** Using the normal AoA (1.2 degrees) and spike AoA (50.6 degrees) observed on QF 72 we would derive an arithmetic and geometric mean of 25.9 and 7.8 degrees respectively.
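The comparison can be sketched directly with the QF 72 figures above (the function names are mine, for illustration only):

```python
import math

def arithmetic_mean(a, b):
    """Midpoint of the two readings; a large spike in either
    channel shifts the result by half the spike."""
    return (a + b) / 2

def geometric_mean(a, b):
    """Square root of the product; a multiplicative average
    that is far less sensitive to a single high outlier."""
    return math.sqrt(a * b)

normal_aoa, spike_aoa = 1.2, 50.6  # degrees, as observed on QF 72

print(round(arithmetic_mean(normal_aoa, spike_aoa), 1))  # 25.9
print(round(geometric_mean(normal_aoa, spike_aoa), 1))   # 7.8
```

The geometric mean stays within a few degrees of the healthy channel because multiplying by the small value suppresses the spike, whereas the arithmetic mean lands squarely between the two readings.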

In the case of the Airbus logic, use of the geometric mean across the experienced set of AoA values would have resulted in a far smaller resultant AoA, minimising the severity of the resulting pitch-over. To that extent we can say that the geometric mean is a more robust statistic than the arithmetic mean in the face of ‘out of specification’ values.

What’s interesting to me is not necessarily that one statistic is better or worse but rather how a small ‘local’ design decision, in this case as to the use of a specific statistic, can have significant system level consequences.

### Comments

Surely you are pre-judging the answer and choosing an averaging mechanism that suits your preferred answer. What if the AoA was meant to be 50.6 and the intermittent fault is a ‘low’ outlier (1.2). Now the GM does not look so good!

What, me cherry picking data? Never! 🙂

Fundamentally a geometric mean, unlike the arithmetic, dampens the effect of very high or low values. It’s not a silver bullet but does have advantages.

In your example the GM = 7.8 while the AM = 25.9, if the ‘true value’ is 50.6 then no that’s not better.

However, for a passenger aircraft, if you have an alpha of 50.6 degrees you have major problems; see what happened to QF 72 when the PRIMs processed the erroneous high value. Keep in mind that with all these problems we usually do know something about the ‘physics’ of the situation.

And thanks for your comments.

Agreed, I think the reason they chose the arithmetic mean is exactly that reason. If you don’t know which one of two values is correct, choosing the exact midpoint is the best you can do.

I’ve been thinking about this for a while, and statistics (arithmetic means or otherwise) only make sense when your sample points all belong to the same population. If one point is a legitimate outlier then by definition it’s not part of the population and should be excluded from population statistics. Using a mean is only reasonable once you’ve eliminated outlying dodgy data, and then only for the purpose of nulling out random noise in the remaining data. But to do that you need to establish before the fact what you expect the population to look like, which is a theory-driven decision and requires you to state your expectations up front. That, I think, is the subtle error: conflating noise-handling mechanisms with outlier rejection. To put it another way, experiments are never theory free.