Averages and outliers

21/09/2014

Right hand AoA probes (Image source: ATSB)

When good voting algorithms go bad

Thinking about the QF72 incident, it struck me that average-value-based voting methods rest on the calculation of a population statistic. Now population statistics work well when the population is normally distributed, or otherwise clustered around some central value. But if the distribution has heavy tails, we can expect extreme values to turn up fairly regularly, and the ‘average’ value therefore means much less. In fact for some distributions we can’t put a cap on what an ‘average’ could be: the Cauchy distribution, for example, has no finite mean at all, and the very idea of an average is then meaningless.
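Here’s a minimal sketch of that last point, comparing the running mean of normal samples against Cauchy samples (the checkpoints and seed are just for illustration):

```python
# Running mean of light-tailed (normal) vs heavy-tailed (Cauchy) samples.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_samples = rng.normal(loc=0.0, scale=1.0, size=n)
cauchy_samples = rng.standard_cauchy(size=n)

# Check the running mean at a few sample sizes.
for k in (100, 1_000, 10_000, 100_000):
    print(f"n={k:>7}: normal mean={normal_samples[:k].mean():+.3f}  "
          f"cauchy mean={cauchy_samples[:k].mean():+.3f}")

# The normal running mean homes in on 0; the Cauchy running mean keeps
# jumping around, because one extreme sample can dominate the whole sum.
```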

Putting it another way, if you use an average in a voting algorithm you are in effect asserting that you’re confident the data will cluster around some central tendency. If it doesn’t, then (a) be prepared for surprises, and (b) don’t expect the number you’re generating to be much use as a descriptor of the population. So in a world where things don’t always congregate neatly, if you use an averaging mechanism, such as the arithmetic, geometric or log-average mean, the critical question you need to answer is how well you can screen out the outliers that will inevitably arrive to upset the apple cart. The problem on QF72 was that that screening turned out to be ineffective.
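A toy sketch of the fragility (the sensor values here are hypothetical, not QF72’s): three redundant channels vote on one output, and a single spike that slips past the screening drags a mean-based vote a long way, while a median-based vote shrugs it off.

```python
# Mean-based vs median-based voting over three redundant channels.
import statistics

def mean_voter(channels):
    return statistics.mean(channels)

def median_voter(channels):
    # The median is robust: one wild channel out of three cannot move it.
    return statistics.median(channels)

good = [2.1, 2.0]   # two healthy readings (e.g. AoA in degrees)
spike = 50.8        # a transient data spike on the third channel

print("mean vote:  ", mean_voter(good + [spike]))    # dragged to ~18.3
print("median vote:", median_voter(good + [spike]))  # stays at 2.1
```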

There are a few more problems to consider when dealing with heavy-tailed inputs. The first is that the past proves not to be such a good predictor of future events. This is captured by what’s called the mean excess function: the expected amount by which a value exceeds a threshold, given that it exceeds it. If the mean excess is decreasing as we raise the threshold, our data is probably light-tailed, like the normal; but if it’s increasing, then the further out we go the bigger the outliers will be, i.e. worse is going to be much worse, and much worse is going to be very bad indeed.
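A sketch of estimating the mean excess e(u) = E[X − u | X > u] from samples, contrasting a normal with a (heavy-tailed) Pareto; the shape parameter and thresholds are illustrative choices:

```python
# Empirical mean excess: light tail (normal) vs heavy tail (Pareto).
import numpy as np

rng = np.random.default_rng(1)

def mean_excess(samples, u):
    # Average overshoot above threshold u, among samples that exceed u.
    exceed = samples[samples > u]
    return (exceed - u).mean() if exceed.size else float("nan")

normal = rng.normal(size=200_000)
pareto = rng.pareto(a=1.5, size=200_000)  # heavy tail, shape 1.5

for u in (1.0, 2.0, 3.0):
    print(f"u={u}: normal e(u)={mean_excess(normal, u):.2f}  "
          f"pareto e(u)={mean_excess(pareto, u):.2f}")

# Normal: e(u) shrinks as u grows. Pareto: e(u) grows roughly linearly,
# i.e. the further out you look, the worse the overshoot gets.
```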

Then there’s the problem of tail dependence, which refers to the tendency of dependence between two random variables to concentrate in the extreme values. This is very much of interest to us if we have a number of nominally independent sensor channels all voting on a single ‘good’ output in some fashion. If the inputs exhibit tail dependence then an extreme value on one input is more likely to be accompanied by an extreme value on another. Unreliable airspeed incidents in aircraft are a good example of how extreme inputs can ‘associate’. In the case of QF72, the ‘bursty’ noise distribution on the affected input channel meant that if one data spike arrived it was likely to be closely followed in time by another, thereby defeating the latching filter mechanism.
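One way to see tail dependence in data is to estimate P(Y extreme | X extreme) at a high quantile. The sketch below does this for two independent channels versus two channels sharing a common ‘burst’ driver; the shared-noise construction is purely illustrative, not a model of the QF72 failure mode.

```python
# Empirical upper tail dependence between two input channels.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Two independent heavy-tailed channels.
a_ind, b_ind = rng.standard_t(df=3, size=(2, n))

# Two channels driven by a common heavy-tailed 'burst' source.
burst = rng.standard_t(df=3, size=n)
a_dep = 0.5 * rng.standard_t(df=3, size=n) + burst
b_dep = 0.5 * rng.standard_t(df=3, size=n) + burst

def upper_tail_dep(x, y, q=0.999):
    # P(y exceeds its q-quantile | x exceeds its q-quantile).
    ux, uy = np.quantile(x, q), np.quantile(y, q)
    return np.mean(y[x > ux] > uy)

print("independent: ", upper_tail_dep(a_ind, b_ind))  # near zero
print("burst-driven:", upper_tail_dep(a_dep, b_dep))  # substantial

# With a shared burst source, one channel's extreme value is far more
# likely to coincide with an extreme on the other.
```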

It’s interesting, the complications that arise when we start to consider that the world is not necessarily shaped like a normal distribution, and we step away from those comfortable assumptions of normality…

2 responses to Averages and outliers

  1. 

    Mandelbrot already said that, but politicians, managers, pilots, prescribers and regulators are ignorant of the maths; they want to use science without worrying about its limits.

Trackbacks and Pingbacks:

  1. Outliers rough draft… « Critical Uncertainties - September 22, 2014

    […] you were wondering why the Outliers post was, ehem, a little rough I accidentally posted an initial draft rather than the final version. […]
