Let Slip the Wolves of Error


What the Cry Wolf effect tells us about pilots’ problems with unreliable air data

In a recurring series of incidents, air crew have consistently demonstrated difficulty in first identifying, and then dealing with, unreliable air data and warnings. To me, figuring out why this difficulty occurs is essential to addressing what has become a significant air safety issue.

Sometimes it’s also useful to look outside the domain in which a problem is occurring to see whether others have met and dealt with it. Leaving the aerospace community for a moment and looking at the nuclear and process industries, we find that plant control system designers in both fields recognise a problem known as the ‘Cry Wolf’ effect (Breznitz 1984), which, as the name implies, reflects the way in which humans respond to false alarms (1).

Researchers such as Bliss et al. (1995) found that an alarm’s expected reliability matched the likelihood of a response to that alarm: if the expected reliability was 75%, so too was the response rate; if it was 25%, the response rate was correspondingly lower.
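This probability-matching behaviour can be sketched as a small simulation (illustrative only: the 75% and 25% figures come from the text above, while the operator model, function names and sample size are my own assumptions):

```python
import random

def simulate_responses(alarm_reliability, n_alarms=10_000, seed=42):
    """Model an operator who probability-matches: the chance of
    responding to any given alarm equals the alarm's perceived
    reliability, rather than responding to every alarm."""
    rng = random.Random(seed)
    responses = sum(rng.random() < alarm_reliability for _ in range(n_alarms))
    return responses / n_alarms

for reliability in (0.75, 0.25):
    rate = simulate_responses(reliability)
    print(f"perceived reliability {reliability:.0%} -> response rate ~{rate:.0%}")
```

The point of the sketch is that the operator’s response rate tracks the alarm’s perceived reliability, so a 25%-reliable alarm is simply ignored three times out of four.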

Much work has been done in these fields to address the problem of so-called ‘nuisance’ alarms, but one thing that has perhaps not been covered is the reverse question: what happens when the system is so reliable that operators get no nuisance alarms?

A logical conclusion is that even if an alarm seemed odd, or there was conflicting data, there would be a very strong impetus to ‘trust’ the alarm system. This effect may also extend to the plant data being presented, as the following aviation near-miss incident illustrates.

…As the first officer’s airspeed/Mach indicator kept increasing, the crew pulled the power back to silence the clacker, but the first officer’s airspeed continued to increase and the captain’s airspeed indicator continued to decrease. The airplane began to shake, which the crew assumed was high-speed Mach tuck. At FL340, the pitch was increased and stick shaker activated. The crew suddenly realized that they were entering a stall…

Erroneous Flight Instruments, Boeing Aero Magazine No. 8

Notice here the trust response to the overspeed alarm in the face of conflicting evidence. Note also how the crew ‘explained’, that is integrated, the stall buffet into their mental model (2) of the situation: the aircraft is over-speeding, therefore this must be Mach tuck. In the end the combination of climb, pitch and stick shaker provided sufficient conflicting information that the crew abandoned their existing mental model and substituted the more appropriate “aircraft is stalling” model.

To me, the Cry Wolf effect provides a strong basis in cognitive engineering theory for the observed aircrew difficulties in dealing with unreliable air data. As to what we can or should do about it, that is an entirely more difficult question to answer…


Bliss, J. P., Gilson, R. D., & Deaton, J. E., Human probability matching behavior in response to alarms of varying reliability. Ergonomics, 38, 2300–2312, 1995.

Breznitz, Shlomo, Cry Wolf: The Psychology of False Alarms. Lawrence Erlbaum Associates, Hillsdale NJ, 1984.

Getty, D. J., Swets, J. A., Pickett, R. M., & Gonthier, D., System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19–33, 1995.


1. The Cry Wolf effect may be seen as a specific form of normalcy bias.

2. Mental models are adaptive belief constructs used to describe, explain and predict situations.