Highly optimised air data

17/01/2010 — Leave a comment

Pitot sensor (Source: BEA)

The theory of Highly Optimised Tolerance (HOT) predicts that as technological systems evolve to become more robust to common perturbations, they remain vulnerable to rare events (Carlson & Doyle 2002). This theory may give us an insight into the performance of modern integrated air data systems in the face of in-flight icing incidents.

What are air data systems, and what do they do?

Modern air data systems have evolved from a federated set of analog air speed and altitude gauges to an integrated suite of electronic sensors, software, computers and displays. These systems calculate Mach, angle of attack, free stream pressure, outside air temperature and air density from measured air data. As part of these calculations they also compensate for pressure and air temperature measurement errors (1) (2). This compensation is intended to make the system more robust in the face of measurement errors introduced by differences between measured air data values and the actual free stream parameters (Gracey 1980).
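As a concrete illustration of the first step in such a computation, the subsonic pitot-static relation recovers Mach number from impact pressure (qc) and static pressure (Ps). The sketch below uses the standard compressible-flow formula; the numeric values are illustrative ISA figures, not taken from any particular air data computer.

```python
import math

def mach_from_pressures(qc, ps):
    """Subsonic Mach number from impact pressure qc and static pressure
    ps (both in Pa), via the compressible pitot-static relation:
    M = sqrt(5 * ((qc/ps + 1)**(2/7) - 1))."""
    return math.sqrt(5.0 * ((qc / ps + 1.0) ** (2.0 / 7.0) - 1.0))

def impact_pressure(mach, ps):
    """Inverse relation: qc = ps * ((1 + 0.2*M^2)**3.5 - 1)."""
    return ps * ((1.0 + 0.2 * mach ** 2) ** 3.5 - 1.0)

# Illustrative round-trip check at roughly FL350 in a standard atmosphere
ps = 23842.0                    # ISA static pressure at ~35,000 ft, Pa
qc = impact_pressure(0.8, ps)   # impact pressure sensed at Mach 0.8
print(mach_from_pressures(qc, ps))  # recovers Mach 0.8
```

Downstream parameters (static air temperature, true air speed, density) are then derived from this Mach estimate, which is why an error in qc propagates so widely.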

Air data computational path

HOT systems and air data

Based on this current ‘state of the art’ we can infer that, by design, modern air data systems belong to the class of systems that exhibit Highly Optimised Tolerance, or HOT. Such systems are designed to provide more robust and reliable functionality in the face of common perturbations than their simpler analog predecessors. However, HOT systems remain vulnerable to ‘rare’ events that can then lead to major losses. This vulnerability reflects an inherent trade-off: small losses for common events, in this case robustness in the face of small pressure and temperature measurement errors, are favoured at the expense of large losses under rare perturbations, such as icing-induced large scale air speed errors.

Mach compensation errors

In the case of air data systems, the presence of a Mach compensation feedback loop in the processing path inherently increases coupling and allows dynamic pressure errors to propagate into temperature and static pressure calculations.

For example, the French BEA calculated that for an A330-200 flying at FL 350 at Mach 0.8 in a standard atmosphere, a decrease in Mach from 0.8 to 0.3 would cause an increase in computed temperature of 23 degrees C and a decrease in computed altitude of 300 ft (BEA 2009).
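The BEA's temperature figure can be reproduced from the standard total-air-temperature relation, SAT = TAT / (1 + 0.2 M²). The sketch below is my reconstruction of that arithmetic under ISA assumptions (full temperature recovery, ISA static temperature at FL350), not the BEA's actual computation.

```python
def static_air_temp(tat, mach):
    """Static air temperature (K) recovered from measured total air
    temperature, assuming full recovery: SAT = TAT / (1 + 0.2*M^2)."""
    return tat / (1.0 + 0.2 * mach ** 2)

sat_isa = 218.81                          # ISA static temperature at FL350, K
tat = sat_isa * (1.0 + 0.2 * 0.8 ** 2)    # TAT actually sensed at Mach 0.8

# If the computed Mach erroneously drops from 0.8 to 0.3 (e.g. iced
# pitot probes), the same measured TAT is decompensated with the wrong
# Mach, inflating the computed static temperature:
sat_wrong = static_air_temp(tat, 0.3)
print(round(sat_wrong - sat_isa, 1))      # roughly +23.6 K
```

The ~23 degree jump falls out of the compensation maths alone, with no change in the actual outside air temperature, which is exactly the coupling the post describes.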

Near simultaneous changes in air speed (mach), altitude and temperature experienced during recent icing incidents on modern passenger aircraft (3) support the premise that modern air data systems exhibit such behaviour.


The cascading failure vulnerability of modern air data systems is a result of an evolutionary design process whose objective was to enhance the robustness of the system against commonly occurring events. Unfortunately this carried with it the unintended consequence of error cascades across the air data computation when subject to rare events, such as large scale air speed errors. As such, modern air data systems exhibit a type of optimised tolerance consistent with the theories of Carlson and Doyle.


The desire to interconnect processes in order to achieve greater performance or robustness (albeit potentially at the expense of more global robustness) is a key evolutionary force in the development of modern complex technological systems.

However, accepting this trade-off as a fait accompli is not inevitable. The great advantage of technological systems is that we can actually modify their architecture in the field, so if we recognise that a system may be vulnerable to rare events we may in turn be able to detect such an event and respond.

In the case of the air data computations, one solution to the identified vulnerability to a large scale air speed error would be to switch the compensation processing off, thereby preserving the other air data, albeit at lower fidelity. In effect we would be devolving the air data system to a simpler, and in the face of large scale perturbations safer, federated system architecture.

This post is part of the Airbus aircraft family and system safety thread.


1. Pressure and temperature (systematic) errors are the differences between locally measured values of pressure and temperature and the free stream pressure and temperature.

Such errors are functions of angle of attack, airspeed (Mach) and aircraft configuration. Refer to Gracey (1980) for a full treatment.

2. As suggested by Kayton & Fried (1997): first, correct the measured air data for local errors as a function of Mach number (M) and angle of attack (alpha); then iterate the computation of air data for a number of cycles until the parameters converge; finally, use the updated Mach for compensation in the next cycle.

If the parameters don’t converge, or the Mach compensation is too great, then raw values of qc and Ps are used to generate the Mach compensation value.
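This iterate-then-fall-back scheme can be sketched roughly as below. The position-error model and its coefficients are hypothetical placeholders (a real static-source error correction is aircraft-type specific); only the control flow, iterate to convergence then revert to raw qc and Ps otherwise, follows the Kayton & Fried description.

```python
import math

def mach_from_pressures(qc, ps):
    """Subsonic pitot-static relation (repeated here for self-containment)."""
    return math.sqrt(5.0 * ((qc / ps + 1.0) ** (2.0 / 7.0) - 1.0))

def compensated_mach(qc_raw, ps_raw, alpha, position_error,
                     max_iter=20, tol=1e-6):
    """Iteratively apply a static-source position-error correction that
    depends on the current Mach estimate and angle of attack.  If the
    loop fails to converge, fall back to Mach from raw qc and Ps."""
    mach = mach_from_pressures(qc_raw, ps_raw)
    for _ in range(max_iter):
        ps_corr = ps_raw * (1.0 - position_error(mach, alpha))
        mach_new = mach_from_pressures(qc_raw, ps_corr)
        if abs(mach_new - mach) < tol:
            return mach_new                       # converged: compensated Mach
        mach = mach_new
    return mach_from_pressures(qc_raw, ps_raw)    # fallback: raw values

# Hypothetical, small position-error model (fraction of static pressure)
err = lambda m, a: 0.004 * m + 0.001 * a
m = compensated_mach(qc_raw=12501.0, ps_raw=23842.0, alpha=2.0,
                     position_error=err)
```

Because the correction is small, the fixed-point iteration converges in a few cycles; the fallback branch is what devolves the computation to the raw, uncompensated values when it does not.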

3. See for example the NTSB Airbus N805NW Flight 8 and the Air Caraibes A330 F-OFDF icing incidents.


BEA, Interim report no. 2, on the accident on 1st June 2009, to the Airbus A330-203, registered F-GZCP, operated by Air France flight AF 447 Rio de Janeiro – Paris, Report Number f-cp090601ae2, November 2009.

Carlson, J.M., Doyle, J., Complexity and Robustness, Proc. of the National Academy of Sciences, 19 February, 2002, vol. 99 suppl. 1 pg 2545.

Gracey, W., Measurement of Aircraft Speed and Altitude, NASA Reference Publication 1046, NASA STI Office, May 1980.

Kayton, M., Fried, W., Avionics Navigation Systems, 1997.
