Possibilistic design in aviation

11/12/2011 — 3 Comments

I’ve recently been reading John Downer on what he terms the Myth of Mechanical Objectivity. To summarise John’s argument: once the risk of an extreme event has been ‘formally’ assessed as being so low as to be acceptable, it becomes very hard for society and its institutions to justify preparing for it (Downer 2011).

Of course this means we also rely on the risk assessment being highly credible, because if the extreme event does occur no disaster preparation will have been undertaken (1). As an obvious current example, consider the unpreparedness of the Fukushima facility for the consequences of the tsunami flooding, the subsequent loss of onsite cooling and the exposure of fuel rods in the cooling ponds.

Now, while in the case of Fukushima the regulatory agencies and industry might assure you that the errors made in the risk assessment were ‘local’ aberrations, John’s contention is that there are strong, independent and mutually supporting counter-arguments as to why, in principle, we cannot put our faith in such assessments. Such arguments include Charles Perrow’s Normal Accident Theory (Perrow 1999), John’s Epistemic Accident Theory (Downer 2010), Highly Optimised Tolerance theory, and my own argument (after Popper) that such assessments are inherently pseudo-scientific: there is no real way to falsify them and, when failures do occur, the advocates of what John calls mechanical objectivity rush to erect a series of ad hoc defences of their theory.

This got me thinking about probabilistic versus possibilistic thinking in the safety certification of aviation systems. Referring to FAA Advisory Circular AC 25.1309-1A, you find in clause 5.(a) the FAA stating that one should always assume, for the purposes of design, that any component can and will fail in flight.

5.(a) … (1) in any system or subsystem, the failure of any single element, component, or connection during any one flight should be assumed, regardless of its probability…

FAA Circular AC 25.1309-1A (1988)

Now clearly this is a possibilistic approach to safety. Components will fail, the circular says, and your design must account for it. Such an approach avoids the epistemic risk associated with assessing the failure rates of components for which there is little empirical data (2). Slightly more indirectly, AC 25.1309-1A also steers us away from relying on single components for safety, an approach that may be contrasted with reliance on a single-component defence (a flood barrier) and the associated increased vulnerability to epistemic risk, as the flooding at the Blayais and Fukushima nuclear plants illustrates.
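To make that concrete, here’s a minimal sketch (in Python, with purely illustrative component names and data) of the kind of check the clause implies: assume every component can fail, then look for any single failure that defeats all the redundant channels of a safety function. This isn’t the FAA’s method, just an illustration of the possibilistic logic.

```python
from typing import Dict, List

def single_failure_vulnerabilities(
    safety_functions: Dict[str, List[List[str]]],
) -> Dict[str, List[str]]:
    """For each safety function (modelled as a list of redundant channels,
    each channel a list of components), return any components whose single
    failure defeats every channel, i.e. single points of failure."""
    vulnerable: Dict[str, List[str]] = {}
    for function, channels in safety_functions.items():
        components = {c for channel in channels for c in channel}
        spof = sorted(c for c in components
                      if all(c in channel for channel in channels))
        if spof:
            vulnerable[function] = spof
    return vulnerable

# Illustrative data only: two 'redundant' hydraulic channels that both
# depend on the same pump, so the pump is a single point of failure.
example = {"pitch control": [["pump_1", "actuator_L"],
                             ["pump_1", "actuator_R"]]}
print(single_failure_vulnerabilities(example))  # {'pitch control': ['pump_1']}
```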

On the airborne software front it’s a similar story: DO-178B, the aviation industry’s software certification standard, establishes the required design assurance level strictly on the basis of the severity of the possible failure condition. Again this is a possibilistic approach to safety, one which neatly avoids the problematic aspects of the probabilistic safety arguments implicit in other software safety standards, such as IEC 61508.
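The contrast can be made concrete in a few lines. The severity categories and levels below follow DO-178B, but the way I’ve rendered the mapping in code (names, data structure) is my own illustrative sketch; the point to note is that no probability of software failure appears anywhere.

```python
# Severity categories and design assurance levels per DO-178B; the
# dictionary and function names are illustrative choices, not the standard's.
DAL_BY_SEVERITY = {
    "catastrophic": "A",
    "hazardous": "B",          # 'hazardous / severe-major'
    "major": "C",
    "minor": "D",
    "no safety effect": "E",
}

def required_dal(severity: str) -> str:
    """Design assurance level for a given failure condition severity."""
    return DAL_BY_SEVERITY[severity.lower()]

print(required_dal("Catastrophic"))  # -> 'A'
```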

I find it interesting that possibilistic rather than probabilistic reasoning has been placed so clearly at the centre of the aviation safety process. Perhaps other standardisation bodies should take note?

References

Downer, J., Anatomy of a Disaster: Why Some Accidents Are Unavoidable, Centre for Analysis of Risk and Regulation (ESRC Research Centre), Discussion Paper 61, March 2010.

Downer, J., Why Do We Trust Nuclear Safety Assessments? Failures of Foresight and the Ideal of Mechanical Objectivity, presentation at the 11th Bieleschweig Workshop, August 2011.

Perrow, C., Normal Accidents: Living with High Risk Technologies, Princeton University Press, Updated Ed., 1999.

Notes

1. There’s also a more subtle point here about the tendency of people’s perceptions of risk to slide from ‘extremely improbable’ to ‘impossible’, but that’s a topic for another post.

2. This is the classic problem with such analyses: for high reliability developmental components there is very little failure data per se, which means wide confidence limits on estimates of failure rates and extremely costly reliability trials (in terms of time or the number of units under test).
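As a rough illustration of just how wide those limits are, here’s a minimal sketch assuming an exponential (constant failure rate) model and the classical chi-square bound for a time-terminated test; the figures are illustrative only.

```python
# Assumes an exponential (constant failure rate) model and a time-terminated
# test; the function name and the numbers below are illustrative only.
from scipy.stats import chi2

def upper_bound_failure_rate(failures: int, total_hours: float,
                             confidence: float = 0.95) -> float:
    """Classical one-sided upper confidence bound on a constant failure rate
    (chi-square method for a time-terminated test)."""
    return chi2.ppf(confidence, 2 * (failures + 1)) / (2.0 * total_hours)

# Zero failures in 100,000 unit-hours of testing still only bounds the rate
# at roughly 3e-5 per hour, far short of the 1e-9 per flight hour budgets
# quoted for catastrophic failure conditions.
print(upper_bound_failure_rate(failures=0, total_hours=1e5))
```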

3 responses to Possibilistic design in aviation

  1. 

    I agree that a “possibilistic” reasoning is a great starting point for an engineer. It seems to lead directly to asking what it is we know and what we don’t know about engineering a system.

On the other hand though, engineering only exists as a (small?) part of society. Society appears quite comfortable with a probabilistic outlook, accepting the risk of some hazards because the probability is considered sufficiently low (e.g. being killed in a motor vehicle accident) compared to the associated benefits.

If engineers are to serve their society, don’t they need to embrace probabilistic reasoning also?

Even FAR 25.1309 appears to use probabilistic reasoning when it requires that “[t]he occurrence of any failure condition which would prevent the continued safe flight and landing of the airplane [to be] extremely improbable”.

  2. 

Glad to see that you have started posting again. I’ve been ruminating on writing a paper entitled “The Illusion of Probabilistic Thinking”. I don’t know if you have read any of Bruce Schneier’s essays on the difference between probabilistic and possibilistic thinking. Schneier is one of the leaders of the hipsters who think that possibilistic thinking has driven science into the dark ages. I think he’s full of shit.

The essay at http://www.schneier.com/blog/archives/2010/05/worst-case_thin.html is an example of his work.

    • 
      Matthew Squair 07/02/2012 at 11:39 pm

      Hmm, I read Schneier’s article and didn’t find it particularly impressive as a rebuttal of ‘possibilistic’ thinking.

I find it interesting that in the ‘risk game’ most people focus on epistemic uncertainty in likelihood and ignore that associated with severity. Challenging people to think of a range of possible outcomes is in fact a time-honoured technique for improving qualitative risk assessments.

In theory, the use of probability and risk can be useful, but you have to know what you’re doing, understand the limits of your knowledge and be able to communicate both effectively. That’s the theory; in practice? Well, see Downer’s articles for a discussion of what happens in practice.

As a practical example of the, presumably unintentional, misuse of probability, see the ATSB’s last report on the QF72 upset for a basic misunderstanding of probability estimation and, consequently, an overly sanguine assessment of the risk. Ditto for the Victorian bushfires, whose magnitude, given the perfect extreme conditions, was simply not anticipated; in my view a classic example of the absurdity bias at work.

      P.S. And I look forward to your paper 🙂
