I’ve recently been reading John Downer on what he terms the Myth of Mechanical Objectivity. To summarise John’s argument: once the risk of an extreme event has been ‘formally’ assessed as being so low as to be acceptable, it becomes very hard for society and its institutions to justify preparing for it (Downer 2011).
Of course, this means that we also rely on the risk assessment being highly credible, because if the extreme event does occur no disaster preparation will have been undertaken (1). As an obvious current example, consider the unpreparedness of the Fukushima facility for the consequences of the tsunami flooding, the subsequent loss of onsite cooling and the exposure of fuel rods in the cooling ponds.
Now while in the case of Fukushima the regulatory agencies and industry might assure you that the errors made in the risk assessment were ‘local’ aberrations, John’s contention is that there are strong, independent and mutually supporting counter-arguments as to why, in principle, we cannot put our faith in such assessments. Such arguments include Charles Perrow’s Normal Accident Theory (Perrow 1999), John’s Epistemic Accident Theory (Downer 2010), Highly Optimised Tolerance theory, and my own argument (after Popper) that such assessments are inherently pseudo-scientific: there is no real way to falsify them and, when failures do occur, the advocates of what John calls mechanical objectivity rush to erect a series of ad hoc defences of their theory.
This got me thinking about probabilistic versus possibilistic thinking in the safety certification of aviation systems. Referring to FAA advisory circular AC 25.1309-1A, you find in clause 5.a the FAA stating that one should always assume, for the purposes of design, that any component can and will fail in flight.
5.(a) … (1) in any system or subsystem, the failure of any single element, component, or connection during any one flight should be assumed, regardless of its probability…
FAA Circular AC 25.1309-1A (1988)
Now clearly this is a possibilistic approach to safety. Components will fail, the circular says, and your design must account for it. Such an approach avoids the epistemic risk associated with assessments of the failure rates of components, for which there is little empirical data (2). Slightly more indirectly, AC 25.1309-1A is also steering us away from reliance on single components for safety, an approach that may be contrasted with relying on a single-component defence (a flood barrier) and the associated increased vulnerability to epistemic risk, as the flooding at the Blayais and Fukushima nuclear plants illustrates.
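To make the ‘no single point of failure’ reading of the clause concrete, here is a minimal sketch (my own illustration, not anything from the circular) of the design rule it implies: if any single component must be assumed to fail, then no safety function may rest on exactly one component. The function names and data layout below are invented for the example.

```python
# Sketch of the design rule implied by AC 25.1309-1A clause 5.a:
# any single component may fail, so flag every safety function that
# depends on only one component. Illustrative only.
def single_points_of_failure(functions: dict[str, set[str]]) -> list[str]:
    """Return the names of safety functions backed by only one component."""
    return [name for name, components in functions.items()
            if len(components) < 2]

# A hypothetical design: which components implement each safety function.
design = {
    "pitot_static": {"pitot_1", "pitot_2", "pitot_3"},
    "engine_fire_detect": {"loop_A", "loop_B"},
    "spoiler_control": {"sec_1"},  # single point of failure
}
```

Running `single_points_of_failure(design)` flags `spoiler_control`; note that no failure probability appears anywhere in the check.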
On the airborne software front it’s a similar story: DO-178B, the aviation industry’s software certification standard, establishes the required design assurance level strictly upon the severity of the possible failure condition. Again this is a possibilistic approach to safety, which neatly avoids the problematic aspects of the probabilistic safety arguments implicit in other software safety standards, such as IEC 61508.
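The severity-to-assurance mapping can be sketched in a few lines. The severity categories and level letters below are DO-178B’s; the lookup function itself is just my illustration of how purely possibilistic the assignment is:

```python
# DO-178B assigns a software Design Assurance Level (DAL) from the
# severity of the failure condition alone -- no probabilities involved.
# Categories and letters per the standard; the function is illustrative.
DAL_BY_SEVERITY = {
    "catastrophic": "A",  # failure may prevent continued safe flight
    "hazardous": "B",     # large reduction in safety margins
    "major": "C",         # significant reduction in safety margins
    "minor": "D",         # slight reduction in safety margins
    "no effect": "E",     # no impact on safety or crew workload
}

def required_dal(severity: str) -> str:
    """Return the design assurance level for a failure-condition severity."""
    return DAL_BY_SEVERITY[severity.lower()]
```

So `required_dal("catastrophic")` yields level A; the question asked is ‘how bad could it be?’, never ‘how likely is it?’.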
I find it interesting that possibilistic rather than probabilistic reasoning has been placed so clearly at the centre of the aviation safety process. Perhaps other standardisation bodies should take note?
Downer, J., Anatomy of a Disaster: Why Some Accidents Are Unavoidable, Centre for Analysis of Risk and Regulation (ESRC Research Centre), Discussion Paper 61, March 2010.
Downer, J., Why Do We Trust Nuclear Safety Assessments? Failures of Foresight and the Ideal of Mechanical Objectivity, Presentation at the 11th Bieleschweig Workshop, August 2011.
Perrow, C., Normal Accidents: Living with High Risk Technologies, Princeton University Press, Updated Ed., 1999.
1. There’s also a more subtle point here about the tendency of people’s perceptions of risk to slide from ‘extremely improbable’ to ‘impossible’, but that’s a topic for another post.
2. The classic problem with such analyses: for high-reliability developmental components per se there is very little failure data, wide confidence limits on estimates of failure rates, and extremely costly reliability trials (in terms of time or numbers of units under test, UUT).
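A back-of-envelope sketch (my own, not from the post) shows why those trials are so costly. Assuming a constant failure rate and zero failures observed in T hours of testing, the one-sided 95% upper confidence bound on the rate is −ln(0.05)/T, so demonstrating the very low rates typical of flight-critical hardware takes billions of failure-free test hours:

```python
import math

def upper_bound_failure_rate(test_hours: float, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on a constant failure rate,
    given zero failures observed in `test_hours` of testing
    (exponential failure-time model)."""
    return -math.log(1.0 - confidence) / test_hours

def hours_to_demonstrate(target_rate: float, confidence: float = 0.95) -> float:
    """Failure-free test hours needed to bound the rate below `target_rate`."""
    return -math.log(1.0 - confidence) / target_rate
```

For a target of 10⁻⁹ failures per hour, `hours_to_demonstrate(1e-9)` comes to roughly three billion hours, which is exactly the epistemic gap footnote 2 is pointing at: the probabilistic claim can never be empirically demonstrated within the life of the programme.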