The Rumsfeld ignorance management framework

26/02/2013

The Pentagon is functioning (Image source: USN)

…And there are still unknown unknowns

A while ago I posted a short piece on the difference between aleatory, epistemic and ontological uncertainty, using Don Rumsfeld’s famous news conference comments as a good introduction to the subject.

As it turns out Patrick Lambe over on Green Chameleon has taken Rumsfeld’s remarks and developed them, somewhat tongue in cheek, into the Rumsfeld Ignorance Management framework (1). Patrick’s model is useful because it forces us to consider explicitly how much exposure we may have to uncertainty-of-knowledge risks.

Translating the four quadrants of Patrick’s model into a safety context gives us the following (a rough sketch of the classification follows the list):

  • Q1 Perception risks. We don’t know we know it. We have the information, but for whatever reason it fails to reach the people who actually need it, or it is not perceived as salient by them and is discounted.
  • Q2 Aleatory risks. We know it and we know we know it. Where randomness exists it’s understood and fully characterised. The question is whether the loss rate associated with the risk is acceptable.
  • Q3 Epistemic risks. We know we don’t know it. Uncertainty about a known parameter (a known unknown), for example a failure rate or potential severity. We may understand that there is a risk but still be uncertain about how much there is.
  • Q4 Ontological risks. Unidentified holes and flaws in our understanding, the unknown unknowns. We don’t know how many risks there are in our portfolio and may even be uncertain about the types of risk that we may be running.
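By way of illustration only (this sketch is mine, not part of Patrick’s framework, and the Quadrant and classify names are invented for the example), the two judgements behind the quadrants, does the organisation hold the relevant information, and does it know whether it holds it, can be expressed as a simple classification in Python:

    from enum import Enum

    class Quadrant(Enum):
        PERCEPTION = "Q1: we don't know we know it"
        ALEATORY = "Q2: we know it and we know we know it"
        EPISTEMIC = "Q3: we know we don't know it"
        ONTOLOGICAL = "Q4: we don't know we don't know it"

    def classify(information_held: bool, aware_of_state: bool) -> Quadrant:
        """Place a risk in a quadrant from two judgements: does the organisation
        hold the relevant information, and is it aware of whether it does?"""
        if information_held and aware_of_state:
            return Quadrant.ALEATORY       # known known
        if information_held and not aware_of_state:
            return Quadrant.PERCEPTION     # unknown known
        if not information_held and aware_of_state:
            return Quadrant.EPISTEMIC      # known unknown
        return Quadrant.ONTOLOGICAL        # unknown unknown

The point of the sketch is simply that the fourth quadrant is the one where neither judgement can be made reliably, which is why it calls for active probing rather than analysis of what is already on the risk register.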

Patrick suggests that in the fog of this fourth quadrant, being sensitive to a feeling of unease, cultivating curiosity, paying attention to detail and staying alert to small changes are invaluable techniques for alerting us to the presence of an unknown unknown. He also suggests the use of frameworks (2) to identify whether something is missing from our model of the situation. The advantage of these approaches is that by actively probing the fourth quadrant of unknown unknowns we are less likely to fall prey to cognitive effects such as confirmation bias and framing than if we focused only on what we are certain of.

So when we operate a complex, high-consequence technological system we should not blithely assume that the system is ‘safe as designed’, but should instead always be alert for those small signals that indicate there may be a hazardous hole in our theory (3).

A final conclusion that one can draw from Patrick’s model is that safety cases, which are based upon an advocacy position of ‘demonstrating safety through evidence’, are inherently biased towards ignoring fourth quadrant risks, and this makes safety case arguments epistemologically weak. Addressing the potential for ontological risk requires, at a minimum, an adversarial approach that actively tries to disprove the assertion of safety, which brings us back to the philosopher Karl Popper’s view that science advances by attempting to disprove theories rather than prove them (4).

Notes

1. As a comparison to Patrick’s four quadrant model see Sohail Inayatullah’s six box model, contained in this paper.

2. One framework example would be the HAZOPS technique of combining guide-words with system and equipment properties to identify possible hazards (a rough sketch of the combination step is given below). Another technique would be TRIZ.
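As a sketch of the combinatorial step only (the guide-word and parameter lists below are illustrative samples, not a complete HAZOPS set), crossing guide-words with system parameters to generate candidate deviations might look like this in Python:

    from itertools import product

    # Illustrative subsets only; a real study uses the full guide-word set and
    # the parameters of the actual system under review.
    GUIDE_WORDS = ["NO", "MORE", "LESS", "REVERSE", "AS WELL AS", "OTHER THAN"]
    PARAMETERS = ["flow", "pressure", "temperature"]

    def candidate_deviations(guide_words, parameters):
        """Cross every guide-word with every parameter to produce candidate
        deviations for the study team to assess for cause and consequence."""
        return [f"{gw} {p}" for gw, p in product(guide_words, parameters)]

    for deviation in candidate_deviations(GUIDE_WORDS, PARAMETERS):
        print(deviation)  # e.g. "NO flow", "MORE pressure", "LESS temperature"

The value of the guide-words is that they force the team to consider deviations it would not have volunteered unprompted, which is exactly the kind of probing of the fourth quadrant that Patrick describes.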

3. I’ve always been struck by how many accident reports include a comment that, had a precursor warning event or near miss been heeded, the accident would not have occurred. The idea that managing the safety of a complex system inherently requires you to listen for weak signals offers an explanation of this correlation: organisations that actively look for ontological risk tend not to have such accidents, while those that don’t look for it do.

4. See my post Simple designs are safer for a take on Karl Popper’s theory of science and what this means for safety engineering. The post deals directly with why simplicity should be preferred in safety systems, but also contains a general introduction to Popper’s theory of refutation.
