On paying attention to nothing

24/01/2014 — 6 Comments

Silver Blaze (Image source: Strand magazine)

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”

What you pay attention to dictates what you’ll miss

The great detective's point was that the absence of something was the evidence the Scotland Yard detective had overlooked. Holmes, of course, using imagination and intuition, identified that this absence was in fact the vital clue. The plot device works marvellously well because almost all of us, like detective Gregory, fail to recognise that such an absence is actually ‘there’ in a sense, let alone that it’s important.

The problem we face is that we tend to focus too much on the evidence in front of us while under-estimating or overlooking the effect of factors that are not represented. In a famous study on the construction of fault trees, Fischhoff (1978) showed that when people were asked to allocate probabilities first to the branches of an original fault tree and then to a ‘pruned’ version, in which some of the original branches had been removed and an ‘all others’ branch added, they consistently failed to re-allocate the probability of the missing branches to the catch-all branch, thereby biasing probability estimates towards the surviving branches.
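To make the bias concrete, here’s a minimal sketch of the pruning effect in Python; the branch names and probabilities are invented for illustration, they are not Fischhoff’s experimental data.

# Minimal sketch of the Fischhoff et al. pruning effect; the branches and
# probabilities below are invented for illustration only.

full_tree = {
    "battery": 0.25,
    "fuel system": 0.20,
    "ignition": 0.15,
    "starting system": 0.20,
    "all other causes": 0.20,
}

# Prune two branches. A well-calibrated judge would fold their probability
# into the 'all other causes' catch-all branch.
pruned = {k: v for k, v in full_tree.items()
          if k not in ("ignition", "starting system")}

calibrated = dict(pruned)
calibrated["all other causes"] += full_tree["ignition"] + full_tree["starting system"]

# What subjects tended to do instead: spread the missing probability across
# the surviving branches (modelled here as simple renormalisation), so the
# catch-all absorbs far less than the pruned branches' probability.
total = sum(pruned.values())
biased = {k: v / total for k, v in pruned.items()}

print("catch-all, calibrated:", round(calibrated["all other causes"], 2))  # 0.55
print("catch-all, as judged: ", round(biased["all other causes"], 2))      # ~0.31

The numbers don’t matter; what matters is that the missing branches’ probability quietly ends up inflating the branches we can still see.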

Silvera et al. (2005) interpret these results as evidence of an omission neglect bias, that is, a general insensitivity to missing information (in the case of a fault tree, alternative causal mechanisms). Of course Fischhoff’s experiment gave participants a clue that information was missing; if that clue is also absent then the omission neglect bias will be that much stronger.

Here’s a small Gedanken that neatly demonstrates the problem in safety analyses. Below is a fault tree for the failure to start of an emergency diesel generator, taken from SA/SNZ HB 89:2013. Take a look at it and consider whether anything has been missed. Looks reasonable, doesn’t it?

Fault tree #1 (Image source: SA/SNZ HB 89:2013)

OK, so now mentally step back from the tree, look at the corners of the first level of events, and think about what’s not in the figure. Did you notice that the causes of failure don’t include external events?

Fault tree #2 (Image source: SA/SNZ HB 89:2013)

The point of this small example is not that we missed something, but that what was represented skewed our attention, such that even though we ‘know’ about flooding we didn’t consider it. This misdirection effect is so strong and so well known that people make their living from it; we call them safety analysts... er, I mean magicians.
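To see why the omission matters numerically, here’s a back-of-the-envelope sketch; the failure probabilities are invented for the sake of illustration and don’t come from the handbook.

# Illustrative sketch only: invented probabilities for an emergency diesel
# generator 'fails to start' fault tree with a single OR gate at the top.

def or_gate(probs):
    """Top-event probability for an OR gate over independent basic events."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# The branches the figure prompts us to think about: internal causes only.
internal = {"starting system": 0.01, "fuel supply": 0.005, "control logic": 0.002}

# The branch it never prompts us to think about.
external = {"flooding of the generator hall": 0.02}

print("fails to start, internal causes only:   ",
      round(or_gate(internal.values()), 4))                                   # ~0.0169
print("fails to start, external events included:",
      round(or_gate(list(internal.values()) + list(external.values())), 4))   # ~0.0366

Nothing in the drawn tree is wrong; it’s just that the branch that isn’t drawn ends up dominating the answer, and the diagram gives us no prompt to go looking for it.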

Now let’s engage in another small Gedanken. Imagine that before introducing the above fault tree I’d prefaced the diagram with the comment that the system was an emergency generator for a large industrial facility located on the edge of a bay. Would we be more likely to identify flooding as a missing causal factor? Perhaps. And what if I’d stated that the facility was actually a nuclear power plant? Perhaps more so.

Fukushima - the second wave (Image source: Unknown)

This misdirection effect is not confined to fault trees, by the way. Any diagram that purports to show all the causal factors or consequences, but doesn’t include a ‘here be dragons’ leg, will strongly misdirect attention away from what we don’t know and towards what we do. As another example, the graphical nature of the Goal Structuring Notation used in many safety cases likewise focuses attention upon the contents of the argument and away from what might actually be missing.

Unfortunately engineers love to use such diagrams as a medium of communication, because they give a patina of objectivity and mathematical certitude to what is in reality a messy, subjective and value-laden process of getting to a decision (Porter 1995). Worse, when we’re dealing with high-consequence systems it’s Nassim Taleb’s unexpected and silent Black Swan events, lurking in the outer darkness, that we really have to worry about, and for these such diagrams appear to have a negative value because they spawn ontological risk.

References

Fischhoff, B., Cost-benefit analysis and the art of motorcycle maintenance, Policy Sciences, 8(2), 177-202, 1977.

Fischhoff, B., Slovic, P., & Lichtenstein, S., Fault trees: Sensitivity of estimated failure probabilities to problem representations, Journal of Experimental Psychology: Human Perception and Performance, 3, 330-344, 1978.

Porter, T.M., Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton University Press, Princeton NJ, 1995.

Silvera, D.H., Kardes, F.R., Harvey, N., Cronley, M.L., Houghton, D.C., Contextual Influences on Omission Neglect in the Fault Tree Paradigm, Journal of Consumer Psychology, Vol. 15, Issue 2, 117-126, 2005, ISSN 1057-7408.

6 responses to On paying attention to nothing

  1. 

    “Patina of Objectivity” equally as good, possibly just edging “A Veneer of Rigour”. Great article. Thanks.

  2. 

    I always say that us software developers fix the last problem we encountered, not the current one. I think this is because we don’t tend to use a formal decision tree so the only branch we ‘see’ is the huge impact from the last problem. So we go back down the tortuous path from the last huge issue only to find that, once again, the machine was unplugged.

  3. 

    Are external causes within a reasonable scope of fault tree analysis? As I understand fault trees, they constitute a compromise between conceptual modeling and systematic analysis. The notation allows the modeler to express anything as long as it conforms to the fault tree syntax; the notation does not guarantee that it makes any sense. Within a narrow scope, however, one can construct fault trees systematically to some degree: analyzing a system with regard to component dependencies and fault propagation, one gets an idea how component failures affect the system’s mission. I think your scenario leaves this narrow scope. To identify the flooding risk and its consequences, one needs further techniques besides fault trees. Can we give engineers something they’ll love even more than their fault trees?

    • 
      Matthew Squair 24/03/2014 at 9:40 pm

      If they put all the standby generators in the basement then yes I would! The underlying issue as I see it is that it’s a model, therefore of necessity an abstraction designed by an analyst and the act of deciding what to put in or leave out is a highly subjective one. Hence my interest in cognitive biases.

      • 

        I don’t question the point you make about biases and omissions. Of course the flooding risk needs to be identified, assessed, and treated. I just wonder whether we should expect the particular technique of fault tree analysis to help us discover the flooding concern. Maybe we really need an environment modeling approach to understand which external factors a system has to cope with?

      • 
        Matthew Squair 24/03/2014 at 10:57 pm

        I agree, all systems of interest are open, so you can’t avoid looking at both the environmental threats (how big, what frequency) and the common cause vulnerabilities of the system (fault trees are ‘OK’ tools for that sort of system analysis). The Blayais reactor flooding incident is an example of what happens when both parts of that risk assessment are flawed: neither the threat nor the vulnerability was clearly identified.
