Well it sounded reasonable…
One of the things that’s concerned me for a while is the potentially malign narrative power of a published safety case. For those unfamiliar with the term, a safety case can be defined as a structured argument supported by a body of evidence that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment. And I have not yet read a safety case that didn’t purport to be exactly that.
But I also have to admit to a creeping sense of unease about the concept of being able to mount such an argument for any system of more than moderate complexity. The reasons for this unease were crystallised by re-reading The Black Swan by Nassim Nicholas Taleb and rediscovering two key problems he identifies with such endeavours.
The first is what he terms the ‘inverse problem of rare events’. That is, as an event’s magnitude increases, its probability decreases. In turn we are forced to place progressively more reliance on theory, with all its epistemic and ontological uncertainty, and less on empirical knowledge. The result? Our certainty about the risk of rare events (probability × consequence) is inversely proportional to their consequence.
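This inverse relationship can be made concrete with a standard statistical result, the ‘rule of three’ (my illustration, not Taleb’s): if no failure has been observed in n independent trials, the 95% upper confidence bound on the per-trial failure probability is roughly 3/n. The sketch below uses made-up numbers in which the consequence grows faster than the accident-free record, so the bound on risk never improves:

```python
def upper_bound_p(trials_without_failure: int) -> float:
    """95% upper confidence bound on failure probability when no
    failures have been seen, via the statistical 'rule of three'."""
    return 3.0 / trials_without_failure

# Hypothetical systems: more failure-free hours, but ever larger
# consequences (all figures invented for illustration).
for hours, consequence in [(1_000, 1), (100_000, 100), (10_000_000, 10_000)]:
    p_max = upper_bound_p(hours)
    # Bound on risk = probability x consequence stays stubbornly flat:
    print(f"{hours:>10} failure-free hours: p <= {p_max:.1e}, "
          f"risk bound <= {p_max * consequence:.1e}")
```

The point of the sketch is that accumulating incident-free experience tightens the probability bound only linearly, so for events whose consequences outpace our operating record, empirical evidence alone can never drive the bounded risk down; we have to lean on theory instead.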
This leads in turn to the problem of confirmation bias, which Taleb terms the ‘narrative fallacy’: a fallacious good-news argument is fitted to all the statistical data telling you that nothing happens, while ignoring the dire effect of extreme outlier events that, when they do happen, tend to wipe the slate clean. For example, the nuclear industry accumulated thousands of reactor years in which nothing much happened, until unfortunately Chernobyl and then Fukushima came along.
The Nimrod Safety Case process was fatally undermined by a general malaise: a widespread assumption by those involved that the Nimrod was ‘safe anyway’ (because it had successfully flown for 30 years) and the task of drawing up the Safety Case became essentially a paperwork and ‘tickbox’ exercise.
The Nimrod Review, Charles Haddon-Cave QC, 2009.
Safety cases are by definition intended to be supported by a body of evidence, and there is always plenty of evidence in the form of accident-free operating hours, so such arguments inherently start from a position of confirmation bias. This effect is worsened when the safety argument elides from product-related evidence into evidence based more on process compliance. This tends to produce a framing effect in readers, biasing them towards a positive view of the safety of the system. Recent research indicates that this is a very strong effect, one that can shape people’s decision making even when they know the analysis may be flawed.
Which is not to say that safety cases are constructed with anything other than good intentions; it’s just that if they are built on the premises above, then they end up fundamentally flawed.