Archives For Risk Assessment

Singularity (Image source: Tecnoscience)

Or ‘On the breakdown of Bayesian techniques in the presence of knowledge singularities’

One of the abiding problems of safety critical ‘first of’ systems is that you face, as David Collingridge observed, a double bind dilemma:

  1. Initially you face an information problem, because the ‘real’ safety issues (hazards) and their risks cannot easily be identified or quantified until the system is deployed, but 
  2. by the time the system is deployed you face a power (inertia) problem, that is, control or change is difficult once the system has been delivered. Eliminating a hazard at that point is usually very difficult, and we can often only mitigate it in some fashion. Continue Reading…

I was thinking about how the dubious concept of ‘safety integrity levels’ continues to persist in spite of protracted criticism. In essence, if the flaws in the concept of SILs are so obvious, why do they still persist?

Continue Reading…

Another in the occasional series of posts on systems engineering: here’s a guide to evaluating technical risk based on the degree of technical maturity of the solution.

The idea of using technical maturity as an analog for technical risk first appears (to my knowledge) in the 1983 Systems Engineering Management Guide produced by the Defense Systems Management College (1).

Using such analogs is not unusual in engineering; you usually find the practice where measuring the actual parameter is too difficult. For example, architects use floor area as an analog for cost during concept design, because collecting detailed cost data at that point is not really feasible.

While you can introduce other analogs, such as complexity and interdependence, I’ve found that as a first pass assessment of inherent feasibility the basic question of ‘have we done this before?’ is a powerful one, as the sketch below illustrates.
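As a rough sketch of how such a first pass assessment might be mechanised, the short Python fragment below maps a set of maturity categories onto indicative technical risk ratings. The categories and ratings are illustrative placeholders of my own, not the scale from the DSMC guide.

    # Illustrative maturity-to-risk mapping; the categories and ratings are
    # placeholders for discussion, not the DSMC guide's own scale.
    MATURITY_RISK = {
        "off_the_shelf": "low",              # in service, unmodified
        "minor_modification": "low",         # proven item, small changes
        "major_modification": "medium",      # proven item, significant redesign
        "new_design_existing_tech": "high",  # new design, established technology
        "new_technology": "very high",       # beyond the demonstrated state of the art
    }

    def technical_risk(maturity: str) -> str:
        """First pass technical risk rating, using technical maturity
        ('have we done this before?') as the analog for risk."""
        return MATURITY_RISK.get(maturity, "unknown - assess further")

    print(technical_risk("major_modification"))  # -> medium

The point of the sketch is simply that the assessment can be made early and cheaply, before any detailed quantification of risk is possible.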

Notes

1. The 1983 edition is IMO the best of all the Guides, with subsequent editions of the DSMC guide rather more ‘theoretic’ and not as useful, possibly because the 1983 edition was produced by the Lockheed Missiles and Space Company’s Systems Engineering Directorate. Or to put it another way, it was produced by people who wrote about how they actually did their job… 🙂

How do we assure safety when we modify a system?

While the safety community has developed a comprehensive suite of analysis and management techniques for system development, the techniques available to assure the safe modification of systems are rather less prolific.

Which is odd when one considers that most systems spend the majority of their life in operation rather than development…

Continue Reading…

The following is an extract from Kevin Driscoll’s Murphy Was an Optimist presentation at SAFECOMP 2010. Here Kevin does the maths to show how a lack of exposure to failures over a small sample of operating hours leads to a normalcy bias amongst designers and a rejection of proposed failure modes as ‘not credible’. The reason I find it of especial interest is that it gives, at least in part, an empirical argument for why designers find it difficult to anticipate the system accidents of Charles Perrow’s Normal Accident Theory. Kevin’s argument also supports John Downer’s (2010) concept of epistemic accidents. John defines epistemic accidents as those that occur because of an erroneous technological assumption, even though there were good reasons to hold that assumption before the accident. Kevin’s argument illustrates that engineers as technological actors must make decisions in which their knowledge is inherently limited, and so their design choices will exhibit bounded rationality.

In effect, the higher the dependability of a system the greater the mismatch between designer experience and system operational hours, and therefore the tighter the bounds on the rationality of design choices and their underpinning assumptions. The tighter the bounds, the greater the effect cognitive biases will have, e.g. falling prey to the normalcy bias. Of course there are other reasons for such bounded rationality; see Logic, Mathematics and Science are Not Enough for a discussion of these.
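To make the mismatch concrete, here’s a minimal back of the envelope sketch in Python using a Poisson model. The figures are assumed for illustration rather than taken from Kevin’s presentation: a failure rate in the ‘extremely improbable’ range of 1e-9 per hour, a designer’s career exposure of the order of 1e5 hours, and a fleet exposure of the order of 1e8 hours.

    import math

    def p_observe_at_least_one(rate_per_hour: float, exposure_hours: float) -> float:
        """Poisson model: probability of observing at least one failure of the
        given rate over the given exposure."""
        return 1.0 - math.exp(-rate_per_hour * exposure_hours)

    rare_failure_rate = 1e-9  # assumed 'extremely improbable' failure rate, per hour
    designer_exposure = 1e5   # assumed order of a designer's career test/operating exposure
    fleet_exposure = 1e8      # assumed order of a fleet's cumulative operating hours

    print(f"Designer ever sees it: {p_observe_at_least_one(rare_failure_rate, designer_exposure):.1e}")
    print(f"Fleet ever sees it:    {p_observe_at_least_one(rare_failure_rate, fleet_exposure):.1e}")

On these assumed numbers the designer has roughly a one in ten thousand chance of ever encountering the failure, while the fleet as a whole has close to a one in ten chance, which is the experience gap that makes ‘not credible’ such an easy judgement to reach.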

Continue Reading…

The development of safety cases for complex safety critical systems

So what is a safety case? The term has achieved an almost quasi-religious status amongst safety practitioners, with its fair share of true believers and heretics. But if you’ve been given the job of preparing or reviewing a safety case, what’s the next step?

Continue Reading…


An interesting theory of risk perception and communication is put forward by Kahan (2012) in the context of climate risk.

Continue Reading…