Archives For Risk Assessment

We are hectored on an almost daily basis about the imminent threat of Islamic extremism and how we must respond firmly to this real and present danger. Indeed, we have proceeded far enough up the escalation-of-response ladder that this presumably existential threat is now being used to justify talk of internment without trial. So what is the probability that, if you were murdered, the murderer would be an immigrant terrorist?

In NSW in 2014 there were 86 homicides, of which one was directly related to the act of a homegrown Islamist terrorist (1). So there's a 1 in 86 chance that, if you were murdered in that year, it was at the hands of a mentally disturbed asylum seeker (2). Hmm, sounds risky, but is it? Well, there were approximately 2.5 million people in NSW in 2014, so the likelihood of being murdered (in that year) is in the first instance 3.44e-5. To figure out the likelihood of being murdered and that murder being committed by a terrorist, we simply multiply this base rate by the probability that the murder was at the hands of a 'terrorist', ending up with 4e-7, or 4 chances in 10 million, for that year. If we consider subsequent and prior years in which nothing happened, that likelihood becomes even smaller.
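If you want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python, using the figures quoted above:

```python
# A quick check of the numbers in the post (figures as quoted above,
# not independently verified).

homicides = 86            # NSW homicides in 2014
terrorist_homicides = 1   # deaths directly attributable to a terrorist act
population = 2.5e6        # NSW population figure used in the post

p_murdered = homicides / population                           # base rate ~3.44e-5
p_terrorist_given_murdered = terrorist_homicides / homicides  # 1 in 86
p_murdered_by_terrorist = p_murdered * p_terrorist_given_murdered

print(f"P(murdered in 2014)              = {p_murdered:.2e}")
print(f"P(by terrorist | murdered)       = {p_terrorist_given_murdered:.2e}")
print(f"P(murdered by terrorist in 2014) = {p_murdered_by_terrorist:.2e}")  # ~4e-7
```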

Based on this 4 in 10 million chance the NSW government intends to build a 'super-max 2' prison and fill it with 'terrorists', while the Federal government enacts yet more anti-terrorism laws that take us down the road to the surveillance state, if we're not there already. The glaring difference between the perception of risk and the actuality is one that politicians and commentators alike seem oblivious to (3).

Notes

1. The one death during the Lindt café siege that could be directly attributed to the 'terrorist'.

2. Asylum sought and granted in 2001 under the then Liberal–National Coalition government.

3. An action that also ignores the role prisons play in converting inmates to Islam, a route by which Sunni extremists recruit from their criminal, anti-social and violent sub-populations.

Crowley (Image source: Warner Bros. TV)

The psychological basis of uncertainty

There's a famous psychological experiment conducted by Daniel Ellsberg, known eponymously as the Ellsberg paradox, in which he showed that people overwhelmingly prefer a betting scenario in which the probabilities are known over one in which the odds are ambiguous, even if the potential for winning might be greater. Continue Reading…
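As a quick illustration of why this is a paradox, here's a toy simulation of the classic two-urn setup (my own sketch, not Ellsberg's original experiment): under any prior that's symmetric in the two colours, the ambiguous urn pays off exactly as well as the known one, yet most people still prefer the known urn.

```python
# Two-urn Ellsberg sketch. Urn A holds 50 red and 50 black balls; Urn B
# holds 100 balls in an unknown mix. Betting on red pays the same, in
# expectation, for both urns under a symmetric prior on urn B's mix.

import random

PRIZE = 100.0
TRIALS = 100_000

def draw_urn_a():
    return random.random() < 0.5            # known 50/50 urn

def draw_urn_b():
    reds = random.randint(0, 100)           # ambiguous composition,
    return random.random() < reds / 100     # uniform prior over mixes

payoff_a = sum(PRIZE for _ in range(TRIALS) if draw_urn_a()) / TRIALS
payoff_b = sum(PRIZE for _ in range(TRIALS) if draw_urn_b()) / TRIALS

print(f"Expected payoff, urn A (known):     ${payoff_a:.2f}")
print(f"Expected payoff, urn B (ambiguous): ${payoff_b:.2f}")  # ~the same
```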

National Terrorism Threat Advisory System

With much pomp and circumstance the Attorney General and our top state security mandarins have rolled out the brand new threat level advisory system. Congrats to us, we are now the proud owners of a five-runged ladder of terror. There's just one small, teeny tiny, insignificant problem: it just doesn't work. Yep, that's right, as a tool for communication it's completely devoid of meaning, useless in fact, a hopelessly vacuous piece of security theatre.

You see, the levels of this scale are based on likelihood. But whoever designed the scale forgot to state the duration over which that likelihood is being estimated. And without a duration it's just a meaningless list of words.

Here's how likelihood works. Say you ask me whether it's likely to rain tomorrow; I say 'unlikely'. Now ask me whether it will rain in the next week; well, that's a bit more likely, isn't it? OK, so next you ask me whether it'll rain in the next year. Unless you live in Alice Springs the answer is going to be more likely still, maybe almost certain. So you can see that the duration we're thinking of affects the likelihood we come up with, because likelihood is a cumulative measure.

Now ask me whether a terrorist attack is going to happen tomorrow. I'd probably say it was so unlikely as to be 'Not expected'. But if you asked me whether one might occur in the next year I'd say (as we're accumulating exposure) it'd be more likely, maybe even 'Probable', while if the question was asked about a decade of exposure I'd almost certainly answer 'Certain'. So you see how a scale without a duration means absolutely nothing; in fact it's much worse than nothing, it actually causes misunderstanding, because I may be thinking of threats across the next year while you may be thinking of threats in the next month. It communicates negative information.
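If you want to see just how much the duration matters, here's a small sketch using a purely illustrative per-day probability (the number is made up for the example, it's not a threat estimate):

```python
# Cumulative likelihood over different exposure windows, for an
# illustrative, assumed per-day probability of an attack.

p_per_day = 1e-4

def p_at_least_one(days: int) -> float:
    """Probability of at least one event in the given number of days."""
    return 1 - (1 - p_per_day) ** days

for label, days in [("tomorrow", 1), ("next year", 365), ("next decade", 3650)]:
    print(f"P(at least one attack, {label:11}): {p_at_least_one(days):.4f}")

# tomorrow ~0.0001, next year ~0.036, next decade ~0.306 -- the same
# underlying threat, three different 'likelihood' words, unless you
# state the duration.
```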

And this took years of consideration, according to the Attorney General. Man, we are governed by second-raters. Puts head in hands.

Two reactors (image)

A tale of another two reactors

There's been much debate over the years as to whether the various tolerance-of-risk approaches actually satisfy the legal principle of reasonable practicability. But there hasn't, to my mind, been much consideration of the value of simply adopting the legalistic approach in situations where we have a high degree of uncertainty regarding the likelihood of adverse events. In such circumstances basing our decisions upon what can turn out to be very unreliable estimates of risk can have extremely unfortunate consequences. Continue Reading…

Sharks (Image source: Darren Pateman)

Practical risk management, or why I love living in Australia

We're into the ninth day of closed beaches here, with two large great whites spotted 'patrolling our shores', whatever that means. Of course in Australia closed doesn't actually mean the beaches are padlocked, not yet anyway. We just put a sign up and people can make up their own minds as to whether they wish to run the risk of being bitten. In my book a sensible approach to the issue, one that balances societal responsibility with personal freedom. I mean, it's not like they're as dangerous as bicycles… Continue Reading…

NASA safety handbook cover

Way, way back in 2011 NASA published the first volume of their planned two-volume epic on system safety, titled strangely enough "NASA System Safety Handbook Volume 1, System Safety Framework and Concepts for Implementation". Catchy, eh?

Continue Reading…

I guess we're all aware of the wave of texting-while-driving legislation, as well as recent moves in a number of jurisdictions to make the penalties more draconian. And it seems like a reasonable supposition that such legislation would reduce the incidence of accidents, doesn't it?

Continue Reading…

Singularity (Image source: Tecnoscience)

Or ‘On the breakdown of Bayesian techniques in the presence of knowledge singularities’

One of the abiding problems of safety-critical 'first of' systems is that you face, as David Collingridge observed, a double-bind dilemma:

  1. Initially an information problem, because 'real' safety issues (hazards) and their risks cannot be easily identified or quantified until the system is deployed, but 
  2. By the time the system is deployed you face a power (inertia) problem, that is, control or change is difficult once the system is deployed or delivered. Eliminating a hazard at that point is usually very difficult and we can only mitigate it in some fashion. Continue Reading…

I was thinking about how the dubious concept of 'safety integrity levels' persists in spite of protracted criticism. In essence, if the flaws in the concept of SILs are so obvious, why do they still persist?

Continue Reading…

Another in the occasional series of posts on systems engineering: here's a guide to evaluating technical risk based on the degree of technical maturity of the solution.

The idea of using technical maturity as an analog for technical risk first appears (to my knowledge) in the 1983 Systems Engineering Management Guide produced by the Defense Systems Management College (1).

Using such analogs is not unusual in engineering; you usually find it practiced where measuring the actual parameter is too difficult. For example, architects use floor area as an analog for cost during concept design, because collecting detailed cost data at that point is not really feasible.

While you can introduce other analogs, such as complexity and interdependence, as a first-pass assessment of inherent feasibility I've found the basic question of 'have we done this before?' to be a powerful one.

Notes

1. The 1983 edition is IMO the best of all the Guides, with subsequent editions of the DSMC guide rather more 'theoretical' and not as useful, possibly because the 1983 edition was produced by the Lockheed Missiles and Space Company's Systems Engineering Directorate. Or to put it another way, it was produced by people who wrote about how they actually did their job… 🙂

How do we assure safety when we modify a system?

While the safety community has developed a comprehensive suite of analyses and management techniques for system development, those available to ensure the safe modification of systems are somewhat less prolific.

Which is odd when one considers that most systems spend the majority of their life in operation rather than development…

Continue Reading…

The following is an extract from Kevin Driscoll's Murphy Was an Optimist presentation at SAFECOMP 2010. Here Kevin does the maths to show how a lack of exposure to failures over a small sample size of operating hours leads to a normalcy bias amongst designers and a rejection of proposed failure modes as 'not credible'. The reason I find it of especial interest is that it gives, at least in part, an empirical argument for why designers find it difficult to anticipate the system accidents of Charles Perrow's Normal Accident Theory. Kevin's argument also supports John Downer's (2010) concept of epistemic accidents. John defines epistemic accidents as those that occur because of an erroneous technological assumption, even though there were good reasons to hold that assumption before the accident. Kevin's argument illustrates that engineers, as technological actors, must make decisions in which their knowledge is inherently limited, and so their design choices will exhibit bounded rationality.

In effect, the higher the dependability of a system the greater the mismatch between designer experience and system operational hours, and therefore the tighter the bounds on the rationality of design choices and their underpinning assumptions. The tighter the bounds, the greater the effect cognitive biases will have, such as falling prey to the normalcy bias. Of course there are other reasons for such bounded rationality, see Logic, Mathematics and Science are Not Enough for a discussion of these.
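To give a feel for the scale of that mismatch, here's a rough back-of-the-envelope sketch (my numbers, not Kevin's):

```python
# A rough illustration (assumed numbers, not Driscoll's) of the mismatch
# between an individual designer's exposure and the failure rates a
# highly dependable system must achieve.

career_years = 40
hours_per_year = 2000
designer_hours = career_years * hours_per_year      # ~80,000 hours of exposure

failure_rate = 1e-9        # catastrophic failure rate target, per hour
expected_failures_seen = designer_hours * failure_rate
p_never_seen = (1 - failure_rate) ** designer_hours  # chance of seeing none at all

print(f"Designer exposure:           {designer_hours:,} hours")
print(f"Expected failures witnessed: {expected_failures_seen:.1e}")
print(f"P(designer never sees one):  {p_never_seen:.6f}")

# With ~0.00008 expected failures in a whole career, 'I've never seen it
# happen' says almost nothing about whether the failure mode is credible.
```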

Continue Reading…

The development of safety cases for complex safety critical systems

So what is a safety case? The term has achieved an almost quasi-religious status amongst safety practitioners, with its fair share of true believers and heretics. But if you've been given the job of preparing or reviewing a safety case, what's the next step?

Continue Reading…


An interesting theory of risk perception and communication is put forward by Kahan (2012) in the context of climate risk.

Continue Reading…

The Risk of losing any sum is the reverse of Expectation; and the true measure of it is, the product of the Sum adventured multiplied by the Probability of the Loss.

Abraham de Moivre, De Mensura Sortis, 1711, in the Philosophical Transactions of the Royal Society

One of the perennial challenges of system safety is that for new systems we generally do not have statistical data on accidents. High consequence events are, we hope, quite rare, leaving us with a paucity of information. So we usually end up basing any risk assessment upon low base rate data and falling back upon some form of subjective (and qualitative) method of risk assessment. Risk matrices were developed to guide such qualitative risk assessments and decision making, and the form of these matrices is based on a mix of classical decision and risk theory. The matrix is widely described in the safety and risk literature and has become one of the less questioned staples of risk management. Yet despite its long history you still see plenty of poorly constructed risk matrices out there, in both the literature and standards. So this post attempts to establish some basic principles of construction as an aid to improving the state of practice and understanding.

The iso-risk contour

Based on de Moivre's definition we can define a series of curves that represent the same risk level (termed iso-risk contours) on a two-dimensional surface. While this is mathematically correct, it's difficult for people to use in making qualitative evaluations. So decision theorists took the iso-risk contours and zoned them into judgementally tractable cells (or bins) to form the risk matrix. In this matrix each cell notionally represents a point on an iso-risk curve, and the steps in the matrix define the edges of 'risk zones'. We can also plot the curves using log-log axes, which yields straight-line contours and gives us a matrix that looks like the one below.

RiskMatrix (image: example risk matrix)
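For reference, here is de Moivre's definition and the iso-risk relation written out (with R for risk, S for severity and P for probability), which is why the contours plot as straight diagonal lines on log-log axes:

```latex
\begin{align*}
  R &= S \times P
    && \text{(de Moivre: risk is severity multiplied by probability)} \\
  S \times P &= R_{\text{const}}
    && \text{(an iso-risk contour)} \\
  \log S + \log P &= \log R_{\text{const}}
    && \text{(a straight line of slope $-1$ on log-log axes)}
\end{align*}
```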

This binning is intended to make qualitative decisions as to severity or likelihood more tractable, as human beings find it easier to select qualitative values from amongst such bins. But unfortunately binning also introduces ambiguity into the risk assessment: if you look at the example given above you'll see that the iso-risk contour runs through the diagonal of each cell, so 'off diagonal' the cell risk is lesser or greater depending on which side of the contour you're looking at. So be aware that when you bin a continuous risk contour you pay for easier decision making by increasing the uncertainty of the underlying assigned risk.

Scaling likelihood and severity

The next problem that faces us in developing a risk matrix is assigning a scale to the severity and likelihood bins. In the example above we have a linear (log-log) plot, so the value of each succeeding bin's median point should go up by an order of magnitude. In qualitative terms this defines an ordinal scale: 'probable' in the example above is an order of magnitude more likely than 'remote', while 'catastrophic' is another order of magnitude greater in severity than 'critical', and so on. There are two good reasons to adopt such a scaling. The first is that it's an established technique for avoiding the qualitative under-estimation of values, people generally finding it easier to discriminate between values separated by an order of magnitude than along a linear scale. The second is that if we have a linear iso-risk matrix then (by definition) the scales should also be logarithmic to comply with de Moivre's equation. Unfortunately, you'll also find plenty of example linear risk contour matrices that unwittingly violate de Moivre's equation, Australian Standard AS 4360 being one example. While such matrices may reflect a conscious decision-making strategy, for example sensitivity to extreme severities, they don't reflect de Moivre's theorem and the classical definition of risk, so where you do depart you need to be clear as to why (dealing with non-ergodic risks is a good reason).
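As a concrete, purely illustrative example of such a scaling, here's one way to assign order-of-magnitude midpoints to the bins of a five by five matrix; the names and values are placeholders, not a recommendation:

```python
# Order-of-magnitude bin midpoints for an illustrative 5x5 matrix.
# Calibrate these to your own domain before using them.

likelihood_bins = {      # probability of occurrence over the defined exposure
    "Frequent":   1e-1,
    "Probable":   1e-2,
    "Occasional": 1e-3,
    "Remote":     1e-4,
    "Improbable": 1e-5,
}

severity_bins = {        # consequence, expressed here as cost of loss ($)
    "Catastrophic": 1e7,
    "Critical":     1e6,
    "Major":        1e5,
    "Minor":        1e4,
    "Negligible":   1e3,
}

# Per de Moivre, each cell's nominal risk is the product of its midpoints
for l_name, p in likelihood_bins.items():
    row = {s_name: p * s for s_name, s in severity_bins.items()}
    print(l_name, row)
```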

Cell numbers and the 4 to 7 rule

Another decision in designing a risk matrix is how many cells to have: too few and the evaluation is too coarse, too many and the decision making becomes bogged down in fine discrimination (which, as noted above, is hardly warranted). The usual happy compromise sits at around 4 to 7 bins on each of the vertical and horizontal axes. Adopting a logarithmic scale gives a wide field of discrimination for a 4 to 7 bin matrix, but again there's a trade-off, this time between sizing the bins to reduce the cognitive workload of the assessor and the information lost by doing so.

The semiotics of colour

The problem with a matrix is that it can only represent two dimensions of information on one graph. Thus a simple matrix may allow us to communicate risk as a function of frequency (F) and severity (S), but we still need a way to graphically associate decision rules with the identified risk zones. The traditional method is to use colour to designate the specific risk zones, with a colour key giving the associated action required. As the example above illustrates, the various decision zones of risk are colour coded, with the highest risks given the most alarming colours. While colour has an inherently strong semiotic meaning, one intuitively understood by expert and lay person alike, there is also a potential trap: by setting the priorities using such a strong code we are subtly forcing an imperative. In these circumstances a colourised matrix can become a tool of persuasion rather than one of communication (Roth 2012). One should therefore carefully consider what form of communication the matrix is intended to support.

Ensure consistency of ordering

A properly formed matrix's risk zones should also exhibit what is called 'weak consistency'; that is, the qualitative ordering of risk defined by the various (coloured) zones and their cells should rank risks (from high to low) in roughly the same way that a quantitative analysis would (Cox 2008). In practical terms what this means is that if you find areas of the matrix where the same effort will produce a greater or lesser improvement in risk when compared to another area (Clements 96), you have a problem of construction. You should also (in principle) never be able to jump across two risk decision zones in one movement. For example, if a mitigation reduces only the likelihood of occurrence of a hazard, we wouldn't expect the risk reduction it delivers to change as the severity of the risk changes.
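Here's a toy check of that condition, inspired by Cox's weak consistency criterion; the matrix layout and midpoint values are illustrative only:

```python
# Toy weak-consistency check: the qualitative (colour) ranking of cells
# should not invert the ranking given by quantitative risk.

# Qualitative zone assigned to each cell: 0 = low, 1 = medium, 2 = high
zones = [
    [1, 2, 2],
    [0, 1, 2],
    [0, 0, 1],
]

likelihood = [1e-2, 1e-3, 1e-4]   # row midpoints, highest first
severity   = [1e4, 1e5, 1e6]      # column midpoints, lowest first

def weakly_consistent(zones, likelihood, severity):
    cells = [(zones[i][j], likelihood[i] * severity[j])
             for i in range(len(likelihood)) for j in range(len(severity))]
    # Every cell in a higher zone should carry at least as much quantitative
    # risk as every cell in a strictly lower zone.
    return all(r_hi >= r_lo
               for z_hi, r_hi in cells for z_lo, r_lo in cells
               if z_hi > z_lo)

print(weakly_consistent(zones, likelihood, severity))   # True for this layout
```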

Dealing with boundary conditions

In some standards, such as MIL-STD-882, an arbitrary upper bound may be placed on severity, in which case what happens when a severity exceeds the 'allowable' threshold? For example, should we discriminate between a risk to more than one person and a risk to a single person? For specific domains this may not be a problem, but for others where mass casualty events are a possibility it may well be of concern. In this case, if we don't wish to add columns to the matrix, we may define a sub-cell within the matrix to reflect that this is a higher-than-anticipated level of risk. Alternatively we could define the severity for the 'I' column as a range of severities whose median point is the mandated severity; so, for example, the catastrophic hazard bin would range from 1 fatality to 10 fatalities. Turning to likelihood, one should also include an 'incredible' likelihood bin so that risks which have been eliminated, and for which there is no credible likelihood of occurrence, can be recorded rather than deleted. After all, just because we've retired a hazard today doesn't mean a subsequent design change or design error won't resurrect it to haunt us.

Calibrating the risk matrix

Risk matrices are used to make decisions about risk and its acceptability. But how do we calibrate the risk matrix to represent an understandably acceptable risk? One way is to pick a specific bin and establish a calibrating risk scenario for it, usually drawn from the real world, for which we can argue the risk is considered broadly acceptable by society (Clements 96). So in the matrix above we could pick cell IE and equate it to an acceptable real-world risk that could result in the death of a person (a catastrophic loss). For example, 'the risk of death in a motor vehicle accident on the way to and from work on main arterial roads under all weather conditions, cumulatively over a 25 year working career'. This establishes the edge of the acceptable risk zone by definition and allows us to define the other risk zones. In general it's always a good idea to provide a description of what each qualitative bin means so that people understand what is intended. If needed, you can also include numerical ranges for likelihood and severity, such as loss values in dollars, numbers of injuries sustained and so on.
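As a sketch of how such a calibration might be worked through, here's the commuting scenario reduced to numbers; every rate in it is an assumption for illustration, so substitute real actuarial data before relying on it:

```python
# Rough, illustrative calibration of cell 'IE' against the commuting
# scenario described above. All rates are assumed for the example.

fatality_risk_per_km = 5e-9       # assumed per-km risk of death on arterial roads
daily_commute_km = 30             # assumed round-trip distance
working_days_per_year = 230
career_years = 25

career_km = daily_commute_km * working_days_per_year * career_years
career_risk = 1 - (1 - fatality_risk_per_km) ** career_km

print(f"Career exposure: {career_km:,.0f} km")
print(f"Cumulative risk of death over a 25 year commuting career: {career_risk:.1e}")

# The resulting figure (~1e-3 with these assumptions) anchors the
# 'broadly acceptable' boundary that cell IE is then taken to represent.
```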

Define your exposure

One should also consider and define the number of units, people or systems exposed; clearly there is a difference between the cumulative risk posed by, say, one aircraft and a fleet of one hundred aircraft in service, or between one bus and a thousand cars. What may be acceptable at an individual level (for example a road accident) may not be acceptable at an aggregated or societal level, and risk curves may need to be adjusted accordingly. MIL-STD-882C offers a simple example of this approach.

And then define your duration

Finally, and perhaps most importantly, you always need to define the duration of exposure over which the likelihood applies. Without it the statement is at best meaningless and at worst misleading, as different readers will assume different durations. A 1 in 100 probability of loss over 25 years of life is a very different risk to 1 in 100 over a single eight-hour flight.
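Converting both statements to a common per-hour rate makes the difference obvious (assuming, for simplicity, a constant hazard rate):

```python
# Converting the two '1 in 100' statements above into comparable
# per-hour rates, back-of-envelope, assuming constant hazard rates.

import math

p_loss = 0.01

hours_25_years = 25 * 365 * 24        # ~219,000 hours of life exposure
hours_one_flight = 8

# For a constant hazard rate h over T hours, p = 1 - exp(-h * T)
rate_25_years = -math.log(1 - p_loss) / hours_25_years
rate_one_flight = -math.log(1 - p_loss) / hours_one_flight

print(f"1 in 100 over 25 years   -> {rate_25_years:.1e} per hour")
print(f"1 in 100 over one flight -> {rate_one_flight:.1e} per hour")
print(f"Ratio: ~{rate_one_flight / rate_25_years:,.0f} times riskier per hour")
```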

Final thoughts

A risk matrix is about making decisions, so it needs to support the user in that regard, but its use as part of a risk assessment should not be seen as a means of acquitting a duty of care. The principle of 'so far as is reasonably practicable' cares very little about risk in the first instance, asking only whether it would be considered reasonable to acquit the hazard. Risk assessments belong at the back end of a program when, despite our efforts, we have residual risks to consider as part of evaluating whether we have achieved a reasonably practicable level of safety. A fact that modern decision makers should keep firmly in mind.

References

Clements, P., Sverdrup System Safety Course Notes, 1996.

Cox, L.A. Jr., 'What's Wrong with Risk Matrices?', Risk Analysis, Vol. 28, No. 2, 2008.

Cox, S., Tait, R., Safety, Reliability & Risk Management, 2nd Ed., Butterworth-Heinemann, 1998.

MIL-STD-882, System Safety Program Requirements.

Leveson, N., Safeware: System Safety and Computers – A Guide to Preventing Accidents and Losses Caused by Technology, Addison-Wesley, 1995.

Roth, F., Focal Report 9: Risk Analysis – Visualizing Risk: The Use of Graphical Elements in Risk Analysis and Communications, Risk and Resilience Research Group, Center for Security Studies (CSS), Zürich, 2012.

For the STS-134 mission NASA has estimated a 1 in 90 chance of loss of vehicle and crew (LOCV), based on a Probabilistic Risk Assessment (PRA). But should we believe this number?

Continue Reading...

Easter Madness

24/04/2011

Another Easter has come, bringing with it the traditional Easter road toll and press hyperbole… But let's strip away the rhetoric and think about the subject coolly and rationally. Are we really behaving worse at Easter than at any other time of the year?

Continue Reading...

People tend to seek out and interpret information that reinforces their beliefs, especially in circumstances of uncertainty or when there is emotion attached to the issue. This bias is known as confirmatory or 'myside' bias. So what can you do to guard against the internal 'yes man' that is echoing back your own beliefs?

Continue Reading...

As the Latin root of the word risk indicates, an integral part of risk taking is the benefit we achieve. However, oftentimes decision makers do not have a clear understanding of what the upside or payoff actually is.

Continue Reading...

Lessons from QF 32

06/11/2010

The recent Qantas QF32 engine failure illustrates the problems of dealing with common cause failure

This post is part of the Airbus aircraft family and system safety thread.

Updated: 15 Nov 2012

Generally the reason we have more than one of anything on a passenger aircraft is that we know components can fail, so independent redundancy is the cornerstone strategy for achieving the required levels of system reliability and safety. But while overall aircraft safety is predicated on the independence of these components, the reality is that the catastrophic failure of one component can also affect adjacent equipment and systems, leading to what are termed common cause failures.
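A simple sketch shows why this matters so much for redundancy; the failure rates and beta factor below are illustrative assumptions, not figures from the QF32 investigation:

```python
# Why common cause failure undermines redundancy: a simple beta-factor
# sketch with assumed, illustrative numbers.

independent_rate = 1e-5     # per-hour failure rate of one channel
beta = 0.05                 # assumed fraction of failures with a common cause

# With true independence, both channels failing together is the product
p_both_independent = independent_rate ** 2

# With a common cause fraction (simple beta-factor model), the shared
# failure mode dominates the combination
p_both_common_cause = beta * independent_rate + ((1 - beta) * independent_rate) ** 2

print(f"Both fail, assuming independence: {p_both_independent:.1e} per hour")
print(f"Both fail, with 5% common cause:  {p_both_common_cause:.1e} per hour")

# 1e-10 vs ~5e-7: the common cause term is orders of magnitude larger,
# which is why a single uncontained engine failure damaging adjacent
# systems matters so much.
```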

Continue Reading…

Satisficing ALARP

27/10/2010

A short and pragmatic guide on how to demonstrate what is practical and reasonable in complying with the ALARP safety goal

The principle of As Low As Reasonably Practicable (ALARP) is enshrined in the health and safety legislation of many western countries and embodies a hybrid rights and utility (cost/benefit) decision strategy used to determine when to stop reducing the risk associated with the use of a system. (1),(2)

To quote the UK defence safety standard DEF-STAN 00-56, ALARP is achieved when:

… it has been demonstrated that the cost of any further risk reduction, where the cost includes the loss of system capability as well as financial or other resource costs, is grossly disproportionate to the benefit obtained from that risk reduction. (DEF-STAN 00-56 Issue 3)

Unfortunately the terms 'reasonable' and 'practicable' are unbounded qualifiers of the requirement: what is deemed to be either reasonable or practicable is inherently subjective and open to interpretation. This of course is not a good thing when it comes to writing a specification or forming a contract, so from this perspective, while ALARP may be a laudable societal goal, it is not a good commercial requirement. Further, what do you do in practical terms when a naive customer requires you to 'demonstrate' the achievement of ALARP for all risks, no matter how many of them there are, or how well they are understood?
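For what it's worth, the underlying cost-benefit (gross disproportion) test can be sketched simply enough; the numbers below, including the disproportion factor of three, are illustrative rules of thumb rather than anything mandated by the standard:

```python
# A sketch of the gross disproportion test that underlies ALARP.
# All figures are illustrative assumptions.

risk_reduction_per_year = 1e-4        # reduction in probability of a fatality
value_of_preventing_fatality = 2e6    # assumed valuation, in pounds
years_of_operation = 20

benefit = risk_reduction_per_year * value_of_preventing_fatality * years_of_operation
cost_of_measure = 50_000              # cost of the proposed risk reduction measure
disproportion_factor = 3              # commonly cited rule of thumb, not mandated

if cost_of_measure > disproportion_factor * benefit:
    print("Cost is grossly disproportionate: further reduction not required")
else:
    print("Cost is not grossly disproportionate: implement the measure")

print(f"Benefit ~ £{benefit:,.0f}, cost £{cost_of_measure:,.0f}")
```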

The Titanic effect

27/09/2010

So why did the Titanic sink? The reason highlights the role of implicit design assumptions in complex accidents, and the interaction of design with the operation of safety critical systems.

Continue Reading...

So why is one in a million an acceptable risk? The answer may be simpler than we think.

Continue Reading...

The Affect Effect

22/05/2010

Why you think that your mobile phone is good for you, even though it might be cooking your brain.

Continue Reading...