Archives For Risk

What is risk, and how do we categorise and deal with it?

Black Saturday fires (Image source: ABC)

With the NSW Rural Fire Service fighting more than 50 fires across the state, and the unprecedented hellish conditions set to deteriorate even further with the arrival of strong winds, the question of the day is: exactly how bad could this get? The answer, unfortunately, is a whole lot worse. That’s because as human beings we have difficulty thinking about and dealing with extreme events. To quote from a post I wrote in the aftermath of the 2009 Victorian Black Saturday fires:

So how unthinkable could it get? The likelihood of a fire versus its severity can be credibly modelled as a power law, a particular type of heavy-tailed distribution (Clauset et al. 2007). This means that extreme events in the tail of the distribution are far more likely than a Gaussian (the classic bell curve) distribution would predict. So while a mega-fire ten times the size of the Black Saturday fires is far less likely, it is not nearly as improbable as our intuitive availability heuristic would indicate. In fact it’s much worse than we might think: with heavy-tailed distributions you need to apply what’s called the mean excess function, which really translates to the next worst event almost always being much worse…
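To make that contrast concrete, here’s a minimal sketch (my illustration, not from the original post; the tail index alpha = 1.8 and the unit scale are assumed for the example) comparing the tail probability of a Pareto-type power law with a Gaussian, plus the Pareto mean excess, which grows linearly in the threshold:

```python
import math

def pareto_sf(x, xm=1.0, alpha=1.8):
    """P(X > x) for a Pareto (power-law) tail with scale xm and tail index alpha."""
    return (xm / x) ** alpha if x >= xm else 1.0

def normal_sf(x, mu=0.0, sigma=1.0):
    """P(X > x) for a Gaussian, via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def pareto_mean_excess(u, alpha=1.8):
    """E[X - u | X > u] for a Pareto with alpha > 1: it grows linearly in u,
    i.e. the next worst event is, on average, much worse than the threshold."""
    return u / (alpha - 1.0)

# At ten 'typical' severities the power-law tail is still ~1.6%,
# while the Gaussian tail has vanished entirely.
for x in (10.0, 100.0):
    print(f"x={x}: power-law tail {pareto_sf(x):.2e} vs Gaussian tail {normal_sf(x):.2e}")
print(f"mean excess beyond 10: {pareto_mean_excess(10.0):.1f}")
```

Note how the mean excess beyond a threshold of 10 is 12.5: conditional on exceeding the threshold, the expected event is well beyond it, which is the “next worst event will be much worse” property in numbers.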

So how did we get to this? Well, simply put, the extreme weather we’ve been experiencing is a tangible, present-day effect of climate change. Climate change is not something we can leave to our children to worry about; it’s happening now. That half a degree rise in global temperature? Well, it turns out it supercharges the heavy tail of bushfire severity. Putting it even more simply, it looks like we’ve been twisting the dragon’s tail and now it’s woken up…

Matrix (Image source: The Matrix film)

How algorithms can kill…

So apparently the Australian Government has been buying its software from Cyberdyne Systems, or at least you’d be forgiven for thinking so given the brutal treatment Centrelink’s autonomous debt recovery software has been handing out to welfare recipients who ‘it’ believes have been rorting the system. Yep, you heard right, it’s a completely automated compliance operation (well, at least the issuing part). Continue Reading…

Donald Trump

Image source: AP/LM Otero

A Trump presidency in the wings, who’d have thought! And what a total shock it was to all those pollsters, commentators and apparatchiks who are now trying to explain why they got it so wrong. All of which is a textbook example of what students of risk theory call a Black Swan event. Continue Reading…

Accidents of catastrophic potential pose a particular challenge to classical utilitarian theories of managing risk. A reader of this blog might be aware of how the possibility of irreversible catastrophic outcomes (i.e. non-ergodicity) undermines a key assumption on which classical risk assessment is based. But what to do about it? Well, one thing we can practically do is ensure that when we assess risk we take into account the irreversible (non-ergodic) nature of such catastrophes, and there are good reasons to do so: the law does not look kindly on organisations (or people) who make decisions about risk of death purely on the basis of frequency gambling.
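The ensemble-versus-time distinction at the heart of non-ergodicity can be shown with the standard multiplicative coin-toss illustration from the ergodicity literature (the +50%/−40% payoffs are the textbook example, not figures from this post):

```python
def ensemble_average(p_win=0.5, up=1.5, down=0.6):
    """Expected one-round wealth multiplier, averaged across many parallel players."""
    return p_win * up + (1 - p_win) * down

def time_average_growth(up=1.5, down=0.6):
    """Per-round growth rate a single player experiences over a long run
    (geometric mean of the multipliers for a fair coin)."""
    return (up * down) ** 0.5

# The ensemble average says 'take the bet'; the time average says almost
# every individual trajectory shrinks towards ruin. Frequency gambling
# over irreversible losses relies on the wrong average.
print(ensemble_average())     # 1.05
print(time_average_growth())  # ~0.9487
```

The expectation across players is favourable (1.05 per round) while any single player’s wealth decays over time, and once an outcome is irreversible (ruin, death) there is no rerunning the gamble to realise the ensemble average.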

A while ago I put together a classical risk matrix (1) that treated risk in accordance with De Moivre’s formulation, and I’ve since modified this matrix to explicitly address non-ergodicity. The modification is to the extreme (catastrophic) severity column, where I’ve shifted the boundary of unacceptable risk downwards to reflect that the classical iso-risk contour under-estimates the risk posed by catastrophic, irreversible outcomes. The matrix now also imposes claim limits on risk where a SPOF may exist that could result in a catastrophic loss (2). We end up with something that looks a bit like the matrix below (3).

Modified risk matrix

From a decision making perspective you’ll note that not only is the threshold for unacceptable risk reduced, but for catastrophic severity (one or more deaths) there is no longer an ‘acceptable’ threshold. This is an important consideration, reflecting as it does the law’s position that you cannot gamble away your duty of care, e.g. justify not taking an action purely on the basis of a risk threshold (4). The final outcome of this work, along with revised likelihood and severity definitions, can be found in hazard matrix V1.1 (5). I’m still thinking about how you might introduce more consideration of epistemic and ontological risks into the matrix; it’s a work in progress.

Notes

1. Mainly to provide a canonical example of what a well-constructed matrix should look like, as there are an awful lot of bad ones floating around.

2. You have to either eliminate the SPOF or reduce the severity. There’s an implied treatment of epistemic uncertainty in such a claim limit that I find appealing.

3. The star represents a calibration point that’s used when soliciting subjective assessments of likelihood from SMEs.

4. By the way, you’re not going to find these sorts of considerations in ISO 31000.

5. Important note: like all risk matrices it needs to be calibrated to the actual circumstances and risk appetite of the organisation. No warranty given and YMMV.
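The decision logic described above can be sketched in code. The band placements below are illustrative assumptions for a generic 5×4 matrix, not the published hazard matrix V1.1; what the sketch does take from the text is the structure: the unacceptable band extends further down the catastrophic column, that column has no ‘acceptable’ cell, and a SPOF that could cause a catastrophic loss trips the claim limit regardless of likelihood.

```python
LIKELIHOOD = ["frequent", "probable", "occasional", "remote", "improbable"]
SEVERITY = ["minor", "major", "severe", "catastrophic"]

# U = unacceptable, T = tolerable (reduce so far as reasonably practicable),
# A = acceptable. Rows: likelihood, most to least likely; columns: severity.
# In the catastrophic column the U band sits one row lower than a classical
# iso-risk contour would place it, and there is no A cell at all.
MODIFIED = [
    "TUUU",  # frequent
    "TTUU",  # probable
    "ATUU",  # occasional
    "AATU",  # remote
    "AAAT",  # improbable: a catastrophic outcome is still only 'tolerable'
]

def assess(likelihood, severity, spof=False):
    """Risk decision with a claim limit: a single point of failure that can
    cause a catastrophic loss is unacceptable regardless of likelihood;
    you must eliminate the SPOF or reduce the severity."""
    if spof and severity == "catastrophic":
        return "U"
    return MODIFIED[LIKELIHOOD.index(likelihood)][SEVERITY.index(severity)]

print(assess("improbable", "catastrophic"))             # 'T', never 'A'
print(assess("improbable", "catastrophic", spof=True))  # 'U' (claim limit)
```

The point of encoding it this way is that the claim limit is a rule layered over the matrix lookup, not another cell value, which matches the idea that no likelihood estimate can buy back an unmitigated catastrophic SPOF.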

Anna Johnson on boycotting the census

Risk spectrum redux (image)

A short article on (you guessed it) risk, uncertainty and unpleasant surprises for the 25th anniversary issue of the UK SCS Club’s newsletter, in which I introduce a unified theory of risk management that brings together aleatory, epistemic and ontological risk management, and formalises the Rumsfeld four-quadrant risk model which I’ve used for a while as a teaching aid.

My thanks once again to Felix Redmill for the opportunity to contribute.  🙂

Monkey-typing

Safety cases and that room full of monkeys

Back in 1943, the French mathematician Émile Borel published a book titled Les probabilités et la vie (Probabilities and Life), in which he stated what has come to be called Borel’s law, which can be paraphrased as: “Events with a sufficiently small probability never occur.” Continue Reading…