All you ever thought you knew about risk is wrong…


So what do gambling, thermodynamics and risk all have in common?

As a thought experiment let’s say we’re building a nuclear power station. Actually, let’s make that a number of power stations in a single industrial park. The benefit is very, very cheap power, so the social worth of the project is well understood and defensible.

The downside is that a bad reactor accident is possible and could kill a lot of people (say a hundred thousand), but that’s OK because we’ve computed the likelihood as being really, really low. And it’s accurately computed; no arguments about uncertainty here, so we can’t hide behind that. If we do the numbers and the risk (where Risk = Probability × Severity) is acceptably low, we should proceed, yes?

Right, thought you’d agree.

So, as it turns out, an even worse scenario comes to light, one which could kill a million people, but the probability is an order of magnitude lower as well. Obviously the risk is the same and should still be acceptable.

Do you still agree?

OK, so you’re still with me. Unfortunately some further analysis discloses an even worse accident scenario; the good news is that while it would kill ten million people or so, the likelihood is ten times lower again. So you, being a rational risk manager, understand that the risk is the same and the answer is still to go ahead, right?
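
Under the Risk = Probability × Severity formula the three scenarios really are indistinguishable. A minimal sketch, where the severities follow the text and the base probability is a hypothetical number chosen for illustration:

```python
# Expected-loss view of risk: Risk = Probability x Severity.
# Severities are the text's figures; the base probability is hypothetical.
scenarios = [
    (1e-7, 100_000),      # bad accident
    (1e-8, 1_000_000),    # worse accident, ten times less likely
    (1e-9, 10_000_000),   # worst accident, ten times less likely again
]

risks = [p * severity for p, severity in scenarios]
for (p, severity), risk in zip(scenarios, risks):
    print(f"p = {p:.0e}, deaths = {severity:>10,} -> risk = {risk:.2f}")

# All three 'risks' come out identical, yet the scenarios
# feel very different. That intuition is the subject of this post.
```

On this arithmetic a rational risk manager should be indifferent between the three, which is exactly the conclusion the rest of the post attacks.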

No!? What do you mean that you’re not comfortable with that? Don’t you understand De Moivre’s theorem? Are you into making irrational decisions or what?

I understand we’re talking about people’s lives, and that’s a difficult decision to make, so let’s substitute dollars for people. Instead of a hundred thousand lives, make it a billion dollars of insurance payout. For the next scenario it’s ten billion, then a hundred billion…

Willing to risk that hundred billion? I thought not.

What we’re talking about here is a variant of the St Petersburg paradox. Originally posed by Nicolas Bernoulli and famously analysed by his cousin Daniel Bernoulli (1738), the paradox showed that for a gambling game constructed to have an infinite expected return, people who should logically be willing to pay any amount to enter actually cap their entry bids at a finite value ($15–20 in some scenarios).
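
The game is easy to simulate. In the standard formulation the pot starts at $2 and doubles on every head, paying out at the first tail, so the expected value is 1 + 1 + 1 + … = ∞. The sketch below (seed chosen arbitrarily) shows that sample averages nevertheless stay stubbornly modest, growing only logarithmically with the number of games played:

```python
import random

def st_petersburg_payout(rng):
    """Play one round: the pot starts at $2 and doubles on each
    head, paying out when the first tail appears."""
    pot = 2
    while rng.random() < 0.5:  # heads: double the pot and flip again
        pot *= 2
    return pot

rng = random.Random(42)
payouts = [st_petersburg_payout(rng) for _ in range(100_000)]
mean = sum(payouts) / len(payouts)

print(f"Sample mean over {len(payouts):,} games: ${mean:.2f}")
# Despite the infinite expected value, the running average never
# 'sees' enough of the rare huge payouts to diverge in any finite run.
```

A punter deciding what to pay for entry lives inside one such finite run, not inside the infinite expectation.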

Clearly the people in the original study were crazy folk; I mean, infinite expected value? What, don’t they understand probability theory and risk? Bernoulli came up with a utility theory answer to the paradox, which has its own problems, but the actual reason people are unwilling to buy into Bernoulli’s game, or the scenario above, is a lot more fundamental. In fact it goes right to the heart of how we define probability (Peters 2011).

Let’s start with a little bit of history and a problem: how do we assess the probability of events that haven’t happened yet? The answer is that we rely on a little bit of philosophical legerdemain in which we imagine an ensemble of parallel worlds in which different outcomes can occur. We divide the number of worlds in which ‘X’ occurs by the total and, voilà, we have the probability of the future event ‘X’.

Now this sleight of hand actually works when we’re looking at the behaviour of certain systems, for example the behaviour of gases, or that of insurance markets. The physicist Boltzmann coined the term ergodic for such systems. The basic definition of an ergodic system is that its ensemble statistics (across the many) and its time series statistics (along a single timeline) are the same.

Think about tossing a coin: if I make a million coin tosses and compute the frequency of heads and tails, I’ll get the same answer as if a million people each tossed a coin once. In this case the coin toss scenario is ergodic, and the statistic I get from one is applicable to the other. All good, right? Well, wrong… And not just slightly, ahem, ‘we’ll adjust that a little’ wrong, but hugely, scarily, conceptually wrong.
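
The coin toss case is easy to check numerically. A quick sketch (seeded so the run is repeatable): a million tosses by one person versus one toss each by a million people give essentially the same frequency of heads.

```python
import random

rng = random.Random(0)
n = 1_000_000

# Time average: one person tosses a coin a million times.
time_avg = sum(rng.random() < 0.5 for _ in range(n)) / n

# Ensemble average: a million people each toss a coin once.
ensemble_avg = sum(rng.random() < 0.5 for _ in range(n)) / n

print(f"time average = {time_avg:.4f}, ensemble average = {ensemble_avg:.4f}")
# Both converge on 0.5: for coin tossing the two statistics
# agree, which is what makes the system ergodic.
```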

Here’s the problem: if we don’t actually engage in multiple games at once, then the ensemble average we’ve calculated only has relevance if it’s identical to the time average. In the power plant example we don’t operate a million reactors for one year; instead we operate one (or a couple) of reactors for twenty to thirty years, in a time series. The question is whether the two statistics are really equivalent. If the operation of our single reactor five or twenty years down the track depends on the intervening years, then clearly what happens in year one matters to year two, or to year twenty for that matter. On the other hand, all those ‘parallel world’ reactors are independent of each other, which means the ensemble statistic is going to look ‘better’ than the time series statistic, maliciously so. In the case of our reactor scenario an ensemble-based average is simply not an appropriate statistic (Peters 2009).

Let’s look at it from a practical perspective. Say I operate an insurance company: most of the time people don’t claim on my policies and I bank the money, but sometimes a few people do. Because all the insurance premiums come back to my central fund, the people who need to claim can access a shared pool of resources. From this perspective the ensemble average works as a statistic and we can treat the system as ergodic. But say we are operating our reactors above, and after five years of operation there’s a massive earthquake and subsequent tsunami that causes our worst case accident. Can I go and live in those parallel worlds? Answer: no. In this case the system is determinedly non-ergodic, reflecting the hard irreversibility of time.
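
Peters’ standard illustration of the gap between the two statistics is a multiplicative gamble: each round your wealth grows 50% on heads or shrinks 40% on tails. The ensemble average grows 5% per round, but the time-average growth factor along a single history is √(1.5 × 0.6) ≈ 0.95, so almost every individual trajectory decays. A sketch (the gamble parameters are the ones Peters uses in his examples, not figures from this post):

```python
# Multiplicative gamble: wealth -> 1.5x on heads, 0.6x on tails,
# with probability 0.5 each.
up, down = 1.5, 0.6

# Ensemble ('parallel worlds') average growth factor per round:
ensemble_factor = 0.5 * up + 0.5 * down   # 1.05: looks like a winner

# Time-average growth factor per round, i.e. the geometric mean
# experienced along a single timeline:
time_factor = (up * down) ** 0.5          # ~0.949: shrinks in practice

print(f"ensemble factor = {ensemble_factor}, time factor = {time_factor:.4f}")
# The ensemble statistic says play; the one timeline you actually
# live through says walk away. The gamble is non-ergodic.
```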

Does this start to concern you?

Well it ought to, because the risk theory that forms the basis of modern probabilistic risk assessment is fundamentally based on an assumption of ergodicity. In other words, it’s based on an assumption that somehow, magically, we can get in contact with all those parallel worlds, and that they offset the one in which we’ve just had a very, very bad day indeed.

So now let’s consider all the probabilistic risk assessments that have been carried out for high consequence systems and ask ourselves whether we have badly and critically underestimated the risks. The alternative, time-based view provides a more rational framework in which to evaluate risk, because it establishes an upper bound on the risk that we are willing to run in this, the only universe that we have access to (see for example Kelly’s (1956) criterion). The problem, of course, is that we just haven’t bothered to apply it. Interestingly, the principle of ergodicity and its effect upon risk assessment provides another justification for a basic risk categorisation principle that I discussed in a previous post.
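
The Kelly (1956) criterion is one such time-based bound: for a repeated bet it picks the stake that maximises the time-average (logarithmic) growth of your bankroll, rather than the ensemble-average return. A sketch, with illustrative odds that are not drawn from the post:

```python
import math

def kelly_fraction(p, b):
    """Kelly criterion for a bet won with probability p at odds b-to-1:
    f* = p - (1 - p) / b, the stake maximising expected log growth."""
    return p - (1 - p) / b

# Illustrative even-money bet with a 55% chance of winning:
p, b = 0.55, 1.0
f = kelly_fraction(p, b)   # 0.10: stake 10% of the bankroll per bet

# Expected log growth per bet at the Kelly stake (positive -> sustainable):
growth = p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

print(f"Kelly fraction = {f:.2f}, log growth per bet = {growth:.5f}")
# Stake more than f* and the time-average growth falls; stake
# everything and a single loss wipes the slate, which is the point.
```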

The bottom line? Well, unless we have a handy time machine we should steer clear of risk assessments based on an assumption of ergodicity when the risks we’re evaluating are slate wipers…


1. Bernoulli, D. (1738), Exposition of a New Theory on the Measurement of Risk, translated by Dr. Louise Sommer (1954), Econometrica, 22 (1): 22–36.

2. Peters, O. (2009), On Time and Risk, Santa Fe Institute Bulletin, pp. 36–41.

3. Peters, O. (2011), The Time Resolution of the St Petersburg Paradox, Phil. Trans. R. Soc. A, 369: 4913–4931, doi:10.1098/rsta.2011.0065.

4. Kelly, J.L., Jr. (1956), A New Interpretation of Information Rate, Bell System Technical Journal, 35: 917–926.
