Counting black swans (Pt II)

09/12/2015


One of the problems we face in estimating risk is that as our uncertainty increases, our ability to express it in a precise fashion (e.g. numerically) weakens, to the point where for deep uncertainty (1) we cannot, by definition, make a direct estimate of risk in the classical sense.

As I pointed out in a previous post, what we need is a way to measure, express and reason about such deep uncertainties. Here I don’t mean tools like Bayesian belief structures, but a way to measure and judge deep epistemic/ontological uncertainty. If we fail to consider such sources of risk then our risk assessment is incomplete, and as Mark Twain once remarked it’s the things that you know that just ain’t so that are important (2).

So, if we can’t directly measure the likelihood of being surprised, perhaps there are indirect measures of deep uncertainty? In other words, perhaps we can find a way to measure our degree of belief that we’ll find a black swan in the bathtub tomorrow morning. A recent paper by Parker and Risbey (2014) provides a set of criteria (3) that, when present either independently or in combination, tend to increase our degree of belief in the presence of such events (a rough scoring sketch follows the list):

  1. System complexity (4)
  2. Limited knowledge (5)
  3. Overconfidence (6)
  4. Past instances of genuine surprises (7)
  5. Novel conditions, or working outside your experience base (8)
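
To make that a little more concrete, here’s a minimal sketch in Python. The equal weighting and the scoring scheme are my own illustrative assumptions, not anything Parker and Risbey propose; it simply rolls the five criteria up into a crude indicator of how exposed to surprise we should judge ourselves to be.

```python
# Illustrative only: a crude checklist score for susceptibility to surprise.
# The criteria follow the list above; the equal weights are an assumption.

CRITERIA = [
    "system complexity",
    "limited knowledge",
    "overconfidence",
    "past instances of genuine surprise",
    "novel conditions / outside experience base",
]

def surprise_susceptibility(present):
    """Fraction of the criteria judged to be present (0.0 to 1.0)."""
    return sum(1 for c in CRITERIA if c in present) / len(CRITERIA)

# Example: a novel system with known knowledge gaps and a history of surprises.
score = surprise_susceptibility({
    "system complexity",
    "limited knowledge",
    "past instances of genuine surprise",
})
print(f"susceptibility to surprise: {score:.2f}")  # 0.60
```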

The degree and type of uncertainty will affect the decision making strategy we adopt. If risk can be precisely defined then classical utility based decision making might be applied. If, on the other hand, epistemic uncertainty is found to dominate then we could apply a robust decision strategy (9). While if we identify deep uncertainties in our understanding which may hide unpleasant surprises (i.e. ontological risk) we could adopt a precautionary (10) strategy.
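
Read as a decision procedure, the paragraph above might be sketched as follows; the three-way categorisation and the wording of each strategy are illustrative assumptions rather than a formal taxonomy.

```python
# A rough mapping from the dominant character of the uncertainty to a
# decision-making strategy, per the paragraph above. Category names are
# assumptions for illustration.

def choose_strategy(uncertainty):
    strategies = {
        "aleatory":    "classical expected-utility decision making",
        "epistemic":   "robust decision making (trade optimality for insensitivity to assumptions)",
        "ontological": "precautionary strategy (e.g. possibilistic rather than probabilistic design)",
    }
    return strategies.get(uncertainty, "unrecognised category - reassess the uncertainty first")

print(choose_strategy("epistemic"))
```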

References

Parker, W.S., Risbey, J.S., False precision, surprise and improved uncertainty assessment, Phil. Trans. R. Soc. A, http://dx.doi.org/10.1098/rsta.2014.0453, 2014.

Notes

1. See epistemic, ontological and aleatory risk for definitions of each risk category.

2. An allied problem is that modern discourse on risk is dominated by the paradigm of Pascalian logic, with its emphasis upon hard numerical (set theoretic) probabilities. Unfortunately, as a language of risk it is incapable of expressing the necessarily more abstract concepts of deeper uncertainty. As George Orwell pointed out, if your language can’t express a concept then you’ll have difficulty discussing it particularly well.

3. Their paper also provides the glimmering of a lexicon for what a suitable language of deep uncertainty might look like.

4. Complexity is here taken to be an observer centric property: complexity makes it more difficult for a specific observer to understand, describe and subsequently predict a system’s behaviour. Note that, depending on definitions, complexity may include complicatedness (many interacting parts and processes), emergent behaviour, deep hierarchies, and non-linear and stochastic behaviour.

5. The classic example of limited knowledge is the Tacoma Narrows disaster, where the designers were completely unaware of the torsional flutter mode of excitation that doomed the bridge.

6. The surprise index is a standard measure of overconfidence. The index measures the percentage of true values of a parameter that lie outside an assessor’s 98% confidence interval.
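
As a worked illustration of the index (my own sketch, with invented intervals and values): a well calibrated assessor should score about 2%; a much larger value suggests overconfidence.

```python
# Surprise index: percentage of true values falling outside an assessor's
# stated 98% confidence intervals. Data below are invented for illustration.

def surprise_index(intervals, true_values):
    """intervals: list of (low, high) 98% CI bounds; true_values: matching list."""
    misses = sum(1 for (lo, hi), x in zip(intervals, true_values)
                 if not (lo <= x <= hi))
    return 100.0 * misses / len(true_values)

intervals   = [(9.0, 11.0), (4.5, 5.5), (95, 105), (0.8, 1.2), (45, 55)]
true_values = [10.2,        6.1,        99,         1.5,        50]

print(f"surprise index: {surprise_index(intervals, true_values):.0f}%")  # 40%
```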

7. Which provides a neat theoretical justification for the traditional inclusion of tracking and reviewing accident and near miss data as part of any safety program.

8. See the Challenger launch decision for example.

9. For example, trading some optimal performance for less sensitivity to assumptions, satisficing over a wide range of futures, and pursuing corrigibility. Interestingly, a study by Lempert and Collins (2007) found that all of these decision criteria picked out the same strategies as most robust.
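
A minimal sketch of the satisficing idea; the options, futures, payoffs and threshold are invented, and this is not Lempert and Collins’ method, just the flavour of choosing for robustness over optimality.

```python
# Pick the option that meets a "good enough" performance threshold across
# the largest number of plausible futures, rather than the one that is
# optimal in the expected case. Payoffs are invented for illustration.

payoffs = {
    # option:     payoff in futures A, B, C, D
    "optimised": [10,  9,  2,  1],   # brilliant if assumptions hold, brittle otherwise
    "robust":    [ 7,  7,  6,  6],   # gives up peak performance for insensitivity
}
THRESHOLD = 5  # satisficing level, an assumption

def robustness(option):
    return sum(1 for p in payoffs[option] if p >= THRESHOLD)

best = max(payoffs, key=robustness)
print(best, {o: robustness(o) for o in payoffs})  # robust {'optimised': 2, 'robust': 4}
```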

10. For example, adopting a possibilistic design approach rather than a probabilistic one. See also further discussion here.
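
As a sketch of the difference (with invented numbers): rather than estimating a probability of failure from assumed distributions, a possibilistic check bounds load and capacity with intervals and asks whether the worst possible combination still leaves a margin.

```python
# Possibilistic (interval / worst-case) check, as opposed to a probabilistic one.
# Values are invented for illustration.

load_interval     = (80.0, 120.0)   # all loads we judge possible
capacity_interval = (130.0, 160.0)  # all capacities we judge possible

def possibilistic_ok(load, capacity):
    """Pass only if the worst possible load stays below the worst possible capacity."""
    worst_load, worst_capacity = load[1], capacity[0]
    return worst_load < worst_capacity

print(possibilistic_ok(load_interval, capacity_interval))  # True: 120 < 130
```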

3 responses to Counting black swans (Pt II)

  1. 

    Fewer than 300 words, one embedded link, two references, and ten footnotes, on quantifying profoundly conjectural events. This just begs to be a mind map.

  2. 

    The Parker and Risbey article seems to be under the control of the Fellows of the Royal Society. Is there any public access to a copy that I can read?

    • 
      Matthew Squair 14/12/2015 at 6:32 am

      I had a scan around and I’m afraid not online. But you should be able to grab a copy through your local university library. If they don’t carry the journal they can easily get it on interlibrary loan.
