One of the problems we face in estimating risk is that as our uncertainty increases, our ability to express it precisely (e.g. numerically) weakens, to the point where under deep uncertainty (1) we definitionally cannot make a direct estimate of risk in the classical sense.
As I pointed out in a previous post, what we need is a way to measure, express and reason about such deep uncertainties. Here I don’t mean tools like Bayesian belief structures, but a way to measure and judge deep epistemic/ontological uncertainty. If we fail to consider such sources of risk then our risk assessment is incomplete, and as Mark Twain once remarked, it’s the things that you know that just ain’t so that are important (2).
So, if we can’t directly measure the likelihood of being surprised, perhaps there are indirect measures of deep uncertainty? In other words, perhaps we have a way to measure our degree of belief that we’ll find a black swan in the bathtub tomorrow morning. A recent paper by Parker and Risbey (2014) provides a set of criteria (3) that, when present either independently or in combination, will tend to increase our degree of belief in the presence of such events:
- System complexity (4)
- Limited knowledge (5)
- Overconfidence (6)
- Past instances of genuine surprises (7)
- Novel conditions, or working outside your experience base (8)
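The criteria above lend themselves to a simple qualitative screen. The sketch below is a toy illustration (not from Parker and Risbey’s paper): each criterion becomes a yes/no flag, and the more flags present, the higher our degree of belief that deep uncertainty is in play. The rating thresholds are arbitrary assumptions.

```python
# Toy screening sketch: count how many of the criteria are present.
# This yields a coarse qualitative rating, not a probability.
CRITERIA = [
    "system complexity",
    "limited knowledge",
    "overconfidence",
    "past instances of genuine surprise",
    "novel conditions outside the experience base",
]

def surprise_screen(present: set) -> str:
    """Return a coarse qualitative rating from the number of criteria present."""
    n = sum(1 for c in CRITERIA if c in present)
    if n == 0:
        return "low"
    if n <= 2:
        return "moderate"
    return "high"

print(surprise_screen({"system complexity", "overconfidence", "limited knowledge"}))
```

Crude as it is, this is the sort of structured checklist that turns an unmeasurable (“how likely are we to be surprised?”) into a judgement we can at least make consistently.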
The degree and type of uncertainty will affect the decision-making strategy we adopt. If risk can be precisely defined then classical utility-based decision making might be applied. If on the other hand epistemic uncertainty is found to dominate, then we could apply a robust decision strategy (9). And if we have identified deep uncertainties in our understanding, ones which may hide unpleasant surprises (i.e. ontological risk), we could adopt a precautionary (10) strategy.
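To make the contrast concrete, here’s a minimal sketch of one common robust decision criterion, minimax regret, applied to a made-up payoff table (the strategies and numbers are illustrative assumptions, not from any of the cited papers). Where utility-based decision making needs probabilities over futures, minimax regret needs none: it picks the strategy whose worst-case regret across all futures is smallest.

```python
# Hypothetical payoff table: rows are strategies, columns are
# three possible futures. All numbers are invented for illustration.
payoffs = {
    "optimised":     [10, 9, 1],  # best if the expected future arrives
    "robust":        [7, 7, 6],   # trades peak performance for insensitivity
    "precautionary": [4, 4, 5],   # sacrifices more to guard against surprise
}

def minimax_regret(payoffs: dict) -> str:
    """Pick the strategy with the smallest worst-case regret across futures."""
    n = len(next(iter(payoffs.values())))
    # Best achievable payoff in each future, over all strategies.
    best = [max(p[j] for p in payoffs.values()) for j in range(n)]
    # Each strategy's worst regret (shortfall from the best) over futures.
    worst_regret = {
        s: max(best[j] - p[j] for j in range(n)) for s, p in payoffs.items()
    }
    return min(worst_regret, key=worst_regret.get)

print(minimax_regret(payoffs))  # "robust": worst regret 3, vs 5 for "optimised"
```

Note how the “optimised” strategy dominates in the futures we planned for but is punished in the one we didn’t, which is exactly the trade that footnote 9 describes.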
Lempert R.J., Collins M.T., Managing the risk of uncertain threshold responses: comparison of robust, optimum, and precautionary approaches. Risk Anal 2007, 27:1009–1026.
Parker, W.S., Risbey, J.S., False precision, surprise and improved uncertainty assessment, Phil. Trans. R. Soc. A, 2014. http://dx.doi.org/10.1098/rsta.2014.0453
1. See epistemic, ontological and aleatory risk for definitions of each risk category.
2. An allied problem is that modern discourse on risk is dominated by the paradigm of Pascalian logic, with its emphasis upon hard numerical (set theoretic) probabilities. Unfortunately, as a language of risk it is incapable of expressing the necessarily more abstract concepts of deeper uncertainty. As George Orwell pointed out, if your language can’t express a concept then you’ll have difficulty discussing it particularly well.
3. Their paper also provides the glimmering of a lexicon for what a suitable language of deep uncertainty might look like.
4. Complexity is here taken to be an observer-centric property: complexity makes it more difficult for a specific observer to understand, describe and subsequently predict a system’s behaviour. Note that, depending on definitions, complexity may include complicatedness (many interacting parts and processes), emergent behaviour, deep hierarchies, and non-linear and stochastic behaviour.
5. The classic example of limited knowledge is the Tacoma Narrows disaster, where the designers were completely unaware of the torsional flutter mode of excitation that doomed the bridge.
6. The surprise index is a standard measure of overconfidence: it measures what percentage of the true measured values of a parameter lie outside an assessor’s 98% confidence interval.
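The surprise index is simple enough to sketch directly. The function below computes it as described; the intervals and “true” values are made-up numbers for illustration. A well-calibrated assessor quoting 98% intervals should score around 2%; a much higher score signals overconfidence.

```python
# Surprise index: percentage of true values falling outside the
# assessor's stated intervals. Data below are invented for illustration.

def surprise_index(intervals, true_values):
    """Return the percentage of true values outside their stated intervals."""
    outside = sum(
        1 for (lo, hi), v in zip(intervals, true_values) if not (lo <= v <= hi)
    )
    return 100.0 * outside / len(true_values)

intervals = [(0, 10), (5, 9), (2, 3), (1, 8)]  # assessor's 98% CIs
true_values = [12, 7, 2.5, 9]                  # what actually happened
print(surprise_index(intervals, true_values))  # 50.0 -> badly overconfident
```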
7. Which provides a neat theoretical justification for the traditional inclusion of tracking and reviewing accident and near miss data as part of any safety program.
8. See the Challenger launch decision for example.
9. For example trading some optimal performance for less sensitivity to assumptions, satisficing over a wide range of futures, and pursuing corrigibility. Interestingly, a study by Lempert and Collins (2007) found that all of these decision criteria picked the same strategies as the most robust.