Probability is not corroboration, and why it matters
Prior to the fourth assessment report, the IPCC issued a set of lead-author guidance notes on how to describe uncertainty (IPCC 2005). In them the IPCC laid out a methodology for dealing with various classes of uncertainty.
Unfortunately the IPCC guidance falls into a fatal trap: it assigns probabilistic values (levels of confidence) to judgments about the correctness of a theory or an element of a theory (evidence, a premise, and so on). This is, in effect, an expert's judgement of the acceptability of that theory. Such an assignment violates the doctrine, first proposed by Popper (Popper 1959), that a degree of acceptability cannot be a probability.
An example illustrates the point. One can always report the results of testing a theory as a statement of the theory's degree of corroboration. But such an appraisal can never take the form of a probability statement, because a probability does not identify the severity of the tests or the fashion in which the theory passed them (Popper 1959). The reason is that the empirical content of a theory establishes its testability and degree of corroboration. So the IPCC's use of 'pseudo' probability figures in this context simply confuses the issue. As a 'by the way', their use also introduces logical paradoxes, which is another reason they should not be used. From a practical perspective, expressing such assessments as probabilities also allows climate change denialists to erect specious 'gambling' style arguments (1).
So, what to do next?
As a piece of free advice to the IPCC: the guidance notes need to be revised to remove the confusion introduced by using probability as a metric for corroboration. In a word, stop; it just doesn't work. To further reduce this confusion, the guidance notes should also be clearly structured into two parts: the first dealing with how to express the degree to which a theory is corroborated, based upon the tests performed and the results achieved; the second dealing with how uncertainty (for example, its epistemic and aleatory forms), together with qualitative and quantitative assessments thereof, is integrated into probabilistic models and analyses (2).
1. My polite term for such folk 🙂
2. For example, we could model sea level changes as a stochastic process with a random (aleatory) component, while also including our uncertainty about the distribution parameters (epistemic).
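The separation in footnote (2) can be sketched as a two-level Monte Carlo simulation: an outer loop samples the uncertain model parameters (epistemic), and an inner loop samples the inherent year-to-year variability given those parameters (aleatory). All distributions and numbers below are illustrative assumptions, not real climate data.

```python
import random

random.seed(42)

def simulate_rise(years, n_outer=200, n_inner=100):
    """Two-level Monte Carlo for a toy sea-level model.

    Outer loop: epistemic uncertainty (we don't know the true parameters).
    Inner loop: aleatory uncertainty (inherent randomness, parameters fixed).
    """
    totals = []
    for _ in range(n_outer):
        # Epistemic: draw the unknown trend and volatility from
        # (hypothetical) prior distributions.
        mean_rise = random.gauss(3.0, 0.5)     # mm/year, assumed prior
        volatility = random.uniform(1.0, 2.0)  # mm/year, assumed prior
        for _ in range(n_inner):
            # Aleatory: year-to-year variation, given those parameters.
            total = sum(random.gauss(mean_rise, volatility)
                        for _ in range(years))
            totals.append(total)
    return totals

samples = simulate_rise(years=50)
mean = sum(samples) / len(samples)
print(f"Mean 50-year rise: {mean:.1f} mm over {len(samples)} samples")
```

Collapsing the two loops into one would still produce a distribution of outcomes, but it would no longer let us ask how much of the spread comes from ignorance of the parameters (reducible with more data) versus inherent randomness (irreducible), which is exactly the distinction the revised guidance notes would need to preserve.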
IPCC, Climate Change 2007: Synthesis Report, IPCC Plenary XXVII, Valencia, Spain, 12-17 November 2007.
IPCC, Guidance Notes for Lead Authors of the IPCC Fourth Assessment Report on Addressing Uncertainties, June 2005.
Manning, M.R., M. Petit, D. Easterling, J. Murphy, A. Patwardhan, H-H. Rogner, R. Swart, and G. Yohe (Eds), IPCC Workshop on Describing Scientific Uncertainties in Climate Change to Support Analysis of Risk and of Options: Workshop report. Intergovernmental Panel on Climate Change (IPCC), Geneva, 2004.
Popper, K., The Logic of Scientific Discovery, 1959.