Or how do we measure the unknown?
The problem is that as our understanding and control of known risks increases, the remaining risk in any system becomes increasingly dominated by the ‘unknown’. The more integrity we demand of high integrity systems, the more we end up having to deal with residual risks that are unknown and unknowable. Well, at least they are the day before the accident. What we really need is a way to measure, express and reason about deep uncertainty, and by that I don’t mean tools like Pascalian calculus or Bayesian prior belief structures, but a way to measure and judge ontological uncertainty.
Even if we can’t measure ontological uncertainty directly, perhaps there are indirect measures? Perhaps we can infer something from the Platonic shadows it casts on the wall, so to speak. Nassim Taleb would certainly say no; the unknowability of such events is, after all, the central thesis of his Ludic Fallacy. But I still think it’s worth thinking about, because while he might be right, he may also be wrong.
*With apologies to Nassim Taleb.