One of the tenets of safety engineering is that simple systems are better.
Many practical reasons are advanced to justify this assertion, but I’ve always wondered what theoretical justification, if any, there was for such a position. The philosopher Karl Popper provides just such a justification in his discussion of scientific theories and why we should prefer simplicity in that context. To summarise Popper’s argument, simpler theories contain more empirical content and are more amenable to testing. That is, a simpler theory applies to more cases and therefore provides more opportunity for falsification.
Simple statements, if knowledge is our object, are to be prized more highly than less simple ones because they tell us more; because their empirical content is greater; and because they are better testable.
Karl Popper, The Logic of Scientific Discovery (1959)
So if we consider a design hypothesis, a simpler one is more inherently falsifiable and therefore to be preferred, because we can attempt to disprove it more rigorously. Of course this turns the usual idea of why we test (especially for safety) on its head: according to Popper, we gather empirical evidence in an attempt to falsify our theory, not to prove it.
Popper’s argument for why simplicity is to be preferred in science also provides a justification for why we should consider more complex systems to contain more risk. In essence, the less testable the design theory, the less our empirical knowledge of a system’s safety properties, the more likely we are to accept a system that is actually unsafe, and the greater the epistemic/ontological risk.
Author’s Note – 1 Aug 2012: In this and other posts I’ve been somewhat loosely using the term ‘epistemic’ to denote both epistemic and ontological uncertainty. So I’ve revised this post to more clearly identify ontological as well as epistemic uncertainty.