Why safety integrity levels are (still) pseudo-science

14/06/2009

Buncefield (Image source: Royal Air Support Unit)

The use of integrity levels to achieve ultra-high levels of safety has become an ‘accepted wisdom’ in the safety community. Yet I remain unconvinced as to their efficacy, and in this post I argue that integrity levels are not scientific in any real sense of that term.

Introduction

The basis of science is empirical observation and inductive reasoning. For example, we may observe that swans are white and so form a theory that all swans must be white. But, as Hume pointed out, inductive reasoning is inherently limited because the premises of an inductive argument support, but cannot logically entail, the conclusion. In our example a single black swan is sufficient to refute the theory, despite a thousand white swans having been observed…

This does not mean that a theory cannot be useful (that is, it works), but just because a theory has worked a number of times does not mean that it has been proven true. For example, we can build ten bridges that stay up (our theory is useful), but there is nothing to say that the eleventh will not fall down due to effects not considered in the existing theory of how to design bridges.

As any test of a theory cannot prove its truth, only disprove it, when we say a theory is testable we are not saying that we can prove it, only that there exists an opportunity to disprove or falsify it. This concept of disproof is very much akin to the legal principle of finding a person ‘not guilty’, rather than ‘innocent’, of a charge. That leaves us with a problem as to how science really works, if we presume that it is science’s job to prove things.

The response of the philosopher Karl Popper was to accept this: because we can never prove the truth of a scientific theory, science must advance by falsifying existing theories and replacing them with theories that better explain the facts (Popper 1968).

From Popper’s perspective a good theory is one that offers us ample opportunity to falsify it. Conversely, a theory which is not refutable by any conceivable means is non-scientific. Irrefutability is in fact not a virtue of a theory (as people often think) but a vice (Popper 1968). To achieve falsifiability, according to Popper, a theory therefore needs to be:

  1. precisely stated (i.e. unambiguous),
  2. wide ranging, and
  3. testable in practical terms.

As a corollary, if a theory does not satisfy these criteria it should not be considered scientific (Popper 1968). For example, we could develop a design hypothesis for a bridge spanning the Strait of Gibraltar using as-yet-undeveloped hyper-strength materials, but as we have no practical way to test such a hypothesis it should not be considered scientific.

Another way to look at it is that a ‘good’ scientific theory is a prohibition: it forbids certain things from happening. The more a theory forbids, the better it is, because it gives us greater scope to falsify it. For example, if we build a very slender bridge using deflection theory (we do X), the design hypothesis forbids the bridge from falling down under specific deck loads (if X then not Y), which is eminently testable.

Confirmations should only count if they are the result of risky predictions. That is, if, based on our original theory or understanding, we would have expected an event that is incompatible with the new theory, an event that would have refuted it had it occurred. If the new theory predicts much the same results as the old theory then there is not much at risk. In our bridge example, if our new theory of bridge construction predicts that such a bridge will not fail under a known load, whereas the older theory based on traditional techniques predicts that it will, then there is a clear, and testable, difference.

From an engineering perspective this means that the confirmation of a theory comes when it allows engineers to do something beyond the current state of the art. For example, we could use a new bridge deflection theory to design a lighter and more slender bridge span for given wind loads. If the bridge stands under those loads then the results count as confirmation of the new theory, as the old theory would have predicted failure.

Confirming evidence should not count except when it is the result of a genuine test of the theory, that is, a serious but unsuccessful attempt to falsify it; only then do the results corroborate the theory (Popper 1968). Essentially, our theories need to have some ‘skin in the game’. Again using the bridge design example, if both the original and the new design theory predict the survival of a bridge, this does not represent a genuine test of the new theory.

The scientific theory of software Safety Integrity Levels (SILs)

As the concept of a safety integrity level is most entrenched within the software community I’ll stick with software for the moment, noting that the issues raised below are just as valid for safety integrity levels applied to hardware.

The theory of safety integrity levels for software can be expressed as follows:

  1. Software failures are ‘systematic’ (DEF STAN 00-55), that is, they result from systematic faults in the software specification, design or production processes,
  2. As such, software failures are not random in nature: given the correct set of inputs or environmental conditions the failure will always occur,
  3. The requirement for ultra-high reliability of safety functions (e.g. 10E-9 failures per hour) makes demonstrating such reliability by traditional reliability testing of software impossible,
  4. The use of specific development processes will deliver the required reliability by reducing the number of such faults,
  5. Based on an assessment of risk, an ‘integrity level’ is assigned to the software that performs such safety functions,
  6. This integrity level represents the required reliability of the safety function, and
  7. To achieve the integrity level, a defined set of processes is applied to the specification, design and production of the software.

But SILs are fundamentally not testable

Unfortunately, even the lowest target failure rates for safety functions (e.g. 10E-5 per hour) are already beyond practical verification (Littlewood & Strigini 1993), so we have no practical, independent and empirical way to demonstrate that applying a SIL (or any other posited technique) achieves the required reliability. We end up with a circular argument in which the only evidence that a specific SIL has been achieved is the evidence that the processes defining that SIL were carried out (McDermid 2001).
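To give a feel for why this is so, here is a back-of-the-envelope sketch (my own illustration, not a reproduction of Littlewood & Strigini’s analysis) of the failure-free operational testing needed to support a failure-rate claim, assuming an exponential failure model and a simple zero-failure statistical test:

```python
import math

def required_test_hours(failure_rate_per_hour: float, confidence: float = 0.95) -> float:
    """Failure-free test hours needed to claim the given failure rate at the
    given confidence under an exponential failure model: we need
    exp(-rate * T) <= (1 - confidence), i.e. T >= ln(1/(1 - confidence)) / rate."""
    return math.log(1.0 / (1.0 - confidence)) / failure_rate_per_hour

for rate in (1e-5, 1e-9):  # the 'lowest' and 'ultra-high' targets discussed above
    hours = required_test_hours(rate)
    print(f"{rate:.0e} per hour -> {hours:.1e} failure-free hours (~{hours / 8766:,.0f} years)")
```

Roughly 3 x 10E5 failure-free hours (decades of continuous testing) for the 10E-5 target, and about 3 x 10E9 hours for the 10E-9 target, before we even ask whether the test environment faithfully represents the operational one. The claim is, for all practical purposes, unverifiable by testing.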

SIL allocation is non-trivial

A number of different techniques can be used to allocate integrity level requirements, ranging from the consequence/autonomy models of DO-178B and MIL-STD-882 to the risk matrices of IEC 61508 (1). Because of these differences SIL allocation cannot be said to be a consistent, and therefore precisely defined, activity. This makes refutation of the theory difficult, as a failure can always be argued, after the fact, to be the result of incorrect allocation.
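As a purely hypothetical sketch of the problem (the tables below are invented for illustration and are not the calibrated tables of DO-178B, MIL-STD-882 or IEC 61508), consider the same hazard put through a consequence-only allocation scheme and a severity-versus-likelihood risk matrix:

```python
# Illustrative only: neither table reproduces any real standard's calibration.

CONSEQUENCE_ONLY = {           # consequence/autonomy style: severity alone decides
    "catastrophic": 4, "hazardous": 3, "major": 2, "minor": 1,
}

RISK_MATRIX = {                # risk-matrix style: severity and likelihood together
    ("catastrophic", "frequent"): 4, ("catastrophic", "occasional"): 3,
    ("catastrophic", "rare"): 2,
    ("hazardous", "frequent"): 3, ("hazardous", "occasional"): 2,
    ("hazardous", "rare"): 1,
}

hazard = {"severity": "catastrophic", "likelihood": "rare"}

print("Consequence-only scheme:", CONSEQUENCE_ONLY[hazard["severity"]])
print("Risk-matrix scheme:     ", RISK_MATRIX[(hazard["severity"], hazard["likelihood"])])
# Same hazard, two defensible schemes, two different integrity levels.
```

If two defensible allocation schemes can assign different integrity levels to the same hazard, then an after-the-fact claim that ‘the wrong SIL was allocated’ is always available, and the theory is insulated from refutation.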

SIL Activities are inconsistent from standard to standard

The many SIL-based standards vary widely in the methods invoked and the degree of tailoring that a project can apply. DO-178B defines a basic development process but focuses upon software product testing and inspection to assure safety. Other standards, such as DEF STAN 00-55, focus on the definition of safety requirements that are acquitted through evidence. Some standards, such as DEF AUST 5679, emphasise the use of formal methods to achieve the highest integrity levels, while others, such as IEC 61508, invoke a broad range of techniques to deliver a safety function at a required integrity level. There is, as a result, no single, consistent and therefore wide-ranging ‘theory of SILs’.

SIL activities are applied inconsistently

The majority of SIL standards allow a degree of tailoring of process to the specific project or company. While this is understandable given the range of projects and industry contexts, it results in an inherently inconsistent application of processes across projects. As an example from aviation, within that industry’s software community there has been a vigorous debate over the various methods of achieving the Modified Condition/Decision Coverage (MC/DC) criterion of DO-178B (Chilenski 2001). Because of this variability of application it is impossible to say with precision that a specific standard has been fully applied. This lack of precision in turn makes it difficult to argue, should an accident occur, that the standard itself failed, because it can always be argued after the fact that the failure was caused by a fault of application rather than an inherent fault in the process standard. This is what Popper calls a conventionalist twist, because it can be used to explain away inconvenient results.
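To make the coverage debate concrete, here is a minimal sketch of the unique-cause form of the criterion (in Python purely for illustration; DO-178B itself is language-agnostic): each condition in a decision must be shown, by a pair of tests differing only in that condition, to independently affect the decision’s outcome.

```python
from itertools import product

# Decision under test: three conditions combined as (a and b) or c.
def decision(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

# Candidate test set: n + 1 vectors for n = 3 conditions.
tests = [(True, True, False), (False, True, False),
         (True, False, False), (True, False, True)]

def satisfies_mcdc(test_set):
    """Unique-cause MC/DC: for every condition there must exist a pair of tests
    that differ only in that condition and produce different decision outcomes."""
    for i in range(3):
        independent = any(
            t1[i] != t2[i]
            and all(t1[j] == t2[j] for j in range(3) if j != i)
            and decision(*t1) != decision(*t2)
            for t1, t2 in product(test_set, repeat=2)
        )
        if not independent:
            return False
    return True

print(satisfies_mcdc(tests))      # True: these four tests achieve MC/DC
print(satisfies_mcdc(tests[:2]))  # False: enough for decision coverage, not MC/DC
```

The argument Chilenski documents is over variant forms of this pairing rule (unique-cause versus masking), which is exactly the kind of interpretive latitude described above.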

This problem of application is further exacerbated by the standardisation bodies expressing what they ask for as recommendations (IEC 61508) or guidance (DO-178B) rather than requirements, thereby allowing process variance without either justification or demonstration of equivalence.

SIL activities are ambiguous as to outcome

While the SIL standards are intended to deliver both intermediate and final products with low defect rates, the logical argument as to how each process requirement achieves or contributes to such an outcome is not so clear. The problem becomes worse as we move away from the proximal activities that directly affect the final delivered product and towards the distal activities of managing the process. For example, DO-178B Table A-1 requires the preparation of a plan for the software aspects of certification. While planning a process is certainly a ‘good thing’, the problem is that it is difficult to link the quality of overall planning to a specific and hazardous fault in a product. All that can be said about a plan is that it represents a planning activity and ensures that, if adhered to, subsequent efforts are carried out in a planned way and are auditable against the plan.

The product still looks the same

Having developed a software product to the SIL requirements, we then find that the end product behaves much like software that has not been developed to such a standard. In essence SILs make no risky predictions, given that, as noted above, the purported reliability of the software is not empirically testable. Even should a latent software fault exist, as long as the correct set of circumstances never arises in practice, the software will operate safely.
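As a trivial, hypothetical illustration of such a systematic fault, the routine below is entirely deterministic: whether it ever fails in service depends solely on whether the one triggering input ever turns up.

```python
def scaled_reading(sensor_value: int) -> float:
    """Hypothetical scaling routine with a latent systematic fault: it divides
    by (sensor_value - 1000), so an input of exactly 1000 always fails while
    every other input always succeeds."""
    return 5000.0 / (sensor_value - 1000)

print(scaled_reading(999))   # fine
print(scaled_reading(1001))  # fine
print(scaled_reading(1000))  # raises ZeroDivisionError, every single time
```

Years of in-service operation may simply never present the value 1000, in which case the software ‘operates safely’ regardless of how it was developed; no risky prediction distinguishes the SIL-developed product from any other.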

Conclusions

Given the problems identified above we must conclude that, however much SILs have become the accepted wisdom, they do not satisfy the requirements of a scientific theory. They may have a seductive simplicity, but they are, it seems, closer to astrology than to science or engineering. Unfortunately, while the software community continues to cling to such concepts, serious investigation into the real question of what constitutes safe software is stifled.

References

Chilenski, J. J. (2001), An Investigation of Three Forms of the Modified Condition Decision Coverage (MCDC) Criterion, FAA Tech Center Report DOT/FAA/AR-01/18.

Fowler, D. (2000), Application of IEC 61508 to Air Traffic Management and Similar Complex Critical Systems – Methods and Mythology, in Lessons in System Safety: Proceedings of the Eighth Safety-Critical Systems Symposium, Anderson, T. & Redmill, F. (eds.), pp 226–245, Southampton, UK, Springer-Verlag.

Littlewood, B. & Strigini, L. (1993), Validation of Ultra-High Dependability for Software-based Systems, Communications of the ACM, 36(11):69–80.

McDermid, J. A. & Pumfrey, D. J. (2001), Software Safety: Why is there no Consensus?, Proceedings of the International System Safety Conference (ISSC) 2001, Huntsville, System Safety Society.

Popper, K. R. (1968), Conjectures and Refutations, 3rd ed., Routledge.

Redmill, F. (2000), Safety Integrity Levels – Theory and Problems, in Lessons in System Safety: Proceedings of the Eighth Safety-Critical Systems Symposium, Anderson, T. & Redmill, F. (eds.), pp 1–20, Southampton, UK, Springer-Verlag.

Notes

1. There have been a multitude of qualitative and quantitative methods proposed for SIL assignment, so many that it sometimes seems that safety professionals take a perverse delight in propagating new techniques. Some of the more common include (with their source standard):

  • Consequence (control loss) (MISRA),
  • Software authority (MIL-STD-882),
  • Consequence (loss severity) (DO-178B),
  • Quantitative risk method (IEC 61508),
  • Risk graph and calibrated risk graph (IEC 61508/IEC 61511),
  • Hazardous event severity matrix (IEC 61508),
  • Hybrid consequence and risk matrix (DEF STAN 00-56),
  • Semi-quantitative method (IEC 61511),
  • Safety layer matrix method (IEC 61511), and
  • Layer of protection analysis (IEC 61511).
