SFAIRP and the theory of risk

04/09/2014

Enshrined in Australia’s current workplace health and safety legislation is the principle of ‘So Far As Is Reasonably Practicable’. In essence SFAIRP requires you to eliminate risk, or reduce it to a negligible level, so far as is (surprise) reasonably practicable. While there’s been a lot of commentary on the increased requirements for diligence (read industry moaning and groaning) there’s been little or no consideration of the ‘theory of risk’ that backs this legislative principle, how it shapes the current legislation, or whether that’s for good or ill. So I thought I’d take a stab at it. 🙂

The definition of ‘reasonably practicable’ consists of a number of separate but supporting decision making criteria, defined by the Australian SWA guide as:

  • the likelihood of the hazard occurring and the degree of harm that might result (a measure of the risk),
  • what you could, or ought reasonably to, know about the hazard and ways of eliminating or reducing its risk (your state of knowledge),
  • the availability and suitability of ways to eliminate or minimise the risk, and
  • the cost associated with available ways of eliminating or minimising the risk and its proportionality to the risk reduction (the rule of gross disproportion, sketched below).
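
To make the interplay of these criteria a little more concrete, here’s a minimal sketch of the gross disproportion test named in the last criterion. It’s mine, not the SWA’s; the disproportion factor of three and the dollar figures are purely illustrative, and the ‘risk’ numbers are expressed as expected harm per year (likelihood multiplied by degree of harm, per the first criterion).

```python
# A crude, illustrative SFAIRP decision sketch. All figures are hypothetical.
from dataclasses import dataclass


@dataclass
class Control:
    name: str
    cost: float         # cost to implement and sustain the control ($)
    risk_before: float  # expected harm per year without the control ($-equivalent)
    risk_after: float   # expected harm per year with the control ($-equivalent)


def reasonably_practicable(control: Control, disproportion_factor: float = 3.0) -> bool:
    """A control is reasonably practicable unless its cost is grossly
    disproportionate to the risk reduction it delivers."""
    risk_reduction = control.risk_before - control.risk_after
    if risk_reduction <= 0:
        return False  # not a suitable control: it buys no reduction in risk
    return control.cost <= disproportion_factor * risk_reduction


guard = Control("interlocked machine guard", cost=20_000,
                risk_before=50_000, risk_after=5_000)
print(reasonably_practicable(guard))  # True: $20k against a $45k/yr reduction
```

Note that even this toy version has to assume the risk figures are known and trustworthy, which is exactly the assumption the rest of this post takes issue with.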

There are, unfortunately, some problems with this definition when we turn away from the boiler boots and hard hats brigade and towards more complex technological problems. Firstly, there is an implicit assumption that we know upfront what the risks are and can characterise them, but this is more difficult than it seems for complex systems. Indeed, for a new system it is extremely hard to characterise the likelihood of a hazard occurring because one has little operational data to rely upon, and the uncertainty in any estimate is proportionally that much greater. As a consequence it is difficult to establish what the risk actually is, and therefore whether what we are doing is both reasonable and practicable in the circumstances. Yet the legislation simply assumes that one can do just that, perpetuating the myth of risk as an objective, quasi-measurable quantity that we can characterise in the same way we might characterise the weight of a pumpkin. The recognised difficulty of assessing risk effectively may in fact be one of the strongest arguments for the rule of gross disproportion when it comes to evaluating the reasonableness of implementing a control for a risk.
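
To put some numbers on the sparse data problem, here’s a minimal sketch of how wide the credible interval on an estimated hazard rate is for a new system compared to a mature one. The example is mine, not from the guide or the legislation; it assumes a Poisson failure process with a Jeffreys prior, and the hours and failure counts are made up.

```python
# Illustrative only: credible intervals for a hazard rate (per operating hour)
# from observed failures and exposure, assuming a Poisson process and a
# Jeffreys Gamma(0.5) prior on the rate.
from scipy.stats import gamma


def rate_interval(failures: int, exposure_hours: float, cred: float = 0.90):
    """Return a (lower, upper) credible interval for the hazard rate."""
    posterior = gamma(a=failures + 0.5, scale=1.0 / exposure_hours)
    tail = (1.0 - cred) / 2.0
    return posterior.ppf(tail), posterior.ppf(1.0 - tail)


# A new system: 2,000 hours of operation, no hazardous failures observed (yet).
print(rate_interval(0, 2_000))       # roughly 1e-6 to 1e-3 per hour
# A mature fleet: 2,000,000 hours and 20 failures.
print(rate_interval(20, 2_000_000))  # roughly 7e-6 to 1.4e-5 per hour
```

The three orders of magnitude spread for the new system is the point: declaring its risk acceptable, or a control’s cost disproportionate, on that basis is as much a judgement as a measurement.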

Another challenge to this mechanistic and objectivist view of risk assessment is the work of folk such as Slovic and Fischhoff, which clearly shows that risk assessment is an inherently subjective and value-laden exercise. With risk as one of the criteria, it is hard to see how a judgement of practicability and reasonableness can be anything but subjective in turn, which calls into question pronouncements in the SWA guide that what is reasonably practicable can in fact be objectively tested. On the other hand, reasonable practicability does relegate risk to a consideration of whether it is reasonable to apply a specific control, rather than allowing it to be used as an apologia for doing nothing in the first instance.

There is also an implicit assumption in the legislation as to the patent nature of hazards, i.e. that hazards are easy to identify and that the cost arises only when we try to eliminate or reduce their risk. While this may be true for simple physical hazards, such as unguarded machinery, it’s most certainly not true for more complex systems, where it often takes considerable effort just to identify whether hazards exist in the first place. Nor is it possible to predict in advance what sort of hazardous faults we might find; they may be minor (and therefore we need not have bothered) or they may be extremely significant (and the effort is well deserved). This epistemic uncertainty means that we cannot judge whether the application of effort is reasonably practicable before we start. We end up in a vicious circle: to judge whether the effort was reasonably practicable we need to analyse the system, but to decide to expend resources on the analysis we need to judge whether it is reasonably practicable in the first place. As McDermid (2001) pointed out for the ALARP principle, SFAIRP likewise implicitly assumes that this epistemic cost is negligible, but for complex critical systems that is unlikely to be the case.

The current legislation does try to make an end run around these problems by allowing one to justify meeting the criteria through compliance with an accepted standard. But this appeal to authority is at best a legalistic patch, and it doesn’t eliminate the fundamental flaw at the heart of the legislation: it views risk assessment as a mechanistic, objective and essentially cost-free exercise, ignoring that epistemic uncertainty can be a significant, and possibly the predominant, source of risk in complex systems, and likewise failing to recognise that strategies to reduce epistemic risk can take significant time and effort.

The most obvious implication is that simplistic assertions in safety arguments for complex systems, that the system is ‘SFAIRP’, should be treated with a high degree of scepticism, as given the current state of practice it is highly unlikely that such claims can be backed by a defensible argument. McDermid (2008) and Kelly have proposed that an adjunct criterion of ‘As Confident As Is Reasonably Practicable’ (ACAIRP) be used to address epistemic uncertainty and the cost of obtaining information to reduce it. However, given this concept has no legal basis, your mileage may vary with the beak (judge)…

If all the above seems to you, dear reader, like one of those medieval arguments over how many angels might dance on the head of a pin, you may well be right. There are certainly much simpler ways in which we can make decisions about safety and risk, and no hard evidence that SFAIRP does any better than they do. So perhaps we should run the sword of common sense through this Gordian knot.

References

McDermid, J.A., Software Safety: Where’s The Evidence?, in Proceedings of the Sixth Australian Workshop on Industrial Experience with Safety Critical Systems and Software, Brisbane, Australia, published in Conferences in Research and Practice in Information Technology Series, P. Lindsay (Ed.), vol. 3, pp.1-6, Australian Computer Society, 2001.

McDermid, J.A., Risk, Uncertainty, Software and Professional Ethics, Safety Systems: The Safety-Critical Systems Club Newsletter, vol. 17, no. 2, January 2008.
