Reasonably practicable and software assurance (part I)

24/09/2015

Matrix (Image source: The Matrix film)

The law of unintended consequences

There are some significant consequences to the principle of reasonable practicability enshrined within the Australian WHS Act. The Act is particularly problematic for risk based software assurance standards, where risk is used to determine the degree of effort that should be applied. In part one of this three part post I'll discuss the implications of the Act for the process industries functional safety standard IEC 61508; in part two I'll look at aerospace and its software assurance standard DO-178C; and finally I'll try to piece together a software assurance strategy that is compliant with the Act.

For readers unfamiliar with the principle of reasonable practicability, it's a hybrid decision criterion that mixes a rights based principle, 'that everyone has a right to be safe', with a utility based principle, 'the test of gross disproportion', which reduces what would otherwise be an absolute obligation down to what is judged 'reasonably practicable'. The test for what is considered reasonable is the criterion of gross disproportion: if the costs of applying any remaining measures are grossly disproportionate to any benefit gained (1, 2) then you need do no more. Turning our attention to the current crop of software assurance standards, we find that they all embody some form of risk based utility criterion. Let's look at two different standards, IEC 61508 and DO-178, and see how each stacks up. First up, IEC 61508.
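Before we do, here's a minimal sketch of how the gross disproportion decision operates in the abstract. It assumes, and these are my assumptions rather than anything the Act specifies, that cost and safety benefit can be expressed in common monetary units and that a notional factor of three marks the 'gross' threshold (see note 1 on how contested that is).

```python
# A minimal sketch of the gross disproportion test. The common monetary
# units and the 3x 'gross' threshold are assumptions for illustration;
# the Act defines neither.

def measure_is_reasonably_practicable(cost_of_measure: float,
                                      safety_benefit: float,
                                      disproportion_factor: float = 3.0) -> bool:
    """True if the measure should be implemented, i.e. its cost is not
    grossly disproportionate to the safety benefit it delivers."""
    return cost_of_measure <= disproportion_factor * safety_benefit

# A $350k measure delivering an estimated $100k of risk reduction fails
# the assumed 3x test, so under this sketch you need do no more.
print(measure_is_reasonably_practicable(350_000, 100_000))  # False
```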

IEC 61508 was originally intended for circumstances where a process plant runs some inherently dangerous process that we wish to make safer by adding a safety function, for example a shutdown function, hence the standard's moniker 'Functional Safety'. The standard uses a maximum allowable probability of failure for that safety function to determine a Safety Integrity Level (SIL), a requirement imposed on the software. Each SIL has a set of development processes that are deemed to satisfy the SIL obligation and meet the probability of failure target (i.e. the process set is the solution). Fairly obviously, the standard uses risk to determine what is an appropriate set of assurance activities to perform.
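For reference, the standard's low demand bands for average probability of failure on demand (PFDavg) run from 10^-2 to 10^-1 for SIL 1 down to 10^-5 to 10^-4 for SIL 4. The little lookup below is just my sketch of that mapping; the bands are the standard's, the function is not.

```python
# A sketch of mapping a low demand PFDavg target to a SIL using the
# IEC 61508 bands. The bands come from the standard; the function is
# illustrative only.

def sil_for_pfd(pfd_avg: float) -> int | None:
    """Return the SIL whose low demand PFDavg band contains pfd_avg,
    or None if the target falls outside the SIL 1 to SIL 4 bands."""
    bands = {
        4: (1e-5, 1e-4),
        3: (1e-4, 1e-3),
        2: (1e-3, 1e-2),
        1: (1e-2, 1e-1),
    }
    for sil, (lower, upper) in bands.items():
        if lower <= pfd_avg < upper:
            return sil
    return None

print(sil_for_pfd(5e-3))  # 2
print(sil_for_pfd(0.5))   # None - the standard has no 'SIL 0'
```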

So could we use the 61508 risk-utility criterion to argue that we have satisfied our obligations under the Act? Specifically, if the likelihood is below that of SIL 1 (a SIL 0, if the standard specified one) could we argue that it was appropriate to apply no assurance activities? Well no: if you make the decision on risk alone then you're behaving with recklessness as defined in Section 31 of the Act, that is, you knew of the risk and that you could do something about it, but took the risk without considering what was reasonably practicable in the circumstances.

If you try to justify a SIL on the basis that the effort associated with the next higher one is grossly disproportionate, the first problem you strike is that there's little reliable cost data for such comparisons. The next problem is that costs are proportional to the size of the software artefact, being driven by human intensive assurance activities like code inspections, reviews and testing; which means in turn that assuring large software projects will cost more. A perverse outcome is that a large project could find the cost outweighed the benefit, while a small project did not. This seems an odd sort of outcome, and an indicator of a potential disconnect between the Act's 'theory of safety' and that of the standard.
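To see that perverse outcome in numbers, here's a sketch with entirely hypothetical figures for per-KLOC assurance cost, monetised benefit and the disproportion threshold (setting aside for the moment how a benefit gets monetised, which is the next problem). Only the shape of the result matters, not the values.

```python
# A sketch of the size-driven outcome described above. All numbers are
# hypothetical: the point is that assurance cost scales with code size
# while the monetised safety benefit of the function does not.

ASSURANCE_COST_PER_KLOC = 40_000  # assumed cost of stepping up a SIL, $ per KLOC
SAFETY_BENEFIT = 500_000          # assumed monetised risk reduction, $
DISPROPORTION_FACTOR = 3.0        # assumed 'gross' threshold

def next_sil_reasonably_practicable(size_kloc: float) -> bool:
    cost = ASSURANCE_COST_PER_KLOC * size_kloc
    return cost <= DISPROPORTION_FACTOR * SAFETY_BENEFIT

print(next_sil_reasonably_practicable(10))   # True: the small project steps up a SIL
print(next_sil_reasonably_practicable(100))  # False: same hazard, larger codebase, no obligation
```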

However, before we can perform any sort of comparison we need to establish some sort of equivalence between the cost (dollars) and the benefit (safety from accidents). So what do we do? Place a monetary value upon human life? There are long standing concerns about such a utilitarian approach, and even were we to do so, the values assigned can and do vary considerably.

Presuming we have overcome the problem of equating human life with monetary loss, the classical way to perform such a comparison is to use expectation: take the risk reduced and compare it to the cost. For a specified loss severity, the risk reduction gained per SIL step decreases by an order of magnitude as we move up the SIL ladder. Given that the benefit decreases as we increase the SIL while the cost per SIL increases, at some point we might be able to argue that the cost becomes grossly disproportionate. So far so good.
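A worked sketch, again with assumed numbers (a demand rate on the safety function and a monetised loss per accident, combined with the upper limit of each SIL's PFDavg band), shows how quickly the marginal benefit shrinks.

```python
# A sketch of the expectation calculation. The demand rate and monetised
# loss are assumptions; the PFDavg band limits are the standard's low
# demand figures. The marginal benefit of each SIL step falls by roughly
# an order of magnitude, while assurance costs (not modelled) tend to rise.

DEMANDS_PER_YEAR = 0.1          # assumed demand rate on the safety function
LOSS_PER_ACCIDENT = 10_000_000  # assumed monetised loss per accident, $

PFD_UPPER = {1: 1e-1, 2: 1e-2, 3: 1e-3, 4: 1e-4}  # low demand band upper limits

def marginal_benefit(sil_from: int, sil_to: int) -> float:
    """Expected annual loss avoided by stepping from one SIL to the next."""
    delta_pfd = PFD_UPPER[sil_from] - PFD_UPPER[sil_to]
    return DEMANDS_PER_YEAR * LOSS_PER_ACCIDENT * delta_pfd

for sil in (1, 2, 3):
    print(f"SIL {sil} -> SIL {sil + 1}: ${marginal_benefit(sil, sil + 1):,.0f} per year avoided")
# SIL 1 -> 2: $90,000; SIL 2 -> 3: $9,000; SIL 3 -> 4: $900
```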

However, to make this sort of analysis defensible you need to be very sure of the numbers, and the problem with SILs is that there's no evidence to back up the conclusion that the SIL activities actually deliver the required reliability level. At the lowest levels there's no credible empirical evidence, and at the higher levels it simply becomes impossible to verify in any empirical sense. So a probabilistic cost/benefit argument still seems problematic. Finally, there's no guarantee that a court would accept such a cost/benefit calculation as truly representative of the benefit gained. Despite pronouncements about risk, courts, with their retrospective view of events, seem to tend more towards possibilistic rather than probabilistic reasoning.

If we open the hood of IEC 61508 and look at the individual activities within a SIL, many of them are considered good practice in software engineering, with the potential to actually reduce the cost of producing the software. This opens us up to a counter argument that these specific activities are inherently reasonable regardless of integrity level, because they would reduce software defects and therefore the cost of producing the software. One might try to establish that such techniques (formal methods come to mind) were not actually practicable in the circumstances, but as the IEC is a widely accepted international standards organisation, and the techniques identified within the standard are methods that other people actually use, that's unlikely to be a viable defence. In fact, in the case of formal methods one would only have to tender the case studies of a company such as Praxis to argue that not only were formal methods viable, they could reasonably be expected to save you time and money.

So IEC 61508 is not looking good. If you do decide to apply it then the only legally defensible position I can see is to apply the highest SIL whenever there's any possibility of serious harm, and be done with it. Doing anything less would expose you either to a direct charge of recklessness, were you to apply the risk criterion in a naked fashion, or to a likely failure to demonstrate compliance with the SFAIRP principle, were you to attempt to argue the rule of gross disproportion.

Where I think IEC 61508 gets into trouble is in equating SILs with discrete reliability levels. This is really a historical accident (3), but however it arose, the standard asserts that to meet a particular risk level, with its associated level of reliability (i.e. a probability of dangerous failure), you apply the requisite SIL activities. Thus SILs map to reliability, which is a product requirement and a matter of aleatory uncertainty. However, if we look at the process activities required (checklists, code inspections or design reviews) there's a low correlation between the processes applied and actual dangerous failure rates (4). Without such a correlation it's hard to argue the benefit of a specific SIL, other than by what the standard claims, and given that some SIL activities would also save you time and money, it's difficult to see how the IEC 61508 hypothesis, that more rigour delivers greater reliability albeit at the cost of more effort, holds up.

The problem IEC 61508 faces as a standard is that many of its assurance activities are intended to find errors in the software which we suspect might be there. As a result most of the cost lies in finding the risk, while eliminating it is much cheaper. In contrast, the Act implicitly assumes it's the other way round, that is, finding the risk is fairly straightforward but mitigating it is much less so. Unfortunately you have no alternative but to look for errors in the software, because the Act requires that you identify risks through some reasonable means.

All of which makes for very shifting sands to find yourself on if you have to justify the legality of your actions.

Next, in Part II I'll look at another software assurance standard, DO-178C, which takes a different approach, and see how it stacks up against reasonable practicability.

Notes

1. There's a lot of discussion about why the word 'gross' was used, as well as what 'gross' actually means. Is the threshold a factor of two, three, or even an order of magnitude?

2. Likewise there are problems with measuring such a proportion. How does one compare disparate benefits against costs? None of which is new; these types of objection have been raised about cost/benefit analyses since the early 1960s.

3. The term SIL seems to have evolved from the concept of integrity testing for hardware.

4. Another good question would be what are these activities actually doing? For example does the use of complexity metrics achieve anything?

