Risk as uncontrollability…
The venerable safety standard MIL-STD-882 introduced the concept of software hazard and risk in Revision C. Rather than using the classical definition of risk as a combination of severity and likelihood, the authors struck off down quite a different, and interesting, path.
Instead the authors of 882C based their software risk assessment on the degree of autonomy of the software. A highly autonomous software control component in a critical application is considered high risk, but as the degree of autonomy lessens so too does the risk. This assessment is then used to establish the degree of rigour required in the development process.
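To make the scheme concrete, here is a minimal sketch, in the spirit of 882C's approach rather than a transcription of its actual tables: the software's control category (its degree of autonomy over the hazard) combines with hazard severity to give an index, and that index, not a probability estimate, sets the development rigour. The category names, index values and rigour bands below are assumptions for illustration only.

```python
# Illustrative sketch only -- the category names, index arithmetic and
# rigour thresholds are assumptions, not the actual MIL-STD-882C tables.

# Control categories: lower number = more autonomous = riskier.
CONTROL_CATEGORIES = {
    "autonomous": 1,        # software exercises control with no operator in the loop
    "semi-autonomous": 2,   # an operator can intervene, though time may be short
    "advisory": 3,          # software informs a human, who takes the action
    "no-safety-impact": 4,  # software cannot contribute to the hazard
}

# Hazard severity: lower number = worse consequence.
SEVERITIES = {"catastrophic": 1, "critical": 2, "marginal": 3, "negligible": 4}

def software_risk_index(control: str, severity: str) -> int:
    """Combine autonomy and severity into a risk index.

    Lower index = higher risk. Note that likelihood plays no part:
    only controllability (autonomy) and consequence are assessed.
    """
    return CONTROL_CATEGORIES[control] + SEVERITIES[severity] - 1

def required_rigour(index: int) -> str:
    """Map the risk index onto a development-rigour band."""
    if index <= 2:
        return "high rigour: in-depth analysis and independent V&V"
    if index <= 4:
        return "moderate rigour: targeted analysis and testing"
    return "routine rigour: standard development practice"
```

The point the sketch makes is the one in the text above: holding severity fixed, reducing autonomy (moving from "autonomous" towards "advisory") raises the index and so lowers the demanded rigour, with no software failure probability estimated anywhere.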
While I agree with the authors that figuring out software reliability ‘a priori’ is a fool’s errand, I’ve always wondered why they picked autonomy specifically, and as it turns out research in the area of risk perception can shed some light on this. Humans, it seems, are far more tolerant of risk when they believe that the situation they are in is controllable by them; in fact risk is sometimes defined as ‘insufficient controllability’ (Brun 1994).
From this perspective the authors’ approach to risk can be seen as a reflection of our human fear of loss of control. The greater the autonomy of the automation, the greater our perception of risk. Thus a loss of control to something as intangible as a software program is always going to be perceived as a risk. Which may also explain the recurring golem theme in modern culture: the faceless, remorseless and fundamentally uncontrollable thinking machine, Colossus being but one example.
Interesting that such a classic system safety standard as MIL-STD-882 should have adopted such a non-classical approach to the assessment of software risk.
Brun, W. (1994). Risk perception: Main issues, approaches and findings. In G. Wright and P. Ayton (Eds.), Subjective probability (pp. 395–420). Chichester: John Wiley and Sons.