For those interested, here’s a draft of the ‘Fundamentals of system safety’ module from a course that I teach on system safety. Of course, if you want the full effect, you’ll just have to come along. :)
Archives for software safety
From Les Hatton, here’s how, in four easy steps:
- Insist on using R = F x C in your assessment. This will panic HR (people go into HR to avoid nasty things like multiplication).
- Put “end of universe” as risk number 1 (Rationale: since the end of the universe has an infinite consequence C, then no matter how small the frequency F, the risk R = F x C is also infinite)
- Ignore all other risks as insignificant
- Wait for call from HR…
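The gag above turns entirely on the arithmetic of R = F x C. A minimal sketch (the numbers are invented purely for illustration) of why step 2 swamps every other entry in the register:

```python
import math

def risk(frequency: float, consequence: float) -> float:
    """The classic risk formula R = F x C."""
    return frequency * consequence

# An everyday hazard: moderately frequent, modest consequence.
print(risk(0.5, 20))          # 10.0

# "End of universe": vanishingly small frequency, infinite consequence.
# Any positive frequency times an infinite consequence is still infinite,
# so this one entry dwarfs everything else in the risk register.
print(risk(1e-30, math.inf))  # inf
```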
A humorous note, amongst many, in an excellent presentation on the fell effect that bureaucracies can have upon the development of safety-critical systems. I would add my own small corollary: when you see warning notes on microwaves and hot water services, the risk assessment lunatics have taken over the asylum…
The QF 72 accident illustrates the significant effects that ‘small field’ decisions can have on overall system safety.
How do ya do and shake hands, shake hands, shake hands. How do ya do and shake hands and state your name and business…
Lewis Carroll, Through the Looking Glass
You would have thought, after the Leveson and Knight experiments, that the theory that independently written software would contain only independent faults was dead and buried: another beautiful theory shot down by cold, hard fact. But unfortunately, like many great errors, the theory of n-versioning keeps on keeping on (1).
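For readers who haven’t met n-versioning: the scheme runs N independently written versions of the software and votes on their outputs, on the assumption that the versions won’t fail on the same inputs. A minimal sketch (the voting function and the failure values are my own illustration, not taken from the Leveson and Knight experiments) of how correlated faults defeat the vote:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the majority answer from N independently written versions,
    or None if no value has a strict majority."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# If faults really were independent, versions would rarely fail on the
# same input, and the vote would mask a single failure:
print(majority_vote([4, 4, 7]))   # 4 (the one faulty version is outvoted)

# What the experiments found: hard inputs are hard for everyone, so
# versions tend to fail together, and the vote then confidently
# returns the common *wrong* answer:
print(majority_vote([7, 7, 4]))   # 7 (the correlated fault wins the vote)
```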
Recent incidents involving Airbus aircraft have again focused attention on their approach to cockpit automation and its interaction with the crew.
Underlying the current debate is perhaps a general view that automation should somehow be ‘perfect’, and that a failure of automation is also a form of moral failing (1). While this weltanschauung undoubtedly serves certain social and psychological needs, the debate it engenders doesn’t really further productive discussion of what could, or indeed should, be done to improve cockpit automation. So let’s take a closer look at the Airbus protection laws implemented in the flight control automation and compare them with how experienced aircrew actually make decisions in the cockpit.
Author’s note. Below is my original post on the potential causes of the AF 447 cabin altitude advisory, in which I concluded that there were a number of potential causes, one of which could be an erroneous altitude input from the ADIRU. What I didn’t consider was that the altitude advisory could have been triggered by correct operation of the cabin pressure control system; see The AF 447 cabin vertical speed advisory and Pt II for more on this.
The last ACARS transmission received from AF 447 was the ECAM advisory that the cabin altitude (pressure) variation had exceeded 1,800 ft/min for greater than 5 seconds. While some commentators have taken this message to indicate that the aircraft had suffered a catastrophic structural failure, all we really know is that at that point there was a rapid change in reported cabin altitude. Given the strong indications of unreliable air data from other on-board systems, it’s perhaps worthwhile to look for other potential causes of such rapid cabin pressure changes.
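The advisory condition described above is a rate threshold held for a minimum duration: cabin altitude changing faster than 1,800 ft/min for more than 5 seconds. A minimal sketch of that detection logic (the sampling scheme and function are my own illustration, not the actual Airbus cabin pressure controller logic):

```python
ADVISORY_RATE_FPM = 1800.0  # cabin altitude rate threshold (ft/min)
ADVISORY_HOLD_S = 5.0       # the rate must persist this long (s)

def advisory_triggered(samples, dt):
    """samples: cabin altitude readings (ft) at a fixed interval dt (s).
    Returns True once the altitude rate exceeds the threshold
    continuously for longer than the hold time."""
    held = 0.0
    for prev, cur in zip(samples, samples[1:]):
        rate_fpm = abs(cur - prev) / dt * 60.0
        held = held + dt if rate_fpm > ADVISORY_RATE_FPM else 0.0
        if held > ADVISORY_HOLD_S:
            return True
    return False

# A steady 2,400 ft/min change in cabin altitude (40 ft per 1 s sample)
# trips the advisory once the 5 s hold elapses; a brief two-sample spike
# resets the timer and does not.
print(advisory_triggered([40.0 * i for i in range(10)], dt=1.0))  # True
```

Note that the check says nothing about *why* the cabin altitude is changing, which is exactly the ambiguity discussed above: a structural failure, bad air data, or correct controller operation could each produce the same message.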