Archives For Technology

The technological aspects of engineering for high consequence systems.

MH370 Satellite Image (Image source: AMSA)

While the media has once again whipped itself into a frenzy of anticipation over the objects sighted in the southern Indian Ocean, we should all be realistic about the likelihood of finding wreckage from MH370.

Continue Reading…


The failure of NVP and the likelihood of correlated security exploits

In 1986, John Knight and Nancy Leveson conducted an experiment to empirically test the assumption of independence in N-version programming. What they found was that the hypothesis of independent failures in N-version programs could be rejected at the 99% confidence level. While their results caused quite a stir in the software community (see their ‘A reply to the critics’ for a flavour), what’s of interest to me is what they found when they took a closer look at the software faults.

…approximately one half of the total software faults found involved two or more programs. This is surprisingly high and implies that either programmers make a large number of similar faults or, alternatively, that the common faults are more likely to remain after debugging and testing.

Knight, Leveson 1986
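
To see why coincident faults matter, here’s a toy Monte Carlo sketch (my own illustration, not the Knight and Leveson experiment, and the probabilities are purely hypothetical). Two versions that are individually quite reliable, but which share a weakness on a small class of ‘hard’ inputs, will fail together far more often than the independence assumption predicts.

```python
import random

def simulate(n_trials, p_hard=0.01, p_fail_hard=0.5, p_fail_easy=0.001, seed=0):
    """Toy model: two 'independently' written versions that share a weakness on
    'hard' inputs. Returns (per-version failure rate, observed coincident failure
    rate, coincident rate predicted by assuming independence)."""
    rng = random.Random(seed)
    fail_a = fail_b = both = 0
    for _ in range(n_trials):
        hard = rng.random() < p_hard             # input falls in the 'hard' region
        p = p_fail_hard if hard else p_fail_easy
        a = rng.random() < p                     # version A fails on this input
        b = rng.random() < p                     # version B fails on this input
        fail_a += a
        fail_b += b
        both += int(a and b)
    pa, pb = fail_a / n_trials, fail_b / n_trials
    return pa, both / n_trials, pa * pb

per_version, observed, predicted = simulate(200_000)
print(f"per-version failure rate  ~{per_version:.4f}")
print(f"coincident failures seen  ~{observed:.5f}")
print(f"predicted if independent  ~{predicted:.6f}")
```

With these invented numbers the observed coincident failure rate comes out roughly two orders of magnitude higher than the independence assumption would predict, which is the qualitative effect Knight and Leveson observed.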

Continue Reading…

Separation of privilege and the avoidance of unpleasant surprises

Another post in an occasional series on how Saltzer and Schroeder’s eight principles of secure design and the principles of safety engineering seem to overlap in a number of areas, and what we might gain from looking at safety from a security perspective. In this post I’ll look at the concept of separation of privilege.

Continue Reading…

And not quite as simple as you think…

The testimony of Michael Barr in the recent Oklahoma Toyota court case highlighted, amongst other things, problems with the design of Toyota’s watchdog timer for the Camry ETCS-i throttle control system, which got me thinking about the pervasive role that watchdog timers play in safety critical systems.

Continue Reading…

Toyota ECM (Image source: Barr testimony presentation)

Economy of mechanism and fail safe defaults

I’ve just finished reading the testimony of Phil Koopman and Michael Barr given for the Toyota uncommanded acceleration lawsuit. Toyota settled after the jury found they had acted with reckless disregard, but before it returned a decision on punitive damages, and I’m not surprised.

Continue Reading…

Singularity (Image source: Tecnoscience)

Or ‘On the breakdown of Bayesian techniques in the presence of knowledge singularities’

One of the abiding problems of safety critical ‘first of’ systems is that you face, as David Collingridge observed, a double bind dilemma:

  1. Initially, an information problem: ‘real’ safety issues (hazards) and their risks cannot be easily identified or quantified until the system is deployed; but 
  2. By the time the system is deployed, a power (inertia) problem: control or change is difficult once the system is deployed or delivered. Eliminating a hazard at that point is usually very difficult, and we can usually only mitigate it in some fashion.

Continue Reading…

Ariane 501 Launch

In 1996 the European Space Agency lost their brand new Ariane 5 launcher on its first flight. Here’s a recently updated, annotated version of the accident report. I’d also note that the software that faulted was written in Ada, a ‘strongly typed’ language, which does point to a few small problems with relying on such languages.

Continue Reading…

New Battery boxes (Image source: Boeing)

The end of the matter…well almost

Continue Reading…

No, not the alternative name for this blog. :)

I’ve just given the post Pitch ladders and unusual attitude a solid rewrite, adding some new material and looking a little more deeply at some of the underlying safety myths.

X-Ray of JAL Battery (Image Source: NTSB)

A bit more on Boeing’s battery woes…

The NTSB has released more pictures of the JAL battery, and there are some interesting conclusions that can be drawn from the evidence to date.

Continue Reading…


JAL JA829J Fire (Image Source: Stephan Savoia AP Photo)

Boeing’s Dreamliner program runs into trouble with lithium ion batteries

The performance of lithium batteries in providing lightweight, low volume power storage has made them a ubiquitous part of modern consumer life. Their high power density also makes them attractive in applications, such as aerospace, where weight and space are at a premium. Unfortunately lithium batteries are also very unforgiving if operated outside their safe operating envelope, and can fail in a spectacularly energetic fashion called a thermal runaway (1), as occurred in the recent JAL and ANA 787 incidents.

Continue Reading…

Buncefield Tank on Fire (Image Source: Royal Chiltern Air Support Unit)

Why sometimes simpler is better in safety engineering.

Continue Reading…

Resilience and common cause considered in the wake of hurricane Sandy

One of the fairly obvious lessons from Hurricane Sandy is the vulnerability of underground infrastructure such as subways, road tunnels and below grade service equipment to flooding events.

“The New York City subway system is 108 years old, but it has never faced a disaster as devastating as what we experienced last night”

NYC transport director Joseph Lhota

Yet despite the obviousness of the risk we still insist on placing such services and infrastructure below grade level. Considering actual rises in mean sea level, e.g. a one foot increase at Battery Park NYC since 1900, and those projected to occur this century, perhaps now is the time to recompute the likelihood and risk of storm surges overtopping defensive barriers.

Continue Reading…

The following is an extract from Kevin Driscoll’s Murphy Was an Optimist presentation at SAFECOMP 2010. Here Kevin does the maths to show how a lack of exposure to failures over a small sample size of operating hours leads to a normalcy bias amongst designers and a rejection of proposed failure modes as ‘not credible’.

The reason I find it of especial interest is that it gives, at least in part, an empirical argument as to why designers find it difficult to anticipate the system accidents of Charles Perrow’s Normal Accident Theory.

Kevin’s argument also supports John Downer’s (2010) concept of Epistemic accidents. John defines epistemic accidents as those that occur because of an erroneous technological assumption, even though there were good reasons to hold that assumption before the accident.

Kevin’s argument illustrates that engineers as technological actors must make decisions in which their knowledge is inherently limited and so their design choices will exhibit bounded rationality.

In effect, the higher the dependability of a system, the greater the mismatch between designer experience and system operational hours, and therefore the tighter the bounds on the rationality of design choices and their underpinning assumptions. The tighter the bounds, the greater the effect cognitive biases will have, such as falling prey to the normalcy bias.
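
As a rough sketch of the arithmetic behind Kevin’s argument (the rates and exposures below are my own illustrative numbers, not his), a failure mode occurring at a constant rate can be nearly invisible across the hours any one designer is exposed to, yet close to certain across the hours a fleet accumulates.

```python
import math

def p_at_least_one(rate_per_hour: float, exposure_hours: float) -> float:
    """Probability of observing at least one occurrence of a failure mode,
    assuming occurrences follow a constant-rate (Poisson) process."""
    return 1.0 - math.exp(-rate_per_hour * exposure_hours)

rate = 1e-7                # illustrative failure mode rate, per operating hour
designer_exposure = 5e4    # hours of test and operation one designer might plausibly see
fleet_exposure = 1e8       # hours a large fleet accumulates over its service life

print(f"designer sees it: {p_at_least_one(rate, designer_exposure):.4f}")  # ~0.005
print(f"fleet sees it:    {p_at_least_one(rate, fleet_exposure):.4f}")     # ~1.0
```

A designer who has never seen the failure mode, and is unlikely ever to, can quite rationally (but wrongly) dismiss it as ‘not credible’.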

Of course there are other reasons for such bounded rationality, see Logic, Mathematics and Science are Not Enough for a discussion of these.

Continue Reading…

I just realised that I’ve used the term ‘design hypothesis’ throughout this blog without a clear definition of what one is. :-)

So here it is.

A design hypothesis is a prediction that a specific design will result in a specific outcome. A design hypothesis must:

  1. Identify the design’s provenance, e.g. the theory, practice or standards from which it is derived.
  2. Provide a concise description of the design.
  3. State what the design must achieve in a verifiable fashion.
  4. Clearly identify critical assumptions that support the hypothesis.

Note that the concept of a fault hypothesis can be seen as a particular, constrained form of design hypothesis: after Powell (1992), a fault hypothesis specifies assumptions about the types of faults, the rate at which components fail, and how components may fail, for the purposes of fault tolerant computing.
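
As a minimal sketch of how the four elements above might be captured as a structured record (the field names and the watchdog example are my own invention, purely for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignHypothesis:
    """A design hypothesis as a structured record (illustrative only)."""
    provenance: str                  # theory, practice or standards the design derives from
    description: str                 # concise description of the design
    verifiable_claim: str            # what the design must achieve, stated verifiably
    critical_assumptions: List[str] = field(default_factory=list)

example = DesignHypothesis(
    provenance="Common embedded-control practice (hardware watchdog timers)",
    description="An independently clocked hardware watchdog resets the CPU if not kicked within 50 ms",
    verifiable_claim="Any CPU hang or task overrun longer than 50 ms results in a reset to a defined safe state",
    critical_assumptions=[
        "The watchdog clock is independent of the CPU clock",
        "The reset state is itself safe",
    ],
)
```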

I give a short example of a design hypothesis in the Titanic Part I post.

References

Powell, D., Failure mode assumptions and assumption coverage. In Proc. of the 22nd IEEE Annual International Symposium on Fault-Tolerant Computing (FTCS-22), pp. 386–395, Boston, USA, 1992.

Just finished giving my post on Lessons from Nuclear Weapons Safety a rewrite.

The original post is, as the title implies, about what we can learn from the principle-based approach to safety adopted by the US DOE nuclear weapons safety community. Hopefully the rewrite will make it a little clearer; I can be opaque as a writer sometimes. :-)

P.S. I should probably look at integrating the 3I principles into this post on the philosophy of safety critical systems.

Warsaw A320 Accident (Image Source: Unknown)

One of the questions we should ask whenever an accident occurs is whether we could have identified the causes during design. And if we didn’t, is there a flaw in our safety process?

Continue Reading…

I’m currently reading Richard de Crespigny’s book on flight QF 32. In it he writes that at one point he felt he was being overwhelmed by the number and complexity of ECAM messages. At that moment he recalled a quote from Gene Kranz, the NASA flight director of Apollo 13 fame: “Hold it Gentlemen, Hold it! I don’t care about what went wrong. I need to know what is still working on that spacecraft.”

The crew of QF32 are not alone in experiencing the overwhelming flood of data that a modern control system can produce in a crisis. Their experience is similar to that of the operators of the Three Mile Island nuclear plant, who faced more than 100 near-simultaneous alarms, or, more recently, that of the crew of QF 72.

The take home point for designers is that, if you’ve carefully constructed a fault monitoring and management system, you also need to consider the situation where the damage to the system is so severe that the needs of the operator invert: they need to know ‘what they’ve still got’, rather than what they don’t have.

The term ‘never give up design strategy’ is bandied around in the fault tolerance community; the above lesson should form at least a part of any such strategy.
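
To make the inversion concrete, here’s a minimal sketch (my own toy illustration, not any real aircraft or plant logic): once the number of failures passes some threshold, the summary flips from reporting what has failed to reporting what is still available.

```python
from typing import Dict

def status_summary(systems: Dict[str, bool], failure_threshold: int = 3) -> str:
    """Toy status reporter: below the threshold list what has failed; above it,
    invert the report and list what is still working."""
    failed = [name for name, ok in systems.items() if not ok]
    working = [name for name, ok in systems.items() if ok]
    if len(failed) <= failure_threshold:
        return "FAILED: " + ", ".join(failed)
    return "STILL AVAILABLE: " + ", ".join(working)

systems = {"ENG 1": False, "ENG 2": True, "HYD GREEN": False, "HYD YELLOW": True,
           "GEN 1": False, "GEN 2": True, "BRAKES NORM": False, "BRAKES ALTN": True}
print(status_summary(systems))  # four failures exceed the threshold, so report what's left
```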

This post is part of the Airbus aircraft family and system safety thread.

In June of 2011 the Australian Safety Critical Systems Association (ASCSA) published a short discussion paper on what they believed to be the philosophical principles necessary to successfully guide the development of a safety critical system. The paper identified eight management and eight technical principles, but do these principles do justice to the purported purpose of the paper?

Continue Reading…

The MIL-STD-882 lexicon of hazard analyses includes the System Hazard Analysis (SHA), which according to the standard is intended to:

“…examines the interfaces between subsystems. In so doing, it must integrate the outputs of the SSHA. It should identify safety problem areas of the total system design including safety critical human errors, and assess total system risk. Emphasis is placed on examining the interactions of the subsystems.”

MIL-STD-882C

This sounds reasonable in theory, and I’ve certainly seen a number of toy examples touted in various textbooks on what it should look like. But, to be honest, I’ve never really been convinced by such examples, hence this post.

Continue Reading…

Stage Separation – A Classic Irreversible Command

The concept of irreversible commands is one that has been around for a long time in the safety and aerospace communities, but why are such commands significant from a safety perspective?

Continue Reading…


One of the canonical design principles of the nuclear weapons safety community is to base the behaviour of safety devices upon fundamental physical principles.

Continue Reading…

In an article published in the online magazine Spectrum, Eliza Strickland has charted the first 24 hours at Fukushima. It’s a sobering description of the difficulty of the task facing the operators in the wake of the tsunami.

Her article identified a number of specific lessons about nuclear plant design, so in this post I thought I’d look at whether more general lessons for high consequence system design could be inferred in turn from her list.

Continue Reading…

A330 Right hand (1 & 3) AoA probes (Image source: ATSB)

In an earlier post I commented that in the QF72 incident the use of a geometric mean (1), instead of the arithmetic mean, when calculating the aircraft’s angle of attack would have reduced the severity of the subsequent pitch over.

This leads into the more general subject of what to do when the real world departs from our assumptions about the statistical ‘well-formedness’ of data.
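
As a toy numerical illustration of the difference (the sample values are invented, and this is not the actual ADIRU voting algorithm): a single spiked reading drags the arithmetic mean up almost linearly, while the geometric mean is far less affected.

```python
import math

def arithmetic_mean(values):
    """Simple average: a single spiked value pulls the result up linearly."""
    return sum(values) / len(values)

def geometric_mean(values):
    """Nth root of the product (values must be positive): a single spike is damped."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical angle of attack samples in degrees: two plausible readings and one spike.
aoa_samples = [2.1, 2.3, 50.6]

print(f"arithmetic mean: {arithmetic_mean(aoa_samples):.1f} deg")  # ~18.3 deg
print(f"geometric mean:  {geometric_mean(aoa_samples):.1f} deg")   # ~6.3 deg
```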

Continue Reading…

I’ve recently been reading John Downer on what he terms the Myth of Mechanical Objectivity. To summarise John’s argument, he points out that once the risk of an extreme event has been ‘formally’ assessed as being so low as to be acceptable, it becomes very hard for society and its institutions to justify preparing for it (Downer 2011).

Continue Reading…

Why We Automate Failure
A recent post on the interface issues surrounding the use of side-stick controllers in current generation passenger aircraft led me to think more generally about the current pre-eminence of software-driven visual displays, and why we persist in their use even though there may be a mismatch between what they can provide and what the operator needs.

Continue Reading…

The Mississippi River’s Old River Control Structure, a National Single Point of Failure?

Given the recent events in Fukushima and our subsequent western cultural obsession with the radiological consequences, perhaps it’s appropriate to reflect on other non-nuclear vulnerabilities.

As a case in point, what about the Old River Control Structure erected by those busy chaps the US Army Corps of Engineers to control the path of the Mississippi to the sea? Yes, well, as it turns out maybe trapping the Mississippi wasn’t really such a good idea…

Continue Reading…

How the marking of a traffic speed hump provides a classic example of a false affordance and an unintentional hazard.

Continue Reading...

Soviet Shuttle was safer by design

According to veteran Russian cosmonaut Oleg Kotov, quoted in a New Scientist article, the Soviet Buran shuttle (1) was much safer than the American shuttle due to fundamental design decisions. Kotov’s comments once again underline the importance to safety of architectural decisions in the early phases of a design.

Continue Reading…

Planes and Trains


I attended the annual Rail Safety conference for 2011 earlier in the year, and one of the speakers was Group Captain Alan Clements, the Director of Defence Aviation and Air Force Safety. His presentation was interesting both for where the ADO is going with their aviation safety management system and for the historical perspective, and statistics, it provided.

Continue Reading...

I’ve just rediscovered a paper I co-authored for the 2006 AIAA Reno conference on the Risk & Safety Aspects of Systems of Systems. It’s a little disjointed, but it does cover some interesting problem areas for systems of systems.

A UAV and COMAIR near miss over Kabul illustrates the problem of emergent hazards when we integrate systems or operate existing systems in operational contexts not considered by their designers.

Continue Reading...

A near disaster in space 40 years ago serves as a salutary lesson on common cause failure

Two days after the launch of Apollo 13 an oxygen tank ruptured, crippling the Apollo service module upon which the astronauts depended for survival and precipitating a desperate life or death struggle. But leaving aside what was possibly NASA’s finest hour, the causes of this near disaster provide important lessons for designing damage resistant architectures.

Continue Reading…

Blayais Plant (Image source: Wikipedia Commons)

What a near miss flooding incident at a French nuclear plant in 1999 and the Fukushima 2011 disaster can tell us about fault tolerance and designing for reactor safety

Continue Reading…

What a near miss flooding incident at a French reactor plant in 1999, its aftermath, and the subsequent Fukushima plant disaster can tell us about fault tolerance and designing for reactor safety.

Continue Reading...

QF32 Redux


QF32 - No. 1 engine failure to shutdown

The ABC’s treatment of the QF 32 incident treads familiar and slightly disappointing ground

While I thought that the ABC 4 Corners program’s treatment of the QF 32 incident was a creditable effort, I have to say that I was unimpressed by the producers’ homing in on a (presumed) Rolls-Royce production error as the casus belli.

The report focused almost entirely upon the engine rotor burst and its proximal cause but failed to discuss (for example) the situational overload introduced by the ECAM fault reporting, or for that matter why a single rotor burst should have caused so much cascading damage and so nearly led to the loss of the aircraft.

Overall two out of four stars :)

If, however, you’re interested in a discussion of the deeper issues arising from this incident then see:

  1. Lessons from QF32. A discussion of some immediate lessons that could be learned from the QF 32 accident;
  2. The ATSB QF32 preliminary report. A commentary on the preliminary report and its strengths and weaknesses;
  3. Rotor bursts and single points of failure. A review and discussion of the underlying certification basis for commercial aircraft and protection from rotor burst events;
  4. Rotor bursts and single points of failure (Part II). A discussion of the differences between the damage sustained by QF 32 and that premised by a contemporary report issued by the AIA on rotor bursts;
  5. A hard rain is gonna fall. An analysis of the 2006 American Airlines rotor burst incident that indicated problems with the FAA’s assumed rotor burst debris patterns; and
  6. Lies, damn lies and statistics. A statistical analysis looking at the AIA 2010 report on rotor bursts and its underestimation of their risk.

On June 2, 2006, an American Airlines B767-223(ER), N330AA, equipped with General Electric (GE) CF6-80A engines, experienced an uncontained failure of the high pressure turbine (HPT) stage 1 disk in the No. 1 (left) engine during a high-power ground run for maintenance at Los Angeles International Airport (LAX), Los Angeles, California.

To provide a better appreciation of the aircraft level effects I’ve taken the NTSB summary description of the damage sustained by the aircraft and illustrated it with pictures taken of the accident by bystanders and technical staff.

Continue Reading...

A report by the AIA on engine rotor bursts and their expected severity raises questions about the levels of damage sustained by QF 32.

Continue Reading...

It appears that the underlying certification basis for aircraft safety in the event of an intermediate power turbine rotor burst is not supported by the rotor failure seen on QF 32.

Continue Reading...

The recent Qantas A380 catastrophic engine failure illustrates the problems of dealing with common cause failures

Updated: 15 Nov 2012

Generally the reason we have more than one of anything on a passenger aircraft is that we know components can fail, so independent redundancy is the cornerstone strategy for achieving the required levels of system reliability and safety. But while overall aircraft safety is predicated on the independence of these components, the reality is that the catastrophic failure of one component can also affect adjacent equipment and systems, leading to what are termed common cause failures.
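
A toy calculation shows why this matters (the numbers and the simple beta-factor treatment below are mine, purely for illustration): even a small common cause fraction can dominate the failure probability of a redundant pair, wiping out most of the benefit claimed under the independence assumption.

```python
def duplex_failure_prob(p: float, beta: float) -> float:
    """Failure probability of a two-channel redundant system under a simple
    beta-factor model: a fraction `beta` of each channel's failures are common
    cause and so defeat both channels at once."""
    independent_part = ((1 - beta) * p) ** 2   # both channels fail independently
    common_cause_part = beta * p               # a single shared cause fails both
    return independent_part + common_cause_part

p = 1e-4  # illustrative per-flight failure probability of a single channel
print(f"assuming independence: {duplex_failure_prob(p, beta=0.0):.2e}")   # ~1.0e-08
print(f"with 5% common cause:  {duplex_failure_prob(p, beta=0.05):.2e}")  # ~5.0e-06
```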

Continue Reading…

Tweedle Dum and Dee (Image source: Wikipedia Commons)
How do ya do and shake hands, shake hands, shake hands. How do ya do and shake hands and state your name and business…

Lewis Carroll, Through the Looking Glass

You would have thought, after the Leveson and Knight experiments, that the theory that independently written software would only contain independent faults was dead and buried, another beautiful theory shot down by cold hard fact. But unfortunately, like many great errors, the theory of N-versioning keeps on keeping on (1). Continue Reading…

Over the last couple of months I’ve posted on various incidents involving the Airbus A330 aircraft from the perspective of system safety. As these posts are scattered through my blog I thought I’d pull them together; the earliest post is at the bottom.

Continue Reading...

So far as we know, flight AF 447 fell out of the sky with its systems performing as their designers had specified, if not how they expected, right up to the point that it impacted the surface of the ocean.

So how is it possible that incorrect air data could simultaneously cause upsets in aircraft functions as disparate as engine thrust management, flight law protection and traffic avoidance?

Continue Reading...

Ariane 501 Launch

I was cleaning up some of my reference material and came across a copy of the ESA board of investigation report into the Ariane 501 accident. I’ve added my own personal observations, as well as those of other commentators, to the report. Continue Reading…

In your professional life there are certain critical works that open your eyes to how lucidly a technical argument can be stated. Continue Reading…

Invalid air data may have triggered the cabin pressure differential safety function on AF 447.

Continue Reading...

Recent incidents involving Airbus aircraft have again focused attention on their approach to cockpit automation and its interaction with the crew.

Underlying the current debate is perhaps a general view that automation should somehow be ‘perfect’, and that a failure of automation is also a form of moral failing (1). While this weltanschauung undoubtedly serves certain social and psychological needs, the debate it engenders doesn’t really further productive discussion on what could, or indeed should, be done to improve cockpit automation.

Continue Reading…

A crosswalk of the interim accident investigation reports issued by the ATSB and BEA for the QF72 and AF447 accidents respectively shows that in both accidents the inertial reference portion of the air data inertial reference unit (ADIRU) that exhibited anomalous behaviour also declared a failure. Why did this occur?

Continue Reading...

The HAL effect


Do we automate our cultural biases, and can this have an effect upon the safe coordination of crew and automation?

Continue Reading...

Author’s note. Below is my original post on the potential causes of the AF 447 cabin altitude advisory. I concluded that there were a number of potential causes, one of which could be an erroneous altitude input from the ADIRU. What I didn’t consider was that the advisory could have been triggered by correct operation of the cabin pressure control system; see The AF 447 cabin vertical speed advisory and Pt II for more on this.

The last ACARS transmission received from AF 447 was the ECAM advisory that the cabin altitude (pressure) variation had exceeded 1,800 ft/min for greater than 5 seconds. While some commentators have taken this message to indicate that the aircraft had suffered a catastrophic structural failure, all we really know is that at that point there was a rapid change in reported cabin altitude. Given the strong indications of unreliable air data from other on-board systems, perhaps it’s worthwhile having a look for other potential causes of such rapid cabin pressure changes.
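
For the sake of illustration, here’s a minimal sketch of the kind of rate-and-persistence trigger described above, using the 1,800 ft/min and 5 second figures from the advisory; this is a toy reconstruction, not the actual Airbus logic, and the samples are invented.

```python
def advisory_triggered(cabin_alt_ft, sample_period_s,
                       rate_threshold_ftpm=1800.0, persistence_s=5.0):
    """Return True if the cabin altitude rate of change exceeds the threshold
    continuously for at least `persistence_s` seconds (illustrative logic only)."""
    needed = int(persistence_s / sample_period_s)
    run = 0
    for prev, curr in zip(cabin_alt_ft, cabin_alt_ft[1:]):
        rate_ftpm = abs(curr - prev) / sample_period_s * 60.0
        run = run + 1 if rate_ftpm > rate_threshold_ftpm else 0
        if run >= needed:
            return True
    return False

# Hypothetical 1 Hz samples: reported cabin altitude climbing at ~2,400 ft/min.
samples = [6000 + 40 * i for i in range(8)]
print(advisory_triggered(samples, sample_period_s=1.0))  # True
```

Note that such a trigger fires on any rapid change in reported cabin altitude, regardless of its underlying cause, which is why it is worth looking at causes other than structural failure.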

Continue Reading…