Archives For System safety course

Fault trees


Here’s the fault tree module from my system safety course. A powerful, but in some ways dangerous, graphical technique. A tip o’ the hat to Pat Clements for a large chunk of the material.
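For readers who haven't met the technique before, the quantitative side of a fault tree boils down to combining basic-event probabilities through AND and OR gates up to a top event. Here's a minimal sketch of that arithmetic; the events and numbers are invented for illustration, and it assumes independent basic events (which is exactly one of the places the technique gets dangerous in practice).

```python
def and_gate(*probs):
    """AND gate: all inputs must fail, so multiply the probabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """OR gate: at least one input fails; 1 minus the product of survivals."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical top event: "pump fails to deliver" occurs if the motor fails
# OR both redundant valves stick (illustrative numbers only).
p_motor = 1e-3
p_valve = 5e-2
p_top = or_gate(p_motor, and_gate(p_valve, p_valve))
```

The danger the post alludes to lurks in that independence assumption: a common-cause failure that defeats both valves at once makes the AND gate's product wildly optimistic.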

I’ve added the system safety planning module of my system safety course to the freeware available on this site. As Eisenhower remarked, it’s all about the planning. 🙂

Safety case notes


Update to the safety case module of my UNSW course. Just added a little bit more on how to structure a physical safety case report.

Just updated the course notes for safety cases and arguments to include more on how to represent safety cases if you are not graphically inclined. All in preparation for the next system safety course in July 2016 at ADFA; places are still open, folks! A tip o’ the hat to Chris Holloway, whose work prompted the additional material. 🙂

Just finished updating the Functional Hazard Analysis course notes (to V1.2) to expand and clarify the section on complex-interaction-style functional failures. To my mind, complex interactions are where accidents actually occur, and where the guidance provided by various standards (see SAMS or ARP 4754) is also the weakest.

Event trees


I’ve just added the event trees module to the course notes.

System Safety Fundamentals Concept Cloud

System safety course, now with more case studies and software safety!

Have just added a couple of case studies and some course notes on software hazards and integrity partitioning, because, hey, I know you guys love that sort of stuff 🙂

Safety course notes



I have finally got around to putting my safety course notes up; enjoy. You can also find them off the main menu.

Feel free to read and use them under the terms of the associated creative commons license. I’d note that these are course notes, so I use a large amount of example material from other sources (because hey, a good example is a good example, right?), and where I have a source it is acknowledged in the notes. If you think I’ve missed a citation or made an error, then let me know.

To err is human, but to really screw it up takes a team of humans and computers…

How did a state-of-the-art cruiser operated by one of the world’s superpowers end up shooting down an innocent passenger aircraft? To answer that question (at least in part), here’s a case study, part of the system safety course I teach, that looks at some of the causal factors in the incident.

In the immediate aftermath of this disaster there was a lot of reflection, and work done, on how humans and complex systems interact. However, one question that has so far gone unasked is simply this: what if the crew of the USS Vincennes had just used the combat system as it was intended? What would have happened if they’d implemented a doctrinal ruleset that reflected the rules of engagement they were operating under and simply let the system do its job? After all, it was not the software that confused altitude with range on the display, or misused the IFF system, or was confused by track IDs being recycled… no, that was the crew.

ZEIT8236 System safety 2015 redux

Off to teach a course in system safety for the Navy, which ends up as a week spent at the old alma mater. Hopefully all transport legs will be uneventful. 🙂


…for my boat is so small and the ocean so huge

For a small, close-knit community like the submarine service, the loss of a boat and its crew can strike doubly hard. The USN’s response to this disaster was both effective and long-lasting, doubly impressive given it was implemented at the height of the Cold War. As part of the course that I teach on system safety, I use the Thresher as an important case study in organisational failure and recovery.


The RAN’s Collins class Subsafe program derived its strategic principles in large measure from the USN’s original program. The successful recovery of HMAS Dechaineux from a flooding incident at depth illustrates the success of both the RAN’s Subsafe program and its antecedent.

Here’s a link to version 1.3 of System Safety Fundamentals, part of the course I teach at UNSW. I’ll be putting the rest of the course material up over the next couple of months. Enjoy 🙂

A short tutorial on the architectural principles of integrity level partitioning. I wrote this a while ago, but the fundamentals remain the same. Partitioning is a very powerful design technique, but if you apply it you also need to be aware that it can interact with all sorts of other system design attributes, scheduling and fault tolerance to name but two.

The material is drawn from many different sources, which unfortunately I didn’t reference at the time, so all I can do is offer a general acknowledgement here. You can also find a more permanent link to the tutorial on my publications page.
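To make the scheduling interaction concrete, here’s a minimal sketch of temporal partitioning in the style of an ARINC 653 major frame: each partition gets a fixed time window in a repeating schedule, so a misbehaving low-integrity partition cannot steal processor time from a high-integrity one. All the partition names and budget numbers below are invented for illustration.

```python
MAJOR_FRAME_MS = 100

# Fixed at design time: (partition name, time window in ms per major frame).
SCHEDULE = [
    ("flight_control", 40),   # high integrity
    ("navigation", 30),
    ("maintenance_log", 30),  # low integrity
]

def run_frame(demands_ms):
    """Grant each partition at most its fixed window, whatever it demands."""
    granted = {}
    for name, window in SCHEDULE:
        granted[name] = min(demands_ms.get(name, 0), window)
    return granted

# The low-integrity logging partition misbehaves and demands 500 ms, but the
# flight control partition still receives its full 40 ms budget.
allocation = run_frame(
    {"flight_control": 40, "navigation": 25, "maintenance_log": 500}
)
```

The flip side, and the interaction the tutorial warns about, is that the rigidity which protects the high-integrity partition also means slack time cannot simply be handed to whichever partition needs it, which constrains the scheduling and fault-tolerance design.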

The ADFA campus

ZEIT8236 – Systems Safety course Canberra 14-18 July 2014

There are a few seats still left on the next system safety course at UNSW@Canberra. It’s a short course (here’s a taste), so if you can only get away for three days it may well suit you. You may even get to see me wear my Ushanka at this time of year, so see you there!