Archives For Safety culture
In a slight segue, I was reading Bruce Schneier’s blog on security and came across this post on the psychology behind fraud. Bruce points to this post on why, yes I know, ‘good people do bad things’. The explanation offered by researchers such as Ann Tenbrunsel of Notre Dame is that, just as we are boundedly rational in other aspects of decision making, so too are we in our ethical decisions.
In particular, the way in which decision problems are framed seems to have a great impact on how we make decisions. Basically, if a problem is framed without an ethical dimension then decision makers are much less likely to consider that aspect.
In addition to framing effects, researchers studying collusion in fraud cases found that most people seem to act from an honest desire simply to help others, regardless of any attendant ethical issues.
What fascinates me is how closely such research parallels the work on system safety and human error. Clearly, if management works within a frame based on performance and efficiency, they are simply going to overlook the downside completely, while everyone else, in a desire to be helpful, ‘goes along for the ride’.
There is, as I see it, a concrete recommendation that comes out of this research and that we can apply to safety: fundamentally, safety management systems need to be designed to take account of our weaknesses as boundedly rational actors.
From Les Hatton, here’s how, in four easy steps:
- Insist on using R = F x C in your assessment. This will panic HR (people go into HR to avoid nasty things like multiplication).
- Put “end of universe” as risk number 1 (rationale: R = F x C. Since the end of the universe has an infinite consequence C, then no matter how small the frequency F, the risk R is also infinite)
- Ignore all other risks as insignificant
- Wait for call from HR…
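Tongue in cheek, the arithmetic behind steps 2 and 3 can be sketched as follows (a minimal illustration; the `risk` helper and the example numbers are mine, not Hatton’s):

```python
import math

def risk(frequency: float, consequence: float) -> float:
    """The classic risk formula R = F x C."""
    return frequency * consequence

# Step 2: an "end of universe" event has infinite consequence,
# so no matter how vanishingly small its frequency, R is infinite.
end_of_universe = risk(frequency=1e-300, consequence=math.inf)
print(end_of_universe)  # inf

# Step 3: every finite risk is insignificant by comparison.
mundane_hazard = risk(frequency=0.5, consequence=2.0)
print(end_of_universe > mundane_hazard)  # True
```

Which is, of course, exactly why a naive frequency-times-consequence ranking breaks down at the extremes.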
A humorous note, amongst many, in an excellent presentation on the fell effect that bureaucracies can have upon the development of safety critical systems. I would add my own small corollary: when you see warning notes on microwaves and hot water services, the risk assessment lunatics have taken over the asylum…
I’ve just finished up the working week with a day-long Safety Conversations and Observations course conducted by Dr Robert Long of Human Dymensions. A good, actually very good, course with an excellent balance between the theory of risk psychology and the practicalities of successfully carrying out safety conversations. I’d recommend it to any organisation that’s seeking to take their safety culture beyond systems and paperwork. Although he’s not a great fan of engineers. 🙂
In a recent NRCOHSR white paper on the Deepwater Horizon explosion, Professor Andrew Hopkins of the Australian National University argued that the Transocean and BP management teams visiting the rig on the day of the accident failed to detect the unsafe well condition because of biases in their audit practices.
In an article published in the online magazine Spectrum, Eliza Strickland has charted the first 24 hours at Fukushima: a sobering description of the difficulty of the task facing the operators in the wake of the tsunami.
Her article identified a number of specific lessons about nuclear plant design, so in this post I thought I’d look at whether more general lessons for high consequence system design could be inferred in turn from her list.
I attended the annual Rail Safety conference for 2011 earlier in the year, and one of the speakers was Group Captain Alan Clements, the Director Defence Aviation Safety and Air Force Safety. His presentation was interesting both for where the ADO is going with their aviation safety management system and for the historical perspective, and statistics, it provided.