Archives For Deepwater Horizon

Deepwater Horizon (Image source: NY Times)

Mindfulness and paying attention to the wrong things

As I talked about in a previous post on the Deepwater Horizon disaster, I believe one of the underlying reasons, perhaps the reason, for Deepwater's problems escalating into a catastrophe was management's attentional blindness to the indicators of problems on the rig, and that this blindness was due in large part to a corporate focus on individual worker injury rates at the expense of thinking about those rare but catastrophic risks that James Reason calls organisational accidents. And, in a coincidence to end all coincidences, a high level management team was actually visiting the rig just prior to the disaster to congratulate the crew on seven years of injury free operations.

So it was kind of interesting to read, in James Reason's latest work 'A Life in Error', his conclusion that the road to epic organisational accidents is paved with declining or low Lost Time Injury Frequency Rates (LTIFR). He goes on to give the following examples in support:

  • Westray mining disaster (1992), Canada. Twenty-six miners died, yet the company had received an award for reducing its LTIFR.
  • Moura mining disaster (1994), Queensland. Eleven miners died. The company had halved its LTIFR in the four years preceding the accident.
  • Longford gas plant explosion (1998), Victoria. Two died and eight were injured. Safety effort was directed at reducing LTIFR rather than at identifying and fixing the major hazards posed by un-repaired equipment.
  • Texas City refinery explosion (2005), Texas. The Independent Safety Review Panel found that BP relied on injury rates to evaluate safety performance.

As Reason concludes, the causes of accidents that result in a direct (and individual) injury are very different from those that result in what he calls an organisational accident, that is, one that is both rare and truly catastrophic. Data gathered on LTIFR therefore tells you nothing about the likelihood of such a catastrophic event and, as it turns out, can be quite misleading. My belief is that such data is not only misleading, its salience actively channelises management attention, thereby ensuring the organisation is effectively unable to see the indications of impending disaster.
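
For readers unfamiliar with the metric, LTIFR is simply a rate of minor (lost time) injuries normalised by exposure hours, which is exactly why it carries no information about rare, high-consequence events. A minimal sketch of the conventional calculation follows; the figures are illustrative only and not drawn from any of the cases above, and note that some jurisdictions normalise per 200,000 rather than per million hours worked.

```python
def ltifr(lost_time_injuries: int, hours_worked: float, per_hours: float = 1_000_000) -> float:
    """Lost Time Injury Frequency Rate: lost time injuries per `per_hours` hours worked."""
    return lost_time_injuries * per_hours / hours_worked

# A site can post an enviably low LTIFR while its exposure to a rare,
# catastrophic process failure remains completely invisible to this metric.
print(ltifr(lost_time_injuries=2, hours_worked=4_000_000))  # 0.5 injuries per million hours
```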

So if you see an organisation whose operations can go catastrophically wrong, but all you hear from management is proud pronouncements as to how they're reducing their lost time injury rate, then you might want to consider maintaining a safe, perhaps very safe, distance.

Reason's A Life in Error is an excellent read by the way; I give it four omitted critical procedural steps out of five. 🙂

In a recent NRCOHSR white paper on the Deepwater Horizon explosion, Professor Andrew Hopkins of the Australian National University argued that the Transocean and BP management teams visiting the rig on the day of the accident failed to detect the unsafe well condition because of biases in their audit practices.


Recently there's been some robust discussion over on the Safety Critical Mail List at York regarding the utility of safety cases and performance-based safety standards (as exemplified by the UK safety case regime) versus more prescriptive design standards (as exemplified by the aerospace industry's FAR regulations). To provide one UK regulator's perspective, here's a presentation by Taf Powell, Director of the Offshore Division of the Health and Safety Executive's Hazardous Industries Directorate, UK, on the state of safety cases in the UK offshore industry circa 2005. Of course, his talk came well before the 2010 Deepwater Horizon disaster.
