Archives For Decision making

If you want to know where Crew Resource Management as a discipline started, then you need to read NASA Technical Memorandum 78482, ‘A Simulator Study of the Interaction of Pilot Workload With Errors, Vigilance, and Decisions’, by H.P. Ruffell Smith, the British-born physician and pilot. Before this study it was hours in the seat and line seniority that mattered when things went to hell. After it the aviation industry started to realise that crews rose or fell on the basis of how well they worked together, and that a good captain got the best out of his team. Today, whether crews get it right, as they did on QF72, or terribly wrong, as they did on AF447, the lens we view their performance through has been irrevocably shaped by the work of Ruffell Smith. From little seeds great oaks grow indeed.

Matrix (Image source: The Matrix film)

The law of unintended consequences

There are some significant consequences to the principle of reasonable practicability enshrined within the Australian WHS Act. The Act is particularly problematic for risk-based software assurance standards, where assessed risk is used to determine the degree of assurance effort that should be applied (the safety integrity levels of IEC 61508, or the design assurance levels of DO-178C, for example). In part one of this three-part post I’ll discuss the implications of the Act for the functional safety standard IEC 61508; in the second part I’ll look at aerospace and its software assurance standard DO-178C; and finally I’ll try to piece together a software assurance strategy that is compliant with the Act. Continue Reading…

What burns in Vegas…

Ladies and gentlemen you need to leave, like leave your luggage!

This has been another moment of aircraft evacuation Zen.

787 Battery after fire (Image source: NTSB)

The NTSB have released their final report on the Boeing 787 Dreamliner Li-Ion battery fires. The report makes interesting reading, but for me the most telling point is summarised in conclusion seven, which I quote below.

Conclusion 7. Boeing’s electrical power system safety assessment did not consider the most severe effects of a cell internal short circuit and include requirements to mitigate related risks, and the review of the assessment by Boeing authorized representatives and Federal Aviation Administration certification engineers did not reveal this deficiency.

NTSB/AIR-14/01 (p. 78)

In other words, Boeing got themselves into a position where the ‘assumed worst case’ of their safety assessment was far less severe than the reality. This failure to imagine the worst meant that when they aggressively optimised the battery design for weight rather than for thermal performance, the risks they were actually running were unwittingly much higher than they believed.

The first principle is that you must not fool yourself, and you are the easiest person to fool

Richard P. Feynman

I’m also thinking that the behaviour of Boeing is consistent with what Rae et al. call probative blindness. That is, the safety activities that were conducted were intended to comply with regulatory requirements rather than to actually determine what hazards existed and what risk they posed.

… there is a high level of corporate confidence in the safety of the [Nimrod aircraft]. However, the lack of structured evidence to support this confidence clearly requires rectifying, in order to meet forthcoming legislation and to achieve compliance.

Nimrod Safety Management Plan 2002 (1)

As the quote from the Nimrod program deftly illustrates, safety analyses are often (2) conducted simply to confirm what we already ‘know’: that the system is safe. They are non-probative, if you will. In these circumstances the objective is compliance with the regulations rather than the generation of evidence that our system might be unsafe, and doing more or better safety analysis is unlikely to prevent an accident, because the evidence will not cause beliefs to change. Belief, it seems, is a powerful thing.

The Boeing battery saga also illustrates how much regulators like the FAA actually rely on the technical competence of those being regulated, and how fragile that regulatory relationship is when it comes to dealing with the safety of emerging technologies.

Notes

1. As quoted in ‘Probative Blindness: How Safety Activity Can Fail to Update Beliefs About Safety’, A.J. Rae, J.A. McDermid, R.D. Alexander and M. Nicholson (IET SSCS Conference, 2014).

2. Actually, in aerospace I’d assert that it’s normal practice to carry out hazard analyses simply to comply with a regulatory requirement. As far as the organisation commissioning them is concerned, the results are going to tell them what they already know: that the system is safe.

Finding MH370

26/08/2014

MH370 underwater search area map (Image source: Australian Govt)

Finding MH370 is going to be a bitch

The aircraft has gone down in an area that is the undersea equivalent of the eastern slopes of the Rockies, well before anyone had mapped them. Add to that a search area of thousands of square kilometres in about as isolated a spot as you can imagine, and a search zone interpolated from satellite pings, and you can see that it’s going to be tough.

Continue Reading…

John Adams has an interesting take on the bureaucratic approach to risk management in his post ‘reducing zero risk’.

The problem is that each decision to further reduce an already acceptably low risk is always defended as being ‘cheap’, but when you add up the increments it’s death by a thousand cuts, because no one ever considers the aggregated opportunity cost.
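To make that concrete, here’s a back-of-the-envelope sketch with every number invented for illustration: ten successive controls, each ‘cheap’ on its own, applied to an already acceptably low risk. The marginal risk removed shrinks geometrically, while the cumulative spend, and hence the implied cost per statistical life saved, keeps climbing.

```python
# Back-of-the-envelope sketch (all figures invented for illustration):
# successive 'cheap' controls applied to an already acceptably low risk.
baseline_risk = 1e-6          # annual fatality risk, already 'acceptable'
population = 10_000           # people exposed
cost_per_control = 50_000     # dollars; each control looks cheap in isolation
reduction_per_control = 0.2   # each control trims 20% off the remaining risk

risk = baseline_risk
total_cost = 0.0
for n in range(1, 11):
    risk *= (1 - reduction_per_control)
    total_cost += cost_per_control
    lives_saved = (baseline_risk - risk) * population
    print(f"control {n:2d}: spend ${total_cost:>9,.0f}, "
          f"implied cost per statistical life saved ${total_cost / lives_saved:,.0f}")
```

Each $50,000 increment passes the ‘it’s cheap’ test, but by the tenth control the implied price of a statistical life has more than doubled, and the half million dollars quietly committed is never weighed against what it could have bought elsewhere.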

This remorseless slide of our public and private institutions into a hysteria of risk aversion seems to me to be due to an inherent societal psychosis that nations sharing the English common law tradition are prone to. At best we end up with pointless safety theatre; at worst we end up bankrupting our culture.

The igloo of uncertainty (Image source: UNEP 2010)

Ethics, uncertainty and decision making

The name of the model made me smile, but this article, ‘The Ethics of Uncertainty’ by Tannert, Elvers and Jandrig, argues that where uncertainty exists, research should be considered part of an ethical approach to managing risk.

Continue Reading…

Taboo transactions and the safety dilemma

Again my thanks go to Ross Anderson over on the Light Blue Touchpaper blog for the reference, this time to a paper by Alan Fiske, an anthropologist, and Philip Tetlock, a social psychologist, on what they term taboo transactions. What they point out is that there are domains of sharing in society, each of which works on different rules: communal versus reciprocal obligations, for example, or authority versus market. Within each domain we socially ‘transact’ trade-offs between equivalent social goods.

Continue Reading…

I’ve just finished up the working week with a day-long Safety Conversations and Observations course conducted by Dr Robert Long of Human Dymensions. A good, actually very good, course with an excellent balance between the theory of risk psychology and the practicalities of successfully carrying out safety conversations. I’d recommend it to any organisation seeking to take its safety culture beyond systems and paperwork. Although he’s not a great fan of engineers. 🙂


An interesting theory of risk perception and communication is put forward by Kahan (2012) in the context of climate risk.

Continue Reading…

I think it was Donald Norman who pointed out that accidents in complex automated systems often arise from unintended interactions between operator and automation, where both are trying to control the same system.

Now Norman’s comment is an insightful one, but the logical follow-on question is: how exactly are automation and operator each trying to control the system?
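Here’s a toy model of the situation, with all values invented: two perfectly sensible proportional controllers sharing one plant, each holding a different setpoint. Neither is faulty, yet the system settles where their efforts cancel, at neither target.

```python
# Toy model (values invented): one shared state, two independent
# proportional controllers -- an 'operator' and an 'automation' --
# each steering towards its own target, both acting on the same plant.
x = 0.0                  # shared system state, say pitch attitude in degrees
operator_target = 5.0    # where the operator is trying to put it
automation_target = 0.0  # where the automation is trying to hold it
k_op, k_auto = 0.2, 0.3  # proportional gains

for step in range(15):
    u_op = k_op * (operator_target - x)        # operator's corrective input
    u_auto = k_auto * (automation_target - x)  # automation's corrective input
    x += u_op + u_auto                         # the inputs sum on one plant
    print(f"step {step:2d}: x = {x:5.2f}")

# x settles at k_op * operator_target / (k_op + k_auto) = 2.0, which is
# neither target: each controller sees the other as an unexplained disturbance.
```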

Continue Reading...

Fighter Cockpit Rear View Mirror

What the economic theory of sunk costs tells us about plan continuation bias

Plan continuation bias is a subtle cognitive bias that tends to force the continuation of an existing plan or course of action even in the face of changing conditions. In aerospace it has been recognised as a significant causal factor in accidents, with a 2004 NASA study finding that aircrew exhibited this bias in 9 of the 19 accidents studied. One explanation of this behaviour may be a version of the well-known ‘sunk cost’ heuristic of economics.
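A deliberately crude sketch of that heuristic at work (numbers invented, and not the NASA study’s model): a rational comparison between continuing an approach and going around weighs only future costs, while the sunk-cost version loads the effort already invested onto the option that would ‘waste’ it, which is enough to flip the decision.

```python
# Toy decision model (invented numbers): continue an unstabilised
# approach, or go around? Costs are in arbitrary units of fuel/time/risk.
cost_already_spent = 8.0     # effort already invested in the approach (sunk)
future_cost_continue = 10.0  # expected future cost of pressing on
future_cost_go_around = 6.0  # expected future cost of going around

# Rational comparison: sunk costs are ignored, only the future counts.
rational = "go around" if future_cost_go_around < future_cost_continue else "continue"

# Sunk-cost comparison: abandoning the approach is felt to 'waste' the
# investment, so the sunk cost is loaded onto the go-around option.
biased = ("go around"
          if future_cost_go_around + cost_already_spent < future_cost_continue
          else "continue")

print(f"rational choice: {rational}")   # go around (6 < 10)
print(f"biased choice:   {biased}")     # continue (14 > 10)
```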

Continue Reading…

In a previous post I discussed how, in HOT systems, the operator will inevitably be asked to intervene in situations unplanned for by the designer. As such situations are inherently not ‘handled’ by the system, this has strong implications for the design of the human-machine interface.

Continue Reading...

Often we make decisions as part of a group, and in that environment there is a strong possibility that the group’s cohesiveness will lead members to minimise interpersonal conflict and reach consensus at the expense of critically evaluating and testing ideas.

Continue Reading...

In a series of occasional posts on this blog I’ve discussed some of the pitfalls of heuristics-based decision making, as well as the risks of making decisions on incomplete information or under time pressure. As an aid to the reader I’ve provided a consolidated list here.

Continue Reading...

Racing the Devil

21/03/2011

This railway crossing near-miss, due to a driver ‘racing the devil’, is on the face of it a classic example of the perversity of human behaviour. But on closer examination it illustrates the risk we introduce when transitioning from a regime of approved operational procedures to one of procedures that are merely accepted or tolerated.

Continue Reading...

As the Latin root of the word ‘risk’ indicates, an integral part of risk taking is the benefit we hope to achieve. Yet often decision makers do not have a clear understanding of what the upside or payoff actually is.

Continue Reading...

One of the current concepts in decision-making theory is bounded rationality. In essence, we humans try to act rationally but are constrained by limits of time and information. So if we make decisions in this way, what are some useful ‘tools of the trade’ that can guide our decision making?
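One classic answer, and my assumption as to where any such list starts, is Herbert Simon’s satisficing heuristic: rather than exhaustively scoring every option to find the optimum, accept the first one that clears an aspiration level. A minimal sketch:

```python
# Minimal sketch of Simon's 'satisficing' heuristic, the classic
# bounded-rationality tool: take the first option that is good enough,
# on the assumption that evaluating an option is the expensive step.
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T], score: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Return the first option whose score meets the aspiration level."""
    for option in options:            # options taken in the order encountered
        if score(option) >= aspiration:
            return option             # good enough: stop searching
    return None                       # nothing cleared the bar

# Usage: pick a divert airfield that is 'good enough' rather than optimal.
airfields = [("Alpha", 0.55), ("Bravo", 0.72), ("Charlie", 0.93)]
print(satisfice(airfields, score=lambda a: a[1], aspiration=0.7))
# -> ('Bravo', 0.72); the search never spends effort scoring 'Charlie'.
```

The design point is that the expensive step, evaluating an option, is paid only until something clears the bar, which is precisely the time- and information-bounded behaviour that bounded rationality describes.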

Continue Reading...

So why is one in a million an acceptable risk? The answer may be simpler than we think.

Continue Reading...

Disappointingly, the Black Saturday Royal Commission report makes no mention of the effect of cognitive biases on the ‘stay or go’ decision, instead assuming that such decisions are made in a completely rational fashion. As Black Saturday and other disasters show, this is rarely the case.

Continue Reading...

One of the positive outcomes from a disaster such as Black Saturday is that a window of opportunity opens in which opinions, behaviour and even public policy can be changed.

Continue Reading...

From the BEA’s second interim report (BEA 2009) we now know that AF 447 was flown into the water in a deep stall. Given the training and experience of the flight crew, how did they end up in such a situation?

Continue Reading...

If you read through the Intergovernmental Panel on Climate Change (IPCC) reports you’ll strike qualitative phrases such as ‘likely’ and ‘high confidence’ used to describe uncertainty. But is there a credible basis for these terms?

Continue Reading...

Recent incidents involving Airbus aircraft have again focused attention on the company’s approach to cockpit automation and its interaction with the crew.

Underlying the current debate is perhaps a general view that the automation should somehow be ‘perfect’, and that failure of automation is also a form of moral failing (1). While this Weltanschauung undoubtedly serves certain social and psychological needs, the debate it engenders doesn’t really further productive discussion of what could, or indeed should, be done to improve cockpit automation. So let’s take a closer look at the protection laws implemented in the Airbus flight control automation and compare them with how experienced aircrew actually make decisions in the cockpit.
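Before doing so, a cartoon of what a protection law does (my toy illustration, not Airbus’s actual implementation): the pilot commands what he likes, and the automation passes the demand through until it would breach a hard limit that it will not allow to be crossed.

```python
# Cartoon of an envelope protection law (illustrative only, not the
# actual Airbus control laws): pilot demand passes through unmodified
# until it would exceed the angle-of-attack protection threshold.
ALPHA_PROT = 15.0  # invented protection limit, degrees of angle of attack

def protected_alpha_demand(pilot_demand: float) -> float:
    """Clamp the commanded angle of attack to the protection limit."""
    return min(pilot_demand, ALPHA_PROT)

print(protected_alpha_demand(8.0))   # 8.0  -- pilot gets what he asked for
print(protected_alpha_demand(22.0))  # 15.0 -- demand clamped at the limit
```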

Continue Reading…

Fire has been an integral part of the Australian ecosystem for tens of thousands of years. Both the landscape and its native inhabitants have adapted to this periodic cycle of fire and regeneration. These fires are not bolts from the blue; they occur regularly and predictably. Yet modern Australians seem to have difficulty understanding that their land will burn, regularly, and sometimes catastrophically.

So why do we studiously avoid serious consideration of the hazards of living in a country that regularly produces firestorms? Why, in the time of fire, do we go through the same cycle of shock, recrimination, exhortations to do better, diminishing interest and finally forgetfulness?

Continue Reading...