Archives For Philosophy

The philosophical aspects of safety and risk.

Waaay back in 2002 Chris Holloway wrote a paper that used a fictional civil court case involving the hazardous failure of software to show that much of the expertise and received wisdom of software engineering would be judged, by the standards of the US federal judiciary, as junk science, or at best mere opinion.

Rereading the transcripts of Philip Koopman and Michael Barr in the 2013 Toyota spaghetti monster case, I am struck both by how little things have changed and by how far the actual state of the industry can be from the state of the practice, let alone the state of the art. Life recapitulates art I guess, though not in a good way.

When Formal Systems Kill, an interesting paper by Lee Pike and Darren Abramson, looks at computers as automatic formal systems from an ethical perspective. Of course, as we all know, the 9000 series has a perfect operational record…

Easter 2014 bus-cycle accident (Image source: James Brickwood)

The limits of rational-legal authority

One of the underlying and unquestioned aspects of modern western society is that the power of the state is derived from a rational-legal authority, that is, rationality in the Weberian sense of purposive or instrumental rationality in pursuit of some end. But what if it isn’t? What if the decisions of the state are based more on beliefs about how people ought to behave and how things ought to be than on reality? What, in other words, if the lunatics really are running the asylum?

Continue Reading…

On being a professional

I’m currently dealing with some software types who, god bless their woolly socks, are less than enthusiastic about dealing with all this ‘paperwork’ and ‘process’, which got me to thinking about the nature of professionalism.

While the approach of ‘Bürokratie über alles’ doesn’t sit well with me, I confess that on the other side of the coin I see the girls-just-wanna-have-fun mentality of many software developers as symptomatic of a lack of professionalism amongst the programming class. Professionals in my book intuitively understand that the ‘job’ entails three parts: the preparing, the doing and the cleaning up, in a stoichiometric ratio of 4:2:4. That’s right, any job worth doing is a basic mix of two parts fun to eight parts diligence, and that’s true whether you’re a carpenter or a heart surgeon.

Unfortunately the field of computer science seems to attract what I can only call man-children, those folk who, like Peter Pan, want to fly around Never Land and never grow up, which is OK if you’re coding Java beans for a funky hipster website, but not so great if you’re writing an embedded program for a pacemaker, and so in response we seem to have process*.

Now as a wise man once remarked, process really says you don’t trust your people, so I draw the logical conclusion that the continuing process obsession of the software community simply reflects an endemic lack of trust, due to the aforementioned lack of professionalism, in that field. In contrast I trust my heart surgeon (or my master carpenter) because she is an avowed, experienced and skillful professional, not because she’s CMMI Level 4 certified.

*I guess that’s also why we have the systems engineer. :)

…and the value of virtuous witnesses

I have to say that I’ve never been terribly impressed with IEC 61508, given it purports to be so arcane that it requires a priesthood of independent safety assessors to reliably interpret and sanction its implementation. My view is that if your standard is that difficult, then you need to rewrite the standard.

Which is where I would have parked my unhappiness with the general 61508 concept of an ISA, until I remembered a paper written by John Downer on how the FAA regulates the aerospace sector. Within the FAA’s regulatory framework there exists an analog to the ISA role, in the form of what are called Designated Engineering Representatives, or DERs. In a similar independent sign-off role to the ISAs, DERs are paid by the company they work for, but carry out a certifying function on behalf of the FAA.

Continue Reading…

Current practices in formal safety argument notation, such as Goal Structuring Notation (GSN) or Claims Argument Evidence (CAE), rely on the practical argument model developed by the philosopher Toulmin (1958). Toulmin focused on the justification aspects of arguments rather than the inferential, and developed a model of these ‘real world’ arguments based on fact, conclusion, warrant, backing and qualifier entities.

Using Toulmin’s model, from evidence one can draw a conclusion, as long as the inference is warranted; the warrant itself may be supported by additional backing, and the conclusion may be contingent upon some qualifying statement. Importantly, one of the qualifier elements in practical arguments is what Toulmin called a ‘rebuttal’, that is, some form of legitimate constraint that may be placed on the conclusion drawn. We’ll get to why that’s important in a second.
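As a minimal sketch of how the model’s entities hang together (the class and field names below are my own, not Toulmin’s), using Toulmin’s own stock example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ToulminArgument:
    """One practical argument in Toulmin's (1958) model."""
    facts: List[str]                # the evidence (Toulmin's 'data')
    conclusion: str                 # the claim drawn from the facts
    warrant: str                    # why the facts license the conclusion
    backing: Optional[str] = None   # support for the warrant itself
    qualifier: Optional[str] = None # strength of the claim, e.g. 'presumably'
    rebuttals: List[str] = field(default_factory=list)  # conditions defeating the claim

harry = ToulminArgument(
    facts=["Harry was born in Bermuda"],
    conclusion="Harry is a British subject",
    warrant="A man born in Bermuda will generally be a British subject",
    backing="The relevant statutes and legal provisions",
    qualifier="presumably",
    rebuttals=["Both his parents were aliens",
               "He has become a naturalised American"],
)
```

Note that in this structure the rebuttals are a first-class part of the argument; hold that thought.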

Toulmin Argumentation Example

You see, Toulmin developed his model so that one could actually analyse an argument, that is, argument in the verb sense of ‘we are having a safety argument’. Formal safety arguments in safety cases however are inherently advocacy positions, and the rebuttal part of Toulmin’s model finds no place in them. In the noun world of safety cases, argument is used in the sense of ‘there is the 12-volume safety argument on the shelf’, and if the object is to produce something rather than to discuss it, then there’s no need for a claim and rebuttal pairing, is there?

In fact you won’t find an explicit rebuttal form in either GSN or CAE as far as I am aware; it seems that the very ‘idea’ of rebuttal has been pruned from the language of both. Of course it’s hard to express a concept if you don’t have a word for it, a nice little example of how the form of a language can control the conversation. Language is power, so they say.

 

MH370 Satellite Image (Image source: AMSA)

MH370 and privileging hypotheses

The further away we move from whatever event initiated the disappearance of MH370, the less entanglement there is between circumstances and the event, and thus the more difficult it is to make legitimate inferences about what happened. In essence the signal-to-noise ratio decreases exponentially as the causal distance from the event increases; thus the best evidence is that which is intimately entwined with what was going on onboard MH370, while evidence obtained at greater distances in time or space is of correspondingly lesser importance.
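To make that intuition concrete, here’s a toy sketch of exponential evidence decay; the function, decay rate and ‘distances’ are invented for illustration, not drawn from the investigation:

```python
import math

def evidence_weight(snr_at_event: float, causal_distance: float,
                    decay_rate: float = 0.5) -> float:
    """Toy model: evidential signal-to-noise ratio decaying
    exponentially with causal distance from the initiating event."""
    return snr_at_event * math.exp(-decay_rate * causal_distance)

# Evidence entangled with the event itself (distance 0) versus
# evidence gathered much later and further away (distance 5)
print(evidence_weight(10.0, 0.0))  # 10.0
print(evidence_weight(10.0, 5.0))  # ~0.82
```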

Continue Reading…

Process is no substitute for paying attention

As Weick has pointed out, to manage the unexpected we need to be reliably mindful, not reliably mindless. Obvious as that truism may be, those who invest heavily in plans, procedures, process and policy also end up perpetuating and reinforcing a whole raft of expectations, and thus investing in an organisational culture of mindlessness rather than mindfulness.

Continue Reading…

The igloo of uncertainty (Image source: UNEP 2010)

Ethics, uncertainty and decision making

The name of the model made me smile, but this article, The Ethics of Uncertainty by Tannert, Elvers and Jandrig, argues that where uncertainty exists, research should be considered as part of an ethical approach to managing risk.

Continue Reading…

Taboo transactions and the safety dilemma

Again my thanks go to Ross Anderson over on the Light Blue Touchpaper blog for the reference, this time to a paper by Alan Fiske, an anthropologist, and Philip Tetlock, a social psychologist, on what they term taboo transactions. What they point out is that there are domains of sharing in society which each work on different rules; communal versus reciprocal obligations, for example, or authority versus market. And within each domain we socially ‘transact’ trade-offs between equivalent social goods.

Continue Reading…

I was reading a post by Ross Anderson on his dismal experiences at John Lewis when I ran across the term security theatre. I’d actually heard the term before, it was originally coined by Bruce Schneier, but this time it got me thinking about how much activity in the safety field is really nothing more than a theatrical device that gives the appearance of achieving safety, but not the reality. From zero harm initiatives to hi-vis vests, from the stylised playbook of public consultation to the use of safety integrity levels that purport to show a system is safe, how much of this adds any real value?

Worse yet, and as with security theatre, an entire industry has grown up around this culture of risk, which in reality amounts to a culture of risk aversion in western society. As I see it, risk as a cultural concept is like fire: a dangerous tool, and an even more terrible master.

An articulated guess beats an unspoken assumption

Frederick Brooks

A point that Fred Brooks makes in his recent work The Design of Design is that it’s wiser to explicitly make specific assumptions, even if that entails guessing the values, than to leave the assumption unstated and vague because ‘we just don’t know’.

Brooks notes that while specific and explicit assumptions may be questioned, implicit and vague ones definitely won’t be. If a critical aspect of your design rests upon such fuzzy, unarticulated assumptions, then the results can be dire. Continue Reading…

From Les Hatton, here’s how, in four easy steps:

  1. Insist on using R = F x C in your assessment. This will panic HR (People go into HR to avoid nasty things like multiplication.)
  2. Put “end of universe” as risk number 1 (Rationale: R = F x C. Since the end of the universe has an infinite consequence C, then no matter how small the frequency F, the risk R is also infinite)
  3. Ignore all other risks as insignificant
  4. Wait for call from HR…

A humorous note, amongst many, in an excellent presentation on the fell effect that bureaucracies can have upon the development of safety critical systems. I would add my own small corollary: when you see warning notes on microwaves and hot water services, the risk assessment lunatics have taken over the asylum…
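For what it’s worth, the arithmetic of step 2 holds up, as this minimal sketch shows (the frequency here is invented for illustration):

```python
import math

def risk(frequency: float, consequence: float) -> float:
    """The classic risk formula, R = F x C."""
    return frequency * consequence

# However vanishingly small the frequency, an infinite consequence
# yields an infinite risk, swamping everything else on the register.
print(risk(frequency=1e-100, consequence=math.inf))  # inf
```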

Battery post-fire (Image source: NTSB)

The NTSB has released its interim report on the Boeing 787 JAL battery fire, and it appears that Boeing’s initial safety assessment had concluded that the only way in which a battery fire would eventuate was through overcharging. Continue Reading…

787 Lithium Battery (Image source: JTSB)

But, we tested it? Didn’t we?

Earlier reports on the initial development of the Boeing 787 lithium battery indicated that Boeing engineers had conducted tests to confirm that a single cell failure would not lead to a cascading thermal runaway amongst the remaining cells. According to these reports their tests were successful, so what went wrong?

Continue Reading…

Just updated the post Why Safety Integrity Levels Are Pseudo-science with additional reference material and links to where it’s available on the web. Oh, and they’re still pseudo-science…

Just finished reading the excellent paper A Conundrum: Logic, Mathematics and Science Are Not Enough by John Holloway on the swirling currents of politics, economics and emotion that can surround and affect any discussion of safety. The paper neatly illustrates why the canonical rational-philosophical model of expert knowledge is inherently flawed.

What I find interesting as a practising engineer is that although everyday debates and discussions with your peers emphasise the subjectivity of engineering ‘knowledge’, as engineers we all still like to pretend, and behave, as if it were not.

“Knowledge is an unending adventure at the edge of uncertainty”

Jacob Bronowski

British mathematician, biologist, historian of science, theatre author, poet and inventor.

In June of 2011 the Australian Safety Critical Systems Association (ASCSA) published a short discussion paper on what they believed to be the philosophical principles necessary to successfully guide the development of a safety critical system. The paper identified eight management and eight technical principles, but do these principles do justice to the purported purpose of the paper?

Continue Reading…

Did the designers of the Japanese seawalls consider all the factors?

In an eerie parallel with the Blayais nuclear power plant flooding incident, it appears that the designers of the tsunami protection for the Japanese coastal cities and infrastructure hit by the 2011 earthquake did not consider all the combinations of environmental factors that determine the height of a tsunami.

Continue Reading…

Thinking about the unintentional and contra-indicating stall warning signal of AF 447, I was struck by the common themes between AF 447 and the Titanic. In both cases the design teams produced a vehicle compliant with the regulations of the day, but in both an implicit design assumption as to how the system would be operated was invalidated.

Continue Reading...

Why more information does not automatically reduce risk

I recently re-read the article Risks and Riddles by Gregory Treverton on the difference between a puzzle and a mystery. Treverton’s thesis, taken up by Malcolm Gladwell in Open Secrets, is that there is a significant difference between puzzles, in which the answer hinges on a known missing piece, and mysteries, in which the answer is contingent upon information that may be ambiguous or even in conflict. Continue Reading…

Over the years a recurring question raised about the design of FBW aircraft has been whether pilots constrained by software-embedded protection laws really have the authority to do what is necessary to avoid an accident. But this question falls into the trap of characterising the software as an entity in and of itself. The real question is: should the engineers who developed the software be the final authority?

Continue Reading...

Why taking risk is an inherent part of the human condition

On the 6th of May 1968 Neil Armstrong stepped aboard the Lunar Landing Training Vehicle (LLTV) for a routine training mission. During the flight the vehicle went out of control and crashed, with Armstrong ejecting to safety seconds before impact. Continue Reading…

Blayais Plant (Image source: Wikipedia Commons)

What a near-miss flooding incident at a French nuclear plant in 1999 and the 2011 Fukushima disaster can tell us about fault tolerance and designing for reactor safety.

Continue Reading…

A report by the AIA on engine rotor bursts and their expected severity raises questions about the levels of damage sustained by QF 32.

Continue Reading...

It appears that the underlying certification basis for aircraft safety in the event of an intermediate power turbine rotor burst is not supported by the rotor failure seen on QF 32.

Continue Reading...

The Titanic effect


So why did the Titanic sink? The reason highlights the role of implicit design assumptions in complex accidents, and the interaction of design with the operation of safety critical systems.

Continue Reading...

Why do safety critical organisations also fail to respond to sentinel events?

Continue Reading...

Prior to the fourth IPCC assessment, the IPCC issued a set of lead author guidance notes on how to describe uncertainty. In these notes the IPCC laid out a methodology for dealing with various classes of uncertainty. Unfortunately the guidance also fell into a fatal trap.

Continue Reading...

One of the tenets of safety engineering is that simple systems are better. Many practical reasons are advanced to justify this assertion, but I’ve always wondered what theoretical justification, if any, there was for such a position.

Continue Reading...

The principle of phenotype and genotype is used to explain the variability amongst definitions of hazard.

Continue Reading...

Buncefield (Image source: Royal Air Support Unit)

The use of integrity levels to achieve ultra high levels of safety has become an ‘accepted wisdom’ in the safety community. Yet I remain unconvinced as to their efficacy, and in this post I argue that integrity levels are not scientific in any real sense of that term.

Continue Reading…

Why is the concept of a hazard so hard to pin down? Wittgenstein provides some pointers as to why there is less to this old chestnut than there appears to be.

Continue Reading...