Four black swans

Or how do we measure the unknown?

The problem is that as our understanding and control of known risks increases, the remaining risk in any system becomes increasingly dominated by the ‘unknown’. The higher the integrity of our systems, the more uncertainty we have over the unknown and unknowable residual risk. What we need is a way to measure, express and reason about such deep uncertainty, and I don’t mean tools like Pascalian calculus or Bayesian prior belief structures, but a way to measure and judge ontological uncertainty.

Even if we can’t measure ontological uncertainty directly, perhaps there are indirect measures? Perhaps there’s a way to infer something from the Platonic shadow that such uncertainty casts on the wall, so to speak. Nassim Taleb would say no; the unknowability of such events is, after all, the central thesis of his Ludic Fallacy. But I still think it’s worth exploring, because while he might be right, he may also be wrong.

*With apologies to Nassim Taleb.


Boeing 787-8 N787BA cockpit (Image source: Alex Beltyukov CC BY-SA 3.0)

The Dreamliner and the Network

Big complicated technologies are rarely (perhaps never) developed by one organisation. Instead they’re a patchwork quilt of individual systems developed by domain experts, with the whole stitched together by a single authority or agency. This practice is nothing new; it’s been around since the earliest days of the cybernetic era, and it’s a classic tool that organisations and engineers use to deal with industrial scale design tasks (1). But what is different is that we no longer design systems, and systems of systems, as loose federations of entities. We now think of and design our systems as networks, and thus our systems of systems have become a ‘network of networks’ that exhibits much greater degrees of interdependence.

Continue Reading…

In by the out door

787 Battery after fire (Image source: NTSB)

The NTSB have released their final report on the Boeing 787 Dreamliner Li-Ion battery fires. The report makes interesting reading, but for me the most telling point is summarised in conclusion seven, which I quote below.

Conclusion 7. Boeing’s electrical power system safety assessment did not consider the most severe effects of a cell internal short circuit and include requirements to mitigate related risks, and the review of the assessment by Boeing authorized representatives and Federal Aviation Administration certification engineers did not reveal this deficiency.

NTSB/AIR-14/01 (p. 78)

In other words Boeing got themselves into a position with their safety assessment where their ‘assumed worst case’ was far less severe than the reality. This failure to imagine the worst meant that when they aggressively optimised the battery design for weight, rather than thermally optimising it, the risks they were actually running were unwittingly so much higher.

The first principle is that you must not fool yourself, and you are the easiest person to fool

Richard P. Feynman

I’m also thinking that the behaviour of Boeing is consistent with what McDermid et al. call probative blindness. That is, the safety activities that were conducted were intended to comply with regulatory requirements rather than to actually determine what hazards existed and what their risks were.

… there is a high level of corporate confidence in the safety of the [Nimrod aircraft]. However, the lack of structured evidence to support this confidence clearly requires rectifying, in order to meet forthcoming legislation and to achieve compliance.

Nimrod Safety Management Plan 2002 (1)

As the quote from the Nimrod program deftly illustrates, safety analyses are often (2) conducted simply to confirm what we already ‘know’: that the system is safe. Non-probative, if you will. In these circumstances the objective is compliance with the regulations rather than the generation of evidence that our system is unsafe. In such circumstances doing more or better safety analysis is unlikely to prevent an accident, because the evidence will not cause beliefs to change; belief, it seems, is a powerful thing.

The Boeing battery saga also illustrates how much regulators like the FAA actually rely on the technical competence of those being regulated, and how fragile that regulatory relationship is when it comes to dealing with the safety of emerging technologies.

Notes

1. As quoted in Probative Blindness: How Safety Activity can fail to Update Beliefs about Safety, A.J. Rae, J.A. McDermid, R.D. Alexander, M. Nicholson (IET SSCS Conference 2014).

2. Actually in aerospace I’d assert that it’s normal practice to carry out hazard analyses simply to comply with a regulatory requirement. As far as the organisation commissioning them is concerned the results are going to tell them what they already know: that the system is safe.

Here’s a short tutorial I put together (in a bit of a rush) about the ‘mechanics’ of producing compliance findings as part of the ADF’s Airworthiness Regime. Hopefully this will be of assistance to anyone faced with the task of making compliance findings, managing the compliance finding process or dealing with the ADF airworthiness certification ‘beast’.

The tutorial is a mix of how to think about and judge evidence, drawing upon legal principles, and how to use practical argumentation models to structure the finding. No Dempster-Shafer logic yet; perhaps in the next tutorial.

Anyway, hope you enjoy it. :)

Enigma Rotors (Image source: Harold Thimbleby)

Or getting off the password merry-go-round…

I’m not sure how this happens, but there are certain months where a good proportion of my passwords roll over. Of course password rollovers are one of those entrenched security ‘good ideas’, and you’d assume they make us more secure? Well no, unfortunately they have entirely the opposite effect.
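To see why, consider what a rollover policy actually provokes in its users. Here’s a minimal sketch of the attacker’s side of the bargain; the mutation rules and the password below are invented for illustration, not drawn from any real breach corpus:

```python
# A minimal sketch of why forced rollovers can backfire: users tend to
# mutate the old password predictably, so an attacker holding a stale
# credential needs only a handful of guesses. The mutation rules here are
# illustrative assumptions, not data from any real breach corpus.
import re

def likely_successors(old_password):
    """Generate the successors a rollover policy typically provokes."""
    candidates = set()
    # Rule 1: increment a trailing number ("winter7" -> "winter8")
    match = re.match(r"^(.*?)(\d+)$", old_password)
    if match:
        stem, digits = match.groups()
        candidates.add(stem + str(int(digits) + 1).zfill(len(digits)))
    # Rule 2: append a common suffix
    for suffix in ("1", "!", "2015"):
        candidates.add(old_password + suffix)
    return candidates

print(likely_successors("Tr0ub4dor7"))
```

If an attacker holds the old password, the ‘new’ one is only a handful of guesses away, which is precisely the opposite of what the policy intends.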

Continue Reading…

Yep that’s right, due to popular demand I’m running ZEIT 8236 System Safety as an Intensive Delivery mode course in the second session at ADFA from the 13th to 17th of July 2015. If you want a flavour, here’s the introductory module. Remember, I love this stuff. :)

A safety engineer is someone who builds castles in the air and an operator is someone who goes and lives in them. But nature is the one who collects the rent…


Well if news from the G20 is anything to go by we may be on the verge of a seismic shift in how the challenge of climate change is treated. Our Prime Minister’s denial notwithstanding :)


A report issued by the US Chemical Safety Board on Monday, entitled “Regulatory Report: Chevron Richmond Refinery Pipe Rupture and Fire”, calls on California to make changes to the way it manages process safety.

The report is worth a read as it looks at various regulatory regimes in a fairly balanced fashion. A strong, independent and competent regulator is seen by the report’s authors as a key factor for success, regardless of the regulatory mechanisms. I don’t however think the evidence is as strong as the report makes out that safety case/goal based safety regimes perform all that much better than other regulatory regimes. It would also have been nice if they’d compared and contrasted against other industries, like aviation.


Yep it’s the purple train again. This time taking me to EECON 2014 to give a dinner speech on risk.

EECON 2014

07/11/2014

So I’ve been invited to give a talk on risk at the conference dinner. Should be interesting.

Cassini Descent Module (Image source: NASA)

When is an interlock not an interlock?

I was working on an interface problem the other day. The problem related to how to judge when a payload (attached to a carrier bus) had left its parent (like the Huygens lander leaving the Cassini spacecraft above). Now I could use what’s called the ‘interlock interface’, a discrete ‘loop back’ that runs through the bus-to-payload connector, then turns around and heads back into the bus again. The interlock interface is there to provide a means for the carrier’s avionics to determine whether the payload is electrically mated to the bus. So should I use this as an indication that the payload has left the carrier bus as well? Well, maybe not.
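To make the ambiguity concrete, here’s a minimal sketch of the separation logic involved. The signal names and the independent breakwire cue are hypothetical, purely for illustration:

```python
# A minimal sketch of the ambiguity: an open interlock loop tells you the
# connector is no longer electrically mated, not that the payload has
# physically departed. Signal names are hypothetical, for illustration only.
from enum import Enum

class SeparationState(Enum):
    MATED = "mated"
    DEMATED_ONLY = "electrically demated, departure unconfirmed"
    SEPARATED = "separation confirmed"

def assess(interlock_continuity, breakwire_open):
    """Fuse the interlock loop-back with an independent cue (e.g. a
    separation breakwire or microswitch) before declaring departure."""
    if interlock_continuity:
        return SeparationState.MATED
    if breakwire_open:
        return SeparationState.SEPARATED
    # Open loop, but the independent cue hasn't fired: could equally be a
    # failed connector pin, harness damage, or a stuck umbilical.
    return SeparationState.DEMATED_ONLY

print(assess(interlock_continuity=False, breakwire_open=False))
```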

Continue Reading…

Well it was either Crowley or Kylie Minogue given the title of the post, so think yourselves lucky (Image source: Warner Brothers TV)

Sometimes it’s just a choice between bad and worse

If we accept that different types of uncertainty create different types of risk then it follows that we may in fact be able to trade one type of risk for another, and in certain circumstances this may be a preferable option.

Continue Reading…

Midlands hotel

A quick report from sunny Manchester, where I’m attending the IET’s annual combined conference on system safety and cyber security. Day one of the conference proper, and I got to lead off with the first keynote. I was thinking about getting everyone to do some Tai Chi to limber up (maybe next year). Thanks once again to Dr Carl Sandom for inviting me over, it was a pleasure. I just hope the audience felt the same way. :)

Continue Reading…

An interesting article in Forbes on human error in a very unforgiving environment, i.e. treating Ebola patients, and an excellent use of basic statistics to show that cumulative risk tends to do just that, accumulate. As the number of patients being treated in the west is pretty low at the moment, it also gives a good indication of just how infectious Ebola is. One might also infer that the western medical establishment is not quite so smart as it thought it was, at least when it comes to treating the big E safely.
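The arithmetic behind the article is worth sketching, because it’s so simple and so routinely ignored. The 1% per-contact figure below is an assumption for illustration, not a measured infection rate:

```python
# A back-of-envelope version of the cumulative risk arithmetic: even a
# small per-encounter infection probability accumulates alarmingly over
# repeated patient contacts.
def cumulative_risk(p_per_contact, n_contacts):
    """P(at least one infection in n independent contacts)."""
    return 1 - (1 - p_per_contact) ** n_contacts

for n in (10, 50, 100, 250):
    print(f"{n:4d} contacts -> {cumulative_risk(0.01, n):.1%}")
# 10 -> 9.6%, 100 -> 63.4%: a 'rare' per-contact error dominates eventually.
```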

Of course the moment of international zen in the whole story had to be the comment by the head of the CDC, Dr Frieden, that, and I quote, “clearly there was a breach in protocol”, a perfect example of affirming the consequent (a breach of protocol can lead to infection; there was an infection, therefore there must have been a breach). As James Reason pointed out years ago there are two ways of dealing with human error, so I guess we know where the head of the CDC stands on that question. :)


Well after a week away teaching system safety to Navy in the depths of Victoria I’m off again! Current destination, the IET’s System Safety and Cyber Security conference in Manchester.

Just to mention that if you’re coming to the workshop day I’ll be running one on safety cases, so if you’re interested drop by and pull up a chair. :)

Tenerife disaster moments after the impact

TCAS, emergent properties and risk trade-offs

There’s been some comment from various regulators regarding the use of the Traffic Collision Avoidance System (TCAS) on the ground; experience shows that TCAS is sometimes turned on and off at the same time as the Mode S transponder. Eurocontrol doesn’t like it and is quite explicit about their dislike, ‘do not use it while taxiing’ they say; likewise the FAA states that you should ‘minimise use on ground’. There are legitimate reasons for this dislike: having too many TCAS transponders operating within a specific area can degrade system performance, as well as potentially interfere with airport ground radars. And as the FAA point out, operating with the ADS-B transponder on will also ensure that the aircraft is visible to ATC and other ADS-B (in) equipped aircraft (1). Which leaves us with the question: why are aircrew using TCAS on the ground? Is it because it’s just easy enough to turn on at the push back? Or is there another reason?

Continue Reading…

Interesting article on old school rail safety and lessons for the modern nuclear industry. As a somewhat ironic addendum, the early nuclear industry safety studies also overlooked the risks posed by large inventories of fuel rods on site, the assumption then being that they’d be shipped off to a reprocessing facility as soon as possible. It’s hard to predict the future. :)

And in news just to hand, the first Ebola case is reported in the US. It’ll be very interesting to see what happens next, and how much the transmission rate is driven by cultural and socio-economic effects…


In case anyone missed it, the Ebola outbreak in Africa is now into the ‘explosive’ phase of the classic logistic growth curve; see this article from New Scientist for more details.
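For the curious, here’s a minimal sketch of the curve in question; the parameters are invented for illustration, not fitted to the outbreak data:

```python
# A minimal sketch of the logistic ('S') growth curve: early on it is
# indistinguishable from exponential growth, and the 'explosive' phase is
# the steep middle section before saturation.
import math

def logistic(t, carrying_capacity=100_000, r=0.05, t_midpoint=150):
    """Cumulative cases at day t under logistic growth."""
    return carrying_capacity / (1 + math.exp(-r * (t - t_midpoint)))

for day in range(0, 301, 50):
    print(f"day {day:3d}: {logistic(day):9.0f} cumulative cases")
# Growth per 50-day block peaks around the midpoint - the explosive phase.
```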

Here in the west we get all the rhetoric about Islamic State as an existential threat but little to nothing about the big E, even though this epidemic will undoubtedly kill more people than that bunch of crazies ever will. Ebola doesn’t hate us for who we are, but it’ll damn well kill a lot of people regardless.

Another worrying thought is that the more cases there are, the more generations of the disease clock over, and the more chance there is for a much worse variant to emerge that’s got global legs. We’ve been gruesomely lucky to date that Ebola is so nasty, because it tends to burn out before going too far, but that can change very quickly. This is a small world, and what happens inside a village in West Africa actually matters to people in London, Paris, Sydney or Moscow. Were I PM that’s where I’d be sending our defence force, not back into the cauldron of the Middle East…

If you were wondering why the Outliers post was, ahem, a little rough, it’s because I accidentally posted an initial draft rather than the final version. I’ve now released the right one.

Right hand AoA probes (Image source: ATSB)

When good voting algorithms go bad

Thinking about the QF72 incident, it struck me that average-value based voting methods are based on the calculation of a population statistic. Now population statistics work well when the population is normally distributed, or otherwise clustered around some value. But if the distribution has heavy tails, we can expect that extreme values will occur fairly regularly, and therefore the ‘average’ value means much less. In fact for some distributions we may not be able to put a cap on the upper value that an ‘average’ could take, e.g. it could have an infinite value, and the idea of an average is therefore meaningless.
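A toy example makes the point. The channel values below are invented, not the QF72 flight data:

```python
# A minimal sketch of why average-based voting misbehaves: a single wild
# sensor drags the 'voted' mean arbitrarily far, whereas a median stays
# bounded by the honest channels.
from statistics import mean, median

aoa_channels = [2.1, 2.3, 50.6]   # two sane AoA probes, one spiking channel

print(f"mean vote:   {mean(aoa_channels):.1f} deg")    # 18.3 - nonsense
print(f"median vote: {median(aoa_channels):.1f} deg")  # 2.3 - bounded

# With heavy-tailed faults the spike's magnitude is effectively unbounded,
# so the mean inherits that unboundedness; the median's breakdown point
# (here, 1 of 3 channels) is what actually protects you.
```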

Continue Reading…


Dear AGL,

I realise that you are not directly responsible for the repeal of the carbon tax by the current government, and I also realise that we the voting public need to man up and shoulder the responsibility for the government and their actions. I even appreciate that if you did wish to retain the carbon tax as a green surcharge, the current government would undoubtedly act to force your hand.

But really, I have to draw the line at your latest correspondence. Simply stamping the latest bill with “SAVINGS FROM REMOVING THE CARBON TAX” scarcely does the benefits of this legislative windfall justice. You have, I fear, entirely undersold the comprehensive social, moral and economic benefits that accrue through the return of this saving to your customers. I submit therefore for your corporate attention some alternative slogans:

  • “Savings from removing the carbon tax…you’ll pay for it later”
  • “Savings from removing the carbon tax…buy a bigger air conditioner, you’ll need it”
  • “Savings from removing the carbon tax…we also have a unique coal seam investment opportunity”
  • “Savings from removing the carbon tax, invest in climate change!”
  • “Savings from removing the carbon tax, look up the word ‘venal’, yep that’s you”
  • “Savings from removing the carbon tax, because a bigger flatscreen TV is worth your children’s future”
  • “Savings from removing the carbon tax, disinvesting in the future”

So be brave and take advantage of this singular opportunity to fully invest your corporate reputation in the truly wonderful outcomes of this prescient and clear sighted decision by our federal government.

Yours respectfully

etc.


A report from Beecham Research on challenges in securing the IoT; my favourite quote from the press release: “Security in the Internet of Things is significantly more complex than many system designers have previously experienced...”.

I’ll be interested to see whether they put their finger on Postel’s robustness principle (RFC 793) as one of the root causes of our current internet security woes, or on the necessity to starve the Turing beast.

An interesting post by Mike Thicke over at Cloud Chamber on the potential use of prediction markets to predict the location of MH370. Prediction markets integrate ‘diffused’ knowledge using a market mechanism to derive a predicted likelihood; essentially market prices are assigned to various outcomes and are treated as analogs of their likelihood. Market trading then establishes what the market ‘thinks’ is the value of each outcome. The technique has a long and colourful history, but it does seem to work. As an aside, prediction markets are still predicting a No vote in the upcoming referendum on Scottish independence, despite recent polls to the contrary.
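Reading a likelihood off such a market is straightforward; here’s a minimal sketch with invented contract prices:

```python
# A minimal sketch of reading a prediction market: treat last-traded
# prices of mutually exclusive outcome contracts as unnormalised
# likelihoods, then renormalise away the bookmaker's overround.
prices = {"Yes": 0.42, "No": 0.61}          # each contract pays 1 if right

total = sum(prices.values())                 # 1.03: the market's overround
implied = {k: v / total for k, v in prices.items()}

for outcome, p in implied.items():
    print(f"{outcome}: {p:.1%}")             # Yes: 40.8%, No: 59.2%
```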

Returning to the MH370 saga, if the ATSB is not intending to use a Bayesian search plan then one could in principle crowdsource the effort through such a prediction market. One could run the market in a dynamic fashion, with the market prices updating as new information comes in from the ongoing search. Any investors out there?
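For what it’s worth, the Bayesian update at the heart of such a search plan is simple enough to sketch; the three-cell grid and detection probability below are invented for illustration:

```python
# A minimal sketch of the Bayesian search step: after a cell is searched
# without success, its posterior is discounted by the sensor's detection
# probability and the whole map renormalised.
def update_after_empty_search(priors, searched_cell, p_detect):
    """priors: dict cell -> P(wreck in cell); returns the posterior dict."""
    post = dict(priors)
    # P(miss | wreck there) = 1 - p_detect
    post[searched_cell] *= (1 - p_detect)
    norm = sum(post.values())
    return {cell: p / norm for cell, p in post.items()}

priors = {"A": 0.5, "B": 0.3, "C": 0.2}
print(update_after_empty_search(priors, "A", p_detect=0.9))
# A collapses from 0.50 to ~0.09; B and C rise, steering the next sortie.
```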

MH370 underwater search area map (Image source- Australian Govt)

Just saw a sound bite of our Prime Minister reiterating that we’ll spare no expense to find MH370. Throwing money is one thing, but I’m kind of hoping that the ATSB will pull its finger out of its bureaucratic ass and actually apply the best search methods to the search. Unkind? Perhaps, but then maybe the families of the lost deserve the best that we can do…

Enshrined in Australia’s current workplace health and safety legislation is the principle of ‘So Far As Is Reasonably Practicable’. In essence SFAIRP requires you to eliminate risk, or to reduce it to a negligible level, as far as is (surprise) reasonably practicable. While there’s been a lot of commentary on the increased requirements for diligence (read: industry moaning and groaning) there’s been little or no consideration of what ‘theory of risk’ backs this legislative principle and how it shapes the current legislation, let alone whether for good or ill. So I thought I’d take a stab at it. :) Continue Reading…

London Science Museum’s replica Difference Engine (Image source: Wikipedia)

An amusing illustration of the power of metadata, Finding Paul Revere, by Kieran Healy. Clearly what the British colonial administration in America lacked was a firm grasp of the mathematical principles embodied in social network theory, Ada Lovelace on consultancy and a server park filled with Mr Babbage’s difference engines. If they had, then the American revolution might well have had a very different outcome. :)

Interesting, and a little weird. From Krebs on Security, the strange tale of Lorem Ipsum and Google.


Just because you can, doesn’t mean you ought

An interesting article by Kaag and Kreps on the moral hazard that the use of drone strikes poses, and how the debate on their use confuses facts with values. To say that drone strikes are effective and near consequence free, at least for the perpetrator, does not equate to the conclusion that they are ethical and that we should carry them out. Nor does the capability to safely attack with focused lethality mean that we will in fact make better ethical decisions. The moral hazard that Kaag and Kreps assert is that ease of use can all too easily end up becoming the justification for use. My further prediction is that with the increasing automation and psychological distancing of the kill chain this tendency will inevitably increase. Herman Kahn is probably smiling now, wherever he is.

Continue Reading…

Finding MH370

26/08/2014

MH370 underwater search area map (Image source- Australian Govt)

Finding MH370 is going to be a bitch

The aircraft has gone down in an area which is the undersea equivalent of the eastern slopes of the Rockies, well before anyone mapped them. Add to that a search area of thousands of square kilometres in about as isolated a spot as you can imagine, and a search zone interpolated from satellite pings, and you can see that it’s going to be tough.

Continue Reading…

Global temperature 2050

Just received a text from my gas and electricity supplier. Good news! My gas and electricity bills will come down by about 4 and 8% respectively due to the repeal of the carbon tax in Australia. Of course we had to doom the planetary ecosystem and condemn our children to runaway climate change, but hey, think of the $550 we get back per year. And, how can it get any better, now we’re also seen as a nation of environmental wreckers. I think I’ll go and invest the money in that AGL Hunter coal seam gas project, y’know, thinking global, acting local. Thanks Prime Minister Abbott, thanks!


On Artificial Intelligence as Ethical Prosthesis

Out here in the grim meat-hook present of Reaper missions and Predator drone strikes we’re already well down the track to a future in which decisions as to who lives and who dies are made less and less by human beings, and more and more by automation.

Continue Reading…

Toyota ECM (Image source: Barr testimony presentation)

Comparing and contrasting

In 2010 NASA was called in by the National Highway Traffic Safety Administration to help in figuring out the reason for reported unintended Toyota Camry accelerations. They subsequently published a report, including a dedicated software annex. What’s interesting to me is the different outcomes and conclusions of the two reports regarding software. Continue Reading…

More speed bumps on the road to the Internet of Everything

Continue Reading…

The quote below is from the eminent British scientist Lord Kelvin, who also pronounced that X-rays were a hoax, that heavier-than-air flying machines would never catch on, and that radio had no future…

I often say that when you can measure what you are speaking about, and express it in numbers, then you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.

Lord Kelvin, 1891

I’d turn that statement about and remark that once you have a number in your grasp, your problems have only just started. And that numbers shorn of context are a meagre and entirely unsatisfactory way of expressing our understanding of the world.

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Arthur C. Clarke,  Profiles of the Future (1962)

I often think that Arthur C. Clarke penned his famous laws in direct juxtaposition to the dogmatic statements of Lord Kelvin. It’s nice to think so anyway. :)


Back at the ADFA campus to present the July IDM system safety course. A little cooler this time of year. :)

Just added a modified version of the venerable subjective 882 hazard risk matrix to my useful stuff page, in which I fix a few issues that have bugged me about that particular tool; see Risk and the Matrix for a fuller discussion of the problems with risk matrices.

For those of you with a strong interest in such things, I’ve translated the matrix into cartesian coordinates, revised the risk zones and definitions to make the matrix ‘De Moivre theorem’ compliant (and a touch more conservative), added the AIAA’s combinatorial probability thresholds, introduced a calibration point and added the ALARP principle.
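For those wondering what ‘De Moivre theorem’ compliance buys you: if risk is taken as the expectation, probability times consequence, then cells lying on the same iso-risk diagonal should always land in the same zone. Here’s a minimal sketch of that consistency property; the bin values and zone thresholds are invented for illustration, not taken from the modified matrix itself:

```python
# A minimal sketch of De Moivre consistency in a risk matrix: risk as
# expected loss (probability x consequence), so iso-risk contours are
# straight diagonals on log-log axes and zone boundaries shouldn't cut
# across them.
probability = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]   # likelihood bins
severity    = [1e6, 1e5, 1e4, 1e3]                   # consequence cost bins

def zone(p, c):
    r = p * c                       # De Moivre: risk = expected loss
    if r >= 1e2: return "HIGH"
    if r >= 1e0: return "MEDIUM"
    return "LOW"

for p in probability:
    print(" ".join(f"{zone(p, c):6s}" for c in severity))
# Every cell on the same p*c diagonal lands in the same zone - the
# consistency property that ad-hoc matrices routinely violate.
```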

Who knows maybe the US DoD will pick it up…but probably not. :)

MIL-STD-882 Hazard Risk Matrix (Modified).

 

Waaay back in 2002 Chris Holloway wrote a paper that used a fictional civil court case involving the hazardous failure of software to show that much of the expertise and received wisdom of software engineering was, by the standards of the US federal judiciary, junk and at best opinion based.

Rereading the transcripts of Phillip Koopman and Michael Barr in the 2013 Toyota spaghetti monster case, I am struck both by how little things have changed and by how far the actual state of the industry can be from the state of the practice, let alone the state of the art. Life recapitulates art I guess, though not in a good way.

I’ve put the original Def Stan 00-55 (both parts) onto my resources page for those who are interested in doing a compare and contrast between the old and the new (whenever its RFC is released). I’ll be interested to see whether the standard’s reluctance to buy into the whole ‘safety by following a process’ argument is maintained in the next iteration. The problem of arguing from fault density to safety that it alludes to also remains, I believe, insurmountable.

The justification of how the SRS development process is expected to deliver SRS of the required safety integrity level, mainly on the basis of the performance of the process on previous projects, is covered in 7.4 and annex E. However, in general the process used is a very weak predictor of the safety integrity level attained in a particular case, because of the variability from project to project. Instrumentation of the process to obtain repeatable data is difficult and enormously expensive, and capturing the important human factors aspects is still an active research area. Furthermore, even very high quality processes only predict the fault density of the software, and the problem of predicting safety integrity from fault density is insurmountable at the time of writing (unless it is possible to argue for zero faults).

Def Stan 00-55 Issue 2 Part 2 Cl. 7.3.1

Just as an aside, the original release of Def Stan 00-56 is also worth a look as it contains the methodology for the assignment of safety integrity levels. Basically, for a single function, or N>1 non-independent functions, the SIL assigned to the function(s) is derived from the worst credible accident severity (much like DO-178). In the case of N>1 independent functions, one of these functions gets a SIL based on severity but the remainder have a SIL rating apportioned to them based on risk criteria. From which you can infer that the authors, just like the aviation community, were rather mistrustful of using estimates of probability in assuring a first line of defence. :)
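A minimal sketch of that apportionment rule as I read it; the severity-to-SIL table and the ‘one level lower’ apportionment are illustrative assumptions rather than the standard’s actual tables:

```python
# A minimal sketch of the original Def Stan 00-56 style apportionment: a
# single function (or any set of non-independent functions) carries a SIL
# set by worst credible accident severity; with N independent functions,
# one carries the severity-based SIL and the rest are apportioned a lower
# claim against risk criteria.
SEVERITY_TO_SIL = {"catastrophic": 4, "critical": 3, "marginal": 2, "negligible": 1}

def apportion_sils(severity, n_functions, independent):
    base = SEVERITY_TO_SIL[severity]
    if n_functions == 1 or not independent:
        return [base] * n_functions       # everyone carries the full SIL
    # One function holds the line at the severity-based SIL; the remainder
    # get a risk-apportioned (here, illustratively, one level lower) claim.
    return [base] + [max(base - 1, 1)] * (n_functions - 1)

print(apportion_sils("catastrophic", 2, independent=True))   # [4, 3]
print(apportion_sils("catastrophic", 2, independent=False))  # [4, 4]
```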

When Formal Systems Kill, an interesting paper by Lee Pike and Darren Abramson looking at the automatic formal system property of computers from an ethical perspective. Of course as we all know, the 9000 series has a perfect operational record…

Tweedle Dum and Dee (Image source: Wikipedia Commons)

Revisiting the Knight, Leveson experiments

In the through-the-looking-glass world of high integrity systems, the use of N-version programming is often touted as a means to achieve extremely low failure rates without extensive V&V, due to the postulated independence of failures in independently developed software. Unfortunately this is hokum, as Knight and Leveson amply demonstrated with their N-version experiments, but there may actually be advantages to N-versioning, although not quite what its proponents originally expected.
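A quick simulation shows what’s at stake; the failure rates and the common-cause model below are invented for illustration:

```python
# A minimal sketch of why the independence assumption matters: 2-out-of-3
# majority voting looks wonderful if versions fail independently, and far
# less so once failures correlate on the same 'hard' inputs, which is what
# Knight and Leveson observed.
import random

def vote_fails(p_fail, rho, trials=200_000, rng=random.Random(1)):
    """Estimate P(majority of 3 versions wrong); rho = share of failures
    driven by a common 'hard input' that defeats all versions at once."""
    bad = 0
    for _ in range(trials):
        if rng.random() < rho * p_fail:          # common cause: all wrong
            fails = 3
        else:                                    # residual independent part
            fails = sum(rng.random() < p_fail * (1 - rho) for _ in range(3))
        bad += fails >= 2
    return bad / trials

print(f"independent (rho=0):  {vote_fails(0.01, 0.0):.5f}")  # ~3e-4
print(f"correlated (rho=0.5): {vote_fails(0.01, 0.5):.5f}")  # ~5e-3
```

An order of magnitude or more of the claimed reliability gain evaporates once even modest failure correlation is admitted.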

Continue Reading…

For those of you in northern climes, here’s some tips on safer summer reading, and for once I have nothing to add. :)

Yours truly

I’ve just finished reading an interesting post by Andrew Rae on the missing aspects of engineering education (Mind the Feynman gap) which parallels my more specific concerns, and possibly unkinder comments, about the lack of professionalism in the software community.

Continue Reading…

Hazard checklists

06/07/2014

As I had to throw together an example checklist for a course I’m running, here it is. I’ve also given a little commentary on the use, advantages and disadvantages of checklists. Enjoy. :)

System Safety Fundamentals Concept Cloud

There’s a very interesting site, run by a couple of Australian lads, called Text is Beautiful that provides some free tools for visually representing the relationships within a text. No, this isn’t the same as Wordle; these guys have gone beyond that to develop what they call a Concept Cloud. Colours in the Concept Cloud are indicative of distinct themes, and themes themselves represent rough groupings of related concepts. What’s a concept? Well, a concept is made up of several words, with each concept having its own unique thesaurus that is statistically derived from the text.

So without further ado I took the Fundamentals of System Safety course that I teach and dropped it in the hopper, the results as you might guess are above. Very neat to look at and it also gives an interesting insight into how the concepts that the course teaches interrelate. Enjoy. :)


The DEF STAN 00-55 monster is back!!

That’s right, moves are afoot to reboot the cancelled UK MOD standard for software safety, DEF STAN 00-55. See the UK SCSC’s Event Diary for an opportunity to meet and greet the writers. They’ll have the standard up for an initial look on-line sometime in July as well, so stay posted.

Continue Reading…

Cleveland street train overrun (Image source: ATSB)

The final ATSB report, sloppy and flawed

The ATSB has released its final report into the Cleveland street overrun and it’s disappointing, at least when it comes to how and why a buffer stop that actually materially contributed to the overrun came to be installed at Cleveland street station. I wasn’t greatly impressed by their preliminary report, and asked some questions of the ATSB at the time (their response was polite but not terribly forthcoming), so I decided to see what the final report was like before sitting in judgement.

Continue Reading…

Easter 2014 bus-cycle accident (Image Source: James Brickwood)

The limits of rational-legal authority

One of the underlying and unquestioned aspects of modern western society is that the power of the state is derived from rational-legal authority, that is, in the Weberian sense, a purposive or instrumental rationality in pursuing some end. But what if it isn’t? What if the decisions of the state are based more on beliefs about how people ought to behave, and how things ought to be, than on reality? What, in other words, if the lunatics really are running the asylum?

Continue Reading…