Challenger flight 51-L crew

Vale Challenger

The anniversary of the loss of Challenger passed by on Thursday. In memoriam, I’ve updated my post that deals with the failure of communication that I think lies at the heart of that disaster.

MH 370 search vessel (Image source: ATSB)

Once more with feeling

Sonar vessels searching for Malaysia Airlines Flight MH370 in the southern Indian Ocean may have missed the jet, the ATSB’s Chief Commissioner Martin Dolan has told News Online. He went on to point out the uncertainties involved, the difficulty of terrain that could mask the signature of wreckage, and that problematic areas would therefore need to be re-surveyed. Despite all that, the Commissioner was confident that the wreckage site would be found by June. Me, I’m not so sure.

Continue Reading…


A very late draft

I originally gave a presentation at the 2015 ASSC Conference, but never got around to finishing the companion paper. Not my most stellar title either. The paper is basically about how to leverage the very close similarities between the objectives of the WHS Act (2011) and those of MIL-STD-882C, yes, that standard. You might almost think the drafters of the legislation had a system safety specialist advising them… Recently I’ve had occasion to apply this approach on another project (a ground system this time), so I took the opportunity to update the draft paper as an aide-mémoire, and here it is!

:)

The long gone, but not forgotten, second issue of the UK MoD’s safety management standard DEFSTAN 00-56 introduced the concept of a qualitative likelihood of Incredible. This is, however, not just another likelihood category. The intention of the standard’s writers was that it would be used to capture risks that were deemed effectively impossible, given the assumptions about the domain and system. The category was to be applied to those scenarios where the hazard had been designed out, where the design concept had been assessed and the posited hazard turned out simply not to be applicable, or where some non-probabilistic technique had been used to verify the safety of the system (think mathematical proof). Such a category records that yes, it’s effectively impossible, while retaining the record of assessment should it become necessary to revisit it: a useful mechanism.

A.1.19 Incredible. Believed to have a probability of occurrence too low for expression in meaningful numerical terms.

DEFSTAN 00-56 Issue 2

I’ve seen this approach mangled in a number of hazard analyses where the disjoint nature of the Incredible category was not recognised and it was instead assigned a specific likelihood that followed on in a decadal fashion from the next highest category. Yes, difficulties ensued. The key is that Incredible is not the next likelihood bin after Improbable; it sits beyond the end of the line, where we park those hazards that we have judged to have an immeasurably small likelihood of occurrence. This, we are asserting, will not happen, and we are as confident of that fact as one can ever be.

“Incredible” may be exceptionally defined in terms of reasoned argument that does not rely solely on numerical probabilities.

DEFSTAN 00-56 Issue 2

To put it another way, the category reflects a statement of our degree of belief that an event will not occur, rather than an assertion as to its frequency of occurrence as the other subjective categories do. What the standard’s writers have unwittingly done is introduce a superset, in which the ‘no hazard exists’ set is represented by Incredible and the other likelihoods form the ‘a hazard exists’ set. All of which starts to sound like a mashup of frequentist probabilities with Dempster-Shafer belief structures. Promising; it’s a pity the standards committee didn’t take the concept further.
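
To make the distinction concrete, here’s a minimal sketch (my own illustration, nothing from the standard) of a hazard record that treats Incredible as a disjoint judgement backed by an argument, rather than as just another bin on the frequency scale:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Likelihood(Enum):
    """Subjective frequency bins (the decadal values are purely illustrative)."""
    FREQUENT   = 1e-2
    PROBABLE   = 1e-3
    OCCASIONAL = 1e-4
    REMOTE     = 1e-5
    IMPROBABLE = 1e-6

@dataclass
class HazardAssessment:
    hazard: str
    likelihood: Optional[Likelihood]           # None when the hazard is judged Incredible
    incredible_argument: Optional[str] = None  # reasoned argument, proof, or design-out evidence

    @property
    def incredible(self) -> bool:
        # Incredible is a statement of belief, not the next bin after Improbable
        return self.likelihood is None and self.incredible_argument is not None

# A designed-out hazard keeps its record of assessment without acquiring a fake frequency
h = HazardAssessment("uncommanded thrust reverser deploy in flight",
                     likelihood=None,
                     incredible_argument="mechanical interlock plus formal proof of interlock logic")
assert h.incredible
```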

[I]t is better to think of a problem of understanding disasters as a ‘socio-technical’ problem with social organization and technical processes interacting to produce the phenomena to be studied.

B.A. Turner

Crowley (Image source: Warner Bros. TV)

The psychological basis of uncertainty

There’s a famous psychological experiment conducted by Ellsberg, known eponymously as the Ellsberg paradox, in which he showed that people overwhelmingly prefer a betting scenario in which the probabilities are known over one in which the odds are ambiguous, even if the potential for winning might be greater.  Continue Reading…
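
For concreteness, here’s the classic two-urn version of the experiment as a minimal sketch (textbook numbers, nothing specific to this post): whatever single probability you assign to drawing red from the ambiguous urn, you can’t coherently prefer the known urn for both the ‘red’ and the ‘black’ bets, yet that’s what most people do.

```python
# Two-urn Ellsberg setup: urn A holds 50 red and 50 black balls, urn B holds 100 in an unknown mix
known_p_red = 0.5

def expected_win(p_red: float, bet_on: str) -> float:
    """Expected value of a $1 bet, given a probability of drawing red."""
    return p_red if bet_on == "red" else 1.0 - p_red

# Whatever single probability q you assign to red in the ambiguous urn,
# you can't prefer the known urn for BOTH bets:
for q in (0.3, 0.5, 0.7):
    prefers_known_for_red = expected_win(known_p_red, "red") > expected_win(q, "red")
    prefers_known_for_black = expected_win(known_p_red, "black") > expected_win(q, "black")
    print(q, prefers_known_for_red, prefers_known_for_black)
# For q < 0.5 the red bet favours the known urn but the black bet favours the ambiguous one,
# and vice versa for q > 0.5, yet subjects prefer the known urn in both cases.
```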

The first principle is that you must not fool yourself and you are the easiest person to fool.

Richard P. Feynman

Four black swans

One of the problems that we face in estimating risk is that as our uncertainty increases, our ability to express it precisely (e.g. numerically) weakens, to the point where for deep uncertainty (1) we definitionally cannot make a direct estimate of risk in the classical sense. Continue Reading…

Just finished updating the Functional Hazard Analysis course notes (to V1.2) to expand and clarify the section on complex interaction-style functional failures. To my mind complex interactions are where accidents actually occur, and where the guidance provided by the various standards, see SAMS or ARP 4754, is also at its weakest.

In breaking news the Australian Bureau of Meteorology has been hacked by the Chinese. Government sources are quoted by the ABC as stating that the BoM has definitely been compromised and that this may in turn mean the compromise of other government departments.

We’re probably now in the endgame of the Chinese operation, as their first priority would have been to expropriate (read steal) as much of the Bureau’s intellectual property as they could, given that follow-up exploits of other information systems naturally carry a higher likelihood of detection. Indeed, the intruders running afoul of someone else who was not quite so asleep at the switch may well be how the breach was eventually detected.

The first major problem is that the Bureau provides services to a host of government and commercial entities, so it’s just about as good a platform as you could want from which to launch follow-on campaigns. The second major problem is that you just can’t turn the services the Bureau provides off; critical infrastructure is, well, critical. That means in turn that the Bureau’s servers can’t just go dark while they hunt down the malware. As a result it’s going to be very difficult and expensive to root the problem out, and harder still to be sure that it has been. Well played PLA Unit 61398, well played.

As to how this happened? Well, unfortunately the idea that data is as much critical national infrastructure as, say, a bridge or highway just doesn’t seem to resonate with management in most Australian organisations, or at least not enough to ensure there’s what the trade calls ‘advanced persistent diligence’ to go round, or for that matter sufficient situational awareness among management to guard against such evolving high-end threats.

Perusing the FAA’s system safety handbook while doing some research for a current job, I came upon an interesting definition of severity: the FAA introduces the concept of safety margin reduction as a specific form of severity (loss).

Here’s a summary of Table 3-2 from the handbook:

  • Catastrophic – ‘Multiple fatalities and/or loss of system’
  • Major – ‘Significant reduction in safety margin…’
  • Minor – ‘Slight reduction in safety margin…’

If we think about safety margins for a functional system, a reduction in safety margin represents a system state that’s a precursor to a mishap, with the margin representing some intervening set of states. But a system state of reduced safety margin (let’s call it a hazard state) is causally linked to a mishap state, else we wouldn’t care, and must therefore inherit its severity. The problem is that in the FAA’s definition they have arbitrarily assigned severity levels to specific degrees of safety margin reduction, yet all of these could still be linked causally to a catastrophic event, e.g. a mid-air collision.

What the FAA’s Systems Engineering Council (SEC) has done is conflate severity with likelihood; as a result their severity definition is actually a risk definition, at least when it comes to safety margin hazards. The problem with this approach is that we end up under-treating risks relative to classical risk theory. For example, say we have a potential reduction in safety margin which is also causally linked to a catastrophic outcome. Per Table 3-2, if the reduction were classified as ‘slight’ we would assess the probability and, given the minor severity, decide to do nothing, even though in reality the severity is still catastrophic. If, on the other hand, we decided to make decisions based on severity alone, we would still end up making a hidden risk judgement depending on the likelihood of propagation from hazard state to accident state (which is undefined in the handbook). So basically the definitions set you up for trouble even before you start.
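
To see how that plays out, here’s a hedged sketch (the screening rule, thresholds and numbers are all invented for illustration) comparing a Table 3-2 style treatment of a ‘slight’ margin reduction with one where the hazard inherits the catastrophic severity of the mishap it can propagate to:

```python
# Illustrative only: the decision rule and numbers are invented for this example
def treat(severity: str, p_per_year: float) -> bool:
    """Toy screening rule: treat anything catastrophic, ignore 'minor' unless very frequent."""
    if severity == "catastrophic":
        return p_per_year > 1e-7
    if severity == "minor":
        return p_per_year > 1e-2
    return p_per_year > 1e-4

p_margin_loss = 1e-3   # probability of the 'slight' safety margin reduction per year (assumed)
p_propagate   = 1e-2   # conditional probability it propagates to a mid-air collision (assumed,
                       # since the handbook leaves this undefined)

# Table 3-2 style: the margin loss is screened as a 'minor' event in its own right
print(treat("minor", p_margin_loss))                        # False: screened out, nothing done

# Classical treatment: the hazard inherits the catastrophic mishap severity,
# with the propagation likelihood folded into the probability instead
print(treat("catastrophic", p_margin_loss * p_propagate))   # True: the risk gets treated
```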

My guess is that the SEC decided to fill in the lesser severities with hazard states because for an ATM system true mishaps tend to be invariably catastrophic, and they were left scratching their heads for lesser-severity mishap definitions. Enter the safety margin reduction hazard. The take-home from all this is that severity needs to be based on the loss event; introducing intermediate hybrid hazard/severity definitions leads inevitably to an incoherent definition of risk. Oh, and (as far as I am aware) this malformed definition has spread everywhere…

National Terrorism Threat Advisory System

With much pomp and circumstance the Attorney-General and our top state security mandarins have rolled out the brand new threat level advisory system. Congrats to us, we are now the proud owners of a five-runged ladder of terror. There’s just one small, teeny tiny, insignificant problem: it just doesn’t work. Yep, that’s right, as a tool for communicating it’s completely devoid of meaning, useless in fact, a hopelessly vacuous piece of security theatre.

You see the levels of this scale are based on likelihood. But whoever designed the scale forgot to include over what duration they were estimating the likelihood. And without that duration it’s just a meaningless list of words. 

Here’s how likelihood works. Say you ask me whether it’s likely to rain tomorrow; I say ‘unlikely’. Now ask me whether it will rain in the next week; well, that’s a bit more likely, isn’t it? OK, so next you ask me whether it’ll rain in the next year. Well, unless you live in Alice Springs the answer is going to be even more likely, maybe almost certain, isn’t it? So you can see that the duration we’re thinking of affects the likelihood we come up with, because it’s a cumulative measure.

Now ask me whether a terrorist attack is going to happen tomorrow. I’d probably say it was so unlikely as to be ‘Not expected’. But if you asked me whether one might occur in the next year I’d say (as we’re accumulating exposure) it’d be more likely, maybe even ‘Probable’, while if the question were asked about a decade of exposure I’d almost certainly say it was ‘Certain’. So you see how a scale without a duration means absolutely nothing. In fact it’s much worse than nothing, it actually causes misunderstanding, because I may be thinking of threats across the next year while you may be thinking of threats occurring in the next month. So it actually communicates negative information.
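
If you want numbers: assuming, purely for illustration, a constant average threat rate, the chance of at least one attack over an exposure period follows the usual cumulative form, and the same underlying threat maps onto wildly different words depending on the unstated duration.

```python
from math import exp

rate_per_year = 0.5   # assumed average rate of attacks, purely illustrative

def p_at_least_one(rate_per_year: float, years: float) -> float:
    """Probability of one or more events in the period, assuming a constant (Poisson) rate."""
    return 1.0 - exp(-rate_per_year * years)

for label, years in [("tomorrow", 1 / 365), ("next month", 1 / 12),
                     ("next year", 1.0), ("next decade", 10.0)]:
    print(f"{label:>12}: {p_at_least_one(rate_per_year, years):.2%}")
# tomorrow ~0.14%, next month ~4%, next year ~39%, next decade ~99%:
# the same threat reads as 'Not expected' or 'Certain' depending on the unstated duration
```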

And this took years of consideration, according to the Attorney-General. Man, we are governed by second-raters. Puts head in hands.

Governance is a lot like being a fireman. You’re either checking smoke alarms or out attending a fire.

Matthew Squair

Screwtape (Image source: end time info)

How to deal with those pesky high risks without even trying

Screwtape here,

One of my clients recently came to me with what seemed to be an insurmountable problem in getting his facility accepted despite the presence of an unacceptably high risk of a catastrophic accident. The regulator, not happy, likewise all those mothers with placards outside his office every morning. Most upsetting. Not a problem said I, let me introduce you to the Screwtape LLC patented cut and come again risk refactoring strategy. Please forgive me now dear reader for without further ado we must do some math.

Risk is defined as the loss times the probability of loss, or R = L x P (1), which is simply the expectation of loss. Now, interestingly, if we have a set of individual risks we can add them together to get the total risk; for our facility we might say that the total risk is R_f = (R_1 + R_2 + R_3 … + R_n). ‘So what Screwtape, this will not pacify those angry mothers!’ I hear you say? Ahh, now bear with me as I show you how we can hide, err I mean refactor, our unacceptable risk in plain view. Let us also posit that we have a number of systems S_1, S_2, S_3 and so on in our facility… Well, instead of looking at the total facility risk, let’s go down inside our facility and look at risks at the system level. Given that the probability of each system causing an accident is (by definition) much less, why then, per system, the risk must also be less! If you don’t get an acceptable risk at the system level then go down to the subsystem, or equipment, level.

The fin de coup is to present this ensemble of subsystem risks as a voluminous and comprehensive list (2), thereby convincing everyone of the earnestness of your endeavours, while omitting any consideration of the ensemble risk (3). Of course one should be scrupulously careful that the numbers add up, even though you don’t present them. After all, there’s no point in getting caught stealing a penny while engaged in purloining the Bank of England! For extra points we can use subjective measures of risk rather than numeric ones, thereby obfuscating the proceedings further.
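
[For the numerically minded, here’s a minimal sketch of Screwtape’s trick with invented numbers: every line item ducks under the acceptance threshold while the facility total is untouched. (Ed.)]

```python
# Invented numbers, purely to illustrate the trick
loss = 1_000_000          # loss per accident, in dollars
p_subsystem = 1e-4        # annual probability of each subsystem causing the accident
n_subsystems = 20
acceptable_risk = 500     # per-item acceptance threshold, dollars/year

subsystem_risks = [loss * p_subsystem for _ in range(n_subsystems)]   # R = L x P per subsystem

print(all(r < acceptable_risk for r in subsystem_risks))   # True: every line item looks fine
print(sum(subsystem_risks))                                # 2000.0: the facility risk hasn't moved
```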

Needless to say my client went away a happy man, the facility was built and the total risk of operation was hidden right there in plain sight… ah how I love the remorseless bloody hand of progress.

Infernally yours,

Screwtape

Notes

1. Where R = Risk, L = Loss, and P = Probability, after de Moivre. I believe Screwtape keeps de Moivre’s heart in a jar on his desk. (Ed.)

2. The technical term for this is a Preliminary Hazard Analysis.

3. Screwtape omitted to note that the total risk remains the same; all we’ve done is budget it out across an ensemble of subsystems, i.e. R_f = R_S1 + R_S2 + R_S3 + … (Ed.)


Qantas 737-838 VH-VZR

Deconstructing a tail strike incident

On August 1 last year a Qantas 737-838 (VH-VZR) suffered a tail-strike while taking off from Sydney airport, and this week the ATSB released its report on the incident. The ATSB narrative is essentially that when working out the plane’s Takeoff Weight (TOW) on a notepad, the captain forgot to carry the ‘1’, which resulted in an erroneous weight of 66,400 kg rather than 76,400 kg. Subsequently the co-pilot made a transposition error when carrying out the same calculation on the Qantas iPad-resident on-board performance tool (OPT), in this case transposing 6 for 7 in the fuel weight, again resulting in 66,400 kg being entered into the OPT. A cross-check of the OPT-calculated Vref40 speed against that calculated by the FMC (which uses the aircraft Zero Fuel Weight (ZFW) input rather than TOW to calculate Vref40) would have picked the error up, but the crew misinterpreted the check and so it was not performed correctly. Continue Reading…
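
As an aside, a gross-error check of this kind is simple to mechanise. The sketch below is my own illustration (the weight split and tolerance are invented, and it is not the OPT or FMC logic): derive the takeoff weight independently from ZFW plus fuel and flag any disagreement with the manually entered figure.

```python
def cross_check_tow(entered_tow_kg: float, zfw_kg: float, fuel_kg: float,
                    tolerance_kg: float = 1000.0) -> bool:
    """Compare a manually entered takeoff weight against one derived independently
    from zero fuel weight plus fuel load. Returns True if they agree within tolerance."""
    derived_tow = zfw_kg + fuel_kg
    return abs(entered_tow_kg - derived_tow) <= tolerance_kg

# Figures loosely based on the incident narrative (the ZFW/fuel split is assumed)
zfw, fuel = 62_000, 14_400
print(cross_check_tow(76_400, zfw, fuel))   # True: the correct entry passes
print(cross_check_tow(66_400, zfw, fuel))   # False: the 10 tonne error is flagged
```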

Why probability is not corroboration

The IEC’s 61508 standard on functional safety assigns a series of Safety Integrity Levels (SIL) that correlate to the achievement of specific hazardous failure rates. Unfortunately this definition of SILs, which ties them to a probabilistic metric of failure, contains a fatal flaw.
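
For context, the SIL targets for high demand / continuous mode operation look roughly like the following (quoted from memory of the standard, so check the current edition before relying on them):

```python
from typing import Optional

# IEC 61508 high demand / continuous mode targets, as I recall them (check the standard itself).
# Values are the target range for the probability of a dangerous failure per hour (PFH).
SIL_PFH_TARGETS = {
    4: (1e-9, 1e-8),
    3: (1e-8, 1e-7),
    2: (1e-7, 1e-6),
    1: (1e-6, 1e-5),
}

def sil_for_pfh(pfh: float) -> Optional[int]:
    """Return the SIL whose target band contains the claimed dangerous failure rate."""
    for sil, (low, high) in SIL_PFH_TARGETS.items():
        if low <= pfh < high:
            return sil
    return None

print(sil_for_pfh(5e-8))   # 3
```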

Continue Reading…

Event trees

05/11/2015

I’ve just added the event trees module to the course notes.

System Safety Fundamentals Concept Cloud

System safety course, now with more case studies and software safety!

Have just added a couple of case studies and some course notes on software hazards and integrity partitioning, because hey, I know you guys love that sort of stuff :)

System Safety Fundamentals Concept Cloud

I have finally got around to putting my safety course notes up, enjoy. You can also find them off the main menu.

Feel free to read and use under the terms of the associated creative commons license. I’d note that these are course notes so I use a large amount of example material from other sources (because hey, a good example is a good example right?) and where I have a source these are acknowledged in the notes. If you think I’ve missed a citation or made an error, then let me know.

To err is human, but to really screw it up takes a team of humans and computers…

How did a state of the art cruiser operated by one of the world’s superpowers end up shooting down an innocent passenger aircraft? To answer that question (at least in part) here’s a case study, part of the system safety course I teach, that looks at some of the causal factors in the incident.

In the immediate aftermath of this disaster there was a lot of reflection, and work done, on how humans and complex systems interact. However one question that has so far gone unasked is simply this. What if the crew of the USS Vincennes had just used the combat system as it was intended? What would have happened if they’d implemented a doctrinal ruleset that reflected the rules of engagement that they were operating under and simply let the system do its job? After all it was not the software that confused altitude with range on the display, or misused the IFF system, or was confused by track IDs being recycled… no, that was the crew.

Consider the effect that the choice of a single word can have upon the success or failure of a standard. The standard is DO-278A, and the word is ‘approve’. DO-278 is the ground world’s version of the aviation community’s DO-178 software assurance standard, intended to bring the same level of rigour to the software used for navigation and air traffic management. There’s just one tiny difference: while DO-178 uses the word ‘certify’, DO-278 uses the word ‘approve’, and in that one word lies a vast difference in the effectiveness of these two standards.

DO-178C has traditionally been applied in the context of an independent certifier (such as the FAA or JAA) who does just that: certifies that the standard has been applied appropriately and that the design produced meets the standard. Certification is independent of the supplier/customer relationship, which has a number of clear advantages. First, the certifying body is indifferent as to whether the applicant meets or does not meet the requirements of DO-178C, so it has greater credibility when certifying, as it is clearly much less likely to suffer from any conflict of interest. Second, because there is one certifying agency, there is consistent interpretation of the standard, and corporate knowledge is fostered and disseminated across the industry through advice from the regulator.

Turning to DO-278A we find that the term ‘approve’ has mysteriously (1) replaced the term ‘certify’. So who, you may ask, can approve? In fact, what does approve even mean? Well, the long answer made short is that anyone can approve, and the term means whatever you make of it. What it usually results in is the standard being invoked between a supplier and customer, with the customer acting as the ‘approver’ of the standard’s application. This has obvious and significant implications for the degree of trust we can place in an approval given by the customer organisation. Unlike an independent certifying agency, the customer clearly has a corporate interest in acquiring the system, which may well conflict with the objective of fully complying with the requirements of the standard. Given that ‘approval’ is granted on a contract basis between two organisations, and is often cloaked in non-disclosure agreements, there is also little to no opportunity for the dissemination of useful learnings as to how to meet the standard. Finally, when dealing with previously developed software the question becomes not just ‘did you apply the standard?’, but also ‘who was it that actually approved your application?’ and ‘how did they actually interpret the standard?’.

So what to do about it? To my mind the unstated success factor for the original DO-178 standard was in fact the regulatory environment in which it was used. If you want DO-278A to be more than just a paper tiger then you should also put in place a mechanism for independent certification. In these days of smaller government this is unlikely to involve a government regulator, but there’s no reason why (for example) the independent safety assessor concept embodied in IEC 61508 could not be applied, with appropriate checks and balances (2). Until that happens though, don’t set too much store by pronouncements of compliance to DO-278.

Final thought, I’m currently renovating our house and have had to employ an independent certifier to sign off on critical parts of the works. Now if I have to do that for a home renovation, I don’t see why some national ANSP shouldn’t have to do it for their bright and shiny toys.

Notes

1. Perhaps Screwtape consultants were advising the committee. :)

2. One of the problems with how 61508 implements the ISA is that the assessor is still paid by the customer, which leads in turn to the agency problem. A better scheme would be an industry fund into which all players contribute and from which the ISA is paid.

Meltwater river Greenland icecap (Image source: Ian Joughin)

Memes, media and drug dealers

In honour of our Prime Minister’s use of the drug dealer’s argument to justify (at least to himself) why it’s OK for Australia to continue to sell coal, when we know we really have to stop, here’s an update of a piece I wrote on the role of the media in propagating denialist memes. Enjoy, there’s even a public health tip at the end.

PS. You can find Part I and II of the series here.

:)

The point of an investigation is not to find where people went wrong; it is to understand why their assessments and actions made sense at the time.

Sidney Dekker

ZEIT8236 System safety 2015 redux

Off to teach a course in system safety for Navy, which ends up as a week spent at the old alma mater. Hopefully all transport legs will be uneventful. :)

USS Thresher

…for my boat is so small and the ocean so huge

For a small, close-knit community like the submarine service the loss of a boat and its crew can strike doubly hard. The USN’s response to this disaster was both effective and long-lasting, doubly impressive given it was implemented at the height of the Cold War. As part of the course that I teach on system safety I use the Thresher as an important case study in organisational failure, and recovery.

Postscript

The RAN’s Collins class Subsafe program derived its strategic principles in large measure from the USN’s original program. The successful recovery of HMAS Dechaineux from a flooding incident at depth illustrates the success of both the RAN’s Subsafe program and its antecedent.

Here’s a copy of the presentation that I gave at ASSC 2015 on how to use MIL-STD-882C to demonstrate compliance with the WHS Act 2011. The Model Australian Workplace Health and Safety (WHS) Act places new and quite onerous requirements upon manufacturers, suppliers and end-user organisations. These include the requirement to demonstrate due diligence in the discharge of individual and corporate responsibilities. Traditionally contracts have steered clear of invoking Workplace Health and Safety (WHS) legislation in anything other than the most abstract form; unfortunately such traditional approaches provide little evidence with which to demonstrate compliance with the WHS Act.

The presentation describes an approach to establishing compliance with the WHS Act (2011) using the combination of a contracted MIL-STD-882C system safety program and a compliance finding methodology. The advantages and effectiveness of this approach, in terms of establishing compliance with the Act and the effective discharge of the responsibilities of both supplier and acquirer, are illustrated using a case study of a major aircraft modification program. Limitations of the approach are then discussed, given the significant difference between the decision-making criteria of classic system safety and the ‘so far as is reasonably practicable’ principle.

Matrix (Image source: The Matrix film)

The law of unintended consequences

There are some significant consequences to the principle of reasonable practicability enshrined within the Australian WHS Act. The Act is particularly problematic for risk-based software assurance standards, where risk is used to determine the degree of effort that should be applied. In part one of this three-part post I’ll discuss the implications of the Act for the process industries’ functional safety standard IEC 61508; in the second part I’ll look at aerospace and its software assurance standard DO-178C; then finally I’ll try to piece together a software assurance strategy that is compliant with the Act. Continue Reading…

Source: Technical Lessons from QF32

Here’s a link to version 1.3 of System Safety Fundamentals, part of the course I teach at UNSW. I’ll be putting the rest of the course material up over the next couple of months. Enjoy :)

It is a common requirement to load or update applications over the air after a distributed system has been deployed. For mass-market embedded systems this is in fact a fundamental necessity. Of course, once you do have the ability to load remotely there’s a back door that you have to be concerned about, and if the software is part of a vehicle’s control system or an insulin pump controller the consequences of leaving that door unsecured can be dire. Doing this securely requires us to tackle the insecurities of the communications protocol head on.

One strategy is to insert a protocol ‘security layer’ between the stack and the application. The security layer then mediates between the application and the stack to enforce the system’s overall security policy. For example, the layer could confirm (a minimal sketch follows the list below):

  • that the software update originated from an authenticated source,
  • that the update had not been modified,
  • that the update itself had been authorised, and
  • that the resources required by the downloaded software conform to any onboard safety or security policy.
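
By way of illustration only, here’s a minimal sketch of such a mediation layer; the function names and policy checks are assumptions for the example, and a real system would use public-key signatures rather than a shared key.

```python
import hashlib
import hmac

class UpdateRejected(Exception):
    pass

def verify_update(blob: bytes, signature: bytes, shared_key: bytes,
                  authorised_versions: set, manifest: dict) -> bytes:
    """Mediate between the comms stack and the application loader:
    only an update that passes every policy check is handed on."""
    # 1. Authenticated source / integrity: HMAC over the received image
    expected = hmac.new(shared_key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise UpdateRejected("source not authenticated or image modified in transit")

    # 2. Authorisation: is this specific release approved for this device?
    if manifest.get("version") not in authorised_versions:
        raise UpdateRejected("update not authorised for this device")

    # 3. Resource policy: does the image fit the partition reserved for it?
    if manifest.get("size", len(blob)) > 256 * 1024:
        raise UpdateRejected("image exceeds the onboard resource budget")

    return blob   # safe to pass up to the application loader
```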

There are also obvious economy of mechanism advantages when dealing with protocols like the TCP/IP monster. Who, after all, wants to mess around with the entirety of the TCP/IP stack, given that Richard Stevens took three volumes to define the damn thing? Similarly, who wants to go through the entire process again when moving from IPv4 to IPv6? :)

Interesting documentary on SBS about the Germanwings tragedy; if you want a deeper insight see my post on the dirty little secret of civilian aviation. By the way, the two-person rule only works if both those people are alive.

What burns in Vegas…

Ladies and gentlemen you need to leave, like leave your luggage!

This has been another moment of aircraft evacuation Zen.

Lady Justice (Image source: Jongleur CC-BY-SA-3.0)

Or how I learned to stop worrying about trifles and love the Act

One of the Achilles heels of the current Australian WH&S legislation is that it provides no clear point at which you should stop caring about potential harm. While there are reasons for this, it does mean that we can end up with some theatre of the absurd moments where someone seriously proposes paper cuts as a risk of concern.

The traditional response to such claims of risk is to point out that actually the law rarely concerns itself with such trifles. Or more pragmatically, as you are highly unlikely to be prosecuted over a paper cut it’s not worth worrying about. Continue Reading…

The bond between a man and his profession is similar to that which ties him to his country; it is just as complex, often ambivalent, and in general it is understood completely only when it is broken: by exile or emigration in the case of one’s country, by retirement in the case of a trade or profession.

Primo Levi (1919-87)

Defence in depth

One of the oft-stated mantras of both system safety and cyber-security is that a defence in depth is required if you’re really serious about either topic. But what does that even mean? How deep? And depth of what exactly? Jello? Cacti? While such a statement has a reassuring gravitas, in practice it’s devoid of meaning unless you can point to an exemplar design and say: there, that is what a defence in depth looks like. Continue Reading…

Technical debt

05/09/2015

St Briavels Castle Debtors Prison (Image source: Public domain)

Paying down the debt

A great term that I’ve just come across: technical debt is a metaphor coined by Ward Cunningham to reflect how a decision to act expediently for an immediate reason may have longer-term consequences. This is a classic problem during design and development, where we have to balance various ‘quality’ factors against cost and schedule. The point of the metaphor is that this debt doesn’t go away; the interest on that sloppy or expedient design solution keeps on getting paid every time you make a change and find that it’s harder than it should be. Turning around and ‘fixing’ the design in effect pays back the principal that you originally incurred. Failing to pay off the principal? Well, such tales can end darkly. Continue Reading…

Inspecting Tacoma Narrows (Image source: Public domain)

We don’t know what we don’t know

The Tacoma Narrows bridge stands, or rather falls, as a classic example of what happens when we run up against the limits of our knowledge. The failure of the bridge due to a then-unknown torsional aeroelastic flutter mode, to which the bridge with its high span-to-width ratio was particularly vulnerable, is almost a textbook example of ontological risk. Continue Reading…

Icicles on the launch tower (Image source: NASA)

An uneasy truth about the Challenger disaster

The story of Challenger in the public imagination could be summed up as ‘heroic engineers versus wicked managers’, which is a powerful myth, but unfortunately just a myth. In reality? Well, the reality is more complex, and the causes of the decision to launch rest in part upon the failure of the engineers participating in the launch decision to clearly communicate the risks involved. Yes, that’s right, the engineers screwed up in the first instance. Continue Reading…

Risk managers are the historians of futures that may never be.

Matthew Squair

I’ve rewritten my post on epistemic, aleatory and ontological risk pretty much completely, enjoy.

Qui enim teneat causas rerum futurarum, idem necesse est omnia teneat quae futura sint. Quod cum nem…

[Roughly, He who knows the causes will understand the future, except no-one but god possesses such faculty]

Cicero, De Divinatione, Liber primus, LVI, 127


Piece of wing found on La Réunion Island (Image source: reunion 1ere)


Why this bit of wreckage is unlikely to affect the outcome of the MH370 search

If this really is a flaperon from MH370 then it’s good news in a way, because we could use wind and current data for the Indian Ocean to determine where it might have gone into the water. That in turn could be used to update a probability map of where we think MH370 went down, by adjusting our priors in the Bayesian search strategy, thereby ensuring that all the information we have is fruitfully integrated into the search.

Well… perhaps it could, if the ATSB were actually applying a Bayesian search strategy, but apparently they’re not. So the ATSB is unlikely to get the most out of this piece of evidence, and the only real upside that I see is that it should shut down most of the conspiracy nut jobs who reckoned MH370 had been spirited away to North Korea or some such. :)
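
For what it’s worth, here’s a minimal sketch of the kind of grid-based Bayesian update I mean; all the numbers are invented, and in practice the hard part is the reverse-drift model that supplies the likelihoods.

```python
# Toy Bayesian update of a search-area probability map (all numbers invented)
prior = {"cell_A": 0.2, "cell_B": 0.5, "cell_C": 0.3}   # prior P(wreck in cell)

# Likelihood of the flaperon washing up on La Reunion given the wreck is in each cell,
# which in reality would come from a reverse-drift model of winds and currents
likelihood = {"cell_A": 0.05, "cell_B": 0.3, "cell_C": 0.6}

evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}

print(posterior)   # search effort shifts toward the cells most consistent with the drift evidence
```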

We must contemplate some extremely unpleasant possibilities, just because we want to avoid them. 

As quoted in ‘The New Nuclear Age’. The Economist, 6 March 2015

Albert Wohlstetter

Jeep (Image source: Andy Greenberg/Wired)

A big shout-out to the Chrysler-Jeep control systems design team: it turns out that flat, un-partitioned architectures are not so secure, after security experts Charlie Miller and Chris Valasek demonstrated the ability to remotely take over a Jeep via the internet and steer it into a ditch.

Chrysler has now patched the Sprint/UConnect vulnerability, and subsequently issued a recall notice for 1.4 million vehicles which requires owners to download a security patch onto a USB stick and then plug it into their car to update the firmware. So a big well done, Chrysler-Jeep guys, you win this year’s Toyota Spaghetti Monster prize* for outstanding contributions to embedded systems design.

Continue Reading…

There are no facts, only interpretations…

Friedrich Nietzsche

More woes for OPM, and pause for thought for the proponents of centralized government data stores. If you build it they will come…and steal it.

For no other reason than to answer the rhetorical question. Feel free to share.

How?

The offending PCA serial cable linking the comms module to the motherboard (Image source: Billy Rios)

Hannibal ante portas!

A recent article in Wired discloses how hospital drug pumps can be hacked and the firmware controlling them modified at will. Although in theory the comms module and motherboard should be separated by an air gap, in practice there’s a serial link cunningly installed to allow firmware to be updated via the interwebz.

As the Romans found, once you’ve built a road that a legion can march down it’s entirely possible for Hannibal and his elephants to march right up it. Thus proving once again, if proof be needed, that there’s nothing really new under the sun. In a similar vein we probably won’t see any real reform in this area until someone is actually killed or injured.

This has been another Internet of Things moment of zen.

Two reactors

A tale of another two reactors

There’s been much debate over the years as to whether the various tolerance-of-risk approaches actually satisfy the legal principle of reasonable practicability. But there hasn’t, to my mind, been much consideration of the value of simply adopting the legalistic approach in situations where we have a high degree of uncertainty regarding the likelihood of adverse events. In such circumstances basing our decisions upon what can turn out to be very unreliable estimates of risk can have extremely unfortunate consequences. Continue Reading…

SFAIRP

The current Workplace Health and Safety (WHS) legislation of Australia formalises the common law principle of reasonable practicability in regard to the elimination or minimisation of risks associated with industrial hazards. Having had the advantage of going through this with a couple of clients, the above flowchart is my interpretation of what reasonable practicability looks like as a process, annotated with cross-references to the legislation and guidance material. What’s most interesting is that the process is determinedly not about tolerance of risk but instead firmly focused on what can reasonably and practicably be done. Continue Reading…