Archives For Complexity

Complexity: what it is, how we deal with it, and how it contributes to risk.

Toyota ECM (Image source: Barr testimony presentation)

Comparing and contrasting

In 2010 NASA was called in by the National Highway Traffic Safety Administration to help figure out the reason for reported unintended Toyota Camry accelerations. They subsequently published a report including a dedicated software annex. What’s interesting to me is the differing outcomes and conclusions of the two reports regarding software. Continue Reading…

Tweedle Dum and Dee (Image source: Wikipedia Commons)

Revisiting the Knight, Leveson experiments

In the through-the-looking-glass world of high integrity systems, N-version programming is often touted as a means to achieve extremely low failure rates without extensive V&V, thanks to the postulated independence of failures in independently developed software. Unfortunately this is hokum, as Knight and Leveson amply demonstrated with their N-version experiments, but there may actually be advantages to N-versioning, although not quite what its proponents originally expected.

Continue Reading…

System Safety Fundamentals Concept Cloud

There’s a very interesting site, run by a couple of Australian lads, called Text is Beautiful, that provides some free tools for visually representing the relationships within a text. No, this isn’t the same as Wordle; these guys have gone beyond that to develop what they call a Concept Cloud, where colours indicate distinct themes and themes themselves represent rough groupings of related concepts. What’s a concept? Well, a concept is made up of several words, with each concept having its own unique thesaurus that is statistically derived from the text.

So without further ado I took the Fundamentals of System Safety course that I teach and dropped it in the hopper; the results, as you might guess, are above. Very neat to look at, and it also gives an interesting insight into how the concepts that the course teaches interrelate. Enjoy. :)

Well, I can’t believe I’m saying this, but those happy clappers of the software development world, the proponents of Agile, Scrum and the like, might (grits teeth) actually have a point. At least when it comes to the development of novel software systems in circumstances of uncertainty, and possibly even for high assurance systems.

Continue Reading…

Mars code: JPL and risk based design

Linguistic security, and the second great crisis of computing

Distributed systems need to communicate, or talk, through some sort of communications channel in order to achieve coordinated behaviour, which introduces the need for components firstly to recognise the difference between valid and invalid messages and secondly to have a common set of expectations about behaviour. Fairly obviously, these two problems of coordination have safety and security implications.

The problem is that up to now security has been framed in the context of code, but this approach fails to recognise that message recognition and context are essentially language problems, which brings us firstly to the work of Chomsky on languages and then to Turing on computation. As it turns out, above a certain level of expressive power in the Chomsky hierarchy, figuring out whether an input is valid runs into Turing’s halting problem. For such expressively powerful languages the question ‘is it valid?’ is simply undecidable, no matter how hard you try. This is an important point: it’s not just hard, or even really, really hard, it is actually undecidable, so… don’t go there.
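To make the decidability point concrete, here’s a minimal sketch (the message format is entirely hypothetical) of the safe end of the spectrum: if a protocol stays down at the regular end of the Chomsky hierarchy, validity checking is a finite-automaton problem that always terminates with a definite yes or no.

```python
import re

# Hypothetical sensor message format: "ID:<4 hex digits>;VAL:<signed integer>"
# This grammar is regular (Chomsky type 3), so validity IS decidable:
# a finite automaton (here, a regex matched against the whole string)
# always halts with a definite answer.
MESSAGE = re.compile(r"ID:[0-9A-F]{4};VAL:-?[0-9]+")

def is_valid(msg: str) -> bool:
    """Decide membership in the regular message language."""
    return MESSAGE.fullmatch(msg) is not None

print(is_valid("ID:00AF;VAL:-42"))  # True
print(is_valid("ID:00AF;VAL:"))     # False -- value is missing
```

Push the protocol’s grammar up the hierarchy (recursive, context-sensitive structures) and this guarantee evaporates, which is exactly the trap the linguistic security folks are warning about.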

Enter the study of linguistic security, which addresses the vulnerabilities introduced by the hitherto unrecognised expressive power of the languages we communicate with.

Continue Reading…


The failure of NVP and the likelihood of correlated security exploits

In 1986, John Knight and Nancy Leveson conducted an experiment to empirically test the assumption of independence in N-version programming. What they found was that the hypothesis of independent failures in N-version programs could be rejected at the 99% confidence level. While their results caused quite a stir in the software community (see their ‘A reply to the critics’ for a flavour), what’s of interest to me is what they found when they took a closer look at the software faults.

…approximately one half of the total software faults found involved two or more programs. This is surprisingly high and implies that either programmers make a large number of similar faults or, alternatively, that the common faults are more likely to remain after debugging and testing.

Knight, Leveson 1986
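A back-of-the-envelope sketch (the numbers below are made up for illustration, not Knight and Leveson’s actual data) shows why correlated faults gut the independence assumption:

```python
# Illustrative, assumed failure probability of a single version on a
# random input (NOT a measured figure).
p_single = 1e-4

# Under the independence assumption, a 2-version system suffers a
# coincident failure only when both versions fail on the same input:
p_pair_independent = p_single ** 2        # 1e-8

# But if roughly half the faults are common to two or more versions
# (programmers making similar mistakes on the same hard inputs), a
# crude model of coincident failure is:
common_fraction = 0.5
p_pair_correlated = common_fraction * p_single   # 5e-5

# The correlated estimate exceeds the independence-based one by
# orders of magnitude:
print(p_pair_correlated / p_pair_independent)    # ~5000x worse than assumed
```

The point is not the particular numbers but the shape of the result: multiplying small probabilities together is only legitimate when failures really are independent, and the experiment showed they aren’t.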

Continue Reading…