Archives For Complexity
Complexity: what it is, how we deal with it, and how it contributes to risk.
Linguistic security, and the second great crisis of computing
Distributed systems need to communicate, or talk, through some sort of communications channel in order to achieve coordinated behaviour. This introduces the need for components firstly to recognise the difference between valid and invalid messages, and secondly to share a common set of expectations about behaviour. Fairly obviously, these two problems of coordination have safety and security implications.
The problem is that up to now security has been framed in the context of code, but this approach fails to recognise that recognition and context are essentially language problems, which brings us firstly to the work of Chomsky on languages and then to Turing on computation. As it turns out, above a certain level of expressive power in the Chomsky hierarchy, figuring out whether an input is valid runs into Turing's halting problem. For such expressively powerful languages the question 'is it valid?' is simply undecidable, no matter how hard you try. This is an important point: it's not just hard, or even really, really hard, but actually undecidable, so… don't go there.
Enter the study of linguistic security, which addresses the vulnerabilities introduced by the hitherto unrecognised expressive power of the languages we communicate with.
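The practical prescription that falls out of this argument is to keep an input language's expressive power low enough (regular, or deterministic context-free) that full recognition is decidable and cheap, and to reject anything the recognizer does not accept before acting on it. A minimal sketch of that pattern, using an invented wire format and field names purely for illustration:

```python
import re

# Hypothetical message format for illustration: "SET <key>=<value>",
# with key and value drawn from small fixed alphabets. Because this
# language is regular, full recognition is decidable and cheap.
MESSAGE = re.compile(r"\ASET (?P<key>[a-z]{1,16})=(?P<value>[0-9]{1,8})\Z")

def recognise(message: str):
    """Fully recognise the message before any processing: either the
    whole input matches the grammar, or it is rejected outright.
    No partial parses, no 'best effort' repair of malformed input."""
    m = MESSAGE.match(message)
    if m is None:
        raise ValueError("invalid message rejected before processing")
    return m.group("key"), int(m.group("value"))

# A valid message is accepted whole...
print(recognise("SET speed=42"))

# ...while anything outside the language is rejected, including the
# kind of trailing garbage a lenient parser might silently ignore.
try:
    recognise("SET speed=42; extra junk")
except ValueError as e:
    print(e)
```

The design choice worth noting is that recognition and processing are separate steps: the recogniser answers only 'is this a sentence of my language?', and code downstream never sees input that isn't.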
Economy of mechanism and fail safe defaults
I’ve just finished reading the testimony of Phil Koopman and Michael Barr given for the Toyota un-commanded acceleration lawsuit. Toyota settled after the jury found it had acted with reckless disregard, but before the jury returned its decision on punitive damages, and I’m not surprised.
Or ‘On the breakdown of Bayesian techniques in the presence of knowledge singularities’
One of the abiding problems of safety critical ‘first of’ systems is that you face, as David Collingridge observed, a double bind dilemma:
- Initially an information problem, because the ‘real’ safety issues (hazards) and their risks cannot be easily identified or quantified until the system is deployed, but
- By the time the system is deployed you face a power (inertia) problem: control or change is difficult once the system is deployed or delivered. Eliminating a hazard is usually very difficult, and we can often only mitigate it in some fashion.
With apologies to the philosopher George Santayana, I’ll make the point that BMW’s Head-Up Display technology is in fact not the unalloyed blessing promised in BMW’s marketing material.