I guess we’re all aware of the wave of texting-while-driving legislation, as well as recent moves in a number of jurisdictions to make the penalties more draconian. And it seems a reasonable supposition that such legislation would reduce the incidence of accidents, doesn’t it?
The above infographic, courtesy of Jeff Masters’ Wunderblog, says it all: six of the 13 most destructive superstorms have occurred after 1998.
Interestingly, the study dates from circa 2011, but I’ve seen no reflection in Australia on the uncomfortable fact it found: that all we are doing with such schemes is shifting the death rate to an older cohort. Of course all the adults can sit back and congratulate themselves on a job well done, except, well, except it doesn’t work, and worse yet it sucks resources and attention away from the search for more effective remedies.
In essence we’ve done less than nothing as a society to address teenage driving-related deaths: safety theatre of the worst sort…
And not quite as simple as you think…
The testimony of Michael Barr in the recent Oklahoma Toyota court case highlighted, amongst other things, problems with the design of the watchdog timer in Toyota’s Camry ETCS-i throttle control system, which got me thinking about the pervasive role that watchdog timers play in safety-critical systems.
And I’ve just updated the philosophical principles for acquiring safety-critical systems. All suggestions welcome…
Economy of mechanism and fail safe defaults
I’ve just finished reading the testimony of Phil Koopman and Michael Barr given for the Toyota un-commanded acceleration lawsuit. Toyota settled after being found to have acted with reckless disregard, but before the jury returned its decision on punitive damages, and I’m not surprised.
Over on The Emergent Chaos blog, there’s a neat post presenting Saltzer and Schroeder’s security principles using scenes from Star Wars as illustration. A fun introduction to a classic work on the security of information systems.
The article also made me stop and think about how the principles of security and safety engineering seem to overlap in a number of areas, and what we might get from looking at safety with our security glasses on.
The subject of another post perhaps.
Why risk communication is tricky…
An interesting post by Ross Anderson on the problems of risk communication, in the wake of the savage storm that the UK has just experienced. Doubly interesting to compare the UK’s disaster communication during this storm to that of the NSW government during our recent bushfires.
Or ‘On the breakdown of Bayesian techniques in the presence of knowledge singularities’
One of the abiding problems of safety-critical ‘first of’ systems is that you face, as David Collingridge observed, a double-bind dilemma:
- Initially an information problem, because ‘real’ safety issues (hazards) and their risks cannot be easily identified or quantified until the system is deployed, but
- By the time the system is deployed you face a power (inertia) problem: control or change is difficult once the system has been delivered. Eliminating a hazard at that point is usually very difficult, and often all we can do is mitigate it in some fashion.