Why the future holds more unpleasant surprises for us
I think it was Donald Norman who pointed out, in a lecture entitled ‘The Design of Future Things’, that accidents in complex automated systems often arise from unintended interactions between operator and automation when both are trying to control the same system.
Norman’s comment is an insightful one, but the follow-on question is: how are automation and operator actually interacting? Looking at the current generation of automata for a moment through the lens of Rasmussen’s Skills, Rules and Knowledge (SRK) model of human cognition, we see automation interacting with humans mostly at the skill or rule level. Skill-based performance can be (and has been) fairly easily automated; see for example the very early autopilots and process plant feedback control loops. But the automation of rule-based behaviour has proven much more problematic.
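The contrast between these two SRK levels can be sketched in code. Below is a hypothetical, purely illustrative pairing: a proportional feedback loop of the kind early autopilots embodied (skill level), and a crude if-then alarm policy of the kind process plants encode (rule level). The gains, thresholds, and actions are invented for illustration, not drawn from any real system.

```python
# Skill level: continuous feedback control, e.g. an early autopilot
# holding a heading by issuing corrections proportional to the error.
def heading_hold(current, target, gain=0.5):
    """Return a corrective command proportional to heading error."""
    return gain * (target - current)

def simulate(initial, target, steps=50):
    """Crudely integrate the heading under repeated corrections."""
    heading = initial
    for _ in range(steps):
        heading += heading_hold(heading, target)  # error shrinks each tick
    return heading

# Rule level: discrete if-then responses to recognised situations,
# e.g. a simplistic alarm-handling policy in a process plant.
def alarm_response(pressure):
    if pressure > 10.0:
        return "open relief valve"
    if pressure < 2.0:
        return "close feed valve"
    return "no action"

print(round(simulate(initial=90.0, target=120.0), 3))
print(alarm_response(11.0))
```

The skill-level loop degrades gracefully as conditions drift, which is partly why it automated so easily; the rule-level policy is only as good as its enumerated conditions, and situations its designers never anticipated fall straight through it.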
So what should we expect when we embark on the ‘next great challenge’ of automation, that of automating human knowledge-based decision making? As the complexity of automation grows, at what point will it transition to making ‘knowledge-based’ decisions? And do the designers of these future systems, which are being designed now, really understand the implications?
I would submit that at the moment there is a gulf of understanding between the designers of such systems and the leading lights in this field, such as Reason, Klein and Rasmussen. So the question becomes: will automation act as a prosthesis for the human operator, or as a support? Will we heed the hard-learned lessons of the current generation of rule-driven technological systems? Perhaps, but our past record does not bode well.
Rasmussen, J., Skills, Rules and Knowledge: Signals, Signs and Symbols, and Other Distinctions in Human Performance Models, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-13, No. 3, May 1983.