A Convenient Fiction (Automation as Proxy)

06/07/2010

In all that we do in the design of automated systems there is one fundamental truth that is ignored by practitioners time and time again.

And the truth is simply this: when we treat automation as an autonomous agent we are deceiving ourselves. Treating automation (even the most advanced) as an autonomous agent is, much like the corporation or Rousseau’s concept of the state, a convenient legal fiction.

The reality is that when a pilot flies through an icing event or a driver steers through a skid, the aircraft or car is not intelligent; the intelligence is in the head of the designer, and the automation is merely his proxy. But, as Don Norman points out, on the day you cannot talk to the designer through his proxy.

Now traditionally the designer tries to come up with a set of behavioural rules that cover all the circumstances he can imagine, and we call the collection of these rules an ‘expert’ system. Of course, as the real world is infinitely variable and the resources available to a designer are limited, the set of rules is bounded. Given this bounded, and therefore incomplete, state, any expert system a priori violates Ashby’s law of requisite variety.
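To make the point concrete, here is a minimal sketch of such a bounded rule set (in Python). The conditions, thresholds and fallback are purely hypothetical, not any real system’s logic; the point is only that the rules are fixed and finite while the states the world can present are not.

```python
# A minimal, purely illustrative rule set: the conditions, thresholds and
# fallback are assumptions for this sketch, not any real system's logic.

RULES = [
    # (condition, action) pairs anticipated by the designer at design time
    (lambda s: s["airspeed"] < 120 and s["flaps"] == 0, "warn: low speed with flaps up"),
    (lambda s: s["icing_detected"],                     "engage anti-ice"),
    (lambda s: s["aoa"] > 15,                           "command nose down"),
]

def decide(state: dict) -> str:
    """Return the first matching rule's action, else a fixed fallback."""
    for condition, action in RULES:
        if condition(state):
            return action
    # Every circumstance the designer did not imagine falls through to here:
    # the rule set is bounded, the variety of the world is not.
    return "no rule matched: maintain current behaviour"

# A state outside the anticipated set simply falls through to the fallback.
print(decide({"airspeed": 250, "flaps": 1, "icing_detected": False, "aoa": 3}))
```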

This limited rule set leads to another fundamental problem with ‘expert systems’: they tend to ‘evolve’ in an ad hoc fashion. We develop a set of rules and release them for use with the system, but by definition we cannot anticipate all the unanticipated circumstances (1). Then unexpected circumstances occur and we add extra rules to cover them. So the rule set grows, like Topsy, and with that ad hoc growth comes the potential for further inadvertent interaction between rules and rule sets.
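The interaction problem can be sketched just as simply. Again, every rule, name and threshold below is hypothetical and chosen only to show the pattern: a patch rule appended after an in-service surprise can fire together with an original rule in a combination nobody ever analysed.

```python
# A toy illustration of rule interaction after ad hoc growth; the rules,
# names and thresholds are assumptions, chosen only to show the pattern.

def decide(rules, state):
    """Collect every action whose condition fires for the given state."""
    return [action for condition, action in rules if condition(state)]

# Release 1: warn the crew whenever airspeed is low.
rules_v1 = [
    (lambda s: s["airspeed"] < 120, "warn: low airspeed"),
]

# Release 2: after nuisance warnings during taxi, a patch rule is appended
# to suppress alerts below 40 kt. Each rule is sensible in isolation.
rules_v2 = rules_v1 + [
    (lambda s: s["airspeed"] < 40, "suppress all alerts"),
]

# During a rejected take-off at 35 kt both rules now fire, and the combined
# "warn" plus "suppress" behaviour was never analysed as a pair.
print(decide(rules_v2, {"airspeed": 35}))
# ['warn: low airspeed', 'suppress all alerts']
```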

Now, as we’ve already established, expert systems are not expert in the way we think of human expertise. They are just a set of rules. Sophisticated and nuanced perhaps, but at heart they are still a set of fixed rules. Because of this fixed nature there is a fundamental gulf, both in what the automation can understand of the world and in how it and the human operator ‘think’.

Thus, when we have a system made up of automation and human, there are three potential areas of difference between human and automation behaviour: perception (what is going on), goals (what do I want to achieve) and action (what must we do). And within this region of divergence, conflict, and therefore hazard, inevitably lurks (2).
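As a rough sketch of those three axes (with entirely hypothetical field names and values), human and automation can each be thought of as holding a state along perception, goals and action, with the hazard zone being wherever those states differ.

```python
# A rough sketch of the three axes of divergence; the field names and the
# example values below are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Agent:
    perception: str  # what this party believes is going on
    goal: str        # what this party is trying to achieve
    action: str      # what this party is doing about it

def divergence(human: Agent, automation: Agent) -> list:
    """Return the axes on which human and automation currently differ."""
    return [axis for axis in ("perception", "goal", "action")
            if getattr(human, axis) != getattr(automation, axis)]

human = Agent(perception="airspeed data unreliable",
              goal="hold pitch and power",
              action="hand fly the aircraft")
auto = Agent(perception="overspeed",
             goal="reduce speed",
             action="command nose up")

# All three axes disagree: this is the region where conflict, and
# therefore hazard, lurks.
print(divergence(human, auto))
```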

Notes

1. See, for example, the USN’s F/A-18 AoA fault tolerance logic failures. In one accident the AoA probe failed to an extreme position during the catapult stroke but prior to the ‘weight off wheels’ state; the flight control system recognised the failure and reverted to an assumed safe (but in fact hazardous) AoA probe value sampled prior to weight off wheels.

2. See the Iberia FL 1456 and QF 72 incidents for examples of such interaction hazards.
