Interlocks and non-Aristotelian logic


Cassini Descent Module (Image source: NASA)

When is an interlock not an interlock?

I was working on an interface problem the other day. The problem related to how to judge when a payload (attached to a carrier bus) had left the parent (much like the Huygens lander leaving the Cassini spacecraft above). Now I could use what’s called the ‘interlock interface’, which is a discrete ‘loop-back’ that runs through the bus-to-payload connector then turns around and heads back into the bus again. The interlock interface provides a means for the carrier’s avionics to determine whether the payload is electrically mated to the bus. So should I use this as an indication that the payload has left the carrier bus as well? Well, maybe, maybe not.

But why ever not? I hear you ask. Surely ‘signal present = payload present’ and conversely ‘signal not present = payload not present’; that’s only logical, yes? The answer of course is that it’s completely logical, as long as you accept the premise that the world works along Aristotelian lines. In this case, yes, our signal may be Aristotelian (present, not present), but that does not mean the payload is equivalently so (there, not there). Given that an internal failure within the carrier, such as a loose connection, broken wire or contaminated contact anywhere along the path, can also cause a loss of continuity, we cannot conclude from a dead interlock that the payload is no longer present. We may only assert, at best, that there is ‘now uncertainty that the payload is present’. Uncertainty is clearly not equivalent to an indication that the payload has physically separated from the parent, even though ‘most of the time’ we’d be safe in making that assumption. ‘Most of the time’ type statements tend to make safety engineers nervy, as a rule.

Which leads us to the problems of Aristotelian logic. Traditionally, and we’re all about the tradition here, A-logic can take strictly only two values, i.e. true or false. In fact there’s a name for this: the ‘law of the excluded middle’. Which is fine, if you can rely on the real world to have such black and white divisions. In practice, as the interlock example above illustrates, life is usually much less accommodating, and we end up with a situation in which we are pretty sure the signal means ‘present = true’, but lack of the signal only means ‘present = maybe’, not ‘present = false’.
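The conservative reading of the interlock can be sketched in a few lines. This is purely illustrative (the names and the single-boolean view of the interlock are my own assumptions, not any particular avionics interface): continuity proves the payload is mated, but loss of continuity only licenses ‘unknown’.

```python
from enum import Enum

class PayloadState(Enum):
    PRESENT = "present"
    ABSENT = "absent"    # never reachable from the interlock signal alone
    UNKNOWN = "unknown"

def interpret_interlock(continuity: bool) -> PayloadState:
    """Conservative, three-valued reading of the loop-back interlock.

    Continuity demonstrates the payload is electrically mated. Loss of
    continuity proves nothing by itself: a broken wire or contaminated
    contact anywhere along the loop looks identical to a real separation.
    """
    if continuity:
        return PayloadState.PRESENT
    return PayloadState.UNKNOWN  # not ABSENT: here the excluded middle bites
```

A naive two-valued design would return ABSENT in the second case, which is exactly the assumption that gets critical functions into trouble.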

Of course there have been a number of cases I’ve seen where engineers have naively relied on the assumption that the law of the excluded middle applies, with predictably dire results when the signal was used to control critical functions. Editorial comment: this trap seems to be one into which software engineers and designers fall on a regular basis, possibly because in their discipline the binary ‘true, not true’ paradigm dominates. What we’re dealing with here is a real world that is better modelled, in this instance, using some form of non-monotonic logic, because we can only reason as a default that loss of the interlock signal entails separation of the carrier and its payload; there is always the chance that additional information may come along and negate our conclusion. Using default logic we would set a default rule (loss of interlock = payload not present), but there would also exist a background theory indicating when such a signal would not indicate separation (release clamps closed, ejector not actuated, etc.).
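That default rule can be sketched as follows, again with hypothetical names of my own invention: the default conclusion (interlock lost, therefore separated) stands only if nothing in the background theory defeats it.

```python
def separation_concluded(interlock_lost: bool,
                         clamps_closed: bool,
                         ejector_actuated: bool) -> bool:
    """Default rule: loss of interlock => payload has separated,
    unless the background theory blocks that conclusion."""
    if not interlock_lost:
        return False  # default rule not triggered at all
    # Background theory: facts inconsistent with a real separation.
    if clamps_closed or not ejector_actuated:
        return False  # more plausibly a wiring fault than a separation
    return True  # default conclusion stands, though it remains defeasible
```

The point is not the particular predicates but the shape: the conclusion is drawn by default and withdrawn when new information arrives, which two-valued logic has no natural way to express.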

Which brings us to what Brian Cantwell Smith termed, in his seminal paper The Limits of Correctness, the right-side problem: the relationship between our partial, abstract and formal model of the world and the actual infinite and very informal world itself. It’s in the inadequacies of that relationship that systems so often fail. So while I’m not suggesting that changing the type of logic you use will solve this problem, I would reflect that using one that has inherent difficulty expressing such uncertainty is probably ‘a bad thing’.

And all of this from a simple little interlock problem. 🙂