Safety Systems, Hume and Uniformity

31/01/2012 — 4 Comments


One of the canonical design principles of the nuclear weapons safety community is to base the behaviour of safety devices upon fundamental physical principles.

For example, a nuclear weapon's firing circuit might include capacitors that, in the event of a fire, will always fail to open circuit, thereby safing the weapon. The safety of the weapon in this instance is assured by devices whose performance rests on well understood and predictable material properties.
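To make the logic concrete, here's a minimal sketch (the names and the temperature threshold are hypothetical, purely for illustration) of what such a thermal weak link enforces: the safe state follows from a material property, not from any control logic.

```python
# Minimal sketch with hypothetical names and thresholds, not real device data:
# the capacitor is modelled as a thermal weak link that fails to open circuit
# above a threshold temperature, leaving the firing circuit unable to complete.

FAIL_OPEN_TEMP_C = 150.0  # assumed fail-open threshold, for illustration only


def capacitor_intact(temperature_c: float) -> bool:
    """Below the threshold the capacitor conducts; above it, material
    properties (not logic) guarantee it fails to open circuit."""
    return temperature_c < FAIL_OPEN_TEMP_C


def weapon_can_fire(command_valid: bool, temperature_c: float) -> bool:
    # Firing requires both a legitimate command and an intact firing circuit.
    return command_valid and capacitor_intact(temperature_c)


print(weapon_can_fire(True, 25.0))   # True  - normal conditions
print(weapon_can_fire(True, 400.0))  # False - the fire has safed the weapon
```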

While the above may seem very domain specific, it actually serves as an introduction to the more general problem of how we ‘prove’ that any system will be safe at some arbitrary point in the future.

Underlying all such proofs is an implicit assumption of what Hume called the ‘Principle of Uniformity’. Or to put it another way, when we argue from the specific to the general, we are assuming that what we see working locally also applies more generally.

In this case we’re assuming that (for example) current safe behaviour will continue into the future. Unfortunately, an argument of this form is an inductive one, and we run straight into Hume’s problem of induction.

Even more unfortunately for us, the more assumptions we need to make about future political, economic, technological or cultural conditions to support our argument, the worse our epistemic position and the greater the risk.
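A rough back of the envelope calculation illustrates the point (the probabilities here are hypothetical): if our argument rests on n independent assumptions about the future, each holding with probability p, then the argument as a whole holds with probability p^n, which decays rapidly as the assumptions stack up.

```python
# Back-of-the-envelope illustration with hypothetical probabilities: an
# argument resting on n independent assumptions, each holding with
# probability p over the period of interest, holds with probability p**n.

def argument_holds(p: float, n: int) -> float:
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} assumptions -> {argument_holds(0.95, n):.3f}")
# 1 -> 0.950, 5 -> 0.774, 10 -> 0.599, 20 -> 0.358: each extra assumption
# about future conditions erodes our epistemic position.
```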

One response to such epistemic risk is to rest the fundamental premises of the argument not upon such imponderables but upon those things we are most confident will persist into the future.

This drives us (as it did the nuclear weapons safety community) to base safety upon the laws of physics rather than those of design, procedure or custom, as physical laws have the greatest likelihood of persistence (uniformity) over time.

As an example, were we to design a nuclear power plant to be safe in the event of a loss of cooling, then the safety of the plant should be ensured not by a complex ‘add on’ appliqué of safety systems, but by the plant’s fundamental physical properties and behaviour (for example, a core whose reactivity falls as its temperature rises).

Similarly, if we were designing a long term nuclear waste repository, we should rely not upon engineered technologies of encapsulation, as we have no experience in designing such systems to survive geological epochs, but again upon the fundamental geology to sequester the waste.

Given that high consequences demand a very low frequency of occurrence, for such systems we must eschew safety based upon functional and procedural barriers and instead rely upon physical laws and properties.
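The trade-off is simple arithmetic (the figures below are illustrative, not regulatory targets): holding tolerable risk constant, the allowable event frequency must fall in direct proportion as the consequence rises.

```python
# Illustrative figures only: risk = frequency x consequence, so for a fixed
# tolerable risk the allowable event frequency falls as consequence rises.

def max_frequency(tolerable_risk_per_year: float, consequence: float) -> float:
    return tolerable_risk_per_year / consequence

print(max_frequency(1e-4, 1.0))  # 1e-04 events/year for a unit consequence
print(max_frequency(1e-4, 1e3))  # 1e-07 events/year when consequence is 1000x
```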

4 responses to Safety Systems, Hume and Uniformity

  1. 

    A small point: does “teleological” really apply to physical systems? In other words, does the system in and of itself have a goal, to be or not to be? Does the firing mechanism that fails deliberately at high temperature really have teleological mechanisms? Wiener said cybernetics had such mechanisms, but I’m not sure I understand his argument either.

    A larger point: physics, environment (surely not constant), and operators (surely not even rational under stress, but more likely subconscious “fast thinkers” in the Kahneman mold) make for a problematic risk management regime. Witness the Air France 447 disaster, where a small system failure or degradation led to disastrous consequences driven largely by improper operator responses.

    • 
      Matthew Squair 02/02/2012 at 11:51 am

      Thanks for the comments John, I reread my post and decided to redraft and remove the reference to teleological systems. As you point out, it does add some confusion to the thrust of the argument. To answer your original question, I wouldn’t say a safety device has a teleological purpose per se, but rather the system (in this case the weapon) to which it belongs. Weapons have two basic process goals (extrinsic finalities): first, to detonate when commanded legitimately, and secondly, at all other times not. :-)

  2. 

    Matthew,
    Having worked in the emergency shutdown business in the past, I can say the physical barrier approach to shutdown was the design basis: fuel valves that close when power is removed, doors that close when power is removed, MOVs (metal oxide varistors) that open under over-current conditions (rather than shorting), and a “fail to safety” culture for all exothermic processes.
    I was on the decommissioning side of the weapons business, but similar “fail to safe” design was in place for the machines that disassembled the weapons.
    Nancy Leveson speaks to many of these. NQA-1 as well.

    • 
      Matthew Squair 12/10/2012 at 2:44 pm

      I’ve just finished revising my Lessons post if you’re interested. What I find very powerful is the development by the guys at Sandia of a philosophy of safe design around the 3I principles (Isolation, Incompatibility and Inoperability).
