Why taking risk is an inherent part of the human condition
On the 6th of May 1968 Neil Armstrong stepped aboard a Lunar Landing Training Vehicle (LLTV) for a routine training mission. During the flight the vehicle went out of control and crashed, Armstrong ejecting to safety seconds before impact.
Of the fleet of four LLTVs only one survived the Apollo flight program, so these were not ‘safe’ beasts to fly by any measure. Yet NASA management and astronauts alike enthusiastically pursued their use over the safer computer-based and tethered simulators. Why? Remember that no one knew how the LEM would actually fly until the first time someone came to fly it. So the LLTV, despite the risks, gave the astronauts a feel (and no more than that) for the performance of the lander that could not be provided in any other way.
“…the real in-flight situation provided by the LLTVs provided a reality that was hard to duplicate in a fixed-base simulator.”
For example, the LLTV gave astronaut trainees experience of the larger tilt angles and significantly different motion cues that would be encountered aboard the LEM. Learning how the machine responded gave astronauts confidence in it and reduced their cognitive workload during actual flight. Similarly, the LLTV’s predecessor, the LLRV (the research variant), exposed a potentially fatal design flaw in attitude control authority at low angular rates, which was subsequently corrected in the LEM design.
So, to put it bluntly, risking the lives of the Apollo crews and NASA test pilots was necessary to gain knowledge that could not be gained in any other way. By taking risks in the lead-up to the final assault on the moon, NASA sought to reduce the risk of the final landing. Simulations, by comparison, are just that: simulations. You cannot simulate the unexpected.
When we seek to do something for the first time there is an inherent element of uncertainty, and therefore both epistemic and ontological risk. With increased knowledge we can reduce the risks of the activity. In the Apollo program a stepping-stone approach was taken so that the program could learn from experience at an acceptable cost.
Each small step took NASA outside its experience base, but in turn valuable insight was gained and the overall risk profile was reduced. Had this approach not been taken, the first time problems emerged would have been on the actual mission itself. Indeed, Neil Armstrong credited the time he spent flying the LLTV with giving him the confidence to press on through the 1202 software alarm during the lunar descent. Had he not, an abort to orbit would have ended the first lunar mission ignominiously.
So it appears that to effectively manage the ontological and epistemic risk inherent in new endeavours we need to reduce uncertainty, and to do that we need to increase our knowledge. In turn, some of this knowledge may only be purchased at the expense of exposing ourselves, at least in part, to the very risk we are concerned about (1).
…Out of this nettle, danger, we pluck this flower, safety.
Henry IV Part 1: Act 2, Scene 3
In the end it appears that the safety of new endeavours, with their ‘unknown unknowns’, can often only really be assured by taking planned and prudent risks. This is a concept that may not sit well with an increasingly risk-averse western culture.
1. An approach characterised by David Collingridge in his work The Control of Technology as seeking to ensure the corrigibility of decisions.