A conference call from Fukushima

08/05/2014

Preamble

The following is a critique of a teleconference conducted on 16 March 2011 between the UK embassy in Japan and the UK Government's senior scientific adviser, together with members of SAGE (the Scientific Advisory Group for Emergencies), a UK government crisis panel convened in the aftermath of the Japanese tsunami to advise on the Fukushima crisis. These comments pertain specifically to the 16 March (UK time) teleconference with the British embassy and to the minutes of the SAGE meetings of the 15th and 16th that preceded it.

As a case study, what was said during this teleconference, and how it was communicated, exemplifies why risk communication under crisis conditions is so very difficult, and how easily even highly trained and experienced professionals can fall into the narrative fallacy.

On the limits of advice

The first issue is that, if we view the statements made during the telecon as a risk assessment, the limitations of that assessment were not identified up front to the audience.

For example, reading the SAGE minutes you can see that the assessments were necessarily based upon a series of assumptions, owing to the lack of hard data from TEPCO and NISA (this is also borne out by material released under FOI requests to the Guardian newspaper). However, this did not come through in the telecon, where confidence was expressed in the information that SAGE did possess. There is some discussion of uncertainties in the answer to one of the questions raised towards the end, but no meaningful coverage of the limits of the advice.

As another example of those limitations, on 14/15 March (Japan time) the worst of the fallout from the hydrogen explosions was being deposited by rain over the south of Japan, due to cyclone-induced wind circulations. Fortunately this did not occur while the highest concentrations were over Tokyo or other population centres, but it did result in the contamination of Tokyo's water supply and the presence of hotspots well outside the exclusion zone. Contrast this with the advice (perhaps more accurately, the reassurance) provided in the telecon on the question of whether the wind could carry fallout.

Uncertainty and the estimation of reasonable

Secondly, there is the (unstated) fluidity of the worst-case scenario, and the associated issue of what 'reasonable' actually means.

At the time of the initial telecon on the morning of 16 March (UK time), the state of the fuel rod pools had already been discussed during the SAGE meeting of the 15th (1400 UK) and an enhanced worst case agreed, although the original reasonable worst case was the one used in the telecon. By the 16th SAGE was discussing, in the context of the enhanced worst case, the possibility of fallout over Tokyo sufficient to require sheltering in place. Throughout this period the risk of a hydrogen explosion due to the exposure of the zircaloy fuel cladding in the core seems not to have been considered by SAGE; the explosions on the 15th therefore came as a surprise, as comments made during the 16 March teleconference indicate.

So what was considered a 'reasonable' worst case evolved as the understanding of the situation evolved, yet no indication was given during the telecon that this would be the case. One could characterise this as a failure to recognise, and to communicate clearly, the high degree of ontological uncertainty in such situations: what Donald Rumsfeld called the things 'we don't know we don't know'.

Note that the qualifier 'reasonable' is itself somewhat problematic, because what are the criteria for reasonableness? My conclusion is that 'reasonable' denotes a scenario that is somewhat less than the worst case, for whatever reason; but isn't this potentially just the 'most likely' outcome by another name? The use of the term in worst-case assessments has itself been criticised by the House of Commons Science & Technology Committee (3rd report, HoC, March 2012). Of course, 'reasonable' also imputes a degree of certainty, implying that you can characterise a number of alternative scenarios and assign a relative likelihood to each; unfortunately, ontological uncertainty (those Black Swans again) tends to make a mockery of such estimates.
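To see why, consider a minimal sketch in Python; every scenario and number below is invented purely for illustration and is not drawn from SAGE's actual advice. Any likelihood assignment of this kind is normalised over the scenarios that have been enumerated, so an outcome that nobody thought to list is implicitly assigned a probability of zero.

    # Minimal sketch: all scenarios and probabilities are hypothetical,
    # invented to illustrate the argument, not taken from SAGE's advice.
    scenarios = {
        "controlled cooldown":   0.70,  # hypothetical 'most likely' outcome
        "reasonable worst case": 0.25,  # hypothetical: local release, 20 km zone holds
        "worst case":            0.05,  # hypothetical: large release, zone extended
    }

    # The relative likelihoods are normalised over the enumerated set...
    assert abs(sum(scenarios.values()) - 1.0) < 1e-9

    # ...so anything outside that set, say spent fuel pond fires forcing
    # sheltering in place in Tokyo, is implicitly assigned zero probability.
    p_surprise = 1.0 - sum(scenarios.values())
    print(f"probability budget left for unknown unknowns: {p_surprise:.2f}")  # 0.00

That residual zero is precisely where the ontological uncertainty lives; no amount of care in weighting the listed scenarios can recover it.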

Uncertainty and human decisions 

Thirdly, there is the use of likelihood estimates in situations where what happens is dictated as much by human decision-making, fallibility and judgement as by the physics or the technology.

If, for example, the local plant manager had not decided to disobey TEPCO upper management and continue salt-water cooling, then the outcome would have been very different. Likewise, if TEPCO had withdrawn all its staff, as it intended, the results would have been different again (very, very different). Had the decision to apply salt-water cooling been made early rather than late, the consequences would again have differed; likewise for the application of cooling water to the No. 4 cooling pond, and so on. All of these are inherently unpredictable and unknowable factors, and I see no easy way to use likelihood to describe such situations, even though there is a strong tendency for us to speak in such terms.

Constructing the narrative

Finally, and perhaps most significantly, there is the role that the SAGE committee meetings themselves played.

Were they really a forum in which uncertainties over the data and the facts as known were rigorously debated? Or were they rather a vehicle for establishing the false narrative that there was indeed sufficient evidence to proffer reasonable advice? While I don't question the objectives of these meetings, what I do question is the value of hypothesising and scenario-building in a situation where you know that you don't have all the data, and that what you do have lags behind events. Narrative-building is very easy to fall into, but a narrative is not reality, as subsequent events proved.

Conclusions

To conclude, what was implied was a level of precision that the inherent fluidity and uncertainty of the situation did not warrant. The effect was to frame the assessment in terms of a fixed scenario in which, to paraphrase, 'if you're outside the 20 km exclusion zone you're fine', and to impute to that scenario an unjustified degree of certainty.

All that said, we should also compare this advice, and how it was given, with the risk communication during the Three Mile Island crisis, which managed to precipitate a panic evacuation, or with that provided by the French government during the Chernobyl accident, when fallout readings were actually suppressed. I'd also note that after the Fukushima crisis the UK Government commissioned a review by the Government Office for Science into high-impact/low-probability risks.

To quote Plato, "We need the wisdom to reflect on what we do not know".
