Archives For Philosophy

The philosophical aspects of safety and risk.

M1 Risk_Spectrum_redux

A short article on (you guessed it) risk, uncertainty and unpleasant surprises for the 25th Anniversary issue of the UK SCS Club's Newsletter, in which I introduce a unified theory of risk management that brings together aleatory, epistemic and ontological risk management and formalises the Rumsfeld four-quadrant risk model, which I've used for a while as a teaching aid.
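For those who haven't met the model before, here's a minimal sketch of how I map the four quadrants onto aleatory, epistemic and ontological risk. The function and labels below are my illustrative shorthand, not an extract from the article itself.

```python
# Illustrative sketch of the Rumsfeld four-quadrant risk model (labels are shorthand only).
def quadrant(aware_of_it: bool, understand_it: bool) -> str:
    """Classify a risk by whether we are aware of it and whether we understand it."""
    if aware_of_it and understand_it:
        return "known known: aleatory risk (quantifiable variability)"
    if aware_of_it and not understand_it:
        return "known unknown: epistemic risk (acknowledged gaps in our knowledge)"
    if not aware_of_it and understand_it:
        return "unknown known: knowledge we hold but haven't surfaced or shared"
    return "unknown unknown: ontological risk (outside our current model of the world)"

for aware in (True, False):
    for understood in (True, False):
        print(quadrant(aware, understood))
```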

My thanks once again to Felix Redmill for the opportunity to contribute. 🙂

But the virtues we get by first exercising them, as also happens in the case of the arts as well. For the things we have to learn before we can do them, we learn by doing them, e.g., men become builders by building and lyre-players by playing the lyre; so too we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.

Aristotle

Meltwater river, Greenland icecap (Image source: Ian Joughin)

Memes, media and drug dealers

In honour of our Prime Minister's use of the drug dealer's argument to justify (at least to himself) why it's OK for Australia to continue to sell coal, when we know we really have to stop, here's an update of a piece I wrote on the role of the media in propagating denialist memes. Enjoy; there's even a public health tip at the end.

PS. You can find Parts I and II of the series here.

🙂

Technical debt

05/09/2015

St Briavels Castle Debtors Prison (Image source: Public domain)

Paying down the debt

A great term that I've just come across: technical debt is a metaphor coined by Ward Cunningham to reflect how a decision to act expediently for an immediate reason may have longer-term consequences. This is a classic problem during design and development, where we have to balance various 'quality' factors against cost and schedule. The point of the metaphor is that this debt doesn't go away; the interest on that sloppy or expedient design solution keeps on getting paid every time you make a change and find that it's harder than it should be. Turning around and 'fixing' the design in effect pays back the principal that you originally incurred. Failing to pay off the principal? Well, such tales can end darkly. Continue Reading…
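As a back-of-the-envelope illustration of the metaphor, here's a sketch of how the interest on an expedient design compounds across successive changes, and what paying back the principal buys you. The numbers and the refactoring cost below are entirely notional, not measurements of any real project.

```python
# A notional model of the technical debt metaphor: each change made on top of an
# expedient design pays 'interest' in extra effort until the 'principal' is repaid
# by refactoring. All figures are illustrative.
def total_effort(changes, base_cost=1.0, interest=0.5, refactor_at=None):
    """Cumulative effort over a series of changes, optionally refactoring at change N."""
    effort, rate = 0.0, interest
    for n in range(1, changes + 1):
        if refactor_at is not None and n == refactor_at:
            effort += 5 * base_cost   # pay back the principal: a one-off refactoring cost
            rate = 0.0                # subsequent changes no longer attract interest
        effort += base_cost * (1 + rate)
    return effort

print(total_effort(20))                 # keep paying the interest: 30.0
print(total_effort(20, refactor_at=5))  # pay down the debt early:  27.0
```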

Black swans (Image)

Or how do we measure the unknown?

The problem is that as our understanding and control of known risks increases, the remaining risk in any system becomes increasingly dominated by the 'unknown'. The higher the integrity of our systems, the more uncertainty we have over the unknown and unknowable residual risk. What we need is a way to measure, express and reason about such deep uncertainty, and I don't mean tools like Pascalian calculus or Bayesian prior belief structures, but a way to measure and judge ontological uncertainty.
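To see why the usual machinery falls short, consider this toy Bayesian update (hypothesis names and numbers are invented for illustration): a hypothesis that isn't in the prior at all gets no posterior weight, no matter what the evidence says.

```python
# Toy illustration: a Bayesian update can only redistribute belief across hypotheses
# we already knew to enumerate; anything outside the prior's support stays invisible.
def posterior(prior, likelihood):
    """Single Bayesian update over a discrete hypothesis space."""
    unnorm = {h: prior[h] * likelihood.get(h, 0.0) for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"benign component fault": 0.7, "common-mode failure": 0.3}
likelihood = {"benign component fault": 0.1,
              "common-mode failure": 0.4,
              "novel systemic hazard": 0.9}   # the true cause, absent from the prior

print(posterior(prior, likelihood))
# ≈ {'benign component fault': 0.37, 'common-mode failure': 0.63}
# The 'novel systemic hazard' never appears: ontological uncertainty lies outside the model.
```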

Even if we can't measure ontological uncertainty directly, perhaps there are indirect measures? Perhaps there's a way to infer something from the platonic shadow that such uncertainty casts on the wall, so to speak. Nassim Taleb would say no; the unknowability of such events is the central thesis of his Ludic Fallacy, after all. But I still think it's worthwhile exploring, because while he might be right, he may also be wrong.

*With apologies to Nassim Taleb.


Just because you can, doesn’t mean you ought

An interesting article by Kaag and Kreps on the moral hazard posed by the use of drone strikes, and how the debate over their use confuses fact with value. To say that drone strikes are effective and near consequence-free, at least for the perpetrator, does not equate to the conclusion that they are ethical and that we should carry them out. Nor does the capability to attack safely with focused lethality mean that we will in fact make better ethical decisions. The moral hazard that Kaag and Kreps assert is that ease of use can all too easily end up becoming the justification for use. My further prediction is that with the increasing automation and psychological distancing of the kill chain this tendency will inevitably increase. Herman Kahn is probably smiling now, wherever he is.

Continue Reading…

 

On Artificial Intelligence as ethical prosthesis

Out here in the grim meat-hook present of Reaper missions and Predator drone strikes we're already well down the track to a future in which decisions as to who lives and who dies are made less and less by human beings, and more and more by automation. Although there's been a lot of 'sexy' discussion recently of the possibility of purely AI decision making, the current panic misses the real issue du jour, that is, the question of how well current-day hybrid human-automation systems make such decisions, and the potential for the incremental abrogation of moral authority by the human part of this cybernetic system as the automation in the synthesis becomes progressively more sophisticated and suasive.

As Dijkstra pointed out in the context of programming, one of the problems or biases humans have in thinking about automation is that because it 'does stuff', we feel the need to imbue it with agency, and from there it's a short step to treating the automation as a partner in decision making. From this very human misunderstanding it's almost inevitable that a decision maker holding such a view will feel that responsibility for decisions is shared, and thereby diluted, opening up the potential for choice shift in decision making. As the degree of sophistication of such automation increases, this effect of course becomes stronger and stronger, even though 'on paper' we would not recognise the AI as a rational being in the Kantian sense.

Even the design of decision support system interfaces can pose tricky problems when an ethical component is present, as the dimensions of ethical problem solving (time intensiveness, consideration, uncertainty, uniqueness and reflection) directly conflict with those that make for efficient automation (brevity, formulaic response, simplification, certainty and repetition), an inherent conflict that ensures the interaction of automation and human ethical decision making becomes a tangled and conflicted mess. Technologists of course look at the way in which human beings make such decisions in the real world and believe, rightly or wrongly, that automation can do better. What we should remember is that such automation is still a proxy for its designer; if the designer has no real understanding of the needs of the user in forming such ethical decisions then, if the past is any guide, we are in for a future of poorly conceived decision support systems, with all the inevitable and unfortunate consequences that attend. In fact I feel confident in predicting that the designers of such systems will, once again, automate their biases about how humans and automation should interact, with unpleasant surprises for all.

In a broader sense, what we're doing with this current debate is essentially rehashing the old argument between two world views on the proper role of automation. On one side, automation is intended to supplant those messy, unreliable humans, in the current context effecting an unintentional ethical prosthetic. On the other, automation can and should be used to assist and augment human capabilities, that is, to support and develop people's innate ethical sense. Unfortunately in this current debate it also looks like the prosthesis school of thought is winning out. My view is that if we continue with this approach of 'automating out' moral decision making, we will inevitably end up with the amputation of ethical sense in the decision maker, long before killer robots stalk the battlefield, or the high street of your home town.