Archives For Ethics

Meltwater river, Greenland icecap (Image source: Ian Joughin)

Memes, media and drug dealers

In honour of our Prime Minister’s use of the drug dealer’s argument to justify (at least to himself) why it’s OK for Australia to continue to sell coal when we know we really have to stop, here’s an update of a piece I wrote on the role of the media in propagating denialist memes. Enjoy, there’s even a public health tip at the end.

PS. You can find Parts I and II of the series here.

🙂


Just because you can, doesn’t mean you ought

An interesting article by John Kaag and Sarah Kreps on the moral hazard that the use of drone strikes poses, and on how the debate over their use confuses facts with values. To say that drone strikes are effective and near consequence-free, at least for the perpetrator, does not equate to the conclusion that they are ethical and that we should carry them out. Nor does the capability to attack safely with focused lethality mean that we will in fact make better ethical decisions. The moral hazard that Kaag and Kreps assert is that ease of use can all too easily end up becoming the justification for use. My further prediction is that with the increasing automation and psychological distancing of the kill chain this tendency will inevitably increase. Herman Kahn is probably smiling now, wherever he is.

Continue Reading…

 

On Artificial Intelligence as ethical prosthesis

Out here in the grim meat-hook present of Reaper missions and Predator drone strikes we’re already well down the track to a future in which decisions as to who lives and who dies are made less and less by human beings, and more and more by automation. Although there’s been a lot of ‘sexy’ discussion recently of the possibility of purely AI decision making, the current panic misses the real issue du jour: how well current day hybrid human-automation systems make such decisions, and the potential for the incremental abrogation of moral authority by the human part of this cybernetic system as the automation in the synthesis becomes progressively more sophisticated and suasive.

As Dijkstra pointed out in the context of programming, one of the biases humans have in thinking about automation is that because it ‘does stuff’ we feel the need to imbue it with agency, and from there it’s a short step to treating the automation as a partner in decision making. From this very human misunderstanding it’s almost inevitable that a decision maker holding such a view will feel that responsibility for decisions is shared, and thereby diluted, opening up the potential for choice shift in decision making. As the sophistication of such automation increases this effect of course becomes stronger and stronger, even though ‘on paper’ we would not recognise the AI as a rational being in the Kantian sense.

Even the design of decision support system interfaces can pose tricky problems when an ethical component is present, as the dimensions of ethical problem solving (time intensiveness, consideration, uncertainty, uniqueness and reflection) directly conflict with those that make for efficient automation (brevity, formulaic response, simplification, certainty and repetition). This inherent conflict ensures that the interaction of automation and human ethical decision making becomes a tangled and conflicted mess. Technologists of course look at the way in which human beings make such decisions in the real world and believe, rightly or wrongly, that automation can do better. What we should remember is that such automation is still a proxy for its designer; if the designer has no real understanding of the needs of the user in forming such ethical decisions then, if the past is any guide, we are in for a future of poorly conceived decision support systems, with all the inevitable and unfortunate consequences that attend. In fact I feel confident in predicting that the designers of such systems will, once again, automate their biases about how humans and automation should interact, with unpleasant surprises for all.

In a broader sense, what we’re doing with this current debate is essentially rehashing the old argument between two world views on the proper role of automation. On one side, automation is intended to supplant those messy, unreliable humans, in the current context effecting an unintentional ethical prosthetic. On the other, automation can and should be used to assist and augment human capabilities, that is, to support and develop people’s innate ethical sense. Unfortunately, in the current debate it looks like the prosthesis school of thought is winning out. My view is that if we continue down this path of ‘automating out’ moral decision making we will inevitably end up with the amputation of ethical sense in the decision maker, long before killer robots stalk the battlefield, or the high street of your home town.

Waaay back in 2002 Chris Holloway wrote a paper that used a fictional civil court case involving the hazardous failure of software to show that much of the expertise and received wisdom of software engineering would, by the standards of the US federal judiciary, be judged junk science, or at best mere opinion.

Rereading the transcripts of Philip Koopman’s and Michael Barr’s testimony in the 2013 Toyota spaghetti monster case, I am struck both by how little things have changed and by how far the actual state of the industry can be from the state of the practice, let alone the state of the art. Life recapitulates art I guess, though not in a good way.

The enigmatic face of HAL

When Formal Systems Kill is an interesting paper by Lee Pike and Darren Abramson that looks, from an ethical perspective, at computers as automatic formal systems. Of course, as we all know, the 9000 series has a perfect operational record…

The igloo of uncertainty (Image source: UNEP 2010)

Ethics, uncertainty and decision making

The name of the model made me smile, but this article, The Ethics of Uncertainty by Tannert, Elvers and Jandrig, argues that where uncertainty exists, research should be considered as part of an ethical approach to managing risk.

Continue Reading…

Taboo transactions and the safety dilemma

Again my thanks go to Ross Anderson over on the Light Blue Touchpaper blog for the reference, this time to a paper by Alan Fiske, an anthropologist, and Philip Tetlock, a social psychologist, on what they term taboo transactions. They point out that there are domains of sharing in society, each of which works on different rules: communal versus reciprocal obligations, for example, or authority versus market. And within each domain we socially ‘transact’ trade-offs between equivalent social goods.

Continue Reading…