Archives For automation

Joshua Brown screen grab

Keep your eyes on the road, and your hands upon the wheel…

The first fatality involving the use of Tesla's autopilot* occurred last May. The Guardian reported that the autopilot sensors on the Model S failed to distinguish a white tractor-trailer crossing the highway against a bright sky, and the car promptly tried to drive under the trailer, with decapitating results. What's emerged is that the driver had a history of driving at speed and of using the automation beyond the maker's intent, e.g. operating the vehicle hands-off rather than hands-on, as the screen grab above indicates. Indeed, recent reports indicate that immediately prior to the accident he was travelling fast (maybe too fast) whilst watching a Harry Potter DVD. There also appears to be a community of like-minded spirits out there intent on seeing how far they can push the automation… sigh. Continue Reading…

The problem with people

The HAL effect, named after the eponymous anti-hero of Stanley Kubrick and Arthur C. Clarke's film 2001, is the tendency for designers to implicitly embed their cultural biases into automation. While such biases are undoubtedly a very uncertain guide, it might also be worthwhile to look at the 2001 Odyssey mission from HAL's perspective for a moment. Here we have the classic long-duration space mission with a minimal two-man complement for the cruise phase. The crew and the ship are on their own; in fact they're about as isolated as it's possible for human beings to be, and help is a very, very long way away. Now from HAL's perspective humans are messy, fallible creatures prone to making basic errors in even the most routine of tasks. Not to mention that they annoyingly use emotion to inform even the most basic of decisions. Then there's the added complication that they're social creatures, apt even in the most well-matched of groups to act in ways that a dispassionate external observer could only consider confusing and often dysfunctional. Finally they break, sometimes in ways that can actively endanger others and the mission itself.

So from a mission assurance perspective would it really be appropriate to rely on a two-man crew in the vastness of space? The answer is clearly no: even the most well-adjusted of cosmonauts can exhibit psychological problems when isolated in deep space. While a two-man crew may be attractive from a cost perspective, it's still vulnerable to a single point of human failure. Or, to put it more brutally, murder and suicide are much more likely to succeed in small crews. Such scenarios, however dark they may be, need to be guarded against if we intend to use a small crew. But how to do it? If we add more crew to the cruise phase complement then we also add all the logistics tail that goes along with it, and our mission may become unviable. Even if cost were not a consideration, small groups isolated for long periods are prone to yet other forms of psychological dysfunction (1). Humans, it seems, exhibit a set of common mode failures that are difficult to deal with, so what to do?

Well, one way to guard against common mode failures is to implement diverse redundancy in the form of a cognitive agent whose intelligence is based on vastly different principles to human affect-driven processing. Of course to be effective we're talking a high-end AI, with a sufficient grasp of the theory of mind and the subtleties of human psychology and group dynamics to be able to make usefully accurate predictions of what the crew will do next (2). With that insight goes the requirement for autonomy in vetoing illogical and patently hazardous crew actions, e.g. "I'm sorry Dave, but I'm afraid I can't let you override the safety interlocks on the reactor fuel feed…". From that perspective we might have some sympathy for HAL's reaction to his crewmates plotting his cybernetic demise.
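To see why diversity, rather than mere duplication, is the answer, consider the classic beta-factor model of common mode failure. The sketch below uses purely illustrative numbers (the failure probabilities and beta values are assumptions, not data): once a fraction of failures stems from a shared cause, doubling up identical units buys almost nothing, whereas a diverse redundant channel attacks that shared fraction itself.

```python
# Illustrative beta-factor model of redundancy versus common mode failure.
# All numbers here are assumptions for the sake of the example.

p_unit = 1e-3   # probability that a single (human) unit fails over the mission
beta   = 0.1    # fraction of unit failures arising from a shared, common cause

# Two identical redundant units: the independent part must fail twice,
# but the common mode part defeats both units at once.
p_identical = (p_unit * (1 - beta))**2 + p_unit * beta

# Diverse redundancy (say, a machine agent alongside the crew) attacks the
# common mode term; assume diversity cuts beta by two orders of magnitude.
beta_div  = beta / 100
p_diverse = (p_unit * (1 - beta_div))**2 + p_unit * beta_div

print(f"identical pair: {p_identical:.1e}")  # ~1.0e-04, dominated by common mode
print(f"diverse pair:   {p_diverse:.1e}")    # ~2.0e-06, common mode suppressed
```

The point of the sketch is that the identical pair's failure probability stays pinned at roughly the unit probability times beta no matter how many copies you add, which is exactly the 'what to do?' problem of the previous paragraph.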

Which may all seem a little far-fetched; after all, an AI of that sophistication is another twenty to thirty years away, and long-duration deep space missions are probably that far away as well. On the other hand there's currently a quiet conversation going on in the aviation industry about the next step for automation in the cockpit, i.e. one pilot in the cockpit of large airliners. After all, so the argument goes, pilots are expensive beasts, and with the degree of automated support available today surely we don't need two of them in the cockpit? Well, if we're thinking purely about economics then sure, one could make that argument, but on the other hand as the awful reality of the Germanwings tragedy sinks in we also need to understand that people are simply not perfect, and that sometimes (very rarely (3)) they can fail catastrophically. Given that we know that reducing crew levels to two increases the risk of successful suicide by airliner, one could ask what happens to the risk if we go to single pilot operations? I think we all know what the answer would be.

Where is a competent AI (HAL 9000) when you need one? 🙂

Notes

1. From polar exploration we know that small exploratory teams of three persons are socially unstable and should be avoided, which then drives the minimum team size to four.

2. As an aside, the inability of HAL to understand the basics of human motivation always struck me as a false note in Kubrick's 2001 movie. An AI as smart as HAL apparently was, yet lacking even an undergraduate understanding of human psychology? Maybe not.

3. Remember that we are in the tail of the aviation safety program, where the hazards we are trying to mitigate are very, very rare. However, precisely because they aren't mitigated, such hazards come to dominate the residual risk statistic.
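To put illustrative numbers on that claim (assumptions, not fleet statistics): suppose ten mitigated hazards are each driven down to 10⁻⁹ per flight hour, while one rare hazard remains unmitigated at 10⁻⁷. The residual risk is then

10 × 10⁻⁹ + 10⁻⁷ = 1.1 × 10⁻⁷ per flight hour,

of which the single unmitigated hazard contributes roughly 91%, even though every individual likelihood involved is vanishingly small.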

Another A320 crash

25/03/2015

Germanwings crash (Image source: AFP)

The Germanwings A320 crash

At this stage there's not much more that can be said about the particulars of this tragedy, which has claimed 150 lives in a mountainous corner of France. Disturbingly, we again have an A320 aircraft descending rapidly and apparently out of control, without the crew having any time to issue a distress call. Yet more disturbing is the thought that the crash might be due to the crew failing to carry out the workaround for two blocked AoA probes promulgated in this Emergency Airworthiness Directive (EAD) issued in December of last year. And, as the final and rather unpleasant icing on this particular cake, there is the follow-up question as to whether the problem covered by the directive might also have been a causal factor in the AirAsia flight 8501 crash. That, if it were the case, would be very, very nasty indeed.

Unfortunately, at this stage no one knows the answers to any of the above questions, especially as the Indonesian investigators have declined to issue any further information on the causes of the AirAsia crash. However, what we can be sure of is that, given the highly dependable nature of aircraft systems, the answer when it comes will comprise an apparently unlikely combination of events, actions and circumstances, because that is the nature of accidents that occur in high dependability systems. One thing that's also for sure: there'll be little sleep in Toulouse until the FDRs are recovered, and maybe not much after that…

Postscript

If, having read the EAD, you're left wondering why it directed that two ADRs be turned off, it's simply that by doing so you push the aircraft out of what's called Normal law, where the (erroneous) alpha protection is trying to drive the nose down, into Alternate law, where that protection is removed. Of course, in order to do so the crew needs to be able to recognise, diagnose and apply the correct action, which also generally requires training.
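A minimal sketch of the underlying reversion logic may help (this is an illustration of the principle only, not the actual Airbus implementation): with three ADRs the flight control computers vote, so two probes frozen at the same erroneous angle of attack can outvote the one healthy probe, while switching those two ADRs off removes the quorum that Normal law requires and forces the reversion to Alternate law.

```python
# Hypothetical sketch of triplex ADR voting and flight control law reversion.
# This is NOT the actual Airbus logic, just an illustration of the principle.

def select_law(adr_valid, adr_aoa):
    """adr_valid: list of 3 bools (ADR switched on and self-tested OK).
       adr_aoa:   list of 3 AoA readings (degrees)."""
    active = [aoa for ok, aoa in zip(adr_valid, adr_aoa) if ok]
    if len(active) >= 2:
        # With a quorum, majority voting decides; two probes frozen at the
        # same value outvote the single healthy one.
        voted_aoa = sorted(active)[len(active) // 2]  # median
        return "NORMAL", voted_aoa                    # alpha protection active
    # Fewer than two valid ADRs: no quorum, revert to Alternate law, and
    # alpha protection (hence the erroneous nose-down order) is lost.
    return "ALTERNATE", active[0] if active else None

# Two probes frozen at 4.5 deg, one healthy probe reading 1.0 deg:
print(select_law([True, True, True], [4.5, 4.5, 1.0]))    # ('NORMAL', 4.5) - bad data wins the vote
# Crew turns the two affected ADRs off, as the EAD directs:
print(select_law([False, False, True], [4.5, 4.5, 1.0]))  # ('ALTERNATE', 1.0)
```

The second call shows the point of the EAD's workaround: it is the loss of the voting quorum, not any repair of the data, that disarms the erroneous protection.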

Recent work in complexity and robustness theory for engineered systems has highlighted that the architectures with which these systems are designed inherently lead to 'robust yet fragile' behaviour. This vulnerability has strong implications for the human operator when he or she is expected to intervene in response to the failure of the system.

Continue Reading...

The effective use by humans of any transport system is a critical success factor in the development of such systems. Careful consideration of the interaction of ergonomic and functional design with the physical and cognitive capabilities and limitations of crew, passengers and maintainers is essential to assure safe, effective and profitable rail operations.

Continue Reading...

So far as we know, flight AF 447 fell out of the sky with its systems performing as their designers had specified, if not as they expected, right up to the point that it impacted the surface of the ocean.

So how is it possible that incorrect air data could simultaneously cause upsets in aircraft functions as disparate as engine thrust management, flight law protection and traffic avoidance?
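One way to see how that can happen (a sketch of the general common cause pattern, not the actual A330 architecture; the function names and thresholds below are invented for illustration): disparate functions all consume the same voted air data, so when the sensors fail together the bad value propagates everywhere at once.

```python
# Hypothetical illustration of a common cause dependency: several otherwise
# unrelated aircraft functions all consume the same voted air data.
# Function names and thresholds are invented, not the actual A330 design.

def voted_airspeed(sensor_readings):
    s = sorted(sensor_readings)
    return s[len(s) // 2]  # simple median vote across the pitot sources

def thrust_management(cas):  return "A/THR DISCONNECT" if cas < 60 else "A/THR OK"
def flight_law(cas):         return "ALTERNATE LAW"    if cas < 60 else "NORMAL LAW"
def traffic_avoidance(cas):  return "TCAS DEGRADED"    if cas < 60 else "TCAS OK"

# All three pitot probes ice up at nearly the same moment (common cause),
# so the vote dutifully returns an erroneously low airspeed:
cas = voted_airspeed([35.0, 30.0, 38.0])  # knots, all erroneously low
print(thrust_management(cas), "|", flight_law(cas), "|", traffic_avoidance(cas))
# Three disparate functions upset simultaneously, by one shared input.
```

Voting protects against a single bad sensor; it offers no protection at all when every sensor shares the same failure mode.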

Continue Reading...

Ariane 501 Launch

I was cleaning up some of my reference material and came across a copy of the ESA board of investigation report into the Ariane 501 accident. I’ve added my own personal observations, as well as those of other commentators, to the report. Continue Reading…

Recent incidents involving Airbus aircraft have again focused attention on their approach to cockpit automation and its interaction with the crew.

Underlying the current debate is perhaps a general view that automation should somehow be 'perfect', and that failure of automation is also a form of moral failing (1). While this weltanschauung undoubtedly serves certain social and psychological needs, the debate it engenders doesn't really further productive discussion of what could, or indeed should, be done to improve cockpit automation. So let's take a closer look at the protection laws implemented in the Airbus flight control automation and compare them with how experienced aircrew actually make decisions in the cockpit.

Continue Reading…

The HAL effect

09/09/2009

Do we automate our cultural biases, and can this have an effect upon the safe coordination of crew and automation?

Continue Reading...