How much airline safety is luck?


If you look at the statistics for fatal airline accidents in 2017, the year looked faultless.

There were no fatal accidents – at least not among the mainline carriers operating passenger jets.

But if you look at the number of near-disasters, and especially if you hear the accounts of what happened on board and imagine the trauma the survivors underwent, you might wonder what made the difference between the mishaps they survived and fatal crashes in recent years that had almost identical precursors.

The answer is luck. That is not a scientific answer, but it is the only word in the English language that describes the difference. A study that Flight International/FlightGlobal will shortly publish (Flight International, issue 23-29 January) contains an analysis of how luck works in today’s air travel.

Detailing numerous recent near-disastrous mishaps, the report observes: “Sometimes these mishaps start with a technical problem, but more often they are the result of inadequate crew knowledge, poor procedural discipline or simple human carelessness.”

Many of them ended as that most common of all airline accident types – the runway excursion or overrun on landing – and the result is usually serious and very expensive damage.

Pegasus Airlines at Trabzon, Turkey, 13 January (Twitter World News)

The spectrum of industry discussion about how to deal with this “luck” factor includes – at one end of the scale – automating pilots and their fallibilities out of the picture, and at the other end imbuing today’s crews with a quality referred to as “resilience”. The latter is the ability to face a surprising or unforeseen combination of circumstances with cool logic based on knowledge, situational awareness and skill. That’s what most passengers assume all pilots have.

Airline pilots today are firmly discouraged by their employers from disconnecting the autopilot and autothrottle during revenue flights. There are good reasons for this, the most obvious being that the automation – properly programmed – flies the aircraft more accurately than most pilots can. The argument against it is that if the automation is wrongly programmed, or used unintentionally in the wrong mode, or suffers a rare failure, the pilot reaction to the unintended consequences frequently demonstrates a lack of “resilience”, setting off a chain of events that can lead to an accident.

The question is, if pilots were permitted to fly their aircraft manually more often during revenue passenger flights, would their manual flying and associated cognitive skills be better primed for the unexpected, making a resilient response more likely when things don’t go according to plan? Pilot organisations like IFALPA believe they would.

To many airlines, that idea is heresy. Letting pilots “practise” flying with passengers on board is just not acceptable, they argue. “Practising” (what pilots call flying) should take place only in a simulator or an empty aeroplane, they maintain.

The main problem with simulators is that, although they are getting better all the time, they are – psychologically – no preparation for the real environment. The sense of risk, or fear, and the stress it generates can never be replicated in a simulator.

The reason aeroplanes have not been automated even further is that most flights do not happen exactly as planned, so the pilots frequently have to intervene to make decisions and adjust the trajectory, even if they use the automation to do so.

This is a discussion that will – and should – continue, and the existing polarisation of views also seems likely to persist.

What is really needed is a cost-benefit and risk examination of whether the regular employment of manual and traditional pilot cognitive skills in flight has net advantages or disadvantages for airlines, but such research has never been carried out.

The ideal institution to do it would be the Institut Supérieur de l’Aéronautique et de l’Espace (ISAE) in Toulouse, which has expertise in measuring neuro-ergonomics in working pilots. ISAE has successfully carried out studies of the effect of stress on pilot cognitive and manual skills, and has tested ways of re-orienting pilots when they lose situational awareness.


10 thoughts on “How much airline safety is luck?”

  1. According to a B-744 friend: the Pegasus F/O was PF and the weather (Wx) was at minimums. They were expecting to see the runway at minimums, and at minimums they saw it; the F/O disengaged the autopilot but at the same time pressed the TOGA buttons. The captain took over, lowered the nose and retarded both thrust levers to idle, and they landed at idle thrust. The aircraft had been dispatched with one reverser inop, so the captain deployed the thrust reverser of the left engine and released the right one (always best to pull both reversers, op or inop, to mitigate the kind of T/R mishap that caused the TAM A320 disaster on São Paulo’s 35L on 17 July 2007). Since he hadn’t disconnected the autothrottle, the right engine went to TOGA thrust; the aircraft began to accelerate and skidded off the left side of the runway, with the right engine separating. All passengers evacuated the aircraft from the rear door. There was no smoke in the cabin and no injuries. Unlucky, but lucky to avoid the drink of a Black Sea immersion.


  2. It seems the statement “no fatal accidents” is incorrect, as the ATR that crashed at Fond-du-Lac (Canada) killed one person – not directly in the crash, but a few days later, the passenger having been severely injured.


    • You are quite correct about the death caused by that accident, but look at the qualifying statement: “There were no fatal accidents – at least not among the mainline carriers operating passenger jets”. That aircraft was a turboprop. You could accuse me of splitting hairs, but the overall message is that fatalities in all types of operation are at a record low while the number of near-disasters is still high, so we shouldn’t forget that risks still lurk.


  3. Not all airlines are opposed to manual flying during revenue flights. My employer (a US major A320/321 and E-190 operator) has begun actively encouraging manual flight when conditions are appropriate, and our sim sessions now routinely check us on flying with all the automation turned off. My sense is that, at least in the US, the FAA is pushing airlines to ensure their pilots are not becoming automation-dependent.


    • Thanks for pointing that out. But America is unusual in this respect: a lot more flying in US airspace is under VFR, whereas in Europe IFR applies at all times in all controlled airspace. That doesn’t stop pilots flying manually, but the psychology is completely different. And Europe has a much denser population per unit area, with less distance between major hubs, so the airlines have a completely different attitude to how operations should be conducted. In my opinion, the USA has the balance right, but that’s just me.


      • I happen to agree with you that in the US we are striking a good balance.

        It hasn’t always been that way; the change is the result of lessons learned and of the FAA pushing manual flying skills. It wasn’t that long ago that some US majors forbade turning the autothrust off (for example) except in an emergency.


  4. The problem with automating fallibility out of the system is that designers are faced with the probabilistically indeterminate task of anticipating every single possible outcome – an utterly unachievable goal. Therefore, tombstone automation will continue to be the norm. Situational awareness is severely hampered by strong and silent automation. High levels of autonomy and authority inevitably make the automation a very poor team player – simply revisit AF447. “Luck” is premised upon complexity – that is, the interactions between the pilots and the automation. However, when the automation changes the state of an aircraft without giving the pilots the requisite information, the complexity of the system changes without the pilots being aware of what has transpired (again, see AF447). This is where “luck” runs out – the inevitable black swan event; that is, unanticipated complexity leads to a low-probability, high-consequence event. The “solution”? Simply increase the levels of automation – creating further opportunities for black swan events! The other solution often mooted is to increase pilot training. However, this effectively means that we continue to shape pilots around pre-existing and perhaps misguided beliefs about automation. Perhaps it is high time to look at what we SHOULD automate rather than what we CAN automate…


  5. AF447 isn’t an example of the pitfalls of automation; it’s an example of a pilot who was either very poorly trained or had very poor basic skills, or both.

    Why do I say that? Because, remember, just prior to the incident the crew had been discussing that the aircraft was very near its maximum altitude for its weight and the temperature, so they should absolutely have been aware that trying to climb was a bad idea. Secondly, one aspect of alternate law in the Airbus that gets repeated over and over again is that YES, you can stall the aircraft in alternate law.

    Given those facts, the decision of the pilot in the right seat, when the autopilot kicked off, to pull and hold full aft stick can only be seen as poor in the extreme – not to mention that the crew never even attempted to apply the relevant ECAM and QRH procedures, as they were most certainly trained to do.

    There are most certainly pitfalls in the way we automate airliners today, but AF447 is not an example of an automation problem. It is an example of the utter lack of the most basic flying skills that such automation can create and enable. That Europe (unlike the US) has not moved aggressively to ensure its airline pilots possess and retain the required basic skills is, especially in the wake of AF447, an absolute disgrace.


    • A Response to 121Pilot:

      The arguments presented are a classic example of hindsight bias, which may be characterised by the following three mechanisms (cf. Dekker, 2002):

      1. Cherry-picking reductionism that linearises the so-called chain of causes and effects – this results in oversimplified causality
      2. Finding what people could have done to avoid the accident – it is all too easy to use counterfactual assertions to construct a linear, untangled history
      3. Judging people for what they should have done – the “should-have-done” judgement that makes it all seem so obvious.

      To say that “something should have been obvious, when it manifestly was not, may reveal more about our ignorance of the demands and activities of this complex world than it does of the performance of its practitioners” (Woods et al., 1994).

      The arguments further asserted that the pilots should have followed the relevant ECAM & QRH procedures. However, the “vol avec IAS douteuse” (VAD – unreliable airspeed) procedures are non-ECAM, and the QRH unwittingly suggested a low-level procedure. It is interesting to note that Air France extended the VAD procedures to include high-altitude training as a result of AF447. The problem becomes systemic and non-linear. The PNF was presented with a huge amount of data from the ECAM but in reality was given no information. Hence he became preoccupied with trying to make sense of what was going on – the ECAM had unwittingly become a situational-awareness black hole. Here the automation led to a gulf of evaluation, and hence a gulf of execution, for ALL three pilots.

      When the automation suffered a decompensation incident and simply handed control of the aircraft back to the pilots, at no point during the disconnection (02:10:05) did the ECAM indicate that it was having a problem with the ADRs. This is a perfect example of automation being a strong and silent team player. In this instance the automation would have benefited from situationally adaptive automation, where collaboration AND effective communication are required. The first inkling of inconsistent airspeed occurred at 02:12:44.

      One cannot ignore the design implications of using sidestick controllers. Two lines of communication have been removed (between the pilots, and between the pilots and the automation). The PNF was unable to determine what the PF was doing because there was an absence of linkages (and hence of tactile cues). Similarly, the priority switch seems to have been used to resolve competing goals and/or differing cognitive models. It may legitimately be argued that the PF was cognitively tunnelled into the view that the aircraft was in an overspeed situation – this may go some way towards explaining why he thought his inputs were appropriate. Had there been a stick-shaker, the problem might have been resolved sooner, as his mental model would have been challenged by haptic cues. Similarly, the flight directors (a highly salient instrument) may have prompted inappropriate sidestick commands: there is evidence to suggest that the FDs were commanding a vertical speed of +1,400ft/min and matched the PF’s inputs (although this may not be evidence of causation). Had the VAD procedures produced response automaticity among the crew for high altitude, the FDs would have been turned off as part of the procedures.

      A great deal of criticism has been levelled at the PF for pulling the stick aft. However, accident investigation benefits from using the principle of local rationality, which may be elaborated as follows:

      “Why do people do what they do? How could their assessments and actions have made sense to them at the time? The principle of local rationality is critical for studying human performance. No pilot sets out to fly into the ground on today’s mission… If they do intend this, we speak not of error or of failure but of suicide” (Woods et al., 2010).

      Similarly:

      “What is striking about many accidents in complex systems is that people were doing exactly the sort of things they would usually be doing – the things that usually lead to success and safety… People are doing what makes sense given the situational indications, operational pressures and organisational norms existing at the time. People’s errors and mistakes (such as there are, in any objective sense) are systematically coupled to their circumstances, tools and tasks… What people do makes sense to them at the time – it has to; otherwise, they would not do it” (Dekker, 2002).

      It is interesting to note that AF447 was not the first incident in which the pilots pulled the stick aft (e.g. the TAM flight of 12 November 2003). The assertion that the PF either had poor skills or was poorly trained is therefore undermined by the TAM case and by the principle of local rationality, and describing the PF’s actions as “poor in the extreme” does little to offer new learning or insights gleaned from this accident.

      AF447 (like all accidents) is systemic, and whilst I agree that training is AN issue, there are still many others – most notably the automation’s effectiveness as a team player. At the end of the day, an aircraft is a complex socio-technical system in which safety is an emergent property of the interactions of the system’s components (e.g. pilots AND automation).

      Whilst I recognise that I bring a researcher’s perspective, I would leave you with the following insight from a well-known pilot:

      “This Air France [AF447] accident is going to be a seminal accident that will be studied for years, and we need to ask ourselves as an industry tough questions about the way we’re designing airplanes, the way we’re displaying information to the pilots in the cockpit. And about whether or not making airplanes more complicated, more technologically advanced makes it more difficult for pilots to very quickly intervene and very effectively act when things go awry” (Sullenberger, CBS Evening News, 31 May 2011; emphasis added).


  6. I believe you are very perceptive to raise the question of sidesticks and the resultant loss of CRM-type information. These were evaluated for the Britannia in the early 1950s, and engineering as art (not science) won the day for the control column. I wonder if we are now at a place where that would no longer be possible. Economics as ideology is driving toward totally autonomous operations, its minimalist assumptions driving out a more comprehensive approach.

