After an apparently near-impeccable year for airline safety in 2017, the traditional accidents are returning. In the past week a Russian carrier has lost a twinjet and an Iranian airline a twin turboprop, both with fatalities.
It would be closer to the truth to say these accidents – or at least the risk they represent – never went away; a year is simply too short a period in which to measure the true safety performance of an industry.
The previous story in this blog sequence investigated the part that luck plays in airline safety, and it concluded that it still plays an unacceptably big part in an industry that confidently tells its passengers it has high standards.
The Russian loss involved a Saratov Airlines Antonov An-148 regional twinjet. It had taken off from Moscow Domodedovo airport in snow, bound eastward for Orsk, but after about 6min its climb became a rapid descent and it hit fields at high speed.
Early data from the investigation suggests the trigger for the fatal sequence of events was a disparity in airspeed indicator readings, probably caused by ice build-up in an external sensor whose heater had failed. The crew saw the disparity developing between the airspeed indicators and disconnected the autopilot, but then failed to fly the aircraft safely on instruments that included misleading airspeed data.
Less detail is known about the Iran Aseman Airlines ATR72, but it was on a flight from Tehran to Yasouj among Iran’s south-western mountains. The destination is cradled in a valley, and the aircraft hit mountains about 30km north of the city in its early descent. The mountainous terrain was under complete cloud cover and snow.
The Saratov case provides more evidence of pilots’ unpreparedness for “limited panel” instrument flying. Air France 447 was the most famous example of pilot inability to cope with instrument flying when the airspeed sensors were temporarily compromised by ice, but the final report revealed that there had been six other recorded occasions in the same aircraft type (A330) where pilots had coped successfully with unreliable airspeed readings.
Airline recurrent trainers need to go back to basics with instrument flying, because it is increasingly clear many pilots all over the world are losing this crucial skill.
Loss of reliable airspeed information is unsettling, and it usually causes the autopilot to trip out, so airlines should ensure their pilots are able to cope with this situation.
In stable flight, whether level, climbing or descending, airspeed is a function of engine power and pitch attitude. All pilots with time on a particular type should know approximately what power will produce the performance they want.
So if they become aware that airspeed indications are compromised, it makes sense to adopt straight and level flight (if at a safe altitude) while sorting out a Plan B. That way the attitude is stable, and if the pilot selects the power setting that will produce a safe airspeed, all is well. However unsettling it is to see an airspeed reading that is clearly wrong – and that would be dangerous if true – it is a plain fact that the correct pitch attitude for straight and level flight, combined with the correct power setting, will produce the correct airspeed.
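The pitch-plus-power principle can be sketched as a simple memorised lookup. The phases and figures below are invented purely for illustration – real values come from the aircraft's quick-reference handbook for the specific type – but the logic is the point: with airspeed unreliable, the crew flies a known attitude at a known power and the airspeed looks after itself.

```python
# Illustrative only: memorised pitch/power pairs for unreliable-airspeed
# flying. These numbers are invented, not taken from any real aircraft.
PITCH_POWER_TABLE = {
    # phase: (pitch attitude, deg nose-up; N1, %; resulting indicated airspeed, kt)
    "climb":   (10.0, 90.0, 250),
    "cruise":  (2.5, 80.0, 270),
    "descent": (-1.0, 55.0, 260),
}

def safe_settings(phase: str) -> tuple[float, float]:
    """Return the memorised (pitch, power) pair for a flight phase.

    With the airspeed indications compromised, flying this attitude at
    this power produces the airspeed listed in the table, regardless of
    what the failed sensor is showing.
    """
    pitch, n1, _expected_ias = PITCH_POWER_TABLE[phase]
    return pitch, n1

# The crew ignores the misleading airspeed and flies attitude plus power.
pitch, n1 = safe_settings("cruise")
```

This is essentially what modern unreliable-airspeed checklists ask crews to do; the skill that is being lost is the confidence to trust the attitude and power rather than the broken indicator.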
In the case of the Iran Aseman ATR72 the cause of the crash is not yet known, but there was no emergency call, and that range of mountains contains many aircraft wrecks. If it turns out to be a classic case of CFIT (controlled flight into terrain), the issue will be one of three-dimensional navigation on instruments. Yes, such approaches are demanding, but these procedures should be the lifeblood of crews working for Iranian domestic carriers, for whom approaches into airports surrounded by mountains are daily work.
These airlines – and others – have to ask themselves what is missing in their pilots’ skills, and why these skills are missing at all. Finally, they have to ask what they need to do to replace skills that have lapsed.
If the investigators’ final verdict is that pilot error was a factor in these accidents, the fault lies squarely with the carriers for failing to ensure, through their recurrent training regimes, that their pilots retain the living skills their passengers have a right to expect.
And again, if that verdict were to be delivered by the investigators, the airlines should worry about whether their existing crews could fail in the same way tomorrow.
If you look at the statistics for fatal airline accidents in 2017, the year looked faultless.
There were no fatal accidents – at least not among the mainline carriers operating passenger jets.
But if you look at the number of near-disasters, and especially if you hear the accounts of what happened on board and imagine the trauma the survivors underwent, you might wonder what made the difference between the mishaps they survived and fatal crashes in recent years that had almost identical precursors.
The answer is luck. Not a scientific answer, but it is the only word in the English language that describes that difference. A study Flight International/FlightGlobal will shortly publish (Flight International issue 23-29 January) contains an analysis of how luck works in today’s air travel.
Giving detail of numerous recent near-disastrous mishaps, the report observes: “Sometimes these mishaps start with a technical problem, but more often they are the result of inadequate crew knowledge, poor procedural discipline or simple human carelessness.”
Many of them ended up as that most common of all airline accident types – the runway excursion or overrun on landing – and the result is usually serious and very expensive damage.
Pegasus Airlines at Trabzon, Turkey, 13 January (Twitter World News)
The spectrum of industry discussion about how to deal with this “luck” factor includes – at one end of the scale – automating pilots and their fallibilities out of the picture, and at the other end imbuing today’s crews with a quality referred to as “resilience”. The latter is the ability to face a surprising or unforeseen combination of circumstances with cool logic based on knowledge, situational awareness and skill. That’s what most passengers assume all pilots have.
Airline pilots today are firmly discouraged by their employers from disconnecting the autopilot and autothrottle during revenue flights. There are good reasons for this, the most obvious being that the automation – properly programmed – flies the aircraft more accurately than most pilots can. The argument against it is that if the automation is wrongly programmed, or used unintentionally in the wrong mode, or suffers a rare failure, the pilot reaction to the unintended consequences frequently demonstrates a lack of “resilience”, setting off a chain of events that can lead to an accident.
The question is, if pilots were permitted to fly their aircraft manually more often during revenue passenger flights, would their manual flying and associated cognitive skills be better primed for the unexpected, making a resilient response more likely when things don’t go according to plan? Pilot organisations like IFALPA believe they would.
To many airlines, that idea is heresy. Letting pilots “practise” flying with passengers on board is just not acceptable, they argue. “Practising” (what pilots call flying) should only take place in a simulator or an empty aeroplane, they maintain.
The main problem with simulators is that, although they are getting better all the time, they are – psychologically – no preparation for the real environment. The sense of risk, or fear, and the stress it generates, can never be replicated in a simulator.
The reason aeroplanes have not been even more automated than they have been so far is that most flights don’t happen exactly as planned, so the pilots have frequently to intervene to make decisions and adjust the trajectory, even if they use the automation to do it.
This is a discussion that will – and should – continue, and the existing polarisation of views also seems likely to persist.
What is really needed is a cost-benefit and risk examination of whether the regular employment of manual and traditional pilot cognitive skills in flight has net advantages or disadvantages for airlines, but such research has never been carried out.
The ideal institution to do it would be the Institut Supérieur de l’Aéronautique et de l’Espace in Toulouse, which has expertise in measuring neuro-ergonomics in working pilots. ISAE has successfully carried out studies of the effect of stress on pilot cognitive and manual skills, and tested ways of re-orienting pilots when they lose situational awareness.
An unpredicted jet engine design flaw means that all commercial airliners in service today – except one – technically fail to meet the regulatory standards for cabin air quality, according to a new study carried out at Cranfield University, UK.
The Boeing 787 is the exception because – uniquely at present – it doesn’t use engine bleed air for cabin pressurisation and air conditioning. In other types, air for the cabin is bled directly from the compressor of the aircraft’s engines, which makes them vulnerable to an overlooked secondary effect of jet engine lubrication system design.
The design flaw relates to so-called labyrinth and mechanical oil seals that act to contain the lubricant supply to the engine-shaft bearings. Effective lubrication depends on a low level of oil flow through them. In terms of engine oil consumption this leakage is negligible, and it was assumed by engineers that high air pressure would prevent oil leakage into the compressor chamber.
Arguably the seals do exactly what they were designed to do, but the assumption about the effect of high air pressure preventing leakage into the compressor turned out to have been over-optimistic. This matters, because aero engine lubricating oil – an entirely synthetic fluid, not a mineral oil – contains organophosphate additives (tricresyl phosphate) that are highly effective anti-wear agents, but are also particularly toxic to humans.
A 2014 study by Robert Flitney, a sealing technology consultant, established that oil from the bearings does indeed leak into the compressor chamber despite the high air pressure in it. As Flitney explains: “Simply put, the labyrinth seal is essentially a controlled leakage device relying on pressurisation to minimise oil leaking along the compressor shaft.” It may indeed minimise it, Flitney found, but it does not prevent it. As a result, the lubricant that escapes from the seals into the hot environment of the engine compressor chamber is continuously – and inevitably – delivered as pyrolised fumes via the engine bleed air system into the cockpit and cabin. Mechanical oil seals similarly leak a small amount. Meanwhile the bleed air flow to the cabin is not filtered, and there are no detection systems anywhere on the aircraft to measure contamination levels or to alert crews to contamination risk – nor is a detection system mandated.
The latest empirical examination of the cabin air issue was carried out at the UK’s premier aeronautical university, Cranfield. The same establishment in 2011 produced a report into cabin air quality commissioned by the UK Civil Aviation Authority on behalf of the Department for Transport (DfT). At the time, controversially, that study confirmed that engine oil fumes were indeed carried into the cabin, but proposed that the contaminants were not a hazard to human health at the levels measured.
The report admitted, however, that during the period in which the study team was taking cabin air samples for analysis, there was no occurrence of a “fume event” – an incident in which higher concentrations of oil fumes enter the cabin. Sometimes this is because of the failure or partial failure of an engine oil seal, but it can result from a simple variation of engine power, which varies the internal gas pressure and temperature distribution and affects the seal effectiveness. As the DfT says: “The science is difficult because fume events are unpredictable and can last just a couple of minutes.” It also states that its research into cabin air quality “has been completed and the department’s programme in this area has now stopped”.
Fume events are not everyday occurrences, but neither are they very rare. Their exact frequency is undocumented, partly because the industry and government agencies play down their significance, and the reporting rate per occurrence is unknown. The issue that is particularly carefully ignored, however, is the continuous presence of low level cabin air contamination resulting from the fact that engine oil-seal leakage, it has now been established, is effectively a designed-in phenomenon.
This exposes those who fly for a living – and also frequent fliers – to the risk of the cumulative effects of neurotoxins that can build up in their systems even if they don’t experience a fume event. The DfT itself admits that the chemical constituents of aero-engine oil are potentially neurotoxic, but maintains that the levels of exposure are so low as to be harmless. The DfT has not, however, carried out any research into the cumulative medical effects of low level exposure, despite hundreds of pilots and cabin crew having had to retire because of ill-health – some following fume events, but rather more suffering long-term health degradation from continuous exposure. As the DfT admits, however, its studies into this phenomenon have now stopped.
But the latest study related to cabin air quality carried out at Cranfield is an independent one, and it examined the issues relating to engine design. It also highlighted the failure of regulatory organisations like the DfT or EASA to enforce laid-down standards for bleed air system certification. The study, for which Cranfield holds the copyright, concludes: “Low-level oil leakage in normal flight operations is a function of the design of the pressurised oil and bleed-air systems. The use of the bleed-air system to supply the regulatory required air quality standards is not being met or being enforced as required.”
This study was carried out at Cranfield by Dr Susan Michaelis, a former airline pilot who, in 2010, had been awarded a PhD by the University of New South Wales, Australia for her thesis “Health and flight safety implications from exposure to contaminated air in aircraft”. In 2016 Cranfield added an MSc to her academic achievements. The result of the MSc research is her new paper “Implementation of the requirements for the provision of clean air in crew and passenger compartments using the aircraft bleed air system”, which also won Dr Michaelis the accolade of “best overall student on the [Cranfield] MSc Air Safety and Accident Investigation” course.
Meanwhile the European Aviation Safety Agency has produced an industry-led study that more or less re-hashes the old industry arguments, the main tenet again being that although potentially harmful organophosphate-based fumes are present in bleed air, the concentration is too low to cause harm. EASA does not address the issue of repeated crew exposure to low levels of harmful toxins combined with occasional “fume events”; where it has been established that specific crews have suffered medically identified symptoms, including incapacitation in flight, EASA’s report dismisses them as psychosomatic.
Now the University of Stirling has just had a paper published (June 2017) in the World Health Organisation’s journal Public Health Panorama. It examines “the health of aircrew who are suspected to have been exposed to contaminated air during their careers,” and says the study shows “a clear link between being exposed to air supplies contaminated by engine oil and other aircraft fluids, and a variety of health problems. Adverse effects in flight are shown to degrade flight safety, with the impact on health ranging from short to long-term”.
The report confirms that more than 300 aircrew, whose cases were examined, “had been exposed to a number of substances through aircraft’s contaminated air and reveal a clear pattern of acute and chronic symptoms, ranging from headaches and dizziness to breathing and vision problems”. One of the report’s authors, Professor Vyvyan Howard, professor of pathology and toxicology, Centre for Molecular Biosciences at the University of Ulster, added: “What we are seeing here is aircraft crew being repeatedly exposed to low levels of hazardous contaminants from the engine oils in bleed air, and to a lesser extent this also applies to frequent fliers. We know from a large body of toxicological scientific evidence that such an exposure pattern can cause harm and, in my opinion, explains why aircrew are more susceptible than average to associated illness.”
Recorded fume events causing sensory impairment and incapacitation of pilots and cabin crew are numerous, but listing them all is of limited use because the stories are remarkably similar. In terms of scale, an event over Canada on 24 October last year is notable because it involved an Airbus A380, but similar events have been recorded on all types, large and small. In the A380 case British Airways flight 286, en route from San Francisco to London, was over Saskatchewan when it was forced to divert to Vancouver after a major fume event that incapacitated at least eight crew members, forcing them onto oxygen. When it landed, all three pilots and 22 cabin crew were taken to hospital, and many of them were unfit for work months later, according to their union, Unite. The condition of the passengers is unknown. There has been no formal inquiry by British authorities into the event, and BA was left alone to deal with it. BA says the aircraft’s flight back to London was uneventful.
Some individual aircraft become notorious for fume events but remain in service with no follow-up by the authorities. An example is N251AY, a US Airways Boeing 767-200. On 16 January 2010 it operated a flight from St Thomas, US Virgin Islands, to Charlotte, North Carolina with 174 passengers and seven crew on board. During the flight the cabin crew noticed an unpleasant smell in the cabin, and the pilots suffered the onset of headaches, sore throat and eye irritation. By the time they were managing the approach to Charlotte they began to feel groggy and had difficulty in concentrating, but they landed the aircraft safely. During the en-route phase the pilots had messaged base to request medical attendance on arrival.
The event has been confirmed by US Airways but is not recorded by the FAA or the National Transportation Safety Board. Crew blood tests on arrival confirmed high levels of carboxyhaemoglobin, all the symptoms persisted for days, and the feeling of fatigue never left the pilots. They had their aircrew medical clearance rescinded and lost their pilot licences.
In March the same year the US Association of Flight Attendants reported that eight pilots and cabin crew members, including all but one of the crew on the St Thomas-Charlotte flight on 16 January 2010, did not return to work, and that there had been at least three known fume events on N251AY in December and January. The only fault the airline said it found was leaky rear door seals which, arguably, could have allowed engine fumes into the cabin on the ground, but the AFA says it doubts that explains what actually happened.
US Airways had carried out a borescope check on N251AY’s engines but initially found no engine fault. Dr Michaelis points out, however, that there did not have to be a fault for oil seal leakage into the compressor to take place. Her Cranfield work explains that oil seal leakage is lowest during stable flight phases like cruise, but in transition phases like start, spool-up, throttle-back, or whenever the power is varied, the pressure and thermal equilibrium is disturbed, and a fume event can occur even when there is no bearing fault. Hence the frequency of “no fault found” reports from the engineers after post-fume-event inspections.
Meanwhile internal reports and messages by official agencies about cabin air contamination also abound, but again they all say much the same thing. Here is one example of a US FAA report in 2009 recorded, along with many others, in one of Dr Michaelis’ research papers. The FAA said: “Lubricants: Many incidents of smoke/fumes in aircraft cabins have been linked to contamination of cabin air with pyrolytic products of jet engine oils, hydraulic fluids, and/or lubricants by leaking into ventilation air. These leaks can be subjected to 500°C or higher temperatures. If the origin of the smoke/fumes is of organic petroleum derivatives, then the smoke/fumes may cause a multitude of symptoms, including central nervous system dysfunction and mucous membrane irritation.”
Ever since the US Watergate political scandal and cover-up, when news media are following such a story they tend to tag it as a cover-up by suffixing the key word with “gate”. In what is perhaps the most notorious aviation industry alleged cover-up – the Westgate affair – the suffix was already in place. Richard Westgate was a British Airways A320 pilot when he died at age 43 in December 2012.
Westgate had been treated by a specialist clinic in Brussels for a painful neurological disorder for more than a year before his death, and extensive neurological damage was confirmed by his post-mortem. But when the coroner, Dr Simon Fox, QC, ruled on the cause of death, he stated it was the result of a self-administered, non-intentional overdose of pentobarbital, a sedative taken to aid sleep. Westgate died alone in a hotel room in Brussels.
Fox explained in his judgement that, although Westgate may have been exposed to organophosphate neurotoxins as a result of his job, and although that may have caused his poor health, it was not the cause of his death. He also ruled that there had been no causative negligence by British Airways, the Civil Aviation Authority or the Health and Safety Executive, basically because there are no prescriptive rules or guidelines relating to cabin air quality. Effectively, there are no applicable laws, so nobody broke the law.
Fox stated: “My provisional view subject to representations is that, whether or not in life in the period of months or years before his death the deceased was suffering from an illness caused by exposure to organophosphates in the course of his employment as a commercial pilot, is not a proper issue to be the subject of the Inquest.” In a statement that the dead pilot’s mother, Judy Westgate, read out after the judgement, she concluded with the words: “One day the truth will out.”
The subject of contaminated cabin air is to be reviewed by scientists, medical experts and engineers at the International Aircraft Cabin Air Conference at Imperial College, London on 19-20 September 2017 https://www.aircraftcabinair.com. Potential solutions to the problem will be aired as well as research and reports.
Meanwhile the industry and its regulators repeat the mantra that contaminants in the bleed air are at a harmless level of concentration. Dr Michaelis, in her studies, cites numerous, detailed, publicly recorded scientific data sources that all indicate there is no such thing as a safe level of exposure to the chemicals in aero-engine oil, especially when they are released in the form of pyrolised fumes.
But the industry can claim what it likes because it does not have to prove its case. In the courts, the legal burden of proof rests entirely on those who claim to be victims of cabin air toxins, and it seems not to be sufficient to demonstrate – as in the Westgate case – both that leakage of neurotoxic organophosphates into the cabin air is continuous and inevitable, and that the observable neurological harm to crew was neither coincidental nor psychosomatic.
No person or organisation associated with the 2015 Shoreham flying display accident escaped criticism – implied or actual – in the Air Accidents Investigation Branch’s final report.
When the aircraft that later crashed, the Hawker Hunter T7, took off from its North Weald, Essex base heading for Shoreham to fly the display, it had several time-expired or unserviceable components on board. The AAIB report says it was therefore not in compliance with its permit to fly, yet none of these faulty components caused the accident.
The Flying Display Director hired to manage show safety was fully qualified in terms of knowledge, experience and expertise to oversee all aspects of the flying display, but the report implies there were some things – like the exact display routine the Hunter was to fly – he should have risk-assessed manoeuvre by manoeuvre. Yet even if he had, the crash might still have happened.
The Hunter’s pilot was fully qualified in terms of experience, training, flying recency and medical fitness to carry out the display he had planned, but on the day he got one of the aerobatic manoeuvres badly wrong, making mistakes that are difficult to understand in somebody so experienced.
The mistakes meant that he himself was seriously injured when thrown clear of the aircraft on impact with the A27 highway next to the airfield, but 11 people on the road were killed.
By flying a similar Hunter through the same manoeuvres, the AAIB has determined that the pilot could not have pulled the aircraft out of his “bent loop” without hitting the ground once he had passed the apex too low and failed, at that point, to carry out an “escape manoeuvre” by rolling the aircraft upright.
If he had used his ejection seat during the high speed descent from the loop it would not have saved him, and the pilotless aircraft would still have hit the ground, possibly in much the same place. The only way the pilot could have avoided killing the people on the A27, says the report, was to crash the aircraft into nearby fields, but during the last second or so he probably still hoped he could avoid harm to road-users by pulling up in time.
Why was he too low at the loop’s apex?
He should have entered the loop from a 500ft base, but he started at about 185ft. The height of the top of a loop compared with the entry height is a function of speed and engine power at entry. The aircraft should have entered at a minimum 350kt with full power selected, but the Hunter entered at 310kt with less than full power until well into the pull-up. The pilot should have aimed for a 4,000ft apex with 150kt indicated airspeed over the top, but in fact it got to less than 3,000ft with 105kt.
Unless the pilot recognised the lack of energy at that point and carried out the rolling escape manoeuvre, he and the aircraft were doomed.
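The energy deficit can be illustrated with a simple zero-loss trade of kinetic for potential energy. This idealisation ignores thrust and drag entirely, so it only bounds the real figures, but it shows how sensitive the apex is to entry speed – and that even a perfect exchange from the actual entry conditions would have left the aircraft short of the briefed 4,000ft apex.

```python
# Idealised loop energy exchange: v1^2 - v2^2 = 2*g*h (no thrust, no drag).
# A rough physics sketch, not a reconstruction of the AAIB's analysis.
G = 9.81             # gravitational acceleration, m/s^2
KT_TO_MS = 0.514444  # knots to metres per second
FT_PER_M = 1 / 0.3048

def ideal_height_gain_ft(entry_kt: float, apex_kt: float) -> float:
    """Height gained if all excess kinetic energy becomes potential energy."""
    v1 = entry_kt * KT_TO_MS
    v2 = apex_kt * KT_TO_MS
    return (v1**2 - v2**2) / (2 * G) * FT_PER_M

# Briefed: enter at 350kt from 500ft, aiming for 150kt over a 4,000ft apex.
briefed = ideal_height_gain_ft(350, 150)   # roughly 4,400ft of ideal gain
# Flown: entered at 310kt from about 185ft, reaching 105kt below 3,000ft.
flown = ideal_height_gain_ft(310, 105)     # roughly 3,750ft of ideal gain
```

Even the ideal figure for the actual entry (about 3,950ft apex from 185ft) exceeds the sub-3,000ft apex the Hunter reached, a reminder of how much energy drag and the reduced power setting consumed in the real manoeuvre.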
Why a pilot with so much experience of teaching, let alone flying, aerobatic manoeuvres failed to heed these indicators that the loop was going wrong may never be known, because trauma has obliterated the details of the fatal flight from the pilot’s memory, according to the report.
The final Shoreham report confirms the impressions given by the earlier AAIB bulletins on the subject. Because no-one in an on-site air display audience in the UK has been killed since the early 1950s, that record appears to have led to complacency.
Not rampant complacency, but a relaxed belief that all the people involved are experts who know what they are doing, so they don’t need to be given the third degree before a show.
The sign that not all was well was the number of serious air display accidents, mostly fatal, that occurred just outside the area controlled by the display organisers – just like the Shoreham Hunter crash.
The AAIB found that 65% of all air show accidents came into that category, but almost always the only person harmed was the pilot. So nobody, including the CAA, raised the alarm, until now.
Meanwhile aerodromes used for decades as air show venues have suffered encroachment at their boundaries by expanding residential and industrial development. This affects the profiles aircraft are allowed to fly during a display, and flight display directors are bound to take this into account.
Display lines, and entry and exit profiles, are no longer dependent purely on where the display audience “crowd line” is; they must also take into account what each aircraft would have to do, in the event of a technical or operational mishap during the display, to avoid crashing into a nearby populated zone.
These are considerations that will affect air shows in the future. If a flying display stops being exciting, it might as well give up. Or go somewhere else more rural.
Coastal air displays will survive, because the escape route for aircraft in trouble is obvious.
The best example of the conundrum air show organisers face is what has happened to the traditional Red Arrows display at the biennial Farnborough International Air Show. When the Reds reviewed their Farnborough routine in detail following the tightened guidelines published in the early Shoreham bulletins, they found they had to curtail their display considerably.
In a statement following the release of the Shoreham final report, the CAA says: “We are fully committed to ensuring that all air shows take place safely, for the six million people who attend them each year in the UK and for the communities in which they take place.”
Airline pilots today are obliged to steer their machines according to an instrument dating from the Iron Age: the magnetic compass. Ships’ commanders these days use it only if all else fails.
By modern navigation standards the magnetic compass is not an accurate device. An aviator flying along a magnetic meridian toward either the North or South Magnetic Pole flies “a wiggly track” according to the Geomagnetism Team of the British Geological Survey. The pilot’s magnetic compass may display a constant heading, but the aircraft relying on it follows the gently wandering vagaries of the earth’s dipolar magnetic field.
In 2011 a Boeing 737 suffered a fatal crash on approach to land because of the artificially-induced complexities of a navigation system based on Magnetic North in a digital era (detail later). All four crew and eight of the 11 passengers were killed.
Modern aviation navigation can be conducted using a phenomenally accurate, multi-sensor system orientated to the earth’s spin axis, with reliable integrated backups. But in fact it’s compromised by the decision to continue using a legacy system of orientation based on the earth’s ever-changing magnetic field.
This dependency on steering by the earth’s dipolar magnetic field when technology provides far better alternatives is enforced by institutions like the International Civil Aviation Organisation and the International Air Transport Association which are content with the status quo.
For the time being at least.
ICAO’s maritime sibling, the International Maritime Organisation, approved navigation by True North/South beginning in the late 1970s, and now it is universal for all but a few coastal mariners who choose not to use GPS backed up by inertial navigation systems (INS). Now the IMO is in the final stages of implementing standards for what it calls “e-navigation”, its way of describing the use of the best available integrated digital, satellite and other technology, plus best practice, to achieve the most accurate and reliable navigation at sea, all with the earth’s spin axis as the directional orientation reference.
ICAO itself, on the other hand, commented recently: “This issue [navigation by True North] is not on our work programme at present.”
Asked for a comment on the situation, the UK CAA said: “We understand the issue, and with the increased use of GPS etc., moving to True North does make more sense. Also as more aerodromes look to formalise GNSS (global navigation satellite systems) approaches the logic is clear.”
So why not adopt True North? It’s not the CAA’s job to make this decision (rather it’s for ICAO and EASA), but the CAA slightly apologetically offered this explanation on behalf of the international aviation establishment: “If you were starting from a blank sheet of paper with the technology available today, you would select True North. But aviation started with magnetic from the outset. The infrastructure supporting aviation is also based on magnetic, including VORs, runway directions, approach procedures, radar etc.”
ICAO and IATA argue that navigation by magnetic track still works, so there’s no need to face the effort and cost of moving to an orientation system based on the earth’s spin axis (True North/South), despite the fact that the cost of changeover would be one-off.
Maintaining the existing system, which requires regular updating as the earth’s two magnetic poles constantly migrate relative to the geographic North and South Poles, has a continual cost, but that’s apparently fine because it’s built into the system’s budget, so no new decisions need to be made.
Magnetic navigation is fine as a backup system – and nobody doubts that every aircraft will continue to carry a standby magnetic compass in the cockpit as long as manned flight lasts. The IMO requires all ships to have a magnetic compass, but to steer by a system using True North.
Maps and charts are oriented according to the earth’s spin axis – True North/South. This is also the orientation datum programmed into the firmware of aircraft flight management systems (FMS). These have to convert their orientation information to Magnetic to pass it to the avionics displays, unless the pilots choose to select True, which, of course, they can. But air traffic management protocol requires them to use Magnetic when operating under instrument flight rules (IFR).
Air traffic controllers still pass magnetic headings for pilots to steer for procedures and traffic separation purposes. Pilots still navigate by adopting Magnetic headings which are actually converted from True by the FMS and shown on their compasses in the primary flight display/navigation display.
The FMS does this by applying a “variation” between Magnetic and True that was embedded in the firmware when the system was set up.
This variation value needs to be updated regularly, but it rarely is, despite the fact that in the last 40 years the rate of migration of the Magnetic North Pole (MNP) has accelerated dramatically. FMS software is easily updated, but firmware is more of a problem.
In fact the surface position of the MNP is forecast to reach its closest point relative to the Geographic North Pole (GNP) in 2020 (approximately 87N 170E), and then it will continue moving tangentially past the GNP toward Russia’s north coast. Therefore the so-called magnetic heading most aircraft are flying is inaccurate because the variation value – in many fleets – has not been updated since the FMS was new.
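The conversion the FMS performs is simple arithmetic; what decays over time is the stored variation value. A minimal sketch of the idea (the function name and sign convention are illustrative, not taken from any avionics code, and the “drifted” variation figure is hypothetical):

```python
def true_to_magnetic(true_heading_deg: float, variation_deg_east: float) -> float:
    """Convert a True heading to a Magnetic heading.

    variation_deg_east is the magnetic variation, positive when East,
    negative when West -- the old mnemonic "East is least, West is best"
    when going from True to Magnetic.
    """
    return (true_heading_deg - variation_deg_east) % 360.0

# With a stale stored variation, the "magnetic" heading the crew sees is
# wrong by exactly the amount the real variation has drifted since the
# value was embedded:
stale = true_to_magnetic(347.0, -28.0)   # 15.0 degM with 28degW stored
actual = true_to_magnetic(347.0, -31.0)  # 18.0 degM if variation were now 31degW (hypothetical)
error_deg = actual - stale               # 3.0 deg of built-in heading error
```

The point of the sketch: the maths is trivial, so the whole accuracy of the displayed Magnetic heading rests on how recently the variation value in the firmware was refreshed.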
THE RESULTING FATAL CRASH
As a result of this built-in disparity, in 2011 an airliner fatally crashed into high ground because the pilots were confused by an inaccurate compass heading.
On 20 August 2011 the crew of the Bradley Air Services Boeing 737-210C (C-GNWN) were attempting an instrument landing system (ILS) approach to the airport at Resolute Bay, Nunavut, in Canada’s far northern islands, somewhat closer to the Magnetic North Pole than most aviators usually get to fly. The airfield and approach charts – and therefore the procedures – show True North for orientation. This is customary in polar regions because the magnetic field lines close to the Magnetic North Pole have a strong vertical component, so the lateral strength of the field reduces. Meanwhile the variation can be enormous.
The Resolute Bay ILS approach that day was to runway 35T, the localiser orientated to 347degT. The magnetic variation at Resolute Bay at the time was 28degW.
The autopilot was initially set to VOR/LOC Capture, and the compass system set to True, but according to the Transportation Safety Board of Canada (TSBC) report there was a compass error. The captain was flying 330degT according to the heading on his horizontal situation indicator (HSI), perceiving the intercept angle to be 17deg from the right of the localiser (347T).
The report explains: “However, due to the compass error, the aircraft’s true heading was 346deg. With 3deg of wind drift to the right, the aircraft diverged further right of the localiser. The crew’s workload increased as they attempted to resolve the ambiguity of the track divergence, which was incongruent with the perceived intercept angle and expected results.”
Diagram from TSBC report
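The geometry described in the report can be checked with a few lines of arithmetic. A sketch (the helper function is illustrative; the headings are the figures quoted above):

```python
def intercept_angle(course_deg: float, heading_deg: float) -> float:
    """Signed angle from the aircraft heading to a localiser course,
    positive when the course lies to the right, normalised to (-180, 180]."""
    diff = (course_deg - heading_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

LOCALISER = 347.0    # runway 35T localiser course, degT
INDICATED = 330.0    # heading shown on the captain's HSI
ACTUAL = 346.0       # aircraft's actual true heading per the TSBC
DRIFT_RIGHT = 3.0    # wind drift, deg to the right

perceived = intercept_angle(LOCALISER, INDICATED)  # 17.0: a healthy intercept
real = intercept_angle(LOCALISER, ACTUAL)          # 1.0: barely converging
track = (ACTUAL + DRIFT_RIGHT) % 360.0             # 349.0 degT
# The actual track (349degT) lies right of the localiser (347degT), so the
# aircraft diverged further right while the crew expected a 17deg intercept
# from the right -- exactly the incongruity the report describes.
```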
This is what happens when pilots don’t know what to believe, and in a region where there are three Norths – Magnetic, True, and Grid (the latter for local charts), confusion is the default when things don’t proceed as expected.
The crew were indeed confused and decided to abandon the approach, but just as they were initiating the go-around the 737 flew into a snow-covered rocky hilltop about a mile east of the airfield. It didn’t help that the aircraft was fitted with an old-fashioned – rather than an enhanced – ground proximity warning system, and that the crew were under additional pressure because they began the descent well above the glideslope.
Spurred by this event, at ICAO’s Air Navigation Conference in November 2012 Nav Canada proposed that aviation should stop using magnetic references and use only directional orientation relative to the Geographic North Pole.
This makes particular sense for any country – like Canada – whose territory approaches the Arctic or Antarctic regions, where crews are forced to use True close to the magnetic poles, but it would also work perfectly well worldwide. Yet Montreal-headquartered ICAO has still not put the issue on its “to do” list.
Now, however, with more flights than ever transiting the Arctic Ocean on routes between North America and South East Asia or India, steering by Magnetic North makes little sense. It also happens that the Magnetic North Pole is now migrating closer to the Geographic North Pole than it has ever been in recorded history.
That imminent closeness of the two North Poles is used by the pro-True lobby to suggest this is a natural time to change, because the changes required will be the smallest they have ever been. But it is change itself that presents the one-off cost and demands scarce human resources to organise, not the mathematical size of the variation between Magnetic and True.
The following rather simplified chart shows that the advantages – in today’s world – of Magnetic are few, and the disadvantages many and compelling, while the advantages of True are powerful and its disadvantages relatively trivial.
Maybe at present there is no actual urgency to adopt True North as aviation’s navigation lodestar, but industry voices on the subject are muted, as if it is bad form to step out of line.
A major European airline, not alone in its beliefs, has a compelling and detailed presentation on the subject, in which it concludes: “Transition from magnetic to true reference is unavoidable. The transition phase will need further studies in order to maintain the safety objectives. Time is ripe to start the transition process.” But the carrier was not prepared to break cover.
According to current plans, and if no new information comes to light, the Australian Transport Safety Bureau (ATSB) says it will suspend the MH370 search indefinitely on 2 January 2017. The Chinese have already stopped their search.
For those who understand aviation operations, here is a final blast of the raw information that Capt Simon Hardy established through his mathematical and geometric examination of known data. If you’re new to this story, scroll back to the previous one to establish what that data was.
Below is a diagram showing the tracks that, according to his calculations, Hardy reckons MH370 followed.
Now follows a flight plan which delivers the track Hardy worked out. This flight plan was fed into a Boeing 777 full flight simulator. Be patient, because it’s a long trip and the FP covers four pages of A4 (and that leaves out the calculations for a proposed diversion to Learmonth in Western Australia, on the grounds that the simulator will reject a flight plan without at least one planned alternate).
“I deduced a route mathematically using my technique, which has no relation to how much fuel was on board. Months later I used an airline system and entered this route to see where the aircraft would run out of fuel. Inputting the actual MH370 takeoff fuel of 49.1 tonnes – and allowing the system to do the usual route flight levels and speeds – resulted in a predicted fuel starvation within 12sec of where it should have occurred, after 7h 38min.
“The document that follows is an Airline Operational Flight Plan, the kind that thousands of pilots are using in flight right now.
“On long flights of 13 hours it will rarely be out by more than a few minutes and a few hundred kilos of fuel at the destination, once the actual takeoff time has been written in.
“This plan shows the aircraft running out of fuel 1min 48sec before the 7th arc (-175kg), using one turn point ANOKO, and one track of 188degT that was derived from my technique.
“The time the ATSB propose for the 777 to run out of fuel is 2min before the 7th arc (the error is just 12sec after 7h 38min).
“Aircraft weight and fuel on board are correct as per MH370’s load sheet. Fuel consumption is as 9M-MRO, although this [simulator] is a different 777 adapted to perform exactly the same.
“Initial flight level is FL370 instead of FL350, but a cruise time of less than 30min before transponder failure and unknown levels is so short as to make little difference. Simulator system constraints mean once it turns west it must be at ‘even’ flight levels, hence the climb to FL380.
“Winds are forced to zero as they were unknown at the time. Ratio of times 60/90 shows as 60/86. I postulated in December 2014 that this may be due to increasing headwind as we travelled towards the 6th arc. This was later proved to be correct! Application of winds will take the result away from a 12-second error to a 2min error. This is still an astounding result and is only true for the route inputted, and for the takeoff fuel of 49.1 tonnes.”
End of quote.
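For scale, the 12-second figure Hardy quotes can be set against the length of the flight – simple arithmetic, not part of his analysis:

```python
# How large is a 12-second fuel-exhaustion error over a 7h 38min flight?
flight_seconds = 7 * 3600 + 38 * 60   # 27,480 s in total
error_seconds = 12
relative_error = error_seconds / flight_seconds
# relative_error works out at about 0.04% of the total flight time
```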
The ATSB has all Hardy’s work including this FP.
Here are the four sheets that make up the flight plan.
Hardy welcomes comment and questions via this blog.
The minutes are ticking away to the ATSB’s suspension of the search, even as the arrival of the southern hemisphere midsummer makes searching easier. The aviation world, and all those associated with MH370, wish them luck during these last weeks.
Most theories posted to the web – especially related to serious subjects like this – usually attract massive peer criticism and public comment, but Hardy’s has faced no criticism, just requests for clarification.
It has, however, attracted great interest, and he has met the Australian Transport Safety Bureau at their request more than once to talk about it.
Hardy’s calculations put the resting position of MH370 just outside the planned area for the multinational search effort, close to its southern end. So the methodology that the official search team used produced results that are pretty close to the predictions Hardy reached independently.
Here is the ATSB’s explanation, posted on 21 September, as to why they will not look in Hardy’s predicted position even if the remainder of the planned search fails to find the aircraft:
“At a meeting of Ministers from Malaysia, Australia and the People’s Republic of China held on 22 July 2016, it was agreed that should the aircraft not be located in the current search area, and in the absence of credible new evidence leading to the identification of a specific location of the aircraft, the search would be suspended upon completion of the 120,000 square kilometre search area.”
The ATSB makes the intention clear: “It is expected that searching the entire 120,000 square kilometre search area will be completed by around December 2016.”
Hardy’s calculations put the MH370 wreck just outside that area, and they cannot be defined as “new evidence” because the ATSB knows about them already and has decided, without explaining why, not to search there.
By December the arrival of the southern hemisphere summer will have made the search much easier.
Hopefully the search team will find the aircraft remains within their planned search area. But what if they don’t?
If the ATSB won’t go there, Hardy is considering crowdfunding to extend the search for a few weeks into the area indicated by his work.