This flight tonight (final report)

Final Report Issued into Loss of AF447

The final report came out this month and is available here.

AF447 Final Report

There’s not a lot in it we didn’t know already, but it does bring a lot of material together in one place, and the analysis of what we do know is very interesting. A warning: it’s a big file if you choose to send it to the printer, as it is very graphics and photo intensive.

I’m working my way through the report and no doubt some comment will appear here over the next few months. However, I thought I’d comment on one aspect now: the management of sudden anomalies, as this has real and direct implications for process plant operations, task design, competence, and interface and alarm design. Section 1.16.8.1 of the report deals with the actions of the crew when faced with the need to detect an anomaly, diagnose its symptoms and root cause, modify their priorities to deal with it, and take the necessary actions in a timely and appropriate manner. No trivial task when the autopilot of an Airbus A330 suddenly dumps it all in your lap.

In the case of AF447, the crew should have done all of the above, relying on what are termed ‘immediate memory items’, which, in the case of airspeed anomalies, include setting the pitch and thrust appropriate to the altitude. However, this relies on the crew correctly diagnosing the root cause of the anomaly and selecting the correct ‘procedure’. The report contains a nice analysis of the cognition required to accomplish this successfully, which I’ll reproduce here.

In all cases, this includes a specific number of implications concerning human performance, which may be based on what can reasonably be expected of any human operator (for example noticing a clearly audible aural signal), or generic professional abilities normally present in the pilot community (“basic airmanship”), or even specific abilities which must be explicitly developed through a specific training course and/or practice.

In addition, these expected reactions result from various cognitive modes of activity.
Human operators notice and act according to their mental representation of the situation, and not to the “real” situation. The probability and speed of detection of anomaly signals is connected to their “salience”, that is to say to their ability to destabilise and modify the representation of the situation in progress, all the while being situated possibly outside the frame of this representation (that is to say unexpected, surprising, absurd, even “unthinkable” in its context).

Depending on the frequency of the operator’s exposure to the anomaly during his training or in real operations, his response may be automatic, applying rules, or developed on the basis of in-depth knowledge. Automatic responses assume recognition of very specific stimuli, to which the reaction is associated without true interpretation. Applying rules assumes not only their knowledge, but also the recognition of their conditions of applicability, and therefore the correct identification plus a specific interpretation of the anomaly. The construction of a response by calling on experience assumes incorporation of the anomaly in the mental representation of the situation, which can go via its destruction/reconstruction, very wasteful in resources and time-consuming.

In this way the correct perception of the situation by a crew, which enables the reliability and speed of diagnosis and decision to be improved, is linked not only to the way in which the situation is presented to this crew (interfaces, parameters) but also to their training and experience.

Based on the preceding, for there to be a good chance that these expectations of the crew may be met, it is therefore necessary:

  • That the signs of the problem are sufficiently salient to bring the crew out of their preoccupations and priorities in the flight phase in progress, which may naturally be distant from strict monitoring of the parameter(s) involved in the anomaly;
  • That these signs be credible and relevant;
  • That the available indications relating to the anomaly are very swiftly identifiable so that the possible immediate actions to perform from memory to stabilise the situation are triggered or that the identification of the applicable procedure is done correctly. In particular, it is important that the interfaces that usually carry anomaly information display, or at least allow, this initial diagnostic, given the minimum competence expected of a crew. Failing this, it is necessary to offset the lack of information supplied by the system which would enable the diagnostic to be reached by specific training;
  • That the memory items are known and sufficiently rehearsed to become automatic reflexes associated only with awareness of the anomaly, without the need to construct a more developed understanding of the problem;
  • That there are no signals or information available that suggest different actions or that incite the crew to prior reconstruction of their understanding of the situation.

All good advice to those of us looking at the management of critical situations. However, it’s not the end of the story. Two other pieces of information stand out for me relating to this event. Firstly, this event (unreliable indicated airspeed) had happened at least 13 times previously, four of them with Air France. Secondly, in none of these events had the crew used the required immediate memory items from the unreliable airspeed procedure, or indeed made any use of the procedure at all, to recover the situation (section 1.16.2). Instead, the situation was recovered by the pilots using a variety of inputs based solely on previous experience and skilled behaviours. That worked for the 13 occasions documented in the report. On the 14th occasion, when the airspeed indications failed and the autopilot disengaged, 4 minutes and 23 seconds later 228 people were dead.

The report synopsis identifies the crew’s failure to make the connection between the loss of indicated airspeeds and the appropriate procedure as one of six events leading to the crash. We need to keep in mind that the failure was not ultimately that of one particular crew on one particular flight, but a failure to provide a workable ‘system’ that gave them a fighting chance, and a failure to recognise that those 13 previous failures of that ‘system’ had been recovered solely by skill and luck.


Image credit: USAF via Wikimedia Commons

About the author

Tony Atkinson

I lead the ABB Consulting Operational Human Factors team. I've spent over 30 years in the process industries, working in control rooms around the world, in the fields of ergonomics, control and alarm systems, control room design and operational and cultural issues such as communications, competency and fatigue. I've been blogging on diverse topics that interest me in the widest sense of 'human factors', all of which share the same common element, the 'Mk.1 Human Being' and their unique limitations, abilities and behaviours. I'll discuss the technical and organisational issues that affect safety and performance of these process safety operators and technicians and how this impacts control rooms and the wider plant. However, learning comes from many places and you can expect entries about aviation, automotive, marine, healthcare, military and many other fields. Outside of work, I indulge in travel, food, wine and flying kites to keep myself moderately sane.