HUMAN FACTORS Part II – Slips, Lapses, Mistakes and Violations

I am not a pilot. John’s the pilot. I was a Flight Attendant - because I wasn’t a pilot - and in my time in the airline, CRM was becoming a thing. It was desperately needed, as we saw for ourselves the shenanigans in the flight deck - both good and bad - when it came to flight deck harmony and efficiency.

Although this series of articles is aimed at the flight deck, the consequences of things going wrong up there have major impacts for us as flight attendants and for the passengers who have no choice but to trust the pilots. Even now, I don’t think most pilots of my day really appreciated that fact.

After you read this short article - which is eye-opening about ourselves, to say the least (regardless of whether we are pilots or not) - zoom over to the article at the link below, which describes the horrors that can unfold when the captain believes he is above, and more important than, any other person on the aircraft (NB - the link has a weird description, but it’s about when turbulence damaged the interior of a B767 on approach to Christchurch and injured multiple flight attendants). How many slips, lapses, mistakes or violations did he make? Or was he just bad? - https://www.pauwelsflyingscholarship.co.nz/blog/diversity-drives-the-engine-of-the-scholarship-8yjw3-lrzwz-ff5n8-ybkr9

Alternatively, those who are curious about the PIA crash referred to in the thumbnail can learn about the incident here - https://www.youtube.com/watch?v=TOOKYR5ZJbQ

Part I of this series walked briefly through the genesis of Cockpit Resource Management (CRM). Central to the acceptance of CRM was the concept that to have humans involved in an activity is to invite the appearance of errors.

A foundational text in this area of human factors was written by Professor James Reason in 1990. Called “Human Error[1]”, the text draws together work from many eminent researchers. Reason makes the point that there was nothing new in the tragic accidents that occurred over the years since the Tenerife runway disaster in 1977, including the accidents described in the previous blog. These accidents (and others such as the Bhopal chemical discharge, the Challenger failure on launch, the Chernobyl nuclear meltdown, the Herald of Free Enterprise ferry capsize, the King’s Cross tube station fire and the Piper Alpha oil platform explosion) all involved human error. What was rather more notable was the nature and scale of the events, which created adverse effects that were widespread in both location and time.

Ernst Mach, in 1905, stated: “Knowledge and error flow from the same mental sources, only success can tell the one from the other.”

Cognitive psychologists envisaged a variety of causes underlying observable human errors. Spoiler alert: one can only ascribe an outcome to observable behaviours; to infer an intention suggests knowledge of the thought processes of the person observed; one can only ascribe intentions if one asks the individual what outcome they planned.

So, observing a vehicle sitting in a turning lane on a fine day with the wipers operating, one can only conclude that operating the wipers was the driver’s intention, rather than signalling a turn. Or was it? Maybe they were washing the dust and bugs off the windscreen.

Rasmussen, in 1974, postulated an error-orientated model of cognitive control mechanisms. His concern was to describe how serious errors could be caused by people in supervisory positions. His thesis was that there were three levels of control: Skill-Based, Rule-Based and Knowledge-Based.

At the skill-based level, the brain is operating on preprogrammed instructions, which we would routinely call “habits” or “muscle memory”.

Rule-based behaviour describes the situation where we are tackling familiar problems, with potential solutions governed by stored rules of the form “if (this situation), then (that) action”.

Knowledge-based responses apply in novel situations where the actions taken must be drawn from previous experience and stored information.

Reason built on the work of Rasmussen with the intention of better categorising basic human errors. He therefore talked about skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes.

These categories subsequently came to be called Slips and Lapses, which arise from unintended actions, and Mistakes, which arise from intended actions. Reason added a further category of Violations, which also arise from intended actions.

Slips are failures of attention and manifest as omissions, reversals, misordering or mistiming of intended actions. For example, selecting the windscreen wipers instead of the indicators when driving your partner’s European car, when you usually drive a Japanese car.

Lapses are failures of memory and are characterised by the omission of planned items, losing one’s place in a sequence of actions or forgetting intentions. Inadvertently skipping items on a checklist because of an external distraction is an example.

Mistakes can be rule-based: applying a good rule incorrectly, or applying a bad (inappropriate) rule. Or mistakes can be knowledge-based, arising from many variables such as a lack of appropriate knowledge, misrecognition of the situation or insufficient training or experience. Shutting down the incorrect engine when faced with an engine failure would normally be a rule-based mistake.

Violations were included because, in safety-critical systems, violations represent unacceptable states. A violation may be the result of the normalisation of practice which deviates from the expected standard; it may be the deliberate avoidance of a rule; or it may be an act of sabotage.
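For readers who like to see the taxonomy laid out as a decision procedure, the sketch below (in Python) recasts the four categories as a short classification function. It is a simplification of my own, not Reason’s notation; the function name, parameters and example calls are invented purely to illustrate the distinctions drawn above: whether the action was intended, whether any deviation was deliberate, and whether an unintended action stemmed from memory rather than attention.

```python
# A minimal sketch of the four categories as a decision procedure.
# The names and question wording are mine, used only for illustration.

def classify_error(action_intended: bool,
                   deliberate_deviation: bool,
                   failure_of_memory: bool) -> str:
    if not action_intended:
        # Unintended actions: the execution failed, not the plan.
        return "lapse" if failure_of_memory else "slip"   # memory vs attention
    if deliberate_deviation:
        # Intended action that knowingly departs from the expected standard.
        return "violation"
    # Intended action carried out as planned, but the plan itself was flawed
    # (rule-based or knowledge-based).
    return "mistake"


# Examples drawn from the article:
print(classify_error(False, False, False))  # slip: wipers instead of indicators
print(classify_error(False, False, True))   # lapse: checklist item skipped after a distraction
print(classify_error(True,  False, False))  # mistake: shutting down the incorrect engine
print(classify_error(True,  True,  False))  # violation: deliberate avoidance of a rule
```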

Humans are likely to routinely make slips and lapses, and system designs should maximise the opportunity to trap them. Humans will also make mistakes, and here forcing functions, such as electronic checklists that cannot be progressed unless completed in sequence, can help prevent mistakes from developing into unacceptable events or situations.
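To make the idea of a forcing function concrete, here is a minimal sketch of such a checklist, assuming nothing about any real avionics system; the class, method and item names are invented for illustration. It simply refuses to mark an item complete unless that item is the next one in the published sequence, so a skipped or out-of-order item is caught at the moment it happens rather than discovered later.

```python
class ElectronicChecklist:
    """Sketch of a forcing function: items must be completed in sequence."""

    def __init__(self, items):
        self.items = list(items)   # the published checklist, in order
        self.next_index = 0        # index of the next item still open

    def complete(self, item):
        """Mark an item complete only if it is the next one in sequence."""
        if self.is_closed():
            raise ValueError("Checklist is already complete")
        expected = self.items[self.next_index]
        if item != expected:
            # The skipped or out-of-order item is trapped here, before the
            # crew can move on with the checklist silently incomplete.
            raise ValueError(f"Cannot complete '{item}': '{expected}' is still open")
        self.next_index += 1

    def is_closed(self):
        """Only reports complete once every item has been actioned in order."""
        return self.next_index == len(self.items)


# Hypothetical usage (the item names are invented for illustration):
checklist = ElectronicChecklist(["Fuel pumps ON", "Flaps SET", "Trim SET"])
checklist.complete("Fuel pumps ON")
# checklist.complete("Trim SET")   # would raise: "Flaps SET" is still open
print(checklist.is_closed())        # False until all three items are completed
```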

Slips, Lapses and Mistakes can also be recognised and trapped by work colleagues.

Humans also, regrettably, commit violations, and these must be identified and managed to prevent future occurrences.

[1] Reason, J. Human Error. Cambridge University Press, 1990.
