2.6 PEOPLE, CONTEXT AND SAFETY — THE SHEL MODEL

2.6.1 Aviation workplaces are multi-component, multi-feature, complex operational contexts. Their functions and performance involve complex relationships among their many components in order for the system to achieve its production goals.
2.6.2 To understand the human contribution to safety, and to support the operational performance needed to achieve the system's production goals, it is necessary to understand how that performance may be affected by the various components and features of the operational context, and by the interrelationships between components, features and people.
2.6.3 A very simple example is presented in Figure 2-6. The caveman is representative of operational personnel,
and the mission (or production goal of the system) is to deliver packages to the other side of the mountains. The
different components and features of the operational context and their interaction with the caveman, and among
themselves, will impact the safety and efficiency of the delivery of packages. Thus, the interaction of the caveman with
the lions may have detrimental effects on the delivery, unless the caveman is properly equipped to deal with them.
Figure 2-6. People and safety
2.6.4 Transiting through the mountains on a probably circuitous and unpaved road without footgear will detract from efficient performance (delays in delivering the packages) and may lead to injuries, thereby raising safety concerns.
Braving the possible weather without rain gear is also a source of potential deficiencies in safety and efficiency.
2.6.5 It is thus evident that proper consideration and analysis of the operational context provides valuable information for understanding, supporting and enhancing operational performance.
2.6.6 The need to understand operational performance within the operational context in which it takes place is further illustrated by another example in Figure 2-7A.
2.6.7 In this case, the system’s production objective is the delivery of packages by runners between points A and
B. It is a basic assumption in the design of the system that runners will follow the shortest route, which is represented by
the straight line.
2.6.8 No investment is spared to optimally resource the system. The best available human resources, in this
case the runners, are selected, trained, indoctrinated and equipped with the best available running gear (technology). As
part of the system design, monitoring of operations in real time is included. Once design steps have been completed,
operations begin. Shortly after system operational deployment, monitoring of operations in real time begins. Much to the
dismay of system managers, real-time monitoring discloses that most runners do not follow the intended straight-line path but rather a zigzagging one. As a consequence, deliveries are delayed and incidents occur
(Figure 2-7B).
2.6.9 At this point, system managers have two options. One option is to follow the traditional perspective discussed in 2.3.6: issue hollow reminders to runners to do what they know and have been trained to do, and allocate blame and punish the runners for failing to perform as expected. The other option is to analyse the operational
context to see if there are components and features of the context that might be the source of adverse interactions with
the runners. In following the second option, valuable information about certain components and features within the
context will be acquired (Figure 2-7C), which will allow for the readjustment of design assumptions and the development
of mitigation strategies for the safety risks of the consequences of unforeseen components and features of the context.
In other words, by acquiring information on hazards (discussed in Chapter 4) in the operational context and
understanding their interactions with people, system managers can bring the system back under organizational control.
2.6.10 It is thus proposed that a proper understanding of operational performance and operational errors cannot
be achieved without a proper understanding of the operational context in which operational performance and errors take
place. This understanding cannot be achieved unless a clear differentiation is made between processes and outcomes.
There is a tendency to attribute a symmetry to the causes and consequences of operational errors which, in practice, does not exist. The very same error can have significantly different consequences, depending upon the context in which it takes place. The consequences of operational errors are not person-dependent but context-dependent (Figure 2-8). This concept has a significant impact on mitigation strategies: efficient and effective error-mitigation strategies aim at changing those features and components of the operational context that magnify the consequences of errors, rather than at changing people.
2.6.11 Figure 2-8 also illustrates a scenario where the two managerial options discussed in 2.3.6 might apply.
Following the traditional approach would lead to reminders about being careful when leaning (or not leaning) on windowsills and about the dangers of pushing flowerpots out of the window, the rewriting of procedures to those effects, or punishment for pushing flowerpots out of the window (failure to perform as expected or to perform safely). On
the other hand, the organizational approach would lead to installing a containment net under the window, broadening the
windowsill, using flowerpots of the frangible type, re-routing traffic under the window or, in extreme circumstances,
fencing off the window. The bottom line is that by removing or modifying the error-inducing features of the operational
context, an exponential reduction in the probability and severity of the consequences of operational errors is achieved.
2.6.12 A simple, yet visually powerful, conceptual tool for the analysis of the components and features of
operational contexts and their possible interactions with people is the SHEL model. The SHEL model (sometimes
referred to as the SHEL(L) model) can be used to help visualize the interrelationships among the various components
and features of the aviation system. This model places emphasis on the individual and the human’s interfaces with the
other components and features of the aviation system. The SHEL model’s name is derived from the initial letters of its
four components:
a) Software (S) (procedures, training, support, etc.);
b) Hardware (H) (machines and equipment);
c) Environment (E) (the operating circumstances in which the rest of the L-H-S system must function);
and
d) Liveware (L) (humans in the workplace).
2.6.13 Figure 2-9 depicts the SHEL model. This building-block diagram is intended to provide a basic
understanding of the relationship of individuals to components and features in the workplace.
2.6.14 Liveware. In the centre of the SHEL model are the humans at the front line of operations. Although
humans are remarkably adaptable, they are subject to considerable variations in performance. Humans are not
standardized to the same degree as hardware, so the edges of this block are not simple and straight. Humans do not
interface perfectly with the various components of the world in which they work. To avoid tensions that may compromise
human performance, the effects of irregularities at the interfaces between the various SHEL blocks and the central
Liveware block must be understood. The other components of the system must be carefully matched to humans if
stresses in the system are to be avoided.
Figure 2-9. The SHEL model
2.6.15 Several different factors put the rough edges on the Liveware block. Some of the more important factors
affecting individual performance are listed below:
a) Physical factors. These include the human’s physical capabilities to perform the required tasks, e.g.
strength, height, reach, vision and hearing.
b) Physiological factors. These include those factors which affect the human’s internal physical
processes, which can compromise physical and cognitive performance, e.g. oxygen availability,
general health and fitness, disease or illness, tobacco, drug or alcohol use, personal stress, fatigue
and pregnancy.
c) Psychological factors. These include those factors affecting the psychological preparedness of the
human to meet all the circumstances that might occur, e.g. adequacy of training, knowledge and
experience, and workload.
d) Psycho-social factors. These include all those external factors in the social system of humans that
bring pressure to bear on them in their work and non-work environments, e.g. an argument with a
supervisor, labour-management disputes, a death in the family, personal financial problems or other
domestic tension.
2.6.16 The SHEL model is particularly useful in visualizing the interfaces between the various components of the aviation system; an illustrative sketch follows the list below. These include:
a) Liveware-Hardware (L-H). The interface between the human and technology is the one most
commonly considered when speaking of human performance. It determines how the human interfaces
with the physical work environment, e.g. the design of seats to fit the sitting characteristics of the
human body, displays to match the sensory and information processing characteristics of the user,
and proper movement, coding and location of controls for the user. However, there is a natural human
tendency to adapt to L-H mismatches. This tendency may mask serious deficiencies, which may only
become evident after an occurrence.
b) Liveware-Software (L-S). The L-S interface is the relationship between the human and the supporting
systems found in the workplace, e.g. regulations, manuals, checklists, publications, standard operating
procedures (SOPs) and computer software. It includes such “user-friendliness” issues as currency,
accuracy, format and presentation, vocabulary, clarity and symbology.
c) Liveware-Liveware (L-L). The L-L interface is the relationship between the human and other persons in
the workplace. Flight crews, air traffic controllers, aircraft maintenance engineers and other operational
personnel function as groups, and group influences play a role in determining human performance. The
advent of crew resource management (CRM) has resulted in considerable focus on this interface. CRM
training and its extension to air traffic services (ATS) (team resource management (TRM)) and
maintenance (maintenance resource management (MRM)) focus on the management of operational
errors. Staff/management relationships are also within the scope of this interface, as are corporate
culture, corporate climate and company operating pressures, which can all significantly affect human
performance.
d) Liveware-Environment (L-E). This interface involves the relationship between the human and both
the internal and external environments. The internal workplace environment includes such physical
considerations as temperature, ambient light, noise, vibration and air quality. The external
environment includes such things as visibility, turbulence and terrain. The twenty-four-hour-a-day, seven-days-a-week aviation work environment includes disturbances to normal biological rhythms,
e.g. sleep patterns. In addition, the aviation system operates within a context of broad political and
economic constraints, which in turn affect the overall corporate environment. Included here are such
factors as the adequacy of physical facilities and supporting infrastructure, the local financial situation,
and regulatory effectiveness. Just as the immediate work environment may create pressures to take
short cuts, inadequate infrastructure support may also compromise the quality of decision-making.
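As an illustrative aid only, the following Python sketch shows one way the four SHEL components and the Liveware-centred interfaces could be recorded for analysis. All identifiers are assumptions introduced for this example; they are not part of the SHEL model itself.

```python
# Minimal sketch (illustrative only): recording SHEL interface mismatches
# in operational terms. Names are assumptions, not part of the model.
from dataclasses import dataclass
from enum import Enum

class Component(Enum):
    SOFTWARE = "S"      # procedures, training, support
    HARDWARE = "H"      # machines and equipment
    ENVIRONMENT = "E"   # operating circumstances
    LIVEWARE = "L"      # humans in the workplace

@dataclass
class InterfaceMismatch:
    """A rough edge at the interface between central Liveware and another block."""
    other: Component    # the L-H, L-S, L-L or L-E counterpart
    description: str    # stated in operational, not scientific, terms

    def label(self) -> str:
        return f"L-{self.other.value}: {self.description}"

# Example: the partially hidden switch discussed in 2.7.6 is an L-H mismatch.
mismatch = InterfaceMismatch(Component.HARDWARE,
                             "switch partially hidden by a lever during night operations")
print(mismatch.label())  # L-H: switch partially hidden by a lever during night operations
```

A structure of this kind merely makes explicit that every mismatch is expressed relative to the central Liveware block, which is the emphasis of the model.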
2.6.17 Care must be taken to ensure that operational errors do not “filter through the cracks” at the interfaces.
For the most part, the rough edges of these interfaces can be managed, for example:
a) The designer can ensure the performance reliability of the equipment under specified operating
conditions.
b) During the certification process, the regulatory authority can define realistic conditions under which the
equipment may be used.
c) The organization’s management can develop standard operating procedures (SOPs) and provide
initial and recurrent training for the safe use of the equipment.
d) Individual equipment operators can ensure their familiarity with, and confidence in, using the equipment safely under all required operating conditions.
2.7 ERRORS AND VIOLATIONS
Operational errors
2.7.1 The growth the aviation industry has experienced over the last two decades would have been impossible
had advanced technology not been available to support the increased demand for the delivery of services. In production-intensive industries like modern aviation, technology is essential to satisfy requirements regarding the delivery of
services. This is a fundamental point often overlooked in safety analyses. The introduction of technology does not
primarily aim at improving safety; the introduction of technology primarily aims at satisfying the demand for the increase
in the delivery of services, while maintaining existing margins of safety.
2.7.2 Technology is thus introduced on a massive scale in an effort to satisfy production demands. One result of
this mass introduction of technology aimed at improved service delivery is that the Liveware-Hardware interface of the
SHEL model is overlooked, or not considered to the extent that it should be. As a consequence, technology that is
not sufficiently developed may be introduced prematurely, leading to unexpected failures.
2.7.3 While the introduction of underdeveloped technology is an inevitable consequence of the needs of any
mass production industry, its relevance to the management of safety cannot be disregarded. People on the front lines,
such as operational personnel, need to interact daily with technology while performing their operational tasks in order to
achieve the delivery of services. If the Liveware-Hardware interface is not properly considered during technology design,
and if the operational consequences of the interactions between people and technology are overlooked, the result is
obvious: operational errors.
2.7.4 The perspective of operational errors as an emerging property of human/technology systems brings a
significantly different perspective to the management of safety when compared with the traditional, psychology-based
perspective on operational errors. According to the psychology-based perspective, the source of error “resides” within
the person, and is a consequence of specific psycho-social mechanisms explored and explained by the different
branches of research and applied psychology.
2.7.5 Attempting to anticipate and mitigate operational errors effectively following a psychology-based perspective
is extremely difficult if not altogether impossible. Selection may filter out individuals without the basic traits needed for the
job at hand, and behaviour can be influenced by training and regulation. Nevertheless, the flaw of this perspective, from a
strictly operational viewpoint, is clear: it is impossible to anticipate in a systematic manner typical human frailties such as
distraction, tiredness and forgetfulness, and how they can interact with components and features of an operational context
under specific operational conditions. Individual-based mitigation strategies are considered “soft” mitigations, because
deficiencies in human performance will pop up when least expected, not necessarily in demanding situations, and unleash
their damaging potential.
2.7.6 The perspective of operational errors as an emerging property of human/technology systems removes the
source of the operational error from the human and places it squarely in the physical world, in the L/H interface. A mismatch
in this interface is the source of the operational error. As part of the physical world, the source of the operational error thus
becomes visible, and it can be articulated in operational terms (a switch is partially hidden by a lever making it difficult to
observe its correct position during night-time operations) as opposed to scientific terms (perceptual limitations). The source
of the operational error can therefore be anticipated and mitigated through operational interventions. There is not much that
safety management can achieve regarding human perceptual limitations, but there is an array of options available through
safety management to counteract the consequences of a design that includes a partially hidden switch.
2.7.7 It is part and parcel of the aviation safety tradition to consider operational errors as a contributing factor in
most aviation occurrences. This view, based on the psychology-based perspective discussed above, portrays operational
errors as a form of behaviour in which operational personnel willingly engage, as if personnel had a clear choice between committing an operational error or not, and willingly elected the former. Furthermore, an operational
error is considered indicative of substandard performance, flaws in character, lack of professionalism, absence of discipline
and similar attributions that years of partial understanding of human performance have developed. While convenient to
describe events and expedient to blame people, these attributions stop short of understanding and explaining operational
errors.
2.7.8 Following the alternative perspective on operational errors discussed, by considering operational errors as
an emerging property of human/technology systems, and by placing the source of errors in the mismatch in the L/H
interface, it becomes obvious that even the most competent personnel can commit operational errors. Operational errors are then accepted as a normal component of any system where humans and technology interact, and not considered as some type of aberrant behaviour: they are a natural by-product of human-technology interactions during the operational activities aimed at the delivery of the services of any production system. Once operational errors are accepted as normal, operational safety strategies can be put into practice to control them.
2.7.9 Given the inevitability of mismatches in the interfaces of the SHEL model in aviation operations, the scope for operational errors in aviation is enormous. Understanding how these mismatches can affect the average human at work is
fundamental to safety management. Only then can effective measures be implemented to control the effects of operational
errors on safety.
2.7.10 It is a common misperception to establish a linear relationship between operational errors and both the
immediacy and magnitude of their consequences. This misperception is discussed in 2.6.10 and 2.6.11 in terms of
operational errors and the magnitude of their consequences. The discussion argues that there is no symmetry between
operational errors and the magnitude of their potential consequences. It further argues that the magnitude of the
consequences of operational errors is a function of the operational context in which errors take place, rather than a
consequence of the errors themselves. The discussion continues below in terms of operational errors and the immediacy of their consequences.
2.7.11 It is a statistical fact that in aviation millions of operational errors are made on a daily basis before a major
safety breakdown occurs (Figure 2-10). Minor yearly fluctuations aside, industry statistics consistently indicate an accident rate of less than one fatal accident per million departures over the last decade. To put it in different terms, in
commercial airline operations worldwide, once every million production cycles an operational error is committed that
develops damaging potential strong enough to penetrate system defences and generate a major safety breakdown.
Nevertheless, mismatches in the interfaces of the SHEL model generate tens of thousands of operational errors on a
daily basis during the course of normal aviation operations. These operational errors, however, are trapped by the built-in defences of the aviation system, and their damaging potential is mitigated, thus preventing negative consequences. In
other words, control of operational errors takes place on a daily basis through the effective performance of the aviation
system defences.
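To make the orders of magnitude in 2.7.11 concrete, the following back-of-the-envelope sketch contrasts the quoted accident rate with the daily error count. The assumed number of annual departures is purely illustrative and is not a figure from this manual.

```python
# Back-of-the-envelope arithmetic for 2.7.11. The departure count is an
# assumed, illustrative figure; the rate and error count come from the text.
annual_departures = 30_000_000        # assumption for illustration only
fatal_accident_rate = 1 / 1_000_000   # "less than one fatal accident per million departures"
daily_errors = 10_000                 # lower bound of "tens of thousands" per day

fatal_accidents_per_year = annual_departures * fatal_accident_rate
errors_per_year = daily_errors * 365
print(f"fatal accidents per year (upper bound): {fatal_accidents_per_year:.0f}")
print(f"operational errors per year (lower bound): {errors_per_year:,}")
print(f"errors trapped per fatal accident, order of magnitude: "
      f"{errors_per_year / fatal_accidents_per_year:,.0f}")
```

Even with these rough assumptions, the ratio runs to five or six orders of magnitude, which is the asymmetry the paragraph describes.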
2.7.12 A simple operational scenario is presented to explain the asymmetry between operational errors and the
immediacy of their consequences (Figure 2-11A). Following engine start-up, a flight crew omits to select the flaps to the
appropriate take-off setting during the after-engines-start scan flow, as indicated in the standard operating procedures.
An operational error has therefore been made, but there are no immediate consequences. The operational error has
penetrated the first layer of defence (SOPs, flight crew scan flow sequence following engine start), but its damaging
potential is still dormant. There are no immediate consequences; the operational error just remains in the system, in
latency.
2.7.13 The flight crew performs the after-engines-start checklist but does not detect the incorrect flap setting, and the aircraft initiates taxiing for departure. A second opportunity is thus missed to recover from the consequences of the
operational error, which continues to remain in the system, still harmless. Nevertheless, the system is now in a state of
deviation or undesired state (i.e. aircraft taxiing for departure with an incorrect flap setting). The flight crew performs the
taxiing checklist and the before take-off checklist. On both occasions, the incorrect flap setting is missed. Further
opportunities to recover from the consequences of the operational error are missed. The operational error remains inconsequential, but the deviation, or undesired state of the system, is magnified.
2.7.14 The flight crew starts the take-off roll, and the take-off configuration warning sounds. The flight crew does
not identify the reason for the warning and continues the take-off roll. The operational error still remains inconsequential,
but the system’s undesired state has now progressed to a state of amplification. The aircraft lifts off in an incorrect flap
configuration. The system has now progressed to a state of degradation, but the undesired state can still conceivably be
recovered by the flight crew. The aircraft cannot sustain flight because of the incorrect flap setting and crashes. It is only
at that point, after breaching a considerable number of built-in system defences, that the operational error develops its
full damaging potential and becomes consequential. The system experiences a catastrophic breakdown.
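The trajectory described in 2.7.12 through 2.7.14 can be abstracted as a simple state machine in which each defence that fails to trap the error advances the system's undesired state. The sketch below is illustrative only: the state names follow the text, while the data structure and triggering logic are assumptions made for the example.

```python
# Illustrative state machine for the error trajectory in 2.7.12-2.7.14.
# State names follow the text; the structure is an assumption for the example.
from enum import Enum, auto

class State(Enum):
    LATENT = auto()         # error committed; no immediate consequences
    DEVIATION = auto()      # taxiing with an incorrect flap setting
    AMPLIFICATION = auto()  # take-off roll continued despite the warning
    DEGRADATION = auto()    # airborne in an incorrect configuration
    BREAKDOWN = auto()      # catastrophic outcome

# Defence layers in order, each flagged with whether it trapped the error.
defences = [
    ("after-start checklist", False),
    ("taxiing and before-take-off checklists", False),
    ("take-off configuration warning", False),
    ("in-flight recovery by the crew", False),
]

def run_scenario(defences):
    """Advance one undesired state for each defence that fails to trap the error."""
    progression = [State.DEVIATION, State.AMPLIFICATION,
                   State.DEGRADATION, State.BREAKDOWN]
    trajectory = [State.LATENT]
    for (name, trapped), next_state in zip(defences, progression):
        if trapped:
            return trajectory, f"error trapped by the {name}: normal flight"
        trajectory.append(next_state)
    return trajectory, "all defences breached"

trajectory, outcome = run_scenario(defences)
print(" -> ".join(s.name for s in trajectory), "|", outcome)
```

Setting any trapped flag to True reproduces the recovery scenario of 2.7.18, in which a single effective defence restores the system to normal status.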
2.7.15 Notice the relatively considerable time span between the commission of the operational error by the flight
crew and the materialization of its unrecoverable damaging potential. Notice also the number of opportunities to recover
from the consequences of the operational error through defences built into the system. This time span is the time that a
system affords to control the consequences of operational errors, and it is commensurate with the depth and efficiency of
system defences. This is the time span throughout which the management of safety operates with considerable potential for
success.
2.7.16 The more built-in defences and layers of containment a system includes, and the more efficiently they perform, the greater the possibility of controlling the consequences of operational errors. The reverse is equally true.
2.7.17 From the point of view of this discussion, one conclusion is apparent: the scenario discussed in 2.7.12
through 2.7.14 is — unavoidably — what most accident investigations would capture: unmanaged operational errors that
lead to catastrophic system breakdowns. This is valuable information about human and systemic failures; information
that portrays what failed, what did not work, what defences did not perform as intended. While valuable as a baseline,
this information is not enough to fully understand safety breakdowns and should be complemented by information from
alternative sources.
2.7.18 Consider a modified version of the scenario depicted in 2.7.12 through 2.7.14 (Figure 2-11B). Notice that
there are at least four obvious instances where defences could have been triggered to contain the damaging potential of
the initial operational error (omission to select take-off flaps in the after-engines-start flight crew scan flow):
a) the after start checklist;
b) the taxiing checklist;
c) the before-take-off checklist; and
d) the take-off configuration warning.
(Figure 2-11 panel labels: flaps omitted; checklist works; effective warning; error, deviation, amplification, normal flight)
2.7.19 There are other instances, not as obvious but nonetheless possible, where defences could have been
triggered: warnings by ramp personnel, warnings by flight crews in similar aircraft, warnings by ATC personnel, etc.
Effective performance of the defences in any of these instances could have controlled the consequences of the initial
operational error and restored the system to normal status. The damaging potential of the operational error could have been eliminated at any of these instances, thus making the operational error, for practical purposes, disappear.
2.7.20 The argument advanced here is that scenarios where operational errors induce catastrophic breakdowns are
rare, while scenarios where operational errors induce system undesired states (deviation/degradation) are frequent. These
scenarios capture information on what initially did not work, but mostly about what thereafter worked, including defences
that performed as designed. This is the type of information that the sources of safety information, alternative and
complementary to the investigation of accidents, capture. The information from an accident investigation would certainly
identify the four instances in which defences should have been triggered, but it can in all likelihood only describe why they
were not.
2.7.21 The additional sources of information under discussion would identify the instances in which defences should
have been triggered and describe why and how they were. These sources characterize successes, and, thus, integrating
the information from accidents with the information from these alternative sources provides for a more complete picture
about specific safety concerns. Furthermore, because scenarios such as the one described above are frequent, these alternative sources of safety information, if deployed, can provide a considerable volume of constant information to complement the more sporadic information provided by accidents, thus allowing for a fuller understanding of the potential for safety breakdowns. The conclusion that can be drawn from this second scenario is that safety resiliency is not so much a question of error-free operational performance as a question of effective operational error management.
Three strategies to control operational errors
2.7.22 The three basic strategies to control operational errors are based upon the three basic defences of the
aviation system: technology, training and regulations (including procedures).
2.7.23 Reduction strategies intervene directly at the source of the operational error by reducing or eliminating
the factors contributing to the operational error. Examples of reduction strategies include improving the access to aircraft
components for maintenance, improving the lighting in which the task is to be performed, and reducing environmental
distractions, i.e.:
a) human-centred design;
b) ergonomic factors; and
c) training.
2.7.24 Capturing strategies assume the operational error has already been made. The intent is to “capture” the
operational error before any adverse consequences of the operational error are felt. Capturing strategies are different
from reduction strategies in that they do not directly serve to eliminate the error, i.e.:
a) checklists;
b) task cards; and
c) flight strips.
2.7.25 Tolerance strategies refer to the ability of a system to accept an operational error without serious consequences (a schematic summary of the three strategies follows the list below). An example of a measure to increase system tolerance to operational errors is the incorporation of
multiple hydraulic or electrical systems on an aircraft to provide redundancy, or a structural inspection programme that
provides multiple opportunities to detect a fatigue crack before it reaches critical length, i.e.:
a) system redundancies; and
b) structural inspections.
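The following minimal sketch restates the three strategies and the examples from 2.7.23 through 2.7.25 as a simple taxonomy. The mapping structure is an assumption introduced for illustration; the list of examples is not exhaustive.

```python
# Schematic summary of the three error-control strategies (2.7.23-2.7.25).
# The dictionary structure is an illustrative assumption; examples are from the text.
strategies = {
    "reduction": ("intervene directly at the source of the error",
                  ["human-centred design", "ergonomic factors", "training"]),
    "capturing": ("trap the error before its consequences are felt",
                  ["checklists", "task cards", "flight strips"]),
    "tolerance": ("absorb the error without serious consequences",
                  ["system redundancies", "structural inspections"]),
}

for name, (intent, examples) in strategies.items():
    print(f"{name}: {intent} (e.g. {', '.join(examples)})")
```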
2.7.26 Operational error management must not be limited to front-line personnel. The performance of front-line personnel is, as depicted by the SHEL model, influenced by organizational, regulatory and environmental factors. For example, organizational processes such as inadequate communication, ambiguous procedures, unreasonable scheduling, insufficient resources and unrealistic budgeting constitute the breeding grounds for operational errors. As already discussed, these are all processes over which an organization must have a reasonable degree of direct control.
Errors versus violations
2.7.27 Thus far, the discussion in this section has focused on operational errors, which have been characterized
as a normal component of any system where people and technology interact to achieve system production goals. The
discussion will now focus on violations, which are quite different from operational errors. Both can lead to failure of the
system and can result in high-consequence situations. A clear differentiation between, and understanding of, operational
errors and violations are essential for the management of safety.
2.7.28 The fundamental difference between operational errors and violations lies in intent. While an error is
unintentional, a violation is a deliberate act. People committing operational errors are trying to do the right thing, but for
the many reasons discussed in previous paragraphs on operational errors, they fail to achieve their expectations. People
committing violations, on the other hand, know that they are engaging in behaviour that involves a deviation from established procedures, protocols, norms or practices, yet they persist in that behaviour.
2.7.29 For example, a controller allows an aircraft to descend through the level of a cruising aircraft when the DME
distance between them is 18 NM, and this occurs in circumstances where the correct separation minimum is 20 NM. If the
controller miscalculated the difference in the DME distances advised by the pilots, this would be an operational error. If the
controller calculated the distance correctly, and allowed the descending aircraft to continue through the level of the cruising
aircraft, knowing that the required separation minimum did not exist, this would be a violation.
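Because the distinction turns on intent rather than outcome, the same 18 NM infringement is classified differently depending on what the controller knew. The sketch below is illustrative only; the function name and its parameters are assumptions, not an operational tool.

```python
# Illustrative sketch of the error/violation distinction in 2.7.29.
# The function name and parameters are assumptions for this example.
SEPARATION_MINIMUM_NM = 20.0

def classify(dme_distance_nm: float, minimum_known_infringed: bool) -> str:
    """Classify a separation infringement by intent, per 2.7.28."""
    if dme_distance_nm >= SEPARATION_MINIMUM_NM:
        return "no infringement"
    # Identical physical outcome in both branches; only the intent differs.
    return "violation" if minimum_known_infringed else "operational error"

print(classify(18.0, minimum_known_infringed=False))  # operational error (miscalculation)
print(classify(18.0, minimum_known_infringed=True))   # violation (deliberate)
```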
2.7.30 In aviation, most violations are the result of deficient or unrealistic procedures where people have developed
workarounds to accomplish the task. Most stem from a genuine desire to do a good job. Seldom are they acts of
negligence. There are two general types of violations: situational violations and routine violations.
2.7.31 Situational violations occur due to the particular factors that exist at the time, such as time pressure or
high workload. In spite of knowing that a violation is being committed, goal orientation and mission achievement lead people to deviate from norms, in the belief that the deviation will not have adverse consequences.
2.7.32 Routine violations are violations which have become “the normal way of doing business” within a work
group. They occur when the work group has difficulty following established procedures in order to get the job done, because of practicality/workability issues, deficiencies in human-technology interface design and so forth, and informally devises and adopts “better” procedures, which eventually become routine. This is the notion of normalization of deviance
discussed in 2.5.4. Routine violations are seldom considered as such by a work group, because their objective is to get
the job done. They are considered as “optimizing” devices, since they aim at saving time and effort by simplifying a task
(even if it involves cutting corners).
2.7.33 A third type of violation, which is often overlooked, is organization-induced violations, which can be
viewed as an extension of routine violations. The full potential of the safety message that violations can convey can be
understood only when considered against the demands imposed by the organization regarding the delivery of the
services for which the organization was created. Figure 2-12 depicts the relationship between the two basic
considerations an organization must weigh and balance in relation to the delivery of its services and when defining its
organizational processes: system output and related safety risks.
2.7.34 In any organization engaged in the delivery of services, system output and safety risks are intertwined. As
demands for system output (i.e. delivery of services) increase, the safety risks associated with the delivery of services
also increase, because of the increase in exposure. Therefore, as Figure 2-12 illustrates, minimum system output
correlates with the lowest safety risk, while maximum system output correlates with the highest safety risk. Continuous
operation exposed to the highest safety risks is undesirable, not only from a safety standpoint but also from a financial
standpoint. Thus, organizations weigh desirable output against tolerable safety risk and define a system output that is less than the maximum possible but correlates with a tolerable level of safety risk. In so doing, the organization defines its production objectives as a function of balancing acceptable output against acceptable safety risk.
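The balance described in 2.7.34 can be sketched as a simple constrained choice: select the largest output whose associated risk stays within the tolerable level. The risk function and all numeric values below are assumptions made for illustration; the manual states only that risk grows with output.

```python
# Illustrative sketch of balancing system output against safety risk (2.7.34).
# The risk function and the numbers are assumptions, not figures from the manual.
def safety_risk(output: float) -> float:
    """Assumed monotone relationship: more output means more exposure, hence more risk."""
    return 0.002 * output ** 1.5

def production_objective(max_output: int, tolerable_risk: float) -> int:
    """Largest output not exceeding max_output whose risk stays tolerable."""
    for output in range(max_output, -1, -1):
        if safety_risk(output) <= tolerable_risk:
            return output
    return 0

target = production_objective(max_output=1000, tolerable_risk=50.0)
print(f"production objective: {target} units (risk {safety_risk(target):.1f} <= 50.0)")
```

The chosen objective deliberately sits below the maximum possible output, which is the point Figure 2-12 makes graphically.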
2.7.35 One fundamental decision related to the process of defining production objectives (agreed on the basis of
a balance between system output and safety risks) is the establishment of the defences that the organization needs to
develop in order to protect itself from the safety risks it will generate while producing. As already discussed, the three
basic defences of the aviation system are technology, training and regulations (including procedures). Therefore, when
defining its production objectives, the organization also needs to define the tools (technology) necessary to safely and effectively achieve service delivery; how to foster the behaviour the workforce must exhibit to safely and efficiently use the tools (training); and the set of norms and procedures that dictate workforce performance (regulations).
2.7.36 Thus, system output, level of safety risk and defences converge to the point that defines the production
objectives of the organization. They also depict the boundaries of what may be called the “safety space of the
organization”. The safety space represents a protected zone, the zone within which the defences that the organization
has erected guarantee maximum resilience to the safety risks the organization will face while delivering the system
output in terms of production objectives.
2.7.37 The reason for the maximum resilience afforded by that safety space is that the defences erected by the
organization are commensurate with the planned system output, which in turn is commensurate with the tolerable safety
risk. In other words, the resources the organization allocates to protection are appropriate to, and commensurate with, the activities related to the delivery of services. This does not mean that the organization cannot experience an accident,
since accidents are random events resulting from the concatenation of unforeseeable circumstances. It means that the
organization has arrangements for the management of safety that guarantee an acceptable level of control of safety
risks during the delivery of services, under foreseeable circumstances. Simply put, the organization has done the best it
possibly can, safety-wise.
2.7.38 Given the dynamic nature of aviation, aviation organizations may occasionally face transient, short-term
demands for increased output (i.e. increased delivery of services) for brief periods of time, for example, seasonal
variations in seat demand, specific circumstances such as a worldwide sporting event, and so forth. In order to keep the safety space intact, the organization should review, rearrange or modify its existing allocation of resources and strengthen existing defences to counteract the increased output and the ensuing increased level of safety risk.
2.7.39 Aviation history, sadly, suggests otherwise. Too often, as the aftermath of safety breakdowns shows, aviation organizations try to cope with short periods of increased system output by “stretching” defences: resorting to overtime
instead of hiring additional personnel, thus leading to increased workload and fatigue; using technology in “more efficient”
ways instead of incorporating additional technology; “optimizing” procedures and resources without revising standard
operating procedures and norms, and so forth.
2.7.40 What this stretching of defences effectively does is place the organization outside the safety space, first into the violation space and, ultimately, into the exceptional violation space. In other words, in order to deliver the
increased output with the same resources, operational personnel must deviate from established processes by resorting
to short cuts or workarounds sanctioned by the organization. Operational personnel do not elect to engage in such short
cuts or workarounds, the organization does. The colloquial expression “giving a leg up to the company” eloquently
describes the situation in which people are forced to engage in organization-sanctioned deviations to deliver a system
output incommensurate with the resources allocated to such an end.
2.7.41 Hard evidence that the organization has drifted into the violation space is generally provided by incidents.
A learning organization will then reassess its allocation of resources to expand its safety space in order to maintain the
harmony between system output, tolerable safety risk and defences or, if unable to expand its safety space, it will retract
into the established safety space by reducing the system output. Some organizations will ignore the warnings provided
by incidents, persist in their course of action, and thus inevitably drift into the exceptional violation space. An accident is
then a likely outcome.
2.8 ORGANIZATIONAL CULTURE
2.8.1 Culture can be described in the simplest terms as a “collective programming of the mind”. One of the most
graphic descriptions of culture portrays it as the “software of the mind”. Culture influences the values, beliefs and behaviour
that we share with the other members of our various social groups. Culture binds us together as members of groups and
provides clues and cues as to how to behave in both normal and unusual situations. Culture sets the rules of the game, or
the framework for all our interpersonal interactions. It is the sum total of the way people conduct their affairs in a particular
social milieu and provides a context in which things happen. In terms of the management of safety, understanding culture is
as important as understanding context, since culture is an important determinant of human performance.
2.8.2 When studying culture and, in particular, cross-cultural issues as they may affect aviation safety, it is a common pitfall to unwittingly engage in judgement and portray one particular culture as perhaps “better” or “more suited” than another, or to propose one particular culture as “bad” or “unsuitable” for specific safety proposals. This is
inappropriate and fruitless, because the study of cross-cultural issues is — in terms of safety — about differences, not
judgement. Cultures are indeed different, and each and every culture has significant strengths as well as identifiable
weaknesses. The purpose of serious cross-cultural endeavours, when applied to the management of safety, is to build
upon combined cultural strengths, as they relate to safety practices, while minimizing the downside of combined cultural
weaknesses.
2.8.3 Organizations, being groups of people, are not immune to cultural considerations. Organizational
performance is subject to cultural influences at every level. The following three levels of culture (Figure 2-13) have
relevance to safety management initiatives, since the three levels are determinants of organizational performance:
a) National culture differentiates the national characteristics and value systems of particular nations.
People of different nationalities differ, for example, in their response to authority, how they deal with
uncertainty and ambiguity, and how they express their individuality. People are not all attuned to the
collective needs of the group (team or organization) in the same way. In collectivist cultures, for
example, there is acceptance of unequal status and deference to leaders. This may affect the
possibility of questioning decisions or actions by elders — an important consideration in teamwork for
example. Work assignments that mix national cultures may thus affect team performance by creating
misunderstandings.
b) Professional culture differentiates the characteristics and value systems of particular professional
groups (the typical behaviour of pilots vis-à-vis that of air traffic controllers, or maintenance engineers).
Through personnel selection, education and training, on-the-job experience, peer pressure, etc.,
professionals (physicians, lawyers, pilots, controllers) tend to adopt the value system and develop
behaviour patterns consistent with their peers; they learn to “walk and talk” alike. They generally share
a pride in their profession and are motivated to excel in it. On the other hand, they may adopt value
systems that lead to developing a sense of personal invulnerability, a feeling that performance is not
affected by personal problems, or that errors will not be made in situations of high stress.
c) Organizational culture differentiates the characteristics and value systems of particular organizations
(the behaviour of members of one company versus that of another company, or government versus
private sector behaviour). Organizations provide a shell for national and professional cultures. For
example, in an airline, pilots may come from different professional backgrounds (military versus
civilian experience, bush or commuter operations versus development within a large carrier). They
may also come from different organizational cultures due to corporate mergers or layoffs.
2.8.4 The three cultural sets described above interact in operational contexts. These interactions determine for
example how:
a) juniors will relate to their seniors;
b) information is shared;
c) personnel will react under demanding operational conditions;
d) particular technologies will be embraced;
e) authority will be acted upon and how organizations react to operational errors (punish offenders or
learn from experience);
f) automation is used;
g) procedures (SOPs) are developed;
h) documentation is prepared, presented and received;
i) training is developed and delivered;
j) work assignments are made;
k) different work groups (pilots, ATC, maintenance personnel, cabin crew) will relate; and
l) management and unions will relate.
In other words, culture impacts on virtually every type of interpersonal and inter-organizational interaction. In addition,
cultural considerations creep into the design of equipment and tools. Technology may appear to be culture-neutral, but it
reflects the biases of the manufacturer (consider the English language bias implicit in much of the world’s computer
software). Yet, for all the above discussion, there is no right and no wrong culture; cultures are what they are, and each possesses a blend of strengths and weaknesses.
2.8.5 The greatest scope for creating and nourishing an effective, generative culture for the management of
safety is at the organizational level. Operational personnel in aviation are influenced in their day-to-day behaviour by the
value system of their organization. Does the organization recognize safety merit, promote individual initiative, discourage
or encourage safety risk tolerance, enforce strict SOP compliance, tolerate breaches of SOPs or promote open two-way
communications? Thus, the organization is a major determinant of the behaviour employees will engage in while
performing operational activities that support the delivery of services for which the organization is in business.
Organizational culture sets the boundaries for accepted operational performance in the workplace by establishing the
norms and limits. Thus, organizational culture provides a cornerstone for managerial and employee decision-making:
“This is how we do things here, and this is the way we talk about the way we do things here.”
2.8.6 Organizational culture then consists of shared beliefs, practices and attitudes. The tone for an effective,
generative organizational culture is set and nurtured by the words and actions of senior management. Organizational
culture is the atmosphere created by senior management which shapes workers’ attitudes towards, among others,
safety practices. Organizational culture is affected by such factors as:
a) policies and procedures;
b) supervisory practices;
c) safety planning and goals;
d) actions in response to unsafe behaviour;
e) employee training and motivation; and
f) employee involvement or “buy-in”.
2.8.7 The ultimate responsibility for the establishment and adherence to sound safety practices rests with the
directors and management of the organization — whether it is an airline, an aerodrome, an ATS or an AMO. The safety
ethos of an organization is established from the outset by the extent to which senior management accepts accountability
for safe operations and for dealing with emerging safety concerns.
2.8.8 How line management deals with day-to-day activities is fundamental to a generative organizational culture
for the management of safety. Are the correct lessons being drawn from actual line experiences and appropriate actions
taken? Is the affected staff constructively involved in this process, or do they feel they are the victims of management’s
unilateral action?
2.8.9 The relationship that line management has with the representatives of the regulatory authority is also
indicative of a generative organizational culture. This relationship should be marked by professional courtesy but with
enough distance so as not to compromise accountability. Openness will lead to better safety communications than will
strict enforcement of regulations. The former approach encourages constructive dialogue, while the latter encourages
concealing or ignoring the real safety problems.
2.8.10 Although compliance with safety regulations is fundamental to the development of sound safety practices,
contemporary thinking is that much more is required. Organizations that simply comply with the minimum standards set
by the regulations are not well situated to identify emerging safety problems.
2.8.11 An effective way to promote safe operations is to ensure that an operator has developed an operational
environment where all staff feel responsible for safety and consider its impact on everything they do. This way of
thinking must be so deep-rooted in their activities that it truly becomes ‘the way we do business around here’. All decisions,
whether by the board of directors, a driver on the ramp, or an engineer, need to consider the implications on safety.
2.8.12 Such an operational environment must be generated from the ‘top down’ and relies on a high degree of
trust and respect between workers and management. Workers must believe that they will be supported in any decisions
made in the interest of safety. They must also understand that intentional breaches of safety that jeopardize the
operation will not be tolerated.