Design for Values and Operator Roles in Sociotechnical Systems
Engineering increasingly concerns systems that are not only complex – in being multilayered – but also hybrid – in containing people as components. This chapter does not discuss the already well-researched safety concerns that this development generates for operators, users, and bystanders but instead addresses values relevant to the presence in such systems of operators as agents: values associated with what we can reasonably ask people to do and what we make people responsible for. From the perspective of design, systems containing people as components deviate in four ways from traditional systems consisting entirely of hardware components. (1) Such systems are not designed as a whole but gradually and modularly. (2) They are typically delivered incomplete because their operators have to be added by the client or owner during implementation; strictly speaking, only operator roles are designed. (3) Persons performing these roles become components of the system only partially; unlike hardware components, their behavior continues to be monitored and controlled from a perspective external to the system. (4) Operator roles are often explicitly conceived as partially external, in that an operator’s task is ultimately to ensure that the system continues to function properly come what may. These features lead to conflicts with the autonomy of operators as persons and with the well-foundedness of assigning particular responsibilities to them. The difficulties described here should make us rethink whether traditional engineering approaches to system design are adequate for such hybrid systems.
Keywords: Sociotechnical system · System design · Operator · Responsibility · Autonomy
That failing technology puts people’s lives at risk is true almost everywhere on earth. But technology can fail in many different ways, which differ in the consequences they have and in what we can do about them. The particular aspect at issue in this chapter is that certain complex, systemic forms of technology rely on human operators and that failures of these systems, which typically but not necessarily involve errors on the part of the operators, not only put these operators’ lives at risk, just as they put the lives of other people at risk, but also put their reputation at risk by making them at least causally responsible for the potentially disastrous consequences of failure, and thereby impose a burden on them for the rest of their lives or, if they do not survive, on their family and relatives. Let me give a few examples.
In July 2002, two aircraft collided in midair over Überlingen in southern Germany after the crew of one of them received conflicting instructions – both to descend and to climb – from two different sources and chose the wrong one to follow. In June 2009 the crew of an Air France Airbus failed to take control of the aircraft when the autopilot disengaged due to a loss of airspeed data, and steered the aircraft into a straight downward course lasting several minutes until it finally crashed into the Atlantic Ocean. In both cases the crews were not flying these aircraft for their own sake but were executing a task: flying several hundred passengers who had all paid for the service of being transported through the air and of whom none survived. The crews were operators in a complex system; they were part of the machinery that their employers were using to generate a profit by offering this transportation service, and they were destroyed together with other parts of that machinery. A failure to execute their task correctly puts such operators at risk just as much as it puts the customers whom they serve at risk. The control room operators whose handling of the control rods during a test led to the destruction of reactor no. 4 at Chernobyl in 1986 also paid with their lives, still failing to understand, while they perished in a Moscow hospital, how their actions “entirely in accord with the rules” could have had this disastrous outcome.
For operators the dire consequences can extend well beyond the immediate disastrous event. The Danish air traffic controller, employed by the Swiss company Skyguide, whose late intervention was responsible for the issuing of two conflicting instructions that led to the 2002 midair collision, was later murdered by a Russian citizen who had lost his wife and two children in the crash. A Yugoslav air traffic controller whose similar oversight led to an earlier midair collision near Zagreb in September 1976 was sentenced to 7 years in prison, to be released, after having served 2 years of his sentence, only in response to a worldwide petition by air traffic controllers. And an Italian air traffic controller who directed ground traffic at the time of a deadly collision between an airliner and a smaller aircraft during takeoff at Linate airport near Milan in October 2001 was afterwards sentenced to 8 years in prison, a conviction that was upheld on two appeals, even though the official investigation report had not singled out his actions as in any particular and blameworthy way a major cause of the accident.
That the human components of such systems in technology are a special and frequent source of error, with often disastrous consequences, is hardly an original observation and has been argued repeatedly (Perrow 1984; Whittingham 2004). Humans, moreover, fail in other ways than hardware components do. Humans are sensitive to the boredom generated by repetition. Rules may not be taken seriously if they interfere with operational procedures or if they too often “cry wolf.”1 Such issues are by now well recognized and reasonably well understood, and methods for dealing with them have been and are being developed, making up the field of human factors engineering. Human factors engineering is aimed, however, at the general improvement of the reliability and safety of engineering systems and thus at the protection from harm of operators, users, and bystanders alike. Much has already been achieved in this respect. Amalberti (2001) classifies the civilian air traffic system and the European railroad systems, from which the majority of examples in this chapter are drawn, as “almost totally safe transportation systems.” My aim in this chapter is not to contribute to this literature, nor is my focus a further increase in the safety and reliability of these systems, though of course the chapter may contribute to that, and I should certainly hope it does.2 My aim is rather to discuss how the inclusion of people in technical systems generates value issues for, or related to, specifically these people, and to suggest ways in which these issues can be identified and addressed, if probably not entirely resolved, from a design perspective.
In the following three sections, first the concept of a sociotechnical system is discussed; then an analysis is presented of the ways in which such systems, consisting partly of technical, engineered devices and partly of human operators, differ from traditional engineering devices and systems, especially from the point of view of engineering design; and finally some consequences for the status of operators as valuing and evaluated persons are discussed. With a few exceptions, empirical support is drawn from air traffic cases, but I hold the analysis and conclusions to apply beyond this particular field to all systems with humans as components.
Complex Technological Systems with Human Components
Complex people-containing entities which are conceived and implemented as instruments to serve some purpose will be referred to in this chapter as sociotechnical systems . Although this term was coined over half a century ago to indicate a more restricted concept, this is how the term is currently most often used.3 The “socio-” emphasizes not only the mere presence of people as components of the system but also the fact that in order to initiate and coordinate the actions of these people, instructions, rules and regulations, and similar social “mechanisms” play a role. The term is therefore more specific than other terms used in the same field, such as engineering systems (de Weck et al. 2011), where the emphasis is on their engineering complexity and not particularly on the inclusion of people.
The people that are, by design, components of such sociotechnical systems I will refer to as operators , whatever the character of the activities they are supposed to perform. This includes, in the realm of air transportation, pilots and other aircraft crew, air traffic controllers, check-in attendants, luggage handlers and other airport employees, air ticket sale personnel, and so forth. Since sociotechnical systems are conceived and implemented as instruments to serve some purpose, every sociotechnical system presupposes some intentional or quasi-intentional “system owner,” who uses the system as an instrument to achieve this purpose, and also an object for the transformation of which the system is used. Together the sociotechnical system, its user, and its object form an instrumental system of a particular type, a type ultimately determined by its owner–user, irrespective of what the sociotechnical system’s operators think they are participating in.4 The user of a complex sociotechnical system as an instrument is here referred to as the “system owner” of the instrumental system so created in the sense that this user, by using the instrument in a particular way, determines the kind of instrumental system that comes into being by the use. But this user need not own the instrument in any legal sense, though this may certainly be the case, even when the instrument is a nationwide sociotechnical system. What, precisely, is owned in such a case is, however, a difficult question; it excludes, in modern societies at least, all persons who nevertheless must count as components of the sociotechnical instrument.
Due to the user-defined “momentariness” of instrumental systems, any sociotechnical system can, just as much as any tangible object, be operative as instrument in a wide variety of instrumental systems – as basically acknowledged (but insufficiently emphasized) by systems engineering. An air transportation system can be an instrument in a profit-generating system for a private (typically incorporated) owner–user. And it can be an instrument in a flying system or transportation-through-air system for a private (typically individual) user. Note that the two instruments in these systems do not coincide. As for the profit-generating system, each customer to whom a flight is sold is a component of this system’s instrument; without paying customers, the system would be “running idle,” that is, would not generate a profit for its owner–user (who is, by being transformed into a condition of increased wealth, also the system’s object). The owner–user of the profit-generating system is in its turn a component of the flying system’s instrument; without the owner–user in place, there would be no live airline company to be used as an instrument; it would malfunction by not responding to the customer’s inserted “coin” or button pushing, let alone fly the customer to their destination. Although the two systems overlap completely, in that they are instantiated by the same complex entity existing in the world, the boundaries separating the various roles run differently, depending on who is bringing about which change in the world using what.
The preceding two (closely related) examples are of instruments in commercial service-providing systems. Similarly we can also have commercial product-delivering systems with sociotechnical systems as their instrument. Again we are dealing ultimately with encompassing profit-generating systems, in particular profit-generating-through-the-design-manufacture-and-sale-of-engineered-products systems. Within such a system, we can distinguish, as its instrument, a product-generating-and-delivering system. For example, if an aerospace company, say, Northrop, is invited to design a new jet fighter, then within Northrop a complex instrument will be organized for the design, development, manufacture, testing, delivery, and maintenance of this aircraft. These are the sorts of systems that the discipline of systems engineering has traditionally been concerned with: the jet fighter as an (entirely physical) system and the (partly social) system that has to be put in place in order for a system like a jet fighter to be brought into existence successfully.
The notion of a technical or engineering system, as used in systems engineering, generally refers to the complex wholes that figure as the instruments of my encompassing notion of an instrumental system. However, this could be either a system as delivered to a client (a jet fighter sold to a government), an ongoing service-providing system (a national air force), or the organization that delivers – designs and builds – such a jet fighter or sustains its operation. Sage and Armstrong, Jr. (2000) characterize the systems defined, developed, and deployed by systems engineers as being either products or services (p. 2) and distinguish service-oriented (e.g., an airport), product-oriented (e.g., an automobile assembly plant), and process-oriented (e.g., a refinery) systems. Each of the three examples is a system that has many people among its components. Buede (2009) is less clear. On the one hand, he conceives of systems in the traditional sense of things built by engineers, taking for granted, e.g., that they can be painted green. On the other hand, the wider notion of a system that brings such engineering products into being (and which definitely cannot be painted green) is presupposed. All Buede’s examples, however, presuppose product-delivering systems (e.g., the system delivering the F-22 jet fighter).
Additionally, there are noncommercial regulating and monitoring systems, conceived as large nation-spanning if not world-spanning infrastructural systems, e.g., the world civil air traffic regulation system, or the national or, say, the European electric-power-providing system. These are not finally instruments in some profit-generating system – at least they are not primarily conceived as such – but rather provided by states for the benefit of their citizens. They can, however, include subsystems which can be so characterized, for instance, any particular power plant participating in a national or supranational electric-power-providing system.
Sociotechnical systems are not designed, assembled, and tested from scratch and as a whole but designed and deployed modularly. As a result, they grow and develop almost organically, with the corresponding consequences, foremost among them a proneness to emergent behavior. Although emergent behavior also occurs in designed all-hardware devices, the possibilities of controlling for it are much greater there.
Even though sociotechnical systems are not designed, tested, and implemented as a whole, they are designed and tested to a considerable extent and are conceptualized and monitored from a design perspective. This is required by the inclusion of engineered devices as system components. The human components cannot be treated in the same way, however. The tasks to be performed by people – to monitor and, if necessary, achieve coordination between various technical components or between the manipulations of external people (“users,” “customers”) and internal technical components – are instead designed as slots to be filled once the system goes operational. Filling these slots with people is typically not the responsibility of the designer but the prerogative of the user. Sociotechnical systems, then, emerge from the design-and-manufacture stage incomplete. Their operators are not furnished with them by the product-delivering company, finely tuned to interact optimally with the hardware as any component of a hardware product is, but have to be added by the client, the prospective owner–user, as part of the system’s implementation.
The people who fill the slots and thereby become components of the system do not coincide with their role, as hardware components do, but perform their operator roles as persons and (in the current state of organization of the world) as citizens. In other words, they become components of the system under design/deployment only partially. Operators reflect on the performance of their role; their actions continue to be monitored and controlled from an external, personal perspective.
Operator roles are as a rule not exclusively local but also global: though operators are placed at certain nodes defined by output-to-required-input profiles, their implicit background task is typically to ensure that the system continues to function properly. The capacity and disposition of operators to reflect on task performance and system performance are therefore not only acknowledged but even presupposed. Operators, or rather the people performing operator roles, are supposed to perform a global monitoring role as well, which requires a perspective on the system from the “outside.”
Together, these differences have major consequences both for the sort of values that should be taken into account in any form of design decision making concerning such systems – from conceiving of them all the way to implementing and maintaining them – and for how these values should be taken into account. In the next section I discuss them in more detail one by one and illustrate them using examples of system failures.
Four Distinguishing Characteristics of Sociotechnical Systems
The first difference is that sociotechnical systems are hardly ever designed and implemented as a whole and from scratch. They are too big for this and too encompassing to allow for the necessary isolation. Rather they are extended through design: they grow almost organically by having designed modules added to them or by having parts of the system replaced by designed modules. As a result, there is no single controlling authority monitoring the design of such systems, checking the progress toward meeting the design requirements, assessing the compatibility of the system components, and ultimately certifying the outcome. As a further consequence, coordination difficulties between system components and modules – the entities that are under unified design control – may crop up and must be expected. These difficulties are not necessarily a consequence of the presence of operators in the system. An example that involved only hardware devices is the crash of the last Concorde on 25 July 2000 (see [Concorde], pp. 94–117). During takeoff, one of the tires of the landing gear blew out and a fragment of the tire struck one of the fuel tanks in the wings. The resulting shock wave in the full tank caused a hole of about 30 by 30 cm in the tank wall. The fuel escaping through the hole caught fire, and the scale and intensity of this fire caused so much damage to the wing that the crew lost control of the aircraft. Tire bursts had been a problem ever since the Concorde started flying, and damage to the wings as a result of these bursts had been a cause for concern.5 Experiments were performed in 1980 to establish the amount of damage possible as a result of tire bursts, and it was concluded that no modifications of the wing tanks were called for. In 1982, however, it was independently decided that the wheels and tires of the landing gear needed strengthening, and as a result the Concorde’s landing gear came to be equipped with thicker and heavier tires.
This made the results of the 1980 experiments on likely damage to the wing fuel tanks due to tire bursts irrelevant, but the experiments were not repeated for the new, heavier tires. Accordingly an opportunity was lost to discover that more substantial damage caused by fragments of a burst tire had become a possibility, even though tire bursts themselves were now less frequent. This is a typical example of a device being modified after delivery by the company responsible for its design, when the arrangements for monitoring whether the design is up to standard are no longer in place.
Of course, this is just one source of failure, and the continued operation of the monitoring arrangement does not guarantee that all coordination issues between components are properly taken care of (as shown, e.g., by the notorious failure of the first Ariane 5 rocket in 1996, due to a failure to update a particular software module inherited from a previous version of the rocket). The experimental study of the consequences of tire bursts undertaken for the Concorde shows an awareness of the importance of monitoring the coordination and interaction between all system components. However, once some of these components are people, that awareness often seems not to include them. A case in point is the midair collision mentioned in the opening section between a Tupolev 154 flown by Bashkirian Airlines (a local Russian airline) and a Boeing 757 cargo aircraft flown by DHL, which occurred over Überlingen in the south of Germany on 1 July 2002 (see [Überlingen], esp. pp. 69–71, 98–103). Normally air traffic control should notice when two aircraft are on a collision course and give instructions to one or both crews to change course. In this case, however, the Swiss air traffic controller was distracted (by having to attend to two work stations at the same time, a situation aggravated by hardware problems and ongoing maintenance work) and failed to detect the potential conflict. For such cases an airborne collision avoidance system (ACAS), developed in response to previous midair collisions and obligatory at the time of the accident, was installed in both aircraft. ACAS operates by having aircraft exchange signals and by generating automatic spoken instructions (or resolution advisories, abbreviated as RAs, in ACAS lingo) to the crews of the aircraft in a coordinated way, one crew receiving an instruction to descend and the other crew an instruction to climb.
Due to the failure of the air traffic controller to resolve the conflict in time, the ACAS of both aircraft had been activated, first warning the crews of an approaching conflict and then generating an instruction to descend for the crew of the Boeing and an instruction to climb for the crew of the Tupolev. Just as this was happening, however, the air traffic controller noticed his oversight and ordered the Tupolev to descend, ignoring the Boeing. The Russian crew received the contradictory instructions within seconds of each other, the ACAS-generated instruction to climb coming in before the air traffic controller had finished telling the crew to descend. After some confusion, the Russian pilot decided to follow the air traffic controller’s instruction. Since both aircraft now descended, they remained on a collision course and eventually collided, resulting in the loss of 71 lives.
In this case, the necessity of coordinating the interaction between two system components – air traffic controller and ACAS – seems not to have been on the mind of anyone engaged with the system from a design perspective. ACAS was developed in response to earlier midair collisions that were caused by air traffic controllers failing to identify and resolve conflicts between aircraft. It seems to have been designed and added from the perspective that where a human operator fails to “function correctly,”6 an engineered device should be available as a remedy. It was apparently not considered, at least not by the people responsible for adding ACAS to the air traffic control system, that as long as the human operator remains in place, the operator’s possible interference with the system through interactions with other system components should be dealt with.7
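The coordination failure just described can be made concrete with a small sketch. The following Python fragment is purely illustrative – the actual ACAS/TCAS II logic is far more complex, and the function names and the altitude-based selection rule used here are assumptions made for illustration – but it shows how coordinated advisories are complementary by design, and how an uncoordinated instruction from a second source can defeat the intended separation:

```python
# Illustrative sketch only: not the actual ACAS/TCAS II algorithm.
# Function names and the selection rule are assumptions for illustration.

def coordinated_advisories(alt_a: float, alt_b: float) -> dict:
    """Assign complementary resolution advisories: the higher aircraft
    is told to climb, the lower one to descend."""
    if alt_a >= alt_b:
        return {"A": "CLIMB", "B": "DESCEND"}
    return {"A": "DESCEND", "B": "CLIMB"}

def resulting_maneuvers(ras: dict, controller_override: dict) -> dict:
    """Each crew follows the controller's instruction if one is given,
    otherwise the ACAS advisory (modeling the Überlingen situation)."""
    return {ac: controller_override.get(ac, ra) for ac, ra in ras.items()}

# Coordinated case: the advisories guarantee vertical separation.
ras = coordinated_advisories(alt_a=36000, alt_b=35900)
assert set(ras.values()) == {"CLIMB", "DESCEND"}

# Überlingen-like case: the controller also tells aircraft A to descend;
# A follows the controller, and both aircraft end up descending.
maneuvers = resulting_maneuvers(ras, controller_override={"A": "DESCEND"})
assert maneuvers == {"A": "DESCEND", "B": "DESCEND"}  # separation lost
```

In the sketch, the design guarantee holds only as long as the advisory is the sole instruction a crew receives; the moment a second, uncoordinated source can override it, the complementarity the designers relied on disappears.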
Being unfinished, modular, and extendible – with the attendant limits on the extent to which design can be controlled from a single perspective – is not an exclusive feature of sociotechnical systems, as the Concorde case shows. Any technology that is important and expensive enough to merit continued monitoring with a view to redesign may have this character and be prone to the associated vulnerability. Neither is it an inevitable feature of sociotechnical systems. We can imagine a sociotechnical system being designed and implemented from scratch as a whole, for instance in a new country that, say, has emerged from a war with the old infrastructure completely destroyed and now provides itself with a power-providing system from scratch. Even then, however, due to the other major differences at issue here, sociotechnical systems are especially vulnerable to modular development because, in contrast to device systems like the Concorde, partial redesigns are possible at no immediate cost, as will be discussed below.
The second major difference is that the human components of sociotechnical systems are not contained in them in the same way as hardware components are, manufactured to specifications, installed, tested, and fine-tuned, so as to function properly at delivery. Instead such systems are designed with “slots” to be filled by people during the deployment of the system. The system becomes operational only once all slots are filled. Actual coordination between all components will therefore become possible only in the deployment phase and will, as a consequence, hardly ever be achieved definitively.
The design of “slots” or “roles” for sociotechnical systems to some extent follows the design of all complex systems: the modularity of design, given the complexity of most devices and the fact that a wide spectrum of scientific disciplines is involved in designing particular components, brings with it that any designed system has the character of a number of slots linked by interfaces; any particular component can be characterized by a particular geometry and a particular input–output behavior such that it fits into the corresponding slot and, once there, will contribute through its behavior to the operation of the entire system. Similarly, any slot to be filled by a human operator will be bounded by interfaces to which a human person can connect by way of the senses and the body parts under voluntary muscular control – typically the hands and feet. The required input–output behavior will be specified by a list of rules or instructions that, given the interfaces and the set of human capabilities presumed, serve to define the operator role in question.8
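The idea of an operator role as a designed slot can be sketched in a few lines. The following Python fragment is hypothetical – the class and field names are assumptions for illustration, not drawn from any actual system specification – but it shows how a role is characterized entirely by its interfaces and its defining rules, leaving open who will fill it:

```python
# Hypothetical sketch of an operator role specified as a "slot" in a
# system design. All names and fields are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class OperatorRole:
    name: str
    inputs: list   # interfaces the operator perceives (displays, alarms)
    outputs: list  # interfaces under the operator's control (radio, keys)
    rules: list    # instructions defining the required input-output behavior

role = OperatorRole(
    name="area air traffic controller",
    inputs=["radar display", "radio channel", "conflict alert"],
    outputs=["radio channel", "flight strip annotations"],
    rules=["on conflict alert, issue a clearance resolving the conflict"],
)

# Nothing in the specification refers to the occupant of the slot:
# the design fixes interfaces and rules, never the person.
assert "conflict alert" in role.inputs
```

The point of the sketch is what the specification leaves out: the role definition says nothing about the person who will occupy the slot, which is precisely the gap between designed role and performing person that the chapter goes on to discuss.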
However, if this were all, we could strive to close the gap between purely technical systems and sociotechnical systems. Although people cannot be manufactured to specifications, people can be “worked upon” to exhibit the required input–output characteristics as closely as possible, as a lawlike pattern, by training or by the more extreme form called conditioning. This approach to sociotechnical systems has been the ideal for one particular form of sociotechnology, the military, ever since the introduction of training and discipline as the “operational principle” in the Dutch army fighting the Spanish in the late sixteenth century by Prince Maurice of Nassau, whose extremely influential military innovations were conceived in close contact with engineers, including Simon Stevin.9 The “scientific management” movement of the early twentieth century, often referred to as Taylorism, likewise approaches the human operator as a component like any other, whose behavior can be adjusted to fine-tune coordination between components and optimize the operation of the entire system.
Note that this is not, and does not amount to, slavery; rather it goes much further. Slavery is based on, or presupposes, a rational decision on the part of the slave to perform his or her “duties” given the consequences – punishment or death – of refusal, and often enough also given the rewards that adequate performance will bring. In this decision the slave exerts a form of autonomy. Slavery relies, therefore, on the recognition that persons generally do not coincide with the functional roles they perform in any system they enter – be it a social or a sociotechnical one – and that the performance of a role, the decision to do it, and how to do it are made from the underlying “platform” of the intentional, broadly rational person, as will be discussed further below. The effect aimed at by rigorous conditioning, in contrast, is ultimately to depersonalize the person who is executing a task and to separate the execution of the role from any monitoring by an “underlying” person. The commands in military drilling may look like instructions, to be understood as having some content and to require interpretation to disclose that content, but ultimately and ideally they are supposed to function as signals to which a corresponding conditioned response is expected. For military roles, opting for conditioning is understandable: in circumstances of war, no intentional, broadly rational person would decide to execute the role that a soldier is supposed to perform, and training and conditioning are not the only ways in which the owner–users of sociotechnical “war machines” such as armies have tried to overcome this problem, although in the sixteenth century they were very innovative ways.10
Due to the separation in sociotechnical systems between the system, with its operator roles designed as slots to be filled, and the people filling these slots, the system cannot be securely tested prior to deployment. Even if to some extent this is also true for hardware-only systems – where batteries, lamps, valves, fuses, tires, and anything else subject to wear and tear have to be replaced regularly – such components are sufficiently constant, i.e., behave in a sufficiently lawlike way, for this not to produce problems.11 As a matter of fact, however, no amount of training and conditioning will prove sufficient to reach the degree of depersonalization required to make humans into sufficiently reliable, even if still not completely reliable, deliverers of lawlike behavior. And even if it were, nowhere near enough people could be found to volunteer to be converted into machine components in this way. And even if they could, legislation would block such voluntary execution in many cases, just as in most countries it blocks the voluntary sale of blood and of organs like kidneys; legislation often imposes on citizens the honoring of certain values – corporeal integrity, for one – against their autonomy.
A consequence of this feature of sociotechnical systems is that responsibility for and control of the exact operation and functionality of the system are shifted partly toward the owner–user of the sociotechnical system. That agent, by filling the operator slots with the persons who are going to perform the operator roles, decides how operators are trained and kept in shape and what the working conditions for each operator will look like, both generally and at any given moment, and even has the final say on what the instructions defining each role will be. What this amounts to is that sociotechnical systems generally lack precise boundaries.12
The Chernobyl disaster furnishes a good – though perhaps extreme – example of what this may look like in practice. A first investigation into the causes, undertaken by a Soviet Union committee, pointed to the operators on shift during the test run that ended so dramatically as the persons to blame for it, due to their massive violation of basic procedures – a violation so massive that the probability of its occurring could not have been foreseen. The procedures the committee judged to have been violated, however, were the procedures it assumed to be in force for handling the reactor, since these were the proper procedures on the basis of the committee’s expertise, its knowledge of the reactor, and what the people responsible for its design had said the procedures should be. Later, however, the initial report had to be retracted when further investigation revealed that the operators had not violated a single procedure as the procedures had in fact been laid down, in writing, by the global and local management of the responsible utility institution ([INSAG-7], pp. 1–2, 13–15).
A second example highlighting the importance of proper instruction and training as part of the functioning of a sociotechnical system is the loss of American Airlines Flight 587 in November 2001 ([AA587], pp. 133–156). The aircraft, an Airbus A300-600, crashed after repeated aggressive handling of the rudder, applied to stabilize the aircraft in conditions of turbulence, caused the entire rudder to separate. Working the rudder violently for this purpose was common practice among American Airlines pilots, and although other aircraft could cope with it, the Airbus A300, which had a very sensitive rudder-operating mechanism, could not. Airbus had warned American Airlines in 1997 not to use the rudder in this way, but pilot training courses were never adjusted. Accordingly, American Airlines was held liable for the accident.
As a consequence of the inclusion of operators, then, sociotechnical systems are, one could almost say, in a permanent state of repair. Operator roles are filled by new people all the time, and even while in the system, people’s abilities are not sufficiently constant for them to be left at the job unattended, so to speak. For this reason, no sociotechnical system can be expected to function properly without training for its personnel. Accordingly, not only should training procedures be included in role instructions, but modules dedicated to training should be included in sociotechnical systems by design. Major failures often reveal serious shortcomings in this respect. The investigation following the head-on collision of two passenger trains at Ladbroke Grove, London, in October 1999, resulting in 31 casualties, concluded that the training of drivers as well as that of signalers was defective. In particular, there was no monitoring of training procedures at higher levels of the system, allowing trainers to proceed independently and as they saw fit, without adequate input on what to train for and how (see [Ladbroke Grove], §5.25-5.48 & §6.27-6.42). The consequences of this aspect for system performance and system design are more far-reaching, however. Amalberti (2001) argues that it explains why the performance of even “almost totally safe systems” like civilian air transport or rail transport cannot be improved beyond a certain limit: the modules dedicated to training need occasional failures as input in order to know what to train for. Since the circumstances of system operation change all the time, as do the capabilities and incapabilities of the people who act as its operators, the experience of how to train and what to train for is not asymptotic, and perhaps not even cumulative.
Presupposition of Intentional Rule Following
If the two differences discussed until now were the only ones, sociotechnical systems could still be looked upon as approximating traditional engineering systems, where the human sciences are required only to deliver their best knowledge of effective methods of training and conditioning people but are not further needed for describing and understanding the behavior of the system under design. Human operators could be treated as deliverers of input–output patterns similar to hardware components – although their reliability would remain an issue, suggesting that such systems would perhaps have to be seen as remaining in the prototyping stage indefinitely. There is something of this attitude in classical approaches to systems engineering, sometimes referred to as “hard systems thinking.”13
However, and this is the third major form in which sociotechnical systems differ from traditional hardware systems, in the overwhelming majority of cases, people perform their roles as operators consciously, that is, they are conscious of the fact that they perform a role, instead of coinciding with that role, as they would when being conditioned into performing the actions required by the role. Accordingly, performance of an operator role is a two-level process: operators must understand what is expected of them, depending on the circumstances, and they must decide to carry it out. In a situation of conditioning, there is no room for distinguishing between these two levels.
An operator’s understanding of what is expected can be equated with the drawing up of an exhaustive list of instructions that can be seen as defining the role, although in practice skills and know-how acquired in education and training are also important, an aspect not further discussed here. To draw up these instructions and to implement a supportive training program is, then, as far as system design can get. The designers of sociotechnical systems generally have little or no control over the execution of these instructions. To secure adequate role performance, execution must be made worthwhile and rewarding, but whether it actually is depends on circumstances which lie to a large extent beyond the scope of system designers and are difficult to foresee. Operators reflect as persons on their role instructions, and they will judge the wisdom of doing what the role definition requires of them from their perspective as a person, who will generally perceive him- or herself to have clear interests. These interests may be judged to be harmed by some of the actions the person is expected to perform as an operator. There may also be vaguer goals and considerations whose achievement or satisfaction may be jeopardized by such actions. How difficult it is to generalize here is shown by the Japanese kamikaze pilots during the Second World War, but also by the behavior of the various operators present in the Chernobyl nuclear plant at the time of the explosion, which showed both extreme self-sacrifice and extreme carelessness.
But just as a person performing as an operator does not coincide with the operator role but always performs the role as a person, so too a person is not restricted to the particular role he or she performs within the sociotechnical system at issue. It is in the nature of roles that one person can play or perform several roles at the same time and accordingly be committed to act on several different sets of instructions or rules at the same time. In fact, it is the standard situation in modern societies that people in their actions perform several roles at the same time, although some may be more in the background while one particular role is up front. For one, every adult acting as an operator is also a citizen, that is, is held to act in accordance with the laws and regulations of the state under whose jurisdiction that person’s actions fall.14
This creates several further challenges for the design for values of sociotechnical systems. For a start, system design must allow operators to be “good citizens” by not requiring them to act in violation of the legislation that they are subject to.15 For an example of such a conflict, consider the regulation surrounding ACAS. The Überlingen midair collision was not the first incident of its kind. More than a year earlier, over Tokyo, air traffic control had likewise been late in noticing that two aircraft were on a collision course and intervened just as the ACAS systems on board both aircraft had generated their advisories to the crews (see [JA907/958], pp. 99–117). Here as well, this resulted in one of the two crews receiving conflicting instructions. The crew of a Boeing 747 had received an instruction from air traffic control to descend, to which it responded immediately, only to receive an ACAS instruction to climb 2 seconds later. In response to the Überlingen collision, ICAO emphasized in the first update of its system of regulations that an ACAS instruction, once received, must always be followed, even when an explicit counter-instruction from air traffic control is received. In the Tokyo case, however, the pilot judged following the ACAS advisory to climb an extremely dangerous maneuver, given that he had already started to descend and was making a turn at the same time.16 Acting according to the rule that an ACAS advisory must always be acted upon promptly therefore conflicted, in this case, with the rule that a pilot should operate an aircraft so as to secure the safety of all passengers.
To be sure, most regulations anticipate the possibility of such exceptional circumstances by adding an exception clause. In this particular case of ACAS, the ICAO document of flight procedures 8168 says: “In the event of an RA pilots shall […] respond immediately by following the RA as indicated, unless doing so would jeopardize the safety of the aeroplane” ([ICAO8168], p. III-3-3-1). Such an exception clause, however, is lacking in ICAO document 9863, dedicated entirely to ACAS. There, it is stated: “If an RA manoeuvre is inconsistent with the current ATC clearance, pilots shall follow the RA.” Only three very specific exceptions to following an ACAS RA are mentioned: “Stall warning, wind shear and Ground Proximity Warning System (GPWS) alerts take precedence over ACAS RAs.” ([ICAO9863].) Not only may regulation occasionally betray a lack of anticipation of potential conflicts; it may even blatantly provoke them. The current British Rules of the Air declare themselves to apply “(a) to all aircraft within the United Kingdom; […] (c) to all aircraft registered in the United Kingdom, wherever they may be” ([Rules of the Air 2007], p. 4). These regulations inevitably impose a conflict upon any crew flying a United Kingdom-registered aircraft in the airspace of another country, where other rules of the air apply, which may differ significantly from the British rules – and such differences exist with respect to such basic features as priority rules.17
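The precedence scheme that ICAO document 9863 lays down can be pictured as a simple ordered rule list. The following sketch is a hypothetical illustration in Python; the alert names are paraphrases of the quoted text, not identifiers from any avionics standard. Note that a flat precedence list of this kind contains no equivalent of document 8168’s “unless doing so would jeopardize the safety of the aeroplane” clause:

```python
# Alert sources in descending order of precedence, per the quoted
# rule: stall, wind-shear and GPWS alerts outrank an ACAS RA, which
# in turn outranks an ATC clearance. (Illustrative names only.)
PRECEDENCE = ["STALL_WARNING", "WIND_SHEAR", "GPWS", "ACAS_RA", "ATC_CLEARANCE"]

def instruction_to_follow(active_sources):
    """Return the highest-precedence source among those currently active."""
    for source in PRECEDENCE:
        if source in active_sources:
            return source
    return None

# With both an ACAS RA and a conflicting ATC clearance active,
# the RA wins under this rule:
print(instruction_to_follow({"ATC_CLEARANCE", "ACAS_RA"}))  # ACAS_RA
```

A crew following this table would climb on an RA even against an explicit ATC clearance; the table simply has no slot for the pilot’s own judgment that the prescribed maneuver is dangerous, which is exactly the gap the chapter points to.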
These issues point to two major difficulties for the “closure under rules” of sociotechnical systems, both aspects of a crucial feature of sociotechnical systems already referred to above: the absence of sharp boundaries. The first is that system designers not only have limited control over operator instructions, as already mentioned, but additionally lack control over the content and the stability of any external set of rules with which operator instructions should be consistent. Not only are the rules that operators are legally obliged to follow drawn up by institutions that operate independently of system designers,18 but these rules come into being through mechanisms that differ greatly from the practice of engineering. In contrast to traditional engineering systems, rule systems have no “developmental inertia”: rules can change radically overnight, and such rule changes seem to be costless as well – prima facie, at least, for there are long-term costs, in the form of efficiency losses and damage to health and property due to inconsistencies in the rules, whose effects emerge only in the course of time. This makes the design of sociotechnical systems a precarious affair, although here as well there is some continuity with the safety and health regulations that traditional hardware devices must satisfy: with respect to these, designers may likewise face regulations that change while they are designing for them. In such cases, however, whether or not a product satisfies the rules is typically a clear yes/no matter. And once acquired, a product may often still be used, even though producing it is no longer legal.
If we look only at the syntactic level, we may want the instructions defining an operator role to be appropriate, that is, to lead to the optimal continued functioning of the system, as well as clear and unambiguous. Difficulties with both are to be expected, due, again, to the peculiar position of the role-defining rules in the total system design. Hardware components can be replaced by alternatives only with difficulty: the replacement itself is time-consuming and expensive and may result in a breakdown, and it is also a specific interfering act with liability issues attached. Rules, on the other hand, can be changed at will: replacement is effortless and free and will hardly ever lead to an immediate breakdown of the system. Liability likewise works differently where regulation is concerned: in a delivery contract, it may often not be clear who is responsible for which rules, and even when designer responsibility explicitly extends to rules, the client may, and typically will, request some maneuvering space with respect to them.19
Certainly sovereign states will hold their power to legislate to have absolute priority.20 Sociotechnical systems, however, may be so vast, and their owner–users so powerful, that this priority claim can successfully be challenged. Additionally, such systems can be so vast that consistency with the regulation and legislation of many different countries is required, and there is no mechanism to secure the mutual consistency of these various national rule sets. Coordination mechanisms are usually not stronger than individual states; the absurdly imperial “Rules of the Air” of the United Kingdom quoted above are a case in point. The ICAO, in particular, being a UN agency, cannot impose regulation upon its member states. The most that its member states are obliged to do is to indicate whether or not they adopt ICAO regulation (see, e.g., [ICAOAnn2], p. 2–1).
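The absence of any consistency-securing mechanism can be made concrete in schematic form. The sketch below is purely illustrative (the rule contents and names are hypothetical, loosely modeled on the British Rules of the Air case): two rule sets, each internally coherent, both claim applicability to the same flight and yet disagree on a basic priority rule.

```python
# Two hypothetical national rule sets. Each declares its own scope;
# nothing guarantees the scopes do not overlap or that overlapping
# sets agree.
rules_home = {"scope": "home-registered aircraft, wherever they may be",
              "give_way": "right"}
rules_host = {"scope": "all aircraft in host national airspace",
              "give_way": "left"}

def applicable_rule_sets(registry, airspace):
    """Collect every rule set claiming applicability to this flight."""
    claimed = []
    if registry == "home":
        claimed.append(rules_home)   # claims extraterritorial scope
    if airspace == "host":
        claimed.append(rules_host)   # claims territorial scope
    return claimed

# A home-registered aircraft flying in host airspace:
in_force = applicable_rule_sets("home", "host")
conflict = len({r["give_way"] for r in in_force}) > 1
print(conflict)  # True: both sets apply, and they disagree
```

The point of the sketch is structural: each jurisdiction’s scope declaration is locally reasonable, and the contradiction only appears when a single flight falls under both, which no coordinating mechanism prevents.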
Combination of Internal and External Perspective
The second of the two difficulties mentioned above brings us to the fourth and final major distinguishing characteristic of sociotechnical systems. The exception clauses in operator instructions, far from being mere fillers for the holes left by the impossibility of drawing up precise instructions for every contingency, are actually a cornerstone of the design of sociotechnical systems containing human operators. Operator instructions are deliberately open-ended and leave room for interpretation because system users want to have their cake and eat it: operators should act in accordance with the rules drawn up by system design aimed to serve system performance best, but if necessary, when the system designers had it wrong for the particular circumstances or when unforeseen circumstances occur, they should adapt their handling of the system and thereby “save” continued system performance.
Designing solutions in which part of the control over the end result is withheld from the designing engineers is a significant deviation from how engineers are used to conceiving of and treating their “material.” System design, however, acknowledges this fact and builds on it. If operators – more precisely, the persons who perform operator roles – inevitably perform their roles against an ever-present background of being persons, which leads them to reflect on the performance of the role and to judge the required actions against their interests as persons and against all other rules that are perceived to have a say, then we may as well make use of that fact and make it work for the system.
It is surprising how often system designers consider operators and their roles at the very end of the design process. Hardware is purchased, software is specified, and only then are the roles of operators considered – and that role is often to fill the gaps between computer-based subsystems. Designers normally think about the operator’s role at a very high level: ensure everything works! […] Typically the limits of technology define operator activities. In many contemporary systems, the prevalent design philosophy dictates that everything that can be automated should be automated. Operators are responsible for performing those activities that remain and, of course, ensuring that all activities, human and computer, are carried out effectively in the face of changing or unanticipated system and environmental conditions.
To some extent, legislation forces this perspective upon system designers and system owners. Legislation makes, for instance, the driver of a car or the pilot of an aircraft responsible for the safety of passengers and bystanders, irrespective of whether the driver or pilot is operating it privately or in a role as operator in a sociotechnical system and irrespective of the size of that sociotechnical system. What is more, legislation seems to share the gap-filling outlook on operators’ responsibility to make up for the deficiencies of a system. A case in point is the prohibition in most countries of the operation of unmanned vehicles on public roads: a “driver” must be present who can take over in a case of emergency.
Together these four distinguishing features of sociotechnical systems set the stage for a consideration of value issues, from a design perspective, concerning the position of operators in such systems. How the system behaves – whether it behaves as designed and whether it can behave so – and, in relation to that, what its operators are supposed to contribute and how they are supposed to do so, affect their careers and their lives. The preceding discussion has argued that, in performing their roles, operators have to satisfy different and often conflicting requirements. From the top-down engineering perspective, people in operator roles are among the components of a complex system, serving to achieve and maintain coordination or acting as (part of) the system’s interface with users, and they must “fit in” and behave in a specific way for the system to work. From the bottom-up societal perspective, people are citizens who have a social and more generally moral responsibility for the results of their actions and for the things that they causally bring about, a responsibility which they typically have regardless of their position as operator in a system. And society may formulate further responsibilities for people in particular operator positions on top of the general ones. As our society continues to rely on sociotechnical systems containing human operators, and as engineering continues to be instrumental in sustaining this reliance, it is important to reflect morally on how the interests of these operators are cared for. This aspect tends to be overshadowed by the pursuit of caring for the interests of the “general public,” the people who are being served by these sociotechnical systems. In the next section, I discuss two aspects of this problem area: the retaining of a level of autonomy for operators and the responsible assignment of responsibility to operators.
Design for Operator Values
Although engineering has spent much effort on designing systems such that the effects of human error can be contained and system performance is robust under human error, human error is still considered to be the major cause of failures. The circumstances that will generate a human error (from the standpoint of system performance) and the possible types of error are too various. The response of engineering has been, and continues to be, automation: the replacement of people by hardware components or, increasingly, hardware-plus-software components. There is no question that automation generally increases system reliability and leads to safer system operation. Nevertheless, many systems still contain human operators as components. The reliable automation of many tasks is still beyond current technical possibilities. Additionally, however, automation conflicts with the wish to impose on at least some operators an external perspective next to an internal perspective, as explained in the subsection “Combination of Internal and External Perspective.” Operators are not only supposed to play their part in order to contribute to the system’s functioning in the explicit way anticipated by the designers of the system but are additionally supposed to monitor this contribution and the system’s ability to deal with circumstances the designers did not anticipate. These two aspects are not independent, of course: no need is felt for the continued presence of this external perspective precisely when designers are confident that an automated, technical-only solution will do. And even if engineers could part with the urge to rely on human intelligence as a backup option, there is still the question whether the public of (potential) customers has sufficient confidence in fully automated service systems – plus the fact, already noted above, that legislation often mandates a human presence.
Apart from these considerations of fail safety, there are also limitations to automation of quite another type: certain sociotechnical systems are, by design, so open that they accept human-operated subsystems as ad hoc components. This is how the public road infrastructure of any country works. From the standpoint of any individual driver using it, the other drivers are human components of the system used, whose actions prepare the system, in a precise though constantly shifting configuration, for use. The price for this honoring of individual freedom is that the likelihood of (local) failure is much greater (cf. (Amalberti 2001), p. 111).
The ambiguous position of the operator is the main point of tension in the treatment of sociotechnical systems. This tension can be seen as a continuation of the dual perspective we have on humans: they are organisms, falling as such under the descriptive vocabulary of science, but also persons, falling under the intentional and partly normative vocabulary of daily life. Included in the latter perspective is our organized societal existence. Though the results of describing people as complex physical systems and of researching their behavior in varying circumstances by scientific methods, plus the fruits of taking these results into account for design purposes, are undeniable, people do not appreciate being conceived of and treated as “mere” physical objects, as cogs and wheels. People value being seen and treated as persons who choose their actions on the basis of reasons. They prefer to understand, and to be able to defend, why the thing they are supposed to do is the correct thing to do and how it contributes to well-being. But there is another side to this coin: with intentionality and understanding comes the option of being held responsible and accountable, by the imposed rules of social life, when the contribution to well-being misfires. A key question, not addressed by human-factors engineering, is how the design and implementation of systems which have both devices and humans as components should cope with this tension and what the possibilities are of relieving it. This section addresses two aspects of this question in the light of the preceding analysis of sociotechnical systems.
Autonomy Versus Automation
To accept becoming a component that is supposed to act according to a list of instructions is to deliver oneself into a situation where one depends on the quality of the information and the adequacy of the prescribed actions, without having a say in either. In a more limited context – people working with electronic databases – Jeroen van den Hoven (1998) has characterized this situation as one of “epistemic enslavement.” Not unlike a slave, an operator is “at the mercy” of the system he or she is a component of. The operator trades in his or her personal autonomy and rational and moral control of action for, ultimately, that of whoever can count as the user of the system for some purpose.21
In the present context, where we look at systems regardless of whether the components are people or things, it should be noted that the condition of epistemic enslavement is not so new or exceptional as it may seem at first: it is a characteristic of any hierarchical social system. When acting “under orders” or “upon request,” a person generally acts without knowing the reasons on the basis of which the action is required or justified. In a sense, then, such a person does not act at all but instead performs someone else’s action, and this action strictly speaking falls outside of the framework where that person is an intentional, let alone a rational agent. Rationality is still an option at a meta-level: it may still be rationally and morally justified to choose to enter such a position, but quite a number of conditions must be satisfied for this. As I see it, whoever takes up this position justifiably22 must (1) subscribe to the goals of having and operating the “machine” one becomes a component of; (2) trust, and be justified in trusting, its designers, for having adequately designed it to achieve these goals; (3) trust its owner–user, for completing the system and using it so as to achieve these goals; and (4) trust the other operators in the system for acting in accordance with their instructions. This latter trust goes in both directions: an operator must trust higher-level operators for giving him or her adequate instructions and must also trust lower-level operators to faithfully execute any instructions he or she passes on to them.
The fact is, however, that people generally fall short of full trustworthiness in this respect. Are we ever justified in trusting to the level that is required for being justified in accepting an operator position? It is hardly possible to give a general answer to this question. As already stated, not all sociotechnical systems are designed. Many are to some extent ad hoc systems: their composition in terms of subsystems, including their operators, changes constantly. But while driving to work in your car over the motorway, you – or rather the instrumental driving system so composed – are implicitly transformed into the object of some system, to be “handled” by its operators in conformity with the system’s purpose and operational principle: if road indicators guide you into a particular lane, you submit yourself to the correctness of this “move” by the system. Certainly as the operator of an instrumental partial system “for hire,” you deliver yourself to whatever system will be created and to whoever will be the owner–user of that system: any taxi driver may be driving a killer to his victim. Epistemic enslavement, therefore, is not a condition specifically occurring in engineered systems, and computers and car engines are no blacker boxes than colleagues and customers.
What we can say of engineered systems is that detailed knowledge of how the system works or is supposed to work is in principle available and that it is highly relevant that system operators have access to this knowledge. The Chernobyl disaster was to a large extent generated by a lack of knowledge, among its operators, of basic aspects of the design of system components. The control rods were designed in such a way – consisting partly of graphite and partly of empty space filled, in the reactor, with water – that lowering them from a maximally elevated position caused an initial increase in reactor activity instead of a decrease, the effect that lowering the control rods is meant to achieve. This design feature leads to disastrous consequences if not taken into account in the instructions for how the raising and lowering of a reactor’s control rods should be handled. In fact, a situation similar to the one that caused the explosion of the reactor core at Chernobyl had already occurred earlier in a nuclear reactor in Lithuania, with less disastrous consequences.23 Recorded discussions among operators while in hospital awaiting their deaths showed there was some awareness of this particularity of the control rods in place and a vague idea that it might have been causally relevant, but they lacked the knowledge that would have enabled them to foresee the size of the effect (Medvedev 1989, pp. 10, 72). Obviously, the lack of a safety culture in the Soviet Union contributed significantly, as the IAEA emphasized ([INSAG-7], pp. 20–22). Still, the operators thought they could rely on the rules that they had been working with successfully for years, and it is unthinkable that they would have acted precisely as they did had they been aware of the full details of the reactor design.
Valuing the autonomy of operators would then require support in the form of a principle of maximum knowledge of system properties among system operators. In the first instance this applies to the preparation of operators for their tasks. Extending it to maximum on-the-spot knowledge, in the form of a right to question instructions and to receive supporting confirmation and explanation of instructions on request (of the sort that could perhaps have avoided the Überlingen collision; see below), would be extremely controversial. Such a principle goes against the engineering design philosophy of sociotechnical systems and, apart from its effects on the technical efficiency of such systems, may well create more safety hazards than it removes.
What is achieved by a sociotechnical machine (however we conceptualize this) could not be achieved through the actions of completely autonomous intentional agents. Such agents would, for instance, have to derive through deliberation on the basis of the totality of their knowledge and the information available to them that the best action to perform if alarm X sounds is indeed to flip switch Y, which, let us assume, is the action prescribed in the instructions for the control room operator. Circumstances do not pause in order to allow agents to go through such deliberations. System design often presupposes that the operator does not look further than the instructions defining his/her role. Typically there is no time to reflect on the appropriateness of an action required, or at least seemingly required, by the operator’s manual, in the current circumstances. The smoothness of the system’s functioning may even depend on the promptness with which certain hardware states lead to operator actions.
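The first level of role performance described above can be pictured as a simple lookup from observed states to prescribed actions. The sketch below is hypothetical; the alarm and switch names are invented for illustration. What such a table deliberately does not encode is the second level: a judgment of whether the prescribed action is actually appropriate in the circumstances at hand.

```python
# Role instructions as a lookup table: the operator's manual maps
# observed states directly to prescribed actions, so no deliberation
# from first principles is needed. (Entries are invented examples.)
INSTRUCTIONS = {
    "alarm_X": "flip_switch_Y",
    "pressure_high": "open_relief_valve",
}

def prescribed_action(observed_state):
    # The operator executes the matching entry promptly; for states the
    # designers did not anticipate, the manual can only fall back on a
    # generic instruction.
    return INSTRUCTIONS.get(observed_state, "escalate_to_supervisor")

print(prescribed_action("alarm_X"))       # flip_switch_Y
print(prescribed_action("unforeseen"))    # escalate_to_supervisor
```

The promptness the text mentions is bought precisely by this table-lookup character: nothing in the scheme asks whether flipping switch Y is wise right now, which is why system design presupposes operators who do not look further than their instructions.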
As a consequence, there is often a trade-off between the safety and security of people within the system’s reach (customers, operators, and bystanders) and the autonomy of the system’s operators. The increasing complexity of such systems, in conjunction with the increasing complexity and computerization of their hardware components, tends to emphasize the inevitability and even desirability of operator myopia, and an increased acknowledgment of the autonomy of the persons performing operator roles may lead to a decrease in reliability and safety. The choice against autonomy, and for localized operator control only, is made in any system where strict hierarchical relations exist between the various operator roles. This includes first of all the military, but also civilian systems that are to some extent modeled on the hierarchical system of the military. Such systems actually promote the condition of epistemic enslavement for system operators to ensure their general smooth operation from a system perspective, even against counter-indications in individual cases.
In air traffic, for example, as in the military, instructions from air traffic control to aircraft crews are not questioned. In the case of the Überlingen midair collision, the DHL crew could hear air traffic control issuing an instruction to the Russian airplane to descend, and could hear the Russian captain accepting this instruction, while executing its own ACAS RA to descend.24 They did not contact ATC about this right away, however; they did so only after the Swiss air traffic controller repeated the instruction to descend to the Russian Tupolev, and only hesitatingly, but by then it was too late. Understandably, Russia added as a comment to the German accident report its opinion that the DHL crew could have done more to prevent the accident ([Überlingen], App. 10). To determine the balance between how often the exercise of operator autonomy can prevent accidents and how often it causes them is difficult, if not impossible. In the most notorious of air traffic accidents, the 1977 Tenerife runway collision, an unsolicited radio message by one of the crews to point out a dangerous developing situation was precisely what prevented the other crew from receiving a crucial ATC instruction to stay put ([Tenerife], p. 44).
The responses of system owner–users and regulators to system failures like these (i.e., involving operator actions that are judged erroneous) go in two directions. With respect to interactions between operators and devices (e.g., operator response to an ACAS RA) or interactions between remote operators (e.g., pilot response to an ATC instruction or ATC response to a pilot readback), the response is a tightening of rules and a corresponding decrease of operator autonomy. With respect to interactions among operators in teams, in contrast, the response is rather an increase of operator autonomy. Approaches like crew resource management are critical of hierarchical forms of organization and emphasize collaboration among equals. Teams of operators are treated as “islands” of purely social systems within a larger sociotechnical environment.
Problems remain in the area that is ambiguous between a regime where interactions are conceived as input–output or stimulus–response and a regime that relies on human intentionality and the capacities of an autonomous agent. A crucial aspect of this is the formation of beliefs. Ensuring that the right beliefs are formed in the minds of operators is a major design problem. Indeed, it is often underestimated how many different models of what goes on inside people (especially their minds) we have to rely on in the design of systems with people as components, and how questionable the general applicability of these models is. The Tenerife accident is a case in point: perhaps the greatest enigma of this accident is what had made the KLM pilot(s)25 so convinced that the Pan Am Boeing was no longer on the runway. The Dutch commentary on the Spanish investigation report ascribes this to inference on the part of the pilots: the Pan Am Boeing could no longer be on the runway because they had received clearance for takeoff (or so they presumably thought). The emphatic response, however, does not suggest that the conviction was of this inferential sort.
Similarly located in this ambiguous area are cases where operators do not respond to out-of-the-ordinary circumstances or emergencies according to procedures. The crash of Air France Flight 447 into the Atlantic Ocean in June 2009 resulted mainly from a completely inadequate response by the pilot-in-command to the deactivation of the automatic pilot upon the loss of signal from the outboard airspeed-measuring devices due to icing. Procedures for this flight situation existed, but they were neither used nor their existence recalled by either of the two pilots on duty ([Air France 447], p. 175). Likewise, a failure to operate according to the specific procedures for the circumstances at hand, this time landing in conditions of severe crosswind, led to the crash of Lufthansa Flight 2904 in Warsaw in September 1993 ([Lufthansa 2904], p. 42). It is common to attribute such failures to deficiencies in operator training. It is less clear, however, whether training should focus on conditioning behavior or on generating awareness, or both, and if the latter, how this should be accomplished. One of the somewhat counterintuitive things that training will have to address is the fact that the direction of technical developments will, increasingly, not only permit but demand minimal operator interference. For the crashes of Air France Flight 447 and Aeroflot Flight 593, postaccident analysis revealed that without the frantic and ongoing attempts of the cockpit crews to steer their way out of the predicament caused by their initial actions, the automated correction and safety mechanisms of the aircraft – Airbus A330 and A310, respectively – would most likely have prevented a fatal flight path.26 These cases in particular bring out the tension between the two tendencies of automation and autonomy and the failure to address this tension squarely.
Whatever views of the potential of continuing automation may develop within engineering, on the nontechnical side a lack of public acceptance and constraining legislation – which may partly reflect that lagging acceptance – will prove formidable obstacles. Sociotechnical systems including human operators will therefore remain a presence for some time to come. Accordingly, there is good reason to reflect more on the moral consequences of putting people in that position and the values at stake for them. Autonomy is such a value, and this section has sketched the pressure that the development of sociotechnical systems toward greater reliability and smoothness, as well as greater complexity, puts upon operator autonomy. In our society, however, autonomy is closely linked to responsibility, and to this key notion I now turn.
Assigning Responsibility Responsibly
Personal autonomy is considered to be a crucial precondition for the assignment of responsibility, both in a forward-looking sense and in a backward-looking sense associated with liability and blame (cf. Van de Poel 2011 for the distinction). The assignment of responsibility in particular to system operators is taken extremely seriously in our society and treated as a cornerstone of the “license to operate” of the numerous public and private service-providing sociotechnical systems. ICAO regulations, for example, repeatedly stress the ultimate responsibility of pilots for the safety of their aircraft and its passengers and crew regardless of their ever-growing technical embeddedness.27
The complexity and scale of current systems have now far outrun the form of control that underlies the assignment of general responsibility. With respect to time scale, for example, the mismatch is obvious in air traffic. The responsibility of pilots for the separation of aircraft, complete with priority rules similar to those that govern road traffic, still underlies international regulation: the ICAO Rules of the Air contain detailed and separate priority rules for head-on approach, convergence, and overtaking ([ICAOAnn2], p. 3–3). One cannot possibly count on this being of any help in avoiding accidents. The rapid increase of midair collisions in the 1940s and 1950s, first between civilian and military aircraft and later between civilian aircraft mutually, shows that this is not even a recent problem.28 Nevertheless, the investigation report concerning the September 1976 midair collision near Zagreb displayed this attitude toward pilot responsibility: “For the purpose of aircraft collision it is the duty of the crew to look out even though a flight is being made with IFR [Instrument Flight Rules] plan in visual conditions.” Since at the time of the collision the weather was fine, “there was nothing to prevent the crew in that respect.” However, the investigation report also made clear that the two crews did not notice each other until the collision occurred – one crew never noticed anything, since they were killed on impact. Indeed, the report itself acknowledges, immediately after referring to the duty of crews, that “[a]t high altitudes and at high speed, particularly in nearly opposite heading it is very difficult to observe another aircraft” ([Zagreb], pp. 32–33). This state of affairs has not changed since. On the contrary, in the most recent midair collision, between a Boeing 737 and an Embraer jet above the Amazon forest in September 2006, neither crew saw the other aircraft coming, although they flew exactly head-on toward each other in clear weather.
Since one of the two aircraft managed to remain airborne, we have firsthand knowledge that the collision itself took such a split second that it was not even experienced as a collision; it was only on the basis of the visible damage to one of their wings that the pilots could infer that they had collided with something ([Amazonas], pp. 235–237).
This is just one particular aspect of a specific type of system. There are more general reasons to question the practice of squarely assigning responsibility to system operators. It is often claimed that it is extremely difficult to hold operators accountable in case of accidents, or in cases where harm is done, because their individual actions are just one contribution to the final result, which additionally required the actions of many other operators and the satisfaction of many conditions. This is called the problem of many hands. The problem of many hands, however, was diagnosed, and the term coined, for bureaucratic institutions and is typically analyzed and discussed with reference to these (cf. Thompson 1980; Bovens 1998). The situation for sociotechnical systems is actually more complex. Firstly, we are dealing there with the problem of many hands-and-devices rather than the problem of many hands. In one case, the crash of United Airlines Flight 232 during an emergency landing in Sioux City in 1989, the loss of the aircraft and about one third of its passengers was due to the failure of a turbine fan disk as a result of a microcavity present since manufacture, in combination with the failure of the inspection regime to detect in time the cracks that gradually develop from such miniature manufacturing defects (see Del Frate et al. 2011, p. 756). Secondly, the hands involved are distributed over a much wider range, reaching from the design phase (including the definition of the design task itself!) to the phase of system operation. Some of the difficulties this creates, given the hybrid character of sociotechnical systems, were discussed in the previous section.
The bureaucratic systems of government and commercial service-providing systems that were the focus of study until now are not designed in the engineering sense of design; instead they resemble ad hoc systems in that their owner–users can to a large extent be viewed as their designers, with corresponding powers to redesign the system in response to emerging flaws, changing circumstances, or modified objectives.
The intimate connection between operator performance and system structure is acknowledged in Whittingham’s claim (2004, p. 254) that “Human error is not inevitable but rather is the inevitable consequence of defective systems,” where the systems may be either technological (e.g., the design of control panel displays) or organizational (e.g., the organization of maintenance procedures). Strictly speaking, the claim does not say that all human errors originate from system defects, only that all defective systems will sooner or later generate human errors. However, for the claim to be substantive, we need a characterization of what a defective system is that is independent of the notion of human error; otherwise the claim does amount to the statement that all human error originates from system defects, but at the price of being tautological as a descriptive claim. Instead, the tautology could be embraced and we could look upon it as a normative requirement, a design criterion: no system should be allowed to generate human errors or, rather, no system should be allowed to fail because of a human error. Arguably this is how Whittingham takes his claim: he advocates a practice where operators need not fear being punished for errors committed but are encouraged to report them, so that defects in the system can be remedied.
Whittingham concedes that operators can show “an element of carelessness, inattention, negligence or deliberate violation of rules that must be dealt with,” but he seems to downplay the reach of accountability even for such cases, stating no more than that in such cases an operator “deservedly attracts some blame” (pp. 254–255). But the presence of operators with a capacity for autonomy will make the task of designing systems that are immune to operator “errors” a horrendous one. As late as 1993, in a well-established safety culture, reckless and careless pilot behavior could result in the crash of a small airliner with 18 casualties (Tarnow 2000). In a more extreme case in 1999, the pilot of an EgyptAir Boeing 767 deliberately steered his aircraft into the Atlantic Ocean, according to the official US investigation report, taking the 216 other people on board with him.29 Technical safety systems to prevent such accidents would most likely be prohibitive of normal aircraft operation, whereas the intensity of psychological monitoring that could detect such tendencies would be felt to invade the privacy of operators too seriously to be acceptable. Clearly there are limits to the extent to which human errors can be reconstructed as system defects. If a system has human operators, it will have autonomous operators, for both technical and moral reasons.
“A user A has meta-task responsibility concerning X” implies that A has an obligation to see to it that (1) conditions are such that it is possible to see to it that X is brought about [A’s positive task responsibility], and (2) conditions are such that it is possible to see to it that no harm is done in seeing to it that X is brought about [A’s negative task responsibility].
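The two clauses of this definition can be glossed schematically. The following notation, using an obligation operator \(O\), a “sees to it that” operator \(E_A\), and a possibility operator \(\Diamond\), is my own shorthand for the structure of the definition, not Van den Hoven’s formalism:

```latex
% Meta-task responsibility of agent A concerning X, glossed as two obligations:
% (1) A ought to see to it that it is possible for A to bring X about;
% (2) A ought to see to it that it is possible to bring X about without harm.
\[
\mathrm{MTR}(A, X) \;\Longrightarrow\;
O\, E_A \bigl( \Diamond\, E_A(X) \bigr)
\;\wedge\;
O\, E_A \bigl( \Diamond\, [\, E_A(X) \wedge \neg\,\mathrm{Harm} \,] \bigr)
\]
```

The gloss makes explicit that meta-task responsibility is an obligation concerning the *conditions* under which a task responsibility can be discharged, not the task responsibility itself.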
In order to hold on to a “discourse of responsibility” grounded in personal autonomy, cherished by philosophers and society alike, against the equalizing pressure of increasingly complex systems, an additional burden of responsibility is thus placed on the shoulders of the very same person who has a particular task responsibility. As Rooksby (2009) has argued, this is asking too much. It is problematic partly because many operators operate in a hierarchical system: meta-task responsibility would require that subordinate operators check and monitor the actions and inactions of their superiors. Additionally, the required knowledge is vast and may not be easily available; much of it is designer’s knowledge. Finally, resources are limited: if you have to explain to your subordinates why you are giving them certain instructions, why it is desirable that they execute them faithfully, and why they are justified in executing them faithfully, then you may be left with too little time to attend to your own primary task responsibility.
The notion of meta-task responsibility can indeed play an important role. In my view, however, the perspective should be shifted back to the design context, where Van den Hoven originally stated his worries about the ability of operators in complex systems to take responsibility, and where those worries should be addressed. Meta-task responsibility should be seen as implying a constraint not on how persons operating in a sociotechnical system should conceive their responsibility, taking the role they are supposed to play for granted as given by system design, but instead on how system design should conceive of such roles. It is, so I propose, to be reconceived as a responsibility that people have with respect to the tasks of others, tasks that they are responsible for designing. It applies to the design process as (systems) engineering sees it, where operator roles are first conceived and role instructions are first drawn up as part of the grand design of a system module or even the system as a whole. But it also applies to the implementation phase, where owner–users complete the design of a system (while often partly redesigning it) by specifying role instructions and organizing role training and role selection procedures. And it applies, finally, to the operation phase, where ad hoc orders are given by higher-order manager–operators in response to particular circumstances.
This makes the execution of meta-task responsibility a diffuse matter, in that many different parties have a meta-task responsibility with respect to the ability of a single operator to take responsibility for his or her task. But it reflects the hierarchy of the various design regimes that have some amount of control over that task. The proposal should not be read as removing from the person of the operator him- or herself any responsibility for reflection on whether or not to perform what the task seems to require. To the extent that an operator has knowledge of the system and the particular circumstances and has access to the relevant data, and in that sense can be seen as having to some extent designer’s knowledge of the operator’s role, there is nothing against including the operator in the set of agents who have a meta-task responsibility with respect to his or her own task. What matters is to conceive of the responsibility issues from a design perspective, in accord with Van den Hoven’s initial emphasis. In view of the discussion of the distinguishing characteristics of sociotechnical systems in the previous section, the implementation of this shift of perspective invites a reorganization of design processes for sociotechnical systems. What that reorganization should involve is, however, not a task that I undertake here.
Against the background of these considerations, the question still has to be asked why we are so keen on assigning responsibility to operators. According to Björnsson (2011, p. 188), “Our interest in holding people responsible is largely an interest in shaping motivational structures – values, preferences, behavioural and emotional habits, etc. – in order to promote or prevent certain kinds of actions or events that we like or dislike.” This can be termed an instrumental view on responsibility. This view, however, makes it difficult to uphold the conceptual link with personal autonomy. This is a topic that deserves further discussion. But it seems clear to me that such an instrumental view cannot serve as sufficient justification for assigning a responsibility that will lead a (legal) life of its own once an accident has happened. Pressure from air traffic controllers worldwide led to the Zagreb controller being pardoned after having spent several months in prison, but similar protests did not help the Linate controller, whose conviction was upheld all the way up to the Corte di Cassazione.30
Given the current state of engineered sociotechnical systems, how they come to be, how they function, and how they are likely to develop, we – designers and legislators, the latter being, in a democratic society, the entire adult population – owe it to operators to think very carefully about what we want to hold them responsible for, what we can justifiably hold them responsible for, and how the responsibility we do assign should be supported by technological means. Additionally, we should think more carefully about the rationale for our society’s insistence on apportioning responsibility – our blame culture, as the title of Whittingham’s (2004) book suggests. Upon reflection we may find that there are good reasons to restrict the responsibility of operators, or at least to specify their responsibility in much greater detail than is now customary. Thinking how this can be done in a socially viable and morally justifiable way is a major task for the future.
ACAS, for instance, the automated system for the avoidance of collisions between aircraft which played a role in the Überlingen crash, is notorious for generating false alarms, in particular when an aircraft is climbing or descending and thereby approaches another aircraft cruising at an altitude just below or above the altitude to which the climbing or descending airplane is heading ([Eurocontrol]; Pritchett et al. 2012a, b).
Amalberti (2001), however, particularly discusses the limits to such further increase.
In Franssen (2014), I develop a systematic account of the notion of instrumental system. My use of the term “sociotechnical system” there is slightly different from its use here: there I use it for a particular type of instrumental system, with an instrument of a certain complexity, but including a user and the object the transformation of which is what the user wishes to achieve, whereas here I use it in a more restricted sense to mean just this complex instrument.
Actually this damage was due to fragments of the aircraft dislodged by fragments of burst tires, never to fragments of tires directly.
What exactly constitutes a failing or malfunctioning operator is an interesting and important question, which I will not take up in this text, however. Somewhat more will be said on the topic in the next section.
Neither the ICAO flight instructions for ACAS in force at the time of the accident nor the ACAS manufacturer’s Pilots Guide (all cited in [Überlingen], pp. 51–53) considered the possibility of interference between ATC instructions and the automated ACAS process. See for a discussion of the Überlingen accident particularly from the viewpoint of the social technical hybridity of the system involved also (Ladkin 2004) and (Weyer 2006).
Note that the human capabilities presumed will typically be those of a normal adult person but need not be. Many occurrences of child labor were and are dependent on the child operators being smaller and more versatile than adults. But more extreme cases can be thought of. To give just one example, the film The Prestige by Christopher Nolan features an elaborate stage illusion that requires the participation of several operators who must be blind for the trick to succeed.
Lewis Mumford identified such instruments made of human beings more widely and termed them megamachines. See Mumford (1934). As for the introduction of military drill, Kleinschmidt (1999, pp. 609–611) especially stresses the effort toward a conditioned response to orders, in contrast to an interpretational response.
Still, things can go badly wrong due to underspecification of insertable components. An example is the explosion on the Plesetsk launch pad of an 8A92M Vostok space rocket in March 1980. The cause of the explosion was the use of solder containing lead in addition to tin, instead of pure tin solder. The lead acted as a catalyst for the decomposition of the hydrogen peroxide component of the rocket fuel. See Varfolomeiev (2007).
Even if designer and owner–user are formally identical – say, in the case of a state operating an infrastructure or a public utility – the state as user may have other interests, and be under pressure from other forces, than the state as designer. The notion of instrumental system, mentioned in the previous section, was particularly developed to clarify issues like these. As a mere instrument, a sociotechnical system is typically incomplete and “open.” Only a full instrumental system, complete with its user and object-under-transformation, allows for the delineation of sharp boundaries of the full system and its major components.
For the terminology, see Checkland (1981). To be sure, animals trained for a conditioned response have been used as “hard” system components: e.g., B. F. Skinner’s use of pigeons as components of the tracking mechanism of a missile guidance system.
It could be argued that citizenship is not a role because society is not an instrument in any instrumental system, at least not from the point of view of liberal democratic society. Whether or not this is granted, roles are anyway not confined to instrumental systems. Let citizenship then be a role in a social system, where I leave the precise meaning of “social system” intuitive, rather than an instrumental system; this does not change the situation.
It is assumed here that these external rules, in particular national legislation, are morally in order.
To add to the difficulty of the situation, the air traffic controller involved had, in his haste, made the error of directing his altitude-adjustment instruction to the aircraft that was just making a turn rather than, as he should have done, to the one that was flying a straight course.
Quoted is the 2007 version still in force at the time of writing but under review for harmonization with the EU legislation, which is less “imperialistic.”
They are, in a democracy at least, not totally independent, because system designers are themselves citizens with voting rights and in this way, as well as in other ways, have the possibility of influencing political decision-making processes.
Responsibility for the content of rules must be sharply distinguished from responsibility for the (non-)violation of rules once in force. As already discussed, the case of Chernobyl clearly showed the difference.
Many of these difficulties could then be thought to disappear if a state has the sole responsibility for design, implementation, and operation of a (e.g., infrastructural) system. As shown by the case of the Soviet Union, this situation is certainly not sufficient for solving the associated problems. Note that of all historical cases, the Soviet Union came closest to a state run by engineers; see, e.g., (Alexijewitsch 2013), p. 443.
The notions of the autonomy of a person and of the purpose of a system are both extremely problematic. As for the latter: the users in the sense of service consumers of a sociotechnical service-providing system view its purpose quite differently from how its owner–user views it, and this latter view differs again sharply depending on whether the owner–user is a state or a private company. As for the former, there are many different accounts of autonomy, typically focusing on different aspects of the concept. My use of autonomy here is in a broad sense characterized by Christman (2009) as “basic autonomy” – “the minimal status of being responsible, independent, and able to speak for oneself” – or in somewhat different words, a person’s ability “to act, reflect, and choose on the basis of factors that are somehow her own.”
I am ignoring here deviating reasons, for example, the reasons that a spy or a saboteur may have, which may be justifiable.
This near disaster had not led, however, to changes in the instructions for this type of reactor. Cf. the entirely analogous situation for ATC-ACAS interference, where the near collision over Japan failed to lead to a review of the inclusion of ACAS in the air traffic regulation system. Or cf. the similar case of the head-on train collision at Ladbroke Grove in 1999, where repeated passings of a particularly notorious signal at danger, without a resulting collision, failed to lead to a redesign of the signal or of the rules for approaching it, until a collision finally did result. In none of these cases was the hazard that ultimately led to the respective accidents brought to the attention of system operators.
The facts about the radio communication are clear from Appendix 3 to the report. It may not have been immediately obvious to the DHL crew that the aircraft receiving these ATC instructions was the one that was approaching them, but it was at least highly likely, likely enough to warrant their interference.
The Spanish investigation report of the accident ascribes the emphatic answer “jawel” (“yes”) in reply to the flight engineer’s question whether the Pan Am Boeing had perhaps not yet left the runway to the captain, but according to the Dutch commentary on the report, both pilots gave this reply simultaneously; see [Tenerife], p. 45 and [Tenerife-NL], pp. 46, 63.
This information is presented by experts, with access to the official investigation reports [Air France 447] and [Aeroflot 593], in the respective episodes of the television documentary series Mayday, also known as Air Crash Investigation, produced by the Canadian firm Cineflix. The episode dedicated to Air France Flight 447 is called “Air France 447: Vanished” and dates from April 2013; the episode on Aeroflot Flight 593 is called “Kid in the Cockpit” and dates from November 2005. The crash of the Airbus leased by Aeroflot was caused by the inability of its Russian crew to handle an unintended and undetected partial disengagement of the autopilot occurring when the captain let his two children, who were also on board, sit in his chair and touch some of the controls.
E.g., [ICAO9863] p. 5–3: “ACAS does not alter or diminish the pilot’s basic authority and responsibility to ensure safe flight.” [ICAO8168] p. III-3-3-1: “Nothing in the procedures specified [below] shall prevent pilots-in-command from exercising their best judgement and full authority in the choice of the course of action to resolve a traffic conflict.” [ICAOAnn2] p. 3–2: “Nothing in these rules shall relieve the pilot-in-command of an aircraft from the responsibility of taking such action […] as will best avert collision.”
It also shows how differently military pilots are treated compared with civilian ones. The culpable pilot of the military aircraft that collided with a commercial airliner in October 1942 was acquitted. This is typical for accidents involving military operators: as recently as 1998, two American pilots were acquitted who had flown their aircraft low enough to cut through the cables of a cable car in the Italian Dolomites, killing 20 people. This brings out once again that military operators are treated in all respects, including legal aspects, as (and will know they are treated as) true components whose “horizon” is determined by their operator role only. They can “malfunction,” but their malfunctioning is exclusively their superiors’ concern.
See [EgyptAir990], pp. 58–65. The Egyptian authorities contested this interpretation, without being able to submit a convincing alternative cause. One can assume they were too embarrassed to admit the facts. This may seem an extraordinary case, but there are at least two similar cases where there is overwhelming evidence that an airliner pilot committed suicide by ditching his aircraft, one from 1997 and one from 2013, with 32 and 103 additional casualties, respectively. There are indications that the still unclarified disappearance of Malaysian Airlines Flight 370 over the Southern Indian Ocean in March 2014, with 239 people on board, may prove to be a fourth example.
See for the juridical aftermath of both accidents the references given in the Wikipedia articles http://en.wikipedia.org/wiki/1976_Zagreb_mid-air_collision and http://en.wikipedia.org/wiki/Linate_Airport_disaster; see [Linate] for the details of the Linate collision.
I thank Luca Del Frate for inspiring conversations and helpful suggestions during the writing of this paper.
(a) Authored Works
- Alexijewitsch S (2013) Secondhand-Zeit: Leben auf den Trümmern des Sozialismus. Hanser, Berlin, Transl. from the Russian original Vremja second-hand: konec krasnogo čeloveka, Moskva: Vremja, 2013Google Scholar
- Bovens M (1998) The quest for responsibility: accountability and citizenship in complex organisations. Cambridge University Press, CambridgeGoogle Scholar
- Checkland P (1981) Systems thinking, systems practice. Wiley, ChichesterGoogle Scholar
- Christman J (2009) Autonomy in moral and political philosophy. Stanford Encyclopedia of Philosophy (on-line), substantively revised (first published 2003)Google Scholar
- de Weck OL, Roos D, Magee CL (2011) Engineering systems: meeting human needs in a complex technological world. MIT Press, Cambridge, MAGoogle Scholar
- Ladkin PB (2004) Causal analysis of the ACAS/TCAS sociotechnical system. In: Cant T (ed) 9th Australian workshop on safety related programmable systems (SCS’04), Brisbane. Conferences in research and practice in information technology, vol 47. unpagGoogle Scholar
- Medvedev G (1989) Chernobyl notebook. JPRS Report no JPRS-UEA-89-034, 23 October 1989. English translation of original Russian publ. by Novy Mir, June 1989Google Scholar
- Mitchell CM, Roberts DW (2009) Model-based design of human interaction with complex systems. In: Sage A, Rouse W (eds) Handbook of systems engineering and management. Wiley, Hoboken, pp 837–908Google Scholar
- Mumford L (1934) Technics and civilization. Harcourt Brace, New YorkGoogle Scholar
- Perrow C (1984) Normal accidents. Basic Books, New YorkGoogle Scholar
- Pritchett AR, Fleming ES, Cleveland WP, Zoetrum JJ, Popescu VM, Thakkar DA (2012a) Pilot interaction with TCAS and Air Traffic Control. In: Smith A (ed) ATACCS’2012, 29–31 May 2012. IRIT Press, London, pp 117–126Google Scholar
- Pritchett AR, Fleming ES, Cleveland WP, Zoetrum JJ, Popescu VM, & Thakkar DA (2012b) Pilot’s information use during TCAS events, and relationship to compliance to TCAS Resolution Advisories. In: Proceedings of the human factors and ergonomic society 56th annual meeting, Boston 2012, pp 26–30Google Scholar
- Sage A, Armstrong J Jr (2000) Introduction to systems engineering. Wiley, New York
- Tarnow E (2000) Towards the zero accident goal: assisting the first officer monitor and challenge captain errors. J Aviation/Aerosp Educ Res 10:29–38
- van den Hoven MJ (1998) Moral responsibility, public office and information technology. In: Snellen ITM, van den Donk WBHJ (eds) Public administration in an information age: a handbook. IOS Press, Amsterdam, pp 97–111
- Varfolomeiev T (2007) Soviet rocketry that conquered space. Part 8: successes and failures of a three-stage launcher. Online: http://cosmopark.ru/r7/prig8.htm. Retrieved Jan 2014
- Vermaas PE, Kroes P, van de Poel I, Franssen M, Houkes W (2011) A philosophy of technology: from technical artefacts to sociotechnical systems. Morgan & Claypool, San Rafael
- Weyer J (2006) Modes of governance of hybrid systems: the mid-air collision at Ueberlingen and the impact of smart technology. Sci Technol Innov Stud 2:127–141
- Whittingham RB (2004) The blame machine: why human error causes accidents. Elsevier Butterworth-Heinemann, Oxford/Burlington
(b) Anonymous Works (Investigation Reports, Regulations)
- [AA587] (2004) Aircraft accident report: in-flight separation of vertical stabilizer American Airlines Flight 587 Airbus Industrie A300-605R, N14053 Belle Harbor, New York, 12 Nov 2001. National Transportation Safety Board, Washington, DC
- [Aeroflot593] (1995) Akt po rezultatam rassledovanija katastrofy samoleta A310-308 F-OGQS, proisšedšej 22 marta 1994 g. v rajone g. Meždurečenska [Report on the results of the investigation of the crash of the A310-308 aircraft F-OGQS on 22 March 1994 near Mezhdurechensk]. Departament Vozdušnogo Transporta, Kommisija po Rassledovaniju Aviacionnogo Proisšestvija, Moskva. In Russian
- [AirFrance447] (2012) Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro – Paris. Bureau d’Enquêtes et d’Analyses pour la Sécurité de l’Aviation Civile. Official translation of the original French version
- [Amazonas] (2008) Final report A-00X/CENIPA/2008. Comando Aeronáutico, Estado-Maior da Aeronáutica, Centro de Investigação e Prevenção de Acidentes Aeronáuticos, Brasilia. Official translation of the Portuguese version
- [Concorde] (2000) Accident on 25 July 2000 at La Patte d’Oie in Gonesse (95) to the Concorde registered F-BTSC operated by Air France. Bureau d’Enquêtes et d’Analyses pour la Sécurité de l’Aviation Civile. Official translation of the original French version
- [EgyptAir990] (2002) Aircraft accident brief: EgyptAir Flight 990 Boeing 767-366ER, SU-GAP, 60 miles south of Nantucket, 31 Oct 1999. National Transportation Safety Board, Washington, DC
- [Eurocontrol] (2002) ACAS II Bulletin, Eurocontrol
- [ICAO8168] (2006) Procedures for Air Navigation Services: Aircraft operations. Volume I, Flight procedures, Fifth edn. International Civil Aviation Organization, document 8168
- [ICAO9863] (2006) Airborne Collision Avoidance System (ACAS) manual, First edn. International Civil Aviation Organization, document 9863
- [ICAOAnn2] (2005) Rules of the air: Annex 2 to the Convention on International Civil Aviation, Tenth edn. International Civil Aviation Organization
- [INSAG-7] (1992) INSAG-7; The Chernobyl accident: an updating of INSAG-1. A report by the International Nuclear Safety Advisory Group. International Atomic Energy Agency, Vienna. Safety Series No. 75
- [JA907/958] (2002) Aircraft accident investigation report: Japan Airlines Flight 907, Boeing 747-400D, JA8904, Japan Airlines Flight 958, Douglas DC-10-40, a near midair collision over the sea off Yaizu City, Shizuoka Prefecture, Japan, at about 15:55 JST, January 31, 2001. Aircraft and Railway Accidents Investigation Commission. Official translation of the original Japanese version
- [Ladbroke Grove] (2001) The Ladbroke Grove rail inquiry. Part 1 report. The Rt Hon Lord Cullen PC. HSE Books
- [Linate] (2004) Final report (as approved by ANSV Board on the 20th of January 2004): accident involved aircraft Boeing MD-87, registration SE-DMA and Cessna 525-A, registration D-IEVX Milano Linate airport, 8 Oct 2001. Agenzia Nazionale per la Sicurezza del Volo, Roma. Official translation of the original Italian version
- [Lufthansa2904] (1994) Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 Sep 1993. Unofficial translation of the original Polish report, Państwowa Komisja Badania Wypadków Lotniczych, Warsaw, by Peter Ladkin
- [Rules of the Air 2007] (2007) The rules of the air: regulations 2007. Statutory instruments 2007 No. 734; Civil aviation. The Stationery Office Limited, London
- [Tenerife] (1978) Joint report K.L.M.-P.A.A. 12.7.1978. Colisión aeronaves Boeing 747 PH-BUF de K.L.M. y Boeing 747 N 737 PA de PanAm en Los Rodeos (Tenerife) el 27 de marzo de 1977 [Collision of Boeing 747 PH-BUF of KLM and Boeing 747 N737PA of Pan Am at Los Rodeos (Tenerife) on 27 March 1977]. Ministerio de Transportes y Comunicaciones, Subsecretaría de Aviación Civil, Dirección de Transporte Aereo, Comisión de Accidentes, Madrid
- [Tenerife-NL] (1978) Raad voor de Luchtvaart. Netherlands Aviation Safety Board. Final report and comments of the Netherlands Aviation Safety Board of the investigation into the accident with the collision of KLM flight 4805, Boeing 747-206B, PH-BUF, and Pan American flight 1736, Boeing 747-121, N736PA at Tenerife Airport, Spain on 27 Mar 1977. (ICAO Circular 153-AN/56.) October
- [Überlingen] (2004) Investigation report AX001-1-2/02 [Report on mid-air collision near Überlingen on 1 July 2002]. Bundesstelle für Flugunfalluntersuchung. Official translation of the original German version
- [Zagreb] (1977) British Airways Trident G-AWZT; Inex Adria DC9 YU-AJR: Report on the collision in the Zagreb area, Yugoslavia, on 10 September 1976. Reprint of the report produced by the Yugoslav Federal Civil Aviation Administration Aircraft Accident Investigation Commission. Aircraft Accident Report 5/77. Her Majesty’s Stationery Office, London