This issue of CTW opens with a special discussion section on situation awareness (SA), which follows the earlier such section on workload [CTW 16(3)]. The concept of SA has gained massive purchase both in the academic literature and in accident investigations; indeed, it has entered everyday parlance. That popularity, however, does not guarantee its scientific validity, and a number of pertinent questions remain. Does SA describe a process or a product? Does it add anything to standard models of information processing? What is its relationship to the somewhat similar concept of being “out-of-the-loop”? Can it be reliably measured? And is it equivalent to mindfulness as used in the social and medical sciences, or to sensemaking as used in information science and organisational studies?

Ten years ago in this journal, Dekker and Hollnagel (2004) depicted SA as one of a series of “folk models” used in human factors, particularly in studies of human error and accident causation. Borrowing the term “folk model” from the philosopher Stich (1985), they charged that folk models such as lack of situation awareness or automation-induced complacency were commonsense redescriptions of human performance and human error that failed to explain how failure had actually arisen. Such concepts lacked substance and were often circular in terms of cause and effect: if an error occurred, it was attributed to a lack of situation awareness, and the only evidence for that lack of awareness was the error itself.

Other critics have accepted the term situation awareness but have acknowledged the difficulty of defining it. Thus, Sarter and Woods (1995) state that “a long tradition of research has not brought us much closer to being able to understand and support the phenomenon.” They further concede that “it appears to be futile to try to determine the most important contents of situation awareness, because the significance and meaning of any data are dependent on the context in which they appear.” One could perhaps question the utility of a concept that cannot be understood and that defies precise definition.

In this issue, Sidney Dekker returns to the fray. He associates the term situation awareness with a blame-the-individual culture in which deficiencies in the system are ignored. Norman (1991) has called this approach “blame and train,” and Dekker states that he does not wish to be associated with it.

Mica Endsley provides a spirited rejoinder, arguing that situation awareness is a valid, useful and distinct construct, backed by theory, that yields useful diagnoses of how to improve system design. She rejects the suggestion that use of the construct implies blaming the human operator.

Patrick Millot is a “glass-half-full” man. He acknowledges that the concept has some imperfections but argues that it should be augmented and improved rather than discarded. He also discusses collective situation awareness, linking it to a proposed framework for human–machine cooperation that includes a dimension of “knowing how to cooperate.” He proposes that such cooperation within teams can be supported by a common work space.

In their contribution, Paul Salmon, Guy Walker and Neville Stanton also take up the theme of shared or distributed situation awareness (DSA). They argue that DSA should replace the more traditional focus on individual situation awareness (here they accept much of Dekker’s argument) with a focus on the operation of the system as a whole, encompassing all the human operators involved as well as relevant system components. With such a systems focus, they contend, the situation awareness concept has great value.

Finally, this issue also includes a piece by Sidney Dekker and James Nyce on the measurement of workload. This contribution was inadvertently omitted from the earlier special section on workload, which was framed around the commentary of de Winter (2014). We hope that readers will appreciate it despite the delay caused by our editorial oversight.