What can possibly go wrong? Anticipatory work in space operations


Abstract

This paper explores how different forms of anticipatory work contribute to reliability in high-risk space operations. It is based on ethnographic fieldwork, participant observation and interviews, supplemented with video recordings from a control room responsible for operating a microgravity greenhouse at the International Space Station (ISS). Drawing on examples from different stages of a biological experiment on the ISS, we demonstrate how engineers, researchers and technicians work to anticipate and proactively mitigate possible problems. Space research is expensive and risky. The experiments are planned over the course of many years by a globally distributed network of organizations. Owing to the inaccessibility of the ISS, every trivial detail that could possibly cause a problem is subject to scrutiny. We discuss what we label anticipatory work: practices constituted of an entanglement of cognitive, social and technical elements involved in anticipating and proactively mitigating everything that might go wrong. We show how the nature of anticipatory work changes between the planning and operational phases of an experiment. In the planning phase, operators inscribe their anticipation into technology and procedures. In the operational phase, we show how troubleshooting involves the ability to look ahead in the evolving temporal trajectory of ISS operations and to juggle pre-planned fixes along these trajectories. A key objective of this paper is to illustrate how anticipation is shared between humans and different forms of technology. Moreover, it illustrates the importance of including considerations of temporality in safety and reliability research.




Notes

  1. Interestingly, Haavik (2014a) argues that the theoretical frameworks Normal Accidents Theory (NAT), High Reliability Organizations (HRO) and Resilience Engineering are relationally oriented in their initial conceptions.

  2. There have been debates between Hutchins and Latour on whether cognitive explanations are necessary (Giere and Moffatt 2003). Latour’s insistence that the agency of technology must be understood as symmetrical with the agency of humans is also controversial.

  3. Latour’s examples are trivial but pedagogical. Consult Ribes et al. (2013) for a more empirically relevant discussion of delegation (viz. a networked organization managing a computing grid).

  4. See Hale and Borys (2013) for a comprehensive review. See also the volume edited by Bieder and Bourrier (2013) and Antonsen et al. (2008).

  5. Fixating seeds means injecting chemicals into the seed cassettes to halt the biological processes within the seeds, so that they can later be studied on the ground in the state they were in at the time of fixation.

  6. The terminology used for the communication shadow of the ISS is loss of signal (LOS), while acquisition of signal (AOS) denotes that the connection is re-established. The availability of S-band and Ku-band is also commonly used to describe communication windows.

  7. Further details can be found in Mohammad et al. (2014). Watts et al.’s (1996) rather short overview also illustrates the key features of the voice-loop system in space operations and some of the important ways in which it contributes to robustness.

  8. Mohammad et al. (2014) provide a thorough description of the audiovisual setup used in the study.

  9. Signatures refer to telemetry parameters: color-coded visual indicators of errors or sensor readings that show whether the system is in a nominal or off-nominal state.

  10. For example, they can be ways for management to show the authorities that a lesson has been learned from the incident, or to assign responsibility (or blame) for specific issues.


References

  1. Almklov PG (2008) Standardized data and singular situations. Soc Stud Sci 38(6):873–897
  2. Almklov PG, Antonsen S (2014) Making work invisible: new public management and operational work in critical infrastructure sectors. Public Adm 92(2):477–492
  3. Almklov PG, Østerlie T, Haavik TK (2014) Situated with infrastructures: interactivity and entanglement in sensor data interpretation. J Assoc Inf Syst 15(5):263–286
  4. Antonsen S, Almklov P, Fenstad J (2008) Reducing the gap between procedures and practice: lessons from a successful safety intervention. Saf Sci Monit 12(1):1–16
  5. Bieder C, Bourrier M (eds) (2013) Trapping safety into rules: how desirable or avoidable is proceduralization? Ashgate Publishing, Farnham
  6. Dekker S (2006) Resilience engineering: chronicling the emergence of confused consensus. In: Hollnagel E, Woods DD, Leveson N (eds) Resilience engineering: concepts and precepts. Ashgate, Hampshire
  7. Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37(1):32–64
  8. Giere RN, Moffatt B (2003) Distributed cognition: where the cognitive and the social merge. Soc Stud Sci 33(2):301–310
  9. Haavik TK (2014b) On the ontology of safety. Saf Sci 67:37–43
  10. Haavik TK (2014c) Sensework. Comput Support Coop Work 23(3):269–298
  11. Hale A, Borys D (2013) Working to rule, or working safely? Part 1: a state of the art review. Saf Sci 55:207–221
  12. Hayes J (2012) Use of safety barriers in operational safety decision making. Saf Sci 50(3):424–432
  13. Hollnagel E (2015) Why is work-as-imagined different from work-as-done? In: Resilience in everyday clinical work. Ashgate, Farnham, pp 249–264
  14. Hollnagel E, Woods DD, Leveson N (2006) Resilience engineering: concepts and precepts. Gower Publishing Company, Aldershot
  15. Hutchins E (1995) Cognition in the wild. MIT Press, Cambridge
  16. Hutchins E, Klausen T (1996) Distributed cognition in an airline cockpit. In: Engeström Y, Middleton D (eds) Cognition and communication at work. Cambridge University Press, Cambridge, pp 15–34
  17. Kongsvik T, Almklov P, Haavik T, Haugen S, Vinnem JE, Schiefloe PM (2015) Decisions and decision support for major accident prevention in the process industries. J Loss Prev Process Ind 35:85–94
  18. LaPorte TR, Consolini PM (1991) Working in practice but not in theory: theoretical challenges of “high-reliability organizations”. J Public Adm Res Theory 1(1):19–48
  19. Latour B (1990) Technology is society made durable. Sociol Rev 38(S1):103–131
  20. Latour B (1999) Pandora’s hope: essays on the reality of science studies. Harvard University Press, Cambridge
  21. Mohammad AB, Johansen JP, Almklov P (2014) Reliable operations in control centers: an empirical study. In: Safety, reliability and risk analysis: beyond the horizon. Proceedings of the European safety and reliability conference, ESREL 2013, Amsterdam, The Netherlands, 29 September–2 October 2013. CRC Press
  22. Nathanael D, Marmaras N (2006) The interplay between work practices and prescription: a key issue for organizational resilience. In: Proceedings of the 2nd resilience engineering symposium, pp 229–237
  23. Orr JE (1996) Talking about machines: an ethnography of a modern job. Cornell University Press, Ithaca
  24. Østerlie T, Almklov PG, Hepsø V (2012) Dual materiality and knowing in petroleum production. Inf Organ 22(2):85–105
  25. Ribes D, Jackson S, Geiger S et al (2013) Artifacts that organize: delegation in the distributed organization. Inf Organ 23(1):1–14
  26. Roe E, Schulman PR (2008) High reliability management: operating on the edge. Stanford Business Books, Stanford University Press, Stanford
  27. Rosness R, Evjemo TE, Haavik TK, Wærø I (2015) Prospective sensemaking in the operating theatre. Cogn Technol Work. doi:10.1007/s10111-015-0346-y
  28. Schulman P, Roe E, Eeten MV, Bruijne MD (2004) High reliability and the management of critical infrastructures. J Conting Crisis Manag 12(1):14–28
  29. Stanton NA, Stewart R, Harris D et al (2006) Distributed situation awareness in dynamic systems: theoretical development and application of an ergonomics methodology. Ergonomics 49(12–13):1288–1311
  30. Suchman L (1987) Plans and situated actions. Cambridge University Press, New York
  31. Watts JC, Woods DD, Corban JM et al (1996) Voice loops as cooperative aids in space shuttle mission control. In: Proceedings of the ACM conference on computer supported cooperative work
  32. Weick KE (1993) The collapse of sensemaking in organizations: the Mann Gulch disaster. Adm Sci Q 38(4):628–652
  33. Weick KE, Sutcliffe KM (2001) Managing the unexpected: assuring high performance in an age of complexity. Jossey-Bass, San Francisco


Author information



Corresponding author

Correspondence to Jens Petter Johansen.



Cite this article

Johansen, J.P., Almklov, P.G. & Mohammad, A.B. What can possibly go wrong? Anticipatory work in space operations. Cogn Tech Work 18, 333–350 (2016). https://doi.org/10.1007/s10111-015-0357-8



Keywords

  • Anticipation
  • Space operations
  • Distributed cognition
  • Procedures
  • Resilience
  • Control room
  • Voice loop