Using formal methods to reason about taskload and resource conflicts in simulated air traffic scenarios


In complex environments like the modern air traffic system, interactions between human operators and other system agents can profoundly impact system performance. System complexity can make it difficult to anticipate all of the situations where issues can arise. Simulation and formal verification have each been used to explore the role of humans in complex systems, but both have limitations that restrict their usefulness. In this paper, we describe a method that uses formal verification and simulation synergistically to discover and evaluate, in high fidelity, interesting conditions related to human taskload and resource conflicts between agents. The core of this method is a formal modeling architecture that represents the original, agent-based simulation constructs using computationally efficient abstractions, while ensuring that the temporal and ordinal relationships between simulation events (actions) are represented realistically. Taskload for each agent is represented using a priority queue model in which only a limited number of actions can be performed or remembered by a human at a given time. Resources affected by agent behaviors are associated with actions so that resources can be reasoned about at the action level. We discuss our method and its formal architecture. We describe how the method can be used to find taskload and resource conflict conditions through the use of formal, checkable specification properties. We then use a simple air traffic example to demonstrate the ability of our method to find interesting taskload and resource conflict conditions around a simulation trace. The implications of this method are discussed and directions for future work are explored.
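To make the taskload abstraction concrete, the bounded priority-queue idea described above can be sketched in Python. This is an illustrative sketch only: the class name, the capacity value, and the drop-lowest-priority policy are assumptions made for this example, not the paper's formal model.

```python
import heapq


class AgentTaskQueue:
    """Illustrative bounded priority queue for an agent's pending actions.

    Hypothetical sketch: the capacity limit models the idea that a human
    can only perform or remember a limited number of actions at a time.
    """

    def __init__(self, capacity=3):
        self.capacity = capacity  # max actions the agent can hold at once
        self._heap = []           # min-heap keyed on negated priority

    def add_action(self, priority, action):
        """Queue an action; if over capacity, forget the lowest-priority one.

        Returns the dropped action, or None if nothing was forgotten.
        """
        heapq.heappush(self._heap, (-priority, action))
        if len(self._heap) > self.capacity:
            # the largest negated key corresponds to the lowest priority
            low = max(self._heap)
            self._heap.remove(low)
            heapq.heapify(self._heap)
            return low[1]
        return None

    def next_action(self):
        """Pop and return the highest-priority pending action, or None."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[1]
```

Under this sketch, adding an action to a full queue evicts the least important pending action, which is one simple way to represent a taskload-induced omission.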




  1. Note that specification property patterns are presented with a name, followed by \(\models\), followed by the specification property logic.
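As a hypothetical illustration of this naming convention (the pattern name, variables, and formula below are invented for this example, not drawn from the paper), a property bounding an agent's queued actions might be written as:

```latex
% Hypothetical example of the pattern notation: name, \models, then the logic.
% "TaskloadBound", "QueueSize", and "Max" are invented identifiers.
\mathit{TaskloadBound} \models \mathbf{G}\,(\mathit{QueueSize} \le \mathit{Max})
```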

  2. SAL was chosen for this work because of the expressiveness of its input language [18]. This included the ability to model real time, its support of both synchronous and asynchronous composition of modules (though only synchronous composition was ultimately used), and its inclusion of lambda operations.

  3. Note that these timings are not necessarily realistic. The presented tests were concerned with demonstrating the capabilities of our method, not with the realism of the air traffic scenario. More realistic scenarios will be the subject of future work. See Sect. 5.


  1. Alur R, Dill DL (1994) A theory of timed automata. Theor Comput Sci 126(2):183–235
  2. Bainbridge L (1983) Ironies of automation. Automatica 19(6):775–780
  3. Bolton ML, Bass EJ (2008) Using relative position and temporal judgments to identify biases in spatial awareness for synthetic vision systems. Int J Aviat Psychol 18(2):1050–8414
  4. Bolton ML, Bass EJ (2009) Comparing perceptual judgment and subjective measures of spatial awareness. Appl Ergon 40(4):597–607
  5. Bolton ML, Bass EJ (2013) Generating erroneous human behavior from strategic knowledge in task models and evaluating its impact on system safety with model checking. IEEE Trans Syst Man Cybern Syst 43(6):1314–1327
  6. Bolton ML, Bass EJ, Siminiceanu RI (2012) Generating phenotypical erroneous human behavior to evaluate human–automation interaction using model checking. Int J Hum Comput Stud 70(11):888–906
  7. Bolton ML, Bass EJ, Siminiceanu RI (2013) Using formal verification to evaluate human–automation interaction in safety critical systems: a review. IEEE Trans Syst Man Cybern Syst 43(3):488–503
  8. Bolton ML, Jimenez N, van Paassen MM, Trujillo M (2014) Automatically generating specification properties from task models for the formal verification of human–automation interaction. IEEE Trans Hum Mach Syst 44:561–575
  9. Bredereke J, Lankenau A (2002) A rigorous view of mode confusion. In: Proceedings of the 21st international conference on computer safety, reliability and security. Springer, London, pp 19–31
  10. Bredereke J, Lankenau A (2005) Safety-relevant mode confusions: modelling and reducing them. Reliab Eng Syst Saf 88(3):229–245
  11. Buth B (2004) Analyzing mode confusion: an approach using FDR2. In: Proceedings of the 23rd international conference on computer safety, reliability, and security. Springer, Berlin, pp 101–114
  12. Butler RW, Miller SP, Potts JN, Carreño VA (1998) A formal methods approach to the analysis of mode confusion. In: Proceedings of the 17th digital avionics systems conference. IEEE, Piscataway, pp C41/1–C41/8
  13. Campos JC, Harrison M (2001) Model checking interactor specifications. Autom Softw Eng 8(3):275–310
  14. Campos JC, Harrison MD (2011) Modelling and analysing the interactive behaviour of an infusion pump. In: Proceedings of the fourth international workshop on formal methods for interactive systems. EASST, Potsdam
  15. Chen X, Hsieh H, Balarin F, Watanabe Y (2003) Automatic trace analysis for logic of constraints. In: Proceedings of the design automation conference. IEEE, pp 460–465
  16. Clarke EM, Grumberg O, Peled DA (1999) Model checking. MIT Press, Cambridge
  17. Committee on Autonomy Research for Civil Aviation, Aeronautics and Space Engineering Board, Division on Engineering and Physical Sciences, National Research Council (2014) Autonomy research for civil aviation: toward a new era of flight. National Academy of Sciences, Washington
  18. de Moura L, Owre S, Shankar N (2003) The SAL language manual. Technical report CSL-01-01. Computer Science Laboratory, SRI International, Menlo Park
  19. de Moura L, Owre S, Rueß H, Rushby J, Shankar N, Sorea M, Tiwari A (2004) SAL 2. In: International conference on computer aided verification. Springer, pp 496–500
  20. Degani A (2004) Taming HAL: designing interfaces beyond 2001. Macmillan, New York
  21. Degani A, Heymann M (2002) Formal verification of human–automation interaction. Hum Factors 44(1):28–43
  22. Derrick J, North S, Simons T (2006) Issues in implementing a model checker for Z. In: International conference on formal engineering methods. Springer, pp 678–696
  23. Dutertre B, Sorea M (2004) Timed systems in SAL. Technical report NASA/CR-2002-211858, SRI International
  24. Emerson EA (1990) Temporal and modal logic. In: van Leeuwen J, Meyer AR, Nivat M, Paterson M, Perrin D (eds) Handbook of theoretical computer science, vol 16. MIT Press, Cambridge, pp 995–1072
  25. Feigh KM, Pritchett AR, Mamessier S, Gelman G (2014) Generic agent models for simulations of concepts of operation: part 2. J Aerosp Inf Syst 11:623–631
  26. Gelman G, Feigh KM, Rushby J (2013) Example of a complementary use of model checking and agent-based simulation. In: IEEE international conference on systems, man and cybernetics. IEEE, Piscataway, pp 900–905
  27. Gelman G, Feigh K, Rushby J (2014) Example of a complementary use of model checking and human performance simulation. IEEE Trans Hum Mach Syst 44(5):576–590
  28. Gelman GE (2012) Comparison of model checking and simulation to examine aircraft system behavior. Ph.D. thesis, Georgia Institute of Technology
  29. Hart SG, Staveland LE (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol 52:139–183
  30. Houser A, Ma LM, Feigh K, Bolton ML (2015) A formal approach to modeling and analyzing human taskload in simulated air traffic scenarios. In: 2015 international conference on complex systems engineering, pp 1–6
  31. Hu AJ (2008) Simulation vs. formal: absorb what is useful; reject what is useless. In: Proceedings of the third international Haifa verification conference. Springer, Berlin, pp 1–7
  32. IJtsma M, Hoekstra J, Bhattacharyya RP, Pritchett A (2015) Computational assessment of different air-ground function allocations. In: Eleventh USA/Europe air traffic management research and development seminar
  33. IJtsma M, Pritchett AR, Bhattacharyya RP (2015) Computational simulation of authority–responsibility mismatches in air-ground function allocation. In: Proceedings of the 18th international symposium on aviation psychology. Wright State University, Dayton
  34. Joshi A, Miller SP, Heimdahl MP (2003) Mode confusion analysis of a flight guidance system using formal methods. In: Proceedings of the 22nd digital avionics systems conference. IEEE, Piscataway, pp 2.D.1-1–2.D.1-12
  35. Loft S, Sanderson P, Neal A, Mooij M (2007) Modeling and predicting mental workload in en route air traffic control: critical review and broader implications. Hum Factors 49(3):376–399
  36. Lüttgen G, Carreño V (1999) Analyzing mode confusion via model checking. In: Proceedings of theoretical and practical aspects of SPIN model checking. Springer, Berlin, pp 120–135
  37. Mackworth JF (1964) Performance decrement in vigilance, threshold, and high-speed perceptual motor tasks. Can J Psychol 18(3):209–223
  38. McFarlane DC, Latorella KA (2002) The scope and importance of human interruption in human–computer interaction design. Hum Comput Interact 17(1):1–61
  39. Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63(2):81
  40. Moore J, Ivie R, Gledhill T, Mercer E, Goodrich M (2014) Modeling human workload in unmanned aerial systems. In: 2014 AAAI spring symposium series. AAAI, Palo Alto, pp 44–49
  41. Palmer E (1995) “Oops, it didn’t arm”—a case study of two automation surprises. In: Proceedings of the 8th international symposium on aviation psychology. Wright State University, Dayton, pp 227–232
  42. Pan D, Bolton ML (2016) Properties for formally assessing the performance level of human–human collaborative procedures with miscommunications and erroneous human behavior. Int J Ind Ergon. doi:10.1016/j.ergon.2016.04.001
  43. Popescu V, Clarke J, Feigh KM, Feron E (2011) ATC taskload inherent to the geometry of stochastic 4-D trajectory flows with flight technical errors. CoRR abs/1102.1660
  44. Pritchett A, Feigh K (2011) Simulating first-principles models of situated human performance. In: Proceedings of the IEEE first international multi-disciplinary conference on cognitive methods in situation awareness and decision support. IEEE, Piscataway, pp 144–151
  45. Pritchett AR (2013) Simulation to assess safety in complex work environments. In: Lee JD, Kirlik A (eds) The Oxford handbook of cognitive engineering. Oxford University Press, New York, pp 352–366
  46. Pritchett AR, Feigh KM, Kim SY, Kannan SK (2014) Work models that compute to describe multiagent concepts of operation: part 1. J Aerosp Inf Syst 11(10):610–622
  47. Pritchett AR, Kim SY, Feigh KM (2014) Measuring human–automation function allocation. J Cogn Eng Decis Mak 8(1):52–77
  48. Pritchett AR, Kim SY, Feigh KM (2014) Modeling human–automation function allocation. J Cogn Eng Decis Mak 8(1):33–51
  49. Reason J (1990) Human error. Cambridge University Press, New York
  50. Rushby J (2002) Using model checking to help discover mode confusions and other automation surprises. Reliab Eng Syst Saf 75(2):167–177
  51. Rushby J, Crow J, Palmer E (1999) An automated method to detect potential mode confusions. In: Proceedings of the 18th digital avionics systems conference. IEEE, Piscataway, pp 4.B.2-1–4.B.2-6
  52. Sarter NB, Woods DD (1995) How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Hum Factors 37(1):5–19
  53. Sheridan TB, Parasuraman R (2005) Human–automation interaction. Rev Hum Factors Ergon 1(1):89–129
  54. Sherry L, Feary M, Polson P, Palmer E (2000) Formal method for identifying two types of automation surprises. Technical report C69-5370-016, Honeywell, Phoenix
  55. Smith G, Wildman L (2005) Model checking Z specifications using SAL. In: ZB 2005: formal specification and development in Z and B. Springer, pp 85–103
  56. Stocker R, Rungta N, Mercer E, Raimondi F, Holbrook J, Cardoza C, Goodrich M (2015) An approach to quantify workload in a system of agents. In: Proceedings of the 14th international conference on autonomous agents and multiagent systems. IFAAMAS, Liverpool
  57. Stuart DA, Brockmeyer M, Mok AK, Jahanian F (2001) Simulation–verification: biting at the state explosion problem. IEEE Trans Softw Eng 27(7):599–617
  58. Tiwari A (2003) HybridSAL: modeling and abstracting hybrid systems. Technical report
  59. Tiwari A, Khanna G (2002) Series of abstractions for hybrid automata. In: Tomlin CJ, Greenstreet MR (eds) Proceedings of the 5th international workshop on hybrid systems: computation and control. Springer, Berlin, pp 465–478. doi:10.1007/3-540-45873-5_36
  60. Vela A, Feigh KM, Solak S, Singhose W, Clarke JP (2012) Formulation of reduced-taskload optimization models for conflict resolution. IEEE Trans Syst Man Cybern A Syst Hum 42(6):1552–1561
  61. Weyers B, Bowen J, Dix A, Palanque P (eds) (2017) The handbook of formal methods in human–computer interaction. Springer, Cham
  62. Wheeler PH (2007) Aspects of automation mode confusion. Master’s thesis, Massachusetts Institute of Technology, Cambridge
  63. Wing JM (1990) A specifier’s introduction to formal methods. Computer 23(9):8, 10–22, 24
  64. Yasmeen A, Feigh KM, Gelman G, Gunter EL (2012) Formal analysis of safety-critical system simulations. In: Proceedings of the 2nd international conference on application and theory of automation in command and control systems. IRIT Press, pp 71–81
  65. Yuan J, Shen J, Abraham J, Aziz A (1997) On combining formal and informal verification. In: Grumberg O (ed) Proceedings of the 9th international conference on computer aided verification. Springer, Berlin, pp 376–387. doi:10.1007/3-540-63166-6_37



This work was supported by the grant “Scenario-Based Verification and Validation of Autonomy and Authority” from the NASA Ames Research Center under Award Number NNX13AB71A.

Author information



Corresponding author

Correspondence to Matthew L. Bolton.

Additional information

We presented an earlier version of this manuscript at the 2015 International Conference on Complex Systems Engineering [30]. This new manuscript makes substantial contributions beyond what was reported in [30]. Specifically, we present additional specification properties for generating different types of simulation scenarios. We also report results showing how the model checking analyses connect with the agent-based simulation. This new paper also features an extended discussion.


Cite this article

Houser, A., Ma, L.M., Feigh, K.M. et al. Using formal methods to reason about taskload and resource conflicts in simulated air traffic scenarios. Innovations Syst Softw Eng 14, 1–14 (2018).



  • Formal methods
  • Simulation
  • Model checking
  • Human–automation interaction
  • Taskload
  • Workload
  • Mode confusion
  • Air traffic control
  • Agent-based simulation
  • Autonomy