Toward System Theoretical Foundations for Human–Autonomy Teams

Chapter in Systems Engineering and Artificial Intelligence

Abstract

Human–autonomy teaming specifically, and intelligent autonomous systems more generally, raise new challenges in how best to specify, model, design, and verify correctness at the system level. Equally important is extending this to monitoring and repairing systems, in real time and over their lifetimes, so that problems are detected and desired properties restored when they are lost. Systems engineering methods that address these issues typically take a broader, life-cycle view of the system and work at much higher levels of abstraction and decomposition than is common in the disciplines concerned with designing and developing the individual elements of intelligent autonomous systems. Nonetheless, many of the disciplines associated with autonomy have their own reasons for exploring higher-level abstractions, models, and ways of decomposing problems, and some of these may map well onto, or usefully inspire, systems engineering and related concerns such as system safety and human–system integration. This chapter provides a sampling of perspectives from scientific fields such as biology, neuroscience, economics and game theory, and psychology; methods for developing and assessing complex socio-technical systems from human factors and organizational psychology; and methods for engineering teams from computer science, robotics, and engineering. Areas of coverage include team organizational structure; allocation of roles, functions, and responsibilities; theories of how teammates can work together on tasks; teaming over longer durations; and formal modeling and composition of complex human–machine systems.

Author information

Correspondence to Marc Steinberg.

Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Steinberg, M. (2021). Toward System Theoretical Foundations for Human–Autonomy Teams. In: Lawless, W.F., Mittu, R., Sofge, D.A., Shortell, T., McDermott, T.A. (eds) Systems Engineering and Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-77283-3_5

  • DOI: https://doi.org/10.1007/978-3-030-77283-3_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77282-6

  • Online ISBN: 978-3-030-77283-3

  • eBook Packages: Computer Science, Computer Science (R0)
