
SAIL: A Social Artificial Intelligence Layer for Human-Machine Teaming

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10978)

Abstract

Human-machine teaming (HMT) is a promising paradigm for future situations in which humans and autonomous systems collaborate closely. This paper introduces SAIL, a design method and framework for developing HMT concepts. SAIL's starting point is that an HMT can be developed in an iterative process in which an existing autonomous system is enhanced with social functions tailored to its specific context. The SAIL framework consists of a modular social layer between autonomous systems and human team members, in which all social capabilities needed for teamwork can be implemented. Within SAIL, HMT modules are developed that construct these social capabilities, and these modules are reusable across multiple domains.

In addition to introducing SAIL, we demonstrate the method and framework on a proof-of-concept task, from which we conclude that SAIL is a promising approach to designing, implementing, and evaluating HMT concepts.
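To make the architecture in the abstract concrete, the following is a minimal Python sketch of a social layer with pluggable HMT modules. It is an illustration of the general idea only, not the paper's actual interfaces: every class, method, and state key below (HMTModule, SocialLayer, "goal", and so on) is a hypothetical name chosen for this example.

    from abc import ABC, abstractmethod

    class HMTModule(ABC):
        """One reusable social capability, e.g. explanation or progress reporting."""

        @abstractmethod
        def process(self, system_state: dict) -> list[str]:
            """Turn raw autonomous-system state into messages for the human team."""

    class ExplanationModule(HMTModule):
        """Hypothetical module that verbalises the system's current goal."""

        def process(self, system_state: dict) -> list[str]:
            goal = system_state.get("goal")
            return [f"I am currently working on: {goal}"] if goal else []

    class SocialLayer:
        """Sits between an existing autonomous system and its human team members.

        Modules can be added or removed without touching the underlying system,
        mirroring the iterative, context-tailored development the abstract describes.
        """

        def __init__(self, modules: list[HMTModule] | None = None) -> None:
            self.modules = list(modules or [])

        def add_module(self, module: HMTModule) -> None:
            self.modules.append(module)

        def report(self, system_state: dict) -> list[str]:
            # Collect the social output of every installed module.
            messages: list[str] = []
            for module in self.modules:
                messages.extend(module.process(system_state))
            return messages

    # Example: enhance a bare autonomous system with one social capability.
    layer = SocialLayer([ExplanationModule()])
    print(layer.report({"goal": "search sector B"}))

Because all modules share one narrow interface, a capability such as explanation can be developed once and reused in other domains, which is the reusability the framework aims for.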

Keywords

Human-machine teaming · Social artificial intelligence · Collaboration · Design method


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

TNO, The Hague, The Netherlands
