SAIL: A Social Artificial Intelligence Layer for Human-Machine Teaming
Human-machine teaming (HMT) is a promising paradigm for future situations in which humans and autonomous systems collaborate closely. This paper introduces SAIL, a design method and framework for the development of HMT-concepts. The starting point of SAIL is that an HMT can be developed in an iterative process in which an existing autonomous system is enhanced with social functions tailored to the specific context. The SAIL framework consists of a modular social layer between autonomous systems and human team members, in which all social capabilities that enable teamwork can be implemented. Within SAIL, HMT-modules are developed that construct these social capabilities; the modules are reusable across multiple domains.
In addition to introducing SAIL, we demonstrate the method and framework on a proof-of-concept task, from which we conclude that the method is a promising approach to designing, implementing, and evaluating HMT-concepts.
Keywords: Human-machine teaming · Social artificial intelligence · Collaboration · Design method
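The modular social layer described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all class and method names (`SocialModule`, `SocialLayer`, `ExplanationModule`, `to_human`) are hypothetical, chosen only to show the idea of reusable HMT-modules plugged into a layer that mediates between an autonomous system and its human teammates:

```python
from abc import ABC, abstractmethod
from typing import Optional


class SocialModule(ABC):
    """One reusable HMT-module implementing a single social capability.

    Hypothetical interface: each module inspects or enriches messages
    that flow from the autonomous system to the human team.
    """

    @abstractmethod
    def process(self, message: dict) -> Optional[dict]:
        """Return an enriched copy of the message, or None to pass it unchanged."""


class ExplanationModule(SocialModule):
    """Illustrative module: attaches a human-readable rationale to an action."""

    def process(self, message: dict) -> dict:
        enriched = dict(message)
        reason = enriched.get("reason", "of the current goals")
        enriched["explanation"] = (
            f"Action '{enriched['action']}' was chosen because {reason}."
        )
        return enriched


class SocialLayer:
    """Mediates traffic between an autonomous system and human team members.

    Modules are registered independently, so the same module can be
    reused in a different domain by registering it in another layer.
    """

    def __init__(self) -> None:
        self.modules: list[SocialModule] = []

    def register(self, module: SocialModule) -> None:
        self.modules.append(module)

    def to_human(self, message: dict) -> dict:
        # Each registered module may enrich the outgoing message in turn.
        for module in self.modules:
            result = module.process(message)
            if result is not None:
                message = result
        return message


layer = SocialLayer()
layer.register(ExplanationModule())
out = layer.to_human({"action": "reroute", "reason": "an obstacle was detected"})
```

The point of the sketch is the separation the paper argues for: the autonomous system itself stays unchanged, while social capabilities live in interchangeable modules behind one layer.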