Abstract
Interactions between humans and AI-enabled systems occur increasingly often, frequently without the end users' awareness. As AI-enabled systems become more pervasive and more capable of helping humans make decisions, these systems must be designed around human needs, and end users need a general understanding of the benefits and limitations of AI. Humans who understand both the benefits and limitations of AI systems can make more informed decisions that better satisfy their own needs. Toward this end, we have developed a game based on Max-Neef's Fundamental Human Needs scale to inform human-AI interaction design. We have designed and prototyped the game to serve as a data collection and user research tool. The tool presents players with various human-AI interaction scenarios in order to reveal how much agency humans are comfortable ceding to an AI-enabled system, based on the personal needs that the system satisfies. Through the design mechanics and the gameplay of real players, insights can be gathered about human-AI interaction dynamics, which can in turn inform the design of future AI systems.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kim, M.K., Trewhitt, E. (2022). SatisfAI: A Serious Tabletop Game to Reveal Human-AI Interaction Dynamics. In: Sottilare, R.A., Schwarz, J. (eds) Adaptive Instructional Systems. HCII 2022. Lecture Notes in Computer Science, vol 13332. Springer, Cham. https://doi.org/10.1007/978-3-031-05887-5_13
Print ISBN: 978-3-031-05886-8
Online ISBN: 978-3-031-05887-5