Human-Computer Interaction and Explainability: Intersection and Terminology

  • Conference paper
  • In: Explainable Artificial Intelligence (xAI 2023)

Abstract

Human-computer interaction (HCI) is generally considered the broader domain encompassing the study of the relationships between humans and various technological artifacts or systems. Explainable AI (xAI) contributes to HCI by helping humans better understand computers or AI systems, which in turn fosters better interaction. The term “explainability” is sometimes used interchangeably with closely related terms such as interpretability or understandability. The same can be said of the term “interaction”: it is a very broad way to describe the relationship between humans and technologies, which is why it is often replaced or complemented by more precise terms such as cooperation, collaboration, teaming, symbiosis, and integration. In the same vein, the technologies themselves are represented by several terms, including computer, machine, AI, agent, and robot. However, each of these terms (for both technologies and relationships) has its own specificities and properties that need to be clearly defined. Currently, the definitions of these various terms are not well established in the literature, and their usage across contexts is ambiguous. The goals of this paper are threefold: first, to clarify the terminology in the HCI domain representing the technologies and their relationships with humans, along with a few concepts specific to xAI; second, to highlight the role that xAI plays or can play in the HCI domain; third, to study the evolution and trends in the usage of explainability and interpretability alongside HCI terminology over the years, and to highlight observations from the last three years.



Author information


Correspondence to Arthur Picard or Yazan Mualla.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Picard, A., Mualla, Y., Gechter, F., Galland, S. (2023). Human-Computer Interaction and Explainability: Intersection and Terminology. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1902. Springer, Cham. https://doi.org/10.1007/978-3-031-44067-0_12


  • DOI: https://doi.org/10.1007/978-3-031-44067-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44066-3

  • Online ISBN: 978-3-031-44067-0

  • eBook Packages: Computer Science (R0)
