Abstract
This paper presents a comparative study of the three main frameworks for designing trust in AI: specifications, principles, and levels of control, in order to address the rising concerns around Highly Automated Systems (HAS). We also examine trust design in four case studies specifically developed to address these concerns in the area of health and wellbeing. Based on the results, levels of control emerge as the most reliable option for designing trust in Highly Automated Systems, as they provide a more structured focus than specifications and principles. However, principles enhance philosophical inquiry to frame the intended outcome, and specifications provide a constructive space for product development. In this context, the authors recommend integrating all three frameworks into a multi-dimensional, cross-disciplinary framework to build and extend robustness throughout the entire interaction lifecycle in the development of future applications.
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Galdon, F., Hall, A., Ferrarello, L. (2020). Designing Trust in Artificial Intelligence: A Comparative Study Among Specifications, Principles and Levels of Control. In: Ahram, T., Taiar, R., Gremeaux-Bader, V., Aminian, K. (eds) Human Interaction, Emerging Technologies and Future Applications II. IHIET 2020. Advances in Intelligent Systems and Computing, vol 1152. Springer, Cham. https://doi.org/10.1007/978-3-030-44267-5_14
DOI: https://doi.org/10.1007/978-3-030-44267-5_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-44266-8
Online ISBN: 978-3-030-44267-5
eBook Packages: Engineering (R0)