Abstract
The semantic substance that gives body to the concept of [Trust], and the ways it varies slightly across distinct contexts of experience and across the different entities involved, is briefly addressed here in order to identify what is really at stake when we define [Trust] as a core criterion for accepting the development and deployment of artificial intelligent systems (whether embodied or non-embodied) in different domains of human life.
Highlighting the deep biological roots from which the concept stems, the paper posits the existence of a preconceptual primitive common to most living organisms. The human conceptualising capacity incorporates this primitive and, through a socially and culturally determined process, shapes its richly nuanced meaning.
Claiming that the concept is inherently associated with an implicit or explicit risk assessment by the potentially affected entity, the paper identifies two fundamental cognitive dimensions involved in that assessment: an intuitive one and a rational one. The rational risk assessment rests on the individual’s direct experience or on the empirical evidence provided by the experience of others. In cases such as the adoption of a scientific or technological innovation, where end-users lack the competence to ground their own judgement, their assessment will rely heavily on certification issued by bodies, publicly acknowledged as competent, that attest to the risk-free character of a product or procedure.
It is in this context that benchmarking, standardisation and certification assume a fundamental role across the entire pipeline. The permanent and systematic monitoring of how all these new technologies evolve in their different contexts of use, together with the updating of predefined sets of standards, will guarantee their quality, efficiency, reliability, integrity and safety.
The paper concludes by advocating the urgency of making standards in general, and the standardisation of AIS in particular, available as open source for the sake of a universally harmonious and beneficial technological development.
Notes
- 2.
For a comprehensive view on the different philosophical approaches to the concept cf. https://plato.stanford.edu/entries/trust/.
- 3.
According to Engelmann and Herrmann (2016), evidence indicates that human friendships have evolved especially robust forms of trust that are relatively immune to the contingencies of a volatile and ever-changing environment.
- 4.
Steven Pinker (2005) notes that there is evidence that trust has a strong evolutionary value.
- 5.
Those that do not result from actual personal experience or from reported experience.
- 6.
Independently of its lec.
- 7.
Game theory has its roots in an article by the Hungarian mathematician John von Neumann, who subsequently published his book on the Theory of Games and Economic Behavior with Oskar Morgenstern in 1944. Game theory models a social situation as a strategic game. Such a game consists formally of three elements: the players who interact with each other; a set of available actions for each player; and a so-called pay-off function for each player. The possible outcomes that an individual player can achieve depend not only on the player's own behaviour but also on the other players with whom he or she interacts.
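The three formal elements named in this note can be made concrete with a small sketch. The example below is not from the chapter: it uses the investment ("trust") game of Berg, Dickhaut and McCabe (1995), cited in the references, and the endowment and multiplier values are illustrative assumptions.

```python
# Minimal sketch of a strategic game's three formal elements -- players,
# action sets, pay-off functions -- using the Berg et al. (1995) trust game.
# ENDOWMENT and MULTIPLIER are illustrative values, not taken from the chapter.

ENDOWMENT = 10   # units initially given to the trustor
MULTIPLIER = 3   # each unit sent is tripled before reaching the trustee

def payoffs(sent: int, returned: int) -> tuple[int, int]:
    """Pay-off function: maps both players' actions to their outcomes.

    The trustor's action is `sent` (0..ENDOWMENT); the trustee receives
    sent * MULTIPLIER and chooses how much of it to return.
    """
    assert 0 <= sent <= ENDOWMENT
    assert 0 <= returned <= sent * MULTIPLIER
    trustor = ENDOWMENT - sent + returned
    trustee = sent * MULTIPLIER - returned
    return trustor, trustee

# Each outcome depends on both players' behaviour, not just one's own:
print(payoffs(sent=10, returned=15))  # full trust, fairly repaid -> (15, 15)
print(payoffs(sent=0, returned=0))    # no trust extended         -> (10, 0)
```

Trusting and being repaid leaves both players better off than the no-trust baseline, which is why this game is the standard experimental operationalisation of trust as a risk-taking decision.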
- 8.
The author played this game when visiting the museum in 2019.
What appears to be a video taken by another visitor playing the game is currently available on the internet at https://www.youtube.com/watch?v=kzBzi8LNk34.
- 9.
We lack evidence as to whether this behaviour can also be found in organisms such as plants.
- 10.
https://vax-trust.eu/. VAX-TRUST examines vaccine hesitancy as a broad societal phenomenon, aiming to identify the societal factors that shape beliefs and attitudes towards vaccination in current societies.
- 13.
For the sake of conciseness, we will not analyse structures such as “X trusts Y with Z” or “X trusts Z to Y”, which we consider to be specifications of the basic form (1) X trusts Y.
- 15.
The fact that for many consumers white-label products, though cheaper, are viewed with suspicion is proof of this.
- 18.
Ethics, in this sense, (i) reflects on the nature of the changes technological innovation introduces into human reality by modifying behavioural patterns and lifestyles; (ii) identifies the eventual disruption caused to common values, accepted norms and relations, and consequently the eventual harm to individuals, to communities or to the environment; (iii) defines a human-centred approach in which human well-being and dignity, as well as respect for other species and the planet, are prioritised.
- 22.
Technical Committee: Meysam Basiri, Instituto Superior Técnico, Portugal, Pedro Lima, Instituto Superior Técnico, Portugal, Daniele Nardi, Sapienza University of Rome, Italy, Gianluca Bardaro, The Open University, UK, External Expert: Alessandro Saffiotti, Örebro University, Sweden.
- 27.
Keynote, at the International Conference on Robot Ethics and Standards 2019, London Southbank University.
References
Aly A, Griffiths S, Stramandinoli F (2016) Metrics and benchmarks in human-robot interaction: recent advances in cognitive robotics. Cogn Syst Res. https://doi.org/10.1016/j.cogsys.2016.06.002
Amigoni F, Bastianelli E, Bonarini A, Fontana G, Hochgeschwender N, Iocchi L, Schiaffonati V (2016) Competitions for benchmarking. IEEE Robot Autom Mag 22(3):53–61
Arrow KJ (1974) The limits of organization. Norton, New York, NY, USA
Berg J, Dickhaut J, McCabe K (1995) Trust, reciprocity, and social-history. Games Econ Behav 10:122–142. https://doi.org/10.1006/game.1995.1027
Bjornskov C (2007) Determinants of generalized trust: a cross-country comparison. Pub Choice 130:1–21. https://doi.org/10.1007/s11127-006-9069-1
Bonsignorio F, Del Pobil AP (2015) Toward replicable and measurable robotics research. IEEE Robot Autom Mag 22(3):32–35
Chatila R, Dignum V, Fisher M, Giannoti F, Morik K, Russell S, Yening K (2021) Trustworthy AI. In: Braunschweig B, Ghallab M (eds) Reflections on artificial intelligence for humanity. Springer
Coeckelbergh M (2012) Can we trust robots? Ethics Inf Technol 14(1):53–60. https://doi.org/10.1007/s10676-011-9279-1
DIN, DKE (2020) German standardisation roadmap on artificial intelligence. https://www.din.de/resource/blob/772610/e96c34dd6b12900ea75b460538805349/normungsroadmap-en-data.pdf
Dai W, Berleant D (2019) Benchmarking contemporary deep learning hardware and frameworks: a survey of qualitative metrics. In: 2019 IEEE first international conference on cognitive machine intelligence (CogMI). IEEE, Los Angeles, CA, USA, pp 148–155. arXiv:1907.03626. https://doi.org/10.1109/CogMI48466.2019.00029
Damasio A (2005) Brain trust. Nature 435:571–572. https://doi.org/10.1038/435571a
Engelmann JM, Herrmann E (2016) Chimpanzees trust their friends. Curr Biol 26:252–256
European Commission (2019) Ethics guidelines for trustworthy AI. High level working group on AI https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
European Commission (2019) Building trust in human-centric artificial intelligence. https://ec.europa.eu/jrc/communities/en/community/digitranscope/document/building-trust-human-centric-artificial-intelligence
European Commission (2020) White paper on artificial intelligence: a European approach to excellence and trust. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
European Commission (2020) Report from the commission to the European parliament, the council and the European economic and social committee. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics
European Commission (2020) A new industrial strategy for Europe, communication from the commission to the European parliament, the European council, the council, the European economic and social committee and the committee of the regions. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0102&from=EN
European Commission (2020) New consumer agenda. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0696
European Commission (2021) The 2021 coordinated plan on artificial intelligence is the next step in creating EU global leadership in trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review
European Commission (2021) General product safety directive. https://ec.europa.eu/info/business-economy-euro/product-safety-and-requirements/product-safety/consumer-product-safety_en
European Community (2019) Communication from the commission to the European parliament, the council, the European economic and social committee and the committee of the regions on building trust in human-centric artificial intelligence, Brussels, COM, 168 final
Fletcher S, Charalambous G (2021) Trust in human robot collaboration. In: Ferreira MIA, Fletcher SR (eds) The 21st century industrial robot: when tools become collaborators. ISCA series. Springer
Franklin C (2021) The role of standards in human-robot integration safety. In: Ferreira MIA, Fletcher S (eds) The 21st century industrial robot: when tools become collaborators. ISCA series. Springer
Heidegger M (1977) The question concerning technology and other essays. Garland Publishing Incorporated. New York, London. https://monoskop.org/images/4/44/Heidegger_Martin_The_Question_Concerning_Technology_and_Other_Essays.pdf
IEEE, Ethically aligned design, from theory to practice. Chair: Raja Chatila. https://standards.ieee.org/industry-connections/ec/ead1e-infographic/
IEEE, The IEEE global initiative on ethics of autonomous and intelligent systems. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead1e-introduction.pdf
Johnson W (2007) Genetic and environmental influences on behavior: capturing all the interplay. Psychol Rev 114:423–440. https://doi.org/10.1037/0033-295X.114.2.423
Le Roux M, Peterson S, Mougous J (2015) Bacterial danger sensing. J Mol Biol 427(23):3744–3753. Published online 2015 Oct 3. https://doi.org/10.1016/j.jmb.2015.09.018
OECD (2021) Tools for trustworthy AI: a framework to compare implementation tools for trustworthy AI systems. OECD Digit Econ Pap, nº312. https://www.oecd.org/science/tools-for-trustworthy-ai-008232ec-en.htm
OECD (2022) Enabling effective AI policies. https://oecd-events.org/2022-ai-wips/session/c6e2c2b1-bd7a-ec11-94f6-a04a5e7d3e1c
Oreskes N (2019) Why trust science? Princeton University Press, Princeton NJ
Pinker S (2005) So how does the mind work? Mind Lang 20(1):1–24. https://stevenpinker.com/files/pinker/files/so_how_does_the_mind_work.pdf
Riedl R, Javor A (2012) The biology of trust: integrating evidence from genetics, endocrinology, and functional brain imaging. J Neurosci Psychol Econ 5(2):63–91. https://doi.org/10.1037/a0026318
Righetti F, Finkenauer C (2011) If you are able to control yourself, I will trust you: the role of perceived self-control in interpersonal trust. J Pers Soc Psychol 100:874–886. https://doi.org/10.1037/a0021827
Studley M, Little H (2021) Robots in smart cities. In: Ferreira MIA (ed) How smart is your city?—technological innovation, ethics and inclusiveness. ISCA series. Springer
Sullins JP (2020) Trust in robots. Simon 2020:313–325
Todorov A, Baron SG, Oosterhof NN (2008) Evaluating face trustworthiness: a model-based approach. Soc Cogn Affect Neurosci 3:119–127. https://doi.org/10.1093/scan/nsn009
Townley C, Garfield JL (2013) Public trust. In: Makela P, Townley C (eds) Trust: analytic and applied perspectives. Rodopi Press, Amsterdam, pp 95–107
Wahlster W, Winterhalter C (2020) German standardisation roadmap on artificial intelligence, November 2021. DKE German Commission for Electrical, Electronic & Information Technologies of DIN and VDE
Welch M, Rivera R, Conway B, Yonskoski J, Lupton P, Giancola R (2005) Determinants and consequences of social trust. Sociol Inq. Wiley Online Library
Zak PJ, Kurzban R, Matzner WT (2004) The neurobiology of trust. Ann N Y Acad Sci 1032:224–227. https://doi.org/10.1196/annals.1314.025
Zeder M (2012) Pathways to animal domestication. Biodivers Agric Domest Evol Sustain 227–259
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Ferreira, M.I.A. (2022). In Machines We Trust?. In: Ferreira, M.I.A., Tokhi, M.O. (eds) Towards Trustworthy Artificial Intelligent Systems. Intelligent Systems, Control and Automation: Science and Engineering, vol 102. Springer, Cham. https://doi.org/10.1007/978-3-031-09823-9_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09822-2
Online ISBN: 978-3-031-09823-9
eBook Packages: Intelligent Technologies and Robotics (R0)