In Machines We Trust?

A chapter in Towards Trustworthy Artificial Intelligent Systems

Abstract

This chapter briefly addresses the semantic substance that gives body to the concept of [Trust], and the way its meaning varies slightly across distinct contexts of experience and across the different entities involved, in order to identify what is really at stake when we define [Trust] as a core criterion for accepting the development and deployment of artificial intelligent systems (whether embodied or non-embodied) in different domains of human life.

Highlighting the deep biological roots from which the concept stems, the paper posits the existence of a preconceptual primitive common to most living organisms. The human capacity for conceptualisation incorporates this primitive and, through a socially and culturally determined process, shapes its rich, meaning-nuanced character.

Claiming that the concept is inherently associated with an implicit or explicit risk assessment by the potentially affected entity, the paper identifies two fundamental cognitive dimensions involved in that assessment: an intuitive one and a rational one. Rational risk assessment is grounded in the individual’s direct experience or in the empirical evidence provided by the experience of others. In cases such as the adoption of a scientific or technological innovation, where end-users lack the competence to ground their own judgement, their assessment will rely heavily on certification issued by bodies publicly acknowledged as competent, which declare a product or procedure free of risk.

It is in this context that benchmarking, standardisation and certification assume a fundamental role across the entire process pipeline. Permanent and systematic monitoring of how these new technologies evolve in their different contexts of use, together with the continual updating of predefined sets of standards, will guarantee their quality, efficiency, reliability, integrity and safety.

The paper concludes by advocating the urgency of making standards in general, and standards for AIS in particular, openly available as open source, for the sake of a universally harmonious and beneficial technological development.

Notes

  1. Cf. https://en.wikipedia.org/wiki/In_God_We_Trust.

  2. For a comprehensive view of the different philosophical approaches to the concept, cf. https://plato.stanford.edu/entries/trust/.

  3. According to Engelmann and Herrmann (2016), evidence indicates that human friendships have evolved especially robust forms of trust that are relatively immune to the contingencies of a volatile and ever-changing environment.

  4. Steven Pinker (2005) notes that there is evidence that trust has a strong evolutionary value.

  5. Those that do not result from actual personal experiences or from reported ones.

  6. Independently of its lec.

  7. Game theory has its roots in an article by the Hungarian mathematician John von Neumann, who subsequently published the book Theory of Games and Economic Behavior with Oskar Morgenstern in 1944. Game theory models a social situation as a strategic game, which consists formally of three elements: the players who interact with each other; a set of available actions for each player; and a so-called pay-off function for each player. The possible outcomes an individual player can achieve depend not only on that player's own behaviour but also on the behaviour of the other players with whom he or she interacts. These three elements are sketched in code below.
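
    A minimal sketch (ours, not the chapter's) of the three formal elements just listed: the set of players, an action set per player, and a pay-off function mapping each joint action profile to a pay-off for every player. The numeric pay-offs are the canonical Prisoner's Dilemma values, chosen purely for illustration.

        # Illustrative only: the pay-off numbers are the standard
        # Prisoner's Dilemma values, not data from the chapter.
        from itertools import product

        players = ("Row", "Col")
        actions = {"Row": ("cooperate", "defect"),
                   "Col": ("cooperate", "defect")}

        # Pay-off function: joint action profile -> (Row's pay-off, Col's pay-off).
        payoffs = {
            ("cooperate", "cooperate"): (3, 3),
            ("cooperate", "defect"):    (0, 5),
            ("defect",    "cooperate"): (5, 0),
            ("defect",    "defect"):    (1, 1),
        }

        # A player's outcome is indexed by the *whole* profile: it depends on
        # the other player's action, not only on the player's own choice.
        for profile in product(actions["Row"], actions["Col"]):
            row_pay, col_pay = payoffs[profile]
            print(f"Row: {profile[0]:>9}, Col: {profile[1]:>9} -> {row_pay}, {col_pay}")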

  8. The author played this game when visiting the museum in 2019. What appears to be a video taken by another visitor playing the game is currently available on the internet at https://www.youtube.com/watch?v=kzBzi8LNk34.

  9. We lack evidence as to whether this behaviour can also be found in organisms such as plants.

  10. https://vax-trust.eu/. VAX-TRUST examines vaccine hesitancy as a broad societal phenomenon, aiming to identify the societal factors that shape beliefs and attitudes towards vaccination in contemporary societies.

  11. https://standards.ieee.org/industry-connections/ec/ead1e-infographic/.

  12. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf.

  13. For the sake of conciseness we will not analyse structures such as “X trusts Y with Z” or “X trusts Z to Y”, which we consider to be specifications of the basic (1) X trusts Y. A hypothetical sketch of this specification follows.
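
    A hypothetical sketch (our own names and types, not the author's notation) of the point above: the richer three-place structures simply add an entrusted object Z to the basic two-place relation, which is recovered by ignoring Z.

        # Hypothetical illustration; the field names are ours.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass(frozen=True)
        class Trusts:
            truster: str                      # X
            trustee: str                      # Y
            entrusted: Optional[str] = None   # Z; absent in the basic form (1)

        basic = Trusts("X", "Y")                    # (1) X trusts Y
        enriched = Trusts("X", "Y", entrusted="Z")  # X trusts Y with Z

        # Ignoring Z recovers the basic relation, i.e. the enriched structure
        # is a specification of (1), not a different relation.
        assert (enriched.truster, enriched.trustee) == (basic.truster, basic.trustee)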

  14. https://www.kellogg.northwestern.edu/trust-project/videos/goldberg-ep-3.aspx.

  15. The fact that many consumers view white-label products with suspicion, even though they are cheaper, is proof of this.

  16. https://osha.europa.eu/en/legislation/directives/directive-2006-42-ec-of-the-european-parliament-and-of-the-council.

  17. https://osha.europa.eu/en/legislation/directives/directive-2006-42-ec-of-the-european-parliament-and-of-the-council.

  18. Ethics, in this sense, (i) reflects on the nature of the changes technological innovation introduces into human reality by modifying behavioural patterns and lifestyles; (ii) identifies the eventual disruption caused to common values, accepted norms and relations, and consequently the eventual harm to individuals, to communities or to the environment; and (iii) defines a human-centred approach where human well-being and dignity, as well as respect for other species and the planet, are prioritised.

  19. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf.

  20. Cf. on this topic Fletcher and Charalambous [23] and Franklin [24].

  21. https://www.eu-robotics.net/robotics_league/smart-cities/about/index.html.

  22. Technical Committee: Meysam Basiri, Instituto Superior Técnico, Portugal; Pedro Lima, Instituto Superior Técnico, Portugal; Daniele Nardi, Sapienza University of Rome, Italy; Gianluca Bardaro, The Open University, UK. External Expert: Alessandro Saffiotti, Örebro University, Sweden.

  23. https://docs.google.com/document/d/122AS7SgOQe__Aj0V3fhoXb-766LgDYz9gZfcrGY7jMM/edit

  24. http://sciroc.org/e03-deliver-coffee-shop-orders/.

  25. https://metricsproject.eu/.

  26. https://aiindex.stanford.edu/ai-index-report-2021/.

  27. Keynote at the International Conference on Robot Ethics and Standards 2019, London South Bank University.

  28. https://media-exp1.licdn.com/dms/document/C4D1FAQE75SDL6s4O6Q/feedshare-document-pdf-analyzed/0/1649055234847?e=2147483647&v=beta&t=HUb8Wf1VEz8Ra1CI-oKiBKbW0OjLc6Fyxlq6MdegwIE.

  29. https://ec.europa.eu/info/business-economy-euro/product-safety-and-requirements/product-safety/consumer-product-safety_en.

  30. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0696.

References

  1. Aly A, Griffiths S, Stramandinoli F (2016) Metrics and benchmarks in human-robot interaction: recent advances in cognitive robotics. Cogn Syst Res. https://doi.org/10.1016/j.cogsys.2016.06.002

  2. Amigoni F, Bastianelli E, Bonarini A, Fontana G, Hochgeschwender N, Iocchi L, Schiaffonati V (2016) Competitions for benchmarking. IEEE Robot Autom Mag 22(3):53–61

  3. Arrow KJ (1974) The limits of organization. Norton, New York, NY, USA

  4. Berg J, Dickhaut J, McCabe K (1995) Trust, reciprocity, and social history. Games Econ Behav 10:122–142. https://doi.org/10.1006/game.1995.1027

  5. Bjørnskov C (2007) Determinants of generalized trust: a cross-country comparison. Public Choice 130:1–21. https://doi.org/10.1007/s11127-006-9069-1

  6. Bonsignorio F, Del Pobil AP (2015) Toward replicable and measurable robotics research. IEEE Robot Autom Mag 22(3):32–35

  7. Chatila R, Dignum V, Fisher M, Giannotti F, Morik K, Russell S, Yeung K (2021) Trustworthy AI. In: Braunschweig B, Ghallab M (eds) Reflections on artificial intelligence for humanity. Springer

  8. Coeckelbergh M (2012) Can we trust robots? Ethics Inf Technol 14(1):53–60. https://doi.org/10.1007/s10676-011-9279-1

  9. DIN, DKE (2020) German standardisation roadmap on artificial intelligence. https://www.din.de/resource/blob/772610/e96c34dd6b12900ea75b460538805349/normungsroadmap-en-data.pdf

  10. Dai W, Berleant D (2019) Benchmarking contemporary deep learning hardware and frameworks: a survey of qualitative metrics. In: 2019 IEEE first international conference on cognitive machine intelligence (CogMI). IEEE, Los Angeles, CA, USA, pp 148–155. arXiv:1907.03626. https://doi.org/10.1109/CogMI48466.2019.00029

  11. Damasio A (2005) Brain trust. Nature 435:571–572. https://doi.org/10.1038/435571a

  12. Engelmann JM, Herrmann E (2016) Chimpanzees trust their friends. Curr Biol 26:252–256

  13. Ethics Guidelines for Trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  14. European Commission (2019) Ethics guidelines for trustworthy AI. High-Level Expert Group on AI. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf

  15. European Commission (2019) Building trust in human-centric artificial intelligence. https://ec.europa.eu/jrc/communities/en/community/digitranscope/document/building-trust-human-centric-artificial-intelligence

  16. European Commission (2020) White paper on artificial intelligence: a European approach to excellence and trust. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

  17. European Commission (2020) Report from the commission to the European parliament, the council and the European economic and social committee. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

  18. European Commission (2020) A new industrial strategy for Europe, communication from the commission to the European parliament, the European council, the council, the European economic and social committee and the committee of the regions. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0102&from=EN

  19. European Commission (2020) New consumer agenda. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0696

  20. European Commission (2021) The 2021 coordinated plan on artificial intelligence is the next step in creating EU global leadership in trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review

  21. European Commission (2021) General product safety directive. https://ec.europa.eu/info/business-economy-euro/product-safety-and-requirements/product-safety/consumer-product-safety_en

  22. European Commission (2019) Communication from the commission to the European parliament, the council, the European economic and social committee and the committee of the regions on building trust in human-centric artificial intelligence. Brussels, COM(2019) 168 final

  23. Fletcher S, Charalambous G (2021) Trust in human robot collaboration. In: Ferreira MIA, Fletcher SR (eds) The 21st century industrial robot: when tools become collaborators. ISCA series. Springer

  24. Franklin C (2021) The role of standards in human-robot integration safety. In: Ferreira MIA, Fletcher S (eds) The 21st century industrial robot: when tools become collaborators. ISCA series. Springer

  25. Heidegger M (1977) The question concerning technology and other essays. Garland Publishing, New York, London. https://monoskop.org/images/4/44/Heidegger_Martin_The_Question_Concerning_Technology_and_Other_Essays.pdf

  26. IEEE, Ethically aligned design, from theory to practice. Chair: Raja Chatila. https://standards.ieee.org/industry-connections/ec/ead1e-infographic/

  27. IEEE, The IEEE global initiative on ethics of autonomous and intelligent systems. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead1e-introduction.pdf

  28. Johnson W (2007) Genetic and environmental influences on behavior: capturing all the interplay. Psychol Rev 114:423–440. https://doi.org/10.1037/0033-295X.114.2.423

  29. Le Roux M, Peterson S, Mougous J (2015) Bacterial danger sensing. J Mol Biol 427(23):3744–3753. Published online 2015 Oct 3. https://doi.org/10.1016/j.jmb.2015.09.018

  30. OECD (2021) Tools for trustworthy AI: a framework to compare implementation tools for trustworthy AI systems. OECD Digital Economy Papers, No. 312. https://www.oecd.org/science/tools-for-trustworthy-ai-008232ec-en.htm

  31. OECD (2022) Enabling effective AI policies. https://oecd-events.org/2022-ai-wips/session/c6e2c2b1-bd7a-ec11-94f6-a04a5e7d3e1c

  32. Oreskes N (2019) Why trust science? Princeton University Press, Princeton NJ

  33. Pinker S (2005) So how does the mind work? Mind Lang 20(1):1–24. Blackwell Publishing. https://stevenpinker.com/files/pinker/files/so_how_does_the_mind_work.pdf

  34. Riedl R, Javor A (2012) The biology of trust: integrating evidence from genetics, endocrinology, and functional brain imaging. J Neurosci Psychol Econ 5(2):63–91. https://doi.org/10.1037/a0026318

  35. Righetti F, Finkenauer C (2011) If you are able to control yourself, I will trust you: the role of perceived self-control in interpersonal trust. J Pers Soc Psychol 100:874–886. https://doi.org/10.1037/a0021827

  36. Studley M, Little H (2021) Robots in smart cities. In: Ferreira MIA (ed) How smart is your city?—technological innovation, ethics and inclusiveness. ISCA series. Springer

  37. Sullins JP (2020) Trust in robots. In: Simon J (ed) The Routledge handbook of trust and philosophy. Routledge, pp 313–325

  38. Todorov A, Baron SG, Oosterhof NN (2008) Evaluating face trustworthiness: a model-based approach. Soc Cogn Affect Neurosci 3:119–127. https://doi.org/10.1093/scan/nsn009

  39. Townley C, Garfield JL (2013) Public trust. In: Makela P, Townley C (eds) Trust: analytic and applied perspectives. Rodopi Press, Amsterdam, pp 95–107

  40. Wahlster W, Winterhalter C (eds) (2020) German standardisation roadmap on artificial intelligence. DKE German Commission for Electrical, Electronic & Information Technologies of DIN and VDE

  41. Welch M, Rivera R, Conway B, Yonkoski J, Lupton P, Giancola R (2005) Determinants and consequences of social trust. Sociol Inq 75(4):453–473

  42. Zak PJ, Kurzban R, Matzner WT (2004) The neurobiology of trust. Ann N Y Acad Sci 1032:224–227. https://doi.org/10.1196/annals.1314.025

  43. Zeder MA (2012) Pathways to animal domestication. In: Gepts P et al (eds) Biodiversity in agriculture: domestication, evolution, and sustainability. Cambridge University Press, Cambridge, pp 227–259

Author information

Correspondence to Maria Isabel Aldinhas Ferreira.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Ferreira, M.I.A. (2022). In Machines We Trust?. In: Ferreira, M.I.A., Tokhi, M.O. (eds) Towards Trustworthy Artificial Intelligent Systems. Intelligent Systems, Control and Automation: Science and Engineering, vol 102. Springer, Cham. https://doi.org/10.1007/978-3-031-09823-9_2
