Requirements for Trustworthy Artificial Intelligence – A Review

  • Conference paper
  • In: Advances in Networked-Based Information Systems (NBiS 2020)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1264)

Abstract

The field of algorithmic decision-making, particularly Artificial Intelligence (AI), has been changing drastically. With the availability of massive amounts of data and an increase in processing power, AI systems are now used in a vast number of high-stakes applications, so it has become vital to make these systems reliable and trustworthy. Different approaches have been proposed to make these systems trustworthy. In this paper, we review these approaches and summarize them according to the principles proposed by the European Union for trustworthy AI. This review provides an overview of the different principles that are important for making AI trustworthy.

Acknowledgments

This work was partially supported by the National Science Foundation under Grant No. 1547411 and by the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057) via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems.

Author information

Corresponding author

Correspondence to Arjan Durresi.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kaur, D., Uslu, S., Durresi, A. (2021). Requirements for Trustworthy Artificial Intelligence – A Review. In: Barolli, L., Li, K., Enokido, T., Takizawa, M. (eds) Advances in Networked-Based Information Systems. NBiS 2020. Advances in Intelligent Systems and Computing, vol 1264. Springer, Cham. https://doi.org/10.1007/978-3-030-57811-4_11
