
Decoding the Recommender System: A Comprehensive Guide to Explainable AI in E-commerce

  • Chapter
  • First Online:
Role of Explainable Artificial Intelligence in E-Commerce

Part of the book series: Studies in Computational Intelligence (SCI, volume 1094)


Abstract

The rapid growth of e-commerce has produced an increasingly competitive landscape in which businesses strive to provide personalized and engaging experiences to their customers. Recommender systems, powered by advanced algorithms and artificial intelligence, are central to this effort, curating tailored suggestions for products, services, and content. However, the decision-making processes of these systems are often complex and opaque, acting as black boxes that limit user understanding and trust. This chapter examines the role of explainable AI in the decision-making processes of recommender systems within the context of e-commerce, highlighting its importance in fostering trustworthiness, ensuring ethical and legal compliance, and facilitating debugging and model improvement. We explore various types of explanations, techniques for generating them, and real-world examples of explainable recommender systems. In conclusion, explainable AI is an indispensable component of recommender systems, playing a critical role in enhancing user trust and engagement and ultimately leading to improved customer satisfaction and increased revenue for e-commerce businesses. As AI systems continue to evolve and become more integrated into our lives, explainability will remain a crucial aspect of their design and implementation.
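The chapter's full text is not reproduced here, but the kind of explanation technique the abstract alludes to can be illustrated with a minimal item-based collaborative-filtering sketch that attaches a "because you rated X highly" rationale to each recommendation. The sketch below is a hypothetical illustration, not the authors' method: the toy rating matrix, item names, and the recommend_with_reason helper are invented for the example.

```python
# Minimal sketch: item-based collaborative filtering with neighbour-style explanations.
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 means unrated.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 4, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)
item_names = ["laptop", "mouse", "keyboard", "novel", "bookmark"]

def cosine_sim(matrix):
    """Pairwise cosine similarity between item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    unit = matrix / norms
    return unit.T @ unit

def recommend_with_reason(user_idx, top_n=2):
    """Score the user's unrated items and explain each score via the
    already-rated items that contributed most to it."""
    sim = cosine_sim(ratings)
    user = ratings[user_idx]
    rated = np.flatnonzero(user > 0)
    results = []
    for item in np.flatnonzero(user == 0):
        contributions = user[rated] * sim[item, rated]          # weighted "votes"
        score = contributions.sum() / (np.abs(sim[item, rated]).sum() + 1e-9)
        top_neighbours = rated[np.argsort(contributions)[::-1][:2]]
        reason = " and ".join(item_names[j] for j in top_neighbours)
        results.append((score, item_names[item], f"because you rated {reason} highly"))
    results.sort(reverse=True)
    return results[:top_n]

for score, name, reason in recommend_with_reason(user_idx=0):
    print(f"{name}: {score:.2f} ({reason})")
```

The same idea scales to neighbourhood models over sparse rating matrices, and model-agnostic tools such as LIME or SHAP can produce analogous feature attributions for learned rankers when the recommender is not itself interpretable.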



Author information


Corresponding author

Correspondence to Garima Sahu.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Sahu, G., Gaur, L. (2024). Decoding the Recommender System: A Comprehensive Guide to Explainable AI in E-commerce. In: Gaur, L., Abraham, A. (eds) Role of Explainable Artificial Intelligence in E-Commerce. Studies in Computational Intelligence, vol 1094. Springer, Cham. https://doi.org/10.1007/978-3-031-55615-9_3


  • DOI: https://doi.org/10.1007/978-3-031-55615-9_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-55614-2

  • Online ISBN: 978-3-031-55615-9

  • eBook Packages: Engineering, Engineering (R0)
