
An interpretable neural network TV program recommendation based on SHAP

  • Original Article
  • Published in International Journal of Machine Learning and Cybernetics

Abstract

With the development of artificial intelligence, many fields are trying to solve problems with the powerful representation ability of neural networks. Recently, recommendation systems based on neural networks have become increasingly popular and their applications continue to expand, especially in TV program recommendation. However, the opacity of neural networks leaves users unable to fully trust the predicted recommendations, which increases the need for interpretable recommendation systems. This paper analyzes the interpretability of a neural-network-based recommendation model. We propose a convolutional neural TV program recommendation model based on auxiliary information (CNPR-AI) to learn program features effectively. First, we construct program dictionaries and leverage word embeddings to learn textual auxiliary information and generate program representations. We then use convolutional neural networks to learn user representations from these program representations. Finally, the program representation and user representation are fed into a prediction module to obtain the recommendation results. Because SHapley Additive exPlanations (SHAP) can provide interpretation for deep learning models, we use it to generate visual interpretations for our model, showing the role each TV program feature plays in predicting user interest. We believe these interpretations can help users better understand the learning mechanism of the neural network and reflect different users' preferences.
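
The pipeline described above (word embeddings over textual auxiliary information, a CNN that builds program and user representations, a prediction module, and SHAP for post-hoc attribution) can be illustrated with a short, hypothetical PyTorch sketch. This is not the authors' implementation: the layer sizes, vocabulary size, history length, the MLP prediction head, and the choice to explain the prediction module over dense (user, program) vectors are all assumptions made for illustration.

```python
# Minimal sketch of the kind of pipeline the abstract describes -- not the
# authors' code. Hyperparameters and the prediction head are assumptions.
import numpy as np
import torch
import torch.nn as nn
import shap

VOCAB, EMB, N_FILTERS, HIST = 5000, 64, 32, 10   # assumed hyperparameters


class ProgramEncoder(nn.Module):
    """Embed a program's textual auxiliary information (e.g. title, genre,
    description tokens) and pool it with a 1-D convolution."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB, padding_idx=0)
        self.conv = nn.Conv1d(EMB, N_FILTERS, kernel_size=3, padding=1)

    def forward(self, tokens):                              # tokens: (B, L)
        x = self.emb(tokens).transpose(1, 2)                # (B, EMB, L)
        return torch.relu(self.conv(x)).max(dim=2).values   # (B, N_FILTERS)


class CNPRSketch(nn.Module):
    """User representation = CNN over the representations of watched programs;
    the prediction module scores the concatenated (user, program) vector."""
    def __init__(self):
        super().__init__()
        self.program_enc = ProgramEncoder()
        self.user_conv = nn.Conv1d(N_FILTERS, N_FILTERS, kernel_size=3, padding=1)
        self.predict = nn.Sequential(nn.Linear(2 * N_FILTERS, 32),
                                     nn.ReLU(),
                                     nn.Linear(32, 1))

    def forward(self, history_tokens, candidate_tokens):
        # history_tokens: (B, HIST, L) watched programs; candidate_tokens: (B, L)
        b, h, l = history_tokens.shape
        hist = self.program_enc(history_tokens.reshape(b * h, l)).reshape(b, h, -1)
        user = torch.relu(self.user_conv(hist.transpose(1, 2))).max(dim=2).values
        prog = self.program_enc(candidate_tokens)
        return self.predict(torch.cat([user, prog], dim=1))  # predicted interest


# SHAP step (assumed workflow): explain the prediction module over dense
# (user, program) feature vectors, so each feature dimension's contribution
# to the predicted interest score can be visualized.
model = CNPRSketch().eval()
background = torch.randn(50, 2 * N_FILTERS)   # stand-in background vectors
samples = torch.randn(5, 2 * N_FILTERS)       # stand-in vectors to explain

explainer = shap.DeepExplainer(model.predict, background)
shap_values = explainer.shap_values(samples)
sv = np.asarray(shap_values).reshape(samples.shape[0], -1)  # normalize shape across shap versions
shap.summary_plot(sv, samples.numpy())
```

In the actual system, the background set and the samples to explain would be real (user, program) vectors produced by the trained encoders, and plotting SHAP values over interpretable program features (for example genre, channel, or time slot) would yield the per-feature visual explanations the abstract refers to.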


Data availability

The data that support the findings of this study are available from Beijing Gehua CATV Network Co., Ltd., but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. The data are, however, available from the authors upon reasonable request and with the permission of Beijing Gehua CATV Network Co., Ltd.


Acknowledgements

This work was supported by the National Key Research and Development Program (Nos. 2021YFF0901705 and 2021YFF0901700); the State Key Laboratory of Media Convergence and Communication, Communication University of China; the Fundamental Research Funds for the Central Universities; and the High-quality and Cutting-edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).

Author information

Corresponding author

Correspondence to Meiqi Ji.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yin, F., Fu, R., Feng, X. et al. An interpretable neural network TV program recommendation based on SHAP. Int. J. Mach. Learn. & Cyber. 14, 3561–3574 (2023). https://doi.org/10.1007/s13042-023-01850-5

