
A comprehensive approach for the evaluation of recommender systems using implicit feedback

  • Original Research
  • Published in: International Journal of Information Technology

Abstract

Evaluation strategies are essential in assessing the degree of satisfaction that recommender systems can provide to users. Evaluation schemes rely heavily on user feedback; however, this feedback may be casual, biased or spam, which leads to an inappropriate evaluation. In this paper, a comprehensive approach for the evaluation of recommender systems is proposed. Implicit user feedback is collected for different products on the basis of the reviews provided for them. A novel sincerity check mechanism is suggested to filter out biased and casual feedback among the users. Further, a mathematical model is presented to classify the products' preference criteria. The lists of preferred products yield different rankings. A rank aggregation algorithm is used to obtain a final ranking, which is compared with the base ranking to be evaluated. Hence, with the help of the suggested methodology, an evaluation strategy is obtained that avoids the risk of fake and biased feedback. A comparison of the proposed approach with existing schemes shows its superiority on various parameters. It is envisaged that the proposed evaluation scheme lays a platform for users to assess recommender systems for easy and reliable online shopping.
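
As a concrete illustration of the aggregation and comparison steps described above, the sketch below combines several criterion-wise product rankings into one aggregated ranking and measures its distance from a base ranking. This is a minimal sketch only: the Borda count aggregation rule, the Spearman footrule distance, and all product identifiers are illustrative assumptions, not the exact algorithm or data used in the paper.

```python
# Minimal sketch (assumptions): aggregate several product rankings, each
# produced under a different preference criterion, with Borda count, then
# compare the aggregated ranking against a base ranking using the Spearman
# footrule. Product ids and rankings below are hypothetical.

def borda_aggregate(rankings):
    """Each ranking is a list of product ids, best first."""
    scores = {}
    n = len(rankings[0])
    for ranking in rankings:
        for position, product in enumerate(ranking):
            # A product at position 0 earns n points, position 1 earns n-1, ...
            scores[product] = scores.get(product, 0) + (n - position)
    # Higher Borda score means a better aggregated rank.
    return sorted(scores, key=scores.get, reverse=True)

def spearman_footrule(rank_a, rank_b):
    """Sum of absolute position differences; 0 means identical rankings."""
    pos_b = {product: i for i, product in enumerate(rank_b)}
    return sum(abs(i - pos_b[product]) for i, product in enumerate(rank_a))

if __name__ == "__main__":
    # Hypothetical rankings of five products under three preference criteria.
    criterion_rankings = [
        ["P3", "P1", "P4", "P2", "P5"],
        ["P1", "P3", "P2", "P4", "P5"],
        ["P3", "P2", "P1", "P5", "P4"],
    ]
    base_ranking = ["P1", "P2", "P3", "P4", "P5"]  # ranking to be evaluated

    aggregated = borda_aggregate(criterion_rankings)
    distance = spearman_footrule(aggregated, base_ranking)
    print("Aggregated ranking:", aggregated)
    print("Footrule distance to base ranking:", distance)
```

In this sketch, a smaller footrule distance indicates that the base ranking agrees more closely with the aggregated user-preference ranking, which is the sense in which the recommender output is compared and evaluated here.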



Author information


Corresponding author

Correspondence to Shahab Saquib Sohail.


About this article


Cite this article

Sohail, S.S., Siddiqui, J. & Ali, R. A comprehensive approach for the evaluation of recommender systems using implicit feedback. Int. j. inf. tecnol. 11, 549–567 (2019). https://doi.org/10.1007/s41870-018-0202-4

