
Metrics for Evaluating the Serendipity of Recommendation Lists

  • Tomoko Murakami
  • Koichiro Mori
  • Ryohei Orihara
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4914)

Abstract

In this paper we propose two metrics, unexpectedness and unexpectedness_r, for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured with various accuracy metrics, a recommender system must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of a recommender system's success should be user satisfaction. The basic idea behind our metrics is that unexpectedness is the distance between the results produced by the method under evaluation and those produced by a primitive prediction method. Unexpectedness is defined over a whole recommendation list, while unexpectedness_r additionally takes the ranking within the list into account. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations.
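To make the idea concrete, the Python sketch below is one possible reading of the description above, not the paper's exact formulas: it treats unexpectedness as the average positive gap between the scores of the method under evaluation and those of a primitive baseline, counted only for items the user actually found relevant, and unexpectedness_r as a variant that discounts each contribution by its rank position. The score ranges, the boolean relevance flags, and the 1/rank discount are assumptions made for illustration.

    # Illustrative sketch only; the exact definitions are given in the paper.
    def unexpectedness(system_scores, primitive_scores, relevant):
        """system_scores / primitive_scores: scores for the items of one ranked
        recommendation list; relevant: booleans saying whether the user actually
        found each recommended item useful."""
        n = len(system_scores)
        if n == 0:
            return 0.0
        gap = sum(
            max(s - p, 0.0)            # how far the method exceeds the primitive baseline
            for s, p, rel in zip(system_scores, primitive_scores, relevant)
            if rel                     # only items the user found relevant contribute
        )
        return gap / n

    def unexpectedness_r(system_scores, primitive_scores, relevant):
        """Rank-aware variant: contributions are discounted by list position, so a
        surprising, relevant item near the top counts more than one near the bottom."""
        n = len(system_scores)
        if n == 0:
            return 0.0
        gap = sum(
            max(s - p, 0.0) / rank     # simple 1/rank discount (an assumption)
            for rank, (s, p, rel) in enumerate(
                zip(system_scores, primitive_scores, relevant), start=1)
            if rel
        )
        return gap / n

    # Example: a collaborative filter compared against a popularity-style baseline.
    print(unexpectedness([0.9, 0.7, 0.6], [0.2, 0.8, 0.1], [True, False, True]))    # 0.4
    print(unexpectedness_r([0.9, 0.7, 0.6], [0.2, 0.8, 0.1], [True, False, True]))  # ~0.289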

Keywords

Prediction Method · Recommender System · User Satisfaction · Collaborative Filter · Recommendation List



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Tomoko Murakami 1
  • Koichiro Mori 1
  • Ryohei Orihara 1
  1. Corporate Research and Development Center, Komukai Toshiba-cho, Japan
