Fast Online Learning to Recommend a Diverse Set from Big Data

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9101)

Abstract

Building a recommendation system that withstands rapid changes in items’ relevance to users is a challenge requiring continual optimization. In a Big Data scenario the problem becomes harder still, because users are substantially more diverse in their tastes. We propose an algorithm based on the UCB1 bandit algorithm that covers a large variety of users. To enhance UCB1, we design a new reward scheme that encourages the bandits to choose items satisfying a large number of users. Our approach takes into account the correlation among the items preferred by different types of users, in effect increasing the coverage of the recommendation set efficiently. Our method outperforms existing techniques such as Ranked Bandits [8] and Independent Bandits [6] in satisfying diverse types of users.
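
The abstract only describes the authors’ reward scheme at a high level, so the sketch below is not their method. It is a minimal Python illustration, under stated assumptions, of the two standard ingredients the abstract builds on: the UCB1 index policy [2] and a coverage-style objective in which a recommended set earns reward when it satisfies at least one user type, in the spirit of Ranked Bandits [8] and Independent Bandits [6]. All names here (the UCB1 class, recommend_set, the simulated taste groups) are hypothetical.

    import math
    import random

    class UCB1:
        """Standard UCB1 index policy over a fixed set of arms (items) [2]."""

        def __init__(self, n_arms):
            self.counts = [0] * n_arms    # times each arm has been played
            self.values = [0.0] * n_arms  # empirical mean reward per arm
            self.t = 0                    # total number of selections made

        def select(self, exclude=frozenset()):
            """Return the arm with the highest UCB index, skipping excluded arms."""
            self.t += 1
            best_arm, best_index = None, float("-inf")
            for a in range(len(self.counts)):
                if a in exclude:
                    continue
                if self.counts[a] == 0:
                    return a  # play every arm once before trusting the index
                index = self.values[a] + math.sqrt(2.0 * math.log(self.t) / self.counts[a])
                if index > best_index:
                    best_arm, best_index = a, index
            return best_arm

        def update(self, arm, reward):
            """Incrementally update the empirical mean of the played arm."""
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def recommend_set(bandit, k):
        """Greedily assemble a k-item recommendation set without repeats."""
        chosen = []
        for _ in range(k):
            chosen.append(bandit.select(exclude=frozenset(chosen)))
        return chosen

    # Toy simulation: three hypothetical user types, each satisfied by a
    # different subset of 10 items; a set earns reward if it covers the user.
    if __name__ == "__main__":
        random.seed(0)
        tastes = {0: {0, 1}, 1: {5, 6}, 2: {9}}
        bandit, k = UCB1(n_arms=10), 3
        for _ in range(5000):
            user = random.choice(list(tastes))
            recommended = recommend_set(bandit, k)
            hit = any(item in tastes[user] for item in recommended)
            for item in recommended:
                # Naive shared credit: every chosen arm gets the set-level reward.
                # The paper's reward scheme differs and is the contribution here.
                bandit.update(item, 1.0 if hit else 0.0)
        print("learned recommendation set:", sorted(recommended))

Note that the naive shared-credit update above does not by itself guarantee a diverse set; how reward is assigned to the chosen items is exactly where Ranked Bandits, Independent Bandits, and the scheme proposed in this paper differ, the latter additionally exploiting correlations among items preferred by different user types.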

References

  1. Agrawal, S.: Optimization under uncertainty: Bounding the correlation gap. Ph.D. thesis, March 2011. http://research.microsoft.com/apps/pubs/default.aspx?id=200425

  2. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002). http://dx.doi.org/10.1023/A:1013689704352

  3. Burke, R.: Hybrid web recommender systems. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) Adaptive Web 2007. LNCS, vol. 4321, pp. 377–408. Springer, Heidelberg (2007). http://dl.acm.org/citation.cfm?id=1768197.1768211

  4. Ekstrand, M.D., Riedl, J.T., Konstan, J.A.: Collaborative filtering recommender systems. Found. Trends Hum.-Comput. Interact. 4(2), 81–173 (2011). http://dx.doi.org/10.1561/1100000009

  5. Goldberg, K., Roeder, T., Gupta, D., Perkins, C.: Eigentaste: A constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001). http://dx.doi.org/10.1023/A:1011419012209

  6. Kohli, P., Salek, M., Stoddard, G.: A fast bandit algorithm for recommendation to users with heterogenous tastes. In: desJardins, M., Littman, M.L. (eds.) AAAI. AAAI Press (2013). http://dblp.uni-trier.de/db/conf/aaai/aaai2013.html#KohliSS13

  7. MovieLens dataset. http://www.grouplens.org/data/ (as of 2003)

  8. Radlinski, F., Kleinberg, R., Joachims, T.: Learning diverse rankings with multi-armed bandits. In: Proceedings of the 25th International Conference on Machine Learning, ICML 2008, pp. 784–791. ACM, New York (2008). http://doi.acm.org/10.1145/1390156.1390255

  9. Robertson, S.E.: The probability ranking principle in IR. In: Readings in Information Retrieval, pp. 281–286. Morgan Kaufmann Publishers Inc., San Francisco (1997). http://dl.acm.org/citation.cfm?id=275537.275701

  10. Shani, G., Gunawardana, A.: Evaluating recommendation systems. In: Recommender Systems Handbook, pp. 257–297 (2011)

  11. Vermorel, J., Mohri, M.: Multi-armed bandit algorithms and empirical evaluation. In: Gama, J., Camacho, R., Brazdil, P.B., Jorge, A.M., Torgo, L. (eds.) ECML 2005. LNCS (LNAI), vol. 3720, pp. 437–448. Springer, Heidelberg (2005)

Author information

Correspondence to Mahmuda Rahman.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Rahman, M., Oh, J.C. (2015). Fast Online Learning to Recommend a Diverse Set from Big Data. In: Ali, M., Kwon, Y., Lee, CH., Kim, J., Kim, Y. (eds) Current Approaches in Applied Artificial Intelligence. IEA/AIE 2015. Lecture Notes in Computer Science (LNAI), vol. 9101. Springer, Cham. https://doi.org/10.1007/978-3-319-19066-2_35

  • DOI: https://doi.org/10.1007/978-3-319-19066-2_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19065-5

  • Online ISBN: 978-3-319-19066-2

  • eBook Packages: Computer Science, Computer Science (R0)
