
How Relevant is the Long Tail?

A Relevance Assessment Study on Million Short
  • Philipp Schaer
  • Philipp Mayr
  • Sebastian Sünkler
  • Dirk Lewandowski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9822)

Abstract

Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information seeking pattern, only a few concentrate on the question of what users miss by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the long-tail web search engine Million Short. While we see a clear difference in content between the head and the tail of the search engine result list, we find no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
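The abstract does not name the statistical tests used; as a minimal sketch, the following Python snippet illustrates one common way to compare head and tail relevance judgments of the kind described: a chi-squared test for binary judgments and a Mann-Whitney U test for graded (ordinal) judgments. All judgment values below are hypothetical placeholders, not the study's data.

    # Sketch: comparing relevance judgments for the head vs. the tail of a
    # result list. Data arrays are hypothetical placeholders.
    import numpy as np
    from scipy.stats import chi2_contingency, mannwhitneyu

    # Hypothetical binary relevance judgments (1 = relevant, 0 = not relevant)
    head_binary = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
    tail_binary = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])

    # Chi-squared test on the 2x2 table of relevant / non-relevant counts
    table = [
        [head_binary.sum(), len(head_binary) - head_binary.sum()],
        [tail_binary.sum(), len(tail_binary) - tail_binary.sum()],
    ]
    chi2, p_binary, _, _ = chi2_contingency(table)
    print(f"binary judgments: chi2={chi2:.3f}, p={p_binary:.3f}")

    # Hypothetical graded relevance judgments on a 0-3 scale
    head_graded = np.array([3, 2, 1, 3, 2, 0, 3, 2, 2, 1])
    tail_graded = np.array([2, 1, 2, 1, 0, 2, 1, 2, 1, 1])

    # Mann-Whitney U test is appropriate for ordinal judgment scales
    u, p_graded = mannwhitneyu(head_graded, tail_graded, alternative="two-sided")
    print(f"graded judgments: U={u:.1f}, p={p_graded:.3f}")

With real assessment data, a non-significant p-value for the binary case alongside a marginal p-value for the graded case would mirror the pattern the abstract reports.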

Keywords

Search engine · Retrieval performance · Result page · Result list · Retrieval effectiveness


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Philipp Schaer (1)
  • Philipp Mayr (2)
  • Sebastian Sünkler (3)
  • Dirk Lewandowski (3)
  1. Cologne University of Applied Sciences, Cologne, Germany
  2. GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
  3. Hamburg University of Applied Sciences, Hamburg, Germany
