Ranked-Listed or Categorized Results in IR: 2 Is Better Than 1

  • Zheng Zhu
  • Ingemar J. Cox
  • Mark Levene
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5039)

Abstract

In this paper we examine the performance of both ranked-listed and categorized results in the context of known-item search (target testing). Known-item search performance is straightforward to quantify, based on the number of documents and class descriptions examined. Results are reported on a subset of the Open Directory classification hierarchy, which enables us to control the error rate and investigate how performance degrades with error. Three types of simulated user model are identified, together with two operating scenarios: correct and incorrect classification. Extensive empirical testing reveals that in the ideal scenario, i.e. perfect classification by both human and machine, a category-based system significantly outperforms a ranked list for all but the best queries, i.e. queries for which the target document was initially retrieved in the top 5. When either human or machine error occurs and the user follows an exclusively category-based search strategy, performance is much worse than for a ranked list. However, most interestingly, if the user follows a hybrid strategy of first looking in the expected category and then reverting to the ranked list if the target is absent, performance can remain significantly better than for a ranked list, even with misclassification rates as high as 30%. We also observe that the performance of this hybrid strategy degrades gracefully with error rate.
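To make the cost model concrete, here is a minimal sketch, in Python, of how the hybrid strategy could be simulated; this is not the authors' code, and all function and variable names are hypothetical. The simulated user first scans the class descriptions, opens the category expected to contain the target, and reverts to the plain ranked list if the target is absent. Search effort is counted as the number of class descriptions plus documents examined.

```python
def ranked_list_cost(ranked_docs, target):
    """Documents examined when scanning a plain ranked list top-down."""
    return ranked_docs.index(target) + 1

def hybrid_cost(categories, ranked_docs, target, expected_label):
    """Cost of the category-first strategy with ranked-list fallback.

    categories: dict mapping class label -> ranked list of docs in that class
    expected_label: the class the (possibly mistaken) user looks in first
    """
    # The user reads every class description before choosing one.
    cost = len(categories)
    chosen = categories.get(expected_label, [])
    if target in chosen:
        # Correct classification: the target is found inside the expected category.
        return cost + chosen.index(target) + 1
    # Human or machine misclassification: the expected category is
    # scanned in vain, then the user reverts to the full ranked list.
    return cost + len(chosen) + ranked_list_cost(ranked_docs, target)

if __name__ == "__main__":
    ranked = ["d3", "d7", "d1", "d9", "d2"]
    cats = {"news": ["d3", "d1", "d9"], "sports": ["d7", "d2"]}
    print(hybrid_cost(cats, ranked, "d3", "news"))    # 2 descriptions + rank 1 = 3
    print(hybrid_cost(cats, ranked, "d3", "sports"))  # 2 + 2 wasted docs + rank 1 = 5
```

Under this reading of the cost model, the fallback branch is never taken when classification is perfect, which is consistent with the graceful degradation reported above: as the misclassification rate rises, only the affected queries pay the extra ranked-list cost.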


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Zheng Zhu (1)
  • Ingemar J. Cox (2)
  • Mark Levene (1)
  1. School of Computer Science and Information Systems, Birkbeck College, University of London
  2. Department of Computer Science, University College London
