Improving Mobile Web-IR Using Access Concentration Sites in Search Results

  • Masaya Murata
  • Hiroyuki Toda
  • Yumiko Matsuura
  • Ryoji Kataoka
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5175)

Abstract

Effective ranking algorithms for mobile web search are being actively pursued. Owing to the peculiar and troublesome properties of mobile content, such as scant text, few outward links, and few input keywords, conventional web search techniques that use bag-of-words ranking functions or link-based algorithms are not good enough for mobile web search. Our solution is to use click logs; the aim is to extract only the access-concentrated search results from among the many search results. Users typically click a search result after seeing its title and snippet, so the titles and snippets of the access-concentrated sites should be good relevance-feedback sources that can greatly improve mobile web search performance. In this paper, we introduce a new measure that estimates the degree of access concentration and present a method that uses this measure to precisely extract the access concentration sites from many search results. Query expansion is then performed with terms extracted from the access concentration sites. The effectiveness of our proposal is verified in an experiment that uses click logs and data from a real mobile web search site.
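The paper's exact access-concentration measure is not reproduced on this page, but the pipeline it describes (select results whose click share indicates access concentration, then mine their titles and snippets for expansion terms) can be sketched roughly as follows. This is a minimal illustrative stand-in: the `min_click_share` threshold, the field names, and the simple term-frequency selection are assumptions, not the authors' method.

```python
from collections import Counter

def expansion_terms(results, clicks, query, top_k=5, min_click_share=0.3):
    """Pick query-expansion terms from the titles and snippets of
    search results that attract a large share of clicks (a crude
    stand-in for the paper's access-concentration measure)."""
    # Click share per result; guard against an empty click log.
    total = sum(clicks.get(r["id"], 0) for r in results) or 1
    concentrated = [
        r for r in results
        if clicks.get(r["id"], 0) / total >= min_click_share
    ]
    # Count terms in the concentrated results, skipping the query's own words.
    stop = set(query.lower().split())
    counts = Counter()
    for r in concentrated:
        for term in (r["title"] + " " + r["snippet"]).lower().split():
            if term not in stop:
                counts[term] += 1
    return [term for term, _ in counts.most_common(top_k)]

results = [
    {"id": "a", "title": "ramen shop tokyo", "snippet": "best ramen near station"},
    {"id": "b", "title": "weather", "snippet": "forecast today"},
]
clicks = {"a": 8, "b": 2}  # result "a" concentrates 80% of clicks
terms = expansion_terms(results, clicks, "ramen")
```

In practice the selected terms would be appended to the original query and the search rerun, in the style of pseudo-relevance feedback, with the click-based selection replacing the usual "assume the top-ranked documents are relevant" heuristic.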

Keywords

Search Result · Query Expansion · Mean Average Precision · Expansion Term · Input Query



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Masaya Murata (1)
  • Hiroyuki Toda (1)
  • Yumiko Matsuura (1)
  • Ryoji Kataoka (1)

  1. NTT Cyber Solutions Laboratories, NTT Corporation, Kanagawa, Japan
