Challenges for Search Engine Retrieval Effectiveness Evaluations: Universal Search, User Intents, and Results Presentation

Part of the Intelligent Systems Reference Library book series (ISRL, volume 50)

Abstract

This chapter discusses the evaluation of Web search engines' retrieval effectiveness. It identifies three factors that create a need for new evaluation methods: (1) the changed results presentation in Web search engines, known as Universal Search; (2) the different query types that represent different user intents; and (3) the presentation of individual results. It discusses the implications for evaluation methodology and offers some suggestions regarding measures.
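The chapter's own suggestions about measures are not reproduced in this abstract. As a generic illustration of the kind of graded-relevance measure commonly used in retrieval effectiveness evaluation (not a measure proposed by this chapter), a minimal sketch of normalized discounted cumulative gain (nDCG), which rewards relevant results appearing near the top of a ranking:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: graded relevance scores,
    discounted logarithmically by rank position."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the ideal (best possible) ranking,
    so scores are comparable across queries."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: judged relevance of the top four results on a 0-3 scale.
scores = [3, 2, 0, 1]
print(round(ndcg_at_k(scores, 4), 3))
```

Measures of this family assume per-result relevance judgments; as the chapter argues, Universal Search result pages and differing query intents complicate exactly that assumption.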

Keywords

Web search engines · retrieval effectiveness evaluation · Universal Search · search engine results page (SERP) · user behavior



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

Faculty DMI, Department of Information, Hamburg University of Applied Sciences, Hamburg, Germany
