
Scientometrics, Volume 116, Issue 2, pp 1383–1400

Analysis of search stratagem utilisation

  • Ameni Kacem
  • Philipp Mayr

Abstract

In interactive information retrieval, researchers study how users behave towards systems and search tasks in order to adapt search results and improve the search experience. Analysing users’ past interactions with a system is one typical approach. In this paper, we analyse user behaviour in retrieval sessions with respect to Marcia Bates’ search stratagems, namely “Footnote Chasing”, “Citation Searching”, “Keyword Searching”, “Author Searching” and “Journal Run”, in a real-life academic search engine. Search stratagems represent high-level search behaviour: users go beyond the simple execution of queries and exploit more of the system’s functionalities. We analysed these five stratagems using two datasets extracted from the social sciences search engine sowiport, focusing on the search phase in which each stratagem occurs and on how frequently it is used. In addition, we explored the impact of these stratagems on the performance of the whole search process: we observed their usage patterns, measured their effect on the conduct of retrieval sessions, and examined whether they are used similarly in both datasets. From the observations and the metrics proposed, we conclude that the use of search stratagems in real retrieval sessions improves precision in terms of positive interactions. In both datasets (SUSS 14–15 and SUSS 16–17) user behaviour was similar, with all stratagems appearing most frequently in the middle of a session; within this pattern, “Footnote Chasing”, “Citation Searching” and “Journal Run” tend to appear towards the end of a session, whereas “Keyword Searching” and “Author Searching” typically appear at the beginning. We therefore conclude from the log analysis that search functionalities such as personalisation and/or recommendation could be improved by considering references, citations and journals in the ranking process.
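
To illustrate the kind of log analysis described above, the sketch below (in Python) shows one way stratagem events in a session log could be bucketed into beginning/middle/end phases and how a simple precision-like measure over positive interactions might be computed per session. The action labels, the set of “positive” interaction types and the log layout are assumptions made for illustration only; they are not the actual sowiport log schema or the exact metrics used in the paper.

    from collections import Counter

    # Illustrative vocabulary; the real sowiport logs use their own action types.
    STRATAGEMS = {"footnote_chasing", "citation_searching", "keyword_searching",
                  "author_searching", "journal_run"}
    POSITIVE = {"save", "export", "fulltext_view"}  # assumed "positive interactions"

    def phase(index, session_length):
        """Map an action's position in the session to a beginning/middle/end tercile."""
        ratio = index / max(session_length - 1, 1)
        if ratio < 1 / 3:
            return "beginning"
        if ratio < 2 / 3:
            return "middle"
        return "end"

    def analyse_session(actions):
        """Count stratagem occurrences per phase and the share of positive interactions."""
        phase_counts = Counter()
        for i, action in enumerate(actions):
            if action in STRATAGEMS:
                phase_counts[(action, phase(i, len(actions)))] += 1
        positives = sum(1 for a in actions if a in POSITIVE)
        precision = positives / len(actions) if actions else 0.0
        return phase_counts, precision

    # One hypothetical session, ordered by time.
    session = ["keyword_searching", "view_record", "citation_searching",
               "fulltext_view", "journal_run", "save"]
    counts, precision = analyse_session(session)
    print(counts)     # which stratagem occurred in which phase
    print(precision)  # share of positive interactions in the session

Aggregating such per-session counts over many sessions could yield phase-frequency distributions and precision comparisons of the kind reported in the paper.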

Keywords

Whole-session evaluation · Information behaviour · Retrieval session log · Cited reference searching · Stratagem search · Academic search

Notes

Acknowledgements

This work was funded by the Deutsche Forschungsgemeinschaft (DFG), Grant No. MA 3964/5-1 (the AMUR project at GESIS, together with the working group of Norbert Fuhr). The AMUR project aims at improving the support of interactive retrieval sessions, pursuing two major goals: improving user guidance and system tuning. We thank Julia Achenbach for her proofreading of the final version of this paper.

References

  1. Agosti, M., Crivellari, F., & Di Nunzio, G. M. (2012). Web log analysis: A review of a decade of studies about information acquisition, inspection and interpretation of user interaction. Data Mining and Knowledge Discovery, 24(3), 663–696.
  2. Akbar, M., Shaffer, C. A., & Fox, E. A. (2012). Deduced social networks for an educational digital library. In Proceedings of JCDL’12 (pp. 43–46). ACM.  https://doi.org/10.1145/2232817.2232828.
  3. Bates, M. J. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review, 13(5), 407–424.  https://doi.org/10.1108/eb024320.
  4. Bates, M. J. (1990). Where should the person stop and the information search interface start? Information Processing & Management, 26(5), 575–591.  https://doi.org/10.1016/0306-4573(90)90103-9.
  5. Bates, M. J., Wilde, D. N., & Siegfried, S. (1993). An analysis of search terminology used by humanities scholars: The Getty online searching project report number 1. The Library Quarterly, 63(1), 1–39.  https://doi.org/10.1086/602526.
  6. Belkin, N., Cool, C., Kelly, D., Lin, S. J., Park, S., Perez-Carballo, J., et al. (2001). Iterative exploration, design and evaluation of support for query reformulation in interactive information retrieval. Information Processing and Management, 37(3), 403–434.  https://doi.org/10.1016/S0306-4573(00)00055-8.
  7. Belkin, N., Marchetti, P., & Cool, C. (1993). Braque: Design of an interface to support user interaction in information retrieval. Information Processing and Management, 29(3), 325–344.  https://doi.org/10.1016/0306-4573(93)90059-M.
  8. Borlund, P. (2016). Interactive information retrieval: An evaluation perspective. In Proceedings of the 2016 ACM on conference on human information interaction and retrieval, CHIIR’16 (pp. 151–151). New York, NY: ACM.  https://doi.org/10.1145/2854946.2870648.
  9. Borlund, P. (2016). A study of the use of simulated work task situations in interactive information retrieval evaluations: A meta-evaluation. Journal of Documentation, 72(3), 394–413.  https://doi.org/10.1108/JD-06-2015-0068.
  10. Carevic, Z., Lusky, M., van Hoek, W., & Mayr, P. (2017). Investigating exploratory search activities based on the stratagem level in digital libraries. International Journal on Digital Libraries.  https://doi.org/10.1007/s00799-017-0226-6.
  11. Carevic, Z., & Mayr, P. (2015). Extending search facilities via bibliometric-enhanced stratagems. In Proceedings of the BIR workshop at ECIR 2015, Vienna, Austria (pp. 40–46). http://ceur-ws.org/Vol-1344/paper5.pdf.
  12. Carevic, Z., & Mayr, P. (2016). Survey on high-level search activities based on the stratagem level in digital libraries (pp. 54–66). Cham: Springer.  https://doi.org/10.1007/978-3-319-43997-6_5.
  13. Carevic, Z., Schüller, S., Mayr, P., & Fuhr, N. (2018). Contextualised browsing in a digital library’s living lab. In Proceedings of the 18th ACM/IEEE on joint conference on digital libraries, Fort Worth, Texas, USA (pp. 89–98). New York, NY: ACM.  https://doi.org/10.1145/3197026.3197054. https://arxiv.org/abs/1804.06426.
  14. Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
  15. Dumais, S., Cutrell, E., Cadiz, J. J., Jancke, G., Sarin, R., & Robbins, D. C. (2016). Stuff I’ve seen: A system for personal information retrieval and re-use. SIGIR Forum, 49(2), 28–35.  https://doi.org/10.1145/2888422.2888425.
  16. Dungs, S., & Fuhr, N. (2017). Advanced hidden Markov models for recognizing search phases. In Proceedings of the ACM SIGIR international conference on theory of information retrieval, ICTIR’17 (pp. 257–260). New York, NY: ACM.  https://doi.org/10.1145/3121050.3121090.
  17. Ellis, D. (1989). A behavioural approach to information retrieval system design. Journal of Documentation, 45(3), 171–212.  https://doi.org/10.1108/eb026843.
  18. Fuhr, N., Tsakonas, G., Aalberg, T., Agosti, M., Hansen, P., Kapidakis, S., et al. (2007). Evaluation of digital libraries. International Journal on Digital Libraries, 8(1), 21–38.  https://doi.org/10.1007/s00799-007-0011-z.
  19. Hienert, D., & Mutschke, P. (2016). A usefulness-based approach for measuring the local and global effect of IIR services. In Proceedings of CHIIR’16 (pp. 153–162). ACM.  https://doi.org/10.1145/2854946.2854962.
  20. Hienert, D., Sawitzki, F., & Mayr, P. (2015). Digital library research in action – supporting information retrieval in Sowiport. D-Lib Magazine.  https://doi.org/10.1045/march2015-hienert.
  21. Jansen, B. J. (2006). Search log analysis: What it is, what’s been done, how to do it. Library and Information Science Research, 28(3), 407–432.  https://doi.org/10.1016/j.lisr.2006.06.005.
  22. Kacem, A., & Mayr, P. (2017). Analysis of footnote chasing and citation searching in an academic search engine. In Proceedings of the 2nd joint workshop on bibliometric-enhanced information retrieval and natural language processing for digital libraries (BIRNDL 2017) co-located with the 40th international ACM SIGIR conference on research and development in information, Tokyo, Japan (pp. 91–100). http://ceur-ws.org/Vol-1888/paper8.pdf.
  23. Kacem, A., & Mayr, P. (2018). Users are not influenced by high impact and core journals while searching. In Proceedings of the 7th international workshop on bibliometric-enhanced information retrieval (BIR 2018) co-located with the 40th European conference on information retrieval (ECIR 2018), Grenoble, France, March 26, 2018 (pp. 63–75).
  24. Kelly, D., et al. (2009). Methods for evaluating interactive information retrieval systems with users. Foundations and Trends in Information Retrieval, 3(1–2), 1–224.
  25. Kriewel, S., & Fuhr, N. (2010). An evaluation of an adaptive search suggestion system. In 32nd European conference on information retrieval research (ECIR 2010) (pp. 544–555). Springer.
  26. Liu, C., Belkin, N. J., & Cole, M. J. (2012). Personalization of search results using interaction behaviors in search sessions. In Proceedings of SIGIR’12 (pp. 205–214). ACM.  https://doi.org/10.1145/2348283.2348314.
  27. Mahoui, M., & Cunningham, S. J. (2001). Search behavior in a research-oriented digital library. In Proceedings of ECDL 2001 (pp. 13–24). Springer.
  28. Marchionini, G. (1992). Interfaces for end-user information seeking. Journal of the American Society for Information Science, 43(2), 156–163.
  29. Mayr, P. (2016). How do practitioners, PhD students and postdocs in the social sciences assess topic-specific recommendations? In Proceedings of the joint workshop on bibliometric-enhanced information retrieval and natural language processing for digital libraries (BIRNDL 2016) (pp. 84–92). Newark, NJ: CEUR-WS.org. http://ceur-ws.org/Vol-1610/paper10.pdf.
  30. Mayr, P. (2016). Sowiport user search sessions data set (SUSS).  https://doi.org/10.7802/1380.
  31. Mayr, P., & Kacem, A. (2017). A complete year of user retrieval sessions in a social sciences academic search engine. In 21st international conference on theory and practice of digital libraries (TPDL 2017).  https://doi.org/10.1007/978-3-319-67008-9_46. http://arxiv.org/abs/1706.00816.
  32. Mayr, P., & Scharnhorst, A. (2015). Scientometrics and information retrieval—weak-links revitalized. Scientometrics, 102(3), 2193–2199.  https://doi.org/10.1007/s11192-014-1484-3.
  33. Mutschke, P., Mayr, P., Schaer, P., & Sure, Y. (2011). Science models as value-added services for scholarly information systems. Scientometrics, 89(1), 349–364.  https://doi.org/10.1007/s11192-011-0430-x.
  34. Rice, R. E., & Borgman, C. L. (1983). The use of computer-monitored data in information science and communication research. Journal of the American Society for Information Science, 34(4), 247–256.  https://doi.org/10.1002/asi.4630340404.
  35. Rohini, U., & Ambati, V. (2005). A collaborative filtering based re-ranking strategy for search in digital libraries (pp. 194–203). Berlin: Springer.
  36. Ruthven, I. (2008). Interactive information retrieval. Annual Review of Information Science and Technology, 42(1), 43–91.
  37. Schneider, J. W., & Borlund, P. (2004). Introduction to bibliometrics for construction and maintenance of thesauri: Methodical considerations. Journal of Documentation, 60(5), 524–549.  https://doi.org/10.1108/00220410410560609.
  38. Shiri, A. A., Revie, C., & Chowdhury, G. (2002). Thesaurus-enhanced search interfaces. Journal of Information Science, 28(2), 111–122.  https://doi.org/10.1177/016555150202800203.
  39. Shute, S. J., & Smith, P. J. (1993). Knowledge-based search tactics. Information Processing & Management, 29(1), 29–45.  https://doi.org/10.1016/0306-4573(93)90021-5.
  40. Siegfried, S., Bates, M. J., & Wilde, D. N. (1993). A profile of end-user searching behavior by humanities scholars: The Getty online searching project report no. 2. Journal of the American Society for Information Science, 44(5), 273–291.
  41. Tran, V., Maxwell, D., Fuhr, N., & Azzopardi, L. (2017). Personalised search time prediction using Markov chains. In Proceedings of the ACM SIGIR international conference on theory of information retrieval, ICTIR’17 (pp. 237–240). New York, NY: ACM.  https://doi.org/10.1145/3121050.3121085.
  42. Xie, H. I. (2002). Patterns between interactive intentions and information-seeking strategies. Information Processing & Management, 38(1), 55–77.  https://doi.org/10.1016/S0306-4573(01)00018-8.
  43. Xie, I., Joo, S., & Bennett-Kapusniak, R. (2017). User involvement and system support in applying search tactics. JASIST, 68(5), 1165–1185.  https://doi.org/10.1002/asi.23765.
  44. Zhuang, M., Toms, E. G., & Demartini, G. (2016). The relationship between user perception and user behaviour in interactive information retrieval evaluation. In N. Ferro, F. Crestani, M. F. Moens, J. Mothe, F. Silvestri, G. M. Di Nunzio, C. Hauff, & G. Silvello (Eds.), Advances in information retrieval (pp. 293–305). Cham: Springer.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2018

Authors and Affiliations

  1. GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
