Evaluating Semantic Search Query Approaches with Expert and Casual Users

  • Khadija Elbedweihy
  • Stuart N. Wrigley
  • Fabio Ciravegna
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7650)

Abstract

Usability and user satisfaction are of paramount importance when designing interactive software solutions. Furthermore, the optimal design can depend not only on the task but also on the type of user. Evaluations can shed light on these issues; however, very few studies have focused on assessing the usability of semantic search systems. As semantic search becomes mainstream, there is a growing need for standardised, comprehensive evaluation frameworks. In this study, we assess the usability and user satisfaction of different semantic search query input approaches (natural language and view-based) from the perspective of different user types (expert and casual). Contrary to previous studies, we found that casual users preferred the form-based query approach, whereas expert users found the graph-based approach to be the most intuitive. Additionally, the controlled-language approach offered the most support for casual users but was perceived as restrictive by experts, limiting their ability to express their information needs.
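
The query approaches compared in the study differ mainly in how the user's information need becomes a structured query over the underlying RDF data: natural-language interfaces interpret a typed (free or controlled) question, whereas view-based (form- and graph-based) interfaces let the user assemble the query from classes and properties directly. As a rough, hypothetical illustration only (not taken from the paper), the following Python sketch uses rdflib to execute the kind of SPARQL query a form-based interface might construct from a user's selections; the vocabulary and data are invented for the example.

    from rdflib import Graph

    # Toy RDF data, made up for illustration (not the paper's dataset).
    TTL = """
    @prefix ex: <http://example.org/> .
    ex:Sheffield a ex:City ; ex:locatedIn ex:England .
    ex:Leeds     a ex:City ; ex:locatedIn ex:England .
    ex:Cardiff   a ex:City ; ex:locatedIn ex:Wales .
    """

    g = Graph()
    g.parse(data=TTL, format="turtle")

    # A natural-language interface would interpret a question such as
    # "Which cities are located in England?", whereas a form- or graph-based
    # interface lets the user pick the class (City), the property (locatedIn)
    # and the value (England), then assembles a structured query like this:
    SPARQL = """
    PREFIX ex: <http://example.org/>
    SELECT ?city WHERE {
        ?city a ex:City ;
              ex:locatedIn ex:England .
    }
    """

    for row in g.query(SPARQL):
        print(row.city)  # prints the URIs for Sheffield and Leeds

The usability question studied in the paper is which of these input routes, typing the question or assembling it from views, different user types find easier and more satisfying to use.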

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Khadija Elbedweihy 1
  • Stuart N. Wrigley 1
  • Fabio Ciravegna 1

  1. Department of Computer Science, University of Sheffield, UK