BESDUI: A Benchmark for End-User Structured Data User Interfaces

  • Roberto García
  • Rosa Gil
  • Juan Manuel Gimeno
  • Eirik Bakke
  • David R. Karger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9982)

Abstract

The Semantic Web community has invested significant research effort in developing systems for Semantic Web search and exploration. But while it has been easy to assess the systems’ computational efficiency, it has been much harder to assess how well different semantic systems’ user interfaces help their users. In this article, we propose and demonstrate the use of a benchmark for evaluating such user interfaces, similar to the TREC benchmark for evaluating traditional search engines. Our benchmark includes a set of typical user tasks and a well-defined procedure for assigning a measure of performance on those tasks to a semantic system. We demonstrate its application to two such systems, Virtuoso and Rhizomer. We intend for this work to initiate a community conversation that will lead to a generally accepted framework for comparing systems and for measuring, and thus encouraging, progress towards better semantic search and exploration tools.

Keywords

Benchmark · User experience · Usability · Semantic data · Exploration · Relational data
Resource type:

Benchmark

Permanent URL:

http://w3id.org/BESDUI

References

  1. Shadbolt, N., Hall, W., Berners-Lee, T.: The semantic web revisited. Intell. Syst. 21, 96–101 (2006)
  2. Cyganiak, R., Jentzsch, A.: The Linking Open Data cloud diagram. http://lod-cloud.net
  3. Guha, R.: Introducing schema.org: Search engines come together for a richer web (2011). https://googleblog.blogspot.com.es/2011/06/introducing-schemaorg-search-engines.html
  4. Alani, H., Kalfoglou, Y., O’Hara, K., Shadbolt, N.R.: Towards a killer app for the semantic web. In: Gil, Y., Motta, E., Benjamins, V., Musen, M.A. (eds.) ISWC 2005. LNCS, vol. 3729, pp. 829–843. Springer, Heidelberg (2005)
  5. Krug, S., Black, R.: Don’t Make Me Think! A Common Sense Approach to Web Usability. New Riders Publishing, Indianapolis (2000)
  6. Freitas, A., Curry, E., Oliveira, J.G., O’Riain, S.: Querying heterogeneous datasets on the linked data web: challenges, approaches, and trends. IEEE Internet Comput. 16, 24–33 (2012)
  7. Dadzie, A.-S., Rowe, M.: Approaches to visualising linked data: a survey. Semant. Web 2, 89–124 (2011)
  8. Berners-Lee, T., Chen, Y., Chilton, L., Connolly, D., Dhanaraj, R., Hollenbach, J., Lerer, A., Sheets, D.: Tabulator: exploring and analyzing linked data. In: Proceedings of the 3rd Semantic Web and User Interaction Workshop (SWUI 2006), Athens, Georgia (2006)
  9. Kaufmann, E., Bernstein, A.: Evaluating the usability of natural language query languages and interfaces to Semantic Web knowledge bases. Web Semant. Sci. Serv. Agents World Wide Web 8, 377–393 (2010)
  10. Brunetti, J.M., García, R., Auer, S.: From overview to facets and pivoting for interactive exploration of semantic web data. Int. J. Semant. Web Inf. Syst. 9, 1–20 (2013)
  11. Bevan, N.: Extending quality in use to provide a framework for usability measurement. In: Kurosu, M. (ed.) HCD 2009. LNCS, vol. 5619, pp. 13–22. Springer, Heidelberg (2009)
  12. González-Sánchez, J.L., García, R., Brunetti, J.M., Gil, R., Gimeno, J.M.: Using SWET-QUM to compare the quality in use of semantic web exploration tools. J. Univ. Comput. Sci. 19, 1025–1045 (2013)
  13. García-Castro, R.: Benchmarking Semantic Web Technology. IOS Press (2009)
  14. Sim, S.E., Easterbrook, S., Holt, R.C.: Using benchmarking to advance research: a challenge to software engineering. In: Proceedings of the 25th International Conference on Software Engineering, pp. 74–83. IEEE Computer Society, Washington (2003)
  15. Voorhees, E.M., Harman, D.K.: TREC: Experiment and Evaluation in Information Retrieval (Digital Libraries and Electronic Publishing). The MIT Press (2005)
  16. Bizer, C., Schultz, A.: The Berlin SPARQL benchmark. Int. J. Semant. Web Inf. Syst. (IJSWIS) 5, 1–24 (2009)
  17. Catarci, T., Costabile, M.F., Levialdi, S., Batini, C.: Visual query systems for databases: a survey. J. Vis. Lang. Comput. 8, 215–260 (1997)
  18. Card, S.K., Moran, T.P., Newell, A.: The keystroke-level model for user performance time with interactive systems. Commun. ACM 23, 396–410 (1980)
  19. García, R., Gil, R., Gimeno, J.M., Bakke, E., Karger, D.R.: BESDUI: A Benchmark for End-User Structured Data User Interfaces. http://w3id.org/BESDUI
  20. Erling, O., Mikhailov, I.: RDF support in the virtuoso DBMS. In: Pellegrini, T., Auer, S., Tochtermann, K., Schaffert, S. (eds.) Networked Knowledge - Networked Media. SCI, vol. 221, pp. 7–24. Springer, Heidelberg (2009)
  21. Bakke, E., Karger, D.R.: Expressive query construction through direct manipulation of nested relational results. In: Proceedings of the 2016 International Conference on Management of Data (SIGMOD 2016), pp. 1377–1392. ACM, New York (2016)
  22. John, B.E., Kieras, D.E.: The GOMS family of user interface analysis techniques: comparison and contrast. ACM Trans. Comput. Hum. Interact. 3(4), 320–351 (1996)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Roberto García (1)
  • Rosa Gil (1)
  • Juan Manuel Gimeno (1)
  • Eirik Bakke (2)
  • David R. Karger (2)
  1. Computer Science and Engineering Department, Universitat de Lleida, Lleida, Spain
  2. Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, USA
