On the Status of Experimental Research on the Semantic Web

  • Heiner Stuckenschmidt
  • Michael Schuhmacher
  • Johannes Knopp
  • Christian Meilicke
  • Ansgar Scherp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8218)


Experimentation is an important way to validate results in Semantic Web research and in Computer Science in general. In this paper, we investigate the development and current status of experimental work on the Semantic Web. Based on a corpus of 500 papers collected from the International Semantic Web Conference (ISWC) over the past decade, we analyse the amount and quality of the experimental research conducted and compare it to Computer Science in general. We observe that both the amount and the quality of experiments have steadily increased over time. Contrary to our hypothesis, we cannot confirm a statistically significant correlation between a paper's citation count and the amount of experimental work it reports. Our analysis shows, however, that papers that compare their approach to other systems are cited more often than other papers.
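The kind of correlation analysis described in the abstract can be sketched as follows. The data below is invented purely for illustration, and Pearson's r is only one plausible choice of coefficient; the abstract does not state which statistic the authors actually used.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance (unnormalised) and the two standard deviations (unnormalised);
    # the 1/n factors cancel in the ratio, so they are omitted.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: fraction of each paper devoted to experimental work
# vs. its citation count (both invented for this sketch).
experiment_share = [0.10, 0.25, 0.30, 0.05, 0.40]
citations = [12, 30, 25, 8, 20]

r = pearson_r(experiment_share, citations)
```

A value of r near zero would be consistent with the paper's finding of no significant correlation; in practice one would also test the coefficient for statistical significance rather than inspect its magnitude alone.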



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

All authors are with the Data and Web Science Research Group, University of Mannheim, Germany.
