Liquid Benchmarks: Towards an Online Platform for Collaborative Assessment of Computer Science Research Results

  • Sherif Sakr
  • Fabio Casati
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6417)

Abstract

Experimental evaluation and comparison of techniques, algorithms, approaches, or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is often limited by factors such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete, and insightful assessment of the different alternatives in a domain is a time- and resource-consuming task, and such assessments must be repeated periodically to remain up-to-date. In this paper, we coin the notion of Liquid Benchmarks: online, public services that provide collaborative platforms to unify the efforts of peer researchers around the world, simplifying the task of performing high-quality experimental evaluations and guaranteeing a transparent scientific crediting process.
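To make the collaborative-assessment idea concrete, the following is a minimal, purely illustrative sketch (not taken from the paper) of how such a service might model its core entities: competing solutions, shared test cases, and peer-submitted results with attribution. All names (Solution, Result, leaderboard, etc.) are hypothetical.

    # Illustrative sketch only: hypothetical data model for a collaborative
    # benchmark service; not the authors' design.
    from dataclasses import dataclass, field
    from statistics import mean


    @dataclass
    class Solution:
        name: str            # e.g. an XML compressor or RDF store under test
        maintainer: str      # researcher credited with registering the solution


    @dataclass
    class TestCase:
        name: str            # shared dataset or query workload
        parameters: dict = field(default_factory=dict)


    @dataclass
    class Result:
        solution: str        # Solution.name
        test_case: str       # TestCase.name
        metric: str          # e.g. "runtime_s", "compression_ratio"
        value: float
        submitted_by: str    # peer researcher credited with running the experiment


    def leaderboard(results, test_case, metric):
        """Average each solution's submissions for one test case and metric,
        sorted ascending (lower is better for cost-style metrics)."""
        by_solution = {}
        for r in results:
            if r.test_case == test_case and r.metric == metric:
                by_solution.setdefault(r.solution, []).append(r.value)
        return sorted(((s, mean(vs)) for s, vs in by_solution.items()),
                      key=lambda x: x[1])


    if __name__ == "__main__":
        results = [
            Result("CompressorA", "dblp.xml", "runtime_s", 12.4, "alice"),
            Result("CompressorB", "dblp.xml", "runtime_s", 9.8, "bob"),
            Result("CompressorA", "dblp.xml", "runtime_s", 11.9, "carol"),
        ]
        print(leaderboard(results, "dblp.xml", "runtime_s"))

The point of the sketch is that results come from many independent submitters, and credit for each submission is recorded alongside the measurement itself.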


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sherif Sakr (1, 2)
  • Fabio Casati (3)
  1. NICTA, Australia
  2. University of New South Wales, Sydney, Australia
  3. University of Trento, Trento, Italy
