Iguana: A Generic Framework for Benchmarking the Read-Write Performance of Triple Stores

  • Felix Conrads
  • Jens Lehmann
  • Muhammad Saleem
  • Mohamed Morsey
  • Axel-Cyrille Ngonga Ngomo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10588)

Abstract

The performance of triple stores is crucial for applications driven by RDF. Several benchmarks have been proposed that assess the performance of triple stores. However, no integrated benchmark-independent execution framework for these benchmarks has yet been provided. We propose a novel SPARQL benchmark execution framework called Iguana. Our framework complements benchmarks by providing an execution environment which can measure the performance of triple stores during data loading and data updates, as well as under different loads and parallel requests. Moreover, it allows a uniform comparison of results across different benchmarks. We execute the FEASIBLE and DBPSB benchmarks using the Iguana framework and measure the performance of popular triple stores under updates and parallel user requests. We compare our results (available at https://doi.org/10.6084/m9.figshare.c.3767501.v1) with state-of-the-art benchmarking results and show that our benchmark execution framework can unveil new insights pertaining to the performance of triple stores.
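
To make the kind of workload such an execution environment drives more concrete, the sketch below (Java 11+) spawns several worker threads that each fire SPARQL SELECT queries at an endpoint for a fixed time window and reports the aggregate queries per second, one common measure of read performance under parallel load. This is a minimal illustration, not Iguana's actual API: the endpoint URL, query mix, worker count, and runtime below are hypothetical placeholders.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Minimal stress-test sketch (not Iguana's API): N workers issue SPARQL
// queries against an endpoint for a fixed window and report queries/second.
public class ParallelSparqlStressTest {

    // Placeholder endpoint and query mix; substitute your triple store's URL.
    private static final String ENDPOINT = "http://localhost:8890/sparql";
    private static final List<String> QUERY_MIX = List.of(
            "SELECT * WHERE { ?s ?p ?o } LIMIT 10",
            "SELECT (COUNT(*) AS ?c) WHERE { ?s ?p ?o }");

    public static void main(String[] args) throws Exception {
        int workers = 4;               // simulated parallel users
        long runtimeMillis = 60_000;   // one-minute stress window
        AtomicLong completed = new AtomicLong();

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        long end = System.currentTimeMillis() + runtimeMillis;

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                int q = 0;
                while (System.currentTimeMillis() < end) {
                    // Cycle through the query mix, one HTTP GET per query.
                    String query = QUERY_MIX.get(q++ % QUERY_MIX.size());
                    String url = ENDPOINT + "?query="
                            + URLEncoder.encode(query, StandardCharsets.UTF_8);
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                            .header("Accept", "application/sparql-results+json")
                            .timeout(Duration.ofSeconds(30))
                            .GET().build();
                    try {
                        HttpResponse<String> resp =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        if (resp.statusCode() == 200) {
                            completed.incrementAndGet(); // count only successes
                        }
                    } catch (Exception e) {
                        // Failed or timed-out queries are simply not counted.
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(runtimeMillis + 60_000, TimeUnit.MILLISECONDS);
        double qps = completed.get() / (runtimeMillis / 1000.0);
        System.out.printf("Completed %d queries (%.2f QPS)%n", completed.get(), qps);
    }
}

Varying the worker count in such a setup is what distinguishes single-user from parallel (multi-user) load testing; a full read-write benchmark run would additionally interleave SPARQL UPDATE requests, which is the scenario the abstract describes.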

Keywords

Benchmarking · Triple stores · SPARQL · RDF · Log analysis

References

  1. Aluç, G., Hartig, O., Özsu, M.T., Daudjee, K.: Diversified stress testing of RDF data management systems. In: Mika, P., et al. (eds.) ISWC 2014. LNCS, vol. 8796, pp. 197–212. Springer, Cham (2014). doi:10.1007/978-3-319-11964-9_13
  2. Bizer, C., Schultz, A.: The Berlin SPARQL benchmark. Int. J. Semant. Web Inf. Syst. 5(2), 1–24 (2009)
  3. Görlitz, O., Thimm, M., Staab, S.: SPLODGE: systematic generation of SPARQL benchmark queries for linked open data. In: Cudré-Mauroux, P., et al. (eds.) ISWC 2012. LNCS, vol. 7649, pp. 116–132. Springer, Heidelberg (2012). doi:10.1007/978-3-642-35176-1_8
  4. Gray, J. (ed.): The Benchmark Handbook for Database and Transaction Systems, 1st edn. Morgan Kaufmann, Burlington (1991)
  5. Guo, Y., Pan, Z., Heflin, J.: LUBM: a benchmark for OWL knowledge base systems. J. Web Semant. 3(2–3), 158–182 (2005)
  6. Morsey, M., Lehmann, J., Auer, S., Ngonga Ngomo, A.-C.: DBpedia SPARQL benchmark – performance assessment with real queries on real data. In: Aroyo, L., et al. (eds.) ISWC 2011. LNCS, vol. 7031, pp. 454–469. Springer, Heidelberg (2011). doi:10.1007/978-3-642-25073-6_29
  7. Morsey, M., Lehmann, J., Auer, S., Ngonga Ngomo, A.-C.: Usage-centric benchmarking of RDF triple stores. In: Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI 2012) (2012)
  8. Przyjaciel-Zablocki, M., Schätzle, A., Hornung, T., Taxidou, I.: Towards a SPARQL 1.1 feature benchmark on real-world social network data. In: Proceedings of the First International Workshop on Benchmarking RDF Systems (2013)
  9. Saleem, M., Hasnain, A., Ngonga Ngomo, A.-C.: LargeRDFBench: a billion triples benchmark for SPARQL endpoint federation. J. Web Semant. (2017)
  10. Saleem, M., Kamdar, M.R., Iqbal, A., Sampath, S., Deus, H.F., Ngonga Ngomo, A.-C.: Big linked cancer data: integrating linked TCGA and PubMed. J. Web Semant. (2014)
  11. Saleem, M., Mehmood, Q., Ngonga Ngomo, A.-C.: FEASIBLE: a feature-based SPARQL benchmark generation framework. In: Arenas, M., et al. (eds.) ISWC 2015. LNCS, vol. 9366, pp. 52–69. Springer, Cham (2015). doi:10.1007/978-3-319-25007-6_4
  12. Saleem, M., Ngonga Ngomo, A.-C., Xavier Parreira, J., Deus, H.F., Hauswirth, M.: DAW: duplicate-aware federated query processing over the web of data. In: Alani, H., et al. (eds.) ISWC 2013. LNCS, vol. 8218, pp. 574–590. Springer, Heidelberg (2013). doi:10.1007/978-3-642-41335-3_36
  13. Schmidt, M., Görlitz, O., Haase, P., Ladwig, G., Schwarte, A., Tran, T.: FedBench: a benchmark suite for federated semantic data query processing. In: Aroyo, L., et al. (eds.) ISWC 2011. LNCS, vol. 7031, pp. 585–600. Springer, Heidelberg (2011). doi:10.1007/978-3-642-25073-6_37
  14. Schmidt, M., Hornung, T., Lausen, G., Pinkel, C.: SP2Bench: a SPARQL performance benchmark. In: International Conference on Data Engineering (ICDE), pp. 222–233. IEEE (2009)
  15. Tarasova, T., Marx, M.: ParlBench: a SPARQL benchmark for electronic publishing applications. In: Cimiano, P., et al. (eds.) ESWC 2013. LNCS, vol. 7955, pp. 5–21. Springer, Heidelberg (2013). doi:10.1007/978-3-642-41242-4_2
  16. Zhang, Y., Duc, P.M., Corcho, O., Calbimonte, J.-P.: SRBench: a streaming RDF/SPARQL benchmark. In: Cudré-Mauroux, P., et al. (eds.) ISWC 2012. LNCS, vol. 7649, pp. 641–657. Springer, Heidelberg (2012). doi:10.1007/978-3-642-35176-1_40

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Felix Conrads (1)
  • Jens Lehmann (2)
  • Muhammad Saleem (1)
  • Mohamed Morsey (3)
  • Axel-Cyrille Ngonga Ngomo (1, 4)

  1. University of Leipzig, AKSW, Leipzig, Germany
  2. University of Bonn and Fraunhofer IAIS, Bonn, Germany
  3. System and Network Engineering Group, University of Amsterdam, Amsterdam, Netherlands
  4. Department of Computer Science, University of Paderborn, Paderborn, Germany