
Diefficiency Metrics: Measuring the Continuous Efficiency of Query Processing Approaches

  • Maribel Acosta
  • Maria-Esther Vidal
  • York Sure-Vetter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10588)

Abstract

During empirical evaluations of query processing techniques, metrics like execution time, time to first answer, and throughput are usually reported. Albeit informative, these metrics cannot quantify and evaluate the efficiency of a query engine over a certain time period, i.e., its diefficiency, and thus fail to distinguish cutting-edge engines that deliver answers steadily throughout execution. We tackle this issue and devise two experimental metrics, named dief@t and dief@k, which measure diefficiency during an elapsed time period t or while k answers are produced, respectively. The dief@t and dief@k measurement methods rely on computing the area under the curve of answer traces, thereby capturing the concentration of answers over a time interval. We report experimental results of evaluating the behavior of a generic SPARQL query engine using both metrics. The observed results suggest that dief@t and dief@k are able to measure the performance of SPARQL query engines based on both the number of answers produced by an engine and the time required to generate these answers.
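To make the area-under-the-curve idea concrete, the sketch below computes both metrics from an answer trace, i.e., the list of timestamps at which an engine emitted each answer. This is a minimal illustration inferred from the abstract, not the authors' implementation: the function names dief_at_t and dief_at_k and the step-function integration are our own assumptions, and the paper's exact definitions may differ in details.

```python
import numpy as np

def dief_at_t(answer_timestamps, t):
    """dief@t (sketch): area under the cumulative-answers-vs-time curve
    up to elapsed time t. Higher values suggest more continuous behavior:
    more answers, delivered earlier."""
    ts = np.sort(np.asarray([s for s in answer_timestamps if s <= t], dtype=float))
    if ts.size == 0:
        return 0.0
    counts = np.arange(1, ts.size + 1)   # cumulative answer count (a step function)
    edges = np.append(ts, t)             # step boundaries, closed at time t
    # Integrate the step function: count in effect times the width of each step.
    return float(np.sum(counts * np.diff(edges)))

def dief_at_k(answer_timestamps, k):
    """dief@k (sketch): area under the same curve until the k-th answer
    arrives. Lower values suggest the engine reached k answers faster."""
    ts = np.asarray(sorted(answer_timestamps)[:k], dtype=float)
    if ts.size < k:
        return float("nan")              # the engine never produced k answers
    counts = np.arange(1, k)             # count in effect between consecutive answers
    return float(np.sum(counts * np.diff(ts)))

# Illustrative traces: five answers within one second in both cases.
trace_steady = [0.2, 0.4, 0.6, 0.8, 1.0]    # answers arrive steadily
trace_bursty = [0.9, 0.92, 0.94, 0.96, 1.0] # answers arrive in a final burst
print(dief_at_t(trace_steady, 1.0))  # 2.0  -> larger area, continuous delivery
print(dief_at_t(trace_bursty, 1.0))  # 0.28 -> smaller area, late burst
```

Both traces yield identical execution time and throughput, yet dief@t separates them, which is exactly the distinction the abstract argues conventional metrics miss.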


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Maribel Acosta (1)
  • Maria-Esther Vidal (2, 3)
  • York Sure-Vetter (1)
  1. Institute AIFB, Karlsruhe Institute of Technology, Karlsruhe, Germany
  2. Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), Sankt Augustin, Germany
  3. Universidad Simón Bolívar, Caracas, Venezuela
