
SPgen: A Benchmark Generator for Spatial Link Discovery Tools

  • Tzanina Saveta
  • Irini Fundulaki
  • Giorgos Flouris
  • Axel-Cyrille Ngonga Ngomo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11136)

Abstract

A number of real and synthetic benchmarks have been proposed for evaluating the performance of link discovery systems. So far, only a limited number of link discovery benchmarks target the problem of linking geo-spatial entities. However, some of the largest knowledge bases of the Linked Open Data Web, such as LinkedGeoData, contain vast amounts of spatial information. Several systems have been developed that manage spatial data and consider the topology of spatial resources and the topological relations between them. In order to assess the ability of these systems to handle vast amounts of spatial data and perform the much-needed data integration in the Linked Geo Data Cloud, it is imperative to develop benchmarks for geo-spatial link discovery. In this paper we propose the Spatial Benchmark Generator SPgen, which can be used to test the performance of link discovery systems that deal with topological relations as defined in the state-of-the-art DE-9IM (Dimensionally Extended nine-Intersection Model). SPgen implements all topological relations of DE-9IM between LineStrings and Polygons in two-dimensional space. A comparative analysis using benchmarks produced with SPgen to assess and identify the capabilities of the AML, OntoIdea, RADON and Silk spatial link discovery systems is provided.
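To illustrate the DE-9IM model the abstract refers to: the model describes the topological relation between two geometries as a 3x3 matrix, where each entry records the dimension ('F' for empty, '0' for a point, '1' for a curve, '2' for an area) of the intersection of one geometry's interior, boundary, or exterior with the other's. The sketch below is not part of SPgen; it is a minimal, self-contained illustration (with a hypothetical function name) that computes the matrix for the simplest case, a point against an axis-aligned rectangle:

```python
def de9im_point_rect(px, py, xmin, ymin, xmax, ymax):
    """DE-9IM matrix (as a 9-character string) for a point vs. an
    axis-aligned rectangle. Row order: interior, boundary, exterior
    of the point; column order: interior, boundary, exterior of the
    rectangle. 'F' = empty, '0'/'1'/'2' = dimension of intersection."""
    inside = xmin < px < xmax and ymin < py < ymax
    on_boundary = (not inside
                   and xmin <= px <= xmax and ymin <= py <= ymax
                   and (px in (xmin, xmax) or py in (ymin, ymax)))
    outside = not inside and not on_boundary

    # The point's interior is the point itself: it meets exactly one
    # of the rectangle's interior, boundary, or exterior.
    row_interior = ['0' if inside else 'F',
                    '0' if on_boundary else 'F',
                    '0' if outside else 'F']
    row_boundary = ['F', 'F', 'F']   # a point has an empty boundary
    row_exterior = ['2', '1', '2']   # the point's exterior covers the
                                     # rectangle's interior (area),
                                     # boundary (curve), exterior (area)
    return ''.join(row_interior + row_boundary + row_exterior)

print(de9im_point_rect(1, 1, 0, 0, 4, 4))  # inside -> "0FFFFF212"
print(de9im_point_rect(4, 2, 0, 0, 4, 4))  # on boundary -> "F0FFFF212"
```

Named relations such as Within or Touches are then defined as patterns over this matrix; for example, a point inside a polygon matches the Within pattern `T*F**F***`. The LineString/Polygon cases that SPgen actually covers follow the same matrix construction, with curve-valued intersections appearing in the first two rows.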

Acknowledgments

The work presented in this paper was funded by the H2020 project HOBBIT (#688227).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Tzanina Saveta (1)
  • Irini Fundulaki (1)
  • Giorgos Flouris (1)
  • Axel-Cyrille Ngonga Ngomo (2)
  1. Institute of Computer Science - FORTH, Heraklion, Greece
  2. University of Paderborn, Paderborn, Germany
