Creating Large Size of Data with Apache Hadoop

  • Conference paper
  • First Online:
The Rise of Big Spatial Data

Part of the book series: Lecture Notes in Geoinformation and Cartography (LNGC)

Abstract

This paper presents research on building large datasets with Apache Hadoop. Our team manages an information system that calculates the probability of the existence of various objects in space and time. The system works with many different data sources, including large datasets. Because the data-processing workflow is complicated and time consuming, we were looking for a framework that could help with system management and, if possible, speed up data processing as well. Apache Hadoop was selected as the platform for enhancing our information system. Apache Hadoop is typically used for processing large datasets, but our information system must perform other types of tasks as well. The system computes spatio-temporal relations between different types of objects, which means that relatively large datasets (millions of records) are built from a relatively small number of input records (thousands). For this purpose, a PostgreSQL/PostGIS database or tools written in Java or another language are usually used. Our research focused on determining whether some of these tasks could simply be moved to the Apache Hadoop platform using a simple SQL editor such as Hive. We selected two types of common tasks and ran them on both the PostgreSQL and Apache Hadoop (Hive) platforms in order to compare the time needed to complete them. The paper presents the results of our research.
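The abstract does not specify which spatio-temporal relations the system computes, but the thousands-to-millions growth it describes is characteristic of pairwise relation building: every object in one set is related to every object in another, so the output size is the product of the input sizes. The following Python sketch (with hypothetical object tuples and a simple distance/time-gap relation chosen for illustration) shows how two sets of 1,000 records yield one million relation records:

```python
from itertools import product

def build_relations(objects_a, objects_b):
    """Build pairwise spatio-temporal relations between two object sets.

    Each object is a hypothetical (id, x, y, t) tuple. One record is
    emitted per pair, so len(objects_a) * len(objects_b) records are
    produced -- this is how a few thousand input records can grow into
    millions of relation records.
    """
    relations = []
    for (ida, xa, ya, ta), (idb, xb, yb, tb) in product(objects_a, objects_b):
        distance = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5  # planar distance
        time_gap = abs(ta - tb)                              # temporal distance
        relations.append((ida, idb, distance, time_gap))
    return relations

# Two sets of 1,000 synthetic objects each ...
a = [(i, float(i), 0.0, float(i)) for i in range(1000)]
b = [(j, 0.0, float(j), float(j)) for j in range(1000)]
# ... produce 1,000,000 relation records.
rels = build_relations(a, b)
print(len(rels))
```

In SQL terms this corresponds to a cross join between two tables with a distance expression in the select list, which is the kind of task the paper compares between PostgreSQL and Hive.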

Acknowledgments

Supported by a grant from the Student Grant Competition, FMG, VSB-TUO. We would like to thank all open source developers.

Corresponding author

Correspondence to Jan Růžička.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Růžička, J., Kocich, D., Orčík, L., Svozilík, V. (2017). Creating Large Size of Data with Apache Hadoop. In: Ivan, I., Singleton, A., Horák, J., Inspektor, T. (eds) The Rise of Big Spatial Data. Lecture Notes in Geoinformation and Cartography. Springer, Cham. https://doi.org/10.1007/978-3-319-45123-7_22
