Abstract
Hadoop uses the Hadoop Distributed File System (HDFS) to store big data and MapReduce to process it in cloud computing environments. Because Hadoop is optimized for large files, it has difficulty processing large numbers of small files. A small file is any file significantly smaller than the Hadoop block size, which is typically set to 64 MB. When handling many small files, Hadoop suffers from NameNode memory insufficiency and increased scheduling and processing time. This study proposes a performance improvement method for MapReduce processing that integrates the CombineFileInputFormat method with the reuse feature of the Java Virtual Machine (JVM). Existing methods create a mapper for every small file. In contrast, the proposed method reduces the number of mappers by combining many small files into a single split using CombineFileInputFormat. Moreover, to further improve MapReduce performance, the proposed method reduces JVM creation time by reusing a single JVM to run multiple mappers, rather than creating a new JVM for every mapper.
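The two ideas in the abstract — combining many small files into one split and reusing task JVMs — both map to standard Hadoop job configuration. The sketch below is illustrative, not the paper's actual implementation: it uses `CombineTextInputFormat` (a stock concrete subclass of the abstract `CombineFileInputFormat`), and the driver class name, job name, and the 64 MB split cap are assumptions. The `mapred.job.reuse.jvm.num.tasks` property is the classic (MRv1) JVM-reuse knob; in YARN-based MRv2 a similar effect is obtained via uber mode instead.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver showing the two settings discussed in the abstract.
public class SmallFileJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Reuse each task JVM for an unlimited number of tasks (-1)
        // instead of forking a fresh JVM per mapper (MRv1 setting).
        conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);

        Job job = Job.getInstance(conf, "small-file-job");
        job.setJarByClass(SmallFileJobDriver.class);
        // Mapper/reducer classes would be set here as usual.

        // Pack many small files into each input split so one mapper
        // processes a whole batch of files rather than a single file.
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Cap each combined split at roughly one HDFS block (64 MB).
        CombineTextInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With these two settings, a directory of thousands of kilobyte-scale files yields a handful of mappers running in a small pool of reused JVMs, rather than one mapper and one JVM per file.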
Acknowledgements
This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (2015R1D1A3A01019642) and Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1C1A1A02037515).
Cite this article
Choi, C., Choi, C., Choi, J. et al. Improved performance optimization for massive small files in cloud computing environment. Ann Oper Res 265, 305–317 (2018). https://doi.org/10.1007/s10479-016-2376-0