Abstract
Hadoop is an open-source software framework for storing and processing large data sets on clusters of commodity hardware. It is designed to handle very large volumes of data, which can easily run into petabytes or even exabytes. Individual Hadoop files are typically large, ranging from gigabytes to terabytes, and large Hadoop clusters store millions of such files. Hadoop distributes work across a large number of servers so that it can process data in parallel. Server and storage failures are expected, and the system continues to operate despite failed storage units or even failed servers. Traditional databases are geared mostly toward fast access to data rather than batch processing; Hadoop was originally designed for batch workloads, such as indexing millions of web pages, and provides streaming access to datasets. The data-consistency issues that can arise in an updatable database do not affect Hadoop file systems, because only a single writer may perform write operations on a file. Activity on a server is captured in logs, which can be generated by web servers, application servers, or both. There are two types of log file: access logs and error logs. An access log records client information, whereas an error log records exceptions and error details. This chapter addresses the log-file analysis process using Elasticsearch, Logstash, and Kibana, showing the frequency of errors over a given time period using visualizations such as trend graphs, bar graphs, pie charts, and gauge charts.
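The core analysis step the abstract describes is aggregating error frequency from access-log lines by time period, which Kibana would then chart. As a minimal sketch of that idea, the snippet below parses hypothetical Apache-combined-style log lines (sample data invented for illustration, not taken from the chapter) and counts 4xx/5xx responses per hour bucket with plain Python:

```python
import re
from collections import Counter

# Hypothetical access-log lines in an Apache "combined"-like format
# (illustrative sample data only).
SAMPLE_LOG = """\
10.0.0.1 - - [12/Mar/2018:10:01:04 +0000] "GET /index.html HTTP/1.1" 200 1024
10.0.0.2 - - [12/Mar/2018:10:15:33 +0000] "GET /missing HTTP/1.1" 404 512
10.0.0.3 - - [12/Mar/2018:11:02:10 +0000] "POST /api HTTP/1.1" 500 256
10.0.0.1 - - [12/Mar/2018:11:45:59 +0000] "GET /index.html HTTP/1.1" 200 1024
"""

# Capture the date, the hour, and the HTTP status code from each line.
LINE_RE = re.compile(
    r'\[(?P<day>[^:]+):(?P<hour>\d{2}):\d{2}:\d{2}[^\]]*\]\s+"[^"]*"\s+(?P<status>\d{3})'
)

def errors_per_hour(log_text):
    """Count 4xx/5xx responses per hour bucket, the kind of aggregation
    a Kibana bar graph over error frequency would visualize."""
    counts = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if not m:
            continue  # skip malformed lines
        if m.group("status").startswith(("4", "5")):
            counts[f"{m.group('day')} {m.group('hour')}:00"] += 1
    return dict(counts)

print(errors_per_hour(SAMPLE_LOG))
# → {'12/Mar/2018 10:00': 1, '12/Mar/2018 11:00': 1}
```

In the pipeline the chapter describes, this parsing and bucketing would instead be done by a Logstash grok filter feeding Elasticsearch, with Kibana rendering the resulting counts as trend, bar, pie, or gauge charts.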
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Purnachandra Rao, B., Nagamalleswara Rao, N. (2019). HDFS Logfile Analysis Using ElasticSearch, LogStash and Kibana. In: Krishna, A., Srikantaiah, K., Naveena, C. (eds) Integrated Intelligent Computing, Communication and Security. Studies in Computational Intelligence, vol 771. Springer, Singapore. https://doi.org/10.1007/978-981-10-8797-4_20
DOI: https://doi.org/10.1007/978-981-10-8797-4_20
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-8796-7
Online ISBN: 978-981-10-8797-4
eBook Packages: Intelligent Technologies and Robotics (R0)