Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

  • Chapter in Conquering Big Data with High Performance Computing

Abstract

Big data is prevalent in HPC. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data; these workflows often run across thousands of CPU cores and perform simultaneous data accesses, data movements, and computations. Analyzing the performance of such workflows is challenging because it involves terabytes or petabytes of workflow data and execution measurements collected from a large number of nodes and many parallel task executions. To help identify performance bottlenecks and debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework built on state-of-the-art open-source big data processing tools. Our tool ingests system logs and application performance measurements, extracts key performance features, and applies statistical and data mining methods to the resulting performance data. It uses an efficient data processing engine that lets users interactively analyze large volumes of logs and measurements of different types. To illustrate the functionality of the framework, we present case studies on workflows from an astronomy project, the Palomar Transient Factory (PTF), and on job logs from a genome analysis scientific cluster. These studies processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big data workflows.
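
Below is a minimal sketch of the kind of interactive log analysis described above, using Apache Spark's DataFrame API (the data processing engine referenced in Note 1). The input file name, column names, and aggregations are illustrative assumptions, not the chapter's actual code or schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-analysis-sketch").getOrCreate()

# Ingest application/system logs already parsed to CSV with an assumed schema:
# job_id, task, start_time, end_time, bytes_read.
logs = spark.read.csv("perf_logs.csv", header=True, inferSchema=True)

# Extract a simple performance feature: per-record elapsed time.
features = logs.withColumn("elapsed", F.col("end_time") - F.col("start_time"))

# Summarize execution-time statistics per task to spot slow outliers interactively.
summary = (features.groupBy("task")
           .agg(F.count("*").alias("n"),
                F.mean("elapsed").alias("mean_s"),
                F.expr("percentile_approx(elapsed, 0.95)").alias("p95_s")))
summary.orderBy(F.desc("p95_s")).show()
```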


Notes

  1. The current version of Apache Spark™ is optimized for using local disk as intermediate data storage rather than accessing intermediate data from a parallel file system, as would be required on scientific clusters. However, the lack of node-local disk on the scientific clusters did not significantly affect performance, because most of the performance analyses in PATHA were compute bound and most of the data movement occurred during parsing and loading. (A configuration sketch follows these notes.)

  2. The configuration of the Edison cluster system used for PATHA differs from that of the Edison cluster system used for the PTF.

  3. The linear regression coefficients are 5.673 × 10⁻³ for checkpoint 31 and 8.515 × 10⁻⁴ for checkpoint 36. (A sketch of fitting such a coefficient also follows these notes.)

  4. Note that the points in Fig. 7.7b, c near (0, [−0.05, −0.25]) are not shown in Fig. 7.7a because they are part of the major cluster near (0, 0).
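
Note 1 refers to Spark's use of local disk for intermediate (shuffle and spill) data. The sketch below shows one way to redirect that storage on a cluster without node-local disk; the scratch path is purely illustrative, and spark.local.dir is the standard Spark property for this purpose (it may be overridden by the cluster manager's own settings).

```python
from pyspark.sql import SparkSession

# Point Spark's intermediate/shuffle storage at a scratch directory on the
# parallel file system; the path below is an illustrative assumption.
spark = (SparkSession.builder
         .appName("patha-sketch")
         .config("spark.local.dir", "/global/scratch/spark_tmp")
         .getOrCreate())
```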
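Note 3 quotes per-checkpoint linear regression coefficients. The sketch below, with made-up measurements, shows how such a coefficient can be obtained as the slope of an ordinary least-squares fit of elapsed time against a workload-size measure; the data and variable names are hypothetical.

```python
import numpy as np

# Hypothetical measurements for one checkpoint: a workload-size measure
# (e.g., number of objects processed) and the elapsed seconds per execution.
size = np.array([100, 250, 400, 800, 1200, 2000], dtype=float)
elapsed = np.array([0.7, 1.5, 2.4, 4.6, 6.9, 11.4])

# Fit elapsed ≈ a * size + b; the slope a plays the role of the reported
# per-checkpoint regression coefficient.
a, b = np.polyfit(size, elapsed, deg=1)
print(f"slope = {a:.3e} s per unit of workload, intercept = {b:.3f} s")
```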


Acknowledgements

This work was supported by the Office of Advanced Scientific Computing Research, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and used resources of NERSC. The authors would like to thank Douglas Jacobson, Jay Srinivasan, and Richard Gerber at NERSC, Bryce Foster and Alex Copeland at JGI, and Arie Shoshani at LBNL.

Author information

Corresponding author

Correspondence to Wucherl Yoo.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Yoo, W., Koo, M., Cao, Y., Sim, A., Nugent, P., Wu, K. (2016). Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters. In: Arora, R. (eds) Conquering Big Data with High Performance Computing. Springer, Cham. https://doi.org/10.1007/978-3-319-33742-5_7

  • DOI: https://doi.org/10.1007/978-3-319-33742-5_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33740-1

  • Online ISBN: 978-3-319-33742-5

  • eBook Packages: Computer Science, Computer Science (R0)
