Performance Prediction for Scalability Analysis

Chapter in: Performance Analysis of Parallel Applications for HPC

Abstract

Performance prediction is an effective approach for understanding the scalability of large-scale parallel applications when the full target system is not available. Accurate and efficient prediction is difficult, however, because the execution time of a parallel application is determined by several factors: the sequential computation time in each process, the communication time, and their convolution. This chapter proposes a novel approach that acquires the sequential computation time accurately and efficiently while requiring only a single node of the target platform. First, we employ deterministic replay techniques to execute any process of a parallel application on a single node at real speed. Thus, we can measure the real sequential computation time of each process on a target node, one process at a time. Second, we observe that the processes of a parallel application can be clustered into a few groups in which all processes exhibit similar computation behavior. Based on this observation, we execute only representative processes, which significantly reduces measurement time. We implement a performance prediction framework, called Phantom, which integrates this computation-time acquisition approach with a trace-driven network simulator. We validate our approach on several platforms; its prediction error is below 8% on average. (Ⓒ 2015 IEEE. Reproduced, with permission, from Jidong Zhai, et al., Performance prediction for large-scale parallel applications using representative replay, IEEE Transactions on Computers, 2015.)
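The representative-replay idea above can be sketched as follows: characterize each MPI rank by a vector of computation times, cluster ranks with similar vectors, and replay only one member of each group. This is a minimal illustration, not Phantom's actual algorithm: the sample profiles, the greedy threshold clustering, and all names (`comp_profiles`, `cluster_ranks`) are assumptions made for the sketch.

```python
# Sketch: group ranks by similar per-region computation-time profiles,
# then pick one representative per group to replay on a single node.
import math

def euclidean(a, b):
    """Distance between two per-rank computation-time vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_ranks(profiles, threshold):
    """Greedy clustering: a rank joins the first group whose representative
    profile is within `threshold`; otherwise it starts a new group."""
    groups = []  # each group: (representative profile, [member rank ids])
    for rank, prof in enumerate(profiles):
        for rep, members in groups:
            if euclidean(prof, rep) <= threshold:
                members.append(rank)
                break
        else:
            groups.append((prof, [rank]))
    return groups

# Illustrative per-rank computation times (seconds) for three code regions.
comp_profiles = [
    [1.00, 2.00, 0.50],  # rank 0
    [1.02, 1.98, 0.51],  # rank 1: behaves like rank 0
    [3.00, 0.40, 2.20],  # rank 2
    [2.95, 0.42, 2.18],  # rank 3: behaves like rank 2
]

groups = cluster_ranks(comp_profiles, threshold=0.2)
for rep, members in groups:
    print(f"replay rank {members[0]} as representative of ranks {members}")
# With these profiles, only 2 of the 4 ranks need to be replayed.
```

The measurement saving scales with the ratio of ranks to groups: for applications with thousands of processes falling into a handful of behavior classes, only a few replays are needed on the single target node.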


Notes

  1. Because the message passing interface (MPI) is the dominant programming model in large-scale high-performance computing, we use the term parallel applications in this chapter to mean parallel applications written in MPI. Our approach, however, is applicable to other message-passing programming models.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.


Cite this chapter

Zhai, J., Jin, Y., Chen, W., Zheng, W. (2023). Performance Prediction for Scalability Analysis. In: Performance Analysis of Parallel Applications for HPC. Springer, Singapore. https://doi.org/10.1007/978-981-99-4366-1_6


  • DOI: https://doi.org/10.1007/978-981-99-4366-1_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4365-4

  • Online ISBN: 978-981-99-4366-1

  • eBook Packages: Computer Science (R0)
