
The Journal of Supercomputing, Volume 72, Issue 12, pp 4771–4809

HPC node performance and energy modeling with the co-location of applications

  • Daniel Dauwe
  • Eric Jonardi
  • Ryan D. Friese
  • Sudeep Pasricha
  • Anthony A. Maciejewski
  • David A. Bader
  • Howard Jay Siegel

Abstract

Multicore processors have become an integral part of modern large-scale and high-performance parallel and distributed computing systems. Unfortunately, applications co-located on multicore processors can suffer from decreased performance and increased dynamic energy use as a result of interference in shared resources, such as memory. As this interference is difficult to characterize, assumptions about application execution time and energy usage can be misleading in the presence of co-location. Consequently, it is important to accurately characterize the performance and energy usage of applications that execute in a co-located manner on these architectures. This work investigates some of the disadvantages of co-location, and presents a methodology for building models capable of utilizing varying amounts of information about a target application and its co-located applications to make predictions about the target application’s execution time and the system’s energy use under arbitrary co-locations of a wide range of application types. The proposed methodology is validated on three different server-class Intel Xeon multicore processors using eleven applications from two scientific benchmark suites. The model’s utility for scheduling is also demonstrated in a simulated large-scale high-performance computing environment through the creation of a co-location aware scheduling heuristic. This heuristic shows that scheduling with information generated by the proposed modeling methodology yields significant improvements over a scheduling heuristic that is oblivious to co-location interference.

Keywords

Performance modeling · Energy modeling · Resource management · Memory interference · Application co-location · Benchmarking · Multicore processors · Scheduling

Notes

Acknowledgments

The authors thank Mark Oxley for his valuable comments on this research. This work was supported by the National Science Foundation (NSF) under Grant Numbers CNS-0905339, CCF-1252500, CCF-1302693, ACI-1339745, and an NSF Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. The authors thank Hewlett Packard (HP) of Fort Collins for providing some of the machines used for testing. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy under contract DE-AC0576RL01830. A preliminary version of portions of this work appeared in [40]. The additions to this work include creating an additional set of models for energy use prediction, validating the execution time and energy use prediction models on an additional multicore processor, and creating and analyzing a co-location aware scheduling heuristic that utilizes prediction models generated by our modeling methodology for making intelligent co-location decisions.

References

  1. Verma A, Ahuja P, Neogi A (2008) Power-aware dynamic placement of HPC applications. In: 22nd Annual International Conference on Supercomputing (ICS ’08), pp 175–184
  2. Zhu Q, Zhu J, Agrawal G (2010) Power-aware consolidation of scientific workflows in virtualized environments. In: ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (SC ’10), pp 1–12
  3. Tang L, Mars J, Vachharajani N, Hundt R, Soffa M (2011) The impact of memory subsystem resource sharing on datacenter applications. In: 38th Annual International Symposium on Computer Architecture (ISCA ’11), pp 283–294
  4. Sandberg A, Sembrant A, Hagersten E, Black-Schaffer D (2013) Modeling performance variation due to cache sharing. In: IEEE 19th International Symposium on High Performance Computer Architecture (HPCA ’13), pp 155–166
  5. Choi J, Dukhan M, Liu X, Vuduc R (2014) Algorithmic time, energy, and power on candidate HPC compute building blocks. In: IEEE 28th International Parallel and Distributed Processing Symposium (IPDPS ’14), pp 447–457
  6. Dauwe D, Friese R, Pasricha S, Maciejewski AA, Koenig GA, Siegel HJ (2014) Modeling the effects on power and performance from memory interference of co-located applications in multicore systems. In: The 2014 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA ’14), pp 3–9
  7. Subramanian L, Seshadri V, Ghosh A, Khan S, Mutlu O (2015) The application slowdown model: quantifying and controlling the impact of inter-application interference at shared caches and main memory. In: 48th International Symposium on Microarchitecture (MICRO-48 ’15), pp 62–75
  8. Merkel A, Stoess J, Bellosa F (2010) Resource-conscious scheduling for energy efficiency on multicore processors. In: 5th European Conference on Computer Systems (EuroSys ’10), pp 153–166
  9. Luque C, Moreto M, Cazorla FJ, Gioiosa R, Buyuktosunoglu A, Valero M (2012) CPU accounting for multicore processors. IEEE Trans Comput 61(2):251–264
  10. Mars J, Tang L, Hundt R, Skadron K, Soffa M (2011) Bubble-up: increasing utilization in modern warehouse scale computers via sensible co-locations. In: IEEE/ACM 44th International Symposium on Microarchitecture (MICRO ’11), pp 248–259
  11. Dwyer T, Fedorova A, Blagodurov S, Roth M, Gaud F, Pei J (2013) A practical method for estimating performance degradation on multicore processors, and its application to HPC workloads. In: ACM/IEEE International Conference on High Performance Computing, Networking, Storage and Analysis (SC ’12), pp 83:1–83:11
  12. Cazorla FJ, Ramirez A, Valero M, Fernandez E (2004) Dynamically controlled resource allocation in SMT processors. In: 37th International Symposium on Microarchitecture (MICRO-37 ’04), pp 171–182
  13. De Vuyst M, Kumar R, Tullsen DM (2006) Exploiting unbalanced thread scheduling for energy and performance on a CMP of SMT processors. In: IEEE 20th International Parallel and Distributed Processing Symposium (IPDPS ’06), pp 10–20
  14. Feliu J, Sahuquillo J, Petit S, Duato J (2015) Addressing fairness in SMT multicores with a progress-aware scheduler. In: IEEE 29th International Parallel and Distributed Processing Symposium (IPDPS ’15), pp 187–196
  15. Young BD, Apodaca J, Briceño LD, Smith J, Pasricha S, Maciejewski AA, Siegel HJ, Khemka B, Bahirat S, Ramirez A, Zou Y (2013) Deadline and energy constrained dynamic resource allocation in a heterogeneous computing environment. J Supercomput 63(2):326–347
  16. Al-Qawasmeh AM, Pasricha S, Maciejewski AA, Siegel HJ (2015) Power and thermal-aware workload allocation in heterogeneous data centers. IEEE Trans Comput 64(2):477–491
  17. Khemka B, Friese R, Pasricha S, Maciejewski AA, Siegel HJ, Koenig GA, Powers S, Hilton M, Rambharos R, Poole S (2015) Utility maximizing dynamic resource management in an oversubscribed energy-constrained heterogeneous computing system. Sustain Comput Inf Syst 5:14–30
  18. Oxley M, Pasricha S, Maciejewski AA, Siegel HJ, Apodaca J, Young D, Briceño L, Smith J, Bahirat S, Khemka B, Ramirez A, Zou Y (2015) Makespan and energy robust stochastic static resource allocation of bags-of-tasks to a heterogeneous computing system. IEEE Trans Parallel Distrib Syst 2791–2805
  19. Talby D, Feitelson DG (1999) Supporting priorities and improving utilization of the IBM SP scheduler using slack-based backfilling. In: 13th International Parallel Processing Symposium (IPPS ’99), pp 513–517
  20. Sadhasivam S, Nagaveni N, Jayarani R, Ram RV (2009) Design and implementation of an efficient two-level scheduler for cloud computing environment. In: International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom ’09), pp 884–886
  21. Utrera G, Corbalan J, Labarta J (2014) Scheduling parallel jobs on multicore clusters using CPU oversubscription. J Supercomput 68(3):1113–1140
  22. Lifka DA (1995) The ANL/IBM SP scheduling system. In: Job scheduling strategies for parallel processing, pp 295–303
  23. Jolliffe I (2002) Principal component analysis. Wiley, Hoboken, NJ
  24. Chong EK, Zak SH (2013) An introduction to optimization. Wiley, Hoboken, NJ
  25. LeCun YA, Bottou L, Orr GB, Müller K (2012) Efficient backprop. In: Neural networks: tricks of the trade. Springer, New York
  26. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York, NY
  27. Ubuntu 14 Release Notes. https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes. Accessed Jan 2016
  28. Intel 64 and IA-32 Architectures Software Developer’s Manual, Combined Volumes 1, 2A, 2B, 2C, 3A, 3B, 3C and 3D. Technical Report, 2015. http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-manual-325462. Accessed Jan 2016
  29. Intel Xeon E3-1225v3 Processor. http://ark.intel.com/products/75461/. Accessed Jan 2016
  30. Intel Xeon E5649 Processor. http://ark.intel.com/products/52581/. Accessed Jan 2016
  31. Intel Xeon E5-2697v2 Processor. http://ark.intel.com/products/75283/. Accessed Jan 2016
  32. Performance Application Programming Interface. http://icl.cs.utk.edu/papi/. Accessed Jan 2016
  33. HPCToolkit. http://hpctoolkit.org/. Accessed Jan 2016
  34. Watts Up? Plug Load Meters. https://www.wattsupmeters.com/secure/products.php?pn=0. Accessed Jan 2016
  35. PARSEC Benchmark Suite. http://parsec.cs.princeton.edu/. Accessed Jan 2016
  36. NAS Parallel Benchmarks. http://www.nas.nasa.gov/publications/npb.html. Accessed Jan 2016
  37. Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. CRC Press, New York, NY
  38. Khemka B, Friese R, Pasricha S, Maciejewski AA, Siegel HJ, Koenig GA, Powers S, Hilton M, Rambharos R, Wright M, Poole S (2015) Comparison of energy-constrained resource allocation heuristics under different task management environments. In: The 2015 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2015), pp 3–12
  39. Khemka B, Friese R, Briceno LD, Siegel HJ, Maciejewski AA, Koenig GA, Groer C, Okonski G, Hilton MM, Rambharos R, Poole S (2015) Utility functions and resource management in an oversubscribed heterogeneous computing environment. IEEE Trans Comput 64(8):2394–2407
  40. Dauwe D, Jonardi E, Friese R, Pasricha S, Maciejewski AA, Bader DA, Siegel HJ (2015) A methodology for co-location aware application performance modeling in multicore computing. In: 17th Workshop on Advances on Parallel and Distributed Computing Models (APDCM ’15), pp 434–443

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Daniel Dauwe (1)
  • Eric Jonardi (1)
  • Ryan D. Friese (1)
  • Sudeep Pasricha (1, 2)
  • Anthony A. Maciejewski (1)
  • David A. Bader (3)
  • Howard Jay Siegel (1, 2)
  1. Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, USA
  2. Department of Computer Science, Colorado State University, Fort Collins, USA
  3. College of Computing, Georgia Institute of Technology, Atlanta, USA
