The Journal of Supercomputing, Volume 67, Issue 1, pp 1–30

AMRC: an algebraic model for reconfiguration of high performance cluster computing systems at runtime



High Performance Cluster Computing Systems (HPCSs) deliver their best performance when their configuration is customized, at design time, to the features of the problem to be solved. If the problem has a static nature and fixed features, an optimal customized configuration is therefore possible. New generations of scientific and industrial problems, however, typically have a dynamic nature and behavior. A drawback of this dynamicity is that customized HPCSs face challenges at runtime and consequently show degraded performance, because the dynamic problem is no longer adapted to the configuration of the HPCS: the problem's requests do not align with the direction for which the HPCS was configured. The main proposed solutions to this challenge are dynamic load balancing and reconfigurable platforms.

In this paper, a vector algebra-based model for HPCS reconfiguration at runtime, named AMRC, is presented. The model identifies the element causing the dynamic behavior and analyzes its cause at runtime with respect to both software and hardware. Results show that by defining a general state vector, whose direction points toward high-performance computing and whose weights are based on the initial features and explicit requirements of the problem, together with a vector for each process in the problem at runtime, changes in direction can be traced and their causes uncovered.
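The core idea of comparing each runtime process vector against a general state vector can be illustrated with a minimal sketch. The feature dimensions, weights, and divergence threshold below are invented for illustration and are not taken from the paper; only the pattern (weighted state vector, per-process vectors, direction comparison) reflects the abstract.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# General state vector: weights derived from the problem's initial features
# and explicit requirements (illustrative numbers, e.g. CPU, memory, I/O demand).
state = [0.6, 0.3, 0.1]

# Per-process vectors sampled at runtime (hypothetical measurements).
processes = {
    "p0": [0.58, 0.32, 0.10],  # still aligned with the configured direction
    "p1": [0.10, 0.20, 0.70],  # diverging: process has become I/O-bound
}

for name, vec in processes.items():
    if cosine(state, vec) < 0.9:  # illustrative divergence threshold
        print(f"{name} diverges from the configured direction")
```

In this sketch, a process whose runtime vector drifts away from the state vector's direction is flagged as a candidate cause of the dynamic behavior, which is the kind of tracing the model performs.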


Keywords: High performance cluster computing · Reconfiguration · Dynamic problems · Vector algebra model



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

School of Computer Engineering, Iran University of Science and Technology, Narmak, Iran
