L3C Model of High-Performance Computing Cluster for Scientific Applications

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 732)

Abstract

High-performance computing clusters (HPCCs) are widely used for scientific applications. In a typical scientific research environment, software applications require large but varying numbers of processing elements and processor cores. To maximize the throughput of a computing cluster and achieve optimum utilization of its resources, a new model is proposed. The proposed model views the computing cluster as a loosely coupled cluster of clusters (L3C). Execution time for scientific applications also varies, both in elapsed wall-clock time and in CPU time consumed. The process scheduling algorithm maintains a list of applications awaiting execution, together with the number of nodes/cores each requires. Using the L3C model and the scheduling algorithm, multiple applications are scheduled on the computing cluster for concurrent execution. The basis for proposing the L3C model, along with its details, is discussed in the paper. Experimental results of performance evaluation of HPC clusters were published earlier by the authors and are referred to at the respective places. The L3C model has certain inherent advantages, which are also discussed in the paper.
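The abstract describes a scheduler that keeps a queue of applications, each with its required node/core count, and places them on sub-clusters of the L3C for concurrent execution. The following sketch illustrates that idea under stated assumptions: the class name, the first-fit placement policy, and all method names are illustrative inventions, not the paper's actual algorithm.

```python
from collections import deque


class L3CScheduler:
    """Illustrative sketch of queue-based scheduling in the spirit of the
    L3C model: the cluster is treated as loosely coupled sub-clusters, and
    each queued application is placed wherever enough free cores exist.
    The first-fit policy here is an assumption, not the paper's method."""

    def __init__(self, subcluster_cores):
        self.free = list(subcluster_cores)  # free cores per sub-cluster
        self.queue = deque()                # (app_name, cores_required)
        self.placement = {}                 # app_name -> sub-cluster index

    def submit(self, app, cores):
        """Add an application and its core requirement to the wait list."""
        self.queue.append((app, cores))

    def schedule(self):
        """Drain the queue, first-fit placing each app on a sub-cluster;
        apps that do not fit anywhere stay queued for the next pass."""
        pending = deque()
        while self.queue:
            app, cores = self.queue.popleft()
            for i, free_cores in enumerate(self.free):
                if free_cores >= cores:
                    self.free[i] -= cores
                    self.placement[app] = i
                    break
            else:
                pending.append((app, cores))  # no sub-cluster has room yet
        self.queue = pending
        return self.placement

    def release(self, app, cores):
        """Return an application's cores when it finishes."""
        i = self.placement.pop(app)
        self.free[i] += cores
```

For example, on sub-clusters of 64 and 32 cores, a 48-core and a 32-core job run concurrently on different sub-clusters, while a 40-core job waits until cores are released.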

Keywords

High-performance computing cluster · Performance evaluation · HPCC throughput · Scientific applications

References

  1. Alam, S.R., Barrett, R.F., Kuehn, J.A., Roth, P.C., Vetter, J.S.: Characterization of scientific workloads on systems with multi-core processors. In: IEEE International Symposium on Workload Characterization, pp. 225–236 (2006)
  2. Dongarra, J., Luszczek, P., Petitet, A.: The LINPACK Benchmark: past, present, and future. Concurrency: Pract. Exp. 15, 803–820 (2003)
  3. Langou, J., Dongarra, J.: The problem with the Linpack benchmark matrix generator. Int. J. High Perform. Comput. Appl. 23(1), 5–14 (2009)
  4. Petitet, A., Whaley, R.C., Dongarra, J., Cleary, A.: HPL—A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers. Innovative Computing Laboratory, Computer Science Department, University of Tennessee, September 2008
  5. Buyya, R. (ed.): High Performance Cluster Computing: Architectures and Systems, vol. 1. Prentice Hall PTR, NJ (1999). ISBN: 0-13-013784-7
  6. Buyya, R. (ed.): High Performance Cluster Computing: Programming and Applications, vol. 2. Prentice Hall PTR, NJ, USA (1999). ISBN: 0-13-013785-5
  7. Hwang, K., Dongarra, J., Fox, G.: Distributed and Cloud Computing, 1st edn. Morgan Kaufmann (2011)
  8. Rajan, A., Joshi, B.K., Rawat, A.: Critical analysis of HPL performance under different process distribution patterns. In: CSI 6th International Conference on Software Engineering (CONSEG 2012), Devi Ahilya Vishwavidyalaya (DAVV), Indore, MP, India, 5–7 Sept 2012
  9. Vaidya, M.: Parallel processing of cluster by map reduce. Int. J. Distrib. Parallel Syst. (IJDPS) 3(1), 167 (2012)
  10. Rajan, A., Joshi, B.K., Rawat, A.: Analysis of process distribution in HPC cluster using HPL. In: The Second IEEE International Conference on Parallel, Distributed and Grid Computing 2012 (PDGC 2012), Jaypee University of Information Technology, Solan, HP, India, 6–8 Dec 2012
  11. Rajan, A., Joshi, B.K., Rawat, A.: Analytical studies of peak computing power deliverable by small and mid size HPCC. In: INDIACom 2013—7th International Conference on ‘Computing for Nation Development’, BVICAM, New Delhi, 7–8 Mar 2013
  12. Rajan, A., Joshi, B.K.: Performance comparison of 20 Gbps and 40 Gbps Infiniband Interconnect. In: IEEE International Conference on Global Sustainable Development (IndiaCom 2014), BVICAM, New Delhi, 5–6 Mar 2014

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Alpana Rajan (1)
  • Brijendra Kumar Joshi (2)
  • Anil Rawat (1)
  1. Computer Division, Raja Ramanna Centre for Advanced Technology, Indore, India
  2. Military College of Telecommunication Engineering, Mhow, India