The Journal of Supercomputing, Volume 51, Issue 1, pp 40–57

New techniques for simulating high performance MPI applications on large storage networks

  • Alberto Núñez
  • Javier Fernández
  • Jose D. Garcia
  • Félix Garcia
  • Jesús Carretero


In this work, we propose new techniques to analyze the behavior, the performance, and especially the scalability of High Performance Computing (HPC) applications on different computing architectures. Our final objective is to test applications on a wide range of architectures (real or merely designed) and to scale them to any number of nodes or components. This paper presents a new simulation framework, called SIMCAN, for HPC architectures. The main characteristic of the proposed framework is that it can be configured to simulate a wide range of possible architectures involving any number of components. SIMCAN is designed to simulate complete HPC architectures, with special emphasis on the storage and network subsystems. The framework can model complete components (nodes, racks, switches, routers, etc.), but also key elements of the storage and network subsystems (disks, caches, sockets, file systems, schedulers, etc.). We also propose several methods to model the behavior of HPC applications, each with its own advantages and drawbacks. To evaluate the capabilities and accuracy of the SIMCAN framework, we executed an HPC application called BIPS3D both on a real computing cluster and on a modeled environment that represents that cluster. We also evaluated the scalability of the application on this kind of architecture by simulating it with an increased number of computing nodes.
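The component-based simulation approach described above can be illustrated with a minimal discrete-event sketch: compute nodes exchange messages through a switch and then write to local storage, and the simulated completion time is observed as the node count grows. All component names and latency values below are illustrative assumptions, not parameters or APIs from SIMCAN.

```python
import heapq

# Hypothetical per-operation service times (illustrative assumptions only).
NET_LATENCY = 0.005   # seconds for one message to traverse the switch
DISK_WRITE = 0.010    # seconds to flush one block to local disk

def simulate(num_nodes, messages_per_node):
    """Each node sends its messages through the switch, then writes one
    block to disk; returns the simulated completion time of the run."""
    events = []   # min-heap of (timestamp, sequence, description)
    seq = 0
    for node in range(num_nodes):
        t = 0.0
        for _ in range(messages_per_node):
            t += NET_LATENCY              # message hop through the switch
            heapq.heappush(events, (t, seq, f"node{node} msg"))
            seq += 1
        t += DISK_WRITE                   # final block written to storage
        heapq.heappush(events, (t, seq, f"node{node} write"))
        seq += 1
    finish = 0.0
    while events:                         # drain events in timestamp order
        time, _, _ = heapq.heappop(events)
        finish = max(finish, time)
    return finish

# Scalability study in miniature: rerun the same workload with more nodes.
for n in (4, 16, 64):
    print(n, simulate(n, 10))
```

Because the nodes in this toy model are independent, the completion time stays flat as nodes are added; a fuller model in the SIMCAN spirit would add contention at the shared switch and storage components, which is where scalability limits appear.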


Keywords: I/O simulation · High performance I/O · Large storage networks · Simulation of high performance applications





Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Alberto Núñez (1)
  • Javier Fernández (1)
  • Jose D. Garcia (1)
  • Félix Garcia (1)
  • Jesús Carretero (1)

  1. Computer Architecture Group, Computer Science Department, Universidad Carlos III de Madrid, Leganés, Madrid, Spain
