An Emulation System for Predicting Master/Slave Program Performance

  • Yasuharu Mizutani
  • Fumihiko Ino
  • Kenichi Hagihara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2790)

Abstract

This paper describes the design and implementation of a performance prediction system, named Master/Slave Emulator (MSE), for message-passing programs. The aim of MSE is to assist the performance study of master/slave (M/S) programs on clusters of PCs. MSE provides accurate predictions, enabling us to analyze the performance of M/S programs.
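For readers unfamiliar with the target program class, the following is a minimal sketch (not taken from the paper) of the master/slave message-passing pattern that MSE is intended to emulate and predict: a master rank hands out independent work items over MPI and collects the results, while the remaining ranks act as slaves. The task count, tags, and the placeholder computation are illustrative assumptions only.

    /* Minimal master/slave skeleton in C with MPI (illustrative sketch). */
    #include <mpi.h>

    #define NUM_TASKS 16   /* hypothetical number of work items */
    #define TAG_WORK  1
    #define TAG_STOP  2

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* master */
            int next = 0, active = 0, result;
            MPI_Status st;
            /* Seed each slave with one task. */
            for (int p = 1; p < size && next < NUM_TASKS; ++p, ++next, ++active)
                MPI_Send(&next, 1, MPI_INT, p, TAG_WORK, MPI_COMM_WORLD);
            /* Collect results and hand out the remaining tasks. */
            while (active > 0) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                --active;
                if (next < NUM_TASKS) {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    ++next; ++active;
                } else {
                    MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                             MPI_COMM_WORLD);
                }
            }
            /* Stop any slaves that never received work (more slaves than tasks). */
            for (int p = NUM_TASKS + 1; p < size; ++p)
                MPI_Send(&next, 0, MPI_INT, p, TAG_STOP, MPI_COMM_WORLD);
        } else {                               /* slave */
            int task;
            MPI_Status st;
            for (;;) {
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP) break;
                int result = task * task;      /* stand-in for real computation */
                MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

In programs of this shape, the measured quantities highlighted by the paper's keywords (round-trip time of each request/reply exchange and network contention among slaves) dominate overall performance, which is what MSE is built to predict.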

Keywords

Message Passing Interface · Round Trip Time · Program Performance · Emulation System · Network Contention

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Yasuharu Mizutani (1)
  • Fumihiko Ino (1)
  • Kenichi Hagihara (1)

  1. Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka, Japan