
Using an Enterprise Grid for Execution of MPI Parallel Applications – A Case Study

  • Adam K. L. Wong
  • Andrzej M. Goscinski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

An enterprise often has not a single cluster but a set of geographically distributed clusters, which can be combined to form an enterprise grid. In this paper we show, based on a case study, that enterprise grids can be used efficiently as parallel computers to carry out high-performance computing.
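
As a minimal illustration (not taken from the paper) of the kind of MPI program an enterprise grid would execute in such a case study, the sketch below shows a C program in which every rank computes a partial sum and the results are combined with MPI_Reduce; it assumes only a standard MPI implementation, such as LAM/MPI, booted across the nodes of the participating clusters.

    /* Illustrative MPI sketch: each rank computes a partial sum of 1..1000000
       and rank 0 gathers the total with MPI_Reduce. Assumes an MPI runtime
       (e.g. LAM/MPI) spanning the nodes of the participating clusters. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, name_len;
        char host[MPI_MAX_PROCESSOR_NAME];
        long local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &name_len);

        /* Interleave the iteration space across the ranks. */
        for (long i = rank + 1; i <= 1000000; i += size)
            local += i;

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        printf("rank %d of %d on %s: local sum %ld\n", rank, size, host, local);
        if (rank == 0)
            printf("total = %ld\n", total);

        MPI_Finalize();
        return 0;
    }

Under LAM/MPI such a program would typically be compiled with mpicc and launched with mpirun over a host file listing nodes from each cluster, so that the ranks of a single MPI job span the enterprise grid.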

Keywords

Parallel Application, Execution Performance, Fast Cluster, Enterprise Grid, International Parallel Processing Symposium



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Adam K. L. Wong¹
  • Andrzej M. Goscinski¹

  1. School of Engineering and Information Technology, Deakin University, Geelong, Australia
