
Distributed Shared Arrays: Portable Shared-Memory Programming Interface for Multiple Computer Systems

Published in: Cluster Computing

Abstract

This paper describes the design and implementation of a parallel programming environment called “Distributed Shared Array” (DSA), which provides a shared global array abstraction across different machines connected by a network. In DSA, users can define global arrays that can be accessed uniformly from any machine in the network. Array area allocation, replication, and migration are managed through explicit calls for array manipulation: defining array regions, reading and writing array regions, synchronizing, and controlling replication and migration. DSA is integrated with Grid (Globus) services. This paper also describes the use of our model for gene cluster analysis, multiple alignment, and molecular dynamics simulation; in these applications, global arrays store the distance matrix, the alignment matrix, and atom coordinates, respectively. Large array areas that cannot fit in the memory of an individual machine are made available by DSA. DSA achieved scalable performance comparable to that of conventional parallel programs written in MPI.
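The abstract's core idea — a global array that is block-partitioned across nodes yet read and written through uniform global indices — can be illustrated with a minimal single-process sketch. This is not the DSA API itself: the class and method names (`GlobalArray`, `put`, `get`) and the block-cyclic-free partitioning scheme are assumptions chosen for illustration only, and real DSA calls for synchronization, replication, and migration are omitted.

```python
# Illustrative single-process mock of a DSA-style global array.
# Names (GlobalArray, put, get) are hypothetical, not the actual DSA API.

class GlobalArray:
    """A 1-D global array block-partitioned across simulated nodes."""

    def __init__(self, size, nodes):
        self.size = size
        # Each "node" owns one contiguous block of the array.
        self.block = (size + nodes - 1) // nodes
        self.blocks = [[0.0] * max(0, min(self.block, size - i * self.block))
                       for i in range(nodes)]

    def put(self, start, values):
        """Write the region [start, start + len(values)) of the global array."""
        for offset, v in enumerate(values):
            i = start + offset
            self.blocks[i // self.block][i % self.block] = v

    def get(self, start, stop):
        """Read the region [start, stop) of the global array."""
        return [self.blocks[i // self.block][i % self.block]
                for i in range(start, stop)]

# A region spanning several nodes' blocks is still accessed with one
# uniform call, mirroring the "large array areas" use case in the paper.
ga = GlobalArray(size=10, nodes=4)
ga.put(2, [1.0, 2.0, 3.0, 4.0])
print(ga.get(0, 10))  # → [0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
```

In the applications mentioned above, the same pattern would hold with 2-D regions: a distance matrix or alignment matrix too large for one machine is spread over the blocks of many machines, while each worker addresses it through global indices.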





Cite this article

Nomoto, A., Watanabe, Y., Kaneko, W. et al. Distributed Shared Arrays: Portable Shared-Memory Programming Interface for Multiple Computer Systems. Cluster Computing 7, 65–72 (2004). https://doi.org/10.1023/B:CLUS.0000003944.78311.72
