Data movement and control substrate for parallel scientific computing

  • Nikos Chrisochoides
  • Induprakas Kodukula
  • Keshav Pingali
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1199)

Abstract

In this paper, we describe the design and implementation of a data-movement and control substrate (DMCS) for network-based, homogeneous communication within a single multiprocessor. DMCS is an implementation of an API for communication and computation that has been proposed by the PORTS consortium. One of the goals of this consortium is to define an API that can support heterogeneous computing without undue performance penalties for homogeneous computing; preliminary results from our implementation suggest that this is quite feasible. The DMCS implementation seeks to minimize the assumptions made about the homogeneous nature of its target architecture. Finally, we present some extensions to the PORTS API that improve the performance of sparse, adaptive, and irregular numerical computations.

Keywords

Parallel processing, runtime systems, communication, threads, networks

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Nikos Chrisochoides (1)
  • Induprakas Kodukula (1)
  • Keshav Pingali (1)
  1. Computer Science Department, Cornell University, Ithaca