
Flexible communication mechanisms for dynamic structured applications

  • Stephen J. Fink
  • Scott B. Baden
  • Scott R. Kohn
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1117)

Abstract

Irregular scientific applications are often difficult to parallelize due to elaborate dynamic data structures with complicated communication patterns. We describe flexible data orchestration abstractions that enable the programmer to express customized communication patterns arising in an important class of irregular computations—adaptive finite difference methods for partial differential equations. These abstractions are supported by KeLP, a C++ run-time library. KeLP enables the programmer to manage spatial data dependence patterns and express data motion handlers as first-class mutable objects. Using two finite difference applications, we show that KeLP's flexible communication model effectively manages elaborate data motion arising in semi-structured adaptive methods.
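
As an informal illustration of the data-orchestration style described above, the following C++ sketch models a communication schedule as a first-class, mutable object that is built up explicitly and executed later. The class and member names (Region, Dependence, MotionPlan, Mover) are assumptions made for illustration only; they are not quoted from the KeLP library itself, and a real run-time system would issue message-passing operations rather than print the transfers.

  // Minimal sketch (not the actual KeLP API): a communication schedule as a
  // first-class, mutable object. A MotionPlan records block-copy dependences
  // between rectangular regions; a Mover later executes the plan.
  #include <iostream>
  #include <vector>

  // An axis-aligned rectangular index region [lo, hi] in 2-D.
  struct Region {
      int lo[2], hi[2];
  };

  // One block-copy dependence: copy `region` of patch `src` into patch `dst`.
  struct Dependence {
      int src, dst;
      Region region;
  };

  // The schedule itself: a mutable container of dependences that the
  // application can build, inspect, and edit before any data actually moves.
  class MotionPlan {
  public:
      void copy(int srcPatch, const Region& r, int dstPatch) {
          deps_.push_back({srcPatch, dstPatch, r});
      }
      const std::vector<Dependence>& dependences() const { return deps_; }
  private:
      std::vector<Dependence> deps_;
  };

  // Executes a MotionPlan. Here it only prints the transfers it would perform;
  // a real library would carry out the corresponding message passing.
  class Mover {
  public:
      explicit Mover(const MotionPlan& plan) : plan_(plan) {}
      void execute() const {
          for (const auto& d : plan_.dependences()) {
              std::cout << "copy patch " << d.src << " -> patch " << d.dst
                        << " over [" << d.region.lo[0] << ".." << d.region.hi[0]
                        << "] x [" << d.region.lo[1] << ".." << d.region.hi[1]
                        << "]\n";
          }
      }
  private:
      const MotionPlan& plan_;
  };

  int main() {
      // Illustrative use: fill a one-cell ghost strip of patch 1 from patch 0.
      MotionPlan plan;
      plan.copy(/*srcPatch=*/0, Region{{9, 0}, {9, 7}}, /*dstPatch=*/1);
      Mover(plan).execute();   // data motion happens only when executed
  }

Because the schedule is an ordinary mutable object, the application can rebuild or edit it as the underlying adaptive data structures change, which is the flexibility the abstract refers to.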

Keywords

Communication Pattern; Ghost Cell; Communication Schedule; Dynamic Data Structure; Structured Adaptive Mesh Refinement



Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Stephen J. Fink (1)
  • Scott B. Baden (1)
  • Scott R. Kohn (2)
  1. Department of Computer Science and Engineering, University of California, San Diego, La Jolla
  2. Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla
