
Parallel PDE-Based Simulations Using the Common Component Architecture

Conference paper in Numerical Solution of Partial Differential Equations on Parallel Computers

Summary

The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations.
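
To make the component model concrete, the following is a minimal C++ sketch of the lifecycle the CCA specification defines: a component implements a setServices method through which it declares the ports it provides and the ports it uses, and the framework, not the application, performs the wiring. The Port, Services, and Component types below are simplified stand-ins for the gov::cca interfaces, and the HeatSolver component and port names are hypothetical illustrations, not code from the chapter.

```cpp
#include <iostream>
#include <string>

// Simplified stand-ins for the gov::cca interfaces; the real
// specification defines richer signatures (port properties, exceptions).
struct Port {
  virtual ~Port() {}
};

struct Services {
  // A component announces a port it implements for others to call...
  virtual void addProvidesPort(Port* port, const std::string& name,
                               const std::string& type) = 0;
  // ...and declares a port it expects some other component to provide.
  virtual void registerUsesPort(const std::string& name,
                                const std::string& type) = 0;
  virtual ~Services() {}
};

struct Component {
  // The framework calls setServices once per instance; all coupling is
  // declared here instead of being hard-coded between the codes.
  virtual void setServices(Services* svc) = 0;
  virtual ~Component() {}
};

// Hypothetical port and component names, for illustration only.
struct SolverPort : Port {
  virtual void solve() = 0;
};

class HeatSolver : public Component, public SolverPort {
public:
  void setServices(Services* svc) override {
    svc->addProvidesPort(this, "solver", "SolverPort");
    svc->registerUsesPort("mesh", "MeshPort");  // wired by the framework
  }
  void solve() override { /* discretize and advance the PDE here */ }
};

// A toy framework that just records the wiring handshake.
struct ToyFramework : Services {
  void addProvidesPort(Port*, const std::string& name,
                       const std::string& type) override {
    std::cout << "provides " << name << " : " << type << '\n';
  }
  void registerUsesPort(const std::string& name,
                        const std::string& type) override {
    std::cout << "uses     " << name << " : " << type << '\n';
  }
};

int main() {
  ToyFramework framework;
  HeatSolver solver;
  solver.setServices(&framework);  // inversion of control: framework drives
}
```

Because each code declares its dependencies abstractly, independently developed components can be composed without either side naming the other.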

This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.
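
As an illustration of this reuse pattern, the sketch below shows a client obtaining a linear-solver port by name and calling it without knowing which toolkit implements it. The LinearSolverPort interface, the toy registry standing in for the framework's port-lookup mechanism, and the DiagonalSolver provider are all hypothetical; the point is that a PETSc- or Trilinos-backed provider could be wired in by the framework without changing the client side of the interface.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Abstract port: the only view a physics component gets of the solver.
struct LinearSolverPort {
  virtual std::vector<double> solve(const std::vector<double>& rhs) = 0;
  virtual ~LinearSolverPort() {}
};

// One possible provider (a toy solving A*x = rhs with A = 2I); a PETSc-
// or Trilinos-backed provider would implement the same port.
struct DiagonalSolver : LinearSolverPort {
  std::vector<double> solve(const std::vector<double>& rhs) override {
    std::vector<double> x(rhs.size());
    for (std::size_t i = 0; i < rhs.size(); ++i) x[i] = rhs[i] / 2.0;
    return x;
  }
};

// Toy registry standing in for the framework's port-lookup service.
std::map<std::string, LinearSolverPort*> registry;

int main() {
  DiagonalSolver backend;
  registry["linear-solver"] = &backend;  // done by the framework, not the app

  // Client side: look the port up by name and use it; swapping the
  // provider requires no change on this side of the interface.
  LinearSolverPort* solver = registry.at("linear-solver");
  for (double xi : solver->solve({2.0, 4.0, 6.0}))
    std::cout << xi << ' ';              // prints: 1 2 3
  std::cout << '\n';
}
```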

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

McInnes, L.C. et al. (2006). Parallel PDE-Based Simulations Using the Common Component Architecture. In: Bruaset, A.M., Tveito, A. (eds) Numerical Solution of Partial Differential Equations on Parallel Computers. Lecture Notes in Computational Science and Engineering, vol 51. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-31619-1_10
