Locality Improvement of Data-Parallel Adams–Bashforth Methods through Block-Based Pipelining of Time Steps

  • Matthias Korch
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7484)

Abstract

Adams–Bashforth methods are a well-known class of explicit linear multi-step methods for the solution of initial value problems of ordinary differential equations. This article discusses data-parallel implementation variants with different loop structures and communication patterns and compares their locality and scalability. In particular, pipelining of time steps is employed to improve the locality of memory references. The comparison is based on detailed runtime experiments performed on parallel computer systems with different architectures, including the two supercomputer systems JUROPA and HLRB II.

Keywords

Processing element · Loop structure · Cache line · Sequential implementation · Access distance

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Matthias Korch
  1. Applied Computer Science 2, University of Bayreuth, Germany