Performance Engineering of GemsFDTD Computational Electromagnetics Solver

  • Ulf Andersson
  • Brian J. N. Wylie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7133)

Abstract

Since modern high-performance computer systems consist of many hardware components and software layers, they present severe challenges for application developers, who are primarily domain scientists rather than experts in continually evolving hardware and system software. Effective performance-analysis tools are therefore decisive when developing performant, scalable parallel applications. Such tools must be convenient to employ in the application development process, and their analyses must be clear to interpret yet comprehensive in the level of detail provided. We describe how the Scalasca toolset was applied in engineering the GemsFDTD computational electromagnetics solver, and the dramatic performance and scalability gains thereby achieved.

Keywords

performance engineering, parallel execution tuning, scalability, MPI, computational electromagnetics

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ulf Andersson (1)
  • Brian J. N. Wylie (2)
  1. PDC Centre for High Performance Computing, KTH Royal Institute of Technology, Stockholm, Sweden
  2. Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany
