The ParaWise Expert Assistant – Widening Accessibility to Efficient and Scalable Tool Generated OpenMP Code

  • Stephen Johnson
  • Emyr Evans
  • Haoqiang Jin
  • Constantinos Ierotheou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3349)

Abstract

Despite the apparent simplicity of the OpenMP directive-based shared memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is still required to produce efficient parallel code. In a real-world application, the investigation of a single loop in generated parallel code can soon become an in-depth inspection of numerous dependencies across many routines. An understanding of those dependencies is also needed to interpret the information the tools provide and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that with the Expert Assistant, knowledge of dependence information and of OpenMP is no longer essential to produce efficient parallel code. It is hoped that this will enable a far wider audience to use the tools and subsequently exploit the benefits of large parallel systems.



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Stephen Johnson (1)
  • Emyr Evans (1)
  • Haoqiang Jin (2)
  • Constantinos Ierotheou (1)

  1. Parallel Processing Research Group, University of Greenwich, London, UK
  2. NAS Division, NASA Ames Research Center, Moffett Field, USA
