High Performance Fortran: Status and prospects

  • Piyush Mehrotra
  • John Van Rosendale
  • Hans Zima
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1541)


High Performance Fortran (HPF) is a data-parallel language designed to provide the user with a high-level interface for programming scientific applications, while delegating to the compiler the task of generating an explicitly parallel message-passing program. In this paper, we outline the developments that led to HPF, briefly explain its major features, and illustrate its use for irregular applications. The final part of the paper points out some classes of problems that are difficult to handle efficiently within the HPF paradigm.
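The division of labor described above can be illustrated with a minimal HPF fragment (a generic sketch, not code from the paper): the user writes ordinary array code plus mapping directives, and the compiler derives the processor-local loops and the message-passing communication.

```fortran
      REAL A(1000), B(1000)
!HPF$ PROCESSORS P(4)
!     Map A in contiguous blocks across the 4 abstract processors;
!     keep B element-wise aligned with A so corresponding elements
!     reside on the same processor.
!HPF$ DISTRIBUTE A(BLOCK) ONTO P
!HPF$ ALIGN B(I) WITH A(I)
!     A data-parallel update; the compiler inserts the boundary
!     exchanges needed for the B(I-1) and B(I+1) references.
      FORALL (I = 2:999) A(I) = 0.5 * (B(I-1) + B(I+1))
```

To a sequential Fortran compiler the directives are mere comments, which is what lets HPF programs remain portable between uniprocessors and distributed-memory machines.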







Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Piyush Mehrotra (1)
  • John Van Rosendale (1)
  • Hans Zima (2)
  1. ICASE, MS 132C, NASA Langley Research Center, Hampton, USA
  2. Institute for Software Technology and Parallel Systems, University of Vienna, Vienna, Austria
