
Automatic Parallelization of Sequential Programs

  • Chapter in Parallelrechner

Part of the book series: Leitfäden der Informatik (XLINF)


Abstract

This chapter deals with the parallelization of programs for massively parallel computing systems. These are architectures whose memory is distributed across the individual processors. Examples of such systems include Intel's iPSC/860 and Paragon, transputer arrays, the nCUBE, the CM-5 from Thinking Machines Corporation, and the Meiko CS-2 system. Massively parallel machines are gaining increasing acceptance among scientific users because they are relatively inexpensive and at the same time highly scalable, which makes it possible to scale them to the solution of very large application problems.
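The abstract's central idea, memory partitioned across processors, is what a parallelizing compiler must respect when it maps a global array onto local memories. A minimal sketch of one common scheme, a block distribution combined with the owner-computes rule, follows; the function names `block_bounds` and `parallel_scale` are illustrative and not taken from the chapter, and the "processors" are simulated by a loop rather than real message-passing processes.

```python
def block_bounds(n, p, rank):
    """Return the half-open index range [lo, hi) of the block of an
    n-element global array owned by processor `rank`, when the array
    is split over p processors as evenly as possible."""
    base, rem = divmod(n, p)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def parallel_scale(a, p, factor):
    """Simulate p processors applying the owner-computes rule: each
    processor updates only the array elements it owns. On a real
    distributed-memory machine, each rank's loop would run locally."""
    n = len(a)
    for rank in range(p):
        lo, hi = block_bounds(n, p, rank)
        for i in range(lo, hi):
            a[i] *= factor
    return a

print(block_bounds(10, 4, 0))            # (0, 3)
print(parallel_scale([1.0] * 10, 4, 2.0))
```

In this sketch no communication is needed because each processor touches only its own block; the harder compiler problems treated in this literature arise when a statement reads elements owned by another processor and messages must be generated.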




© 1995 B. G. Teubner Stuttgart


Cite this chapter

Zima, H.P., Chapman, B.M. (1995). Automatische Parallelisierung Sequentieller Programme. In: Waldschmidt, K. (ed.) Parallelrechner. Leitfäden der Informatik. Vieweg+Teubner Verlag. https://doi.org/10.1007/978-3-322-86771-1_14


  • DOI: https://doi.org/10.1007/978-3-322-86771-1_14

  • Publisher Name: Vieweg+Teubner Verlag

  • Print ISBN: 978-3-519-02135-3

  • Online ISBN: 978-3-322-86771-1

  • eBook Packages: Springer Book Archive
