Abstract
Dynamic program optimization is the only recourse for an optimizing compiler when the machine and program parameters needed to apply an optimization technique are unknown until runtime. With the move toward portable parallel programs, facilitated by language standards such as OpenMP, many of the optimizations developed for high-performance machines can no longer be applied prior to runtime without risking performance degradation. As an alternative, we propose dynamically adaptive programs: programs that adapt themselves to their runtime environment. We discuss the key issues in applying this approach successfully and show examples of its application. Experimental results are given for dynamically adaptive programs that eliminate redundant runtime data dependence tests, select the optimal tile size for tiled loops, and serialize loops that do not profit from parallelism.
This work was supported in part by U. S. Army contract DABT63-92-C-0033, NSF award ASC-9612133, and an NSF CAREER award. This work is not necessarily representative of the positions or policies of the U. S. Army or the Government.
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Voss, M., Eigenmann, R. (1999). Dynamically adaptive parallel programs. In: Polychronopoulos, C., Fukuda, K.J.A., Tomita, S. (eds) High Performance Computing. ISHPC 1999. Lecture Notes in Computer Science, vol 1615. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0094915
Print ISBN: 978-3-540-65969-3
Online ISBN: 978-3-540-48821-7