Parallelizing User-Defined and Implicit Reductions Globally on Multiprocessors

  • Shih-wei Liao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4186)

Abstract

Multiprocessors are becoming prevalent in the PC world: major CPU vendors such as Intel and Advanced Micro Devices have migrated to multicore processors. As a result, a computer runs an application at full speed only if that application is parallelized. To exploit more than a fraction of the compute resources on a die, we develop a compiler that parallelizes a common and powerful programming paradigm, namely reduction. Our goal is to exploit the full potential of reductions for efficient execution of applications on multiprocessors, including multicores. Reduction operations are common in streaming applications, financial computing, and the HPC domain; in fact, 9% of all MPI invocations in the NAS Parallel Benchmarks are reduction library calls. Recognizing implicit reductions in Fortran and C is important for parallelization on multiprocessors, and recent languages such as the Brook streaming language and Chapel allow users to specify reduction functions. Our compiler provides a unified framework for processing both implicit and user-defined reductions: both types are propagated and analyzed interprocedurally. Our global algorithm can enhance the scope of user-defined reductions and parallelize coarser-grained reductions. Thanks to the powerful algorithm and representation, we obtain an average speedup of 3 on 4 processors; the speedup is only 1.7 if only intraprocedural scalar reductions are parallelized.
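
To make the reduction paradigm concrete, here is a minimal C sketch of our own (not taken from the paper): a sequential dot product whose accumulation into sum is an implicit reduction that a recognizing compiler can parallelize. The transformed loop is expressed with OpenMP's reduction clause [13] purely for illustration of the effect of such a transformation.

```c
/* Illustrative sketch (not from the paper): an implicit sum
   reduction in C, and the parallel form a reduction-recognizing
   compiler can effectively produce, written here with OpenMP. */
#include <stdio.h>

#define N 1000000

/* Implicit reduction: every iteration accumulates into `sum`.
   Because addition is associative and commutative, iterations may
   run in parallel with private partial sums combined at the end. */
double dot(const double *a, const double *b, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];      /* the reduction statement */
    return sum;
}

/* The parallelized equivalent, expressed via OpenMP's
   reduction clause: each thread keeps a private copy of `sum`
   and the copies are combined with `+` after the loop. */
double dot_parallel(const double *a, const double *b, int n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

int main(void) {
    static double a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }
    printf("%f\n", dot_parallel(a, b, N));   /* prints 2000000.0 */
    return 0;
}
```

In this simple scalar case the pattern is local to one loop; the harder problem the paper addresses is recognizing such reductions when they are user-defined or span procedure boundaries, which requires interprocedural propagation and analysis.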

Keywords

Reduction · Multiprocessor · Multicore · Reduction recognition · Interprocedural analysis · Data flow analysis · Parallelization · Implicit reductions · User-defined reductions

References

  1. Buck, I.: Brook Language Specification (October 2003), http://merrimac.stanford.edu/brook
  2. Deitz, S., Callahan, D., Chamberlain, B., Snyder, L.: Global-View Abstractions for User-Defined Reductions and Scans. In: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, New York (March 2006)
  3. Hall, M., Amarasinghe, S., Murphy, B., Liao, S., Lam, M.: Detecting Coarse-Grain Parallelism Using an Interprocedural Parallelizing Compiler. In: Proceedings of Supercomputing, San Diego, CA (December 1995)
  4. Hall, M., Anderson, J., Amarasinghe, S., Murphy, B., Liao, S., Bugnion, E., Lam, M.S.: Maximizing Multiprocessor Performance with the SUIF Compiler. IEEE Computer 29(12) (December 1996)
  5. Bailey, D., Harris, T., Saphir, W., Van der Wijngaart, R., Woo, A., Yarrow, M.: The NAS Parallel Benchmarks 2.0. Technical Report RNR-95-020, NASA Ames Research Center, Moffett Field, CA (December 1995)
  6. Blelloch, G.E.: Vector Models for Data-Parallel Computing. MIT Press, Cambridge (1990)
  7. Intel Multi-Core and AMD Multi-Core Technology (June 2006), http://www.intel.com/multi-core/
  8. Iverson, K.: A Programming Language. John Wiley & Sons, Chichester (1962)
  9. Liao, S., Du, Z., Wu, G., Lueh, G.: Data and Computation Transformations for Brook Streaming Applications on Multiprocessors. In: IEEE/ACM International Symposium on Code Generation and Optimization, New York (March 2006)
  10. High Performance Fortran Forum: High Performance Fortran Specification Version 2.0 (January 1997)
  11. Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd edn. MIT Press, Cambridge (1999)
  12. Charles, P., Donawa, C., Ebcioglu, K., Grothoff, C., Kielstra, A., von Praun, C., Saraswat, V., Sarkar, V.: X10: An Object-oriented Approach to Non-uniform Cluster Computing. In: Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA) – Onward! Track (October 2005)
  13. Official OpenMP Specifications Version 2.5 (May 2005), http://www.openmp.org
  14. Fortress: A New Programming Language for Scientific Computing (2005), http://research.sun.com/projects/plrg/fortress0618.pdf
  15. Ammarguellat, Z., Harrison, W.: Automatic Recognition of Induction Variables and Recurrence Relations by Abstract Interpretation. In: Proceedings of the SIGPLAN 1990 Conference on Programming Language Design and Implementation, White Plains, NY (1990)
  16. Haghighat, M., Polychronopoulos, C.: Symbolic Analysis: A Basis for Parallelization, Optimization and Scheduling of Programs. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D.A. (eds.) LCPC 1993. LNCS, vol. 768. Springer, Heidelberg (1994)
  17. Haghighat, M., Polychronopoulos, C.: Symbolic Analysis for Parallelizing Compilers. ACM Transactions on Programming Languages and Systems 18(4) (July 1996)
  18. Pottenger, B., Eigenmann, R.: Parallelization in the Presence of Generalized Induction and Reduction Variables. In: Proceedings of the 1995 ACM International Conference on Supercomputing (June 1995)
  19. Pointer, L.: Perfect: Performance Evaluation for Cost Effective Transformations Report 2. Technical Report 964, University of Illinois, Urbana-Champaign (March 1990)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Shih-wei Liao
    1. Intel Corporation, Santa Clara, USA