
Language Extensions in Support of Compiler Parallelization

  • Jun Shirako
  • Hironori Kasahara
  • Vivek Sarkar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5234)

Abstract

In this paper, we propose an approach to automatic compiler parallelization, based on language extensions, that is applicable to a broader range of program structures and application domains than past work. As a complement to ongoing work on high-productivity languages for explicit parallelism, the basic idea is to make sequential languages more amenable to compiler parallelization by adding enforceable declarations and annotations. Specifically, we propose annotations and declarations related to multidimensional arrays, points, regions, array views, parameter intents, array and object privatization, pure methods, absence of exceptions, and gather/reduce computations. In many cases, these extensions are also motivated by best practices in software engineering, and can contribute to performance improvements in sequential code. A detailed case study of the Java Grande Forum benchmark suite illustrates the obstacles to compiler parallelization in current object-oriented languages, and shows that the proposed extensions can be effective in enabling compiler parallelization. These results motivate future work on building an automatically parallelizing compiler for the proposed language extensions.
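To make the flavor of these extensions concrete, the following is a minimal, hypothetical Java sketch of how such annotations might be written on a simple loop. The annotation names (@Pure, @In, @Private) and their semantics are illustrative assumptions for this sketch, not the paper's actual syntax.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Hypothetical annotations a parallelizing compiler could rely on
// (names and semantics are assumptions, not the paper's proposed syntax).
@Target(ElementType.METHOD)
@interface Pure {}            // method has no side effects and throws no exceptions

@Target(ElementType.PARAMETER)
@interface In {}              // read-only ("in" intent) parameter

@Target(ElementType.LOCAL_VARIABLE)
@interface Private {}         // variable is private to each loop iteration

public class StencilExample {

    // A pure method: calls may be reordered or executed in parallel.
    @Pure
    static double weight(@In double[] coeffs, int i) {
        return coeffs[i % coeffs.length];
    }

    // With in[] declared read-only, tmp privatized, and a distinct element of
    // out[] written per iteration, the loop has no loop-carried dependences
    // and is a candidate for automatic parallelization.
    static void smooth(@In double[] in, double[] out, @In double[] coeffs) {
        for (int i = 1; i < in.length - 1; i++) {
            @Private double[] tmp = { in[i - 1], in[i], in[i + 1] };
            out[i] = (tmp[0] + tmp[1] + tmp[2]) * weight(coeffs, i);
        }
    }

    public static void main(String[] args) {
        double[] in = new double[16], out = new double[16], coeffs = { 1.0 / 3.0 };
        for (int i = 0; i < in.length; i++) {
            in[i] = i;
        }
        smooth(in, out, coeffs);
        System.out.println(java.util.Arrays.toString(out));
    }
}
```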

Keywords

Detailed Case Study, Parallel Loop, Language Extension, Multidimensional Array, Automatic Parallelization



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jun Shirako (1, 2)
  • Hironori Kasahara (1, 3)
  • Vivek Sarkar (4)
  1. Dept. of Computer Science, Waseda University
  2. Japan Society for the Promotion of Science, Research Fellow
  3. Advanced Chip Multiprocessor Research Institute, Waseda University
  4. Department of Computer Science, Rice University
