
Run-Time Support for the Automatic Parallelization of Java Programs


Abstract

We describe and evaluate a novel approach to the automatic parallelization of Java programs that use pointer-based dynamic data structures. The approach exploits parallelism among methods by creating an asynchronous thread of execution for each method invocation in a program. At compile time, methods are analyzed to determine the data they access, parameterized by their calling context. A description of these data accesses is transmitted to a run-time system during program execution. The run-time system uses this description to determine when a thread may execute and to enforce dependences among threads. This run-time system is the main focus of the paper: specifically, we detail the representation of data accesses in a method and the framework the run-time system uses to detect and enforce dependences among threads. Experimental evaluation of an implementation of the run-time system on a four-processor Sun multiprocessor shows that close to ideal speedup can be obtained for a number of benchmarks, which validates our approach.
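The core of such a run-time system is a dependence check between the declared data accesses of concurrent method invocations. The Java sketch below illustrates the idea only; it is not the implementation evaluated in the paper, and AccessRegistry, Invocation, register, and spawn are hypothetical names. Each asynchronous invocation registers the sets of objects it will read and write, and its thread blocks until every earlier invocation with a conflicting access has completed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;

/**
 * Illustrative sketch of a dependence-enforcing run-time system
 * (hypothetical names; not the system evaluated in the paper).
 */
class AccessRegistry {

    /** One asynchronous method invocation and its declared data accesses. */
    static final class Invocation {
        final Set<Object> reads;
        final Set<Object> writes;
        final CountDownLatch done = new CountDownLatch(1);
        final List<Invocation> waitFor = new ArrayList<>();

        Invocation(Set<Object> reads, Set<Object> writes) {
            this.reads = reads;
            this.writes = writes;
        }

        /** A dependence exists on any write-write, write-read, or read-write overlap. */
        boolean conflictsWith(Invocation other) {
            return overlaps(writes, other.writes)
                || overlaps(writes, other.reads)
                || overlaps(reads, other.writes);
        }

        private static boolean overlaps(Set<Object> a, Set<Object> b) {
            for (Object o : a) {
                if (b.contains(o)) return true;
            }
            return false;
        }
    }

    private final List<Invocation> active = new ArrayList<>();

    /**
     * Record a new invocation's read and write sets and remember every
     * earlier, still-active invocation it must wait for.
     */
    synchronized Invocation register(Set<Object> reads, Set<Object> writes) {
        Invocation inv = new Invocation(reads, writes);
        for (Invocation earlier : active) {
            if (inv.conflictsWith(earlier)) {
                inv.waitFor.add(earlier);
            }
        }
        active.add(inv);
        return inv;
    }

    /** Run the method body on its own thread once all dependences are resolved. */
    void spawn(Invocation inv, Runnable body) {
        new Thread(() -> {
            try {
                for (Invocation dep : inv.waitFor) {
                    dep.done.await();   // block until the conflicting invocation finishes
                }
                body.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                inv.done.countDown(); // release threads waiting on this invocation
                synchronized (this) { active.remove(inv); }
            }
        }).start();
    }
}
```

In the paper's system the read and write sets are produced by the compile-time analysis, parameterized by calling context; in this sketch a caller would supply them by hand, e.g. `registry.spawn(registry.register(Set.of(src), Set.of(dst)), () -> dst.addAll(src))` for hypothetical collections `src` and `dst`.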



Cite this article

Chan, B., Abdelrahman, T.S. Run-Time Support for the Automatic Parallelization of Java Programs. The Journal of Supercomputing 28, 91–117 (2004). https://doi.org/10.1023/B:SUPE.0000014804.20789.21
