
Optimizing Chip Multiprocessor Work Distribution Using Dynamic Compilation

  • Jisheng Zhao
  • Matthew Horsnell
  • Ian Rogers
  • Andrew Dinn
  • Chris Kirkham
  • Ian Watson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4641)

Abstract

How can sequential applications benefit from the ubiquitous next generation of chip multiprocessors (CMPs)? Part of the answer may be a dynamic execution environment that automatically parallelizes programs and adaptively tunes the work distribution. Experiments using the Jamaica CMP show how a runtime environment is capable of parallelizing standard benchmarks and achieving performance improvements over traditional work distributions.
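
To make the idea concrete, the sketch below shows one way a runtime could adaptively tune the distribution of loop iterations across cores: it runs the same parallel loop under several candidate chunk sizes, times each, and keeps the fastest. This is a minimal illustration under stated assumptions only; the class name, the candidate chunk sizes, and the timing-based search are invented for exposition and are not the Jamaica runtime's actual mechanism.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative sketch only: feedback-directed tuning of loop work
    // distribution. All names and constants are assumptions, not the
    // paper's implementation.
    public class AdaptiveChunking {

        // Candidate chunk sizes to evaluate with runtime feedback.
        private static final int[] CANDIDATE_CHUNKS = {64, 256, 1024, 4096};

        public static void main(String[] args) throws InterruptedException {
            final int n = 1 << 20;
            final double[] a = new double[n];
            final double[] b = new double[n];
            for (int i = 0; i < n; i++) b[i] = i * 0.5;

            int threads = Runtime.getRuntime().availableProcessors();
            long bestTime = Long.MAX_VALUE;
            int bestChunk = CANDIDATE_CHUNKS[0];

            // Feedback loop: time each work distribution and keep the fastest.
            for (int chunk : CANDIDATE_CHUNKS) {
                long t0 = System.nanoTime();
                runParallelLoop(a, b, chunk, threads);
                long elapsed = System.nanoTime() - t0;
                if (elapsed < bestTime) {
                    bestTime = elapsed;
                    bestChunk = chunk;
                }
            }
            System.out.printf("best chunk size: %d (%.2f ms)%n",
                    bestChunk, bestTime / 1e6);
        }

        // Divide the iteration space into fixed-size chunks that worker
        // threads claim dynamically from a shared counter.
        private static void runParallelLoop(double[] a, double[] b,
                                            int chunk, int threads)
                throws InterruptedException {
            final AtomicInteger next = new AtomicInteger(0);
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.execute(() -> {
                    int start;
                    while ((start = next.getAndAdd(chunk)) < a.length) {
                        int end = Math.min(start + chunk, a.length);
                        for (int i = start; i < end; i++) {
                            a[i] = Math.sqrt(b[i]) * 2.0;   // stand-in loop body
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }

A real system would of course re-tune as the workload or core availability changes rather than searching once up front; the fixed candidate list here simply keeps the example short.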

Keywords

Automatic parallelization · Feedback-directed optimization · Dynamic execution



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Jisheng Zhao¹
  • Matthew Horsnell¹
  • Ian Rogers¹
  • Andrew Dinn¹
  • Chris Kirkham¹
  • Ian Watson¹
  1. University of Manchester, UK
