The MICROBE Benchmarking Toolkit for Java: a Component-Based Approach

  • Dawid Kurzyniec
  • Vaidy Sunderam
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2330)


Java technology has recently been receiving increasing attention as a platform for high performance and large scale scientific computing. The MICROBE benchmarking toolkit is being developed to assist in the accurate evaluation of Java platforms. MICROBE is based on the tenet that benchmarking suites, in addition to furnishing benchmark codes, should provide flexibility in customizing algorithms, instruments, and data interpretation to facilitate more thorough evaluation of virtual environments’ performance. The MICROBE architecture, projected usage scenarios, and preliminary experiences are presented in this paper.
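The component-based idea in the abstract — decoupling the benchmark kernel from the measurement instrument and from the interpretation of the collected data — can be illustrated with a small sketch. The interfaces and names below are purely illustrative assumptions, not MICROBE's actual API:

```java
// Illustrative sketch only: these interfaces are NOT MICROBE's real API,
// just one way to separate kernel, instrument, and data interpretation
// so that each can be swapped independently.
import java.util.ArrayList;
import java.util.List;

interface Kernel {              // the code being measured
    void run();
}

interface Instrument {          // pluggable measurement strategy
    long measure(Kernel k);     // returns one raw reading, e.g. nanoseconds
}

interface Interpreter {         // pluggable interpretation of raw readings
    double summarize(List<Long> readings);
}

public class MicrobeSketch {
    // Collect 'reps' raw readings of kernel k under instrument in.
    static List<Long> benchmark(Kernel k, Instrument in, int reps) {
        List<Long> readings = new ArrayList<>();
        for (int i = 0; i < reps; i++) {
            readings.add(in.measure(k));
        }
        return readings;
    }

    public static void main(String[] args) {
        // Example kernel: object creation, one of the operations such
        // microbenchmark suites commonly measure.
        Kernel objectCreation = () -> {
            for (int i = 0; i < 10_000; i++) new Object();
        };
        // Example instrument: wall-clock timing via System.nanoTime().
        Instrument wallClock = k -> {
            long t0 = System.nanoTime();
            k.run();
            return System.nanoTime() - t0;
        };
        // Example interpretation: take the minimum reading, a common way
        // to discount JIT warm-up noise.
        Interpreter best = rs -> rs.stream().mapToLong(Long::longValue).min().orElse(0L);

        double ns = best.summarize(benchmark(objectCreation, wallClock, 5));
        System.out.println("best run (ns): " + (long) ns);
    }
}
```

Because each of the three roles is an interface, a different instrument (say, a hardware counter) or a different interpreter (say, a mean with confidence intervals) can be substituted without touching the benchmark code itself — which is the flexibility the abstract argues for.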


Keywords: Virtual Machine, Benchmark Suite, Java Virtual Machine, Benchmark Algorithm, Object Creation (machine-generated, not supplied by the authors)



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Dawid Kurzyniec (1)
  • Vaidy Sunderam (1)

  1. Department of Math and Computer Science, Emory University, Atlanta, USA
