PCJ - New Approach for Parallel Computations in Java

  • Marek Nowicki
  • Piotr Bała
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7782)


In this paper we present PCJ, a new library for parallel computations in Java. The PCJ library implements the partitioned global address space (PGAS) approach. It hides communication details, which makes it easy to use and allows for fast development of parallel programs. With PCJ, the user can focus on implementing the algorithm rather than on thread or network programming. We describe the design details, together with usage examples for the basic operations. We also evaluate the performance of PCJ communication on state-of-the-art hardware, such as a cluster with a gigabit interconnect. The results show good performance and scalability compared to native MPI implementations.
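The PGAS idea the abstract describes can be pictured in plain Java. The sketch below is not the PCJ API; it is a minimal, hypothetical illustration in which every task owns one partition of a shared space, a `put`/`get` pair stands in for the communication the library hides, and `join` plays the role of a barrier.

```java
// Illustrative sketch only -- NOT the PCJ API. It shows the PGAS idea:
// each task owns a partition of a global space and publishes results
// there; other tasks read remote partitions after a barrier.
public class PgasSketch {
    static final int TASKS = 4;
    // The "global address space": one partition (slot) per task.
    static final long[] space = new long[TASKS];

    // put()/get() stand in for the communication the library would hide.
    static synchronized void put(int owner, long value) { space[owner] = value; }
    static synchronized long get(int owner) { return space[owner]; }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[TASKS];
        for (int id = 0; id < TASKS; id++) {
            final int myId = id;
            workers[id] = new Thread(() -> {
                // Each task computes locally and writes the result
                // into its own partition of the global space.
                put(myId, (long) myId * myId);
            });
            workers[id].start();
        }
        for (Thread w : workers) w.join();  // acts as a barrier here

        // After the barrier, task 0 reads every partition.
        long sum = 0;
        for (int owner = 0; owner < TASKS; owner++) sum += get(owner);
        System.out.println(sum);  // 0 + 1 + 4 + 9 = 14
    }
}
```

The point of the sketch is the division of labour: the algorithm (compute a square, sum the partitions) is ordinary sequential code, while ownership of partitions and the barrier are the only concurrency-aware parts, which is the programming model the paper attributes to PCJ.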


Keywords: Parallel Computation · Java Virtual Machine · Remote Method Invocation · Storage Class · Partitioned Global Address Space




References

  1. Java Platform, Standard Edition 6, Features and Enhancements,
  2. Carpenter, B., Getov, V., Judd, G., Skjellum, A., Fox, G.: MPJ: MPI-like Message Passing for Java. Concurrency: Practice and Experience 12(11) (September 2000)
  3. Boner, J., Kuleshov, E.: Clustering the Java virtual machine using aspect-oriented programming. In: AOSD 2007: Proceedings of the 6th International Conference on Aspect-Oriented Software Development (2007)
  4. Nester, C., Philippsen, M., Haumacher, B.: A more efficient RMI for Java. In: Proceedings of the ACM 1999 Conference on Java Grande (JAVA 1999), pp. 152–159. ACM, New York (1999)
  5. Mallón, D.A., Taboada, G.L., Teijeiro, C., Touriño, J., Fraguela, B.B., Gómez, A., Doallo, R., Mouriño, J.C.: Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures. In: Ropo, M., Westerholm, J., Dongarra, J. (eds.) EuroPVM/MPI 2009. LNCS, vol. 5759, pp. 174–184. Springer, Heidelberg (2009)
  6. Numrich, R.W., Reid, J.: Co-array Fortran for parallel programming. ACM SIGPLAN Fortran Forum 17(2), 1–31 (1998)
  7. Carlson, W., Draper, J., Culler, D., Yelick, K., Brooks, E., Warren, K.: Introduction to UPC and Language Specification. IDA Center for Computing (1999)
  8. Yelick, K.A., Semenzato, L., Pike, G., Miyamoto, C., Liblit, B., Krishnamurthy, A., Hilfinger, P.N., Graham, S.L., Gay, D., Colella, P., Aiken, A.: Titanium: A High-Performance Java Dialect. Concurrency: Practice and Experience 10(11–13) (September–November 1998)
  9. The PCJ library is available from the authors upon request
  10. Java Grande Project: benchmark suite,
  11. Nowicki, M., Bała, P.: Parallel computations in Java with PCJ library. In: Smari, W.W., Zeljkovic, V. (eds.) 2012 International Conference on High Performance Computing and Simulation (HPCS), pp. 381–387. IEEE (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Marek Nowicki (1, 2)
  • Piotr Bała (1, 2)
  1. Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Toruń, Poland
  2. Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, Warsaw, Poland
