SCE Toolboxes for the Development of High-Level Parallel Applications

  • J. Fernández
  • M. Anguita
  • E. Ros
  • J. L. Bernier
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3992)


Users of Scientific Computing Environments (SCEs) benefit from faster high-level software development at the cost of longer run times due to the interpreted environment. For time-consuming SCE applications, dividing the workload among several computers can be a cost-effective acceleration technique. Using our PVM and MPI toolboxes, Matlab® and Octave users in a computer cluster can parallelize their interpreted applications with the native cluster programming paradigm: message passing. Our toolboxes are complete interfaces to the corresponding libraries, support all compatible datatypes in the base SCE, and have been designed with performance and maintainability in mind. Although this paper focuses on our new toolbox, MPITB for Octave, we describe the general design of these toolboxes and of the development aids offered to end users, survey related work, summarize speedup results obtained by some of our users, and present speedup results for the NPB-EP benchmark with MPITB in both SCEs.
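As a rough illustration of the programming model described above, the sketch below shows how an Octave session using MPITB might divide an embarrassingly parallel workload (in the spirit of the NPB-EP benchmark) across cluster nodes. It assumes, as the abstract suggests, that MPITB exposes Octave wrappers mirroring the C MPI API (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize); the exact signatures, tags, and sample counts here are illustrative, not taken from the paper.

```matlab
% Hypothetical MPITB sketch: each Octave instance computes a local
% Monte Carlo count, and rank 0 collects the partial results via
% point-to-point messages.  Wrapper signatures are assumed to mirror
% the C MPI API, as MPITB's design intends; details may differ.
MPI_Init;
[info, rank]  = MPI_Comm_rank (MPI_COMM_WORLD);
[info, nproc] = MPI_Comm_size (MPI_COMM_WORLD);

N = 1e6;                                       % samples per process (illustrative)
hits = sum (sum (rand (N, 2) .^ 2, 2) <= 1);   % local count inside the unit circle

TAG = 7;                                       % arbitrary message tag
if (rank != 0)
  MPI_Send (hits, 0, TAG, MPI_COMM_WORLD);     % send partial count to rank 0
else
  total = hits;
  for src = 1:nproc-1
    [info, stat] = MPI_Recv (hits, src, TAG, MPI_COMM_WORLD);  % receive in place
    total += hits;
  endfor
  printf ("pi approx %g\n", 4 * total / (N * nproc));
endif
MPI_Finalize;
```

In such a setup the same script would be started on every node (e.g. under LAM/MPI's lamboot/mpirun, which the toolbox targets), so the interpreted application is parallelized without leaving the SCE.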




References

  1. Eaton, J.W.: GNU Octave Manual. Network Theory Ltd (2002) ISBN 0-9541617-2-6
  2. Moler, C.B.: Numerical Computing with Matlab. SIAM, Philadelphia (2004) ISBN 0-89871-560-1
  3. Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R., Sunderam, V.: PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. The MIT Press, Cambridge (1994) ISBN 0-262-57108-0
  4. MPI Forum: MPI: A Message-Passing Interface Standard. Int. J. Supercomput. Appl. High Perform. Comput. 8(3/4), 159–416 (1994); see also the MPI Forum documents: MPI 2.0 standard (2003), University of Tennessee, Knoxville
  5. Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message Passing Interface, 2nd edn. The MIT Press, Cambridge (1999) ISBN 0-262-57132-3
  6. Burns, G., Daoud, R., Vaigl, J.: LAM: An Open Cluster Environment for MPI. In: Proceedings of the Supercomputing Symposium, pp. 379–386 (1994)
  7. Squyres, J., Lumsdaine, A.: A Component Architecture for LAM/MPI. In: Dongarra, J., Laforenza, D., Orlando, S. (eds.) EuroPVM/MPI 2003. LNCS, vol. 2840, pp. 379–387. Springer, Heidelberg (2003)
  8. Fernández, J., Cañas, A., Díaz, A.F., González, J., Ortega, J., Prieto, A.: Performance of Message-Passing Matlab Toolboxes. In: Palma, J.M.L.M., Sousa, A.A., Dongarra, J., Hernández, V. (eds.) VECPAR 2002. LNCS, vol. 2565, pp. 228–241. Springer, Heidelberg (2003)
  9. Bailey, D., et al.: The NAS Parallel Benchmarks. RNR Technical Report RNR-94-007 (1994)
  10. Bailey, D., et al.: The NAS Parallel Benchmarks 2.0. Report NAS-95-020 (1995)
  11. Buss, B.J.: Comparison of Serial and Parallel Implementations of Benchmark Codes in MATLAB, Octave and FORTRAN. M.Sc. Thesis, Ohio State University (2005)
  12. Dormido, C.S., de Madrid, A.P., Dormido, B.S.: Parallel Dynamic Programming on Clusters of Workstations. IEEE Transactions on Parallel and Distributed Systems 16(9), 785–798 (2005)
  13. Goasguen, S., Venugopal, R., Lundstrom, M.S.: Modeling Transport in Nanoscale Silicon and Molecular Devices on Parallel Machines. In: Proceedings of the 3rd IEEE Conference on Nanotechnology, vol. 1, pp. 398–401 (2003), DOI 10.1109/NANO.2003.1231802
  14. Goasguen, S., Butt, A.R., Colby, K.D., Lundstrom, M.S.: Parallelization of the Nano-scale Device Simulator nanoMOS-2.0 Using a 100-Node Linux Cluster. In: Proceedings of the 2nd IEEE Conference on Nanotechnology, pp. 409–412 (2002), DOI 10.1109/NANO.2002.1032277
  15. Zhao, M., Chadha, V., Figueiredo, R.J.: Supporting Application-Tailored Grid File System Sessions with WSRF-Based Services. In: Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing (HPDC-14), pp. 24–33 (2005), DOI 10.1109/HPDC.2005.1520930
  16. Creel, M.: User-Friendly Parallel Computations with Econometric Examples. Computational Economics 26(2), 107–128 (2005), DOI 10.1007/s10614-005-6868-2
  17. Creel, M.: Parallel-Knoppix Linux
  18. Creel, M.: Econometrics Octave package at OctaveForge
  19. Law, M.: Matlab Laboratory for MPITB (2003), coursework resource for MATH-2160, HKBU; see also Guest Lecture on Cluster Computing (2005), coursework resource for COMP-3320
  20. Wang, C.L.: Grid Computing Research in Hong Kong. In: 1st Workshop on Grid Technologies & Applications (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • J. Fernández
  • M. Anguita
  • E. Ros
  • J. L. Bernier

  Departamento de Arquitectura y Tecnología de Computadores, Universidad de Granada, Granada, Spain
