Bulk Synchronous Parallel ML: Semantics and Implementation of the Parallel Juxtaposition

  • F. Loulergue
  • R. Benheddi
  • F. Gava
  • D. Louis-Régis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3967)


The design of parallel programs and parallel programming languages is a trade-off. On the one hand, programs should be efficient; on the other, this efficiency should not come at the price of non-portability and unpredictable performance. Portability is needed so that code can be reused on a wide variety of architectures and so that legacy code remains usable. Predictability of performance is needed to guarantee that efficiency is actually achieved, whatever architecture is used.
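BSML expresses such portable, predictable programs through a small set of primitives over parallel vectors. As a purely illustrative sketch (not the authors' implementation), the following OCaml fragment simulates the two basic BSML primitives `mkpar` and `apply` sequentially; the names and types follow the published BSML interface, while the array-based model and the width `p = 4` are assumptions made here for the sake of a self-contained example.

```ocaml
(* Sequential toy model of BSML parallel vectors: a vector of width p
   is represented as a plain array, one cell per simulated processor.
   Communication and synchronization costs are ignored. *)

let p = 4  (* simulated number of processors (assumption) *)

(* mkpar f builds the parallel vector <f 0, ..., f (p-1)> *)
let mkpar (f : int -> 'a) : 'a array = Array.init p f

(* apply applies a vector of functions pointwise to a vector of values *)
let apply (fs : ('a -> 'b) array) (xs : 'a array) : 'b array =
  Array.init p (fun i -> fs.(i) xs.(i))

let () =
  let ids = mkpar (fun i -> i) in                       (* <0, 1, 2, 3> *)
  let doubled = apply (mkpar (fun _ x -> 2 * x)) ids in (* <0, 2, 4, 6> *)
  Array.iter (fun x -> Printf.printf "%d " x) doubled;
  print_newline ()
```

In real BSML the parallel-vector type is abstract and each cell lives on a distinct processor; the parallel juxtaposition studied in this paper additionally lets two such programs run side by side on disjoint subsets of the machine.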


Keywords: Parallel Machine, Local Reduction, Parallel Vector, Global Synchronization, Bulk Synchronous Parallel. (These keywords were generated automatically, not provided by the authors.)





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • F. Loulergue (1)
  • R. Benheddi (1)
  • F. Gava (2)
  • D. Louis-Régis (1)
  1. Laboratoire d’Informatique Fondamentale d’Orléans, Université d’Orléans, France
  2. Laboratory of Algorithms, Complexity and Logic, University Paris XII, France
