Algorithmica, Volume 57, Issue 4, pp. 848–868

Broadcasting on Networks of Workstations

  • Samir Khuller
  • Yoo-Ah Kim (corresponding author)
  • Yung-Chun Justin Wan
Article

Abstract

Broadcasting and multicasting are fundamental operations. In this work we develop algorithms for performing broadcast and multicast in clusters of workstations. In this model, sending a message to a machine in the same cluster takes 1 time unit, and sending a message to a machine in a different cluster takes C (≥ 1) time units. The clusters may have arbitrary sizes. Lowekamp and Beguelin proposed heuristics for this model, but their algorithms may produce broadcast times that are arbitrarily worse than optimal. We develop the first constant-factor approximation algorithms for this model. Algorithm LCF (Largest Cluster First) for the basic model is simple, efficient, and has a worst-case approximation guarantee of 2. We then extend this model to more complex ones in which we remove the assumption that an unbounded amount of communication may happen over the global network. The algorithms for these models build on the LCF method developed for the basic problem. Finally, we develop broadcasting algorithms for the postal model, where the sending node does not block for C time units while the message is in transit.
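The model and the largest-cluster-first ordering can be illustrated with a short sketch. The function below (a hypothetical helper, not the paper's full algorithm) estimates completion time under a deliberately simplified schedule: the source alone sends the message to one representative of each other cluster, largest cluster first, and each cluster then finishes with a binomial-tree (doubling) broadcast internally. The paper's actual LCF algorithm pipelines cross-cluster sends among all informed machines; this sketch only conveys why the ordering matters.

```python
import math


def lcf_broadcast_time(cluster_sizes, C, source_cluster=0):
    """Estimate broadcast completion time under a simplified
    Largest-Cluster-First schedule (illustrative sketch only).

    cluster_sizes  -- list of cluster sizes; the source machine lives in
                      cluster_sizes[source_cluster]
    C              -- cost of one inter-cluster message (C >= 1);
                      intra-cluster sends cost 1 time unit
    """
    def local_time(k):
        # doubling broadcast inside a cluster of k machines:
        # the number of informed machines doubles each time unit
        return math.ceil(math.log2(k)) if k > 1 else 0

    # clusters other than the source's, largest first (the LCF order)
    others = sorted(
        (s for i, s in enumerate(cluster_sizes) if i != source_cluster),
        reverse=True,
    )

    # the source's cluster runs its local broadcast in parallel
    finish = local_time(cluster_sizes[source_cluster])
    for i, size in enumerate(others, start=1):
        # in this simplified sketch the source sends sequentially, so the
        # i-th cross-cluster message arrives at time i * C; the receiving
        # cluster then broadcasts locally
        finish = max(finish, i * C + local_time(size))
    return finish
```

Serving the largest clusters first is what drives the approximation guarantee: the biggest clusters need the most local broadcast time, so delaying them would dominate the makespan.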

Keywords

Message Passing Interface · Multicast Group · Postal Model · Collective Communication · Broadcasting Algorithm
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Bar-Noy, A., Kipnis, S.: Designing broadcast algorithms in the postal model for message-passing systems. Math. Syst. Theory 27(5) (1994)
  2. Bar-Noy, A., Guha, S., Naor, J.S., Schieber, B.: Message multicasting in heterogeneous networks. SIAM J. Comput. 30(2), 347–358 (2001)
  3. Bhat, P.B., Raghavendra, C.S., Prasanna, V.K.: Efficient collective communication in distributed heterogeneous systems. J. Parallel Distrib. Comput. 63(3), 251–263 (2003)
  4. Bruck, J., Dolev, D., Ho, C., Rosu, M., Strong, R.: Efficient message passing interface (MPI) for parallel computing on clusters of workstations. J. Parallel Distrib. Comput. 40, 19–34 (1997)
  5. Culler, D.E., Karp, R.M., Patterson, D.A., Sahay, A., Schauser, K.E., Santos, E., Subramonian, R., von Eicken, T.: LogP: towards a realistic model of parallel computation. In: Proceedings of the 4th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 1–12 (1993)
  6. Elkin, M., Kortsarz, G.: A combinatorial logarithmic approximation algorithm for the directed telephone broadcast problem. SIAM J. Comput. 35(3), 672–689 (2005)
  7. Elkin, M., Kortsarz, G.: Sublogarithmic approximation for telephone multicast. J. Comput. Syst. Sci. 72(4), 648–659 (2006)
  8. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Mateo (1998)
  9. Gropp, W., Lusk, E., Doss, N., Skjellum, A.: A high-performance, portable implementation of the MPI message passing interface standard. Parallel Comput. 22, 789–828 (1996)
  10. Hedetniemi, S.M., Hedetniemi, S.T., Liestman, A.L.: A survey of broadcasting and gossiping in communication networks. Networks 18, 319–349 (1988)
  11. Husbands, P., Hoe, J.C.: MPI-StarT: delivering network performance to numerical applications. In: Supercomputing '98: Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, pp. 1–15. IEEE Computer Society, Los Alamitos (1998)
  12. Karp, R., Sahay, A., Santos, E., Schauser, K.E.: Optimal broadcast and summation in the LogP model. In: Proceedings of the 5th Annual Symposium on Parallel Algorithms and Architectures, pp. 142–153 (1993)
  13. Kielmann, T., Bal, H., Gorlatch, S.: Bandwidth-efficient collective communication for clustered wide area systems. In: International Parallel and Distributed Processing Symposium, pp. 492–499. IEEE Computer Society, Los Alamitos (2000)
  14. Kielmann, T., Hofman, R.F.H., Bal, H.E., Plaat, A., Bhoedjang, R.A.F.: MagPIe: MPI's collective communication operations for clustered wide area systems. In: ACM SIGPLAN Notices, pp. 131–140 (1999)
  15. Lowekamp, B.B., Beguelin, A.: ECO: efficient collective operations for communication on heterogeneous networks. In: International Parallel Processing Symposium (IPPS), pp. 399–405. Honolulu, HI (1996)
  16. Message Passing Interface Forum. http://www.mpi-forum.org/index.html
  17. Patterson, D.A., Culler, D.E., Anderson, T.E.: A case for NOWs (networks of workstations). IEEE Micro 15(1), 54–64 (1995)
  18. Pruyne, J., Livny, M.: Interfacing Condor and PVM to harness the cycles of workstation clusters. J. Future Gener. Comput. Syst. 12(1), 53–65 (1996)
  19. Richards, D., Liestman, A.L.: Generalization of broadcasting and gossiping. Networks 18, 125–138 (1988)
  20. Williams, T., Parsons, R.: Exploiting hierarchy in heterogeneous environments. In: IEEE/ACM IPDPS 2001, pp. 140–147. IEEE Press, New York (2001)

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Samir Khuller (1)
  • Yoo-Ah Kim (2, corresponding author)
  • Yung-Chun Justin Wan (1)
  1. Department of Computer Science, University of Maryland, College Park, USA
  2. Department of Computer Science and Engineering, University of Connecticut, Storrs, USA
