Cluster Computing, Volume 13, Issue 3, pp 243–256

Harnessing parallelism in multicore clusters with the All-Pairs, Wavefront, and Makeflow abstractions

  • Li Yu
  • Christopher Moretti
  • Andrew Thrasher
  • Scott Emrich
  • Kenneth Judd
  • Douglas Thain

Abstract

Both distributed systems and multicore systems are difficult programming environments. Although the expert programmer may be able to carefully tune these systems to achieve high performance, the non-expert may struggle. We argue that high level abstractions are an effective way of making parallel computing accessible to the non-expert. An abstraction is a regularly structured framework into which a user may plug in simple sequential programs to create very large parallel programs. By virtue of a regular structure and declarative specification, abstractions may be materialized on distributed, multicore, and distributed multicore systems with robust performance across a wide range of problem sizes. In previous work, we presented the All-Pairs abstraction for computing on distributed systems of single CPUs. In this paper, we extend All-Pairs to multicore systems, and introduce the Wavefront and Makeflow abstractions, which represent a number of problems in economics and bioinformatics. We demonstrate good scaling of these abstractions up to 32 cores on one machine and hundreds of cores in a distributed system.
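To make the notion of an abstraction concrete, the sketch below gives the sequential semantics of the All-Pairs and Wavefront patterns. It is illustrative only, not the paper's distributed multicore implementation; the function F, the boundary helper, and the toy similarity example are placeholders standing in for any user-supplied sequential program.

```python
# Illustrative sketch only: the sequential semantics of two of the
# abstractions, not the authors' distributed implementation.
# F is a placeholder for any user-supplied sequential program.

def all_pairs(A, B, F):
    """All-Pairs: apply F to every element of A paired with every
    element of B, producing a |A| x |B| result matrix."""
    return {(a, b): F(a, b) for a in A for b in B}

def wavefront(n, F, boundary):
    """Wavefront: fill an (n+1) x (n+1) grid in which each interior
    cell depends on its west, south, and southwest neighbors, as in
    many dynamic-programming recurrences."""
    W = {}
    for k in range(n + 1):          # seed the first row and column
        W[(k, 0)] = boundary(k, 0)
        W[(0, k)] = boundary(0, k)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            W[(i, j)] = F(W[(i - 1, j)], W[(i, j - 1)], W[(i - 1, j - 1)])
    return W

# Example: All-Pairs with a toy position-wise similarity function.
scores = all_pairs(["ACGT", "AAAA"], ["ACGA", "TTTT"],
                   lambda a, b: sum(x == y for x, y in zip(a, b)))

# Makeflow, by contrast, expresses an arbitrary DAG of tasks as
# Make-style rules (outputs, inputs, and a command per rule), so it
# has no single recurrence to sketch here.
```

The structure visible in this sketch is what the abstractions exploit: All-Pairs cells are fully independent, while Wavefront cells become ready one anti-diagonal at a time, and a runtime can use either property to schedule work across cores and machines.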

Keywords

Abstractions · Multicore · Distributed systems · Bioinformatics · Economics

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Li Yu (1)
  • Christopher Moretti (1)
  • Andrew Thrasher (1)
  • Scott Emrich (1)
  • Kenneth Judd (2)
  • Douglas Thain (1)

  1. Department of Computer Science and Engineering, University of Notre Dame, South Bend, USA
  2. Hoover Institution, Stanford University, Stanford, USA