Parallel Programming Paradigms

  • Efstratios Gallopoulos
  • Bernard Philippe
  • Ahmed H. Sameh
Chapter
Part of the Scientific Computation book series (SCIENTCOMP)

Abstract

In this chapter, we briefly present the main concepts of parallel computing. Our aim is to make precise some definitions used throughout this book rather than to survey the topic; interested readers are referred to one of the many books available on the subject, e.g., Arbenz and Petersen, Introduction to Parallel Computing (2004), Bertsekas and Tsitsiklis, Parallel and Distributed Computation (1989), Culler et al., Parallel Computer Architecture: A Hardware/Software Approach (1998), Kumar et al., Introduction to Parallel Computing: Design and Analysis of Algorithms, 2nd edn. (2003), Casanova et al., Parallel Algorithms (2008), Hennessy and Patterson, Computer Architecture: A Quantitative Approach (2011), Hockney and Jesshope, Parallel Computers 2: Architecture, Programming and Algorithms, 2nd edn. (1988), and Tchuente, Parallel Computation on Regular Arrays. Algorithms and Architectures for Advanced Scientific Computing (1991) [1, 2, 3, 4, 5, 6, 7, 8].

References

  1. Arbenz, P., Petersen, W.: Introduction to Parallel Computing. Oxford University Press (2004)
  2. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation. Prentice Hall, Englewood Cliffs (1989)
  3. Culler, D., Singh, J., Gupta, A.: Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann, San Francisco (1998)
  4. Kumar, V., Grama, A., Gupta, A., Karypis, G.: Introduction to Parallel Computing: Design and Analysis of Algorithms, 2nd edn. Addison-Wesley (2003)
  5. Casanova, H., Legrand, A., Robert, Y.: Parallel Algorithms. Chapman & Hall/CRC Press (2008)
  6. Hennessy, J., Patterson, D.: Computer Architecture: A Quantitative Approach. Elsevier Science & Technology (2011)
  7. Hockney, R., Jesshope, C.: Parallel Computers 2: Architecture, Programming and Algorithms, 2nd edn. Adam Hilger (1988)
  8. Tchuente, M.: Parallel Computation on Regular Arrays. Algorithms and Architectures for Advanced Scientific Computing. Manchester University Press (1991)
  9. Flynn, M.: Some computer organizations and their effectiveness. IEEE Trans. Comput. C-21, 948–960 (1972)
  10. Jeffers, J., Reinders, J.: Intel Xeon Phi Coprocessor High Performance Programming, 1st edn. Morgan Kaufmann Publishers Inc., San Francisco (2013)
  11. Hockney, R.: The Science of Computer Benchmarking. SIAM, Philadelphia (1996)
  12. Higham, N.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)
  13. Karp, R., Sahay, A., Santos, E., Schauser, K.: Optimal broadcast and summation in the LogP model. In: Proceedings of the 5th Annual ACM Symposium on Parallel Algorithms and Architectures SPAA'93, pp. 142–153. ACM Press, Velen (1993). http://doi.acm.org/10.1145/165231.165250
  14. Culler, D., Karp, R., Patterson, D., Sahay, A., Schauser, K.E., Santos, E., Subramonian, R., von Eicken, T.: LogP: towards a realistic model of parallel computation. In: Principles and Practice of Parallel Programming, pp. 1–12 (1993). http://citeseer.ist.psu.edu/culler93logp.html
  15. Pjesivac-Grbovic, J., Angskun, T., Bosilca, G., Fagg, G.E., Gabriel, E., Dongarra, J.: Performance analysis of MPI collective operations. In: Fourth International Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems (PMEO-PDS'05). Denver (2005)
  16. Breshears, C.: The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications. O'Reilly (2009)
  17. Rauber, T., Rünger, G.: Parallel Programming—for Multicore and Cluster Systems. Springer (2010)
  18. Darema, F.: The SPMD model: past, present and future. In: Recent Advances in Parallel Virtual Machine and Message Passing Interface. LNCS, vol. 2131, p. 1. Springer, Berlin (2001)
  19. Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message Passing Interface. MIT Press, Cambridge (1994)
  20. Snir, M., Otto, S., Huss-Lederman, S., Walker, D., Dongarra, J.: MPI: The Complete Reference (1995). http://www.netlib.org/utk/papers/mpi-book/mpi-book.html
  21. Chapman, B., Jost, G., van der Pas, R.: Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press, Cambridge (2007)
  22. OpenMP Architecture Review Board: OpenMP Application Program Interface (Version 3.1) (2011). http://www.openmp.org/mp-documents/
  23. Amdahl, G.M.: Validity of the single processor approach to achieving large scale computing capabilities. Proc. AFIPS Spring Jt. Comput. Conf. 31, 483–485 (1967)
  24. Juurlink, B., Meenderinck, C.: Amdahl's law for predicting the future of multicores considered harmful. SIGARCH Comput. Archit. News 40(2), 1–9 (2012). doi:10.1145/2234336.2234338
  25. Hill, M., Marty, M.: Amdahl's law in the multicore era. In: HPCA, p. 187. IEEE Computer Society (2008)
  26. Sun, X.H., Chen, Y.: Reevaluating Amdahl's law in the multicore era. J. Parallel Distrib. Comput. 70(2), 183–188 (2010)
  27. Flatt, H., Kennedy, K.: Performance of parallel processors. Parallel Comput. 12(1), 1–20 (1989). doi:10.1016/0167-8191(89)90003-3
  28. Kuck, D.: High Performance Computing: Challenges for Future Systems. Oxford University Press, New York (1996)
  29. Kuck, D.: What do users of parallel computer systems really need? Int. J. Parallel Program. 22(1), 99–127 (1994). doi:10.1007/BF02577794
  30. Kumar, V., Gupta, A.: Analyzing scalability of parallel algorithms and architectures. J. Parallel Distrib. Comput. 22(3), 379–391 (1994)
  31. Worley, P.H.: The effect of time constraints on scaled speedup. SIAM J. Sci. Stat. Comput. 11(5), 838–858 (1990)
  32. Gustafson, J.: Reevaluating Amdahl's law. Commun. ACM 31(5), 532–533 (1988)
  33. Grama, A., Gupta, A., Kumar, V.: Isoefficiency: measuring the scalability of parallel algorithms and architectures. IEEE Parallel Distrib. Technol. 12–21 (1993)

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  • Efstratios Gallopoulos (1)
  • Bernard Philippe (2)
  • Ahmed H. Sameh (3)

  1. Computer Engineering and Informatics Department, University of Patras, Patras, Greece
  2. Campus de Beaulieu, INRIA/IRISA, Rennes Cedex, France
  3. Department of Computer Science, Purdue University, West Lafayette, USA