Parallel Computing vs. Distributed Computing: A Great Confusion? (Position Paper)

  • Michel Raynal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9523)


This short position paper argues that, from a teaching point of view, parallel computing and distributed computing are often conflated, even though, at their core, they address distinct fundamental issues. Appropriate curricula should therefore be designed separately for each of them. An “everything is in everything (and reciprocally)” attitude is not a sound way to teach students the key concepts that characterize parallelism on the one hand and distributed computing on the other.
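The distinction the paper draws can be sketched in code (a hedged illustration, not from the paper itself): in the parallel view, the difficulty is decomposing one task across workers for performance on a shared machine; in the distributed view, each process owns only local data and a mailbox, and any global result must be assembled purely by message exchange. The function names and the single-aggregator scheme below are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): the same "sum the values" task
# framed two ways.

from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, workers=4):
    """Parallel framing: one global input, split across workers for speed.
    All workers implicitly share the same memory and the same data."""
    chunk = max(1, len(values) // workers)
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

def distributed_sum(local_values):
    """Distributed framing: process i owns only local_values[i] and a
    mailbox; there is no shared state, only messages. Here every process
    sends its value to process 0, which aggregates what it received."""
    mailboxes = {pid: [] for pid in range(len(local_values))}
    for pid, value in enumerate(local_values):
        mailboxes[0].append(value)          # send(value) to process 0
    return sum(mailboxes[0])                # process 0 acts only on its mail

print(parallel_sum(list(range(10))))   # 45
print(distributed_sum([1, 2, 3, 4]))   # 10
```

The parallel version asks "how do I cut the work up efficiently?", while the distributed version asks "how do independent parties cooperate at all?" — which is the paper's point that the two fields pose different fundamental questions (mastering data decomposition teaches you little about, say, reaching agreement when messages can be delayed or lost).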



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Institut Universitaire de France, Paris, France
  2. IRISA, Université de Rennes, Rennes, France
  3. Department of Computing, Hong Kong Polytechnic University, Hung Hom, Hong Kong