Introduction to Parallel Computation

Chapter in Parallel Computing

Abstract

This chapter provides an overview of the fundamental concepts and ideas shaping the field of parallel computation. Whereas serial (or sequential) algorithms are designed for the generic uniprocessor architecture of the Random Access Machine (RAM), parallel algorithms are supported by a variety of models and architectures: shared-memory models, interconnection networks, combinational circuits, clusters, and grids.

Sometimes, the methods used in designing sequential algorithms can also lead to efficient parallel algorithms, as is the case with divide-and-conquer techniques. In other cases, the particularities of a certain model or architecture impose specific tools and methods needed to fully exploit the potential offered by that model. In all situations, however, we seek an improvement either in the running time of the parallel algorithm or in the quality of the solution it produces, with respect to the best sequential algorithm for the same problem.
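To illustrate how a divide-and-conquer design carries over to the parallel setting, the following is a minimal sketch in Python (not from the chapter itself): an array sum is divided into chunks, each chunk is reduced by a separate worker, and the partial results are then combined. The function name and the chunking scheme are illustrative choices, not the chapter's notation.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, num_workers=4):
    """Divide-and-conquer reduction: split `data` into chunks, sum each
    chunk in its own worker, then combine the partial sums."""
    # Divide: partition the input into roughly equal chunks.
    chunk = (len(data) + num_workers - 1) // num_workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Conquer: reduce each chunk concurrently.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(sum, parts)
    # Combine: merge the partial results.
    return sum(partials)

print(parallel_sum(list(range(100))))  # 4950
```

On an idealized machine with p processors, the chunk reductions run in parallel, so the dominant cost drops from O(n) to O(n/p) plus the cost of combining the partial sums.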

The improvement in performance can even become superlinear with respect to the number of processors employed by the parallel model under consideration. This is the case, for example, with computations performed under real-time constraints, when the deadlines imposed on the availability of the input and/or output data leave little room for sequentially simulating the parallel approach. Furthermore, in the examples presented at the end of the chapter, the impossibility of simulating a parallel solution on a sequential machine is due to the intrinsically parallel nature of the computation, rather than being an artifact of externally imposed time constraints.
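The notion of superlinear speedup above can be made concrete with a short worked calculation. Speedup is conventionally defined as the ratio of the best sequential running time to the parallel running time; it is superlinear when it exceeds the number of processors p. The timings below are hypothetical numbers chosen purely for illustration:

```python
def speedup(t_sequential, t_parallel):
    """Speedup S(p) = T_1 / T_p: sequential time over parallel time."""
    return t_sequential / t_parallel

def is_superlinear(t_sequential, t_parallel, p):
    """Speedup strictly greater than the processor count p is superlinear."""
    return speedup(t_sequential, t_parallel) > p

# Hypothetical timings: the best sequential algorithm takes 100 time
# units, while a 4-processor parallel algorithm takes 20 units.
print(speedup(100, 20))            # 5.0  -- exceeds p = 4
print(is_superlinear(100, 20, 4))  # True
```

Under the usual simulation argument, speedup on p processors cannot exceed p; the chapter's point is that real-time constraints (and, later, intrinsically parallel computations) invalidate that simulation, which is how ratios such as the one above become possible.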

In this respect, parallelism proves to be the vehicle leading to a Non-Universality result in computing: there is no finite computational device, sequential or parallel, conventional or unconventional, that is able to simulate all others.



Author information

Correspondence to Selim G. Akl.


Copyright information

© 2009 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Akl, S.G., Nagy, M. (2009). Introduction to Parallel Computation. In: Trobec, R., Vajteršic, M., Zinterhof, P. (eds) Parallel Computing. Springer, London. https://doi.org/10.1007/978-1-84882-409-6_2

Download citation

  • DOI: https://doi.org/10.1007/978-1-84882-409-6_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-408-9

  • Online ISBN: 978-1-84882-409-6

  • eBook Packages: Computer Science (R0)
