Encyclopedia of Operations Research and Management Science

2013 Edition | Editors: Saul I. Gass, Michael C. Fu

Parallel Computing

Reference work entry
DOI: https://doi.org/10.1007/978-1-4419-1153-7_728

Introduction

Parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logical units (ALUs) that can be programmed to cooperate concurrently on a single task. Between 2000 and 2010, parallel computing underwent a sea change. Before that decade, the speed of single-processor computers advanced steadily, and parallel computing was generally employed only for applications requiring more computing power than a standard PC processor chip could deliver. Taking advantage of Moore’s Law (Moore 1965), which predicts a steady increase in the number of transistors that can be packed into a given chip area, microprocessor manufacturers built processors that could execute a single stream of calculations at steadily increasing speeds. In the 2000–2010 decade, Moore’s Law continued to hold, but the way chip builders used the ever-increasing number of transistors began to change. Applying ever-larger numbers of transistors to a single sequential stream of...
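As a concrete illustration of the opening definition, the sketch below uses OpenMP (Dagum and Menon 1998), one of the shared-memory programming interfaces cited in the references, to let multiple processing units cooperate concurrently on a single task: summing an array. This is a minimal sketch, not code from the entry itself; the array contents, the problem size N, and the compiler invocation are illustrative assumptions.

  #include <stdio.h>
  #include <omp.h>

  /* Minimal OpenMP sketch (assumed example, not from the entry):
     several threads cooperate concurrently on one summation.
     Build with an OpenMP-capable compiler, e.g., gcc -fopenmp. */
  int main(void) {
      enum { N = 1000000 };            /* illustrative problem size */
      static double x[N];
      double sum = 0.0;

      for (int i = 0; i < N; i++)      /* fill with sample data */
          x[i] = 1.0 / (i + 1);

      /* The loop iterations are divided among the available threads;
         the reduction clause combines the per-thread partial sums. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < N; i++)
          sum += x[i];

      printf("sum = %.6f using up to %d threads\n",
             sum, omp_get_max_threads());
      return 0;
  }

With the pragma removed, the identical program runs as a single sequential stream; with it, each core's ALU works on a share of the data concurrently, which is the distinction the paragraph above draws.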


References

  1. Barr, R. S., & Hickman, B. L. (1993). Reporting computational experiments with parallel algorithms: Issues, measures and experts' opinions. ORSA Journal on Computing, 5, 2–18.
  2. Bertsekas, D. P., & Tsitsiklis, J. (1989). Parallel and distributed computation: Numerical methods. Englewood Cliffs, NJ: Prentice-Hall.
  3. Blumofe, R. D., Joerg, C. F., Kuszmaul, B. C., Leiserson, C. E., Randall, K. H., & Zhou, Y. (1995). Cilk: An efficient multithreaded runtime system. Proceedings of the Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Santa Barbara, California, 207–216.
  4. Butenhof, D. R. (1997). Programming with POSIX threads. Boston, MA: Addison-Wesley.
  5. Dagum, L., & Menon, R. (1998). OpenMP: An industry standard API for shared-memory programming. IEEE Computational Science and Engineering, 5, 46–55.
  6. Eckstein, J. (1993). Large-scale parallel computing, optimization, and operations research: A survey. ORSA Computer Science Technical Section Newsletter, 14(2), 1, 8–12.
  7. Flynn, M. J. (1972). Some computer organizations and their effectiveness. IEEE Transactions on Computers, C-21, 948–960.
  8. Gondzio, J., & Grothey, A. (2007). Parallel interior-point solver for structured quadratic programs: Application to financial planning problems. Annals of Operations Research, 152, 319–339.
  9. Kindervater, G. A. P., & Lenstra, J. K. (1988). Parallel computing in combinatorial optimization. Annals of Operations Research, 14, 245–289.
  10. Koelbel, C. H., Loveman, D. B., Schreiber, R. S., Steele, G. L., & Zosel, M. E. (1993). The High Performance Fortran handbook. Cambridge, MA: MIT Press.
  11. Kumar, V., & Gupta, A. (1994). Analyzing scalability of parallel algorithms and architectures. Journal of Parallel and Distributed Computing, 22, 379–391.
  12. Leighton, F. T. (1991). Introduction to parallel algorithms and architectures: Arrays, trees, and hypercubes. San Mateo, CA: Morgan Kaufmann.
  13. Leiserson, C. E. (2009). The Cilk++ concurrency platform. Proceedings of the 46th Annual Design Automation Conference, ACM, San Francisco, California, 522–527.
  14. Litzkow, M. J., Livny, M., & Mutka, M. W. (1988). Condor: A hunter of idle workstations. Proceedings of the 8th International Conference on Distributed Computing Systems, IEEE, San Jose, California, 104–111.
  15. Lougee-Heimer, R. (2003). The common optimization interface for operations research: Promoting open-source software in the operations research community. IBM Journal of Research and Development, 47, 57–66.
  16. Metcalf, M., & Reid, J. (1990). Fortran 90 explained. Oxford, UK: Oxford University Press.
  17. Moore, G. (1965). Cramming more components onto integrated circuits. Electronics, 38(8), 114–117.
  18. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96, 879–899.
  19. Snir, M., Otto, S. W., Huss-Lederman, S., Walker, D., & Dongarra, J. (1996). MPI: The complete reference. Cambridge, MA: MIT Press.
  20. Zenios, S. A. (1994). Parallel and supercomputing in the practice of management science. Interfaces, 24, 122–140.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Rutgers, The State University of New Jersey, Livingston Campus, New Brunswick, USA