Encyclopedia of Algorithms

2016 Edition
Editors: Ming-Yang Kao

High Performance Algorithm Engineering for Large-Scale Problems

  • David A. Bader
Reference work entry
DOI: https://doi.org/10.1007/978-1-4939-2864-4_178

Years and Authors of Summarized Original Work

  • 2005; Bader

Problem Definition

Algorithm engineering refers to the process required to transform a pencil-and-paper algorithm into a robust, efficient, well-tested, and easily usable implementation. It thus encompasses a number of topics, from modeling cache behavior to the principles of good software engineering; its main focus, however, is experimentation. In that sense, it may be viewed as a recent outgrowth of Experimental Algorithmics [14], which is specifically devoted to the development of methods, tools, and practices for assessing and refining algorithms through experimentation. The ACM Journal of Experimental Algorithmics (JEA), at URL www.jea.acm.org, is devoted to this area.
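The experimental workflow described above can be sketched in a few lines of Python. Everything here is illustrative rather than taken from the entry: the algorithm under test (a simple insertion sort), the function names, and the random-input model are all assumptions; a real study would control for machine load, vary input distributions, and report more than a median.

```python
import random
import time

def insertion_sort(a):
    """Simple quadratic sort standing in for the algorithm under test."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def measure(sort_fn, n, trials=3):
    """Median wall-clock time to sort a fresh random list of length n."""
    times = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        sort_fn(data)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

if __name__ == "__main__":
    # Doubling n should roughly quadruple the time for a quadratic sort;
    # observing (or failing to observe) such growth is the experiment.
    for n in (500, 1000, 2000):
        print(f"n={n:5d}  t={measure(insertion_sort, n):.4f}s")
```

Timing a sequence of input sizes and checking the observed growth rate against the theoretical bound is the simplest instance of the assess-and-refine loop that experimental algorithmics advocates.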

High-performance algorithm engineering [2] focuses on one of the many facets of algorithm engineering: speed. The high‐performance aspect does not immediately imply parallelism; in fact, in any highly parallel task, most of the impact of high‐performance...
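One concrete source of serial speed that high-performance algorithm engineering attends to is memory locality. As a hedged illustration (not from the entry, and with an effect whose magnitude depends heavily on the hardware and language runtime), the sketch below sums the same elements with unit stride versus a large stride, mimicking row-major versus column-major traversal of a row-major matrix; the strided version typically loses cache locality and runs slower even though it does identical arithmetic.

```python
import time

def sum_strided(a, n, stride):
    """Sum the first n elements of flat list a, visiting them at the
    given stride. stride=1 mimics unit-stride (row-major) traversal;
    a large stride mimics column-major traversal of a row-major layout.
    Every element is visited exactly once regardless of stride."""
    total = 0.0
    for start in range(stride):
        for i in range(start, n, stride):
            total += a[i]
    return total

if __name__ == "__main__":
    n = 1 << 22
    a = [float(i % 7) for i in range(n)]
    for stride in (1, 4096):
        t0 = time.perf_counter()
        s = sum_strided(a, n, stride)
        dt = time.perf_counter() - t0
        print(f"stride={stride:5d}  sum={s:.0f}  time={dt:.3f}s")
```

Both runs compute the same sum; any time difference between them is pure memory-system behavior, which is exactly the kind of constant-factor effect that asymptotic analysis hides and experimentation exposes.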

Keywords

Experimental algorithmics 

Recommended Reading

  1. Aggarwal A, Vitter J (1988) The input/output complexity of sorting and related problems. Commun ACM 31:1116–1127
  2. Bader DA, Moret BME, Sanders P (2002) Algorithm engineering for parallel computation. In: Fleischer R, Meineche-Schmidt E, Moret BME (eds) Experimental algorithmics. Lecture notes in computer science, vol 2547. Springer, Berlin, pp 1–23
  3. Blelloch GE, Leiserson CE, Maggs BM, Plaxton CG, Smith SJ, Zagha M (1998) An experimental analysis of parallel sorting algorithms. Theory Comput Syst 31(2):135–167
  4. Choi J, Dongarra JJ, Pozo R, Walker DW (1992) ScaLAPACK: a scalable linear algebra library for distributed memory concurrent computers. In: The 4th symposium on the frontiers of massively parallel computations, McLean, pp 120–127
  5. Culler DE, Karp RM, Patterson DA, Sahay A, Schauser KE, Santos E, Subramonian R, von Eicken T (1993) LogP: towards a realistic model of parallel computation. In: 4th symposium on principles and practice of parallel programming. ACM SIGPLAN, pp 1–12
  6. Frigo M, Johnson SG (1998) FFTW: an adaptive software architecture for the FFT. In: Proceedings of IEEE international conference on acoustics, speech, and signal processing, Seattle, vol 3, pp 1381–1384
  7. Frigo M, Leiserson CE, Prokop H, Ramachandran S (1999) Cache-oblivious algorithms. In: Proceedings of 40th annual symposium on foundations of computer science (FOCS-99), New York. IEEE, pp 285–297
  8. Gropp W, Lusk E, Doss N, Skjellum A (1996) A high-performance, portable implementation of the MPI message passing interface standard. Technical report, Argonne National Laboratory, Argonne. www.mcs.anl.gov/mpi/mpich/
  9. Helman DR, JáJá J (1998) Sorting on clusters of SMPs. In: Proceedings of 12th international parallel processing symposium, Orlando, pp 1–7
  10. High Performance Fortran Forum (1993) High performance Fortran language specification, 1.0 edn, May 1993
  11. Juurlink BHH, Wijshoff HAG (1998) A quantitative comparison of parallel computation models. ACM Trans Comput Syst 13(3):271–318
  12. Message Passing Interface Forum (1995) MPI: a message-passing interface standard. Technical report, University of Tennessee, Knoxville, June 1995. Version 1.1
  13. Moret BME, Bader DA, Warnow T (2002) High-performance algorithm engineering for computational phylogenetics. J Supercomput 22:99–111. Special issue on the best papers from ICCS'01
  14. Moret BME, Shapiro HD (2001) Algorithms and experiments: the new (and old) methodology. J Univ Comput Sci 7(5):434–446
  15. Nagel WE, Arnold A, Weber M, Hoppe HC, Solchenbach K (1996) VAMPIR: visualization and analysis of MPI resources. Supercomputer 63, 12(1):69–80
  16. OpenMP Architecture Review Board (1997) OpenMP: a proposed industry standard API for shared memory programming. www.openmp.org
  17. Reed DA, Aydt RA, Noe RJ, Roth PC, Shields KA, Schwartz B, Tavera LF (1993) Scalable performance analysis: the Pablo performance analysis environment. In: Skjellum A (ed) Proceedings of scalable parallel libraries conference, Mississippi State University. IEEE Computer Society Press, pp 104–113
  18. Reussner R, Sanders P, Träff J (2001) SKaMPI: a comprehensive benchmark for public benchmarking of MPI. Scientific Programming. Conference version, with Prechelt L and Müller M, in: Proceedings of EuroPVM/MPI 1998
  19. Vitter JS, Shriver EAM (1994) Algorithms for parallel memory I: two-level memories. Algorithmica 12(2/3):110–147
  20. Vitter JS, Shriver EAM (1994) Algorithms for parallel memory II: hierarchical multilevel memories. Algorithmica 12(2/3):148–169
  21. Whaley R, Dongarra J (1998) Automatically tuned linear algebra software (ATLAS). In: Proceedings of supercomputing 98, Orlando. www.netlib.org/utk/people/JackDongarra/PAPERS/atlas-sc98.ps

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. College of Computing, Georgia Institute of Technology, Atlanta, USA