Nelder-Mead Simplex Optimization Routine for Large-Scale Problems: A Distributed Memory Implementation
The Nelder-Mead simplex method is an optimization routine that works well with irregular objective functions. For a function of \(n\) parameters, it compares the objective function at the \(n+1\) vertices of a simplex and updates the worst vertex through simplex search steps. However, a standard serial implementation can be prohibitively expensive for optimizations over a large number of parameters. We describe a parallel, distributed-memory implementation of the Nelder-Mead method. For \(p\) processors, each processor is assigned \((n+1)/p\) vertices at each iteration. Each processor then updates its worst local vertices, communicates the results, and a new simplex is formed from the vertices of all processors. We also describe how the algorithm can be implemented with only two MPI commands. In simulations, our implementation exhibits large speedups and scales to large problem sizes.
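As a point of reference for the serial baseline described above, the following is a minimal sketch of the standard Nelder-Mead iteration (reflection, expansion, contraction, shrink) with the usual coefficients. It is an illustrative simplification, not the paper's distributed implementation; function names, the stopping rule, and the initial-simplex construction are assumptions for this sketch.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, max_iter=500, tol=1e-8):
    """Minimal serial Nelder-Mead sketch (illustrative, not the
    paper's parallel algorithm). Coefficients: reflection 1,
    expansion 2, contraction 0.5, shrink 0.5."""
    n = len(x0)
    # Initial simplex: x0 plus n vertices perturbed along each axis.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]

    for _ in range(max_iter):
        # Order vertices from best to worst.
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)  # excludes worst vertex

        # Reflection of the worst vertex through the centroid.
        xr = centroid + (centroid - simplex[-1])
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            # Expansion: try moving further in the same direction.
            xe = centroid + 2.0 * (centroid - simplex[-1])
            fe = f(xe)
            if fe < fr:
                simplex[-1], fvals[-1] = xe, fe
            else:
                simplex[-1], fvals[-1] = xr, fr
        else:
            # Contraction toward the centroid.
            xc = centroid + 0.5 * (simplex[-1] - centroid)
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:
                # Shrink all vertices toward the best one.
                best = simplex[0]
                simplex = [best] + [best + 0.5 * (v - best)
                                    for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    return simplex[0], fvals[0]
```

In the parallel scheme described above, each of the \(p\) processors would apply this kind of update to the worst vertices in its own block of \((n+1)/p\) vertices before the results are gathered into a new global simplex.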
Keywords: Parallel computing · Optimization algorithms · Nelder-Mead