Abstract
Most practical optimization problems defy exact solution. In the current chapter we discuss an optimization method that relies heavily on convexity arguments and is particularly useful in high-dimensional problems such as image reconstruction [86]. This iterative method is called the MM algorithm. One of the virtues of this acronym is that it does double duty. In minimization problems, the first M of MM stands for majorize and the second M for minimize. In maximization problems, the first M stands for minorize and the second M for maximize. When it is successful, the MM algorithm substitutes a simple optimization problem for a difficult optimization problem. Simplicity can be attained by (a) avoiding large matrix inversions, (b) linearizing an optimization problem, (c) separating the variables of an optimization problem, (d) dealing with equality and inequality constraints gracefully, and (e) turning a nondifferentiable problem into a smooth problem. In simplifying the original problem, we must pay the price of iteration or iteration with a slower rate of convergence.
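As a minimal sketch of the majorize-minimize idea applied to point (e) above, consider least absolute deviation regression; the function `lad_mm` and all variable names here are hypothetical illustrations, not code from the chapter. The nondifferentiable objective f(b) = Σᵢ |yᵢ − xᵢᵀb| is majorized at the current iterate by the quadratic surrogate Σᵢ (yᵢ − xᵢᵀb)²/(2rᵢ) + rᵢ/2, where rᵢ is the current absolute residual; the surrogate touches f at the iterate and lies above it elsewhere, so minimizing it (a weighted least-squares problem) drives f downhill.

```python
import numpy as np

def lad_mm(X, y, n_iter=100, eps=1e-8):
    """Hypothetical MM sketch for least absolute deviation regression.

    Each step minimizes a smooth quadratic majorizer of sum |y - X b|,
    which reduces to a weighted least-squares solve.
    """
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # start from ordinary least squares
    for _ in range(n_iter):
        r = np.abs(y - X @ b) + eps            # residual magnitudes (eps avoids division by zero)
        w = 1.0 / r                            # weights defining the quadratic majorizer
        WX = X * w[:, None]
        # minimize the surrogate: solve the weighted normal equations X^T W X b = X^T W y
        b = np.linalg.solve(X.T @ WX, WX.T @ y)
    return b

# Toy usage under assumed synthetic data: heavy-tailed noise favors the LAD fit.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=200)
print(lad_mm(X, y))
```

Each MM step here avoids the kink in the absolute value entirely: the hard nondifferentiable problem is traded for a sequence of easy smooth ones, at the cost of iterating, exactly as the abstract describes.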
Keywords
- Projection Line
- Multinomial Distribution
- Surrogate Function
- Posterior Mode
- Transmission Tomography
Copyright information
© 2004 Springer Science+Business Media New York
Cite this chapter
Lange, K. (2004). The MM Algorithm. In: Optimization. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4757-4182-7_6
DOI: https://doi.org/10.1007/978-1-4757-4182-7_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4419-1910-6
Online ISBN: 978-1-4757-4182-7
eBook Packages: Springer Book Archive