
Optimization, pp. 119–136

The MM Algorithm

  • Kenneth Lange
Part of the Springer Texts in Statistics book series (STS)

Abstract

Most practical optimization problems defy exact solution. In the current chapter we discuss an optimization method that relies heavily on convexity arguments and is particularly useful in high-dimensional problems such as image reconstruction [86]. This iterative method is called the MM algorithm. One of the virtues of this acronym is that it does double duty. In minimization problems, the first M of MM stands for majorize and the second M for minimize. In maximization problems, the first M stands for minorize and the second M for maximize. When it is successful, the MM algorithm substitutes a simple optimization problem for a difficult optimization problem. Simplicity can be attained by (a) avoiding large matrix inversions, (b) linearizing an optimization problem, (c) separating the variables of an optimization problem, (d) dealing with equality and inequality constraints gracefully, and (e) turning a nondifferentiable problem into a smooth problem. In simplifying the original problem, we must pay the price of iteration or iteration with a slower rate of convergence.
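The majorize–minimize step can be made concrete with a small example. The sketch below is not taken from the chapter; the function name mm_median, the sample data, and the tolerance settings are all illustrative. It minimizes the sum of absolute deviations f(x) = Σᵢ |x − aᵢ|, whose minimizer is a sample median, by majorizing each nondifferentiable term |x − aᵢ| at the current iterate xₙ with the quadratic (x − aᵢ)²/(2|xₙ − aᵢ|) + |xₙ − aᵢ|/2, which touches |x − aᵢ| at x = xₙ and lies above it elsewhere. Because the surrogate dominates f and agrees with it at xₙ, minimizing the surrogate drives f downhill at every iteration, turning a nondifferentiable problem into a sequence of smooth ones.

```python
import numpy as np

def mm_median(a, x0=None, tol=1e-10, max_iter=500, eps=1e-12):
    """Minimize f(x) = sum_i |x - a_i| by majorize-minimize.

    At the current iterate x_n, each term |x - a_i| is majorized by the
    quadratic (x - a_i)**2 / (2*|x_n - a_i|) + |x_n - a_i| / 2.  The
    quadratic surrogate has a closed-form minimizer, a weighted average
    of the a_i, which becomes the next iterate.
    """
    a = np.asarray(a, dtype=float)
    x = float(np.mean(a)) if x0 is None else float(x0)
    for _ in range(max_iter):
        # Surrogate weights; eps guards against division by zero when
        # the iterate lands exactly on a data point.
        w = 1.0 / np.maximum(np.abs(x - a), eps)
        x_new = float(np.sum(w * a) / np.sum(w))  # surrogate minimizer
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

data = [1.0, 2.0, 7.0, 9.0, 10.0]
print(mm_median(data))  # approaches 7.0, the sample median
```

Note that the update never inverts a matrix and handles each data point separately, illustrating points (a), (c), and (e) of the abstract under the stated assumptions.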

Keywords

Projection Line · Multinomial Distribution · Surrogate Function · Posterior Mode · Transmission Tomography


Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Kenneth Lange, Department of Biomathematics and Human Genetics, UCLA School of Medicine, Los Angeles, USA
