Local and Global Convergence
Proving convergence of the various optimization algorithms is a delicate exercise. In general, it is helpful to consider local and global convergence patterns separately. The local convergence rate of an algorithm provides a useful benchmark for comparing it to other algorithms. On this basis, Newton’s method wins hands down. However, the tradeoffs are subtle. Besides the sheer number of iterations until convergence, the computational complexity and numerical stability of an algorithm are critically important. The MM algorithm is often the epitome of numerical stability and computational simplicity. Scoring lies somewhere between these two extremes. It tends to converge more quickly than the MM algorithm and to behave more stably than Newton’s method. Quasi-Newton methods also occupy this intermediate zone. Because the issues are complex, all of these algorithms survive and prosper in certain computational niches.
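The local-rate comparison above can be illustrated with a toy experiment. The sketch below (not from the chapter; the objective f(x) = x² − log x, the starting point, and the step size are illustrative assumptions) pits Newton's method against fixed-step gradient descent, standing in here for a slower but simpler first-order scheme, and counts iterations to convergence:

```python
import math

def newton(x, tol=1e-12, max_iter=100):
    # Newton's method on f(x) = x^2 - log(x), x > 0:
    # quadratic local convergence, but each step needs f''.
    for k in range(max_iter):
        g = 2 * x - 1 / x       # f'(x)
        h = 2 + 1 / x**2        # f''(x)
        step = g / h
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

def gradient(x, lr=0.1, tol=1e-12, max_iter=10000):
    # Fixed-step gradient descent on the same f:
    # only linear local convergence, but each step is cheaper.
    for k in range(max_iter):
        g = 2 * x - 1 / x
        x -= lr * g
        if abs(lr * g) < tol:
            return x, k + 1
    return x, max_iter

x_star = 1 / math.sqrt(2)   # minimizer of x^2 - log(x)
xn, kn = newton(3.0)
xg, kg = gradient(3.0)
print(f"Newton:   x = {xn:.12f} in {kn} iterations")
print(f"Gradient: x = {xg:.12f} in {kg} iterations")
```

On this problem Newton's method reaches machine-level accuracy in a handful of iterations, while the first-order scheme needs dozens; the gap widens as the tolerance tightens, which is the benchmark sense in which Newton "wins hands down" locally.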
Keywords: Stationary Point, Line Search, Spectral Radius, Global Convergence, Gradient Algorithm