Abstract
Lagrangian methods are popular in solving continuous constrained optimization problems. In this paper, we address three important issues in applying Lagrangian methods to solve optimization problems with inequality constraints.
First, we study methods to transform inequality constraints into equality constraints. An existing method, the slack-variable method, adds a slack variable to each inequality constraint in order to transform it into an equality constraint. Its disadvantage is that, when the search trajectory is inside a feasible region, satisfied constraints may still affect the Lagrangian function, leading to possible oscillations and divergence when a local minimum lies on the boundary of the feasible region. To overcome this problem, we propose the MaxQ method, in which satisfied constraints have no effect on the Lagrangian function. Hence, minimizing the Lagrangian function in a feasible region always leads to a local minimum of the objective function. We also study strategies to speed up its convergence.
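The contrast between the two transforms can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `g` is a toy constraint, the slack form follows the standard transform g(x) + s² = 0, and the MaxQ form is assumed to be max(0, g(x))^q with q ≥ 2, so that it and its gradient vanish wherever the constraint is satisfied.

```python
# Toy inequality constraint g(x) <= 0 (illustrative only).
def g(x):
    return x - 1.0               # satisfied (negative) when x < 1

# Slack-variable transform: h(x, s) = g(x) + s^2 = 0.
# Even at a feasible x, h is nonzero unless s happens to be exact,
# so a satisfied constraint can still perturb the Lagrangian.
def h_slack(x, s):
    return g(x) + s**2

# Assumed MaxQ transform: h(x) = max(0, g(x))**q, q >= 2.
# h is identically zero (with zero gradient) inside the feasible region.
def h_maxq(x, q=2):
    return max(0.0, g(x))**q

x_feasible = 0.5
print(h_maxq(x_feasible))        # 0.0: no effect on the Lagrangian
print(h_slack(x_feasible, 1.0))  # 0.5: still active despite feasibility
```

The key point shown: inside the feasible region the MaxQ term contributes nothing, so minimizing the Lagrangian there reduces to minimizing the objective alone.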
Second, we study methods to improve the convergence speed of Lagrangian methods without affecting the solution quality. This is done by an adaptive-control strategy that dynamically adjusts the relative weights of the objective and the Lagrangian part, leading to a better balance between the two and faster convergence.
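One way such an adaptive-control strategy could look is sketched below. The update rule, thresholds, and factor values are illustrative assumptions, not the paper's exact controller: the weight on the objective shrinks when constraint violations grow (to re-emphasize feasibility) and grows once the trajectory is nearly feasible (to re-emphasize the objective).

```python
# Illustrative adaptive weight update for a weighted Lagrangian
#   L_w(x, lam) = w * f(x) + sum_i lam_i * h_i(x).
# All constants here are assumptions for demonstration.
def update_weight(w, violation, prev_violation,
                  shrink=0.5, grow=1.5, tol=1e-6):
    if violation > prev_violation:   # constraints losing ground:
        return w * shrink            # de-emphasize the objective
    if violation < tol:              # nearly feasible:
        return w * grow              # re-emphasize the objective
    return w                         # otherwise leave the balance alone

w = 1.0
w = update_weight(w, violation=0.2, prev_violation=0.1)
print(w)  # 0.5: violations grew, so the objective weight was halved
```

The design intent is the balance described above: neither the objective nor the constraint terms should dominate the search for long stretches.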
Third, we study a trace-based method to pull the search trajectory from one saddle point to another in a continuous fashion without restarts. This overcomes a problem in existing Lagrangian methods, which converge to only one saddle point and require random restarts to look for new saddle points, often missing good saddle points in the vicinity of those already found.
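The pulling mechanism can be illustrated with a one-dimensional caricature. This is a conceptual sketch under assumed dynamics, not the paper's trace function: the trajectory follows the local gradient while an extra term attracts it toward an externally supplied trace point, so it can leave a stationary point's basin without a restart.

```python
def grad(x):
    return 2.0 * x               # gradient of a toy objective f(x) = x**2

# One search step: gradient descent plus a pull toward the moving trace.
# eta and pull are illustrative step sizes, not values from the paper.
def step(x, trace_point, eta=0.1, pull=0.2):
    return x - eta * grad(x) + pull * (trace_point - x)

x = 0.0                          # start exactly at a stationary point
for t in range(50):
    x = step(x, trace_point=3.0)
# Pure gradient descent would never leave x = 0; the trace term drags
# the trajectory toward the region around the trace point instead.
print(round(x, 6))               # converges to 1.5, the blended fixed point
```

In the full method the trace point itself moves through the space, leading the trajectory past multiple saddle points in one continuous run.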
Finally, we describe a prototype, NOVEL (Nonlinear Optimization via External Lead), that implements our proposed strategies, and present improved solutions for a collection of benchmarks.
Wah, B.W., Wang, T. Efficient and Adaptive Lagrange-Multiplier Methods for Nonlinear Continuous Global Optimization. Journal of Global Optimization 14, 1–25 (1999). https://doi.org/10.1023/A:1008203422124