
A smoothing-out technique for min–max optimization

Published in: Mathematical Programming


In this paper, we suggest approximations for smoothing out the kinks caused by the presence of "max" or "min" operators in many non-smooth optimization problems. We concentrate on the continuous-discrete min–max optimization problem. The new approximations replace the original problem in some neighborhoods of the kink points. These neighborhoods can be made arbitrarily small, thus leaving the original objective function unchanged at almost every point of R^n. Furthermore, the maximal possible difference between the optimal values of the approximate problem and the original one is determined a priori by fixing the value of a single parameter. The approximations introduced preserve properties such as convexity and continuous differentiability, provided that each function composing the original problem has these properties. This enables the use of efficient gradient techniques in the solution process. Some numerical examples are presented.
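The flavor of such a technique can be illustrated with a minimal Python sketch (an illustration in the spirit of the abstract, not necessarily the paper's exact formula). Near the kink of |t| at t = 0, replace |t| on (-eps, eps) by the quadratic t²/(2·eps) + eps/2, which matches |t| and its derivative at ±eps, so the result is C¹, convex, and identical to |t| outside the eps-neighborhood. Since max(a, b) = (a + b)/2 + |a - b|/2, this immediately yields a smoothed max whose maximal deviation from the true max is eps/4, fixed in advance by the single parameter eps.

```python
def smooth_abs(t, eps):
    """C^1, convex approximation to |t| that equals |t| for |t| >= eps.

    Inside (-eps, eps) a quadratic is used; the maximal error, eps/2,
    occurs at t = 0 and is determined a priori by the parameter eps.
    """
    if abs(t) >= eps:
        return abs(t)
    return t * t / (2.0 * eps) + eps / 2.0

def smooth_max(a, b, eps):
    """Smoothed max(a, b), via max(a, b) = (a + b)/2 + |a - b|/2.

    Equals max(a, b) exactly whenever |a - b| >= eps; otherwise it
    overestimates it by at most eps / 4.
    """
    return 0.5 * (a + b) + 0.5 * smooth_abs(a - b, eps)
```

For example, with eps = 0.5, `smooth_max(3.0, 1.0, 0.5)` returns exactly 3.0 (the arguments are farther apart than eps), while at the kink `smooth_max(1.0, 1.0, 0.5)` returns 1.125, i.e. the true value plus eps/4. Because `smooth_abs` is convex and continuously differentiable whenever its argument is, compositions built from it inherit those properties, which is what permits gradient methods.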





Cite this article

Zang, I. A smoothing-out technique for min–max optimization. Mathematical Programming 19, 61–77 (1980).
