
On a feasible descent algorithm for solving min-max problems

  • Andrzej Stachurski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1196)

Abstract

The paper presents, using the example of max functions, a new algorithm for solving nondifferentiable optimization problems when full knowledge of the subdifferential is available at each point. We generate an easy-to-handle representation of the improvement cone at a computational cost equal to that of inverting a single matrix. As a result, we find a descent direction at each iteration.
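
As a rough illustration of the underlying idea (a minimal sketch under simplifying assumptions, not the paper's exact construction), consider a finite max function f(x) = max_i f_i(x). When the gradients of the components active at the current point are known and linearly independent, a direction along which every active component decreases can be obtained from a single linear solve. The function name, the right-hand side of -1's, and the least-squares solve below are illustrative choices, not taken from the paper.

```python
import numpy as np

def descent_direction(active_grads):
    """Sketch: given the gradients g_1, ..., g_m of the components that
    attain the maximum at the current point (rows of `active_grads`),
    return a direction d with g_i^T d = -1 for every active i, so each
    active component, and hence the max, decreases locally.  Assumes the
    stacked gradients have full row rank; solved via least squares."""
    G = np.atleast_2d(active_grads)      # m x n matrix of active gradients
    rhs = -np.ones(G.shape[0])           # demand slope -1 along each g_i
    d, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return d

# Tiny example: f(x) = max(x1 + x2, x1 - x2) at a point where both pieces are active.
G = np.array([[1.0, 1.0],
              [1.0, -1.0]])
d = descent_direction(G)
print(d, G @ d)                          # G @ d is approximately [-1, -1]
```

With both pieces active, the sketch returns d = (-1, 0); along this direction both components, and therefore the max, decrease, without solving a quadratic subproblem.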

This is in contrast to the usual nondifferentiable methods, which use only one subgradient at each point and solve a quadratic programming (QP) direction-search problem at each step. That approach is much more computationally expensive and does not guarantee that the direction found is a descent direction.
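
For contrast, here is a minimal sketch of the kind of direction-search subproblem such methods solve at every step (a common dual form of a proximal-bundle QP with unit proximal weight; the function names and the use of SciPy's general-purpose solver are assumptions for illustration, not the exact scheme discussed in the paper):

```python
import numpy as np
from scipy.optimize import minimize

def bundle_direction(subgrads, lin_errors):
    """Sketch of a bundle-style direction search in dual form: minimize
    0.5 * ||sum_i lam_i g_i||^2 + sum_i lam_i * alpha_i over the unit
    simplex, then take d = -sum_i lam_i g_i.  A small QP must be solved
    at every iteration."""
    G = np.atleast_2d(subgrads)
    m = G.shape[0]

    def objective(lam):
        aggregate = G.T @ lam            # convex combination of subgradients
        return 0.5 * aggregate @ aggregate + lam @ lin_errors

    constraints = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * m
    result = minimize(objective, np.full(m, 1.0 / m),
                      bounds=bounds, constraints=constraints)
    return -G.T @ result.x

# Same two gradients as above, zero linearization errors:
G = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(bundle_direction(G, np.zeros(2)))  # approximately [-1, 0], obtained via a QP solve
```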

The results of numerical experiments on a sequential machine are presented. They show that the current algorithm works well on convex min-max problems. The nonconvex case requires careful elaboration of a new minimization algorithm.

We also present the idea of a parallel method of bundle type that makes use of our mechanism for generating feasible directions.

Keywords

Descent Direction, Full Column Rank, Feasible Direction, Nonconvex Problem, Member Function
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Andrzej Stachurski
  1. Institute of Automatic Control, Warsaw University of Technology, Warszawa, Poland
