We generalize a smoothing algorithm for finite min–max problems to finite min–max–min problems. We apply a smoothing technique twice: once to eliminate the inner min operator and once to eliminate the max operator. In min–max problems, where only the max operator is eliminated, the approximation function is decreasing with respect to the smoothing parameter. Such a property is convenient for establishing convergence of the algorithm, but it does not hold when both operators are eliminated. To restore the desired property, an additional term is added to the approximation. We establish convergence of a steepest descent algorithm and provide a numerical example.
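The double smoothing idea can be sketched as follows. A minimal, hypothetical illustration (not the paper's actual algorithm or test problem): the standard exponential (log-sum-exp) smoothing replaces the inner min and the outer max by smooth surrogates, and plain steepest descent is run on the smoothed objective for a fixed smoothing parameter mu. The quadratic functions f_{ij}, the centers, the step size, and the iteration count are all illustrative assumptions.

```python
# Hypothetical sketch: exponential smoothing applied twice to
# minimize over x the function  max_j min_i f_ij(x).
# The f_ij below are made-up quadratics; the paper's added
# correction term and adaptive parameter updates are omitted.
import numpy as np

def smooth_min(vals, mu):
    # -mu * log-sum-exp(-v/mu) -> min(vals) as mu -> 0 (underestimate-safe form)
    v = np.asarray(vals)
    m = v.min()
    return m - mu * np.log(np.sum(np.exp(-(v - m) / mu)))

def smooth_max(vals, mu):
    # mu * log-sum-exp(v/mu) -> max(vals) as mu -> 0
    v = np.asarray(vals)
    m = v.max()
    return m + mu * np.log(np.sum(np.exp((v - m) / mu)))

# Illustrative inner functions f_ij(x) = ||x - c_ij||^2.
centers = [[np.array([1.0, 0.0]), np.array([-1.0, 0.0])],
           [np.array([0.0, 2.0]), np.array([0.0, -2.0])]]

def smoothed_objective(x, mu):
    # Smooth the inner min over i, then the outer max over j.
    inner = [smooth_min([np.sum((x - c) ** 2) for c in row], mu)
             for row in centers]
    return smooth_max(inner, mu)

def num_grad(f, x, h=1e-6):
    # Central-difference gradient (a real implementation would
    # use the analytic softmax-weighted gradient).
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Steepest descent on the smoothed surrogate for a fixed mu.
x, mu, step = np.array([3.0, 3.0]), 0.1, 0.05
for _ in range(200):
    x = x - step * num_grad(lambda z: smoothed_objective(z, mu), x)
```

Note that `smooth_max` overestimates the true max and increases with mu, which is the monotonicity property the abstract refers to; composing it with `smooth_min` (an underestimate) destroys that monotonicity, motivating the paper's corrective term.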
Keywords: Approximation Function · Smoothing Technique · Minimax Problem · Smoothing Algorithm · Order Optimality Condition
Mayne, D.Q., Polak, E., Trahan, R.: An outer approximations algorithm for computer-aided design problems. J. Optim. Theory Appl. 28(3), 331–352 (1979)
Drezner, Z., Thisse, J.-F., Wesolowsky, G.O.: The minimax–min location problem. J. Reg. Sci. 26(1), 87–101 (1986)
Polak, E., Royset, J.O.: Algorithms for finite and semi-infinite min–max–min problems using adaptive smoothing techniques. J. Optim. Theory Appl. 119(3), 421–457 (2003)