Algorithmic Developments for Difficult Robust Discrete Optimization Problems

  • Panos Kouvelis
  • Gang Yu
Part of the Nonconvex Optimization and Its Applications book series (NOIA, volume 14)


In Chapter 4, several polynomially solvable cases of robust discrete optimization problems are discussed in detail. However, the results presented in Chapter 3 show that most robust discrete optimization problems are NP-hard. In this chapter, we present our approach for solving these difficult robust discrete optimization problems, restricting attention to problems whose equivalent single-scenario versions can be solved efficiently by a polynomial or pseudo-polynomial procedure. The solution procedures are based on branch-and-bound, with both upper and lower bounds generated by surrogate relaxation. To be exact, the upper bound (for a maximization problem) is obtained from the surrogate relaxation, and the lower bound is obtained as a by-product, via a heuristic based on the surrogate relaxation solution. When the input data satisfy the bounded percentage deviation condition (to be defined in Section 5.2), the heuristic is shown to provide a constant-factor approximation. Computational results in Section 5.3 demonstrate the effectiveness of the bounds and the solution procedure.
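As an illustration of this bounding scheme (the sketch below is not taken from the chapter), consider the max-min 0-1 knapsack problem, a robust discrete optimization problem whose single-scenario version is solvable in pseudo-polynomial time. For surrogate multipliers that are nonnegative and sum to one, aggregating the scenario objectives into a single weighted objective gives an ordinary knapsack problem whose optimal value is an upper bound on the max-min optimum, and the worst-scenario value of the surrogate solution gives a feasible lower bound. The Python code below is a minimal sketch under these assumptions; the uniform multipliers, function names, and toy instance are illustrative only.

```python
# Minimal illustrative sketch (not from the chapter): surrogate-relaxation
# bounds for a max-min 0-1 knapsack with scenario profits c[s][j], common
# item weights w[j], and capacity W. All names and data are hypothetical.

def knapsack_dp(profits, weights, capacity):
    """Solve a single-scenario 0-1 knapsack by pseudo-polynomial DP.
    Returns (optimal value, list of chosen item indices)."""
    n = len(profits)
    best = [0.0] * (capacity + 1)            # best[c] = best value with capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, weights[i] - 1, -1):
            cand = best[c - weights[i]] + profits[i]
            if cand > best[c]:
                best[c] = cand
                keep[i][c] = True            # item i used at this capacity level
    chosen, c = [], capacity                 # backtrack to recover the solution
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return best[capacity], chosen


def surrogate_bounds(scenario_profits, weights, capacity, mu=None):
    """Bounds for  max_x min_s sum_j c[s][j] x_j  s.t.  sum_j w_j x_j <= W.
    The surrogate relaxation aggregates scenarios with multipliers mu
    (mu >= 0, summing to 1); its optimum is an upper bound, and the
    worst-scenario value of its solution is a feasible lower bound."""
    S, n = len(scenario_profits), len(weights)
    if mu is None:
        mu = [1.0 / S] * S                   # uniform multipliers as a simple default
    surrogate = [sum(mu[s] * scenario_profits[s][j] for s in range(S))
                 for j in range(n)]
    upper, x = knapsack_dp(surrogate, weights, capacity)
    lower = min(sum(scenario_profits[s][j] for j in x) for s in range(S))
    return upper, lower, x


# Toy two-scenario instance: the surrogate optimum 14.5 bounds the max-min
# optimum from above, and the heuristic solution {1, 3} attains value 13.
profits = [[10, 7, 4, 9], [3, 8, 9, 5]]
weights = [4, 3, 2, 3]
ub, lb, items = surrogate_bounds(profits, weights, capacity=7)
print(ub, lb, sorted(items))                 # 14.5 13 [1, 3]
```

Within a branch-and-bound procedure of the kind described above, bounds like these would be recomputed at each node over the items that remain free, and the multipliers could be adjusted to tighten the surrogate upper bound.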


Keywords: Algorithmic Development, Knapsack Problem, Shortest Path Problem, Resource Allocation Problem, Greedy Heuristic





Copyright information

© Springer Science+Business Media Dordrecht 1997

Authors and Affiliations

  • Panos Kouvelis, Olin School of Business, Washington University in St. Louis, St. Louis, USA
  • Gang Yu, Center for Cybernetic Studies, The University of Texas, Austin, USA
