Abstract
In this paper we present a scaling algorithm for minimizing arbitrary functions over vertices of polytopes in an oracle model of computation which includes an augmentation oracle. For the binary case, when the vertices are 0–1 vectors, we show that the oracle time is polynomial. Also, this algorithm allows us to generalize some concepts of combinatorial optimization concerning performance bounds of greedy algorithms and leads to new bounds for the complexity of the simplex method.
1 Introduction and formulation of the main result
We consider the following optimization problem:
\(\min f(x) \quad \text{subject to} \quad x\in S, \qquad (1.1)\)
where f is a function and S is a finite subset of rational vectors in \(\mathbb {R}^n.\) We assume that f can be accessed via an oracle which uses a black-box data structure to compute f. For instance, in the linear case, this structure can contain, e.g., the coefficient vector. We will say that f is polynomially computable if there is an algorithm (an evaluation oracle) such that both the number of arithmetic operations and the binary sizes of the numbers in the course of that algorithm are bounded by a polynomial in the binary size of that data structure and in the binary size of the argument. In this case, the binary size of f(x) is also polynomially bounded in the mentioned sense for every \(x\in S.\)
Let OPT denote the optimal value. A feasible solution x is an \(\varepsilon \)-approximate solution to the above problem if
\(f(x) \le OPT + \varepsilon .\)
We assume that f belongs to a class \(\mathcal {C}\) of functions defined on S which is closed under additions of linear functions, i.e., if \(f\in \mathcal {C}\) and h is linear over S, then \(f+h\in \mathcal {C}.\) An algorithm computing g(x) for every \(g\in \mathcal {C}\) and \(x\in S\) is further called an evaluation oracle.
Also, we assume that we have an augmentation oracle which, given \(x\in S\) and \(g\in \mathcal {C},\) either returns \(y\in S\) such that \(g(y) < g(x),\) or correctly decides that x minimizes g over S.
Further we consider a model of computation including arithmetic operations, additions of linear functions to functions of \(\mathcal {C},\) and calls to the augmentation oracle. By an oracle running time or oracle complexity of an algorithm we understand the number of calls to the augmentation oracle performed by this algorithm for a given input. The access to solutions in S is only possible via the augmentation oracle, i.e., it is only possible to check whether \(x\in S\) is optimal for a given function \(g\in \mathcal {C}\) and, if this is not the case, to get another solution in S with a lower value of g. Also, there is no direct access to the evaluation oracle, which is used exclusively inside the augmentation oracle. That is, in this model of computation we are basically interested in the number of calls to the augmentation oracle and put aside any details related to the arithmetic model of computation.
In the subsequent sections we present a general scaling framework where the main idea is to perform a perturbation of the objective function by adding a linear function multiplied by a scaling parameter. The linear function which is to be added at each iteration depends on the current solution. The scaling parameter decreases by a factor of two whenever the current solution is optimal for the perturbed objective function, which is verified by means of the augmentation oracle. An advantage of this scaling technique is that it is applicable to arbitrary objective functions, in contrast to methods based on bit-scaling like that proposed by Schulz et al. [11], which are restricted to linear objective functions.
The choice of a linear function for perturbation of the original objective function depends on the problem in question. We will consider two cases; in the first case, S is a set of binary vectors and in the second case S is the vertex set of a polytope defined by a system of linear equations and nonnegativity constraints.
If S is a set of binary vectors, our algorithm runs in polynomial oracle time. The previous results on the oracle complexity of 0–1 problems focused on linear problems. Schulz et al. [11] proved that a linear 0–1 problem can be solved in a polynomial number of calls to an augmentation oracle. Chubanov [2] proposed a variant of the simplex method for integer linear problems given by verification oracles, i.e., by the oracles which only verify whether a given solution is optimal and which do not necessarily return an improvement direction if the given solution is not optimal. This variant of the simplex method visits O(n) vertices of the convex hull of the feasible set in the 0–1 case and O(nq) vertices in the more general case where the feasible set is contained in \([0,q]^n.\) Orlin et al. [10] considered the case when local optimality does not necessarily imply global optimality and proposed a fully polynomial time approximation scheme for finding an \(\varepsilon \)-local optimal solution, i.e., one whose objective value is within a factor of \(1+\varepsilon \) of the minimum objective value in the respective neighborhood of this solution. In the present paper we consider only the case when local optimality means global optimality; so the existence of an FPTAS in the sense of the concept of \(\varepsilon \)-local optimality introduced in [10] seems to be an open question in the nonlinear case.
For the well-known polynomially solvable cases like the assignment problem and the minimum-weight spanning tree problem, as well as the problem of finding a maximum-weight independent set of a matroid, for which simple augmentation oracles are known, our scaling algorithm represents an alternative polynomial-time algorithmic approach. The main advantage of our algorithm is that its iterations are easy to parallelize because any improvement of the current objective function (which can differ from the objective function of the problem in question) is sufficient at each iteration, in contrast to greedy algorithms, which require the best local improvement.
On the other hand, our algorithm implies a new complexity bound for the simplex method. For the polytopes of the form \(P = \{x\in \mathbb {R}^n:Ax = b, x\ge \mathbf{0}\},\) where A is a matrix and b is a vector of the respective dimension (\(\mathbf{0}\) denotes a zero vector), we show that there is a variant of the simplex method which solves a linear problem over P by visiting a number of vertices which is polynomially bounded in certain parameters related to the problem, in particular in the ratio between the largest and the smallest nonzero elements of a basic feasible solution. It should be noted that our bound depends on the dimension, i.e., on the number of variables, only logarithmically; the respective bound on the diameters of polytopes is presented in Corollary 5.1. Our bound can be viewed as an improvement of the bound obtained recently by Kitahara and Mizuno [7], which depends linearly on the dimension.
Another important consequence of our scaling algorithm is that any greedy algorithm for binary optimization, with the property that it is able to find an optimal solution for any function in the class \(\mathcal {C},\) must run in polynomial oracle time. Moreover, the greedy algorithm runs in strongly polynomial oracle time for the case of a linear objective function. To prove this, we use our scaling algorithm.
It is not clear if our algorithm can be generalized for the case when S is a finite subset of integer vectors which are not necessarily vertices of a polytope. In particular, the question is, whether there is an algorithm for (1.1) with pseudopolynomial oracle running time when S is a finite subset of integer vectors. More precisely, assuming that S is a set of nonnegative integer vectors such that \(x \le u\) for some integer vector u, the question is whether there exists an algorithm for finding an \(\varepsilon \)approximate solution for (1.1) whose oracle complexity is polynomial exclusively in n, \(\Vert u\Vert _\infty \) and \(\log \frac{1}{\varepsilon }.\) A special case of (1.1) where the objective function is linear and \(S = \{x \in \mathbb {Z}^n: Ax = b, \mathbf{0} \le x \le u\}\) was studied by De Loera, Hemmecke, and Lee [3]; they developed a pseudopolynomial oracle algorithm for this case.
In this paper, we do not consider any concrete class of nonlinear functions which would be closed under additions of linear functions and permit an efficient augmentation oracle. This question is very nontrivial and should be studied separately.
2 Scaling algorithm
Let us consider a family of functions
\(g_{y,\delta }(x) = f(x) + \delta \,c_y^T(x - y), \qquad (2.1)\)
parameterized by \(y\in S\) and \(\delta \in \mathbb {R}_+,\) where \(c_y\in \mathbb {R}^n\) is a vector depending on y such that
\(c_y^T(x - y) > 0 \quad \text{for all } x\in S,\ x\ne y. \qquad (2.2)\)
Note that, according to (2.2), \(c_y\) defines a linear function which is uniquely minimized by y. If all elements of S are vertices of the convex hull of S, then such \(c_y\) exists for every \(y\in S.\) In the next sections we consider different special cases where a suitable \(c_y\) is easy to find. Denote
\(\mu ^+ = \max _{y\in S}\ \max _{x\in S,\,x\ne y} c_y^T(x - y), \qquad \mu ^- = \min _{y\in S}\ \min _{x\in S,\,x\ne y} c_y^T(x - y). \qquad (2.3)\)
These values are positive according to our assumptions about \(c_y.\)
Lemma 2.1
The following statements are true:

(i)
If y is optimal for \(g_{y,\delta },\) then y is a \((\mu ^+\delta )\)-approximate solution of (1.1).

(ii)
If \(x\ne y\) and \(g_{y,\delta }(y) \ge g_{y,\delta }(x),\) then \(f(x) \le f(y) - \mu ^-\delta .\)
Proof
Proof of (i): Let \(x^*\) be an optimal solution of (1.1). Then,
\(f(y) = g_{y,\delta }(y) \le g_{y,\delta }(x^*) = f(x^*) + \delta \,c_y^T(x^* - y) \le OPT + \mu ^+\delta .\)
Proof of (ii):
\(f(y) = g_{y,\delta }(y) \ge g_{y,\delta }(x) = f(x) + \delta \,c_y^T(x - y) \ge f(x) + \mu ^-\delta .\)
This implies (ii). \(\square \)
Lemma 2.1 naturally suggests the following algorithm, which we further call the scaling algorithm:
Input: A family of functions (2.1) and an initial solution \(x^0\in S.\)
Output: \(x^*\in S\) optimal for (1.1).

1.
Set \(k:=0\) and \(\delta := 1.\) While \(x^0\) is not optimal for \(g_{x^0,\delta },\) set \(\delta := 2\delta .\)

2.
Call the augmentation oracle for f and \(x^k;\) if \(x^k\) is optimal for f, then return \(x^*:= x^k.\)

3.
Call the augmentation oracle to find \(x^{k+1}\in S\) with \(g_{x^k,\delta }(x^{k+1}) < g_{x^k,\delta }(x^k)\) or prove that \(x^k\) is optimal for \(g_{x^k,\delta }.\)

4.
If \(x^k\) is optimal for \(g_{x^k,\delta },\) then set \(\delta := \delta /2\) and \(x^{k+1} := x^k.\)

5.
Set \(k := k + 1\) and go to Step 2.
Further, we refer to repetitions of steps 2–5 as iterations of the scaling algorithm.
At each iteration, the scaling algorithm tries to improve the current value of function \(g_{x^k,\delta }\) associated with the current solution. This function belongs to \(\mathcal {C}\) because it is obtained from \(f\in \mathcal {C}\) by adding a linear function and \(\mathcal {C}\) is closed under additions of linear functions. So it is important to keep in mind that the augmentation oracle should be applicable to every function in \(\mathcal {C}.\)
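The whole procedure can be sketched in a few lines of Python. This is our own sketch, not code from the paper: the set S is given explicitly, the augmentation oracle is simulated by brute force over S (viable only because the example S is tiny), and `c_of(y)` is a hypothetical helper returning the vector \(c_y.\)

```python
from fractions import Fraction

def scaling_algorithm(S, f, c_of):
    """Sketch of the scaling algorithm over an explicit finite set S.
    f is the objective; c_of(y) returns a vector c_y with c_y^T (x - y) > 0
    for all x in S, x != y."""
    def g(y, delta):
        cy = c_of(y)
        # perturbed objective g_{y,delta}(x) = f(x) + delta * c_y^T (x - y)
        return lambda x: f(x) + delta * sum(
            c * (xi - yi) for c, xi, yi in zip(cy, x, y))

    def augment(x, obj):
        # brute-force stand-in for the augmentation oracle:
        # return any strictly better solution, or None if x minimizes obj
        for y in S:
            if obj(y) < obj(x):
                return y
        return None

    x, delta = S[0], Fraction(1)
    while augment(x, g(x, delta)) is not None:   # Step 1: double delta
        delta *= 2
    while True:
        if augment(x, f) is None:                # Step 2: x optimal for f
            return x
        y = augment(x, g(x, delta))              # Step 3: try to improve
        if y is None:
            delta /= 2                           # Step 4: halve delta
        else:
            x = y                                # Step 5: move to y
```

With \(S = \{0,1\}^3\) and \(c_y = (-\mathbf{1})^y\) as in Sect. 3, the sketch returns the minimizer of, e.g., a linear objective.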
Step 1 sets \(\delta \) to 1 and then multiplies it by 2 in the while-loop until \(x^0\) becomes optimal for \(g_{x^0,\delta }.\) The while-loop of Step 1 runs in oracle time
\(O\!\left(\log \max \left\{1,\ \frac{f(x^0) - OPT}{\mu ^-}\right\}\right) \qquad (2.4)\)
because if \(x^0\) is not optimal for \(g_{x^0,\delta }\) then \(f(x^0)\) can be improved by at least \(\mu ^-\delta \) (which follows from (ii) of Lemma 2.1) and \(f(x^0) - f(x) \le f(x^0) - OPT\) for any \(x\in S.\) Thus, \(\delta \) is either 1 after Step 1, in which case \(x^0\) is optimal for \(g_{x^0,1},\) or \(\delta > 1.\) In the latter case, after Step 1, for a solution x which is optimal for \(g_{x^0,\delta /2},\) we have
\(f(x^0) - OPT \ge f(x^0) - f(x) \ge \mu ^-\delta /2.\)
Therefore, the initial value \(\delta \) constructed in Step 1 satisfies
\(\delta \le \max \left\{1,\ \frac{2\,(f(x^0) - OPT)}{\mu ^-}\right\}. \qquad (2.5)\)
Let \(\gamma > 0\) with \(\gamma \le \mu ^+\) be a strict lower bound on the gap between two different objective values that can be taken at feasible solutions:
\(\gamma < |f(x) - f(y)| \quad \text{for all } x, y\in S \text{ with } f(x)\ne f(y).\)
Theorem 2.1
The scaling algorithm finds an optimal solution in oracle time
\(O\!\left(\frac{\mu ^+}{\mu ^-}\,\log \frac{\mu ^+\,(f(x^0) - OPT)}{\gamma \,\mu ^-}\right). \qquad (2.6)\)
Proof
As already mentioned, Step 1 requires a number of oracle calls bounded by (2.4).
The first division of \(\delta \) occurs at exactly the first iteration because \(x^0\) is optimal for \(g_{x^0,\delta }\) after Step 1.
The number of iterations between two consecutive divisions of \(\delta \) is bounded by \(O(\mu ^+/\mu ^-)\) due to Lemma 2.1. Indeed, let \(x^k\) be optimal for \(g_{x^k,\delta }.\) Then (i) of Lemma 2.1 implies that \(x^k\) is \((\mu ^+\delta )\)-approximate for the current \(\delta .\) Denote the current \(\delta \) by \(\delta ^\prime .\) The new \(\delta \) is equal to \(\delta ^\prime /2\) according to Step 4. Statement (ii) of Lemma 2.1 implies that, in each iteration before the next division of \(\delta ,\) the objective value will be improved by at least \(\mu ^-\delta ^\prime /2.\) The number of such improvements is bounded by \(O(\mu ^+/\mu ^-)\) because \(x^k\) is \((\mu ^+\delta ^\prime )\)-approximate.
If \(\delta \) is divided by 2, the current solution \(x^k\) is \((\mu ^+\delta )\)-approximate for the current \(\delta .\) If \(\delta \) was less than \(\gamma /\mu ^+,\) this would mean that the current solution would be \(\gamma \)-approximate, i.e., optimal for f. In this case Step 2 would return an optimal solution and the algorithm would stop. It follows that \(\delta \) is still not less than \(\gamma /\mu ^+\) whenever a division of \(\delta \) occurs. Therefore, (2.5) implies that the number of divisions of \(\delta \) is bounded by the logarithm in (2.6).
Thus, we have proved that the scaling algorithm runs in oracle time bounded by (2.6). \(\square \)
3 Scaling algorithm for binary optimization
In this section, we assume that
\(S \subseteq \{0,1\}^n.\)
Let \(g_{y,\delta }\) be defined as
\(g_{y,\delta }(x) = f(x) + \delta \,\big((-\mathbf{1})^y\big)^T(x - y),\)
where \((-\mathbf{1})^y\) is a componentwise power. That is, the ith component of \((-\mathbf{1})^y\) is 1 if \(y_i = 0\) and it is equal to \(-1\) if \(y_i = 1.\) Note that \(g_{y,\delta }(y) = f(y).\) When restricted to 0–1 vectors x and y, this function has the following interpretation: when computing \(g_{y,\delta }(x),\) the value \(\delta \) is added to f(x) whenever \(x_i \ne y_i\) for an index \(i = 1,\ldots ,n.\) That is, one can say that we additionally pay \(\delta \) for each inversion of a binary digit when moving from y to x. So if x and y are 0–1 vectors and \(x\ne y,\) then
\(g_{y,\delta }(x) \ge f(x) + \delta .\)
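Written out in Python (the helper names are ours, not the paper's), the perturbed objective adds exactly \(\delta \) times the number of inverted bits:

```python
def g_binary(f, y, delta):
    """g_{y,delta}(x) = f(x) + delta * ((-1)^y)^T (x - y) for 0-1 vectors.
    The linear term counts delta once per bit in which x differs from y."""
    c = [1 if yi == 0 else -1 for yi in y]      # c_y = (-1)^y componentwise
    return lambda x: f(x) + delta * sum(ci * (xi - yi)
                                        for ci, xi, yi in zip(c, x, y))
```

For instance, at x differing from y in two bits, the penalty is \(2\delta ,\) while \(g_{y,\delta }(y) = f(y).\)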
Let m be an upper bound on the number of nonzeros in any feasible solution:
Theorem 3.1
If S is a set of 0–1 vectors, then the scaling algorithm runs in oracle time bounded by
\(O\!\left(m\,\log \frac{m\,(f(x^0) - OPT)}{\gamma }\right).\)
Proof
Note that \(\mu ^+ \le 2m\) and \(\mu ^- \ge 1\) because \(c_y = (-\mathbf{1})^{y},\) so that \(c_y^T(x - y)\) equals the Hamming distance between x and y; then apply Theorem 2.1. \(\square \)
It is not hard to see that the scaling algorithm is a polynomial-time method for such problems as minimum-cost spanning trees, the assignment problem and, more generally, for all combinatorial problems, with polynomially computable objective functions, for which there is a polynomial-time augmentation oracle for the class \(\mathcal {C}.\) The advantage of the scaling algorithm, for instance for the spanning tree problem, is the possibility of parallelization. From matroid theory it follows that a spanning tree is optimal if and only if a replacement of an edge of this tree by an edge not belonging to the tree does not lead to a better solution. An augmentation oracle in the course of the scaling algorithm does not need to find the best improvement; it only needs to find a better solution for the current objective function or to prove that the current solution is optimal for the current objective function. That is, we can decompose the neighborhood explored by the augmentation oracle according to the number of processors in a parallel model of computation, so that each processor would independently explore the respective subset of the neighborhood.
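A minimal sketch of this parallelization idea (with hypothetical names and a naively partitioned explicit neighborhood, not code from the paper): each worker independently scans its slice of the neighborhood, and the oracle returns the first improving solution that any worker finds.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_augment(x, neighborhood, g, workers=4):
    """Augmentation oracle: any y in the neighborhood with g(y) < g(x),
    or None if no neighbor improves g. The neighborhood is split into
    chunks explored independently in parallel."""
    chunks = [neighborhood[i::workers] for i in range(workers)]

    def scan(chunk):
        # each worker looks for any improvement in its own chunk
        for y in chunk:
            if g(y) < g(x):
                return y
        return None

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(scan, chunks):
            if result is not None:
                return result
    return None
```

Any improving neighbor suffices, so no coordination between workers is needed beyond collecting the first non-trivial answer.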
4 Greedy algorithm
Now we are going to prove that every greedy algorithm of a sufficiently general class must run in a polynomial oracle time. Let N be a mapping from S to \(2^S,\) the set of all subsets of S, such that \(x\in N(x)\) for all \(x\in S.\) In other words, N defines a neighborhood for each \(x\in S.\) Assume that N is defined so that the following optimality condition holds for any \(g\in \mathcal {C}:\)
\(x^* \in \arg \min _{x\in N(x^*)} g(x) \iff x^* \in \arg \min _{x\in S} g(x). \qquad (4.1)\)
This condition says that \(x^*\) is optimal for g if and only if \(x^*\) is locally optimal for g. For instance, if S represents the set of spanning trees of a graph, then N(x) can be defined as the set consisting of x and all spanning trees which can be obtained by means of removing an edge of the current tree x and adding another edge of the graph. (More generally, we can consider S representing the set of bases of a matroid.) A greedy algorithm based on such pairwise interchanges belongs to the class of greedy algorithms that we define below.
Consider the following greedy algorithm:
Input: Problem (1.1) and an initial solution \(x^0\in S.\)
Output: An optimal solution \(x^*\) to (1.1).

1.
\(k:=0.\)

2.
Find \(x^{k+1}\) in \(\arg \min _{x\in N(x^k)} f(x).\)

3.
\(k := k + 1.\)

4.
Repeat steps 2 and 3 until \(x^k \in \arg \min _{x\in N(x^k)} f(x).\)

5.
Return \(x^* := x^k.\)
In the context of the greedy algorithm, by an augmentation oracle we mean one which is able to minimize a given function \(g\in \mathcal {C}\) over the neighborhood N(x) of a given solution \(x\in S.\) To distinguish such oracles from a larger class of augmentation oracles we have already introduced, we call such oracles local optimization oracles. Whenever we are speaking about oracle running time of the greedy algorithm, we mean a model of computation with a local optimization oracle. Although the greedy algorithm does not use optimization of other functions than f in the neighborhood of the current solution at each iteration, condition (4.1) is important when estimating the running time:
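For concreteness, the greedy algorithm above can be sketched as follows (our naming; the local optimization oracle is a brute-force minimum over an explicitly given neighborhood):

```python
def greedy(f, N, x0):
    """Greedy algorithm: repeatedly move to a best solution in the
    neighborhood N(x) until the current solution is locally optimal.
    Under condition (4.1), the returned solution is globally optimal."""
    x = x0
    while True:
        best = min(N(x), key=f)        # local optimization oracle over N(x)
        if f(best) >= f(x):            # x is locally, hence globally, optimal
            return x
        x = best
```

For a linear objective over 0–1 vectors with the single-bit-flip neighborhood, condition (4.1) holds and the sketch reaches the global minimizer.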
Theorem 4.1
Let (4.1) hold for all \(g\in \mathcal {C}.\) Then, if S is a set of 0–1 vectors, the greedy algorithm runs in oracle time
\(O\!\left(m\,\log \frac{m\,(f(x^0) - OPT)}{\gamma }\right).\)
Proof
First, let us prove the following statement: If
\(f(x^k) - f(x^{k+1}) < \delta , \qquad (4.2)\)
then \(x^{k}\) minimizes \(g_{x^k,\delta }.\)
The statement can be proved in the following way. Assume the contrary, i.e., taking into account (4.1), that there is \(x\in N(x^k)\) with
\(g_{x^k,\delta }(x) < g_{x^k,\delta }(x^k) = f(x^k).\)
Then, since \(x^{k+1}\) minimizes f over \(N(x^k),\)
\(f(x^{k+1}) \le f(x) \le g_{x^k,\delta }(x) - \delta < f(x^k) - \delta .\)
It follows that
\(f(x^k) - f(x^{k+1}) > \delta ,\)
which contradicts (4.2).
The above statement implies that if (4.2) takes place, then \(x^k\) is a \(2m\delta \)-approximate solution. Indeed, using the fact that (4.2) implies that \(x^{k}\) is optimal for \(g_{x^k,\delta },\) we can write:
\(f(x^k) = g_{x^k,\delta }(x^k) \le g_{x^k,\delta }(x^*) \le f(x^*) + 2m\delta ,\)
where \(x^*\) is optimal for f. Let \(\delta \) be initialized exactly as in the scaling algorithm. Let \(\delta \) be divided by 2 whenever the improvement of the objective function in the greedy algorithm is smaller than \(\delta ,\) i.e., whenever (4.2) holds. The number of iterations between two consecutive divisions of \(\delta \) does not exceed 4m because if no division of \(\delta \) occurs then the current value of f is improved by at least \(\delta \) and a division of \(\delta \) implies that the current solution is \(2m\delta \)-approximate for the current \(\delta .\) Now, to estimate the number of divisions of \(\delta ,\) we observe that a \(\gamma \)-approximate solution is optimal, which implies the respective logarithmic bound on the number of divisions of \(\delta .\) \(\square \)
If m is much smaller than n, say, when n is exponential in m, the above theorem improves the bound O(n) for the diameter of 0,1-polytopes obtained by Naddef [9]:
Corollary 4.1
Let P be a 0,1-polytope where each vertex has no more than m nonzero components. Then, the diameter of P is bounded by \(O(m\log n).\)
Proof
Let w be a vertex of P and f be defined as \(f(x) = (\mathbf{1} - 2w)^Tx.\) This function is uniquely minimized by w. Let v be another vertex of P. Let S be the vertex set of P and, for each \(x\in S,\) N(x) consist of x and the vertices adjacent to x. The greedy algorithm starting at \(x^0 = v\) constructs a path from v to w in the 1-skeleton of P to minimize f over S. Now we apply Theorem 4.1 noting that \(\gamma \) can be chosen as \(\gamma = 1/2\) and \(f(x) \le n\) for all \(x\in S\) in this case. \(\square \)
Actually, the greedy algorithm runs in strongly polynomial oracle time provided that the objective function is linear:
Corollary 4.2
Let f be a linear function of the form
\(f(x) = c^Tx\)
for some integer vector c. Then, if S is a set of 0–1 vectors, the greedy algorithm runs in oracle time
\(O(mn\log n).\)
Proof
We now formulate a new problem such that \(\mu ^+,\) \(\mu ^-,\) and the objective values of feasible solutions will be of strongly polynomial size and the greedy algorithm will construct exactly the same sequence of solutions as when solving the original problem.
Let \(D_=\) be the subset of all solution pairs \((x,y)\in S\times S\) such that \(c^T(x-y) = 0\) and \(D_{<}\) be the subset of all \((x,y)\in S\times S\) such that \(c^T(x-y) < 0.\)
The integer vector c defining the objective function f is contained in the polyhedron
\(Q = \{z\in \mathbb {R}^n: z^T(x-y) = 0 \ \text{for all } (x,y)\in D_=,\ \ z^T(x-y) \le -1 \ \text{for all } (x,y)\in D_<\},\)
which means that Q is not empty. The coefficient matrix of the system of linear inequalities defining Q is a \(-1,0,1\)-matrix.
The absolute values of the subdeterminants of the coefficient matrix of the system of constraints defining Q are bounded by \({(2m)}^n\) because each row of this coefficient matrix contains no more than 2m nonzero entries. Consider a matrix obtained from the coefficient matrix by replacing one of the columns by the \(-1,0\)-vector of the right-hand sides of the constraints defining Q. The absolute values of the subdeterminants of this matrix are bounded by \({(2m+1)}^n.\) Standard linear algebra implies that a minimal face of Q contains a vector a such that \(\Vert a\Vert _\infty \le {(2m+1)}^n\) and \(\Vert a\Vert _\infty \ge 1/{(2m)^n}.\) For every \(x\in S\) and \(y\in S,\) we have \(a^Tx < a^Ty\) if and only if \(c^Tx < c^Ty.\) It follows that
\(\arg \min \nolimits _{y\in N(x)} a^Ty = \arg \min \nolimits _{y\in N(x)} c^Ty \quad \text{for all } x\in S. \qquad (4.3)\)
Let \(\bar{f}\) be defined as \(\bar{f}(x) = a^Tx.\) If f had a unique local optimum in N(x) for every x, then we could conclude immediately that the greedy algorithm would construct the same sequence of solutions when applied to \(\bar{f}.\) In the general case, we need to construct a new local optimization oracle which, under the same circumstances, outputs exactly the same result for \(\bar{f}\) as that for f. Thus, let \(\varphi :S \longrightarrow S\) be defined as follows: For each \(x\in S,\) let \(\varphi (x) := x\) if the original local optimization oracle decides that x is optimal for f and, otherwise, \(\varphi (x) := y,\) where y is a better solution returned by the original local optimization oracle with respect to f. Define a new local optimization oracle in the following way:
Input: \(x\in S\) and \(g\in \mathcal {C};\)
Output: \(y\in S\) such that \(g(y) < g(x)\) or a decision that x is optimal for g.

if \(g\ne \bar{f}\) then run the original oracle to minimize g over N(x);

else if \(\varphi (x) = x,\) then decide that x is optimal, else return \(\varphi (x).\)
Given \(x\in S\) and a linear function \(g\in \mathcal {C},\) the new oracle returns a solution in \(\arg \min _{y\in N(x)} g(y).\) Indeed, this is the case if \(g\ne \bar{f}.\) Otherwise, this is implied by (4.3).
Let the greedy algorithm use the new oracle and be applied to the new problem, where the objective is to minimize \(\bar{f}\) over \(x\in S.\) In this case, the greedy algorithm constructs exactly the same sequence of solutions as when applied to the original problem, with the original oracle. To estimate the oracle running time for the new problem, set \(\gamma = 1/(2{(2m)^n})\) and apply Theorem 4.1. Note that \(a^Tx^0 \le n(2m+1)^{n}\) and the optimal value of the new problem is not less than \(-n(2m+1)^n\) because \(|a^Tx| \le n\Vert a\Vert _\infty \) for all \(x\in S.\) Then Theorem 4.1 implies that the greedy algorithm runs in oracle time \(O(mn\log n)\) for the new problem. Since the sequence of solutions considered in this case by the greedy algorithm is exactly the same as that for the original problem, it follows that the greedy algorithm solves the original problem in the same oracle time using the original oracle. \(\square \)
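The oracle-wrapping construction in this proof can be sketched as follows (our names, not the paper's; `orig_oracle(x, g)` stands for the original local optimization oracle and is assumed to return None when x is optimal for g):

```python
def make_new_oracle(orig_oracle, f, f_bar, S):
    """Wrap the original local optimization oracle so that queries for the
    rescaled objective f_bar are answered by replaying the oracle's answers
    for the original objective f (the map phi in the proof)."""
    phi = {x: orig_oracle(x, f) for x in S}   # record the oracle's answers on f

    def new_oracle(x, g):
        if g is not f_bar:                    # any other function: pass through
            return orig_oracle(x, g)
        return phi[x]                         # None means: x is optimal
    return new_oracle
```

Since f and \(\bar{f}\) order S identically, replaying the answers for f is a valid local optimization oracle for \(\bar{f}.\)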
Remark. It should be emphasized that Corollary 4.2 does not assume any modification of the objective function, i.e., the greedy algorithm is applied to the original objective function, in contrast to the method of Frank and Tardos [5] which relies on simultaneous Diophantine approximation. Schulz et al. [11] also apply simultaneous Diophantine approximation at a preprocessing step.
If P is a nondegenerate 0,1-polytope that is the set of feasible solutions of a linear program in standard form, then the above complexity bounds hold true for the number of vertices visited by the simplex method equipped with Dantzig’s best improvement rule. In this case, the bound of Corollary 4.2 follows from the analysis of the simplex method proposed by Kitahara and Mizuno [6]. In the next section, we will consider a variant of the simplex method, for the general case, which can use any pivot rule modified so that the simplex method using it becomes our scaling algorithm.
5 Optimization of functions over vertex sets of polytopes
Let the following system define a polytope P:
\(Ax = b, \quad x \ge \mathbf{0}, \qquad (5.1)\)
where \(A\in \mathbb {Z}^{m\times n}\) and \(b\in \mathbb {Z}^m.\)
In this section, S is assumed to be the vertex set of the polytope P.
Let Z(x) denote the index set of zero components of a vector x and \(\mathbf{1}_J\) denote the characteristic vector of a set J.
Let \(g_{y,\delta }\) be defined in the following way:
\(g_{y,\delta }(x) = f(x) + \delta \,\mathbf{1}_{Z(y)}^T(x - y) = f(x) + \delta \sum _{i\in Z(y)} x_i. \qquad (5.2)\)
Such a function has the following interpretation: when moving from y to x, we additionally pay \(\delta \cdot x_i\) for each component \(x_i\) such that \(y_i = 0.\) Since \(x\ge \mathbf{0}\) for all x in P, the value added to f(x) is nonnegative:
\(\mathbf{1}_{Z(y)}^T(x - y) = \sum _{i\in Z(y)} x_i \ge 0. \qquad (5.3)\)
At this stage, in order to apply our scaling algorithm, we should still prove that (2.2) takes place for \(c_y = \mathbf{1}_{Z(y)},\) i.e., that the added value is positive if \(x\ne y.\)
Let \(\alpha > 0\) be a lower bound and \(\beta > 0\) be an upper bound on the nonzero components of the vertices of P, the polytope defined by (5.1).
Observation 5.1
The following statement is true:
\(\mathbf{1}_{Z(y)}^T(x - y) \ge \alpha \quad \text{for all } y\in S \text{ and } x\in S \text{ with } x\ne y. \qquad (5.4)\)
Proof
If y is a vertex of P, then the system
\(Ax = b, \quad x_i = 0 \ \text{for all } i\in Z(y), \qquad (5.5)\)
uniquely identifies y, i.e., the only solution of (5.5) is y. Then for each \(x\in S,\) \(x\ne y,\) there exists \(i\in Z(y)\) with \(x_i \ge \alpha .\) Since x is nonnegative, (5.3) implies (5.4). \(\square \)
Thus, by the above observation we have the property (2.2) and can apply the scaling algorithm.
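As a toy check of Observation 5.1 (our example, not from the paper): for the standard simplex \(\{x\ge \mathbf{0}: \mathbf{1}^Tx = 1\},\) whose vertices are the unit vectors, we have \(\alpha = \beta = 1,\) and the perturbation term \(c_y^T(x-y)\) with \(c_y = \mathbf{1}_{Z(y)}\) is at least \(\alpha \) for every pair of distinct vertices:

```python
def perturbation_term(x, y):
    """c_y^T (x - y) with c_y = 1_{Z(y)}: the sum of the components of x
    over the zero set Z(y) of y (the term 1_{Z(y)}^T y vanishes)."""
    return sum(xi for xi, yi in zip(x, y) if yi == 0)

# vertices of the standard simplex in R^3: the unit vectors
vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
alpha = 1  # smallest nonzero component of any vertex
```

The term vanishes at \(x = y\) and is bounded below by \(\alpha \) otherwise, which is exactly property (2.2) for this choice of \(c_y.\)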
Theorem 5.1
If S is the vertex set of P, then the scaling algorithm solves the problem (1.1) in oracle time
\(O\!\left(\frac{m\beta }{\alpha }\,\log \frac{m\beta \,(f(x^0) - OPT)}{\gamma \,\alpha }\right), \qquad (5.6)\)
where m is the number of rows of A.
Proof
Indeed, noting that \(\mu ^+ \le m\beta \) and \(\mu ^- \ge \alpha ,\) we apply Theorem 2.1. \(\square \)
Let \(\Delta \) be an upper bound on the absolute values of the subdeterminants of A.
Theorem 5.2
Let \(c\in \mathbb {Z}^n.\) There exists a variant of the simplex method which solves the linear program
\(\min \{c^Tx : Ax = b,\ x\ge \mathbf{0}\}\)
by visiting at most
\(O\!\left(\frac{m\beta }{\alpha }\,\log \frac{m\beta ^2\Vert c\Vert _1\Delta ^2}{\alpha }\right) \qquad (5.7)\)
vertices of P.
Proof
For every vertex \(x\in S,\) let N(x) denote the set consisting of x and the vertices of P adjacent to x. For any linear objective function, a vertex x is optimal if and only if it is optimal in N(x). Let the augmentation oracle be an algorithm which either proves that x is optimal in N(x) or delivers a solution in N(x) which is better than x for a given linear objective function. Such an augmentation oracle can e.g. be based on any anticycling pivot rule according to which an entering variable has a negative reduced cost. Then, the augmentation oracle can take the following form: Given \(x\in S\) and \(g\in \mathcal {C},\) choose a basis defining x and use the pivot rule until a new basic feasible solution is constructed or it is proved that x is optimal for g. (The polytope P can be degenerate and the pivot rule may need more than one iteration to reach another vertex.) With such an oracle, the scaling algorithm becomes a variant of the simplex method. During each call to the oracle, the pivot rule can start with any basis defining the given vertex x or use the current basis obtained at the previous call to the oracle (in this case the scaling algorithm should store the respective basis after each call to the oracle).
Let x and y, \(x\ne y,\) be adjacent vertices of P. By Observation 5.1,
\(\mathbf{1}_{Z(y)}^T(x - y) \ge \alpha .\)
It is clear that \(c^Tx^0\le \beta \Vert c\Vert _1\) and \(OPT \ge -\beta \Vert c\Vert _1.\) Note that \(\gamma =\frac{1}{2\Delta ^2}\) is a strict lower bound on \(|c^T(x-y)|\) where \(x,y\in S\) and \(c^Tx \ne c^Ty.\) Then Theorem 5.1 implies the required complexity bound. \(\square \)
The coefficient of the logarithmic term in the bound (5.7) does not depend on the dimension. That is, our approach can be useful when developing column generation algorithms, when the number of columns of A is much larger than the number of rows of A.
Obviously, the above complexity bound implies the following bound on diameters of polytopes:
Corollary 5.1
The diameter of P is bounded by
\(O\!\left(\frac{m\beta }{\alpha }\,\log \frac{mn\beta ^2\Delta ^2}{\alpha }\right). \qquad (5.8)\)
Proof
Consider a vertex \(x^1\) of P. This vertex uniquely minimizes the linear function \(\mathbf{1}^T_{Z(x^1)}x\) over P. Now consider the problem of minimizing this function over the vertex set of P. The objective value at \(x^1\) is zero while the objective values of the other vertices are not less than \(\alpha ,\) which follows from (5.4). The length of the path found by the variant of the simplex method described in the proof of Theorem 5.2, when starting from another vertex \(x^0,\) is bounded by (5.8) according to (5.7). The other end of the path is \(x^1\) because \(x^1\) is the unique optimal solution for the linear function \(\mathbf{1}^T_{Z(x^1)}x.\) Thus, we obtain the bound (5.8). \(\square \)
The bounds (5.7) and (5.8) depend on the dimension at most logarithmically (to see this for (5.7), observe that \(\Vert c\Vert _1\le n\Vert c\Vert _\infty \)). Recently, Kitahara and Mizuno [7] proposed a bound of the form \(O\left( nm\frac{\beta }{\alpha }\log \frac{\beta }{\alpha }\right) ,\) which depends linearly on the dimension. Bonifas et al. [1] proposed a bound of \(O(\Delta ^2n^{3.5} \log (n\Delta ))\) for polytopes in an n-dimensional space, where \(\Delta \) is an upper bound on the absolute values of the subdeterminants of the coefficient matrix of a system of linear inequalities defining the respective polytope. This bound does not depend on the number of inequalities, but the dependence of this bound on \(\Delta \) is at least quadratic. For totally unimodular coefficient matrices, this bound is strongly polynomial and substantially improves the bound obtained by Dyer and Frieze [4]. In our case, if A is totally unimodular, the estimate (5.8) can be competitive only if the ratio \(\beta /\alpha \) is bounded by a polynomial of n whose degree is not so high.
6 Conclusions
We have proved that augmentation is polynomially equivalent to optimization for arbitrary functions over sets of binary vectors. Also, we have given complexity bounds for optimization of arbitrary functions over vertex sets of polytopes. As a consequence, we have obtained new bounds on diameters of polytopes and guaranteed bounds on the running time which can be achieved by the simplex method. For instance, the simple method introduced in this paper allows us to estimate the quality of the current solution in the course of column generation methods, the respective estimates depending only logarithmically on the number of columns of the coefficient matrix.
Change history
30 June 2021
A Correction to this paper has been published: https://doi.org/10.1007/s10107-021-01682-7
References
Bonifas, N., Di Summa, M., Eisenbrand, F., Hähnle, N., Niemeier, M.: On subdeterminants and the diameter of polyhedra. Discrete Comput. Geom. 52, 102–115 (2014)
Chubanov, S.: A generalized simplex method for integer problems given by verification oracles. Optimization Online (2016)
De Loera, J.A., Hemmecke, R., Lee, J.: On augmentation algorithms for linear and integerlinear programming: from Edmonds–Karp to Bland and beyond. SIAM J. Optim. 25, 2494–2511 (2015)
Dyer, M., Frieze, A.: Random walks, totally unimodular matrices, and a randomised dual simplex algorithm. Math. Program. 64, 1–16 (1994)
Frank, A., Tardos, É.: An application of simultaneous Diophantine approximation in combinatorial optimization. Combinatorica 7, 49–65 (1987)
Kitahara, T., Mizuno, Sh.: A proof by the simplex method for the diameter of a (0,1)polytope. Optimization Online (2011)
Kitahara, T., Mizuno, Sh.: A bound for the number of different basic solutions generated by the simplex method. Math. Program. 137, 579–586 (2013)
Kleinschmidt, P., Onn, S.: On the diameter of convex polytopes. Discrete Math. 102, 75–77 (1992)
Naddef, D.: The Hirsch conjecture is true for (0,1)polytopes. Math. Program. Ser. A 45, 109–110 (1989)
Orlin, J.B., Punnen, A.P., Schulz, A.S.: Approximate local search in combinatorial optimization. SIAM J. Comput. 33, 1201–1214 (2004)
Schulz, A.S., Weismantel, R., Ziegler, G.M.: 0/1integer programming: optimization and augmentation are equivalent. Lect. Notes Comput. Sci. 979, 473–483 (1995)
Schulz, A.S.: On the relative complexity of 15 problems related to 0/1integer programming. In: Cook, W.J., Lovász, L., Vygen, J. (eds.) Res. Trends Comb. Optim, vol. 19, pp. 399–428. Springer, Berlin (2009)
Acknowledgements
The author is thankful to the editors and two anonymous referees for their constructive comments and suggestions.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Chubanov, S.: A scaling algorithm for optimizing arbitrary functions over vertices of polytopes. Math. Program. 190, 89–102 (2021). https://doi.org/10.1007/s10107-020-01522-0
Keywords
 Nonlinear optimization
 Binary optimization
 Simplex method
 Diameter
 Scaling
 Greedy algorithm
Mathematics Subject Classification
 90C27 Combinatorial optimization