Abstract
In this paper, we consider a class of mixed integer weakly concave programming problems (MIWCPP) consisting of minimizing the difference of a quadratic function and a convex function. New necessary global optimality conditions for MIWCPP are presented. Based on these conditions, a new local optimization method for MIWCPP, different from traditional local optimization methods, is designed. A global optimization method is then proposed by combining some auxiliary functions with the new local optimization method. Numerical examples are presented to show that the proposed global optimization method for MIWCPP is efficient.
1 Introduction
It is well known that the sum of a quadratic function and a convex function is called a weakly convex function (see [1]); such functions have been investigated in many works, such as [1, 2]. In this paper, we consider a class of mixed integer weakly concave programming problems (MIWCPP) consisting of minimizing the difference of a quadratic function and a convex function.
Models of MIWCPP cover a large spectrum of global optimization problems, including mixed integer quadratic optimization problems and mixed integer concave minimization problems. Some problems in the layout design of integrated electronic circuits can be modeled as quadratic programs [3], and \(\{0,1\}\) quadratic problems also arise in combinatorial optimization [4], financial analysis problems [5], computer-aided design problems [6], and message traffic management [7]. For concave minimization problems, one can refer to [8–12].
As far as we know, mixed integer programming problems have many applications in areas such as engineering design, computational chemistry, computational biology, communications and finance, reliability networks, optimization of core reload patterns for nuclear reactors, and production planning. Floudas [13] gives an overview of many applications, including process synthesis, process design, process synthesis and design under uncertainty, molecular design, interaction of design, synthesis and control, process operations, facility location and allocation, facility planning, and scheduling and topology of transportation networks. Applications of mixed integer programming models in reliability optimization, chemical engineering, and core reload pattern design for nuclear reactors can be found in [14–16]. More applications can be found in [17–23]. Recently, \(\{0,1\}\) mixed integer programming problems have also been discussed by many researchers (see [24–27]).
Most approaches for mixed integer nonlinear programming problems are branch-and-bound, decomposition, and outer approximation methods (see [21, 24, 28–35]). Mixed integer programming problems are very difficult to solve due to the nonlinearity and the mixture of discrete and continuous variables. The existing algorithms are not polynomial; especially when the problem size is very large, these algorithms show their inadequacy, and we often do not even know whether the solutions found are global. Thus, global optimality conditions and other novel algorithms for mixed integer programming have become the focus of much research in recent years. Global optimality conditions characterizing global minimizers of quadratic minimization problems have been discussed in [36–39]. [40] discussed global optimality conditions for mixed integer quadratic programming problems, [41] discussed global optimality conditions for MIWCPP with only box or binary constraints, and [42] discussed global optimality conditions for MIWCPP with mixed integer constraints.
In this paper, we will further discuss some new necessary global optimality conditions for MIWCPP, which extend the results given in [41] and [42]. Then, we use these necessary global optimality conditions to develop a new local optimization method (LOM) for MIWCPP. Finally, a global optimization method for MIWCPP, designed based on the new LOM and some special auxiliary functions, is also proposed. The methods proposed in this paper differ substantially from existing methods. First, the LOM given in this paper is based on the necessary global optimality conditions of MIWCPP. Second, the global optimization method combines the new LOM with some auxiliary functions for MIWCPP. Numerical examples are presented to show that the proposed optimization methods for MIWCPP are efficient.
The layout of the paper is as follows. Some new necessary global optimality conditions for MIWCPP are proposed in Sect. 2. A new LOM based on the necessary global optimality conditions is developed in Sect. 3. A global optimization method based on auxiliary functions and the proposed new LOM follows in Sect. 4. Section 5 provides several examples to illustrate the efficiency of the given methods. Section 6 gives the conclusion of this paper.
2 Necessary Global Optimality Conditions for MIWCPP
Throughout this paper, the real line is denoted by \({\mathbb {R}}\) and the \(n\)-dimensional Euclidean space is denoted by \({\mathbb {R}}^{n}\). For vectors \(x,y\in {\mathbb {R}}^{n}\), \(x\geqslant y\) means that \( x_{i}\geqslant y_{i}\), for \(i=1,\cdots ,n\). The notation \(A\succeq B\) means \(A-B\) is positive semidefinite and \(A\preceq 0\) means \(-A\succeq 0\). A diagonal matrix with diagonal elements \(\alpha _{1},\cdots ,\alpha _{n}\) is denoted by \({{\mathrm {diag}}}(\alpha )\) or \({{\mathrm {diag}}}(\alpha _{1},\cdots ,\alpha _{n})\), where \(\alpha =(\alpha _{1},\cdots ,\alpha _{n})^{\mathrm{T}}\).
In this paper, we consider the following MIWCPP:
$$\begin{aligned} ({\text{MIWCPP}})\quad \min \ f(x)=\frac{1}{2}x^{\mathrm{T}} A x-g(x), \quad {\text{s.t.}} \ x\in S, \end{aligned}$$
where \(S\subset {\mathbb {R}}^n\) and \(S=\prod _{i=1}^nS_i\) with \(S_i=\{0,1\}, i=1,\cdots ,k\), and \(S_i=[0,1], i=k+1,\cdots ,n\); \( A=(a_{ij})_{n\times n} \in S^{n}\), where \(S^{n}\) is the set of all symmetric \(n\times n\) matrices; and \(g(x): {\mathbb {R}}^n \rightarrow {\mathbb {R}}\) is a continuously differentiable convex function on \({\mathbb {R}}^n\). For \(\bar{x}\in S\), let
where \(e_i\) is the \(i\)-th unit vector in \({\mathbb {R}}^n\), and
Theorem 2.1
(Necessary Global Optimality Conditions for MIWCPP) Let \(\bar{x} \in S\) and let \({{\mathrm{diag }}}(A)={{\mathrm{diag}}} (a_{11}, \cdots , a_{nn})\). If \(\bar{x}\) is a global minimizer of MIWCPP, then
$$\begin{aligned}{}[{\text{NC1}}]: \quad b_{\bar{x}_i}\leqslant \frac{a_{ii}}{2}, \text{ for } \text{ any } i=1,\cdots ,n, \quad \text{ and } \quad d_{\bar{x}_i}\leqslant 0, \text{ for } \text{ any } i=k+1,\cdots ,n. \end{aligned}$$
Proof
Let \(\bar{x}\) be a global minimizer of MIWCPP. Then
Let \(x:=(\bar{x}_1,\cdots , \bar{x}_{i-1}, x_i,\bar{x}_{i+1}, \cdots , \bar{x}_n)^{\mathrm{T}}\), where \(x_i\in S_i, i=1,\cdots ,n\). Then
For any \(i=1,\cdots ,n\), if \(\bar{x}_i\in \{0,1\}\), let \(x_i:=1-\bar{x}_i\), then \(x_i\in S_i\) and \( y^{(i)}=\bar{x}+(1-2\bar{x}_i)e_i\in S.\) It follows from (2.8) that
If \(\bar{x}_i\in (0,1)\), then (2.8) implies that \(\nabla h_i(\bar{x}_i)=(A\bar{x}-\nabla g(\bar{x}))_i=0\) since \(\bar{x}_i\) is a global minimizer of \(h_i(x_i)\) on \(S_i\). Therefore, \(b_{\bar{x}_i}=0\) and it follows from (2.8) that
Hence, we have that \( b_{\bar{x}_i} \leqslant \frac{a_{ii}}{2} \) for any \(i=1,\cdots , n\).
Moreover, for the case of \(k+1\leqslant i\leqslant n\), we break the discussion into three different cases:
\(1^\circ \) If \(\bar{x}_i=0\), then it follows from (2.8) that
$$\begin{aligned}&\frac{1}{2} a_{ii}(x_i-\bar{x}_i)+( A\bar{x}-\nabla g(\bar{x}))_{i}\geqslant 0, \text{ for } \text{ any } x_i\in (0,1]\\&\quad \Rightarrow ( A\bar{x}-\nabla g(\bar{x}))_{i} \geqslant 0 \quad ( \text{ let } x_i\rightarrow \bar{x}_i)\\&\quad \Leftrightarrow \widetilde{\bar{x}}_i ( A\bar{x}-\nabla g(\bar{x}))_{i} \leqslant 0, \text{ i.e., } d_{\bar{x}_i} \leqslant 0. \end{aligned}$$

\(2^\circ \) If \(\bar{x}_i=1\), then it follows from (2.8) that
$$\begin{aligned}&\frac{1}{2} a_{ii}(x_i-\bar{x}_i)+(A\bar{x}-\nabla g(\bar{x}))_{i}\leqslant 0, \text{ for } \text{ any } x_i\in [0,1)\\&\quad \Rightarrow (A\bar{x}-\nabla g(\bar{x}))_{i} \leqslant 0 \quad ( \text{ let } x_i\rightarrow \bar{x}_i) \\&\quad \Leftrightarrow \widetilde{\bar{x}}_i (A\bar{x}-\nabla g(\bar{x}))_{i} \leqslant 0, \text{ i.e., } d_{\bar{x}_i} \leqslant 0. \end{aligned}$$

\(3^\circ \) If \(0<\bar{x}_i<1\), then it follows from (2.8) that
$$\begin{aligned}&\left\{ \begin{array}{ll}\frac{1}{2} a_{ii}(x_i-\bar{x}_i)+( A\bar{x}-\nabla g(\bar{x}))_{i}\leqslant 0,&{} \text{ for } \text{ any } x_i\in [0,\bar{x}_i), \\ \frac{1}{2} a_{ii}(x_i-\bar{x}_i)+(A\bar{x}-\nabla g(\bar{x}))_{i}\geqslant 0,&{} \text{ for } \text{ any } x_i \in (\bar{x}_i,1]\end{array}\right. \\&\quad \Rightarrow ( A\bar{x}-\nabla g(\bar{x}))_{i} =0 \text{ and } a_{ii}\geqslant 0\\&\quad \Leftrightarrow \widetilde{\bar{x}}_i ( A\bar{x}-\nabla g(\bar{x}))_{i} \leqslant 0, \text{ i.e., } d_{\bar{x}_i} \leqslant 0. \end{aligned}$$
By the above discussion, we know that \([{\text{NC1}}]\) holds.
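As a concrete illustration of the objects involved, the objective \(f(x)=\frac{1}{2}x^{\mathrm{T}}Ax-g(x)\) and the feasible set \(S=\{0,1\}^k\times [0,1]^{n-k}\) can be sketched in code. This is only a sketch: the function names are ours, and \(g\) is passed in as an arbitrary callable.

```python
def miwcpp_objective(A, g, x):
    """f(x) = (1/2) x^T A x - g(x), with A a symmetric n x n matrix given
    as a list of lists and g a differentiable convex function (callable)."""
    n = len(x)
    quad = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return 0.5 * quad - g(x)

def is_feasible(x, k, tol=1e-12):
    """Check x in S = {0,1}^k x [0,1]^(n-k): the first k coordinates must be
    binary, the remaining ones must lie in [0,1]."""
    binary_ok = all(abs(x[i]) <= tol or abs(x[i] - 1.0) <= tol for i in range(k))
    box_ok = all(-tol <= x[i] <= 1.0 + tol for i in range(k, len(x)))
    return binary_ok and box_ok
```

For example, with \(n=2\), \(k=1\), \(A=2I\), and \(g(x)=x_1+x_2\), the point \((1,0.5)^{\mathrm{T}}\) is feasible and has objective value \(-0.25\).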
Remark 2.2
By the convexity of \(g(x)\), we have that
If \(\bar{x}_i\in \{0,1\}\), then \(\nabla g(\bar{x})^{\mathrm{T}}(y^{(i)}-\bar{x})=-\widetilde{\bar{x}}_i(\nabla g(\bar{x}))_i.\) So
where \([{\text{NC1}}]^*\) is the necessary global optimality condition given in [42] for MIWCPP with \(u_i=p_i=0\) and \(v_i=q_i=1\), and also the necessary condition given by Theorem 2.1 in [41] for MIWCPP with box constraints. So \([{\text{NC1}}]\) extends the condition \([{\text{NC1}}]^*\) of [42] (with \(u_i=p_i=0\) and \(v_i=q_i=1\)) and the conditions of [41] for MIWCPP with box constraints.
Corollary 2.3
Let \(\bar{x} \in S\), \(A\in {S^n}\). If \(k=n\), then \([{\text{NC1}}]\) is equivalent to
Proof
The proof is the same as that of Theorem 2.1.
Remark 2.4
By the convexity of \(g(x)\) and by \(k=n\), we have that
So
Note that the necessary global optimality condition \([{\text{BNC1}}]\) for MIWCPP with binary constraints given by Theorem 4.1 in [41] is
which, according to the proof of Theorem 4.1 in [41], should be corrected to \([{\text{NC2}}]^*\). So \([{\text{NC2}}]\) extends the condition \([{\text{NC2}}]^*\) given in [41] for MIWCPP with binary constraints.
Corollary 2.5
Let \(\bar{x} \in S, A\in {S^n}\) . If \(g(x)=a^{\mathrm{T}}x\) , then \([{\text{NC1}}]\) is equivalent to
where \({{\mathrm{diag }}}(\widetilde{A})={{\mathrm{diag}}} (\widetilde{a}_{11}, \cdots , \widetilde{a}_{nn}) \) and \(\widetilde{a}_{ii}:=\left\{\begin{array}{ll} a_{ii}, & \text{ if } i=1,\cdots ,k, \\ \min \{0,a_{ii}\}, & \text{ if } i=k+1,\cdots ,n.\end{array} \right. \) Moreover, if \(A\) is a diagonal matrix, then \(\bar{x}\) is a global minimizer of MIWCPP if and only if \([{\text{NC3}}]\) holds.
Proof
If \(g(x)=a^{\mathrm{T}} x\), then \(b_{\bar{x}_i}= \widetilde{\bar{x}}_i(A\bar{x}-a)_i, \text{ for } \text{ any } i=1,\cdots ,n\) and \(d_{\bar{x}_i}=\widetilde{\bar{x}}_i(A\bar{x}-a)_i, \text{ for } \text{ any } i=k+1,\cdots , n\). Hence, we can easily check that \([{\text{NC1}}]\) is equivalent to \([{\text{NC3}}]\).
Assume that \(A\) is a diagonal matrix, i.e., \(a_{st}=0\) for any \(s,t=1,2,\cdots ,n\) with \(s\ne t\). Hence,
if and only if
By the same argument as in the proof of Theorem 2.1, we can show that (2.9) holds if and only if \([{\text{NC3}}]\) holds. Hence, \(\bar{x}\) is a global minimizer of MIWCPP if and only if \([{\text{NC3}}]\) holds.
Note that the condition \([{\text{NC3}}]\) is just the necessary condition given by Theorem 3.7 in [40].
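In the special case \(k=n\) and \(g(x)=a^{\mathrm{T}}x\), the conditions above reduce to checking, for each coordinate, that flipping \(\bar{x}_i\) to \(1-\bar{x}_i\) does not decrease \(f\), which works out to \((2\bar{x}_i-1)(A\bar{x}-a)_i\leqslant a_{ii}/2\). A minimal numerical check of this single-flip inequality (function name is ours) might look as follows.

```python
def necessary_condition_binary(A, a, xbar):
    """Check (2*xbar_i - 1) * (A xbar - a)_i <= a_ii / 2 for every i.
    This is the single-flip condition behind [NC2]/[NC3] in the pure binary
    case k = n with g(x) = a^T x; A is a list of lists, a and xbar lists."""
    n = len(xbar)
    Ax = [sum(A[i][j] * xbar[j] for j in range(n)) for i in range(n)]
    return all((2 * xbar[i] - 1) * (Ax[i] - a[i]) <= A[i][i] / 2 + 1e-12
               for i in range(n))
```

For instance, for \(f(x)=x_1^2+x_2^2-3x_1\) on \(\{0,1\}^2\) (so \(A=2I\), \(a=(3,0)^{\mathrm{T}}\)), the global minimizer \((1,0)^{\mathrm{T}}\) passes the check while \((0,0)^{\mathrm{T}}\) fails it.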
In the following section, we will discuss a new local optimization method using the given necessary global optimality condition \([{\text{NC1}}]\).
3 A New Local Optimization Method
First, we consider the following general problem \(({\text{P}})\):
$$\begin{aligned} ({\text{P}})\quad \min \ f(x), \quad {\text{s.t.}} \ x\in S, \end{aligned}$$
where \(S=\prod _{i=1}^n S_i\) with \(S_i=\{0,1\}\) for \(i\in I\) and \(S_i=[0,1]\) for \(i\in \bar{I}\), \(f\) is a differentiable function on \(\prod _{i=1}^n[0,1]\), \(I=\{1,\cdots ,k\}\), and \(\bar{I}=\{k+1,\cdots ,n\}\).
For any \(\bar{x}=(\bar{x}_1,\cdots , \bar{x}_n)^{\mathrm{T}}\in S\), let
For \(\delta >0\), let
and let
Let
Obviously, \(|N_i(\bar{x})|=1\) for any \(i\in I_{\bar{x}}\), and \(N_\delta (\bar{x})\subset S\) if \(\delta <\delta (\bar{x})\), where \(|N_i(\bar{x})|\) denotes the number of elements in \(N_i(\bar{x})\).
Definition 3.1
Let \(\bar{x}\in S\). For \(\delta >0\) such that \(\delta \leqslant \delta (\bar{x})\), \(N_\delta (\bar{x})\) is said to be a neighborhood of \(\bar{x}\) with respect to \(S\).
Note that if \( I_{\bar{x}}\ne \emptyset \), then \(\bar{x}\notin N_\delta (\bar{x})\).
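The discrete part of this neighborhood can be enumerated directly: for each \(i\in I_{\bar{x}}\), \(N_i(\bar{x})\) is the singleton obtained by flipping the \(i\)-th coordinate. A sketch (the function name is ours; the \(\delta\)-ball on the continuous coordinates is omitted):

```python
def flip_neighbors(xbar, tol=1e-12):
    """Enumerate the points in N_i(xbar) for every active index
    i in I_xbar = {i : xbar_i (1 - xbar_i) = 0}: each such point is xbar
    with coordinate i flipped to 1 - xbar_i."""
    neighbors = []
    for i, xi in enumerate(xbar):
        if abs(xi * (1.0 - xi)) <= tol:  # i in I_xbar
            y = list(xbar)
            y[i] = 1.0 - xi
            neighbors.append(y)
    return neighbors
```

For \(\bar{x}=(1,0,0.5)^{\mathrm{T}}\), only the first two coordinates are active, so exactly two flip neighbors are produced.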
Definition 3.2
Let \(\bar{x}\in S\). If there exists a \(0<\delta \leqslant \delta (\bar{x})\) such that \(f(\bar{x})\leqslant f(x)\) (\(f(\bar{x})\geqslant f(x)\)) for any \(x\in N_\delta (\bar{x})\), then \(\bar{x}\) is said to be a local minimizer (maximizer) of problem \(({\text{P}})\); furthermore, if \(f(\bar{x})<f(x)\) (\(f(\bar{x})>f(x)\)) for any \(x\in N_\delta (\bar{x}){\setminus} \{\bar{x}\}\), then \(\bar{x}\) is said to be a strict local minimizer (maximizer) of problem \(({\text{P}})\).
Let
$$\begin{aligned} h(z):=f((\bar{x}_1,\cdots , \bar{x}_k, z_1,\cdots , z_{n-k})^{\mathrm{T}}), \quad S_{\bar{I}}:=\prod _{i\in \bar{I}} S_i=[0,1]^{n-k}. \end{aligned}$$
From [43], we have the following result for the KKT points of the problem \(\min _{z\in S_{\bar{I}}} h(z)\).
Lemma 3.3
[43] Let \(\bar{z} \in S_{\bar{I}}\). Then, \(\bar{z}\) is a KKT point of \(h(z)\) on \(S_{\bar{I}}\) if and only if
In the following, we define the KKT point for the mixed programming problem \(({\text{P}})\).
Definition 3.4
Let \(\bar{x}\in S\) and let \( \bar{z}:=(\bar{x}_{k+1},\cdots , \bar{x}_n)^{\mathrm{T}}\). \(\bar{x}\) is said to be a KKT point of problem \(({\text{P}})\) if
By Lemma 3.3 and Definition 3.4, we can obtain the following KKT condition for mixed problem \(({\text{P}})\).
Proposition 3.5
Let \(\bar{x}\in S\) . Then, \(\bar{x}\) is a KKT point of problem \(({\text{P}})\) if and only if the following condition \([{\text{KKTC}}]\) holds:
where \(\widetilde{\bar{x}}_i\) is defined by (2.2).
Proof
By the definition of KKT point, here we just need to prove that \(\bar{z} \text{ is } \text{ a } \text{ KKT } \text{ point } \text{ of } h(z) \text{ on } S_{\bar{I}}\) if and only if \(\widetilde{\bar{x}}_i( \nabla f(\bar{x}))_i\leqslant 0, \text{ for } \text{ any } i\in \bar{I}\). By Lemma 3.3, we know that \(\bar{z}\,\text{is}\,\text{a}\,\text{KKT} \text{ point } \text{ of } h(z) \text{ on } S_{\bar{I}}\) if and only if
We can easily verify that (3.8) is equivalent to \(\widetilde{\bar{x}}_i( \nabla f(\bar{x}))_i\leqslant 0, \text{for}\,\text{any} \,i\in \bar{I}\).
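Since the definition (2.2) of \(\widetilde{\bar{x}}_i\) enters only through the sign conditions used in the proof above, a numerical \([{\text{KKTC}}]\) check can test those conditions directly. A sketch under that reading (function names are ours; `grad_f` returns the gradient as a list):

```python
def is_kkt_point(f, grad_f, xbar, k, tol=1e-8):
    """Numerical check of [KKTC] at xbar in S = {0,1}^k x [0,1]^(n-k)
    (0-based coordinates).  Part (1) is [KKTC]_1: no flip of a coordinate
    with xbar_i in {0,1} decreases f.  Part (2) is the box-KKT sign test on
    the continuous coordinates, which the proof of Proposition 3.5 shows is
    equivalent to  tilde{xbar}_i (grad f(xbar))_i <= 0  for i in bar{I}."""
    fx = f(xbar)
    for i, xi in enumerate(xbar):                 # part (1): flips over I_xbar
        if abs(xi * (1.0 - xi)) <= tol:
            y = list(xbar)
            y[i] = 1.0 - xi
            if f(y) < fx - tol:
                return False
    g = grad_f(xbar)                              # part (2): continuous part
    for i in range(k, len(xbar)):
        if abs(xbar[i]) <= tol:
            ok = g[i] >= -tol                     # at 0: need (grad f)_i >= 0
        elif abs(xbar[i] - 1.0) <= tol:
            ok = g[i] <= tol                      # at 1: need (grad f)_i <= 0
        else:
            ok = abs(g[i]) <= tol                 # interior: need (grad f)_i = 0
        if not ok:
            return False
    return True
```

For example, with \(f(x)=x_1^2+(x_2-0.3)^2\), \(x_1\) binary and \(x_2\) continuous, the point \((0,0.3)^{\mathrm{T}}\) is a KKT point while \((0,0.8)^{\mathrm{T}}\) and \((1,0.3)^{\mathrm{T}}\) are not.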
Proposition 3.6
Let \(\bar{x}\in S\) . If \(\bar{x}\) is a local minimizer of problem \(({\text{P}})\) , then \(\bar{x}\) is a KKT point of problem \(({\text{P}})\).
Proof
By Definition 3.2, we know that \(\bar{x}\) is a local minimizer of problem \(({\text{P}})\) if and only if there exists a \(0<\delta \leqslant \delta (\bar{x})\) such that \(f(\bar{x})\leqslant f(x)\) for any \(x\in N_\delta (\bar{x})\). By \(\cup _{i\in I_{\bar{x}}} N_i(\bar{x}) \subset N_\delta (\bar{x})\), we know that
Moreover, \(f(\bar{x})\leqslant f(x)\) for any \(x\in N_\delta (\bar{x})\) implies that
(3.12) and (3.13) imply that \(\bar{x}\) is a KKT point of problem \(({\text{P}})\).
Proposition 3.7
Let \(\bar{x}\in S\) . Then, the following conditions hold:
\((i)\) If \(\bar{x}\) is a local minimizer of MIWCPP, then \([{\text{NC1}}]\) holds;

\((ii)\) If \([{\text{NC1}}]\) holds, then \(\bar{x}\) is a KKT point of MIWCPP.
Moreover, if for any \(i\in \{1,\cdots , n\}{\setminus} I_{\bar{x}}\) , \(a_{ii}\geqslant 0\) , then \(\bar{x}\) is a KKT point of MIWCPP if and only if \([{\text{NC1}}]\) holds.
Proof
\((i)\) If \(\bar{x}\) is a local minimizer of MIWCPP, then \(f(\bar{x})\leqslant \min \{f(x)\mid x\in \cup _{i\in I_{\bar{x}}} N_i(\bar{x})\}\). We can easily verify that
$$\begin{aligned} f(\bar{x})\leqslant \min \{f(x)\mid x\in \cup _{i\in I_{\bar{x}}} N_i(\bar{x})\}\Leftrightarrow b_{\bar{x}_i}\leqslant \frac{a_{ii}}{2}, \text{ for } \text{ any } i\in I_{\bar{x}}. \end{aligned}$$(3.14)
Moreover, if \(\bar{x}\) is a local minimizer of MIWCPP, then there exists a \(0<\delta \leqslant \delta (\bar{x})\) such that
which implies that
(3.14) and (3.15) imply that \([{\text{NC1}}]\) holds.
\((ii)\) If \([{\text{NC1}}]\) holds, then
$$\begin{aligned} b_{\bar{x}_i}\leqslant \frac{a_{ii}}{2}, \text{ for } \text{ any } i=1,\cdots ,n. \end{aligned}$$
By (3.14), we know that
By Proposition 3.5, we know that \(\bar{x}\) is a KKT point of MIWCPP.
Moreover, suppose that \(a_{ii}\geqslant 0\) for any \(i\in \{1,\cdots , n\}{\setminus} I_{\bar{x}}\) and that \(\bar{x}\) is a KKT point of MIWCPP. Since \(b_{\bar{x}_i}=0\) for any \(i\in \{1,\cdots ,n\} {\setminus} I_{\bar{x}}\) at a KKT point, we have \( b_{\bar{x}_i}\leqslant \frac{a_{ii}}{2} \) for any \(i\in \{1,\cdots ,n\} {\setminus} I_{\bar{x}}\). Hence, by (3.14) and Proposition 3.5, condition \([{\text{NC1}}]\) holds. Therefore, if \(a_{ii}\geqslant 0\) for any \(i\in \{1,\cdots , n\}{\setminus} I_{\bar{x}}\), then \(\bar{x}\) is a KKT point of MIWCPP if and only if \([{\text{NC1}}]\) holds.
By the condition \([{\text{KKTC}}]\) for problem \(({\text{P}})\), we can obtain the following LOM for problem \(({\text{P}})\).
Algorithm 3.1 Local Optimization Method (LOM)
Step 1. Take an initial point \(x_0=(x_0^1,\cdots , x_0^n)^{\mathrm{T}}\in S\). Let \(\bar{x}:=x_0\).

Step 2. Let \(I_{\bar{x}}:=\{i\mid \bar{x}_i(1-\bar{x}_i)=0\}\) and check whether the following condition holds.
$$\begin{aligned}{}[{\text{KKTC}}]_1: \quad f(\bar{x})\leqslant f(x), \text{ for } \text{ any } x\in \cup _{i\in I_{\bar{x}}} N_i(\bar{x}). \end{aligned}$$If \([{\text{KKTC}}]_1\) does not hold, goto Step 3; otherwise, check whether the following condition holds.
$$\begin{aligned}{}[{\text{KKTC}}]_2:\quad \widetilde{\bar{x}}_i( \nabla f(\bar{x}))_i\leqslant 0, \text{ for } \text{ any } i\in \bar{I}. \end{aligned}$$If \([{\text{KKTC}}]_2\) holds, goto Step 5, else goto Step 4.

Step 3. Let \(x^*=(x_1^*,\cdots , x_n^*)^{\mathrm{T}}:= {{\mathrm{argmin}}} \{f(x)\mid x\in \cup _{i\in I_{\bar{x}}} N_i(\bar{x})\}\) and let \(\bar{x}:= x^*\), goto Step 2.

Step 4. Let \(h(z):=f((\bar{x}_1,\cdots , \bar{x}_k, z_1,\cdots , z_{n-k}))\) and let \(z^*:=(z_1^*,\cdots , z_{n-k}^*)^{\mathrm{T}}\) be a local minimizer or a KKT point of \(h(z)\) on \( S_{\bar{I}}\) obtained by starting from \((\bar{x}_{k+1},\cdots , \bar{x}_n)^{\mathrm{T}}\). Let \(\bar{x}:=(\bar{x}_1,\cdots , \bar{x}_k, z_{1}^*, \cdots , z^*_{n-k})^{\mathrm{T}}\), goto Step 2.

Step 5. Stop. \(\bar{x}\) is a KKT point or a local minimizer of problem \(({\text{P}})\).
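The steps above can be sketched compactly in code, with a fixed-step projected-gradient loop standing in for the continuous local solver of Step 4 (the paper uses Quasi-Newton, DFP, and BFGS there); the function names and the step size are our own choices.

```python
def local_optimization_method(f, grad_f, x0, k, tol=1e-8, max_iter=2000):
    """Sketch of Algorithm 3.1 (LOM) on S = {0,1}^k x [0,1]^(n-k)."""
    x = list(x0)
    for _ in range(max_iter):
        # Steps 2-3: flip descent over I_x = {i : x_i (1 - x_i) = 0}
        improved = True
        while improved:
            improved = False
            fx = f(x)
            for i, xi in enumerate(x):
                if abs(xi * (1.0 - xi)) <= tol:
                    y = list(x)
                    y[i] = 1.0 - xi
                    if f(y) < fx - tol:            # [KKTC]_1 violated: move
                        x, fx = y, f(y)
                        improved = True
        # Step 4 (stand-in): one projected-gradient step on coords k,...,n-1
        g = grad_f(x)
        y = list(x)
        for i in range(k, len(x)):
            y[i] = min(1.0, max(0.0, y[i] - 0.01 * g[i]))
        if f(y) < f(x) - tol:
            x = y                                   # improved: back to Step 2
        else:
            return x                                # Step 5: stop
    return x
```

On \(f(x)=(x_1-1)^2+(x_2-0.25)^2\) with \(x_1\) binary and \(x_2\) continuous, starting from \((0,0.9)^{\mathrm{T}}\), the sketch first flips \(x_1\) to \(1\) and then drives \(x_2\) toward \(0.25\).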
Remark 3.1
\(1^\circ \) If \(f(x)=\frac{1}{2}x^{\mathrm{T}} A x-g(x)\), where \(g(x)\) is a differentiable convex function, i.e., if the problem \(({\text{P}})\) is MIWCPP, then the condition \([{\text{KKTC}}]_1\) can be replaced by the following condition \([{\text{NC1}}]_1\).
\(2^\circ \) In Step 4, we can use any local optimization method to obtain a local minimizer or KKT point of the smooth programming problem \(\min _{z\in S_{\bar{I}}} h(z)\) with only continuous variables, since any local minimizer of \(\min _{z\in S_{\bar{I}}} h(z)\) must be a KKT point. In the numerical examples in Sect. 5, we use the following three local optimization methods to solve \(\min _{z\in S_{\bar{I}}} h(z)\): Quasi-Newton, DFP, and BFGS.
Theorem 3.2
For a given initial point \(x_0\in S\), a local minimizer or a KKT point \(\bar{x}\) of problem \(({\text{P}})\) can be obtained in finitely many steps by the given LOM.
Proof
Here, we just need to prove that the Algorithm (LOM) needs only finitely many steps from Step 2 to Step 5, since a point \(\bar{x}\) can go to Step 5 from Step 2 only when it satisfies the conditions \([{\text{KKTC}}]_1\) and \([{\text{KKTC}}]_2\), i.e., \(\bar{x}\) is already a KKT point of problem \(({\text{P}})\). Suppose first that \(f(x)\) is not a constant function on \(\{0,1\}^n\). Let \(\eta :=\min \{|f(x)-f(y)|\mid {x,y\in \{0,1\}^n} \text{ and } f(x)\ne f(y)\}\), \(M:=\max \{f(x)\mid x\in \{0,1\}^n\}\) and \(m:=\min \{f(x)\mid x\in \{0,1\}^n\}\). Then, \(\eta >0\) since the set \(\{0,1\}^n\) is finite. We will show that a local minimizer or a KKT point \(\bar{x}\) of problem \(({\text{P}})\) starting from a given point \(x_0\) can be obtained in at most \(\frac{2(M-m)}{\eta }+1\) steps by the given LOM. If \(f(x)\) is a constant function on \(\{0,1\}^n\), then a local minimizer or a KKT point \(\bar{x}\) can be obtained in at most one step.
In fact, from Step 2 to Step 3, we have \(f(x^*)< f(\bar{x})\): Step 2 goes to Step 3 only when the condition \([{\text{KKTC}}]_1\) does not hold, i.e., there exist \(i_0\in I_{\bar{x}}\) and \(y_{i_0}=\bar{x}+(1-2\bar{x}_{i_0})e_{i_0}\) such that \(f(y_{i_0})< f(\bar{x})\), and \(f(x^*)\leqslant f(y_{i_0})\). Hence, Step 2 can go to Step 3 at most \(\frac{M-m}{\eta }\) times.
Moreover, Step 2 goes to Step 4 only when condition \([{\text{KKTC}}]_1\) holds but condition \([{\text{KKTC}}]_2\) does not. Since the point \(z^*\) obtained in Step 4 is a local minimizer or KKT point of the problem \(\min _{z\in S_{\bar{I}}} h(z)\), we have \(\widetilde{\bar{x}}_i( \nabla f(\bar{x}))_i\leqslant 0\) for any \(i\in \bar{I}\), where \(\bar{x}=(\bar{x}_1,\cdots , \bar{x}_k, z_{1}^*, \cdots , z^*_{n-k})^{\mathrm{T}}\); that is, condition \([{\text{KKTC}}]_2\) holds at this new point \(\bar{x}\). Hence, if condition \([{\text{KKTC}}]_1\) also holds at \(\bar{x}\), then Step 2 goes to Step 5, i.e., \(\bar{x}\) is the obtained local minimizer or KKT point of problem \(({\text{P}})\) and the algorithm stops; if condition \([{\text{KKTC}}]_1\) does not hold at \(\bar{x}\), then Step 2 must go to Step 3. Therefore, the number of passes through Step 4 is at most the number of passes through Step 3 plus one.
Hence, the total number of steps from Step 2 to Step 5 is at most \(\frac{2(M-m)}{\eta }+1\).
If \(f(x)\) is a constant function on \(\{0,1\}^n\), then condition \([{\text{KKTC}}]_1\) always holds, so we just need to check whether condition \([{\text{KKTC}}]_2\) holds. If \([{\text{KKTC}}]_2\) also holds, then \(\bar{x}\) is already a local minimizer or a KKT point of problem \(({\text{P}})\). If \([{\text{KKTC}}]_2\) does not hold, then one pass through Step 4 suffices, after which both \([{\text{KKTC}}]_1\) and \([{\text{KKTC}}]_2\) hold. Hence, at most one step is needed from Step 2 to Step 5.
Here, we have given an LOM for problem \(({\text{P}})\) (or MIWCPP) using the KKT condition \([{\text{KKTC}}]\) (or by the necessary condition \([{\text{NC1}}]\)). In the following, we will introduce a global optimization method using some auxiliary functions and the given LOM for problem \(({\text{P}})\) (or MIWCPP).
4 Global Optimization Method for MIWCPP
To introduce the global optimization method, first we will introduce the following auxiliary functions.
For any \(r>0\), let
and let
where \(r>0\) is a parameter and \(\bar{x}\) is the current local minimizer. Note that the functions \(f_r\) and \(g_r\) are just those given by (3.4) and (3.5) in [44]. Although the function \(F_{r, \bar{x}}(x)\) is similar to the function \(H_{q,e,x^*}(x)\) given in [44], MIWCPP is a mixed integer programming problem, and the definitions of local minimizer and KKT point given here differ from the usual ones; we therefore present the properties of \(F_{r, \bar{x}}(x)\), as follows.
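The exact \(g_r\) and \(f_r\) are those of [44] and are not reproduced here; the sketch below uses one continuous choice consistent with the two regimes exploited in the proofs: \(F_{r,\bar{x}}(x)=\frac{1}{\Vert x-\bar{x}\Vert ^2+1}\) when \(f(x)\geqslant f(\bar{x})\), and \(F_{r,\bar{x}}(x)=f(x)-f(\bar{x})+r\) when \(f(x)-f(\bar{x})\leqslant -r\). All names are ours.

```python
def aux_function(f, xbar, r):
    """Sketch of the auxiliary function F_{r,xbar}; g_r and f_r below are
    ASSUMED piecewise-linear stand-ins for the functions of [44], chosen
    only to match the two regimes used in Theorems 4.1-4.3."""
    fxbar = f(xbar)

    def g_r(t):  # 1 for t >= 0, 0 for t <= -r, linear in between
        return min(1.0, max(0.0, 1.0 + t / r))

    def f_r(t):  # 0 for t >= -r, t + r for t <= -r
        return min(0.0, t + r)

    def F(x):
        t = f(x) - fxbar
        d2 = sum((xi - bi) ** 2 for xi, bi in zip(x, xbar))
        return g_r(t) / (d2 + 1.0) + f_r(t)

    return F
```

With \(f(x)=2x_1\), \(\bar{x}=(1)\), and \(r=0.5\): at \(x=\bar{x}\) the value is \(1\) (the strict local maximum of Theorem 4.1), while at \(x=(0)\) we get \(f(x)-f(\bar{x})+r=-1.5\).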
Theorem 4.1
Suppose that \(\bar{x}\) is a local minimizer of MIWCPP. Then, \(\bar{x}\) is a strict local maximizer of \(F_{r,\bar{x}}(x)\) on \(S\) for any \(r>0\).
Proof
Since \(\bar{x}\) is a local minimizer of MIWCPP, there exists a \(\delta >0\) such that \(N_\delta (\bar{x})\subset S\) and
Hence, we have that
Hence, \(\bar{x}\) is a strict local maximizer of \(F_{r,\bar{x}}(x)\) on \(S\) for any \(r>0\).
Let \(x^* \) be the global minimizer of MIWCPP and let \(\beta :=f(\bar{x})-f(x^*)\).
Then, we have the following results:
Theorem 4.2
If \(\bar{x}\) is not a global minimizer of MIWCPP, i.e., \(\beta >0\), then \(x^*\) is a local minimizer of \(F_{r,\bar{x}}(x)\) on \(S\) when \(r\leqslant \beta \).
Proof
When \(r\leqslant \beta \), we have that \(f(x^*)-f(\bar{x})\leqslant -r\). Hence, \(F_{r,\bar{x}}(x^*)=f(x^*)-f(\bar{x})+r\leqslant 0\) and for any \(x\in S\),
Hence, \( F_{r,\bar{x}}(x^*) \leqslant F_{r,\bar{x}}(x)\) for all \(x\in S\), i.e., \(x^*\) is a global minimizer of \(F_{r,\bar{x}}(x)\) on \(S\). In particular, \(x^*\) is a local minimizer of \(F_{r,\bar{x}}(x)\) on \(S\) when \(r\leqslant \beta \).
Theorem 4.3
Any KKT point \( \widehat{x}\) of \(F_{r,\bar{x}}(x)\) on \(S\) satisfies one of the following conditions:
\(1^\circ \) \(f(\widehat{x})< f(\bar{x})\);

\(2^\circ \) \(\widehat{x}:=( \widehat{x}_1,\cdots , \widehat{x}_n )\), where \(\widehat{x}_i=\left\{\begin{array}{ll} 1-\bar{x}_i, & \text{ if } i\in I_{\bar{x}},\\ 0 \text{ or } 1 \text{ or } \bar{x}_i, & \text{ if } i\in (I\cup \bar{I}){\setminus} I_{\bar{x}}. \end{array}\right. \)
Proof
First, we prove that if \(\widehat{x}\) is a KKT point of \(F_{r,\bar{x}}(x)\) on \(S\) and \(f(\widehat{x})\geqslant f(\bar{x})\), then \( \widehat{x}_i\in \{0,1\}\) when \(i\in I\) and \( \widehat{x}_i \in \{0,1,\bar{x}_i\}\) when \(i\in \bar{I}\). In fact, if \(i\in I\), then \(\widehat{x}_i\in S_i=\{0,1\}\). If \(i\in \bar{I}\) and \( \widehat{x}_i \notin \{0,1,\bar{x}_i\}\), then \(\widehat{x}_i\in (0,1)\) and \(\widehat{x}_i\ne \bar{x}_i\). By \(\widehat{x}_i\in (0,1)\) and the condition \([{\text{KKTC}}]\), we know that \(\frac{\partial F_{r, \bar{x}}(\widehat{x})}{\partial x_{i}}=0\). But if \(f(\widehat{x})\geqslant f(\bar{x}) \), then by \(\widehat{x}_i\ne \bar{x}_i\), we have \(\frac{\partial F_{r, \bar{x}}(\widehat{x})}{\partial x_{i}}=-\frac{\widehat{x}_{i}-\bar{x}_{i}}{(\Vert \widehat{x}-\bar{x}\Vert ^2+1)^2}\ne 0 ,\) which contradicts \(\frac{\partial F_{r, \bar{x}}(\widehat{x})}{\partial x_{i}}=0\).
Second, we prove that if \(\widehat{x}\) is a KKT point of \(F_{r,\bar{x}}(x)\) on \(S\) and \(f(\widehat{x}) \geqslant f(\bar{x})\), then \(\widehat{x}_i \ne \bar{x}_i\) for \(i\in I_{\bar{x}}\). In fact, if there exists an \(i_0\in I_{\bar{x}}\) such that \(\widehat{x}_{i_0} = \bar{x}_{i_0}\), then \(\widetilde{x}\in N_{i_0}(\widehat{x})\), where \(\widetilde{x}:=\widehat{x}+(1-2\widehat{x}_{i_0})e_{i_0}\). By the definition of KKT point, we should have \(F_{r,\bar{x}}(\widetilde{x})\geqslant F_{r, \bar{x}}(\widehat{x})\), which contradicts the fact that
Hence, for any \(i\in I_{\bar{x}}\), we have that \(\widehat{x}_i=1-\bar{x}_i\).
Corollary 4.4
Any local minimizer \( \widehat{x}\) of \(F_{r,\bar{x}}(x)\) on \(S\) satisfies one of the following conditions:
\(1^\circ \) \(f(\widehat{x})< f(\bar{x})\);

\(2^\circ \) \(\widehat{x}:=( \widehat{x}_1,\cdots , \widehat{x}_n)\), where \( \widehat{x}_i=\left\{\begin{array}{ll} 1-\bar{x}_i, & \text{ if } i\in I_{\bar{x}},\\ 0 \text{ or } 1, & \text{ if } i\in (I\cup \bar{I}){\setminus} I_{\bar{x}}.\end{array}\right. \)
Proof
Since \(\widehat{x}\) is a local minimizer of \(F_{r,\bar{x}}(x)\) on \(S\), there exists a \(0<\delta \leqslant \delta ( \widehat{x})\) such that \(F_{r,\bar{x}}(x)\geqslant F_{r,\bar{x}}(\widehat{x})\) for any \(x\in N_\delta ( \widehat{x})\).
By Proposition 3.6 and by Theorem 4.3, we know that if \( \widehat{x}\) is a local minimizer of \(F_{r,\bar{x}}(x)\) on \(S\), then one of the following conditions holds:
\(1^\circ \) \(f(\widehat{x})< f(\bar{x})\);

\(2^\circ \) \(\widehat{x}:=( \widehat{x}_1,\cdots , \widehat{x}_n )\), where \(\widehat{x}_i=\left\{ \begin{array} {ll} 1-\bar{x}_i, & \text{ if } i\in I_{\bar{x}},\\ 0 \text{ or } 1 \text{ or } \bar{x}_i, & \text{ if } i\in (I\cup \bar{I}){\setminus} I_{\bar{x}}.\end{array}\right. \)
So, here we just need to prove that if \( i\in (I\cup \bar{I}){\setminus} I_{\bar{x}}\), then \(\widehat{x}_i\ne \bar{x}_i\). In fact, if there exists an \( i_0\in (I\cup \bar{I}){\setminus} I_{\bar{x}}\) such that \(\widehat{x}_{i_0} = \bar{x}_{i_0}\), then let \(\alpha _{i_0}:=\frac{\delta }{2}\) and let \(\widetilde{x}:=\widehat{x}+\alpha _{i_0}e_{i_0}\). Then, \(\widetilde{x}\in N_\delta (\widehat{x})\) and
which contradicts \(F_{r,\bar{x}}(x)\geqslant F_{r,\bar{x}}(\widehat{x}), \text{ for} \text{ any } x\in N_\delta ( \widehat{x})\).
In the following, we introduce a global optimization method for MIWCPP. This method combines the local optimization method for MIWCPP and \(({\text{P}})\) with the auxiliary function \(F_{r,\bar{x}}(x)\). The auxiliary function is used to escape from the current local minimizer and to find a better feasible point of MIWCPP.
Algorithm 4.1 Global Optimization Method (GOM)
Step 0. Take an initial point \(x_1\in S\), a sufficiently small positive number \(\mu \), and an initial \(r_1>\mu \) (in the numerical examples in the next section, we choose \(\mu =10^{-10}\) and \(r_1=1\)). Set \(k:=1\) and \( r:=r_1\).

Step 1. Use the LOM to solve MIWCPP starting from \(x_k\). Let \(x_k^*\) be the obtained local minimizer or KKT point of MIWCPP. If \(k\geqslant 2\), then goto Step 2; otherwise goto Step 3.

Step 2. If \(f(x_k^*)<f(x_{k-1}^*)\), then let \(r:=r_1\) and let \(\bar{x}:= x_k^*\), goto Step 3; otherwise let \(\bar{x}:= x_{k-1}^*\), goto Step 5.

Step 3. Construct the following auxiliary function:
$$\begin{aligned} F_{r,\bar{x}}(x)=\frac{1}{\Vert x-\bar{x}\Vert ^2+1}g_{r}(f(x)-f(\bar{x}))+f_{r}(f(x)-f(\bar{x})). \end{aligned}$$Consider the following problem:
$$\begin{aligned}& \min \ F_{r,\bar{x}}(x)\\ &{\text{s.t.}} \ x\in S, \end{aligned}$$(4.5)goto Step 4.

Step 4. Use the LOM to solve problem (4.5) starting from \(\bar{x}\). Let \(\bar{x}_{k}^*\) be the obtained local minimizer of problem (4.5) and let \(x_{k+1}:=\bar{x}_{k}^*, k:=k+1\), goto Step 1.

Step 5. If \(r\geqslant \mu \), decrease \(r\) (in the numerical examples in the next section, we let \(r:=r/10\)) and goto Step 3; otherwise, goto Step 6.

Step 6. Stop. \(\bar{x}\) is the obtained global minimizer.
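The outer loop of Algorithm 4.1 can be sketched as follows, assuming a local solver `lom(F, x)` (such as the LOM of Sect. 3) and a builder `aux(f, xbar, r)` for \(F_{r,\bar{x}}\) are supplied as callables; all names are ours.

```python
def global_optimization_method(f, lom, aux, x1, r1=1.0, mu=1e-10):
    """Sketch of Algorithm 4.1 (GOM).  `lom(F, x)` returns a local minimizer
    or KKT point of F on S starting from x; `aux(f, xbar, r)` builds the
    auxiliary function F_{r,xbar}.  Both are passed in as callables so the
    sketch stays self-contained."""
    xbar = lom(f, x1)              # Step 1: first local minimizer / KKT point
    r = r1
    while r >= mu:
        F = aux(f, xbar, r)        # Step 3: build the auxiliary function
        y = lom(F, xbar)           # Step 4: local search on the auxiliary problem
        x_new = lom(f, y)          # Step 1 again, from the new starting point
        if f(x_new) < f(xbar):     # Step 2: a better local minimizer was found
            xbar, r = x_new, r1
        else:
            r /= 10.0              # Step 5: shrink r and retry
    return xbar                    # Step 6: best point found
```

The \(r\)-reduction mirrors Step 5: the escape attempt is repeated with smaller and smaller \(r\) until either a better local minimizer is found or \(r\) drops below \(\mu\).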
Remark 4.5
In Algorithm 4.1, we use the auxiliary function \(F_{r,\bar{x}}(x)\) to improve the current best local minimizer \(\bar{x} \) by adjusting the parameter \(r\). From the properties given in Theorems 4.1–4.3, we know that if the current best local minimizer \(\bar{x}\) is not a global minimizer, then we can try to find a better one by a local search on the auxiliary problem \(\min _{x\in S} F_{r, \bar{x}}(x)\). When \(r\) is small enough and we still cannot find a better point, we stop, and the current best local minimizer is the best point found (we regard it as the global minimizer). The numerical examples in the following section illustrate that the global optimization method of Algorithm 4.1 is efficient and stable.
5 Examples
In this section, we apply Algorithm 4.1 to the following test examples, where
- \((k)\) is the number of iterations used in finding the \(k\)-th local minimizer or KKT point of MIWCPP;
- \(x_k, k\geqslant 1\), is the \(k\)-th initial point;
- \(f(x_k), k\geqslant 1\), is the function value of \(f\) at the \(k\)-th initial point;
- \(x_k^*, k\geqslant 1\), is the \(k\)-th KKT point or local minimizer of MIWCPP starting from \(x_k\);
- \(f(x_k^*), k\geqslant 1\), is the function value of \(f\) at \(x_k^*\);
- \(\bar{x}_k^*, k\geqslant 1\), is the \(k\)-th KKT point or local minimizer of problem (4.5) starting from \(x_k^*\);
- \(f(\bar{x}_k^*), k\geqslant 1\), is the function value of \(f\) at \(\bar{x}_k^*\).
Example 5.1
Consider the problem
Here \(A=\left( \begin{array}{rrrr} 1&-2&-3&-4\\ -2&-2&5&6\\ -3&5&3&7\\ -4&6&7&-4 \end{array}\right) \), \(B=\left( \begin{array}{rrrr} 1&26&14&-3\\ 6&0&-5&11\\ 4&-4&7&8\\ -7&11&4&9 \end{array}\right) \), \(b=(1,2,3,4)^{\mathrm{T}}\). The optimal solution is \(\bar{x}=(1,1,1,1)^{\mathrm{T}}\) and the optimal value \(f(\bar{x})=-{1\,774.000}\). Table 1 records the numerical results of solving Example 5.1 by Algorithm 4.1.
From Table 1, we see that the first local minimizer \(x_1^*\) of \(f(x)\) obtained from the initial point \(x_1\) is not the global minimizer of Example 5.1; the auxiliary function is therefore used to find a new initial point \(x_2\), which is a local minimizer of the auxiliary problem (4.5) starting from \(x_1^*\). The global minimizer \(x_2^*\) of Example 5.1 is then obtained starting from the new initial point \(x_2\).
Example 5.2
Consider the problem
Here \(A=\left( \begin{array}{rrrrr} 1&26&14&-3&5\\ 26&0&-5&11&2\\ 14&-5&7&4&-10\\ -3&11&4&9&19\\ 5&2&-10&19&-21 \end{array}\right) \). The optimal solution is \(\bar{x}=(0,1,1,0,1)^{\mathrm{T}}\) and the optimal value \(f(\bar{x})= -{22.317\,95}\). Table 2 records the numerical results of solving Example 5.2 by Algorithm 4.1.
Example 5.3
Consider the problem
Here \(A=\left( \begin{array}{ccccccc} {1\,567}&-16&14&-13&12&-11&9\\ -16&-14&-5&11&2&11&3\\ 14&-5&745&4&-10&-6&-7\\ -13&11&4&234&19&20&-1\\ 12&2&-10&19&-35&4&8\\ -11&11&-6&20&4&-123&-3\\ 9&3&-7&-1&8&-3&578 \end{array}\right)\), \(u(x)=\sum _{i=1}^7 x_i^2\). The optimal solution is \(\bar{x}=(0,1,0,0,1,1,0)^{\mathrm{T}}\) and the optimal value \(f(\bar{x})= -89.085\,533\). Table 3 records the numerical results of solving Example 5.3 by Algorithm 4.1.
Example 5.4
Consider the problem
Here \(A=\left( \begin{array}{ccccccc} 20&-6&4&-3&5&2&9\\ -6&16&-5&1&3&1&3\\ 4&-5&19&4&-10&-16&-7\\ -3&1&4&24&9&0&-1\\ 5&3&-10&9&17&4&8\\ 2&1&-16&0&4&16&-8\\ 9&3&-7&-1&8&-8&11 \end{array}\right) \), \(a(i)=\frac{1}{1+i}\), \(b(1)=(1.2,0,0,0,0,0,0)\), \(b(2)=(0,-1.5,0,0,0,0,0)\), \(b(3)=(0,0,1.6,0,0,0,0)\), \(b(4)=(0,0,0,-2.0,0,0,0)\), \(b(5)=(0,0,0,0,1.8,0,0)\), \(b(6)=(0,0,0,0,0,-1.4,0)\), \(b(7)=(0,0,0,0,0,0,1.6)\). The optimal solution is \(\bar{x}=(0,0,1,0.000\,0,0.000\,0,1.000\,0,1.000\,0)^{\mathrm{T}}\) and the optimal value \(f(\bar{x})= -11.092\,6\). Table 4 records the numerical results of solving Example 5.4 by Algorithm 4.1.
6 Conclusion
In this paper, we have developed new necessary global optimality conditions for MIWCPP, which extend the results given in [41] and [42]. A new LOM for MIWCPP was presented according to these necessary global optimality conditions; it differs from the traditional local optimization method. A global optimization method for MIWCPP was also proposed, designed by combining the new LOM with an auxiliary function. Numerical examples were presented to show that the proposed optimization methods for MIWCPP are efficient.
References
Vial, J.P.: Strong and weak convexity of sets and functions. Math. Oper. Res. 8(2), 231–259 (1983)
Wu, Z.Y.: Sufficient global optimality conditions for weakly convex minimization problems. J. Global Optim. 39(3), 427–440 (2007)
Jünger, M., Martin, A., Reinelt, G., Weismantel, R.: Quadratic 0/1 optimization and a decomposition approach for the placement of electronic circuits. Math. Program. 63, 257–279 (1994)
Pardalos, P.M., Rodgers, G.P.: Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 45, 131–144 (1990)
McBride, R.D., Yormark, J.S.: An implicit enumeration algorithm for quadratic integer programming. Manag. Sci. 26(3), 282–296 (1980)
Krarup, J., Pruzan, P.A.: Computer aided layout design. Math. Program. Study 9, 75–94 (1978)
Gallo, G., Hammer, P.L., Simeone, B.: Quadratic knapsack problems. Math. Program. Study 12, 132–149 (1980)
Tuy, H.: Convergence and restart in branch-and-bound algorithms for global optimization: application to concave minimization and D.C. optimization problems. Math. Program. 41, 161–183 (1988)
Thoai, N.V.: On the construction of test problems for concave minimization algorithms. J. Global Optim. 5, 399–402 (1994)
Falk, J.E., Hoffman, K.R.: A successive underestimation method for concave minimization problems. Math. Oper. Res. 1, 251–259 (1976)
Horst, R.: A general class of branch-and-bound-methods in global optimization with some new approaches for concave minimization. J. Optim. Theory Appl. 51, 271–291 (1986)
Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization. Nonconvex Optimization and its Applications, vol. 48, 2nd edn. Kluwer Academic Publishers, Dordrecht (2000)
Floudas, C.A.: Deterministic Global Optimization: Theory, Algorithms and Applications. Kluwer Academic Publishers, Dordrecht (2000)
Tillman, F.A., Hwuang, C.L., Kuo, W.: Optimization of System Reliability. Marcel Dekker, New York (1980)
Floudas, C.A.: Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications. Oxford University Press, New York (1995)
Quist, A.J., Klerk, E.D., Roos, C., Terlaky, T., Geemert, R.V., Hoogenboom, J.E., Illes, T.: Finding optimal nuclear reactor core reload patterns using nonlinear optimization and search heuristics. Eng. Optim. 32, 143–176 (1999)
Pinter, J.D.: Global Optimization in Action. Kluwer Academic Publishers, Dordrecht (1996)
Grossmann, I.E., Sahinidis, N.: Special Issue on Mixed-Integer Programming and its Application to Engineering, Part I. Optim. Eng., Kluwer Academic Publishers, Dordrecht, 3(4) (2002a)
Grossmann, I.E., Sahinidis, N.: Special Issue on Mixed-Integer Programming and its Application to Engineering, Part II. Optim. Eng., Kluwer Academic Publishers, Dordrecht, 4(1) (2002b)
Adjiman, C.S., Androulakis, I.P., Floudas, C.A.: Global optimization of mixed integer nonlinear problems. AIChE J. 46, 1769–1797 (2000)
Duran, M.A., Grossmann, I.E.: An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Math. Program. 36, 307–339 (1986)
Grossmann, I.E., Kravanja, Z.: Mixed-integer nonlinear programming: a survey of algorithms and applications. In: Conn, A.R., Coleman, T.F., Biegler, L.T., Santosa, F.N. (eds.) Large-Scale Optimization with Applications, Part II: Optimal Design and Control. Springer, Berlin (1997)
Tawarmalani, M., Sahinidis, N.V.: Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software, and Applications. Kluwer Academic Publishers, Dordrecht (2002)
Stubbs, R.A., Mehrotra, S.: A branch-and-cut method for 0–1 mixed convex programming. Math. Program. 86, 515–532 (1999)
Gallagher, R.J., Lee, E.K., Patterson, D.A.: Constrained discriminant analysis via 0-1 mixed integer programming. Ann. Oper. Res. 74, 65–88 (1997)
Richard, J.P.P., de Farias, I.R., Nemhauser, G.L.: Lifted inequalities for 0–1 mixed integer programming: basic theory and algorithms. Math. Program. Ser. B 98, 89–113 (2003)
Eckstein, J., Nediak, M.: Pivot, cut, and dive: a heuristic for 0-1 mixed integer programming. J. Heuristics 13, 471–503 (2007)
Borchers, B., Mitchell, J.E.: An improved branch and bound algorithm for mixed integer nonlinear programs. Comput. Oper. Res. 21, 359–367 (1994)
Gupta, O.K., Ravindran, A.: Branch and bound experiments in convex nonlinear integer programming. Manag. Sci. 31, 1533–1546 (1985)
Leyffer, S.: Integrating SQP and branch-and-bound for mixed integer nonlinear programming. Comput. Optim. Appl. 18, 295–309 (2001)
Geoffrion, A.M.: Lagrangean relaxation for integer programming. Math. Program. Study 2, 82–114 (1974)
Fletcher, R., Leyffer, S.: Solving mixed integer nonlinear programs by outer approximation. Math. Program. 66, 327–349 (1994)
Li, Y., Pardalos, P.M.: Integer programming. In: Rao, C.R. (ed.) Handbook of Statistics, pp. 279–302. Elsevier, New York (1993)
Horst, R., Pardalos, P.M.: Handbook of Global Optimization. Kluwer Academic Publishers, Dordrecht (1995)
Pardalos, P.M., Romeijn, E.: Handbook of Global Optimization. Heuristic Approaches, vol. 2. Kluwer Academic Publishers, Dordrecht (2002)
Beck, A., Teboulle, M.: Global optimality conditions for quadratic optimization problems with binary constraints. SIAM J. Optim. 11, 179–188 (2000)
Jeyakumar, V., Rubinov, A.M., Wu, Z.Y.: Global optimality conditions for non-convex quadratic minimization problems with quadratic constraints. Math. Program. (A) 110, 521–541 (2007)
Jeyakumar, V., Rubinov, A.M., Wu, Z.Y.: Sufficient global optimality conditions for non-convex quadratic minimization problems with box constraints. J. Global Optim. 36, 471–481 (2006)
Chen, W., Zhang, L.S.: Global optimality conditions for quadratic integer problems, ORSC, 206–211 (2006)
Wu, Z.Y., Bai, F.S.: Global optimality conditions for mixed nonconvex quadratic programs. Optimization 58(1), 39–47 (2009)
Jeyakumar, V., Huy, N.Q.: Global minimization of difference of quadratic and convex functions over box or binary constraints. Optim. Lett. 2, 223–238 (2008)
Wu, Z.Y., Quan, J., Bai, F.S.: Global optimality conditions for mixed integer weakly concave programming problems. Dyn. Contin. Discr. Impuls. Syst. Ser. B 17, 675–685 (2010)
Zowe, J., Kurcyusz, S.: Regularity and stability for the mathematical programming problem in Banach spaces. Appl. Math. Optim. 5, 46–62 (1979)
Wu, Z.Y., Lee, H.W.J., Zhang, L.S., Yang, X.M.: A novel filled function method and quasi-filled function method for global optimization. Comput. Optim. Appl. 34(2), 249–272 (2006)
This research was partly supported by Natural Science Foundation of Chongqing (Nos. cstc2013jjB00001 and cstc2011jjA00010).
Wu, Zy., Bai, Fs., Yang, Yj. et al. Optimization Methods for Mixed Integer Weakly Concave Programming Problems. J. Oper. Res. Soc. China 2, 195–222 (2014). https://doi.org/10.1007/s40305-014-0046-y
Keywords
- Global optimality conditions
- Local optimization method
- Global optimization method
- Mixed integer weakly concave programming problems