Abstract
In this paper, we propose a new algorithm for the global minimization of functions represented as a difference of two convex functions. The proposed method is derivative-free and is designed by adapting the extended cutting angle method. We present preliminary results of numerical experiments using test problems with difference-of-convex objective functions and box constraints. We also compare the proposed algorithm with a classical one that uses prismatical subdivisions.
References
Alexandrov, A.D.: On surfaces which may be represented by a difference of convex functions (in Russian). Izvestia Akademii Nauk Kazakhskoj SSR, Seria Fiziko-Matematicheskikh Nauk 3, 3–20 (1949)
Bagirov, A.M., Rubinov, A.M.: Global minimization of increasing positively homogeneous functions over the unit simplex. Ann. Oper. Res. 98(1–4), 171–187 (2000)
Bagirov, A.M., Rubinov, A.M.: Modified versions of the cutting angle method. In: Hadjisavvas, N., Pardalos, P.M. (eds.) Convex Analysis and Global Optimization, Nonconvex Optimization and Its Applications, vol. 54, pp. 245–268. Kluwer, Dordrecht (2001)
Batten, L.M., Beliakov, G.: Fast algorithm for the cutting angle method of global optimization. J. Global Optim. 24, 149–161 (2002)
Beliakov, G.: Geometry and combinatorics of the cutting angle method. Optimization 52(4–5), 379–394 (2003)
Beliakov, G.: The cutting angle method: a tool for constrained global optimization. Optim. Methods Softw. 19, 137–151 (2004)
Beliakov, G.: A review of applications of the cutting angle methods. In: Rubinov, A., Jeyakumar, V. (eds.) Continuous Optimization, pp. 209–248. Springer, New York (2005)
Beliakov, G.: Extended cutting angle method of global optimization. Pac. J. Optim. 4(1), 153–175 (2008)
Beliakov, G., Ting, K.M., Murshed, M., Rubinov, A.M., Bertoli, M.: Efficient serial and parallel implementations of the cutting angle method. In: Di Pillo, G. (ed.) High Performance Algorithms and Software for Nonlinear Optimization, pp. 57–74. Kluwer Academic Publishers, Dordrecht (2003)
Bougeard, M.: Contribution à la théorie de Morse en dimension finie. PhD thesis, Université de Paris IX, Paris (1978)
Cheney, E.W., Goldstein, A.A.: Newton’s method for convex programming and Tchebycheff approximation. Numer. Math. 1, 253–268 (1959)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
Ferrer, A.: Representation of a polynomial function as a difference of convex polynomials, with an application. Lect. Notes Econ. Math. Syst. 502, 189–207 (2001)
Ferrer, A.: Applying global optimization to a problem in short-term hydrothermal scheduling. Nonconvex Optim. Appl. 77, 263–285 (2005)
Hartman, P.: On functions representable as a difference of convex functions. Pac. J. Math. 9, 707–713 (1959)
Hoai An, L.T., Dinh Tao, P.: The DC (difference of convex functions) Programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 133, 23–46 (2005)
Holmberg, K., Tuy, H.: A production-transportation problem with stochastic demand and concave production costs. Math. Program. 85, 157–179 (1999)
Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches, 1st edn. Springer, Heidelberg (1990)
Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization, 1st edn. Kluwer Academic Publishers, Dordrecht (1995)
Kelley, J.E., Jr.: The cutting-plane method for solving convex programs. J. Soc. Indust. Appl. Math. 8, 703–712 (1960)
Konno, H., Thach, P.T., Tuy, H.: Optimization on Low Rank Nonconvex Structures. Kluwer Academic Publishers, New York (1997)
Landis, E.M.: On functions representable as the difference of two convex functions. Dokl. Akad. Nauk SSSR 80, 9–11 (1951)
Penot, J.P., Bougeard, M.L.: Approximation and decomposition properties of some classes of locally d.c. functions. Math. Program. 41, 195–227 (1988)
Pey-Chun, C., Hansen, P., Jaumard, B., Tuy, H.: Solution of the multisource Weber and conditional Weber problems by d.c. programming. Oper. Res. 46(4), 548–562 (1998)
Rubinov, A.M.: Abstract Convexity and Global Optimization, Nonconvex Optimization and Its Applications, vol. 44. Kluwer Academic Publishers, Dordrecht (2000)
Strekalovsky, A., Tsevendorj, I.: Testing the \(\cal R\)-strategy for a reverse convex problem. J. Global Optim. 13, 61–74 (1998)
Tuy, H.: Convex Analysis and Global Optimization, 1st edn. Kluwer Academic Publishers, Dordrecht (1998)
Vial, J.-P.: Strong and weak convexity of sets and functions. Math. Oper. Res. 8, 231–259 (1983)
Acknowledgments
This research by Dr. Albert Ferrer was partially supported by the Ministry of Science and Technology (Project No. MTM2011-29064-C03-01), and by Dr. Adil Bagirov was supported under Australian Research Council’s Discovery Projects funding scheme (Project No. DP140103213). The authors would like to thank an anonymous referee and an Associate Editor for their comments that helped to improve the quality of the paper.
Appendix
In this Appendix we describe the test problems used in the numerical experiments.
1.1 Test problems with non-Lipschitz objective functions
Problem 10.1
The function \(5(x^2+y^2)\) allows us to obtain a d.c. representation of the objective function \(f\) so that DCECAM can be applied:
The best solution found is \((0.29658\ldots , 0.62279\ldots )\) with \(f^*=-0.99999825\ldots \).
Problem 10.2
with \(a=0.9\) and \(a=1.5\). Then we have the convex functions
that satisfy
Problem 10.3
Then we have the convex functions
and
that satisfy
Problem 10.4
By combining the above-mentioned functions we can obtain new functions with \(n\) variables.
and
1.2 Test problems with Lipschitz objective functions
By using the convex function \(g(x):=K\Vert x\Vert ^2\) with \(K>0\), we can obtain a DC representation of the objective functions of the test problems as follows.
with \(K\) being a real number such that \(f(x)+K\sum _{j=1}^n x_j^2\) is a convex function.
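The regularization trick above can be sketched numerically. In the minimal example below (not from the paper; the toy objective \(f(x)=\cos x\) is an assumption), \(f''(x)=-\cos x\ge -1\), so any \(K\ge 0.5\) makes \(g(x)=f(x)+Kx^2\) convex, and \(f=g-h\) with \(h(x)=Kx^2\) is a d.c. representation:

```python
import math

# Illustrative d.c. split f = g - h with g(x) = f(x) + K*x**2 and h(x) = K*x**2.
# Toy objective (an assumption, not one of the paper's test problems):
K = 0.5
f = math.cos
g = lambda x: f(x) + K * x * x   # convex component, since g''(x) = 2K - cos(x) >= 0
h = lambda x: K * x * x          # convex quadratic subtracted back off

# Numerical convexity check of g via central second differences on a grid.
n, lo, hi = 2001, -10.0, 10.0
dx = (hi - lo) / (n - 1)
xs = [lo + i * dx for i in range(n)]
sd = [(g(xs[i - 1]) - 2 * g(xs[i]) + g(xs[i + 1])) / dx ** 2 for i in range(1, n - 1)]
print(min(sd) >= -1e-8)                                   # True: g is convex on the grid
print(all(abs(g(x) - h(x) - f(x)) < 1e-12 for x in xs))   # True: g - h reproduces f
```

The same check fails for \(K<0.5\), which is the practical way to tune the constant when the curvature bound of \(f\) is not known in closed form.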
Problem 10.5
The class of test problems \(HPTnXmY\).
The following class of test problems can be found in [19]:
where \(a^i \in \{x \in \mathrm{I\!R}^n:0\le x_j \le 10,\; 1\le j \le n\}\) and \(c_i > 0\). By using the convex function \(k\left( \sum _{j=1}^nx_j^2\right) \) with \(k>0\), we can obtain a d.c. representation of the objective function in (30) as follows. Consider \( f(x)=\sum _{i=1}^mf_i(x), \) with \(f_i(x):=1/\left( \Vert x-a^i\Vert ^2+c_i\right) \) and \(x \in \mathrm{I\!R}^n\). Hence, we can write
with \(k\) being a real number such that \(f(x)+k\sum _{j=1}^nx_j^2\) is a convex function. The different instances of the test problem (30) are denoted by \(HPTnXmY\), where \(X\) is the dimension and \(Y\) is the number of local optimal solutions of the instance. Parameters are given in Table 8.
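A minimal sketch of this objective and its d.c. split follows; the centres \(a^i\) and weights \(c_i\) below are placeholders for illustration, not the Table 8 parameters:

```python
# Sketch of the HPT-type objective f(x) = sum_i 1/(||x - a^i||^2 + c_i)
# and its d.c. split f = g - h, g = f + k*sum(x_j^2), h = k*sum(x_j^2).
# The data a, c, k here are hypothetical placeholders.
a = [(1.0, 2.0), (4.0, 1.0), (2.5, 3.0)]   # hypothetical centres a^i
c = [0.5, 0.3, 0.7]                        # hypothetical weights c_i > 0
k = 5.0                                    # regularization constant

def f(x):
    return sum(1.0 / (sum((xj - aij) ** 2 for xj, aij in zip(x, ai)) + ci)
               for ai, ci in zip(a, c))

def h(x):
    return k * sum(xj * xj for xj in x)    # convex quadratic

def g(x):                                  # convex once k is large enough
    return f(x) + h(x)

x = (0.7, 1.9)
print(abs(g(x) - h(x) - f(x)) < 1e-12)     # True: the split reproduces f
```

Each summand \(f_i\) is bounded above by \(1/c_i\), so the function is multiextremal with one "bump" near each centre \(a^i\).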
Problem 10.6
The class of test problems \(TnXrY\).
Let \(x\in \mathrm{I\!R}^n\) be \(x=(x_1,\ldots ,x_n)\). A reduced version of the test problem
where \(A\in \mathrm{I\!R}^{m\times n}\) and \(b\in \mathrm{I\!R}^m\), can be found in [27]. The different instances of the test problem (32) are denoted by \(TnXrY\), where \(X\) is the dimension and \(Y\) is the number of linear constraints of the instance. For the numerical tests, we have chosen the instance \(Tn2r4\) with the parameters \(c_1=0.09\) and \(c_2=0.1\). As before, by using the convex function \(k\left( \sum _{j=1}^nx_j^2\right) \), different d.c. representations of the objective function can be obtained in the form:
We consider the values \(k=7.5\), \(k=8\) and \(k=8.5\).
Problem 10.7
The class of test problems \(HPBr1\).
The problem
which will be denoted by \(HPBr1\), is a nonconvex programming problem. The objective function of \(HPBr1\) is a homogeneous polynomial of degree two in two variables (its level curves are hyperbolas). It is known that the d.c. representation of \(xy\) in (33) is optimal. Alternative non-optimal d.c. representations of \(xy\) are:
-
(1)
\(xy=\frac{1}{2}(x+y)^2-\frac{1}{2}(x^2+y^2)\), and
-
(2)
\(xy=\frac{1}{2}(x^2+y^2)-\frac{1}{2}(x-y)^2\).
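Both identities expand directly (e.g. \(\frac{1}{2}(x+y)^2-\frac{1}{2}(x^2+y^2)=\frac{1}{2}(x^2+2xy+y^2)-\frac{1}{2}x^2-\frac{1}{2}y^2=xy\)), and a quick numerical check (an illustration, not from the paper) confirms them:

```python
import random

# Verify the two alternative d.c. representations of xy at random points.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    r1 = 0.5 * (x + y) ** 2 - 0.5 * (x * x + y * y)   # representation (1)
    r2 = 0.5 * (x * x + y * y) - 0.5 * (x - y) ** 2   # representation (2)
    assert abs(r1 - x * y) < 1e-9 and abs(r2 - x * y) < 1e-9
print("both identities hold")
```

In both representations each bracketed term is a convex quadratic, which is what makes them valid (if non-optimal) d.c. splits of \(xy\).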
Problem 10.8
The class of test problems \(COSr0\).
The problem
which will be denoted by \(COSr0\), is a multiextremal programming problem with minimizer \((0,0)\) and minimum \(-1\). The function \(k(x^2+y^2)\), \(k>0\), allows us to obtain many different d.c. representations of the objective function \(f\):
Problem 10.9
Here
Problem 10.10
Here
Ferrer, A., Bagirov, A. & Beliakov, G. Solving DC programs using the cutting angle method. J Glob Optim 61, 71–89 (2015). https://doi.org/10.1007/s10898-014-0159-1