An efficient global optimization algorithm for a class of linear multiplicative problems based on convex relaxation


Abstract

This paper presents an efficient global optimization algorithm for solving a class of linear multiplicative problems (LMP). The algorithm first converts LMP into an equivalent problem (EP) via a variable transformation, and a convex relaxation problem is then constructed to provide a lower bound on the optimal value of EP. Consequently, solving LMP is reduced to tackling a series of convex programs. Additionally, a pruning rule is developed to remove portions of the investigated space that cannot contain an optimal solution of EP, and we propose a strategy that supplies more candidate feasible solutions for updating the upper bound on the optimal value of LMP. We also analyze the convergence of the algorithm and give its complexity result. Finally, numerical results confirm the effectiveness of the proposed approach.


Data availability

No data was used for the research described in the article.


Acknowledgements

We thank the reviewers for their valuable comments and suggestions, which helped us improve the quality of the paper. This research was supported by the National Natural Science Foundation of China (Grant Numbers 12071133 and 11871196).

Author information


Corresponding author

Correspondence to Peiping Shen.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the National Natural Science Foundation of China (Grant Numbers 12071133 and 11871196).

Appendix A. The derivation of the relaxations in Cambini et al. (2023)

We first report the two linear relaxations offered by Cambini et al. (2023) for Problem 2. To this end, the following 4p linear programs need to be solved:

$$\begin{aligned} {\underline{c}}_{i}=\min _{x\in D}c_{i}^{\top }x,\ {\overline{c}}_{i}=\max _{x\in D}c_{i}^{\top }x,\ {\underline{d}}_{i}=\min _{x\in D}d_{i}^{\top }x,\ {\overline{d}}_{i}=\max _{x\in D}d_{i}^{\top }x,\ i=1,\ldots ,p. \end{aligned}$$
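For illustration, the sketch below computes these 4p bounds with `scipy.optimize.linprog`, under the assumption (ours, not the paper's) that the feasible set \(D\) is given as \(\{x: Ax\le b\}\) together with per-variable bounds; the names `C` and `Dmat`, whose rows are \(c_{1}^{\top },\ldots ,c_{p}^{\top }\) and \(d_{1}^{\top },\ldots ,d_{p}^{\top }\), are hypothetical.

```python
# A minimal sketch, assuming D = {x : A x <= b} plus per-variable bounds
# `box` (pairs (lower, upper); use (None, None) for a free variable).
# C and Dmat, with rows c_i^T and d_i^T, are hypothetical names.
import numpy as np
from scipy.optimize import linprog

def linear_bounds(vecs, A, b, box):
    """lo[i] = min_{x in D} v_i^T x and hi[i] = max_{x in D} v_i^T x
    for each row v_i of `vecs`."""
    lo, hi = [], []
    for v in vecs:
        lo.append(linprog(c=v, A_ub=A, b_ub=b, bounds=box).fun)
        hi.append(-linprog(c=-v, A_ub=A, b_ub=b, bounds=box).fun)
    return np.array(lo), np.array(hi)

# The 4p bounds:
# c_lo, c_hi = linear_bounds(C, A, b, box)      # underline/overline c_i
# d_lo, d_hi = linear_bounds(Dmat, A, b, box)   # underline/overline d_i
```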

Then the two linear relaxations of Problem 2 can be expressed as follows:

$$\begin{aligned}{{(\text {LRP1})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}(-{\underline{c}}_{i}{\underline{d}}_{i}+ ({\underline{c}}_{i} d_{i}+{\underline{d}}_{i}c_{i})^{\top }x) +\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)\\ &{}\mathrm {s.t.}&{} x\in D,\ {\underline{c}}_{i}\le c_{i}^{\top }x\le {\overline{c}}_{i},\ {\underline{d}}_{i}\le d_{i}^{\top }x\le {\overline{d}}_{i}, i=1,\ldots , p.\\ \end{array} \right. \end{aligned}$$
$$\begin{aligned}{{(\text {LRP2})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}(-{\overline{c}}_{i}{\overline{d}}_{i}+ ({\overline{c}}_{i} d_{i}+{\overline{d}}_{i}c_{i})^{\top }x) +\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)\\ &{}\mathrm {s.t.}&{} x\in D,\ {\underline{c}}_{i}\le c_{i}^{\top }x\le {\overline{c}}_{i},\ {\underline{d}}_{i}\le d_{i}^{\top }x\le {\overline{d}}_{i}, i=1,\ldots , p.\\ \end{array} \right. \end{aligned}$$
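Continuing the sketch (with `c0`, `d0` as hypothetical names for the vectors \((c_{01},\ldots ,c_{0p})\) and \((d_{01},\ldots ,d_{0p})\)), the LRP1 objective is affine in \(x\) and can be assembled as follows; LRP2 differs only in replacing \({\underline{c}}_{i},{\underline{d}}_{i}\) by \({\overline{c}}_{i},{\overline{d}}_{i}\).

```python
# Sketch: the affine objective of LRP1, i.e. coef @ x + const.
def lrp1_objective(C, Dmat, c0, d0, c_lo, d_lo):
    const = np.sum(-c_lo * d_lo + c0 * d0)       # sum_i (-c_lo_i d_lo_i + c0_i d0_i)
    coef = (c_lo + c0) @ Dmat + (d_lo + d0) @ C  # sum_i ((c_lo_i+c0_i) d_i + (d_lo_i+d0_i) c_i)
    return coef, const
```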

Next, we introduce the six convex relaxation programs in Cambini et al. (2023). To this end, we first rewrite \(\varphi (x)\) in the following three D.C. forms (a D.C. function is the difference of two convex functions):

(i) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-\frac{1}{2}\sum \nolimits _{i=1}^{p} ((c_{i}^{\top }x)^2+ (d_{i}^{\top }x)^2)\);

(ii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x-d_{i}^{\top }x)^2\);

(iii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} ((c_{i}^{\top }x)^2+ (d_{i}^{\top }x)^2)-\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x-d_{i}^{\top }x)^2\).
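For clarity (this step is implicit in the paper), write \(u=c_{i}^{\top }x\) and \(v=d_{i}^{\top }x\); each of (i)–(iii) then rests on the scalar identity

$$\begin{aligned} uv=\tfrac{1}{2}(u+v)^{2}-\tfrac{1}{2}(u^{2}+v^{2}) =\tfrac{1}{4}(u+v)^{2}-\tfrac{1}{4}(u-v)^{2} =\tfrac{1}{2}(u^{2}+v^{2})-\tfrac{1}{2}(u-v)^{2}, \end{aligned}$$

so each form indeed expresses \(\varphi \) as a difference of two convex functions.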

Based on (i)–(iii), Problem 2 can be relaxed to the following three convex programs:

$$\begin{aligned}{{(\text {CRP1})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \limits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}\\ &{}&{}+\frac{1}{2}\sum \limits _{i=1}^{p}({\underline{c}}_{i}{\overline{c}}_{i} +{\underline{d}}_{i}{\overline{d}}_{i} -({\underline{c}}_{i}+{\overline{c}}_{i}){c}_{i}^{\top }{x}-({\underline{d}}_{i} +{\overline{d}}_{i}){d}_{i}^{\top }{x})\\ &{}\mathrm {s.t.}&{} x\in D,\ {\underline{c}}_{i}\le c_{i}^{\top }x\le {\overline{c}}_{i},\ {\underline{d}}_{i}\le d_{i}^{\top }x\le {\overline{d}}_{i}, i=1,\ldots , p.\\ \end{array} \right. \end{aligned}$$
$$\begin{aligned}{{(\text {CRP2})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \limits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}\\ &{}&{}+\frac{1}{4}\sum \limits _{i=1}^{p}({\underline{\sigma }}_{i}{\overline{\sigma }}_{i} -({\underline{\sigma }}_{i}+{\overline{\sigma }}_{i})(c_{i}^{\top }x-d_{i}^{\top }x))\\ &{}\mathrm{s.t.}&{} x\in D,\ {\underline{\sigma }}_{i}\le (c_{i}-d_{i})^{\top }x\le {\overline{\sigma }}_{i}, i=1,\ldots , p,\\ \end{array} \right. \end{aligned}$$

where \({\underline{\sigma }}_{i}=\min _{{x}\in D}(c_{i}^{\top }x-d_{i}^{\top }x),\) \({\overline{\sigma }}_{i}=\max _{{x}\in D}(c_{i}^{\top }x-d_{i}^{\top }x), i=1,\ldots , p.\)

$$\begin{aligned}{{(\text {CRP3})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \limits _{i=1}^{p} ((c_{i}^{\top }x)^2+ (d_{i}^{\top }x)^2)\\ &{}&{}+\frac{1}{2}\sum \limits _{i=1}^{p} ({\underline{\sigma }}_{i}{\overline{\sigma }}_{i} -({\underline{\sigma }}_{i}+{\overline{\sigma }}_{i})(c_{i}^{\top }x-d_{i}^{\top }x))\\ &{}\mathrm {s.t.}&{} x\in D,\ {\underline{\sigma }}_{i}\le (c_{i}-d_{i})^{\top }x\le {\overline{\sigma }}_{i}, i=1,\ldots , p.\\ \end{array} \right. \end{aligned}$$
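The added affine sums in CRP1–CRP3 are secant underestimators of the concave parts (an observation we add for clarity): since \(-t^{2}\) is concave, it lies above its chord on any interval, i.e.,

$$\begin{aligned} -t^{2}\ge lu-(l+u)t\quad \text {for all } t\in [l,u], \end{aligned}$$

with equality at \(t=l\) and \(t=u\). Applying this bound with \(t=c_{i}^{\top }x\) and \(t=d_{i}^{\top }x\) (over \([{\underline{c}}_{i},{\overline{c}}_{i}]\) and \([{\underline{d}}_{i},{\overline{d}}_{i}]\)) yields CRP1, while applying it with \(t=(c_{i}-d_{i})^{\top }x\) over \([{\underline{\sigma }}_{i},{\overline{\sigma }}_{i}]\) yields CRP2 and CRP3.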

The last three convex relaxation programs are based on the D.C. forms of the objective function \(\varphi (x)\) in Problem 2 and the eigenvalue decomposition of the matrix in the quadratic representation of \(\varphi (x)\). The details are explained as follows.

Note that the D.C. forms offered by (i)–(ii) can be rewritten as:

(i) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-x^{\top }Q_{1}x\),

(ii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-x^{\top }Q_{2}x\),

where \(Q_{1}=\frac{1}{2}\sum \nolimits _{i=1}^{p}(c_{i}c_{i}^{\top }+d_{i}d_{i}^{\top }),\ Q_{2}=\frac{1}{4}\sum \nolimits _{i=1}^{p}(c_{i}-d_{i})(c_{i}-d_{i})^{\top }\) are symmetric positive semidefinite matrices. Therefore, there exists an orthonormal matrix \({\tilde{U}}\in {\mathbb {R}}^{n\times n}\) with its columns \({\tilde{u}}_{1},\ldots ,{\tilde{u}}_{n}\in {\mathbb {R}}^{n}\), and a diagonal matrix \({\tilde{D}}\in {\mathbb {R}}^{n\times n}\) with its diagonal elements \({\tilde{\lambda }}_{1},\ldots ,{\tilde{\lambda }}_{n}\in {\mathbb {R}}\) such that \(Q_{1}={\tilde{U}}{\tilde{D}}{\tilde{U}}^{\top }\). Let \(\Theta ^{+}=\{i=1,\ldots ,n: {\tilde{\lambda }}_{i}> 0\}\) and \({\tilde{\vartheta }}_{i}=\sqrt{{\tilde{\lambda }}_{i}}\cdot {\tilde{u}}_{i}\) for all \(i\in \Theta ^{+},\) then the convex relaxation of Problem 2 can be obtained as follows:

$$\begin{aligned}{{(\text {CRP4})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \limits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}\\ &{}&{}+\sum \limits _{i\in \Theta ^{+}} ({\tilde{\vartheta }}^{L}_{i}{\tilde{\vartheta }}^{U}_{i} -({\tilde{\vartheta }}^{L}_{i}+{\tilde{\vartheta }}^{U}_{i}){\tilde{\vartheta }}_{i}^{\top }x)\\ &{}\mathrm{s.t.}&{} x\in D,\ {\tilde{\vartheta }}^{L}_{i}\le {\tilde{\vartheta }}_{i}^{\top }x\le {\tilde{\vartheta }}^{U}_{i}, i\in \Theta ^{+},\\ \end{array} \right. \end{aligned}$$

where \({\tilde{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\tilde{\vartheta }}_{i}^{T}{x}),\) \({\tilde{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\tilde{\vartheta }}_{i}^{T}{x}), \ i\in \Theta ^{+}.\)
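For illustration only (reusing the hypothetical `C`, `Dmat`, and `linear_bounds` from the sketches above), the factorization step and the bounds \({\tilde{\vartheta }}^{L}_{i},{\tilde{\vartheta }}^{U}_{i}\) can be computed as follows; \(Q_{2}\) is handled in exactly the same way.

```python
# Sketch: eigen-factor the PSD matrix Q1 and keep its positive part.
def positive_factors(Q, tol=1e-10):
    lam, U = np.linalg.eigh(Q)            # Q = U diag(lam) U^T, U orthogonal
    pos = lam > tol                       # the index set Theta^+
    return U[:, pos] * np.sqrt(lam[pos])  # columns vartheta_i = sqrt(lam_i) u_i

Q1 = 0.5 * (C.T @ C + Dmat.T @ Dmat)      # (1/2) sum_i (c_i c_i^T + d_i d_i^T)
V_tilde = positive_factors(Q1)
# Bounds for CRP4 via the same LPs as before:
# th_lo, th_hi = linear_bounds(V_tilde.T, A, b, box)
```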

Similarly, there exists an orthonormal matrix \({\hat{U}}\in {\mathbb {R}}^{n\times n}\) with its columns \({\hat{u}}_{1},\ldots ,{\hat{u}}_{n}\in {\mathbb {R}}^{n}\), and a diagonal matrix \({\hat{D}}\in {\mathbb {R}}^{n\times n}\) with its diagonal elements \({\hat{\lambda }}_{1},\ldots ,{\hat{\lambda }}_{n}\in {\mathbb {R}}\) such that \(Q_{2}={\hat{U}}{\hat{D}}{\hat{U}}^{\top }\). Let \(\Gamma ^{+}=\{i=1,\ldots ,n: {\hat{\lambda }}_{i}> 0\}\) and \({\hat{\vartheta }}_{i}=\sqrt{{\hat{\lambda }}_{i}}\cdot {\hat{u}}_{i}\) for all \(i\in \Gamma ^{+},\) then we can construct the convex relaxation of Problem 2 as follows:

$$\begin{aligned}{{(\text {CRP5})}:}\left\{ \begin{array}{lll} &{}\min &{}\sum \limits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \limits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}\\ &{}&{}+\sum \limits _{i\in \Gamma ^{+}} ({\hat{\vartheta }}^{L}_{i}{\hat{\vartheta }}^{U}_{i} -({\hat{\vartheta }}^{L}_{i}+{\hat{\vartheta }}^{U}_{i}){\hat{\vartheta }}_{i}^{\top }x)\\ &{}\mathrm{s.t.}&{} x\in D,\ {\hat{\vartheta }}^{L}_{i}\le {\hat{\vartheta }}_{i}^{\top }x\le {\hat{\vartheta }}^{U}_{i}, i\in \Gamma ^{+},\\ \end{array} \right. \end{aligned}$$

where \({\hat{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\hat{\vartheta }}_{i}^{T}{x}),\) \({\hat{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\hat{\vartheta }}_{i}^{T}{x}), \ i\in \Gamma ^{+}.\)

To present the last convex relaxation of Problem 2, we rewrite \(\varphi (x)\) as the following quadratic function:

$$\begin{aligned} \varphi (x)=x^{\top }{\hat{Q}}x+{\hat{a}}^{\top }x+{\hat{a}}_{0}, \end{aligned}$$

where \({\hat{a}}=\sum _{i=1}^{p}(c_{0i}d_{i}+d_{0i}c_{i}),\ {\hat{a}}_{0}=\sum _{i=1}^{p}c_{0i}d_{0i},\) and \({\hat{Q}}=\frac{1}{2}\sum _{i=1}^{p}(c_{i}d_{i}^{\top }+d_{i}c_{i}^{\top })\) is a symmetric matrix. Thus, there exists an orthonormal matrix \({\bar{P}}\in {\mathbb {R}}^{n\times n}\) with its columns \({\bar{p}}_{1},\ldots ,{\bar{p}}_{n}\in {\mathbb {R}}^{n}\), and a diagonal matrix \({\bar{D}}\in {\mathbb {R}}^{n\times n}\) with its diagonal elements \({\bar{\lambda }}_{1},\ldots ,{\bar{\lambda }}_{n}\in {\mathbb {R}}\) such that \({\hat{Q}}={\bar{P}}{\bar{D}}{\bar{P}}^{\top }\). Let \(\Lambda ^{+}=\{i=1,\ldots ,n: {\bar{\lambda }}_{i}> 0\},\ \Lambda ^{-}=\{i=1,\ldots ,n: {\bar{\lambda }}_{i}< 0\}\) and \({\bar{\vartheta }}_{i}=\sqrt{|{\bar{\lambda }}_{i}|}\cdot {\bar{p}}_{i},\ i\in \Lambda ^{+}\cup \Lambda ^{-}\); then \(x^{\top }{\hat{Q}}x=\sum _{i\in \Lambda ^{+}}({\bar{\vartheta }}_{i}^{\top }x)^{2}-\sum _{i\in \Lambda ^{-}}({\bar{\vartheta }}_{i}^{\top }x)^{2}\), and the convex relaxation of Problem 2 can be obtained as follows:

$$\begin{aligned}{{(\text {CRP6})}:}\left\{ \begin{array}{lll} &{}\min &{}{\hat{a}}^{\top }x+{\hat{a}}_{0}+\sum \limits _{i\in \Lambda ^{+}}({\bar{\vartheta }}_{i}^{\top }x)^{2}+\sum \limits _{i\in \Lambda ^{-}} ({\bar{\vartheta }}^{L}_{i}{\bar{\vartheta }}^{U}_{i} -({\bar{\vartheta }}^{L}_{i}+{\bar{\vartheta }}^{U}_{i}){\bar{\vartheta }}_{i}^{\top }x)\\ &{}\mathrm {s.t.}&{} x\in D,\ {\bar{\vartheta }}^{L}_{i}\le {\bar{\vartheta }}_{i}^{\top }x\le {\bar{\vartheta }}^{U}_{i}, i\in \Lambda ^{-},\\ \end{array} \right. \end{aligned}$$

where \({\bar{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\bar{\vartheta }}_{i}^{T}{x}),\) \({\bar{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\bar{\vartheta }}_{i}^{T}{x}), \ i\in \Lambda ^{-}.\)
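To make the construction concrete, here is a hedged sketch of assembling and solving CRP6, using `numpy` for the sign split and the `cvxpy` modeling library for the convex program (both choices ours, not the paper's). It again assumes \(D=\{x: Ax\le b\}\) and reuses the hypothetical names from the earlier sketches; note that \({\bar{\vartheta }}_{i}=\sqrt{|{\bar{\lambda }}_{i}|}\,{\bar{p}}_{i}\), so both groups of vectors are real.

```python
# Sketch: split the indefinite Q_hat by eigenvalue sign, then solve CRP6.
import cvxpy as cp

Q_hat = 0.5 * (C.T @ Dmat + Dmat.T @ C)   # (1/2) sum_i (c_i d_i^T + d_i c_i^T)
a_hat = c0 @ Dmat + d0 @ C                # sum_i (c0_i d_i + d0_i c_i)
a0 = float(c0 @ d0)                       # sum_i c0_i d0_i
lam, P = np.linalg.eigh(Q_hat)
tol = 1e-10
V_plus = P[:, lam > tol] * np.sqrt(lam[lam > tol])      # i in Lambda^+
V_minus = P[:, lam < -tol] * np.sqrt(-lam[lam < -tol])  # i in Lambda^-
th_lo, th_hi = linear_bounds(V_minus.T, A, b, box)      # vartheta^L, vartheta^U

x = cp.Variable(Q_hat.shape[0])
# Secant underestimator of -sum_{Lambda^-} (vartheta_i^T x)^2:
secant = cp.sum(th_lo * th_hi - cp.multiply(th_lo + th_hi, V_minus.T @ x))
objective = a_hat @ x + a0 + cp.sum_squares(V_plus.T @ x) + secant
constraints = [A @ x <= b, th_lo <= V_minus.T @ x, V_minus.T @ x <= th_hi]
cp.Problem(cp.Minimize(objective), constraints).solve()
```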

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Huang, B., Shen, P. An efficient global optimization algorithm for a class of linear multiplicative problems based on convex relaxation. Comp. Appl. Math. 43, 247 (2024). https://doi.org/10.1007/s40314-024-02765-9

