Abstract
In this paper, a recurrent neural network representation for solving quadratic programming problems with fuzzy parameters (FQP) is given. The motivation of the paper is to design a new, effective one-layer-structure neural network model for solving the FQP. To the best of our knowledge, no neural network study for the FQP exists in the literature. Here, we transform the FQP into a bi-objective problem. The bi-objective problem is then reduced to a weighting problem, and its Lagrangian dual is constructed. On this basis, we propose a neural network model to solve the FQP. Finally, some illustrative examples are given to show the effectiveness of the proposed approach.
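The neurodynamic idea underlying such models can be illustrated on the crisp (non-fuzzy) QP. The sketch below is a minimal projection-type recurrent network integrated by explicit Euler steps; it is not the paper's one-layer model for the FQP, and the step size, iteration count, and test data are illustrative assumptions.

```python
import numpy as np

# A minimal sketch (not the paper's one-layer model for the FQP): a
# projection-type recurrent network for the crisp QP
#     min  c^T x + (1/2) x^T H x   s.t.  x >= 0,
# with state dynamics  dx/dt = P_+(x - alpha * grad f(x)) - x,
# where P_+ is the projection onto the nonnegative orthant,
# integrated by explicit Euler.

def qp_neural_network(H, c, alpha=0.05, steps=5000):
    x = np.zeros(len(c))
    for _ in range(steps):
        grad = c + H @ x                          # gradient of the objective
        x = x + alpha * (np.maximum(x - alpha * grad, 0.0) - x)
    return x

H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x_star = qp_neural_network(H, c)                  # converges to (1, 2)
```

An equilibrium of the dynamics satisfies \(x=P_+(x-\alpha\nabla f(x))\), which is exactly the KKT condition of the QP; for the assumed data the unconstrained minimizer \((1,2)\) is feasible, so the trajectory settles there.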
References
Abdel-Malek, L. L., & Areeratchakul, N. (2007). A quadratic programming approach to the multi-product newsvendor problem with side constraints. European Journal of Operational Research, 176(8), 55–61.
Ammar, E., & Khalifa, H. A. (2003). Fuzzy portfolio optimization a quadratic programming approach. Chaos, Solitons & Fractals, 18, 1045–1054.
Bazaraa, M. S., Shetty, C., & Sherali, H. D. (1979). Nonlinear programming, theory and algorithms. New York: Wiley.
Chen, Y.-H., & Fang, S.-C. (2000). Neurocomputing with time delay analysis for solving convex quadratic programming problems. IEEE Transactions on Neural Networks, 11, 230–240.
Cruz, C., Silva, R. C., & Verdegay, J. L. (2011). Extending and relating different approaches for solving fuzzy quadratic problems. Fuzzy Optimization and Decision Making, 10(3), 193–210.
Effati, S., Pakdaman, M., & Ranjbar, M. (2011). A new fuzzy neural network model for solving fuzzy linear programming problems and its applications. Neural Computing & Applications, 20, 1285–1294.
Effati, S., Mansoori, A., & Eshaghnezhad, M. (2015). An efficient projection neural network for solving bilinear programming problems. Neurocomputing, 168, 1188–1197.
Effati, S., & Ranjbar, M. (2011). A novel recurrent nonlinear neural network for solving quadratic programming problems. Applied Mathematical Modelling, 35, 1688–1695.
Eshaghnezhad, M., Effati, S., & Mansoori, A. (2016). A neurodynamic model to solve nonlinear pseudo-monotone projection equation and its applications. IEEE Transactions on Cybernetics. doi:10.1109/TCYB.2016.2611529.
Friedman, M., Ma, M., & Kandel, A. (1999). Numerical solution of fuzzy differential and integral equations. Fuzzy Sets and Systems, 106, 35–48.
Hopfield, J. J., & Tank, D. W. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52, 141–152.
Khalil, H. K. (1996). Nonlinear systems. Michigan: Prentice-Hall.
Liu, S. T. (2009). A revisit to quadratic programming with fuzzy parameters. Chaos, Solitons & Fractals, 41, 1401–1407.
Lupulescu, V. (2009). On a class of fuzzy functional differential equations. Fuzzy Sets and Systems, 160, 1547–1562.
Mansoori, A., Effati, S., & Eshaghnezhad, M. (2016). An efficient recurrent neural network model for solving fuzzy non-linear programming problems. Applied Intelligence. doi:10.1007/s10489-016-0837-4.
Miettinen, K. M. (1999). Non-linear multiobjective optimization. Boston: Kluwer Academic.
Panigrahi, M., Panda, G., & Nanda, S. (2008). Convex fuzzy mapping with differentiability and its application in fuzzy optimization. European Journal of Operational Research, 185(1), 47–62.
Petersen, J. A. M., & Bodson, M. (2006). Constrained quadratic programming techniques for control allocation. IEEE Transactions on Control Systems Technology, 14(9), 1–8.
Silva, R. C., Cruz, C., & Verdegay, J. L. (2013). Fuzzy costs in quadratic programming problems. Fuzzy Optimization and Decision Making, 12(3), 231–248.
Wang, G., & Wu, C. (2003). Directional derivatives and sub-differential of convex fuzzy mappings and application in convex fuzzy programming. Fuzzy Sets and Systems, 138, 559–591.
Wu, H.-C. (2003). Saddle point optimality conditions in fuzzy optimization problems. Fuzzy Optimization and Decision Making, 2(3), 261–273.
Wu, H.-C. (2004). Evaluate fuzzy optimization problems based on biobjective programming problems. Computers and Mathematics with Applications, 47, 893–902.
Wu, H.-C. (2004). Duality theory in fuzzy optimization problems. Fuzzy Optimization and Decision Making, 3(4), 345–365.
Wu, X.-L., & Liu, Y.-K. (2012). Optimizing fuzzy portfolio selection problems by parametric quadratic programming. Fuzzy Optimization and Decision Making, 11(4), 411–449.
Xia, Y., & Wang, J. (2000). A recurrent neural network for solving linear projection equations. Neural Networks, 13, 337–350.
Zhong, Y., & Shi, Y. (2002). Duality in fuzzy multi-criteria and multi-constraint level linear programming: A parametric approach. Fuzzy Sets and Systems, 132, 335–346.
Acknowledgements
The authors wish to express their special thanks to the anonymous referees and the editor for their valuable suggestions.
Appendices
Appendix 1: Some results on fuzzy calculus
Lemma 7.1
Let \({\tilde{f}}, {\tilde{g}}\) be convex fuzzy mappings defined on \(C\subseteq {\varOmega }\) with \(int\ C\ne \emptyset \). Then \(\lambda {\tilde{f}}\) (\(\lambda >0\)) and \({\tilde{f}}+{\tilde{g}}\) are convex fuzzy mappings on \(C\) and, for every \(x\in int\ C\),
$$\partial (\lambda {\tilde{f}})(x)=\lambda \,\partial {\tilde{f}}(x),\qquad \partial ({\tilde{f}}+{\tilde{g}})(x)=\partial {\tilde{f}}(x)+\partial {\tilde{g}}(x).$$
Proof
From Theorem 2.14, \(\lambda {\tilde{f}}\) and \({\tilde{f}}+{\tilde{g}}\) are convex fuzzy mappings. The rest of the proof follows from Theorem 23.8 in Zhong and Shi (2002). \(\square \)
Theorem 7.2
Let \({\tilde{f}}, {\tilde{g}}\) be convex fuzzy mappings defined on \(C\subseteq {\varOmega }\) with \(int\ C\ne \emptyset \). If \({\tilde{f}}, {\tilde{g}}\) are differentiable at \(x^*\), then \(\lambda {\tilde{f}}\) (\(\lambda >0\)) and \({\tilde{f}}+{\tilde{g}}\) are also differentiable at \(x^*\), i.e.,
$$\nabla (\lambda {\tilde{f}})(x^*)=\lambda \nabla {\tilde{f}}(x^*),\qquad \nabla ({\tilde{f}}+{\tilde{g}})(x^*)=\nabla {\tilde{f}}(x^*)+\nabla {\tilde{g}}(x^*).$$
Proof
From Theorem 2.14, \(\lambda {\tilde{f}}\) and \({\tilde{f}}+{\tilde{g}}\) are convex fuzzy mappings. Using Definition 2.11 and Lemma 7.1, the proof is trivial. \(\square \)
Appendix 2: Some results on FQP
Lemma 7.3
If the fuzzy matrix \({\tilde{H}}\) is positive semi-definite and symmetric and \(x\in {\mathbb {R}}^n_+\), then \(x^T{\tilde{H}}x\) is a fuzzy number and,
$$\left[x^T{\tilde{H}}x\right]_\alpha =\left[\sum_{i=1}^{n}\sum_{j=1}^{n}\underline{h_{ij}}(\alpha )x_ix_j,\ \sum_{i=1}^{n}\sum_{j=1}^{n}\overline{h_{ij}}(\alpha )x_ix_j\right],\quad \alpha \in [0,1],$$
where \(x=(x_1,x_2,\ldots ,x_n)^T, {\tilde{H}}=(\{(\underline{h_{ij}}(\alpha ), \overline{h_{ij}}(\alpha ), \alpha ): \alpha \in [0,1]\})_{n\times n}\).
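As a numeric illustration of Lemma 7.3 (a sketch under the assumption, not made in the lemma itself, that each entry \(\tilde h_{ij}\) is a triangular fuzzy number, so its \(\alpha \)-cut endpoints are affine in \(\alpha \)):

```python
import numpy as np

# Sketch illustrating Lemma 7.3: each entry of the fuzzy matrix H~ is
# taken to be a triangular fuzzy number (lo_ij, mid_ij, hi_ij) -- an
# illustrative assumption.  For x >= 0, the alpha-cut of the fuzzy
# number x^T H~ x is the interval [x^T H_lo(alpha) x, x^T H_hi(alpha) x].

def quad_form_alpha_cut(H_lo, H_mid, H_hi, x, alpha):
    lower = H_lo + alpha * (H_mid - H_lo)   # entrywise left endpoint at alpha
    upper = H_hi - alpha * (H_hi - H_mid)   # entrywise right endpoint at alpha
    return x @ lower @ x, x @ upper @ x

H_lo  = np.array([[1.0, 0.0], [0.0, 1.0]])
H_mid = np.array([[2.0, 0.5], [0.5, 2.0]])
H_hi  = np.array([[3.0, 1.0], [1.0, 3.0]])
x = np.array([1.0, 1.0])

lo0, hi0 = quad_form_alpha_cut(H_lo, H_mid, H_hi, x, 0.0)  # widest cut: [2, 8]
lo1, hi1 = quad_form_alpha_cut(H_lo, H_mid, H_hi, x, 1.0)  # core: [5, 5]
```

At \(\alpha =1\) both endpoints collapse to the crisp value \(x^T H_{mid}\, x\), as expected for a fuzzy number.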
Proof
The proof follows from Definition 3.1. \(\square \)
Lemma 7.4
Let the fuzzy matrix \({\tilde{H}}\) be positive semi-definite and symmetric and let \(x\in {\mathbb {R}}^n_+\); then \({\tilde{h}}(x)=x^T{\tilde{H}}x\) is a convex fuzzy mapping.
Proof
The proof follows from Definition 3.1, Theorem 2.13, and Lemma 7.3. \(\square \)
Now, consider the FQP defined in (1). Here, we are going to prove some results for the FQP.
Lemma 7.5
Let the fuzzy matrix \({\tilde{H}}\) be positive semi-definite and symmetric; then \({\tilde{f}}(x)={\tilde{c}}^Tx+\frac{1}{2}x^T{\tilde{H}}x\) in (1) is a convex fuzzy mapping.
Proof
Since \(x\ge 0\), \({\tilde{c}}^Tx\) is a convex fuzzy mapping. Using Lemma 7.4 and Theorem 2.14, the proof is complete. \(\square \)
Remark 7.6
Since in FQP (1), \({\tilde{f}}(x)\) is a convex fuzzy mapping and \(T=\{x:\ x\ge 0, {\tilde{A}}x\le {\tilde{b}}\}\) is a convex feasible set, FQP (1) is a convex fuzzy programming problem.
Remark 7.7
As in the crisp case, we say a fuzzy programming problem is convex if both the objective function and the feasible region are convex.
Lemma 7.8
The fuzzy mapping \({\tilde{f}}(x)={\tilde{c}}^Tx+\frac{1}{2}x^T{\tilde{H}}x\) is differentiable on \(int\ {\mathbb {R}}^n_+\) and,
$$\nabla {\tilde{f}}(x)={\tilde{c}}+{\tilde{H}}x.$$
Proof
The proof follows from Theorem 7.2 and Lemma 7.5. \(\square \)
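Endpoint-wise, the gradient formula of Lemma 7.8 can be sanity-checked numerically: the lower \(\alpha \)-cut function \(f_{lo}(x)=c_{lo}^Tx+\frac{1}{2}x^TH_{lo}x\) should have gradient \(c_{lo}+H_{lo}x\). The data below are illustrative assumptions.

```python
import numpy as np

# Sketch: finite-difference check of the Lemma 7.8 gradient formula,
# applied endpoint-wise to the (crisp) lower alpha-cut function
#   f_lo(x) = c_lo^T x + 0.5 x^T H_lo x,  grad f_lo(x) = c_lo + H_lo x
# (H_lo symmetric).  c_lo, H_lo, and x are assumed example data.

def f_lo(x, c_lo, H_lo):
    return c_lo @ x + 0.5 * x @ H_lo @ x

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
    return g

c_lo = np.array([1.0, -2.0])
H_lo = np.array([[4.0, 1.0], [1.0, 3.0]])
x = np.array([0.5, 1.5])                          # interior of R^2_+

analytic = c_lo + H_lo @ x                        # (4.5, 3.0)
numeric = num_grad(lambda y: f_lo(y, c_lo, H_lo), x)
```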
Appendix 3: Proof of Theorem 3.2
Proof
Since \({\bar{x}}\) is a local optimal solution, there exists a neighborhood \(N({\bar{x}})\) around \({\bar{x}}\), such that:
$${\tilde{f}}({\bar{x}})\le {\tilde{f}}(x),\quad \forall x\in N({\bar{x}})\cap T,$$
i.e., according to Definition 2.3, we get,
$$\underline{f({\bar{x}})}(\alpha )\le \underline{f(x)}(\alpha )\quad \text{and}\quad \overline{f({\bar{x}})}(\alpha )\le \overline{f(x)}(\alpha ),\quad \forall \alpha \in [0,1],\ \forall x\in N({\bar{x}})\cap T.$$
By contradiction, suppose that \({\bar{x}}\) is not a global optimal solution, so that \({\tilde{f}}(x^*)<{\tilde{f}}({\bar{x}})\) for some \(x^*\in T\), where \(T=\{x:\ x\ge 0, {\tilde{A}}x\le {\tilde{b}}\}\) is the feasible set. In other words, we have:
From the convexity of \({\tilde{f}}\), for all \(\lambda \in (0,1)\) we have:
$${\tilde{f}}(\lambda x^*+(1-\lambda ){\bar{x}})\le \lambda {\tilde{f}}(x^*)+(1-\lambda ){\tilde{f}}({\bar{x}})<\lambda {\tilde{f}}({\bar{x}})+(1-\lambda ){\tilde{f}}({\bar{x}})={\tilde{f}}({\bar{x}}).$$
But for \(\lambda >0\) sufficiently small, \(\lambda x^*+(1-\lambda ){\bar{x}}\in N({\bar{x}})\). Hence, the above inequalities contradict (26), and we conclude that \({\bar{x}}\) is a global optimal solution. Now suppose that \({\bar{x}}\) is not the unique global optimal solution, so that there exists \({\hat{x}}\in T\), \({\hat{x}}\ne {\bar{x}}\), such that \({\tilde{f}}({\hat{x}})={\tilde{f}}({\bar{x}})\), i.e.,
By the strict convexity,
$${\tilde{f}}\left(\frac{1}{2}{\hat{x}}+\frac{1}{2}{\bar{x}}\right)<\frac{1}{2}{\tilde{f}}({\hat{x}})+\frac{1}{2}{\tilde{f}}({\bar{x}})={\tilde{f}}({\bar{x}}).$$
By the convexity of \(T\), \(\frac{1}{2}{\hat{x}}+\frac{1}{2}{\bar{x}}\in T\), and the above inequalities violate the global optimality of \({\bar{x}}\). Hence, \({\bar{x}}\) is the unique global minimum. \(\square \)
Mansoori, A., Effati, S. & Eshaghnezhad, M. A neural network to solve quadratic programming problems with fuzzy parameters. Fuzzy Optim Decis Making 17, 75–101 (2018). https://doi.org/10.1007/s10700-016-9261-9