Epsilon-nonparallel support vector regression

Abstract

In this work, a novel method called epsilon-nonparallel support vector regression (ε-NPSVR) is proposed. The reasoning behind the nonparallel support vector machine (NPSVM) method for binary classification is extended to the prediction of numerical outputs. Our proposal constructs two nonparallel hyperplanes in such a way that each one is closer to one of the training patterns and as far as possible from the other. Two epsilon-insensitive tubes, obtained by shifting the regression function up and down by two fixed parameters, are also built to better align each hyperplane with its respective training pattern. Our proposal shares the methodological advantages of NPSVM: a kernel-based formulation can be derived directly by applying duality theory; each twin problem has the same structure as the SVR method, allowing the use of efficient optimization algorithms for fast training; it provides a generalized formulation for twin SVR; and it leads to better performance compared with the original TSVR. This latter advantage is confirmed by our experiments on well-known benchmark datasets for the regression task.


Acknowledgements

This research was partially funded by CONICYT, FONDECYT projects 1160894 and 1160738, and by the Complex Engineering Systems Institute (CONICYT, PIA, FB0816).

Corresponding author

Correspondence to Sebastián Maldonado.

Appendices

Appendix A: Proof of Proposition 1

Proof

We note that the twin problems (15)–(16) are convex and admit a Slater point; thus, the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality and can be used to derive the dual form directly. Only the proof for Formulation (15) is provided; the same reasoning yields the dual formulation of Problem (16).

The Lagrangian function associated with Problem (15) is given by

$$ \begin{array}{@{}rcl@{}} L_{1}&=&\frac{1}{2}\|\mathbf{w}_{1}\|^{2}+ \hat{c}_{1}\mathbf{e}^{\top} (\boldsymbol{\eta}_{1}+\boldsymbol{\eta}_{1}^{*}) + \boldsymbol{\alpha}_{1}^{\top} (\mathbf{y}-\mathbf{A}\mathbf{w}_{1}-b_{1}\mathbf{e} -\varepsilon \mathbf{e} -\boldsymbol{\eta}_{1} ) \\ &&+ {\boldsymbol{\alpha}_{1}^{*}}^{\top} (\mathbf{A}\mathbf{w}_{1}+b_{1}\mathbf{e} - \mathbf{y}- \varepsilon \mathbf{e} -\boldsymbol{\eta}_{1}^{*}) + c_{1} \mathbf{e}^{\top} \boldsymbol{\xi}_{1} - \boldsymbol{\gamma}_{1}^{\top} \boldsymbol{\eta}_{1}- {\boldsymbol{\beta}_{1}^{*}}^{\top} \boldsymbol{\xi}_{1}\\ &&- \boldsymbol{\beta}_{1}^{\top} (\mathbf{y}-\mathbf{A}\mathbf{w}_{1} -b_{1}\mathbf{e}+ \varepsilon_{1}\mathbf{e}+\boldsymbol{\xi}_{1}) - {\boldsymbol{\gamma}_{1}^{*}}^{\top} \boldsymbol{\eta}_{1}^{*}, \end{array} $$
(26)

where \(\boldsymbol {\alpha }_{1},\boldsymbol {\alpha }_{1}^{*},\boldsymbol {\beta }_{1},\boldsymbol {\beta }_{1}^{*},\boldsymbol {\gamma }_{1},\boldsymbol {\gamma }_{1}^{*}\in \Re ^{m}_{+}\) denote the Lagrange multipliers. This function can be rewritten as

$$ \begin{array}{@{}rcl@{}} L_{1}&=&\frac{1}{2}\|\mathbf{w}_{1}\|^{2}- \mathbf{w}_{1}^{\top} \mathbf{A}^{\top}(\boldsymbol{\alpha}_{1} - \boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\beta}_{1}) +\boldsymbol{\alpha}_{1}^{\top} (\mathbf{y}-\varepsilon\mathbf{e}) -{\boldsymbol{\alpha}_{1}^{*}}^{\top} (\mathbf{y}+\varepsilon\mathbf{e})\\ &&-b_{1}\mathbf{e}^{\top}(\boldsymbol{\alpha}_{1} - \boldsymbol{\alpha}_{1}^{*}- \boldsymbol{\beta}_{1})+ \boldsymbol{\eta}_{1}^{\top} (\hat{c}_{1}\mathbf{e} - \boldsymbol{\alpha}_{1}-\boldsymbol{\gamma}_{1})\\ &&+ {\boldsymbol{\eta}_{1}^{*}}^{\top} (\hat{c}_{1}\mathbf{e} - \boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\gamma}_{1}^{*}) -\boldsymbol{\beta}_{1}^{\top} (\mathbf{y}+\varepsilon_{1}\mathbf{e})+ \boldsymbol{\xi}_{1}^{\top} (c_{1}\mathbf{e} - \boldsymbol{\beta}_{1}-\boldsymbol{\beta}_{1}^{*}) . \end{array} $$
(27)
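As a quick sanity check of this regrouping, the identity between (26) and (27) can be verified numerically. The following minimal sketch, assuming NumPy and using illustrative variable names and random data (not part of the paper), evaluates both expressions at an arbitrary nonnegative choice of slacks and multipliers and confirms they coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 4
A = rng.normal(size=(m, n))
y = rng.normal(size=m)
e = np.ones(m)

# primal variables and nonnegative slacks
w1 = rng.normal(size=n)
b1 = rng.normal()
eta1, eta1s, xi1 = rng.uniform(size=m), rng.uniform(size=m), rng.uniform(size=m)

# nonnegative Lagrange multipliers and positive parameters
a1, a1s, be1, be1s, g1, g1s = (rng.uniform(size=m) for _ in range(6))
c1, c1h, eps, eps1 = 0.7, 0.9, 0.1, 0.2   # c1, \hat{c}_1, eps, eps_1

# Lagrangian as written in (26)
L26 = (0.5 * w1 @ w1 + c1h * e @ (eta1 + eta1s)
       + a1 @ (y - A @ w1 - b1 * e - eps * e - eta1)
       + a1s @ (A @ w1 + b1 * e - y - eps * e - eta1s)
       + c1 * e @ xi1 - g1 @ eta1 - be1s @ xi1
       - be1 @ (y - A @ w1 - b1 * e + eps1 * e + xi1) - g1s @ eta1s)

# regrouped form (27)
L27 = (0.5 * w1 @ w1 - w1 @ (A.T @ (a1 - a1s - be1))
       + a1 @ (y - eps * e) - a1s @ (y + eps * e)
       - b1 * (e @ (a1 - a1s - be1))
       + eta1 @ (c1h * e - a1 - g1) + eta1s @ (c1h * e - a1s - g1s)
       - be1 @ (y + eps1 * e) + xi1 @ (c1 * e - be1 - be1s))

print(np.isclose(L26, L27))   # True: (26) and (27) agree
```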

Then, the KKT conditions for Formulation (15) are given by

$$ \begin{array}{@{}rcl@{}} \mathbf{w}_{1}-\mathbf{A}^{\top}(\boldsymbol{\alpha}_{1} - \boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\beta}_{1}) &= &\mathbf{0}, \end{array} $$
(28)
$$ \begin{array}{@{}rcl@{}} \mathbf{e}^{\top}(\boldsymbol{\alpha}_{1} - {\boldsymbol{\alpha}_{1}^{*}}- \boldsymbol{\beta}_{1})&=&0, \end{array} $$
(29)
$$ \begin{array}{@{}rcl@{}} \hat{c}_{1}\mathbf{e} - \boldsymbol{\alpha}_{1}-\boldsymbol{\gamma}_{1}&=&\mathbf{0}, \end{array} $$
(30)
$$ \begin{array}{@{}rcl@{}} \hat{c}_{1}\mathbf{e} - \boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\gamma}_{1}^{*}&=&\mathbf{0}, \end{array} $$
(31)
$$ \begin{array}{@{}rcl@{}} c_{1}\mathbf{e} - \boldsymbol{\beta}_{1}-\boldsymbol{\beta}_{1}^{*}&=&\mathbf{0}, \end{array} $$
(32)
$$ \begin{array}{@{}rcl@{}} \mathbf{y}-(\mathbf{A}\mathbf{w}_{1}+b_{1}\mathbf{e}) - \varepsilon \mathbf{e} -\boldsymbol{\eta}_{1}&\le&\mathbf{0}, \end{array} $$
(33)
$$ \begin{array}{@{}rcl@{}} \mathbf{A}\mathbf{w}_{1}+b_{1}\mathbf{e} - \mathbf{y}- \varepsilon \mathbf{e} -\boldsymbol{\eta}_{1}^{*}&\le&\mathbf{0}, \end{array} $$
(34)
$$ \begin{array}{@{}rcl@{}} \mathbf{y}-\mathbf{A}\mathbf{w}_{1} -b_{1}\mathbf{e}+ \varepsilon_{1}\mathbf{e}+\boldsymbol{\xi}_{1} &\ge&\mathbf{0}, \end{array} $$
(35)
$$ \begin{array}{@{}rcl@{}} \boldsymbol{\eta}_{1},\boldsymbol{\eta}_{1}^{*},\boldsymbol{\xi}_{1},\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{1}^{*},\boldsymbol{\beta}_{1},\boldsymbol {\beta}_{1}^{*},\boldsymbol{\gamma}_{1},\boldsymbol{\gamma}_{1}^{*}&\ge&\mathbf{0}, \end{array} $$
(36)
$$ \begin{array}{@{}rcl@{}} \boldsymbol{\alpha}_{1}^{\top}(\mathbf{y}-\mathbf{A}\mathbf{w}_{1}-b_{1}\mathbf{e}- \varepsilon \mathbf{e} -\boldsymbol{\eta}_{1})&=&{ 0}, \end{array} $$
(37)
$$ \begin{array}{@{}rcl@{}} {\boldsymbol{\alpha}_{1}^{*}}^{\top}(\mathbf{A}\mathbf{w}_{1}+b_{1}\mathbf{e} - \mathbf{y} - \varepsilon \mathbf{e} -\boldsymbol{\eta}_{1}^{*})&=&0, \end{array} $$
(38)
$$ \begin{array}{@{}rcl@{}} \boldsymbol{\beta}_{1}^{\top} (\mathbf{y}-\mathbf{A}\mathbf{w}_{1} -b_{1}\mathbf{e}+ \varepsilon_{1}\mathbf{e}+\boldsymbol{\xi}_{1} ) &=&0, \end{array} $$
(39)
$$ \begin{array}{@{}rcl@{}} {\boldsymbol{\beta}_{1}^{*}}^{\top} \boldsymbol{\xi}_{1} =0,\quad \boldsymbol{\gamma}_{1}^{\top} \boldsymbol{\eta}_{1}=0,\quad {\boldsymbol{\gamma}_{1}^{*}}^{\top} \boldsymbol{\eta}_{1}^{*}&=&0. \end{array} $$
(40)

By using (28)–(32) in (27), one has

$$ \begin{array}{@{}rcl@{}} L_{1}&=&-\frac{1}{2}\|\mathbf{A}^{\top}(\boldsymbol{\alpha}_{1}-\boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\beta}_{1})\|^{2} + \boldsymbol{\alpha}_{1}^{\top} (\mathbf{y}-{\varepsilon}\mathbf{e})\\&&- {\boldsymbol{\alpha}_{1}^{*}}^{\top} ({\varepsilon}\mathbf{e} + \mathbf{y}) -\boldsymbol{\beta}_{1}^{\top}(\mathbf{y}+\varepsilon_{1}\mathbf{e}). \end{array} $$
(41)

On the other hand, from (30), (31), (32) and (36) it follows that

$$ \begin{array}{@{}rcl@{}} \mathbf{0}\le \boldsymbol{\alpha}_{1} \le \hat{c}_{1}\mathbf{e},\quad \mathbf{0}\le \boldsymbol{\alpha}_{1}^{*}\le \hat{c}_{1}\mathbf{e},\quad \mathbf{0}\le \boldsymbol{\beta}_{1}\le c_{1}\mathbf{e}. \end{array} $$
(42)

Hence, taking into account the relations (41) and (42), we obtain the dual formulation of problem (15)

$$ \begin{array}{@{}rcl@{}} &&\min\limits_{\boldsymbol{\alpha}_{1}, \boldsymbol{\alpha}_{1}^{*},\boldsymbol{\beta}_{1} } \frac{1}{2}\|\mathbf{A}^{\top}(\boldsymbol{\alpha}_{1} - \boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\beta}_{1}) \|^{2} - \mathbf{y}^{\top} (\boldsymbol{\alpha}_{1}-\boldsymbol{\alpha}_{1}^{*}-\boldsymbol{\beta}_{1}) \\&&\quad\quad\quad+\varepsilon\mathbf{e}^{\top} (\boldsymbol{\alpha}_{1}+\boldsymbol{\alpha}_{1}^{*}) +\varepsilon_{1}\mathbf{e}^{\top} \boldsymbol{\beta}_{1}\\ &&\quad\text{s.t.}\quad \mathbf{e}^{\top}(\boldsymbol{\alpha}_{1} - {\boldsymbol{\alpha}_{1}^{*}}- \boldsymbol{\beta}_{1})=0,\\ && \quad\quad\quad\mathbf{0}\le \boldsymbol{\alpha}_{1} , \boldsymbol{\alpha}_{1}^{*}\le \hat{c}_{1}\mathbf{e},\ \mathbf{0}\le \boldsymbol{\beta}_{1}\le c_{1}\mathbf{e}. \end{array} $$

In the same way, the dual formulation of problem (16) is given by

$$ \begin{array}{@{}rcl@{}} &&\min\limits_{\boldsymbol{\alpha}_{2}, \boldsymbol{\alpha}_{2}^{*},\boldsymbol{\beta}_{2} } \frac{1}{2}\|\mathbf{A}^{\top}(\boldsymbol{\alpha}_{2} - \boldsymbol{\alpha}_{2}^{*}+\boldsymbol{\beta}_{2}) \|^{2} - \mathbf{y}^{\top} (\boldsymbol{\alpha}_{2}-\boldsymbol{\alpha}_{2}^{*}+\boldsymbol{\beta}_{2}) \\&&\quad\quad\quad+\varepsilon\mathbf{e}^{\top} (\boldsymbol{\alpha}_{2}+\boldsymbol{\alpha}_{2}^{*})+\varepsilon_{2}\mathbf{e}^{\top} \boldsymbol{\beta}_{2} \\ && \quad\text{s.t.}\quad \mathbf{e}^{\top}(\boldsymbol{\alpha}_{2} - {\boldsymbol{\alpha}_{2}^{*}} + \boldsymbol{\beta}_{2})=0, \\ && \quad\quad\quad\mathbf{0}\le \boldsymbol{\alpha}_{2}, \boldsymbol{\alpha}_{2}^{*}\le \hat{c}_{2}\mathbf{e},\ \mathbf{0}\le \boldsymbol{\beta}_{2}\le c_{2}\mathbf{e}. \end{array} $$
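Both duals are box-constrained quadratic programs with a single equality constraint, i.e., the same structure as the standard ε-SVR dual, so off-the-shelf QP solvers apply. Below is a minimal sketch for the dual of problem (15), assuming cvxpy is available; the function name, arguments, and solver defaults are illustrative, not part of the paper.

```python
import numpy as np
import cvxpy as cp

def npsvr_dual_15(A, y, c1, c1_hat, eps, eps1):
    """Solve the dual of problem (15) and recover w1 via the
    stationarity condition (28).  Names are illustrative."""
    m = A.shape[0]
    e = np.ones(m)
    a1 = cp.Variable(m)    # alpha_1
    a1s = cp.Variable(m)   # alpha_1^*
    be1 = cp.Variable(m)   # beta_1
    u = a1 - a1s - be1
    objective = (0.5 * cp.sum_squares(A.T @ u) - y @ u
                 + eps * e @ (a1 + a1s) + eps1 * e @ be1)
    constraints = [e @ u == 0,
                   a1 >= 0, a1 <= c1_hat,
                   a1s >= 0, a1s <= c1_hat,
                   be1 >= 0, be1 <= c1]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    w1 = A.T @ u.value     # w1 = A^T (alpha_1 - alpha_1^* - beta_1)
    return w1, a1.value, a1s.value, be1.value
```

The dual of problem (16) would be solved analogously, with the β₂ terms entering with the opposite sign, as shown in the formulation above.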

Appendix B: Proof of Proposition 2

Proof

(a)

    The first part follows from (28). The other follows from the KKT conditions of problem (16).

(b)

Suppose that α1i > 0. Then, from (37), we obtain

    $$ y_{i}=(\mathbf{A}\mathbf{w}_{1})_{i}+b_{1}+ \varepsilon +{ \eta}_{1i}, $$

where (Aw1)i denotes the i-th component of Aw1. Combining this with (38), we have

    $$ 0={{\alpha}_{1i}^{*}}((\mathbf{A}\mathbf{w}_{1})_{i}+b_{1}- y_{i}- \varepsilon -{\eta}_{1i}^{*})=-{\alpha}_{1i}^{*}(2\varepsilon +{\eta}_{1i}+{\eta}_{1i}^{*}). $$

    Since \(\boldsymbol {\eta }_{1},\boldsymbol {\eta }_{1}^{*}\ge 0 \) and ε > 0, we obtain that necessarily \({\alpha }_{1i}^{*}=0.\) The same argument can be used to prove that \({\alpha }_{1i}^{*}>0\) implies that α1i = 0, and similar arguments apply to the case \({\alpha }_{2i}\cdot {\alpha }_{2i}^{*}=0.\)

(c)

We note first that \((\mathbf {A}\mathbf {w}_{1})_{j}=\mathbf {w}_{1}^{\top } \mathbf {x}_{j}.\) Assume that \({ \alpha }_{1j}\in (0,\hat {c}_{1}).\) From (30) we have that γ1j > 0 and then, by (40), η1j must be equal to zero. Therefore, from (37) we obtain

    $$ b_{1}=y_{j}-(\mathbf{A}\mathbf{w}_{1})_{j}-\varepsilon=y_{j}-\mathbf{w}_{1}^{\top} \mathbf{x}_{j}-\varepsilon. $$

The case \({ \alpha }_{1j}^{*}\in (0,\hat {c}_{1})\) is handled analogously. A numerical recovery of b1 along these lines is sketched after this list.

(d)

    The same arguments developed above can be used in this case.

(e)

Assume β1j ∈ (0, c1) for some j ∈ {1, …, m}. From (32) we have that \({ \beta }_{1j}^{*}>0\), and then it follows from (40) that ξ1j = 0. Finally, from (39) we obtain directly that

    $$ y_{j}-\mathbf{w}_{1}^{\top} \mathbf{x}_{j}-b_{1}+\varepsilon_{1}=0. $$
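As referenced in part (c), the bias b1 can be recovered numerically from the dual solution. The following is a minimal sketch, assuming NumPy; the helper name and the averaging over all qualifying indices are illustrative choices rather than part of the proposition:

```python
import numpy as np

def recover_b1(A, y, w1, alpha1, c1_hat, eps, tol=1e-8):
    """Proposition 2(c): b1 = y_j - w1.x_j - eps for any j with
    alpha_1j strictly inside (0, c1_hat).  Averaging over all such
    indices is an illustrative numerical stabilization."""
    inside = (alpha1 > tol) & (alpha1 < c1_hat - tol)
    if not np.any(inside):
        raise ValueError("no multiplier strictly inside (0, c1_hat)")
    return float(np.mean(y[inside] - A[inside] @ w1 - eps))
```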

Cite this article

Carrasco, M., López, J. & Maldonado, S. Epsilon-nonparallel support vector regression. Appl Intell 49, 4223–4236 (2019). https://doi.org/10.1007/s10489-019-01498-1
