
FCPN Approach for Uncertain Nonlinear Dynamical System with Unknown Disturbance


Abstract

In this work, we use a fuzzy counter-propagation network (FCPN) model to control different discrete-time, uncertain nonlinear dynamic systems with unknown disturbances. Fuzzy competitive learning (FCL) is used to process and adjust the weight connections between the instar and the outstar of the network. The FCL paradigm adopts the principle of learning used for calculation of the Best Matched Node (BMN) in the instar–outstar network, and it provides control of discrete-time uncertain nonlinear dynamic systems having dead zone and backlash. Performance measures such as mean absolute error (MAE), mean square error (MSE), and best fit rate of the FCPN are compared with those of a dynamic network (DN) and a back-propagation network (BPN), and the comparison shows that the proposed FCPN method gives better results than DN and BPN. The effectiveness of the proposed FCPN over BPN and DN is validated through simulations on different discrete-time uncertain nonlinear dynamic systems and on Mackey–Glass univariate time-series data with unknown disturbances.


References

  1. Soderstrom, T., Stoica, P.: System Identification. Prentice Hall, New York (1989)

  2. Billings, S.A.: Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains. Wiley, Chichester (2013)

  3. Liu, M.: Decentralized control of robot manipulators: nonlinear and adaptive approaches. IEEE Trans. Autom. Control 44, 357–366 (1999)

  4. Lin, C.M., Ting, A.B., Li, M.C.: Neural network based robust adaptive control for a class of nonlinear systems. Neural Comput. Appl. 20, 557–563 (2011)

  5. Rivals, I., Personnaz, L.: Nonlinear internal model control using neural networks: application to processes with delay and design issues. IEEE Trans. Neural Netw. 11, 80–90 (2000)

  6. Kanellakopoulos, I., Kokotovic, P.V., Morse, A.S.: Systematic design of adaptive controllers for feedback linearizable systems. IEEE Trans. Autom. Control 36, 1241–1253 (1991)

  7. Kokotovic, P.V.: The joy of feedback: nonlinear and adaptive. IEEE Control Syst. Mag. 12, 7–17 (1992)

  8. Elmali, H., Olgac, N.: Robust output tracking control of nonlinear MIMO systems via sliding mode technique. Automatica 28, 145–151 (1992)

  9. Sadati, N., Ghadami, R.: Adaptive multi-model sliding mode control of robotic manipulators using soft computing. Neurocomputing 17, 2702–2710 (2008)

  10. Kroll, A., Schulte, H.: Benchmark problems for nonlinear system identification and control using soft computing methods: need and overview. Appl. Soft Comput. 25, 496–513 (2014)

  11. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366 (1989)

  12. Bortoletti, A., Di Fiore, C., Fanelli, S., Zellini, P.: A new class of quasi-Newtonian methods for optimal learning in MLP-networks. IEEE Trans. Neural Netw. 14, 263–273 (2003)

  13. Lera, G., Pinzolas, M.: Neighborhood based Levenberg-Marquardt algorithm for neural network training. IEEE Trans. Neural Netw. 13, 1200–1203 (2002)

  14. Alfaro-Ponce, M., Arguelles, A., Chairez, I.: Continuous neural identifier for certain nonlinear systems with time in the input signal. Neural Netw. 60, 53–66 (2014)

  15. Wei, Q., Liu, D.: Neural-network-based adaptive optimal tracking control scheme for discrete-time nonlinear systems with approximation errors. Neurocomputing 149, 106–115 (2015)

  16. Gao, S., Dong, H., Ning, B., Chen, L.: Neural adaptive control for uncertain nonlinear system with input saturation: state transformation based output feedback. Neurocomputing 159, 117–125 (2015)

  17. Peng, Z., Wang, D., Zhang, H., Lin, Y.: Cooperative output feedback adaptive control of uncertain nonlinear multi-agent systems with a dynamic leader. Neurocomputing 149, 132–141 (2015)

  18. Zhang, T., Xia, X.: Decentralized adaptive fuzzy output feedback control of stochastic nonlinear large-scale systems with dynamic uncertainties. Inf. Sci. 315, 17–18 (2015)

  19. Song, J., He, S.: Finite-time robust passive control for a class of uncertain Lipschitz nonlinear systems with time delays. Neurocomputing 159, 275–281 (2015)

  20. Cui, G., Wang, Z., Zhuang, G., Chu, Y.: Adaptive centralized NN control of large-scale stochastic nonlinear time-delay systems with unknown dead-zone inputs. Neurocomputing 158, 194–203 (2015)

  21. Zhou, J., Er, M.J., Veluvolu, K.C.: Adaptive output control of nonlinear time-delayed systems with uncertain dead-zone input. IEEE, pp. 5312–5317 (2006)

  22. Zhang, T.P., Ge, S.S.: Adaptive dynamic surface control of nonlinear systems with unknown dead zone in pure feedback form. Automatica 44, 1895–1903 (2008)

  23. Liu, Y.-J., Zhou, N.: Observer-based adaptive fuzzy-neural control for a class of uncertain nonlinear systems with unknown dead-zone input. ISA Trans. 49, 462–469 (2010)

  24. Ibrir, S., Xie, W.F., Su, C.-Y.: Adaptive tracking of nonlinear systems with non-symmetric dead-zone input. Automatica 43, 522–530 (2007)

  25. Hu, Q., Ma, G., Xie, L.: Robust and adaptive variable structure output feedback control of uncertain systems with input nonlinearity. Automatica 44, 552–559 (2008)

  26. Zhou, J., Wen, C., Zhang, Y.: Adaptive output control of nonlinear systems with uncertain dead-zone nonlinearity. IEEE Trans. Autom. Control 51, 504–511 (2006)

  27. Zhang, X., Parisini, T.: Adaptive fault-tolerant control of nonlinear uncertain systems: an information-based diagnostic approach. IEEE Trans. Autom. Control 49, 1259–1274 (2004)

  28. Zhou, S., Feng, G., Feng, C.-B.: Robust control for a class of uncertain nonlinear systems: adaptive fuzzy approach based on backstepping. Fuzzy Sets Syst. 151, 1–20 (2003)

  29. Lewis, F.L., Campos, J., Selmic, R.: Neuro-Fuzzy Control of Industrial Systems with Actuator Nonlinearities. Society for Industrial and Applied Mathematics, Philadelphia (2002)

  30. Hecht-Nielsen, R.: Theory of the back propagation neural network. Neural Netw. 1, 593–605 (1989)

  31. Hagan, M.T., Demuth, H.B., Beale, M.H., De Jesus, O.: Neural Network Design, 2nd edn. Cengage Learning (2014)

  32. Chang, F.J., Chen, Y.-C.: A counter propagation fuzzy neural network modeling approach to real time streamflow prediction. J. Hydrol. 245, 153–164 (2001)

  33. Dwivedi, A., Bose, N.S.C., Kumar, A., Kandula, P., Mishra, D., Kalra, P.K.: A novel hybrid image compression technique: wavelet-MFOCPN. In: Proc. of 9th SID, pp. 492–495 (2006)

  34. Burges, C.J.C., Simard, P., Malvar, H.S.: Improving Wavelet Image Compression with Neural Networks. Microsoft Research, Redmond (2001)

  35. Woods, D.: Back and counter propagation aberrations. In: IEEE International Conference on Neural Networks, pp. 473–479 (1988)

  36. Mishra, D., Chandra Bose, N., Tolambiya, A., Dwivedi, A., Kandula, P., Kumar, A., Kalra, P.K.: Color image compression with modified forward-only counter propagation neural network: improvement of the quality using different distance measures. In: ICIT'06, 9th International Conference on Information Technology, pp. 139–140 (2006)

  37. Sakhre, V., Jain, S., Sapkal, V.S., Agarwal, D.P.: Fuzzy counter propagation neural network for a class of nonlinear dynamical systems. Comput. Intell. Neurosci. 2015, 1–12 (2015)

  38. Sarangapani, J.: Neural Network Control of Nonlinear Discrete Time Systems with Actuator Nonlinearities, p. 265. Taylor & Francis, London (2006)

  39. Jagannathan, S., Lewis, F.L.: Discrete-time neural net controller for a class of nonlinear dynamical systems. IEEE Trans. Autom. Control 41, 1693–1699 (1996)

  40. Jaddi, N.S., Abdullah, S., Hamdan, A.R.: Optimization of neural network model using modified bat-inspired algorithm. Appl. Soft Comput. 37, 71–86 (2015)


Acknowledgments

The authors gratefully acknowledge the financial assistance provided by the All India Council for Technical Education (AICTE) in the form of a Research Promotion Scheme (RPS) project in 2012.

Author information


Corresponding author

Correspondence to Uday Pratap Singh.

Appendices

Appendix A: Dynamic Learning for CPN

Learning and stability are fundamental issues for a CPN; however, there are few studies on the learning behavior of fuzzy neural networks (FNN). BPN learning is not always successful because of its sensitivity to the learning parameters, and the optimal learning rate changes during the training process. Dynamic learning of the CPN is carried out using Lemma 1, Lemma 2, and Lemma 3 [37, 38].

Assumption

A function ϕ(x) is sigmoidal if it is bounded, continuous, and increasing. Since the input to the neural network in this model is bounded, we consider Lemma 1 and Lemma 2 given below:

Lemma 1

Let ϕ(x) be a sigmoid function, Ω be a compact set in \({\mathbb{R}}^{n}\), and \(f:{\mathbb{R}}^{n} \to {\mathbb{R}}\) be a continuous function on Ω. Then for arbitrary \(\varepsilon > 0\), there exist an integer N and real constants \(c_{i}\), \(\theta_{i}\), \(w_{ij}\), i = 1, 2,…, N, j = 1, 2,…, n, such that

$$\bar{f}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \mathop \sum \limits_{i = 1}^{N} c_{i} \phi \left( {\mathop \sum \limits_{j = 1}^{n} w_{ij} x_{j} - \theta_{i} } \right)$$
(72)

satisfies

$$\mathop {\hbox{max} }\limits_{x \in \varOmega } ||f\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) - \bar{f}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)|| < \varepsilon$$
(73)
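To make Lemma 1 concrete, the following minimal Python sketch evaluates the single-hidden-layer approximator \(\bar{f}\) of Eq. (72) for a given input vector; the function names and the randomly chosen parameters are purely illustrative and not part of the original paper.

```python
import numpy as np

def sigmoid(z):
    # A bounded, continuous, increasing transfer function, as assumed for phi(x)
    return 1.0 / (1.0 + np.exp(-z))

def f_bar(x, c, w, theta):
    """Evaluate f_bar(x) = sum_i c_i * phi(sum_j w_ij * x_j - theta_i), Eq. (72)."""
    hidden = sigmoid(w @ x - theta)   # one activation per hidden node, i = 1..N
    return c @ hidden                 # weighted sum of hidden activations

# Tiny usage example with arbitrary parameters (n inputs, N hidden nodes)
rng = np.random.default_rng(0)
n, N = 3, 5
x = rng.normal(size=n)
print(f_bar(x, rng.normal(size=N), rng.normal(size=(N, n)), rng.normal(size=N)))
```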

Using lemma 1, dynamic learning for a three-layered CPN can be formulated where the hidden-layer transfer functions are ϕ(x) and the transfer function at the output layer is linear.

Let all the vectors be column vectors, and let the superscript k in \(d_{p}^{k}\) refer to the k-th component of the p-th desired output vector.

Let \(X = \left[ {x_{1} , x_{2} , \ldots , x_{P} } \right] \in {\mathbb{R}}^{L \times P}\) be the input vectors, \(Y = \left[ {y_{1} , y_{2} , \ldots , y_{P} } \right] \in {\mathbb{R}}^{H \times P}\) the hidden-layer output vectors, \(O = \left[ {o_{1} , o_{2} , \ldots , o_{P} } \right] \in {\mathbb{R}}^{K \times P}\) the network output vectors, and \(D = \left[ {d_{1} , d_{2} , \ldots , d_{P} } \right] \in {\mathbb{R}}^{K \times P}\) the desired output vectors, where L, H, and K denote the numbers of input, hidden, and output layer neurons, respectively. Let V and W denote the input–hidden and hidden–output layer weight matrices, respectively. The objective of network training is to minimize an error function J given by

$$J = \frac{1}{2PK}\mathop \sum \limits_{p = 1}^{P} \mathop \sum \limits_{k = 1}^{K} \left( {o_{p}^{k} - d_{p}^{k} } \right)^{2}$$
(74)
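As a rough illustration of the three-layered CPN described above, the sketch below computes the forward pass (sigmoidal hidden layer, linear output layer) and the error function J of Eq. (74); the matrix names follow the text, but the implementation details are assumptions rather than the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, V, W):
    """Forward pass: X is L x P, V is H x L (input-hidden), W is K x H (hidden-output)."""
    Y = sigmoid(V @ X)   # hidden-layer outputs, H x P
    O = W @ Y            # linear output layer, K x P
    return Y, O

def error_J(O, D):
    """J = (1 / 2PK) * sum_p sum_k (o_p^k - d_p^k)^2, Eq. (74)."""
    K, P = D.shape
    return np.sum((O - D) ** 2) / (2.0 * P * K)
```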

Appendix B: Optimal Learning of Dynamic System

For a three-layered CPN, the network error matrix at iteration t is defined as the difference between the FCPN output and the desired output, and is given as [37, 38]

$$E_{t} = O_{t} - D = W_{t} Y_{t} - D = W_{t} V_{t} X - D$$
(75)

The objective of the network, namely minimization of the error given in Eq. (31), is defined as follows:

$$J = \frac{1}{2PK}T_{r} \left( {E_{t} E_{t}^{T} } \right)$$
(76)

where \(T_{r}\) denotes the trace of a matrix. Using the gradient-descent method, the updated weights are given by

$$W_{t + 1} = W_{t} - \beta_{t} \frac{\partial J}{{\partial W_{t} }}\,{\text{and}}\, V_{t + 1} = V_{t} - \beta_{t} \frac{\partial J}{{\partial V_{t} }}$$

or

$$W_{t + 1} = W_{t} - \frac{{\beta_{t} }}{PK}E_{t} Y_{t}^{T} {\text{and}}\,V_{t + 1} = V_{t} - \frac{{\beta_{t} }}{PK}W_{t}^{T} E_{t} X^{T}$$
(77)
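A minimal sketch of the gradient-descent updates in Eq. (77) is given below, assuming the linearized error \(E_t = W_t V_t X - D\) of Eq. (75); it is a schematic illustration, not the authors' implementation.

```python
import numpy as np

def update_weights(W, V, X, D, beta):
    """One gradient-descent step of Eq. (77) using the linearized error of Eq. (75)."""
    K, P = D.shape
    Y = V @ X                                        # hidden output used in Eq. (75)
    E = W @ Y - D                                    # network error matrix E_t
    W_next = W - (beta / (P * K)) * (E @ Y.T)        # hidden-output update
    V_next = V - (beta / (P * K)) * (W.T @ E @ X.T)  # input-hidden update
    return W_next, V_next, E
```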

Using Eqs. (22)–(24), we have

$$E_{t + 1} E_{t + 1}^{T} = \left( {W_{t + 1} Y_{t + 1} - D} \right)\left( {W_{t + 1} Y_{t + 1} - D} \right)^{T}$$
(78)

To obtain the minimum error for the multilayer network, simplifying the above equation gives

$$J_{t + 1} - J_{t} = \frac{1}{2PK}g(\beta )$$
(79)
$$J = \frac{1}{2PK}\mathop \sum \limits_{p = 1}^{P} \mathop \sum \limits_{k = 1}^{K} \left( {o_{p}^{k} - d_{p}^{k} } \right)^{2}$$
(80)

Using Eq. (31), we have

$$E_{t + 1} E_{t + 1}^{T} = \left( {W_{t + 1} V_{t + 1} X - D} \right)\left( {W_{t + 1} V_{t + 1} X - D} \right)^{T}$$

Expanding the above term, we get

$$\begin{aligned} & = \left[ {\left( {W_{t} - \frac{{\beta_{t} }}{PK}E_{t} Y_{t}^{T} } \right)\left( {V_{t} - \frac{{\beta_{t} }}{PK}W_{t}^{T} E_{t} X^{T} } \right)X - D} \right]\left[ {\left( {W_{t} - \frac{{\beta_{t} }}{PK}E_{t} Y_{t}^{T} } \right)\left( {V_{t} - \frac{{\beta_{t} }}{PK}W_{t}^{T} E_{t} X^{T} } \right)X - D} \right]^{\varvec{T}} \hfill \\ & = \left[ {E_{t} - \frac{{\beta_{t} }}{PK}\left( {W_{t} W_{t}^{T} E_{t} X^{T} X + E_{t} Y_{t}^{T} V_{t} X} \right) + \frac{{\beta_{t}^{2} }}{{\left( {PK} \right)^{2} }}E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right] \hfill \\ & \left[ {E_{t} - \frac{{\beta_{t} }}{PK}\left( {W_{t} W_{t}^{T} E_{t} X^{T} X + E_{t} Y_{t}^{T} V_{t} X} \right) + \frac{{\beta_{t}^{2} }}{{\left( {PK} \right)^{2} }}\left( {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right)} \right]^{T} \hfill \\ & = E_{t} E_{t}^{T} - \frac{{\beta_{t} }}{PK}\left[ {E_{t} \left( {W_{t} W_{t}^{T} E_{t} X^{T} X} \right)^{T} + E_{t} \left( {E_{t} Y_{t}^{T} V_{t} X} \right)^{T} + \left( {W_{t} W_{t}^{T} E_{t} X^{T} XE_{t}^{T} } \right) + \left( {E_{t} Y_{t}^{T} V_{t} XE_{t}^{T} } \right)} \right] \hfill \\ & \quad + \frac{{\beta_{t}^{2} }}{{\left( {PK} \right)^{2} }}\left[ {E_{t} \left( {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right)^{T} + E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} XE_{t}^{T} + E_{t} Y_{t}^{T} V_{t} X\left( {W_{t} W_{t}^{T} E_{t} X^{T} X} \right)^{T} + W_{t} W_{t}^{T} E_{t} X^{T} X\left( {E_{t} Y_{t}^{T} V_{t} X} \right)^{T} + E_{t} Y_{t}^{T} V_{t} X\left( {E_{t} Y_{t}^{T} V_{t} X} \right)^{T} + W_{t} W_{t}^{T} E_{t} X^{T} X\left( {W_{t} W_{t}^{T} E_{t} X^{T} X} \right)^{T} } \right] \hfill \\ & \quad - \frac{{\beta_{t}^{3} }}{{\left( {PK} \right)^{3} }}\left[ {W_{t} W_{t}^{T} E_{t} X^{T} X\left( {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right) + E_{t} Y_{t}^{T} V_{t} X\left( {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right)^{T} + E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X\left( {W_{t} W_{f}^{T} E_{t} X^{T} X} \right)^{T} + E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X\left( {E_{t} Y_{t}^{T} V_{t} X} \right)^{T} } \right] \hfill \\ & \quad + \frac{{\beta_{t}^{4} }}{{\left( {PK} \right)^{4} }}\left[ {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X\left( {E_{t} Y_{t}^{T} W_{t}^{T} E_{t} X^{T} X} \right)^{T} } \right] \hfill \\ \end{aligned}$$

For simplicity, omitting the subscript t, we have

$$J_{t + 1} - J_{t} = \frac{1}{2PK}\left( {A\beta^{4} + B\beta^{3} + C\beta^{2} + M\beta } \right)$$
(81)

where

$$\begin{aligned} A = \frac{1}{{\left( {PK} \right)^{4} }}T_{r} \left[ {EY^{T} W^{T} EX^{T} X\left( {EY^{T} W^{T} EX^{T} X} \right)^{T} } \right] \hfill \\ B = \frac{1}{{\left( {PK} \right)^{3} }}T_{r} \left[ {WW^{T} EX^{T} X\left( {EY^{T} W^{T} EX^{T} X} \right)^{T} + EY^{T} VX\left( {EY^{T} W^{T} EX^{T} X} \right)^{T} + EY^{T} W^{T} EX^{T} X\left( {WW^{T} EX^{T} X} \right)^{T} + EY^{T} W^{T} EX^{T} X\left( {EY^{T} VX} \right)^{T} } \right] \hfill \\ C = \frac{1}{{\left( {PK} \right)^{2} }}T_{r} \left[ {E\left( {EY^{T} W^{T} EX^{T} X} \right)^{T} + EY^{T} W^{T} EX^{T} XE^{T} + EY^{T} VX\left( {WW^{T} EX^{T} X} \right)^{T} + WW^{T} EX^{T} X\left( {EY^{T} VX} \right)^{T} + EY^{T} VX\left( {EY^{T} VX} \right)^{T} + WW^{T} EX^{T} X\left( {WW^{T} EX^{T} X} \right)^{T} } \right] \hfill \\ M = \frac{1}{PK}T_{r} \left[ {E\left( {WW^{T} EX^{T} X} \right)^{T} + E\left( {EY^{T} VX} \right)^{T} + WW^{T} EX^{T} XE^{T} + EY^{T} VXE^{T} } \right] \hfill \\ \end{aligned}$$
(82)

where

$$g\left( \beta \right) = \left( {A\beta^{4} + B\beta^{3} + C\beta^{2} + M\beta } \right)$$
(83)

Equation (35) is a polynomial of degree 4 in β; to obtain the optimum value of \(\beta\), we have to solve

$$\frac{\partial g}{\partial \beta } = 0,\, {\text{i}} . {\text{e}} .\,\,4A\left( {\beta^{3} + a\beta^{2} + b\beta + c} \right) = 0$$
(84)

where \(a = \frac{3B}{4A} , b = \frac{2C}{4A} , c = \frac{M}{4A}\)

Lemma 2

For solution of general real cubic equation, we use the following Lemma:

$$f\left( x \right) = x^{3} + ax^{2} + bx + c,\quad {\text{let }}D = - 27c^{2} + 18abc + a^{2} b^{2} - 4a^{3} c - 4b^{3}$$
(85)

where D is the discriminant of f(x); a short numerical sketch of this classification follows the case list below.

Then

  1. If D < 0, f(x) has one real root.

  2. If D ≥ 0, f(x) has three real roots:

     (a) If D > 0, f(x) has three distinct real roots.

     (b) If D = 0 and \(6b - 2a^{2} \ne 0\), f(x) has one simple root and one multiple root.

     (c) If D = 0 and \(6b - 2a^{2} = 0\), f(x) has one root of multiplicity three.
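The following small Python sketch applies the classification of Lemma 2 numerically; the tolerance used to test D = 0 and \(6b - 2a^{2} = 0\) is an arbitrary choice for illustration.

```python
def classify_cubic(a, b, c, tol=1e-12):
    """Classify the real roots of f(x) = x^3 + a*x^2 + b*x + c using its discriminant D."""
    D = 18.0 * a * b * c + a**2 * b**2 - 4.0 * a**3 * c - 4.0 * b**3 - 27.0 * c**2
    if D < -tol:
        return "one real root"
    if D > tol:
        return "three distinct real roots"
    # D == 0: repeated roots; 6b - 2a^2 separates the double-root and triple-root cases
    if abs(6.0 * b - 2.0 * a**2) > tol:
        return "one simple root and one multiple root"
    return "one root of multiplicity three"

# Example: f(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2) has D = 0 and a double root
print(classify_cubic(0.0, -3.0, 2.0))
```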

Lemma 3

For the polynomial g(β) given in Eq. (40), let the optimum \(\beta = \left\{ {\beta_{i} \,|\,g\left( {\beta_{i} } \right) = \hbox{min} \left( {g\left( {\beta_{1} } \right), g\left( {\beta_{2} } \right), g\left( {\beta_{3} } \right)} \right), i \in \left\{ {1, 2, 3} \right\}} \right\}\), where the \(\beta_{i}\) are the real roots of \(\frac{\partial g}{\partial \beta } = 0\). Then this optimum β is the optimal learning rate, and the learning process is stable.

Proof

To find the stable learning range of β, consider the Lyapunov function

$$V_{t} = J_{t}^{2} \quad {\text{and}}\quad \Delta V_{t} = J_{t + 1}^{2} - J_{t}^{2}$$

The dynamic system is guaranteed to be stable if \(\Delta V_{t} < 0\), i.e., \(J_{t + 1} - J_{t} < 0\).

Since the input matrices remain the same during the whole training process, we have to find the range of β that satisfies \(\left( {A\beta^{4} + B\beta^{3} + C\beta^{2} + M\beta } \right) < 0\). Since \(\frac{\partial g}{\partial \beta }\) has at least one real root, one of these roots gives the optimum β. Obviously, the minimum value of g(β) gives the largest reduction in \(J_{t}\) at each step of the learning process. Equation (39) shows that g(β) has two or four real roots, one of which is β = 0. Thus, the β minimizing g(β) yields the largest error reduction between two successive steps; this minimum is obtained by differentiating Eq. (30) w.r.t. β, and from Theorem 1 we have

$$\frac{\partial g}{\partial \beta } = 4A\left( {\beta^{3} + a\beta^{2} + b\beta + c} \right)$$

where \(a = \frac{3B}{4A} , \,b = \frac{2C}{4A} , \,c = \frac{M}{4A}\)

Solving \(\frac{\partial g}{\partial \beta } = 0\), we obtain optimum β, which gives minimum error.
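As a sketch of the learning-rate selection in Lemma 3, the code below finds the real roots of \(\frac{\partial g}{\partial \beta } = 0\) with numpy and picks the one minimizing g(β); in FCPN training the coefficients A, B, C, M would come from Eq. (82), and the sample values used here are only illustrative.

```python
import numpy as np

def optimal_learning_rate(A, B, C, M):
    """Return the real root of dg/dbeta = 0 that minimizes g(beta) = A*b^4 + B*b^3 + C*b^2 + M*b."""
    g = lambda beta: A * beta**4 + B * beta**3 + C * beta**2 + M * beta
    roots = np.roots([4.0 * A, 3.0 * B, 2.0 * C, M])   # dg/dbeta = 4A*b^3 + 3B*b^2 + 2C*b + M
    real_roots = [r.real for r in roots if abs(r.imag) < 1e-10]
    return min(real_roots, key=g)   # Lemma 3: the beta giving the largest reduction in J_t

# Illustrative coefficients only
print(optimal_learning_rate(1.0, -2.0, -1.0, 3.0))
```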


About this article


Cite this article

Sakhre, V., Singh, U.P. & Jain, S. FCPN Approach for Uncertain Nonlinear Dynamical System with Unknown Disturbance. Int. J. Fuzzy Syst. 19, 452–469 (2017). https://doi.org/10.1007/s40815-016-0145-5
