On the Carleman Embedding and Its Offsprings with Their Application to Machine Swing Dynamics

Journal of Control, Automation and Electrical Systems

Abstract

A formal approach to the nonlinear filtering of stochastic differential equations is the Kushner setting of applied mathematics and dynamical systems. Thanks to the Carleman linearization, a 'nonlinear' stochastic differential equation can be equivalently expressed as a finite system of 'bilinear' stochastic differential equations in an augmented state under the finite closure. The novelty of this paper is to embed the Carleman linearization into the stochastic evolution of a Markov process. The nonlinear swing equation is a cornerstone of power system dynamics. To illustrate the Carleman linearization of a Markov process, this paper embeds the Carleman linearization into the nonlinear stochastic swing equation and then achieves filtering of the swing equation in the Carleman setting. Filtering in the Carleman setting has simplified algorithmic procedures, and the augmented state accounts for the nonlinearity as well as the stochasticity. We show that filtering the nonlinear stochastic swing equation in the Carleman framework is more refined and sharper than the benchmark nonlinear extended Kalman filter (EKF): for the machine swing dynamics application, the Carleman filtering framework reduces the conditional mean absolute error approximately threefold compared with the benchmark EKF. These results suggest the usefulness of the Carleman embedding into a stochastic differential equation for filtering the concerning nonlinear stochastic differential system. The paper will interest researchers in nonlinear stochastic dynamics exploring linearization embedding techniques in their research.


Acknowledgements

The authors would like to express their gratitude to the Editor and the anonymous reviewers for their constructive comments, suggestions and ideas, which have led to an improvement in the content of the article.

Author information

Corresponding author

Correspondence to Prashant G. Medewar.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Carleman Linearization of a Markov Process

Here, we explain succinctly the Carleman linearization of the stochastic evolution of a Markov process. For simplicity of analysis and to gain insight, consider a nonlinear scalar Itô stochastic differential equation and then embed the Carleman linearization. The Carleman embedding results in an infinite system of bilinear stochastic differential equations; the finite closure circumvents the curse of dimensionality, so we arrive at a finite system of bilinear stochastic differential equations. Notably, the finite closure preserves the Markov property of the Carleman linearized stochastic differential equation as well. The Carleman linearization introduces bilinearity and preserves the nonlinearity by associating the 'nonlinearity stochastic evolution' with the 'stochastic state evolution'. Secondly, for a scalar stochastic process, the dimension of the Carleman linearized state vector equals the Carleman linearization order \(N\). For the vector case, the dimension becomes \(\sum\limits_{1 \le r \le N} \binom{n + r - 1}{r},\) where \(n\) is the dimension of the state vector and \(r\) is the Kronecker power associated with the state, running from \(1\) to \(N\).
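As a quick computational aside (our sketch, not part of the paper), the augmented state dimension implied by this formula can be evaluated directly:

```python
# Sketch: dimension of the Carleman-augmented state for an n-dimensional
# system truncated at order N, i.e. sum_{r=1}^{N} C(n + r - 1, r).
from math import comb

def carleman_dim(n: int, N: int) -> int:
    """Number of distinct monomials up to order N for an n-dimensional state."""
    return sum(comb(n + r - 1, r) for r in range(1, N + 1))

print(carleman_dim(1, 3))  # scalar case, order 3 -> 3 (y, y^2, y^3)
print(carleman_dim(2, 3))  # a 2-state system, order 3 -> 2 + 3 + 4 = 9
```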

The cubic nonlinearity, which has received attention in the literature, e.g. Jing and Lang (2009), is considerably more general than the quadratic. Thus, the Carleman framework for a nonlinear stochastic differential equation preserving the cubic nonlinearity is sketched in the following Theorem-proof format.

Theorem

Consider a Markovian stochastic evolution in the Itô framework

$$ {\text{d}}y_{t} = f(y_{t} ){\text{d}}t + \sigma g(y_{t} ){\text{d}}W_{t} , $$

where \(y_{t} \in U,\) \(U \subset R,\) \(f:U \to R,\) \(g:U \to R,\) \(U\) is a phase space, \(R\) is the real line and \(W_{t}\) is a scalar Brownian motion process. Suppose the Carleman linearization order is three. Then the ‘associated’ Carleman linearized Markovian stochastic evolution in the Itô framework under the ‘finite closure’ is

$$\begin{aligned} {\text{d}}\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right) & = \left( {\left( {\begin{array}{*{20}c} f \\ {\sigma^{2} g^{2} } \\ 0 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {f^{\prime}} & {\frac{1}{2!}f^{\prime\prime}} & {\frac{1}{3!}f^{\prime\prime\prime}} \\ {2f + 2\sigma^{2} gg^{\prime}} & {2f^{\prime} + \sigma^{2} g^{{\prime}{2}} + \sigma^{2} gg^{\prime\prime}} & {f^{\prime\prime} + \sigma^{2} g^{\prime}g^{\prime\prime} + \frac{2}{3!}\sigma^{2} gg^{\prime\prime\prime}} \\ {3\sigma^{2} g^{2} } & {3f + 6\sigma^{2} gg^{\prime}} & {3f^{\prime} + 3\sigma^{2} g^{{\prime}{2}} + 3\sigma^{2} gg^{\prime\prime}} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right)} \right){\text{d}}t \\ & \;\;\;\; + \left( {\begin{array}{*{20}c} {\sigma \,g^{\prime}} & {\frac{1}{2!}\sigma g^{\prime\prime}} & {\frac{1}{3!}\sigma g^{\prime\prime\prime}} \\ {2\sigma \,g} & {2\sigma g^{\prime}} & {\sigma g^{\prime\prime}} \\ 0 & {3\sigma g} & {3\sigma g^{\prime}} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right){\text{d}}W_{t} + \left( {\begin{array}{*{20}c} {\sigma \,g} \\ 0 \\ 0 \\ \end{array} } \right){\text{d}}W_{t} , \\ \end{aligned}$$

where \(f = f(y_{t} ),\) \(f^{\prime} = \frac{{{\text{d}}f(y_{t} )}}{{{\text{d}}y_{t} }},\) \(f^{\prime\prime} = \frac{{{\text{d}}^{2} f(y_{t} )}}{{{\text{d}}y_{t}^{2} }},\) \(f^{\prime\prime\prime} = \frac{{{\text{d}}^{3} f(y_{t} )}}{{{\text{d}}y_{t}^{3} }},\) \(g = g(y_{t} ),\) \(g^{\prime} = \frac{{{\text{d}}g(y_{t} )}}{{{\text{d}}y_{t} }},\) \(g^{\prime\prime} = \frac{{{\text{d}}^{2} g(y_{t} )}}{{{\text{d}}y_{t}^{2} }}\) and \(g^{\prime\prime\prime} = \frac{{{\text{d}}^{3} g(y_{t} )}}{{{\text{d}}y_{t}^{3} }}\), all evaluated at \(y_{t} = 0\).

Proof

Here we weave the proof of the theorem by illustrating the Carleman embedding into the above SDE. Using the expansions of the functions \(f(y_{t} )\) and \(g(y_{t} )\) from the generating-function perspective and their analyticity, we have

$$ {\text{d}}y_{t} = \sum\limits_{0 \le k} {a_{k} } y_{t}^{k} {\text{d}}t + \sum\limits_{0 \le k} {b_{k} } y_{t}^{k} {\text{d}}W_{t} , $$

where

$$ a_{k} = \frac{1}{k!}\frac{{{\text{d}}^{k} f(y_{t} )}}{{{\text{d}}y_{t}^{k} }}\bigg|_{y_{t} = 0},\quad b_{k} = \frac{\sigma }{k!}\frac{{{\text{d}}^{k} g(y_{t} )}}{{{\text{d}}y_{t}^{k} }}\bigg|_{y_{t} = 0}. $$
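As a computational aside (not part of the proof), the coefficients \(a_k\) and \(b_k\) can be obtained symbolically; a minimal SymPy sketch follows, where the drift \(f\) and diffusion \(g\) are illustrative choices rather than the paper's swing model:

```python
# Sketch: Taylor coefficients a_k, b_k at y = 0 via SymPy.
import sympy as sp

y, sigma = sp.symbols('y sigma')
f = sp.sin(y)          # example drift, assumed analytic at 0
g = sp.cos(y)          # example diffusion

a = [sp.diff(f, y, k).subs(y, 0) / sp.factorial(k) for k in range(4)]
b = [sigma * sp.diff(g, y, k).subs(y, 0) / sp.factorial(k) for k in range(4)]
print(a)  # [0, 1, 0, -1/6]        -> a_k = f^(k)(0)/k!
print(b)  # [sigma, 0, -sigma/2, 0] -> b_k = sigma*g^(k)(0)/k!
```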

Suppose the contributions beyond the cubic nonlinearity are negligibly small; thus

$$ {\text{d}}y_{t} \approx (a_{0} + a_{1} y_{t} + a_{2} y_{t}^{2} + a_{3} y_{t}^{3} ){\text{d}}t + (b_{0} + b_{1} y_{t} + b_{2} y_{t}^{2} + b_{3} y_{t}^{3} ){\text{d}}W_{t} . $$
(A.1)

Applying the Itô stochastic differential rule (Karatzas & Shreve, 1988, p. 148), we have the stochastic evolutions

$$ \begin{aligned} {\text{d}}y_{t}^{2} & = 2y_{t} {\text{d}}y_{t} + ({\text{d}}y_{t} )^{2} , \\ & = 2y_{t} ((a_{0} + a_{1} y_{t} + a_{2} y_{t}^{2} + a_{3} y_{t}^{3} ){\text{d}}t+ (b_{0} + b_{1} y_{t} + b_{2} y_{t}^{2} + b_{3} y_{t}^{3} ){\text{d}}W_{t} ) + (b_{0} + b_{1} y_{t} + b_{2} y_{t}^{2} + b_{3} y_{t}^{3} )^{2} {\text{d}}t, \\ & = (b_{0}^{2} + (2a_{0} + 2b_{0} b_{1} )y_{t} + (2a_{1} + b_{1}^{2} + 2b_{0} b_{2})y_{t}^{2} + (2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3})y_{t}^{3} + (2a_{3} + b_{2}^{2} + 2b_{1} b_{3} )y_{t}^{4} \\ &\quad + 2b_{2} b_{3} y_{t}^{5} + b_{3}^{2} y_{t}^{6} ){\text{d}}t + (2b_{0} y_{t} + 2b_{1} y_{t}^{2} + 2b_{2} y_{t}^{3} + 2b_{3} y_{t}^{4} ){\text{d}}W_{t} . \\ \end{aligned} $$
$$ \begin{aligned} {\text{d}}y_{t}^{3} & = y_{t} {\text{d}}y_{t}^{2} + y_{t}^{2} {\text{d}}y_{t} + {\text{d}}y_{t} {\text{d}}y_{t}^{2} , \\ & = y_{t} ((b_{0}^{2} + (2a_{0} + 2b_{0} b_{1} )y_{t} + (2a_{1} + b_{1}^{2} + 2b_{0} b_{2} )y_{t}^{2} + (2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3} )y_{t}^{3} \\ & \;\;\;\; + (2a_{3} + b_{2}^{2} + 2b_{1} b_{3} )y_{t}^{4} + 2b_{2} b_{3} y_{t}^{5} + b_{3}^{2} y_{t}^{6} ){\text{d}}t + (2b_{0} y_{t} + 2b_{1} y_{t}^{2} + 2b_{2} y_{t}^{3} + 2b_{3} y_{t}^{4} ){\text{d}}W_{t} ) \\ & \;\;\;\; + y_{t}^{2} (a_{0} + a_{1} y_{t} + a_{2} y_{t}^{2} + a_{3} y_{t}^{3} ){\text{d}}t + (b_{0} y_{t}^{2} + b_{1} y_{t}^{3} + b_{2} y_{t}^{4} + b_{3} y_{t}^{5} ){\text{d}}W_{t} \\ & \;\;\;\; + (b_{0} + b_{1} y_{t} + b_{2} y_{t}^{2} + b_{3} y_{t}^{3} )(2b_{0} y_{t} + 2b_{1} y_{t}^{2} + 2b_{2} y_{t}^{3} + 2b_{3} y_{t}^{4} ){\text{d}}t, \\ & = (3b_{0}^{2} y_{t} + (3a_{0} + 6b_{0} b_{1} )y_{t}^{2} + (3a_{1} + 3b_{1}^{2} + 6b_{0} b_{2} )y_{t}^{3} + (3a_{2} + 6b_{1} b_{2} + 6b_{0} b_{3} )y_{t}^{4} \\ & \;\;\;\; + (3a_{3} + 3b_{2}^{2} + 6b_{1} b_{3} )y_{t}^{5} + 6b_{2} b_{3} y_{t}^{6} + 3b_{3}^{2} y_{t}^{7} ){\text{d}}t + (3b_{0} y_{t}^{2} + 3b_{1} y_{t}^{3} + 3b_{2} y_{t}^{4} + 3b_{3} y_{t}^{5} ){\text{d}}W_{t} . \\ \end{aligned} $$

After adopting the finite closure, i.e. truncating at the cubic nonlinearity terms under the strong assumption that the contribution of nonlinearities beyond the cubic to the stochastic evolution is small enough (Rugh, 1980, p. 108), we get

$$ \begin{aligned} {\text{d}}y_{t}^{2} & \approx (b_{0}^{2} + 2\left( {a_{0} + b_{0} b_{1} } \right)y_{t} + (b_{1}^{2} + 2a_{1} + 2b_{0} b_{2} )y_{t}^{2} + (2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3} )y_{t}^{3} ){\text{d}}t \\ & \quad + (2b_{0} y_{t} + 2b_{1} y_{t}^{2} + 2b_{2} y_{t}^{3} ){\text{d}}W_{t} , \\ \end{aligned} $$
(A.2a)
$$ {\text{d}}y_{t}^{3} \approx (3b_{0}^{2} y_{t} + (3a_{0} + 6b_{0} b_{1} )y_{t}^{2} + (3a_{1} + 3b_{1}^{2} + 6b_{0} b_{2} )y_{t}^{3} ){\text{d}}t + (3b_{0} y_{t}^{2} + 3b_{1} y_{t}^{3} ){\text{d}}W_{t} . $$
(A.2b)

After combining (A.1), (A.2a) and (A.2b), we have

$$ \begin{aligned} {\text{d}}\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right) & = \left( {\left( {\begin{array}{*{20}c} {a_{0} } \\ {b_{0}^{2} } \\ 0 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {2a_{0} + 2b_{0} b_{1} } & {2a_{1} + b_{1}^{2} + 2b_{0} b_{2} } & {2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3} } \\ {3b_{0}^{2} } & {3a_{0} + 6b_{0} b_{1} } & {3a_{1} + 3b_{1}^{2} + 6b_{0} b_{2} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right)} \right){\text{d}}t \\ & \;\;\;\; + \left( {\begin{array}{*{20}c} {b_{1} } & {b_{2} } & {b_{3} } \\ {2b_{0} } & {2b_{1} } & {2b_{2} } \\ 0 & {3b_{0} } & {3b_{1} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right){\text{d}}W_{t} + \left( {\begin{array}{*{20}c} {b_{0} } \\ 0 \\ 0 \\ \end{array} } \right){\text{d}}W_{t} . \\ \end{aligned} $$

In alternative notation, this reads

$$ \begin{aligned} {\text{d}}\left( {\begin{array}{*{20}c} {y_{1} (t)} \\ {y_{2} (t)} \\ {y_{3} (t)} \\ \end{array} } \right) & = \left( {\left( {\begin{array}{*{20}c} {a_{0} } \\ {b_{0}^{2} } \\ 0 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {2a_{0} + 2b_{0} b_{1} } & {2a_{1} + b_{1}^{2} + 2b_{0} b_{2} } & {2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3} } \\ {3b_{0}^{2} } & {3a_{0} + 6b_{0} b_{1} } & {3a_{1} + 3b_{1}^{2} + 6b_{0} b_{2} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{1} (t)} \\ {y_{2} (t)} \\ {y_{3} (t)} \\ \end{array} } \right)} \right){\text{d}}t \\ & \;\;\;\; + \left( {\begin{array}{*{20}c} {b_{1} } & {b_{2} } & {b_{3} } \\ {2b_{0} } & {2b_{1} } & {2b_{2} } \\ 0 & {3b_{0} } & {3b_{1} } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {y_{1} (t)} \\ {y_{2} (t)} \\ {y_{3} (t)} \\ \end{array} } \right){\text{d}}W_{t} + \left( {\begin{array}{*{20}c} {b_{0} } \\ 0 \\ 0 \\ \end{array} } \right){\text{d}}W_{t} , \\ \end{aligned} $$

where \((\begin{array}{*{20}c} {y_{t} } & {y_{t}^{2} } & {y_{t}^{3} } \\ \end{array} )^{T} = (\begin{array}{*{20}c} {y_{1} (t)} & {y_{2} (t)} & {y_{3} (t)} \\ \end{array} )^{T} .\) The above can be further recast as

$$ {\text{d}}\xi_{t} = (A_{0} + A_{t} \xi_{t} ){\text{d}}t + (D_{t} \xi_{t} + L_{t} ){\text{d}}W_{t} , $$

where

$$ \xi_{t} = \left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right),A_{0} = \left( {\begin{array}{*{20}c} {a_{0} } \\ {b_{0}^{2} } \\ 0 \\ \end{array} } \right),A_{t} = \left( {\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } \\ {2a_{0} + 2b_{0} b_{1} } & {2a_{1} + b_{1}^{2} + 2b_{0} b_{2} } & {2a_{2} + 2b_{1} b_{2} + 2b_{0} b_{3} } \\ {3b_{0}^{2} } & {3a_{0} + 6b_{0} b_{1} } & {3a_{1} + 3b_{1}^{2} + 6b_{0} b_{2} } \\ \end{array} } \right), $$
$$ D_{t} = \left( {\begin{array}{*{20}c} {b_{1} } & {b_{2} } & {b_{3} } \\ {2b_{0} } & {2b_{1} } & {2b_{2} } \\ 0 & {3b_{0} } & {3b_{1} } \\ \end{array} } \right),L_{t} = \left( {\begin{array}{*{20}c} {b_{0} } \\ 0 \\ 0 \\ \end{array} } \right). $$

Expressed directly in terms of \(f\) and \(g\), the SDE becomes

$$ {\text{d}}\xi_{t} = (A_{0} + A_{t} \xi_{t} ){\text{d}}t + (D_{t} \xi_{t} + L_{t} ){\text{d}}W_{t} , $$
(A.3)

where

$$ \xi_{t} = \left( {\begin{array}{*{20}c} {y_{t} } \\ {y_{t}^{2} } \\ {y_{t}^{3} } \\ \end{array} } \right),A_{0} = \left( {\begin{array}{*{20}c} f \\ {\sigma^{2} g^{2} } \\ 0 \\ \end{array} } \right),D_{t} = \left( {\begin{array}{*{20}c} {\sigma g^{\prime}} & {\frac{1}{2!}\sigma g^{\prime\prime}} & {\frac{1}{3!}\sigma g^{\prime\prime\prime}} \\ {2\sigma g} & {2\sigma g^{\prime}} & {\sigma g^{\prime\prime}} \\ 0 & {3\sigma g} & {3\sigma g^{\prime}} \\ \end{array} } \right),L_{t} = \left( {\begin{array}{*{20}c} {\sigma g} \\ 0 \\ 0 \\ \end{array} } \right), $$
$$ A_{t} = \left( {\begin{array}{*{20}c} {f^{\prime}} & {\frac{1}{2!}f^{\prime\prime}} & {\frac{1}{3!}f^{\prime\prime\prime}} \\ {2f + 2\sigma^{2} gg^{\prime}} & {2f^{\prime} + \sigma^{2} g^{{\prime}{2}} + \sigma^{2} gg^{\prime\prime}} & {f^{\prime\prime} + \sigma^{2} g^{\prime}g^{\prime\prime} + \frac{2}{3!}\sigma^{2} gg^{\prime\prime\prime}} \\ {3\sigma^{2} g^{2} } & {3f + 6\sigma^{2} gg^{\prime}} & {3f^{\prime} + 3\sigma^{2} g^{{\prime}{2}} + 3\sigma^{2} gg^{\prime\prime}} \\ \end{array} } \right). $$

Note that the entries of the associated vectors and matrices are evaluated at \(y_{t} = 0\). \(\hfill\square \)
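As a sanity check (our addition, not the paper's), the \(f\)-\(g\) entries of \(A_t\) can be verified symbolically against the \(a_k\)-\(b_k\) entries of (A.3); a sketch for the \((2,2)\) entry:

```python
# Sketch: substitute a_k = f^(k)/k!, b_k = sigma*g^(k)/k! into the a-b form
# of the (2,2) entry of A_t and compare with the f-g form given above.
import sympy as sp

y, sigma = sp.symbols('y sigma')
f, g = sp.Function('f')(y), sp.Function('g')(y)

d = lambda h, k: sp.diff(h, y, k)                    # k-th derivative
a = [d(f, k) / sp.factorial(k) for k in range(4)]
b = [sigma * d(g, k) / sp.factorial(k) for k in range(4)]

ab_form = 2*a[1] + b[1]**2 + 2*b[0]*b[2]             # entry from (A.2a)
fg_form = 2*d(f, 1) + sigma**2*d(g, 1)**2 + sigma**2*g*d(g, 2)
print(sp.simplify(ab_form - fg_form))                # 0, the forms agree
```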

Remark 1

Equation (A.3) is a consequence of the Carleman embedding into the nonlinear SDE \({\text{d}}y_{t} = f(y_{t} ){\text{d}}t + \sigma g(y_{t} ){\text{d}}W_{t} \). The Carleman linearized stochastic differential equation assumes the structure of a 'bilinear' stochastic differential equation and treats the 'nonlinearity' as a state variable. This implies that the Carleman linearized stochastic differential equation (A.3), like the nonlinear stochastic differential equation (A.1), accounts for nonlinearity effects. The Carleman linearization circumvents the curse of dimensionality thanks to the notion of finite closure. To achieve the estimation, we adopt the Carleman linearized formalism in this paper to exploit the usefulness of bilinear stochastic differential equations.
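As a numerical illustration of (A.3) (a sketch under our own assumptions; the paper's filtering algorithm is not reproduced here), the matrices can be assembled from given Taylor coefficients and the bilinear SDE integrated with the Euler-Maruyama scheme. The drift and noise level below are illustrative choices:

```python
# Sketch: assemble A0, A, D, L of the bilinear SDE (A.3) from the
# coefficients a_k, b_k and integrate one path with Euler-Maruyama.
import numpy as np

def carleman_matrices(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    A0 = np.array([a0, b0**2, 0.0])
    A = np.array([[a1, a2, a3],
                  [2*a0 + 2*b0*b1, 2*a1 + b1**2 + 2*b0*b2, 2*a2 + 2*b1*b2 + 2*b0*b3],
                  [3*b0**2, 3*a0 + 6*b0*b1, 3*a1 + 3*b1**2 + 6*b0*b2]])
    D = np.array([[b1, b2, b3],
                  [2*b0, 2*b1, 2*b2],
                  [0.0, 3*b0, 3*b1]])
    L = np.array([b0, 0.0, 0.0])
    return A0, A, D, L

def euler_maruyama(xi0, a, b, T=1.0, steps=1000, seed=0):
    A0, A, D, L = carleman_matrices(a, b)
    rng, dt = np.random.default_rng(seed), T / steps
    xi = np.array(xi0, dtype=float)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        xi = xi + (A0 + A @ xi) * dt + (D @ xi + L) * dW
    return xi

# usage: cubic drift dy = (y - y^3) dt + 0.1 dW, started at y = 0.5
print(euler_maruyama([0.5, 0.25, 0.125], a=[0.0, 1.0, 0.0, -1.0], b=[0.1, 0.0, 0.0, 0.0]))
```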

Appendix B: A General Formula for the Computations Concerning Higher-Order Moments Using the Characteristic Function

Consider the vector Gaussian stochastic process \(Y = \left\{ {y_{t} ,\Im_{t} ,0 \le t < \infty } \right\},\) where \(y_{t} \in R^{n}\) is normal with mean \(\mu_{{y_{t} }}\) and variance \(P_{{y_{t} }} .\) Here we wish to construct the characteristic function and then weave the higher-order moment equations. Higher-order moment equations arise in practical problems involving higher-order statistics, which introduces mathematical subtleties; the idea of this appendix is to replace the higher-order moments with lower-order moments. Construct the compensated vector Gaussian process \(\tilde{y}_{t} = y_{t} - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} ,\) which is zero mean with variance \(P_{{y_{t} }} .\) Thus, the concerning Gaussian characteristic function is

$$ \begin{aligned} E\exp (s^{T} \tilde{y}_{t} ) & = \exp \left( {\frac{1}{2}s^{T} P_{{y_{t} }} s} \right) = \exp \left( {\frac{1}{2}\sum\limits_{i,j} {s_{i} } s_{j} P_{ij} } \right) = \sum\limits_{m} {\frac{{\left( {\frac{1}{2}\sum\limits_{i,j} {s_{i} } s_{j} P_{ij} } \right)^{m} }}{m!}} \\ & = \sum\limits_{m} {\frac{{\left( {\frac{1}{2}\sum\limits_{i} {s_{i}^{2} } P_{ii} + \sum\limits_{i < j} {s_{i} s_{j} P_{ij} } } \right)^{m} }}{m!}} . \\ \end{aligned} $$
(B.1)

From the generating functions,

$$ \sum\limits_{m} {\sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = m}} {E\tilde{y}_{1}^{{r_{1} }} \,\tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\,\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} } = E\exp (s^{T} \tilde{y}_{t} ). $$
(B.2)

From (B.1) and (B.2), we have

$$ \sum\limits_{m} {\sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = m}} {E\tilde{y}_{1}^{{r_{1} }} \,\tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\,\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} } = \sum\limits_{p} {\frac{{\left( {\frac{1}{2}\sum\limits_{i} {s_{i}^{2} } P_{ii} + \sum\limits_{i < j} {s_{i} s_{j} P_{ij} } } \right)^{p} }}{p!}} . $$

Rearranging the above terms, we have

$$ \sum\limits_{{m{\text{ is even}}}} {\sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = m}} {E\tilde{y}_{1}^{{r_{1} }} \,\tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\,\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} } + \sum\limits_{{m{\text{ is odd}}}} {\sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = m}} {E\tilde{y}_{1}^{{r_{1} }} \,\tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\,\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} } = \sum\limits_{p} {\frac{{\left( {\frac{1}{2}\sum\limits_{i} {s_{i}^{2} } P_{ii} + \sum\limits_{i < j} {s_{i} s_{j} P_{ij} } } \right)^{p} }}{p!}} . $$

The right-hand side contains only even total powers of \(s\), so the odd-\(m\) term on the left-hand side vanishes; equivalently, odd-order moments of a zero-mean Gaussian vector are zero. As a result, we have

$$ \sum\limits_{{m{\text{ is even}}}} {\sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = m}} {E\tilde{y}_{1}^{{r_{1} }} \,\tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\,\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} } = \sum\limits_{p} {\frac{{\left( {\frac{1}{2}\sum\limits_{i} {s_{i}^{2} } P_{ii} + \sum\limits_{i < j} {s_{i} s_{j} P_{ij} } } \right)^{p} }}{p!}} . $$

Suppose \(m = q\), where \(m\) varies and \(q\) is fixed and even, then \(p = \frac{q}{2}\) and

$$ \sum\limits_{{r_{1} + r_{2} + \cdots + r_{N} = q}} {E\tilde{y}_{1}^{{r_{1} }} \tilde{y}_{2}^{{r_{2} }} \cdots \tilde{y}_{N}^{{r_{N} }} \frac{{s_{1}^{{r_{1} }} }}{{r_{1} !}}\frac{{s_{2}^{{r_{2} }} }}{{r_{2} !}} \cdots \frac{{s_{N}^{{r_{N} }} }}{{r_{N} !}}} = \frac{{\left( {\frac{1}{2}\sum\limits_{i} {s_{i}^{2} } P_{ii} + \sum\limits_{i < j} {s_{i} s_{j} P_{ij} } } \right)^{q/2} }}{{(q/2)!}}. $$
(B.3)
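The moment-extraction mechanism of (B.1)-(B.3) can also be seen by differentiating the generating function directly; a small SymPy sketch for an illustrative two-dimensional covariance (our toy input, not from the paper) is:

```python
# Sketch: recover a mixed fourth moment from the Gaussian generating
# function by differentiation, for a 2-dimensional covariance P.
import sympy as sp

s1, s2 = sp.symbols('s1 s2')
P11, P12, P22 = sp.symbols('P11 P12 P22')
mgf = sp.exp(sp.Rational(1, 2)*(s1**2*P11 + 2*s1*s2*P12 + s2**2*P22))

# E[ytilde_1^2 ytilde_2^2] = d^4 MGF / ds1^2 ds2^2 evaluated at s = 0
moment = sp.diff(mgf, s1, 2, s2, 2).subs({s1: 0, s2: 0})
print(sp.expand(moment))   # P11*P22 + 2*P12**2, the q = 4 analogue of (B.4)
```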

Here, three cases arise: \(q > N,\) \(q = N,\) \(q < N.\) We restrict our discussion to \(q = N\) and pose the question of computing the sixth-order moment explicitly in terms of second-order moments under Gaussian statistics, i.e.

$$ E(y_{1} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{1} (t))\,(y_{2} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{2} (t))\,(y_{3} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{3} (t))\,(y_{4} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{4} (t))\,(y_{5} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{5} (t))\,(y_{6} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{6} (t)) = ? $$

To answer the above, set

$$ q = 6,\;N = 6,\;r_{1} + r_{2} + \ldots + r_{6} = 6,\;r_{1} = 1,\;r_{2} = 1, \ldots ,r_{6} = \;1, $$

and invoke the condition

$$ \mathop \cap \limits_{k} \left\{ {(i_{k} ,i_{k + 1} )\left| {1 \le i_{k} \le 6,\;1 \le i_{k + 1} \le 6,\,\,i_{k} < i_{k + 1} } \right.} \right\} = \varphi . $$

That can be achieved as follows: first, construct the set \(\left\{ {(i_{1} ,i_{2} )\left| {1 \le i_{1} < i_{2} \le 6} \right.} \right\}\); each element is a pair of indices in increasing order. Then construct the products of such pairs with the property \(\mathop \cap \limits_{k} \left\{ {(i_{k} ,i_{k + 1} )\left| {1 \le i_{k} \le 6,\;1 \le i_{k + 1} \le 6,\,\,i_{k} < i_{k + 1} } \right.} \right\} = \varphi ,\) where \(k\) is odd valued with \(1 \le k \le 6\); in other words, the three pairs in each product are disjoint and together exhaust the index set. As a result, we equate the terms on the left-hand and right-hand sides of (B.3) associated with the term \(s_{1} s_{2} s_{3} s_{4} s_{5} s_{6} .\) Thus, we have

$$ \begin{aligned} E\mathop \prod \limits_{1 \le i \le 6} (y_{i} (t) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{i} (t)) & = P_{{y_{1} y_{2} }} P_{{y_{3} y_{4} }} P_{{y_{5} y_{6} }} + P_{{y_{1} y_{2} }} P_{{y_{3} y_{5} }} P_{{y_{4} y_{6} }} + P_{{y_{1} y_{2} }} P_{{y_{3} y_{6} }} P_{{y_{4} y_{5} }} \\ & \quad + P_{{y_{1} y_{3} }} P_{{y_{2} y_{4} }} P_{{y_{5} y_{6} }} + P_{{y_{1} y_{3} }} P_{{y_{2} y_{5} }} P_{{y_{4} y_{6} }} + P_{{y_{1} y_{3} }} P_{{y_{2} y_{6} }} P_{{y_{4} y_{5} }} \\ & \quad + P_{{y_{1} y_{4} }} P_{{y_{2} y_{3} }} P_{{y_{5} y_{6} }} + P_{{y_{1} y_{4} }} P_{{y_{2} y_{5} }} P_{{y_{3} y_{6} }} + P_{{y_{1} y_{4} }} P_{{y_{2} y_{6} }} P_{{y_{3} y_{5} }} \\ & \quad + P_{{y_{1} y_{5} }} P_{{y_{2} y_{3} }} P_{{y_{4} y_{6} }} + P_{{y_{1} y_{5} }} P_{{y_{2} y_{4} }} P_{{y_{3} y_{6} }} + P_{{y_{1} y_{5} }} P_{{y_{2} y_{6} }} P_{{y_{3} y_{4} }} \\ & \quad + P_{{y_{1} y_{6} }} P_{{y_{2} y_{3} }} P_{{y_{4} y_{5} }} + P_{{y_{1} y_{6} }} P_{{y_{2} y_{4} }} P_{{y_{3} y_{5} }} + P_{{y_{1} y_{6} }} P_{{y_{2} y_{5} }} P_{{y_{3} y_{4} }} . \\ \end{aligned} $$
(B.4)
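The fifteen products in (B.4) are exactly the pair partitions (perfect matchings) of the index set \(\{1,\ldots,6\}\); a short sketch that enumerates them and evaluates the sum for a toy covariance matrix (our illustrative input) is:

```python
# Sketch: enumerate the 15 pair partitions of {1,...,6} behind (B.4) and
# evaluate the Isserlis sum for a given 6x6 covariance matrix P.
import numpy as np

def pairings(idx):
    """Yield all perfect matchings of the index tuple as increasing pairs."""
    if not idx:
        yield ()
        return
    first, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield ((first, rest[k]),) + tail

def sixth_moment(P):
    return sum(np.prod([P[i, j] for i, j in m]) for m in pairings(tuple(range(6))))

print(sum(1 for _ in pairings(tuple(range(6)))))  # 15 pair partitions
P = np.ones((6, 6))     # all six components equal to one N(0, 1) variable
print(sixth_moment(P))  # 15.0, consistent with E y^6 = 15 P^3 at P = 1
```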

For a scalar Gaussian stochastic process, (B.4) simplifies to

$$ \,E\left( {y_{t} - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} } \right)^{6} = 15P_{{y_{t} }}^{3} . $$
(B.5)

Furthermore, we wish to calculate \(\,Ey_{t}^{6}\). That can be calculated using the binomial expansion together with lower-order Gaussian statistics. The following relation holds:

$$ \begin{aligned} E\left( {y_{t} - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} } \right)^{6} & = E\sum\limits_{r = 0}^{6} {( - 1)^{6 - r} \binom{6}{r}} y_{t}^{r} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6 - r} = \sum\limits_{r = 0}^{6} {( - 1)^{6 - r} \binom{6}{r}} \,Ey_{t}^{r} \,\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6 - r} \\ & = {}^{6}C_{0} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6} - {}^{6}C_{1} Ey_{t} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{5} + {}^{6}C_{2} Ey_{t}^{2} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{4} - {}^{6}C_{3} Ey_{t}^{3} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{3} + {}^{6}C_{4} Ey_{t}^{4} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{2} - {}^{6}C_{5} Ey_{t}^{5} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} + {}^{6}C_{6} Ey_{t}^{6} . \\ \end{aligned} $$

After combining (B.5) with the above, we have

$$ {}^{6}C_{0} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6} - {}^{6}C_{1} Ey_{t} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{5} + {}^{6}C_{2} Ey_{t}^{2} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{4} - {}^{6}C_{3} Ey_{t}^{3} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{3} + {}^{6}C_{4} Ey_{t}^{4} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{2} - {}^{6}C_{5} Ey_{t}^{5} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} + {}^{6}C_{6} Ey_{t}^{6} = 15P_{{y_{t} }}^{3} . $$

Thus,

$$ Ey_{t}^{6} = 15P_{{y_{t} }}^{3} - {}^{6}C_{0} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6} + {}^{6}C_{1} Ey_{t} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{5} - {}^{6}C_{2} Ey_{t}^{2} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{4} + {}^{6}C_{3} Ey_{t}^{3} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{3} - {}^{6}C_{4} Ey_{t}^{4} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{2} + {}^{6}C_{5} Ey_{t}^{5} \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} . $$
(B.6)

Using the binomial coefficients and the lower-order Gaussian moment identities, i.e.

$$ Ey_{t}^{3} = \left( {3\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} P_{{y_{t} }} + \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{3} } \right),Ey_{t}^{4} = \left( {3P_{{y_{t} }}^{2} + 6\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{2} P_{{y_{t} }} + \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{4} } \right),Ey_{t}^{5} = \left( {15\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t} P_{{y_{t} }}^{2} + 10\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{3} P_{{y_{t} }} + \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{5} } \right), $$

and substituting them into (B.6), we obtain

$$ Ey_{t}^{6} = \left( {15P_{{y_{t} }}^{3} + 45\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{2} P_{{y_{t} }}^{2} + 15\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{4} P_{{y_{t} }} + \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{y}_{t}^{6} } \right). $$
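A quick Monte Carlo sanity check of this closed form, with arbitrarily chosen mean and variance (illustrative values, not from the paper), is:

```python
# Sketch: Monte Carlo check of E y^6 = 15 P^3 + 45 m^2 P^2 + 15 m^4 P + m^6.
import numpy as np

mu, P = 0.7, 0.4           # conditional mean and variance, chosen arbitrarily
closed_form = 15*P**3 + 45*mu**2*P**2 + 15*mu**4*P + mu**6
y = np.random.default_rng(1).normal(mu, np.sqrt(P), size=2_000_000)
print(closed_form, np.mean(y**6))   # the two agree up to Monte Carlo error
```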

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Medewar, P.G., Sharma, S.N. On the Carleman Embedding and Its Offsprings with Their Application to Machine Swing Dynamics. J Control Autom Electr Syst 34, 1242–1259 (2023). https://doi.org/10.1007/s40313-023-01021-5
