The model of the electro-mechanical system with current feedback was obtained by separating it from the identified model of the velocity feedback. The identification was based on measured data of the desired engine velocity vd and the measured velocity vm, sampled with a time step Δt.
$$ {\boldsymbol{u}}_{\boldsymbol{i}}={v}_d\left(i\cdot \Delta t\right) $$
(1)
$$ {\boldsymbol{y}}_{\boldsymbol{i}}={v}_m\left(i\cdot \Delta t\right) $$
(2)
The inputs for the identification are the measured output data y1, y2, …, yq corresponding to the measured inputs u1, u2, …, uq. The coefficient q denotes the total number of time steps. For data structured in this way, the most commonly used method is the Eigensystem realization algorithm (ERA) or methods derived from it (e.g., Multivariable Output Error State-Space (MOESP) [26, 27]).
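For illustration, the sampled data of Eqs. (1) and (2) can be arranged as plain arrays; the following is a minimal NumPy sketch with synthetic placeholder signals (the signal shapes and numerical values are assumptions for illustration, not the measured data of this work).

```python
import numpy as np

dt = 0.01                      # sampling time step (assumed value)
t = np.arange(0.0, 10.0, dt)
v_d = np.sin(t)                # placeholder for the desired engine velocity
v_m = np.sin(t - 0.05)         # placeholder for the measured velocity

u = v_d                        # u_i = v_d(i * dt), Eq. (1)
y = v_m                        # y_i = v_m(i * dt), Eq. (2)
q = len(u) - 1                 # total number of time steps
```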
Eigensystem realization algorithm method
The output of the ERA method (described by Gawronski [28]) is the set of discrete state description parameters in a balanced form (the state vector is as controllable as it is observable). The state description is identified in the form described by Eqs. (3) and (4).
$$ {\boldsymbol{x}}_{\boldsymbol{i}+\mathbf{1}}=\boldsymbol{A}{\boldsymbol{x}}_{\boldsymbol{i}}+\boldsymbol{B}{\boldsymbol{u}}_{\boldsymbol{i}} $$
(3)
$$ {\boldsymbol{y}}_{\boldsymbol{i}}=\boldsymbol{C}{\boldsymbol{x}}_{\boldsymbol{i}}+\boldsymbol{D}{\boldsymbol{u}}_{\boldsymbol{i}} $$
(4)
where A, B, C, and D are the matrices of the state description, xi is the state vector, ui is the vector of inputs, and the index i corresponds to the time step. The first step of the identification is to construct the Markov parameters from the impulse response, considering zero initial conditions. Generally, the construction of the Markov parameters hk can be written as in Eq. (5).
$$ {\boldsymbol{h}}_{\mathbf{0}}=\boldsymbol{D},{\boldsymbol{h}}_{\boldsymbol{k}}=\boldsymbol{C}{\boldsymbol{A}}^{k-1}\boldsymbol{B} $$
(5)
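As a sketch of Eq. (5), the following function evaluates the Markov parameters of a given discrete state-space model (markov_parameters is our naming; NumPy arrays of compatible dimensions are assumed).

```python
import numpy as np

def markov_parameters(A, B, C, D, p):
    """Markov parameters h_0 = D and h_k = C A^(k-1) B for k = 1..p, Eq. (5)."""
    h = [np.atleast_2d(D)]
    Ak = np.eye(A.shape[0])    # A^0
    for _ in range(p):
        h.append(C @ Ak @ B)   # h_k = C A^(k-1) B
        Ak = Ak @ A            # advance to the next power of A
    return h
```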
The individual Markov parameters are then collected in the Markov matrix H. The coefficient p denotes the prediction horizon (model order), and p ≪ q holds. The choice of the order is significant: too low an order would fail to capture part of the dynamics, while too high an order could introduce undesirable dynamics (typically caused by noise in the measured data) and would, in addition, increase the computational demands.
$$ \boldsymbol{H}=\left[{\boldsymbol{h}}_{\mathbf{0}}\kern0.5em {\boldsymbol{h}}_{\mathbf{1}}\kern0.5em \cdots \kern0.5em {\boldsymbol{h}}_{\boldsymbol{p}}\right] $$
(6)
To calculate the Markov matrix H, it is first necessary to define the input matrix U and the output matrix Y.
$$ \boldsymbol{U}=\left[\begin{array}{ccccccc}{\boldsymbol{u}}_{\mathbf{0}}& {\boldsymbol{u}}_{\mathbf{1}}& {\boldsymbol{u}}_{\mathbf{2}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{p}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{q}}\\ {}\mathbf{0}& {\boldsymbol{u}}_{\mathbf{0}}& {\boldsymbol{u}}_{\mathbf{1}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{p}-\mathbf{1}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{q}-\mathbf{1}}\\ {}\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {}\mathbf{0}& \mathbf{0}& \mathbf{0}& \cdots & {\boldsymbol{u}}_{\mathbf{0}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{q}-\boldsymbol{p}}\end{array}\right] $$
(7)
$$ \boldsymbol{Y}=\left[{\boldsymbol{y}}_{\mathbf{0}}\kern0.5em {\boldsymbol{y}}_{\mathbf{1}}\kern0.5em \cdots \kern0.5em {\boldsymbol{y}}_{\boldsymbol{q}}\right] $$
(8)
Relation (9) holds for matrices formulated in this way; after pseudoinversion of the input matrix, we obtain formula (10) for determining the Markov matrix, and its partitioning also yields the individual Markov parameters.
$$ \boldsymbol{Y}=\boldsymbol{HU} $$
(9)
$$ \boldsymbol{H}=\boldsymbol{Y}{\boldsymbol{U}}^T{\left(\boldsymbol{U}{\boldsymbol{U}}^T\right)}^{-1} $$
(10)
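A minimal sketch of Eqs. (7)–(10) for the single-input single-output case follows (markov_from_data is our naming). A least-squares solver replaces the explicit product YU^T(UU^T)^(−1); the two are equivalent, but the solver is better conditioned numerically.

```python
import numpy as np

def markov_from_data(u, y, p):
    """Estimate the Markov parameters h_0..h_p from SISO data, Eqs. (7)-(10)."""
    q = len(u) - 1
    # Upper-triangular Toeplitz input matrix, Eq. (7) (scalar blocks for SISO)
    U = np.zeros((p + 1, q + 1))
    for row in range(p + 1):
        U[row, row:] = u[: q + 1 - row]
    Y = np.asarray(y).reshape(1, -1)          # output matrix, Eq. (8)
    # Solve Y = H U in the least-squares sense, equivalent to Eq. (10)
    H = np.linalg.lstsq(U.T, Y.T, rcond=None)[0].T
    return H.ravel()                          # h_0, h_1, ..., h_p
```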
The observability matrix P and the controllability matrix Q are expressed from the Hankel matrices H1 and H2, whose structure follows from Eq. (5).
$$ {\boldsymbol{H}}_{\mathbf{1}}=\boldsymbol{PQ}=\left[\begin{array}{cccc}{\boldsymbol{h}}_{\mathbf{1}}& {\boldsymbol{h}}_{\mathbf{2}}& \cdots & {\boldsymbol{h}}_{\boldsymbol{p}}\\ {}{\boldsymbol{h}}_{\mathbf{2}}& {\boldsymbol{h}}_{\mathbf{3}}& \cdots & {\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{1}}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\boldsymbol{h}}_{\boldsymbol{p}}& {\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{1}}& \cdots & {\boldsymbol{h}}_{\mathbf{2}\boldsymbol{p}-\mathbf{1}}\end{array}\right] $$
(11)
$$ {\boldsymbol{H}}_{\mathbf{2}}=\boldsymbol{PAQ}=\left[\begin{array}{cccc}{\boldsymbol{h}}_{\mathbf{2}}& {\boldsymbol{h}}_{\mathbf{3}}& \cdots & {\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{1}}\\ {}{\boldsymbol{h}}_{\mathbf{3}}& {\boldsymbol{h}}_{\mathbf{4}}& \cdots & {\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{2}}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{1}}& {\boldsymbol{h}}_{\boldsymbol{p}+\mathbf{2}}& \cdots & {\boldsymbol{h}}_{\mathbf{2}\boldsymbol{p}}\end{array}\right] $$
(12)
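The Hankel matrices of Eqs. (11) and (12) can then be assembled from the identified parameters (hankel_matrices is our naming; scalar SISO parameters are assumed). Note that the bottom-right entries require parameters up to h2p, so the horizon used in Eq. (10) must reach at least 2p.

```python
import numpy as np

def hankel_matrices(h, p):
    """Hankel matrices H1 and H2 from Markov parameters h_0..h_2p, Eqs. (11)-(12)."""
    H1 = np.array([[h[i + j + 1] for j in range(p)] for i in range(p)])
    H2 = np.array([[h[i + j + 2] for j in range(p)] for i in range(p)])
    return H1, H2
```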
By singular value decomposition (SVD), the Hankel matrix is factored into a diagonal matrix of singular values Γ and orthonormal matrices V and U. The diagonal matrix Γ is further split into the square-root factors ΓV and ΓU (Γ = ΓVΓU).
$$ {\boldsymbol{H}}_{\mathbf{1}}=\boldsymbol{V}\boldsymbol{\Gamma}{\boldsymbol{U}}^T=\boldsymbol{V}{\boldsymbol{\Gamma}}_{\boldsymbol{V}}{\boldsymbol{\Gamma}}_{\boldsymbol{U}}{\boldsymbol{U}}^T $$
(13)
The observability matrix P and the controllability matrix Q are given by Eqs. (14) and (15).
$$ \boldsymbol{P}=\boldsymbol{V}{\boldsymbol{\Gamma}}_{\boldsymbol{V}} $$
(14)
$$ \boldsymbol{Q}={\boldsymbol{\Gamma}}_{\boldsymbol{U}}{\boldsymbol{U}}^T $$
(15)
The matrix A is obtained by pseudoinversion from the shifted Hankel matrix H2.
$$ \boldsymbol{A}={\boldsymbol{P}}^{+}{\boldsymbol{H}}_{\mathbf{2}}{\boldsymbol{Q}}^{+} $$
(16)
$$ {\boldsymbol{P}}^{+}={\left({\boldsymbol{P}}^T\boldsymbol{P}\right)}^{-1}{\boldsymbol{P}}^T $$
(17)
$$ {\boldsymbol{Q}}^{+}={\boldsymbol{Q}}^T{\left(\boldsymbol{Q}{\boldsymbol{Q}}^T\right)}^{-1} $$
(18)
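Eqs. (13)–(18) translate into a few lines of NumPy (balanced_realization_A is our naming). With noisy data, the SVD is usually truncated to the n dominant singular values, which fixes the model order; the truncation is optional in the sketch.

```python
import numpy as np

def balanced_realization_A(H1, H2, n=None):
    """P, Q, and the system matrix A from H1 and H2, Eqs. (13)-(18)."""
    V, s, Ut = np.linalg.svd(H1)               # H1 = V Gamma U^T, Eq. (13)
    if n is not None:                          # optional truncation to order n
        V, s, Ut = V[:, :n], s[:n], Ut[:n, :]
    sqrt_s = np.sqrt(s)
    P = V * sqrt_s                             # P = V Gamma_V, Eq. (14)
    Q = sqrt_s[:, None] * Ut                   # Q = Gamma_U U^T, Eq. (15)
    A = np.linalg.pinv(P) @ H2 @ np.linalg.pinv(Q)   # Eqs. (16)-(18)
    return P, Q, A
```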
The matrix B is determined as the first s columns of the matrix Q, and the matrix C as the first r rows of the matrix P, where s is the number of inputs and r is the number of outputs.
$$ \boldsymbol{B}=\boldsymbol{Q}{\boldsymbol{E}}_{\boldsymbol{s}} $$
(19)
$$ {\boldsymbol{E}}_{\boldsymbol{s}}={\left[{\boldsymbol{I}}_{\boldsymbol{s}}\kern0.5em \mathbf{0}\kern0.5em \cdots \kern0.5em \mathbf{0}\right]}^T $$
(20)
$$ \boldsymbol{C}={{\boldsymbol{E}}_{\boldsymbol{r}}}^T\boldsymbol{P} $$
(21)
$$ {\boldsymbol{E}}_{\boldsymbol{r}}={\left[{\boldsymbol{I}}_{\boldsymbol{r}}\kern0.5em \mathbf{0}\kern0.5em \cdots \kern0.5em \mathbf{0}\right]}^T $$
(22)
The matrix D corresponds to the zeroth Markov parameter.
$$ \boldsymbol{D}={\boldsymbol{h}}_{\mathbf{0}} $$
(23)
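The remaining matrices follow directly from the slicing rules of Eqs. (19)–(23) (era_bcd is our naming). The trailing comments show a hypothetical end-to-end call chain for the SISO sketches above.

```python
import numpy as np

def era_bcd(P, Q, h0, s=1, r=1):
    """B, C, and D from P, Q, and h_0 for s inputs and r outputs, Eqs. (19)-(23)."""
    B = Q[:, :s]                  # first s columns of Q, Eqs. (19)-(20)
    C = P[:r, :]                  # first r rows of P, Eqs. (21)-(22)
    D = np.atleast_2d(h0)         # zeroth Markov parameter, Eq. (23)
    return B, C, D

# Hypothetical end-to-end use of the SISO sketches above:
# h = markov_from_data(u, y, 2 * p)              # Eqs. (11)-(12) need h_0..h_2p
# H1, H2 = hankel_matrices(h, p)                 # Eqs. (11)-(12)
# P, Q, A = balanced_realization_A(H1, H2, n)    # Eqs. (13)-(18)
# B, C, D = era_bcd(P, Q, h[0])                  # Eqs. (19)-(23)
```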
Multivariable Output Error State-Space method
The MOESP method will be described following the approach of Katayama [29]. In the first step, the data matrices UK and YK are formed (q is assumed to be sufficiently large).
$$ {\boldsymbol{U}}_{\boldsymbol{K}}=\left[\begin{array}{cccc}{\boldsymbol{u}}_{\mathbf{0}}& {\boldsymbol{u}}_{\mathbf{1}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{q}-\mathbf{1}}\\ {}{\boldsymbol{u}}_{\mathbf{1}}& {\boldsymbol{u}}_{\mathbf{2}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{q}}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\boldsymbol{u}}_{\boldsymbol{p}-\mathbf{1}}& {\boldsymbol{u}}_{\boldsymbol{p}}& \cdots & {\boldsymbol{u}}_{\boldsymbol{p}+\boldsymbol{q}-\mathbf{2}}\end{array}\right] $$
(24)
$$ {\boldsymbol{Y}}_{\boldsymbol{K}}=\left[\begin{array}{cccc}{\boldsymbol{y}}_{\mathbf{0}}& {\boldsymbol{y}}_{\mathbf{1}}& \cdots & {\boldsymbol{y}}_{\boldsymbol{q}-\mathbf{1}}\\ {}{\boldsymbol{y}}_{\mathbf{1}}& {\boldsymbol{y}}_{\mathbf{2}}& \cdots & {\boldsymbol{y}}_{\boldsymbol{q}}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\boldsymbol{y}}_{\boldsymbol{p}-\mathbf{1}}& {\boldsymbol{y}}_{\boldsymbol{p}}& \cdots & {\boldsymbol{y}}_{\boldsymbol{p}+\boldsymbol{q}-\mathbf{2}}\end{array}\right] $$
(25)
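The block Hankel data matrices of Eqs. (24) and (25) share one structure, so a single helper suffices (block_hankel is our naming; the SISO case with scalar entries is assumed).

```python
import numpy as np

def block_hankel(w, p, q):
    """Hankel data matrix with p rows and q columns, Eqs. (24)-(25), SISO case."""
    return np.array([w[i : i + q] for i in range(p)])

# U_K = block_hankel(u, p, q)    # Eq. (24); requires len(u) >= p + q - 1
# Y_K = block_hankel(y, p, q)    # Eq. (25)
```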
Subsequently, LQ decomposition of these data matrices is performed.
$$ {\boldsymbol{U}}_{\boldsymbol{K}}={\boldsymbol{L}}_{\mathbf{1}\mathbf{1}}{\boldsymbol{Q}}_{\mathbf{1}}^T $$
(26)
$$ {\boldsymbol{Y}}_{\boldsymbol{K}}={\boldsymbol{L}}_{\mathbf{2}\mathbf{1}}{\boldsymbol{Q}}_{\mathbf{1}}^T+{\boldsymbol{L}}_{\mathbf{2}\mathbf{2}}{\boldsymbol{Q}}_{\mathbf{2}}^T $$
(27)
where L11 and L22 are lower triangular matrices and Q1 and Q2 are orthogonal matrices. The next step is the singular value decomposition of the matrix L22:
$$ {\boldsymbol{L}}_{\mathbf{2}\mathbf{2}}=\left[{\boldsymbol{U}}_{\mathbf{1}}{\boldsymbol{U}}_{\mathbf{2}}\right]\left[\begin{array}{cc}{\boldsymbol{S}}_{\mathbf{1}\mathbf{1}}& \mathbf{0}\\ {}\mathbf{0}& \mathbf{0}\end{array}\right]\left[\begin{array}{c}{\boldsymbol{V}}_{\mathbf{1}}^T\\ {}{\boldsymbol{V}}_{\mathbf{2}}^T\end{array}\right]={\boldsymbol{U}}_{\mathbf{1}}{\boldsymbol{S}}_{\mathbf{1}\mathbf{1}}{\boldsymbol{V}}_{\mathbf{1}}^T $$
(28)
where U1 has dimension [p · l × n] and U2 has dimension [p · l × (p · l − n)], n is the dimension of the state vector, and l is the dimension of the output vector yi. Then, we define the extended observability matrix as:
$$ {\boldsymbol{O}}_{\boldsymbol{p}}={\boldsymbol{U}}_{\mathbf{1}}{\boldsymbol{S}}_{\mathbf{1}\mathbf{1}}^{1/2} $$
(29)
where \( {\boldsymbol{S}}_{\mathbf{11}}^{1/2} \) is a matrix square root of the matrix S11.
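A sketch of Eqs. (26)–(29) follows (moesp_lq_svd is our naming; the SISO case with l = 1 is assumed). NumPy has no dedicated LQ routine, so the LQ factors are obtained from the QR decomposition of the transposed, stacked data matrix, which is the standard equivalence.

```python
import numpy as np

def moesp_lq_svd(U_K, Y_K, n):
    """L11, L21, L22, U2, and O_p from the data matrices, Eqs. (26)-(29)."""
    p = U_K.shape[0]                     # number of block rows (l = 1 assumed)
    W = np.vstack([U_K, Y_K])
    Qt, Lt = np.linalg.qr(W.T)           # W^T = Q L^T, hence W = L Q^T
    L = Lt.T                             # lower triangular factor
    L11, L21, L22 = L[:p, :p], L[p:, :p], L[p:, p:]
    Ufull, s1, _ = np.linalg.svd(L22)    # Eq. (28)
    U1, U2 = Ufull[:, :n], Ufull[:, n:]  # signal and noise subspaces
    Op = U1 * np.sqrt(s1[:n])            # O_p = U1 S11^(1/2), Eq. (29)
    return L11, L21, L22, U2, Op
```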
Using the extended observability matrix, the matrix C is given by:
$$ \boldsymbol{C}={\boldsymbol{O}}_{\boldsymbol{p}}\left(1:l,1:n\right) $$
(30)
Note that
$$ \left[\begin{array}{c}\boldsymbol{C}\\ {}\boldsymbol{C}\boldsymbol{A}\\ {}\vdots \\ {}\boldsymbol{C}{\boldsymbol{A}}^{p-2}\end{array}\right]\boldsymbol{A}=\left[\begin{array}{c}\boldsymbol{C}\boldsymbol{A}\\ {}\boldsymbol{C}{\boldsymbol{A}}^2\\ {}\vdots \\ {}\boldsymbol{C}{\boldsymbol{A}}^{p-1}\end{array}\right]\to {\boldsymbol{O}}_{\boldsymbol{p}-\mathbf{1}}\boldsymbol{A}={\boldsymbol{O}}_{\boldsymbol{p}}\left(\left(l+1\right):\left(l\cdot p\right),1:n\right) $$
(31)
Therefore, using the pseudoinverse of Op − 1, the matrix A can be computed as:
$$ \boldsymbol{A}={\boldsymbol{O}}_{\boldsymbol{p}-\mathbf{1}}^{+}{\boldsymbol{O}}_{\boldsymbol{p}}\left(\left(l+1\right):\left(l\cdot p\right),1:n\right) $$
(32)
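Eqs. (30)–(32) amount to slicing Op and one pseudoinversion (moesp_CA is our naming; note that the slices below are zero-based, while Eqs. (30)–(32) use one-based indexing).

```python
import numpy as np

def moesp_CA(Op, l=1):
    """C and A from the extended observability matrix, Eqs. (30)-(32)."""
    C = Op[:l, :]                                  # first l rows, Eq. (30)
    # Shift invariance of Eq. (31): O_(p-1) A equals Op without its first l rows
    A = np.linalg.pinv(Op[:-l, :]) @ Op[l:, :]     # Eq. (32)
    return C, A
```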
The estimates of B and D are obtained by solving the equation:
$$ {\boldsymbol{U}}_{\mathbf{2}}^T\left[\begin{array}{cccc}\boldsymbol{D}& \mathbf{0}& \dots & \mathbf{0}\\ {}\boldsymbol{C}\boldsymbol{B}& \boldsymbol{D}& \dots & \mathbf{0}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}\boldsymbol{C}{\boldsymbol{A}}^{p-2}\boldsymbol{B}& \boldsymbol{C}{\boldsymbol{A}}^{p-3}\boldsymbol{B}& \dots & \boldsymbol{D}\end{array}\right]={\boldsymbol{U}}_{\mathbf{2}}^T{\boldsymbol{L}}_{\mathbf{2}\mathbf{1}}{\boldsymbol{L}}_{\mathbf{11}}^{-1} $$
(33)
Substituting \( {\boldsymbol{U}}_{\mathbf{2}}^T:= \left[{\mathcal{L}}_1\kern0.5em {\mathcal{L}}_2\kern0.5em \cdots \kern0.5em {\mathcal{L}}_p\right] \) and \( {\boldsymbol{U}}_{\mathbf{2}}^T{\boldsymbol{L}}_{\mathbf{2}\mathbf{1}}{\boldsymbol{L}}_{\mathbf{11}}^{-1}:= \left[{\mathcal{M}}_1\kern0.5em {\mathcal{M}}_2\kern0.5em \cdots \kern0.5em {\mathcal{M}}_p\right] \), we obtain an overdetermined set of linear equations:
$$ \left[\begin{array}{cc}{\mathcal{L}}_1& {\overline{\mathcal{L}}}_2{\boldsymbol{O}}_{\boldsymbol{p}-\mathbf{1}}\\ {}{\mathcal{L}}_2& {\overline{\mathcal{L}}}_3{\boldsymbol{O}}_{\boldsymbol{p}-\mathbf{2}}\\ {}\mathbf{\vdots}& \mathbf{\vdots}\\ {}{\mathcal{L}}_{p-1}& {\overline{\mathcal{L}}}_p{\boldsymbol{O}}_{\mathbf{1}}\\ {}{\mathcal{L}}_p& \mathbf{0}\end{array}\right]\left[\begin{array}{c}\boldsymbol{D}\\ {}\boldsymbol{B}\end{array}\right]=\left[\begin{array}{c}{\mathcal{M}}_1\\ {}{\mathcal{M}}_2\\ {}\mathbf{\vdots}\\ {}{\mathcal{M}}_{p-1}\\ {}{\mathcal{M}}_p\end{array}\right] $$
(34)
where \( {\overline{\mathcal{L}}}_i=\left[{\mathcal{L}}_i\cdots {\mathcal{L}}_p\right] \), i = 2, ⋯, p. It can be shown that a unique least-squares solution exists if p > n.
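Finally, a sketch of assembling and solving the overdetermined system (34) in the least-squares sense (moesp_BD is our naming; the SISO case with l = m = 1 is assumed, and L11 is assumed invertible, which requires a persistently exciting input).

```python
import numpy as np

def moesp_BD(U2, L21, L11, Op, n, l=1, m=1):
    """B and D from the least-squares solution of Eq. (34)."""
    p = Op.shape[0] // l
    M = U2.T @ L21 @ np.linalg.inv(L11)        # right-hand side of Eq. (33)
    lhs, rhs = [], []
    for i in range(1, p + 1):
        Li = U2.T[:, (i - 1) * l : i * l]      # block L_i of U2^T
        Mi = M[:, (i - 1) * m : i * m]         # block M_i
        if i < p:
            # Lbar_(i+1) O_(p-i), the coefficient of B in row i of Eq. (34)
            OB = U2.T[:, i * l :] @ Op[: (p - i) * l, :]
        else:
            OB = np.zeros((U2.shape[1], n))    # last row has a zero block
        lhs.append(np.hstack([Li, OB]))
        rhs.append(Mi)
    DB = np.linalg.lstsq(np.vstack(lhs), np.vstack(rhs), rcond=None)[0]
    D, B = DB[:l, :], DB[l:, :]                # unknowns stacked as [D; B]
    return B, D
```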