
Optimal Evaluation of Discrete Continuous Markov Processes from Observed Digital Signals

  • ON THE 100th ANNIVERSARY OF V.I. TIKHONOV

Journal of Communications Technology and Electronics

Abstract—On the basis of the theory of conditional Markov processes, recursive optimal and quasi-optimal algorithms for estimating discrete-continuous Markov processes from observed digital signals are synthesized. The efficiency of the quasi-optimal algorithm is confirmed by statistical modeling.


REFERENCES

  1. R. L. Stratonovich, Conditional Markov Processes and Their Applications in Optimum Control Theory (Mosk. Gos. Univ., Moscow, 1966) [in Russian].

  2. V. I. Tikhonov and M. A. Mironov, Markovian Processes (Sovetskoe Radio, Moscow, 1977) [in Russian].

  3. V. I. Tikhonov and N. K. Kul’man, Nonlinear Filtration and Partially Coherent Reception of Signals (Sovetskoe Radio, Moscow, 1975) [in Russian].

  4. V. I. Tikhonov and V. N. Kharisov, Statistical Analysis and Synthesis of Wireless Devices and Systems: School Textbook for College (Radio i Svyaz’, Moscow, 1991) [in Russian].

  5. V. I. Tikhonov and D. S. Stepanov, Radiotekh. Elektron. (Moscow) 18, 1376 (1973).

  6. V. I. Tikhonov, V. N. Kharisov, and V. A. Smirnov, Radiotekh. Elektron. (Moscow) 23, 1442 (1978).

  7. Yu. G. Sosulin, Detection Theory and Estimation Theory for Stochastic Signals (Sovetskoe Radio, Moscow, 1978) [in Russian].

  8. M. S. Yarlykov, Application of Markov Theory of Nonlinear Filtration in Radio Engineering (Sovetskoe Radio, Moscow, 1980) [in Russian].

  9. M. S. Yarlykov and M. A. Mironov, Markov Theory for Estimation of Random Processes (Radio i Svyaz’, Moscow, 1993) [in Russian].

  10. V. A. Cherdyntsev, Izv. Akad. Nauk Bel. SSR, Ser. Fiz.-Tekh., No. 2, 102 (1975).

  11. V. A. Cherdyntsev, Statistical Theory of Multiplexed Radio Systems (Vysheish. Shkola, Minsk, 1980) [in Russian].

  12. S. Ya. Zhuk, Izv. Vyssh. Uchebn. Zaved. Radioelektron. 31 (1), 33 (1988).

  13. A. I. Velichkin, Transfer of Analog Messages on Digital Channels of Communication (Radio i Svyaz’, Moscow, 1983) [in Russian].

  14. Yu. N. Gorbunov, Digital Processing of Radar Signals in Terms of Use of Rough (Low-Digit) Quantization (TsNIRTI im. Akad. A.I. Berga, Moscow, 2008) [in Russian].

  15. E. A. Fedosov, V. V. Insarov, and O. S. Selivokhin, Control Systems of Final Situation in Conditions of Exterior Counteractions (Nauka, Moscow, 1989).

  16. M. A. Mironov, Radiotekh. Elektron. (Moscow) 38, 109 (1993).

  17. A. I. Velichkin, Radiotekh. Elektron. (Moscow) 35, 1471 (1990).

  18. A. N. Detkov, Radiotekh. Elektron. (Moscow) 40, 1406 (1995).

  19. A. N. Detkov, Izv. Ross. Akad. Nauk, Teor. Sist. Upr., No. 2, 73 (1997).

  20. A. N. Detkov, J. Commun. Technol. Electron. 62, 576 (2017).

  21. V. I. Tikhonov, Statistical Radio Engineering (Sovetskoe Radio, Moscow, 1982) [in Russian].

Correspondence to A. N. Detkov.

APPENDIX


Using the multiplication rule of probabilities and the basic properties of Markov processes, we write an expression for the distribution law of state vector \({{\left[ {{{{\mathbf{z}}}^{T}}\left( k \right),{{{\mathbf{s}}}^{T}}\left( k \right)} \right]}^{T}}\):

$$\begin{gathered} f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right)\left| {{\mathbf{D}}_{1}^{k}} \right.} \right) = \frac{1}{{P\left( {{\mathbf{D}}_{1}^{k}} \right)}}f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),{\mathbf{D}}_{1}^{k}} \right) = \frac{1}{{P\left( {{\mathbf{D}}_{1}^{k}} \right)}} \\ \times \,\,\sum\limits_{{\mathbf{s}}\left( {k - 1} \right)} { \cdots \sum\limits_{{\mathbf{s}}\left( 1 \right)} {\int { \cdots \int {f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),{\mathbf{d}}\left( k \right),...,{\mathbf{z}}\left( 1 \right),{\mathbf{s}}\left( 1 \right),{\mathbf{d}}\left( 1 \right)} \right)} } } } \\ \times \,\,\prod\limits_{g = 1}^{k - 1} d {{{\mathbf{z}}}_{g}} = \frac{{P\left( {{\mathbf{D}}_{1}^{{k - 1}}} \right)}}{{P\left( {{\mathbf{D}}_{1}^{k}} \right)}}\sum\limits_{{\mathbf{s}}\left( {k - 1} \right)} {\int {f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),\left. {{\mathbf{d}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right),{\mathbf{d}}\left( {k - 1} \right)} \right)} } \\ \times \,\,f\left( {{\mathbf{z}}\left( {k - 1} \right),\left. {{\mathbf{s}}\left( {k - 1} \right)} \right|{\mathbf{D}}_{1}^{{k - 1}}} \right)d{\mathbf{z}}\left( {k - 1} \right), \\ \end{gathered} $$
(A.1)

where the integration with respect to variable \({\mathbf{z}}\) is carried out over region \({{\Re }^{{{{n}_{x}}}}} \times \Omega \) \(\left( {\Omega \subset {{\Omega }_{1}} \times \cdot \cdot \cdot \times {{\Omega }_{{{{n}_{y}}}}}} \right)\).

We transform the first conditional probability density in the integrand:

$$\begin{gathered} f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),\left. {{\mathbf{d}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right),{\mathbf{d}}\left( {k - 1} \right)} \right) \\ = f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),\left. {{\mathbf{d}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right)} \right). \\ \end{gathered} $$

Here, the absence of \({\mathbf{d}}\left( {k - 1} \right)\) in the condition is immaterial, since for given \({\mathbf{z}}\left( {k - 1} \right)\) it carries no additional information. Using the property of conditional probability densities, we write

$$\begin{gathered} f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),\left. {{\mathbf{d}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right)} \right) \\ = f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right)} \right) \\ \times \,\,P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right),{\mathbf{z}}\left( {k - 1} \right)} \right) \\ \times \,\,P\left( {\left. {{\mathbf{d}}\left( k \right)} \right|{\mathbf{z}}\left( k \right),{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right),{\mathbf{s}}\left( {k - 1} \right)} \right). \\ \end{gathered} $$
(A.2)

Since quantization transforms \({\mathbf{z}}\left( k \right)\) into \({\mathbf{d}}\left( k \right)\) uniquely, the conditional probability of the digital signal [17] has the form

$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)|{\mathbf{z}}\left( k \right),{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right),{\mathbf{s}}\left( {k - 1} \right)} \right) \\ = \left\{ {\begin{array}{*{20}{c}} {1,}&{{\mathbf{y}}\left( k \right) \in \Omega } \\ {0,}&{{\mathbf{y}}\left( k \right) \notin \Omega } \end{array}} \right.. \\ \end{gathered} $$
(A.3)

Taking into account the identities

$$P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right),{\mathbf{z}}\left( {k - 1} \right)} \right) \equiv P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right)} \right),$$
$$\begin{gathered} f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right)} \right) \\ \equiv f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right)} \right), \\ \end{gathered} $$

which hold by the conditions of the problem statement, and using (A.3), we rewrite (A.2):

$$f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right),{\mathbf{d}}\left. {\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( {k - 1} \right)} \right) = \left\{ {\begin{array}{*{20}{c}} {f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right)} \right)P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right)} \right),}&{{{{\mathbf{y}}}_{k}} \in \Omega } \\ {0,}&{{{{\mathbf{y}}}_{k}} \notin \Omega } \end{array}} \right..$$
(A.4)

After substituting (A.4) into (A.1), we have

$$\begin{gathered} f\left( {{\mathbf{z}}\left( k \right),{\mathbf{s}}\left( k \right)\left| {{\mathbf{D}}_{1}^{k}} \right.} \right) = \frac{1}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}}\,\sum\limits_{{\mathbf{s}}\left( {k - 1} \right)} {\int {f\left( {{\mathbf{z}}\left( k \right)|{\mathbf{z}}\left( {k - 1} \right),{\mathbf{s}}\left( k \right)} \right)} } \\ \times \,\,P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right)} \right)f\left( {{\mathbf{z}}\left( {k - 1} \right),\left. {{\mathbf{s}}\left( {k - 1} \right)} \right|{\mathbf{D}}_{1}^{{k - 1}}} \right)d{\mathbf{z}}\left( {k - 1} \right). \\ \end{gathered} $$
(A.5)

Taking into account the relations

$${\mathbf{s}}\left( k \right) \triangleq {{\left[ {{{a}_{j}}\left( {{{t}_{k}}} \right),{{b}_{m}}\left( {{{t}_{k}}} \right)} \right]}^{T}};\,\,\,\,P\left( {\left. {{\mathbf{s}}\left( k \right)} \right|{\mathbf{s}}\left( {k - 1} \right)} \right) \triangleq {{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}$$

we rewrite (A.5):

$$\begin{gathered} f\left( {{\mathbf{z}}\left( k \right),{{a}_{j}},{{b}_{m}}\left| {{\mathbf{D}}_{1}^{k}} \right.} \right) = \hat {f}\left( {{\mathbf{z}}\left( k \right),{{a}_{j}},{{b}_{m}}} \right) = \frac{1}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}}\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} } \\ \times \,\,\int {f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{{a}_{j}},{{b}_{m}}} \right)} \hat {f}\left( {{\mathbf{z}}\left( {k - 1} \right),{{a}_{i}}\left( {{{t}_{{k - 1}}}} \right),{{b}_{n}}\left( {{{t}_{{k - 1}}}} \right)} \right)d{\mathbf{z}}\left( {k - 1} \right). \\ \end{gathered} $$
(A.6)
$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) = \sum\limits_j {\sum\limits_m {\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} \int {\int {f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{\mathbf{z}}\left( {k - 1} \right),{{a}_{j}},{{b}_{m}}} \right)} } } } } \\ \times \,\,\hat {f}\left( {{\mathbf{z}}\left( {k - 1} \right),{{a}_{i}}\left( {{{t}_{{k - 1}}}} \right),{{b}_{n}}\left( {{{t}_{{k - 1}}}} \right)} \right)d{\mathbf{z}}\left( {k - 1} \right)d{\mathbf{z}}\left( k \right). \\ \end{gathered} $$
(A.7)

To find the probability of one-step prediction of the observed digital signal \(P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)\), we integrate (A.7) with respect to variable \({\mathbf{z}}\left( k \right)\), considering that \({\mathbf{z}}(k) = {{\left[ {{{{\mathbf{x}}}^{T}}(k),{{{\mathbf{y}}}^{T}}(k)} \right]}^{T}}\):

$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) = \sum\limits_j {\sum\limits_m {\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}} } } } {{\pi }}_{{nm}}^{b} \\ \times \,\,\int\limits_\Omega {\int\limits_{} {f\left( {{\mathbf{y}}\left( k \right)\left| {{\mathbf{z}}\left( {k - 1} \right),{{a}_{j}},{{b}_{m}}} \right.} \right)} } \\ \times \,\,\hat {f}\left( {{\mathbf{z}}\left( {k - 1} \right),{{a}_{i}},{{b}_{n}}} \right)d{\mathbf{z}}\left( {k - 1} \right)d{\mathbf{y}}\left( k \right). \\ \end{gathered} $$
(A.8)

We represent posterior probability density (A.6) of the vector z for given \({{a}_{j}}({{t}_{k}})\), \({{b}_{m}}({{t}_{k}})\) as a system of recurrence equations

$$\begin{gathered} {{{\hat {f}}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right) = \hat {f}\left( {{\mathbf{z}}\left( k \right),{{a}_{j}},{{b}_{m}}} \right) \\ = \frac{1}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}}{{{\tilde {f}}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right), \\ \end{gathered} $$
(A.9)

where \({{\tilde {f}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right) = f\left( {{\mathbf{z}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)\) is the conditional probability density of vector \({{\left[ {{{{\mathbf{z}}}^{T}}(k),{{a}_{j}}({{t}_{k}}),{{b}_{m}}({{t}_{k}})} \right]}^{T}}\):

$${{\tilde {f}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right) = \sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}} } {{\pi }}_{{nm}}^{b}{{\hat {P}}_{{in}}}(k - 1)\int {f\left( {{\mathbf{z}}\left( k \right)\left| {{\mathbf{z}}\left( {k - 1} \right),{{a}_{j}},{{b}_{m}}} \right.} \right)} {{\hat {f}}_{{in}}}\left( {{\mathbf{z}}\left( {k - 1} \right)} \right)d{\mathbf{z}}\left( {k - 1} \right).$$
(A.10)

To find the posterior probability \(P\left( {{{a}_{j}},{{b}_{m}}\left| {{\mathbf{D}}_{1}^{k}} \right.} \right)\) of the discrete component of the DCMP, we integrate (A.6) over variables \({\mathbf{z}}\left( k \right)\) and \({\mathbf{z}}\left( {k - 1} \right)\):

$$\begin{gathered} P\left( {{{a}_{j}},{{b}_{m}}\left| {{\mathbf{D}}_{1}^{k}} \right.} \right) = \hat {P}\left( {{{a}_{j}},{{b}_{m}}} \right) = \frac{1}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}} \\ \times \,\,\int\limits_\Omega {f\left( {{\mathbf{y}}\left( k \right)|{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right)} d{\mathbf{y}}\left( k \right) \\ \times \,\,\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} \hat {P}\left( {{{a}_{i}}\left( {{{t}_{{k - 1}}}} \right),{{b}_{n}}\left( {{{t}_{{k - 1}}}} \right)} \right)} \\ = \frac{{P\left( {{\mathbf{d}}\left( k \right)|{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right)}}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}} \\ \times \,\,\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} \hat {P}\left( {{{a}_{i}}\left( {{{t}_{{k - 1}}}} \right),{{b}_{n}}\left( {{{t}_{{k - 1}}}} \right)} \right)} . \\ \end{gathered} $$
(A.11)

Introducing the notation

$${{\tilde {P}}_{{jm}}}\left( k \right) = \sum\limits_i {\sum\limits_n {\pi _{{ij}}^{a}\pi _{{nm}}^{b}} \hat {P}\left( {{{a}_{i}}\left( {{{t}_{{k - 1}}}} \right),{{b}_{n}}\left( {{{t}_{{k - 1}}}} \right)} \right)} ,$$
(A.12)

we rewrite (A.11) as

$$\begin{gathered} {{{\hat {P}}}_{{jm}}}\left( k \right) = \hat {P}\left( {{{a}_{j}}\left( {{{t}_{k}}} \right),{{b}_{m}}\left( {{{t}_{k}}} \right)} \right) \\ = \frac{{P\left( {{\mathbf{d}}\left( k \right)\left| {{{a}_{j}}\left( {{{t}_{k}}} \right),{{b}_{m}}\left( {{{t}_{k}}} \right),{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}}{{P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)}}{{{\tilde {P}}}_{{jm}}}\left( k \right), \\ \end{gathered} $$
(A.13)

where \(P\left( {{\mathbf{d}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)\) is the conditional probability of one-step prediction of the observed digital signal:

$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) \\ = \int\limits_\Omega {f\left( {{\mathbf{y}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)} d{\mathbf{y}}\left( k \right). \\ \end{gathered} $$
(A.14)

Expression (A.6) corresponds to formula (8); expressions (A.8)–(A.10) correspond to formulas (9)–(11); and expressions (A.12)–(A.14) correspond to formulas (13), (12), and (14), respectively.
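The discrete-component recursion (A.12), (A.13) can be sketched numerically. The following is a minimal illustration, not the paper's implementation; the array names, shapes, and indexing convention (\(\hat P[i,n]\) for the joint posterior mode probabilities) are assumptions:

```python
import numpy as np

def predict_mode_probs(P_hat, Pi_a, Pi_b):
    """One-step predicted joint mode probabilities, cf. Eq. (A.12):
    P~_{jm}(k) = sum_{i,n} pi^a_{ij} pi^b_{nm} P^_{in}(k-1).
    Pi_a[i, j] and Pi_b[n, m] are the one-step transition matrices."""
    return Pi_a.T @ P_hat @ Pi_b

def update_mode_probs(P_pred, lik):
    """Posterior mode probabilities, cf. Eq. (A.13): the digital-signal
    likelihoods P(d(k)|a_j, b_m, D) weight the predictions; the
    normalizer equals the one-step prediction probability P(d(k)|D)."""
    num = lik * P_pred
    return num / num.sum()
```

With uninformative likelihoods (all ones), the update leaves the predicted probabilities unchanged, as expected from (A.13).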

To derive quasi-optimal algorithms for digital filtering of the continuous components of the DCMP state vector (19)–(26), we use the assumption that posterior probability density \({{\hat {f}}_{{in}}}\left( {{\mathbf{z}}\left( {k - 1} \right)} \right)\) is normal at the previous, \((k - 1)\)th, cycle of digital filtering [19]:

$$\begin{gathered} {{{\hat {f}}}_{{in}}}\left( {{\mathbf{z}}\left( {k - 1} \right)} \right) = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}} + {{n}_{y}}}}}\det {{{{\mathbf{\hat {R}}}}}_{{zz}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{z}}\left( {k - 1} \right) - {\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right)} \right)}}^{T}}} \right. \\ \times \,\,{\mathbf{\hat {R}}}_{{zz}}^{{ - 1}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right)\left. {\left( {{\mathbf{z}}\left( {k - 1} \right) - {\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right)} \right)} \right\}. \\ \end{gathered} $$
(A.15)

Based on linear transformation (6) of the conditionally Gaussian random variable \({\mathbf{z}}\left( {k - 1} \right)\) and taking into account (A.15), we rewrite (A.10):

$$\begin{gathered} {{{\tilde {f}}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right) = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}} + {{n}_{y}}}}}} }}\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} {{{\hat {P}}}_{{in}}}\left( {k - 1} \right)} \frac{1}{{\sqrt {\det {{{{\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{R} }}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{z} }}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}{\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{R} }}_{{zz}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{z} }}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\}, \\ \end{gathered} $$
(A.16)

where

$${\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{z} }}\left( {k,{{a}_{j}},{{b}_{m}}} \right) = {{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}}){\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right),$$
$$\begin{gathered} {{{{\mathbf{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\smile}$}}{R} }}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) = {{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}}){{{{\mathbf{\hat {R}}}}}_{{zz}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right) \\ \times \,\,{\mathbf{\Phi }}_{{zz}}^{{\text{T}}}({{a}_{j}},{{b}_{m}}) + {{{\mathbf{B}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right). \\ \end{gathered} $$

As a result of the two-moment Gaussian approximation of the extrapolation probability density (A.16), we write

$$\begin{gathered} {{{\tilde {f}}}_{{jm}}}\left( {{\mathbf{z}}\left( k \right)} \right) = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}} + {{n}_{y}}}}}\det {{{{\mathbf{\tilde {R}}}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}} \right. \\ \times \,\,\left. {{\mathbf{\tilde {R}}}_{{zz}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\}, \\ \end{gathered} $$
(A.17)

where

$${\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) = \frac{1}{{{{{\tilde {P}}}_{{jm}}}\left( k \right)}}\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} {{{\hat {P}}}_{{in}}}\left( {k - 1} \right)} {{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}}){\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right),$$
(A.18)
$$\begin{gathered} {{{{\mathbf{\tilde {R}}}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) = \frac{1}{{{{{\tilde {P}}}_{{jm}}}\left( k \right)}}\sum\limits_i {\sum\limits_n {{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}} {{{\hat {P}}}_{{in}}}\left( {k - 1} \right)} \left( {{{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}})} \right.{{{\mathbf{\hat {R}}}}_{{zz}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right){\mathbf{\Phi }}_{{zz}}^{T}({{a}_{j}},{{b}_{m}}) \\ + \,\,{{{\mathbf{B}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) + \left( {{{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}}){\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right) \\ \left. { \times \,\,{{{\left( {{{{\mathbf{\Phi }}}_{{zz}}}({{a}_{j}},{{b}_{m}}){\mathbf{\hat {z}}}\left( {k - 1,{{a}_{i}},{{b}_{n}}} \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}} \right). \\ \end{gathered} $$
(A.19)

Taking into consideration the notation

$${\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) \triangleq {{\left[ {{\mathbf{\tilde {x}}}_{{}}^{T}\left( {k,{{a}_{j}},{{b}_{m}}} \right),{\mathbf{\tilde {y}}}_{{}}^{T}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right]}^{T}}$$

from (A.18) and (A.19) it is easy to obtain expressions (19)–(23).
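The two-moment Gaussian approximation in (A.18) and (A.19) is the standard moment-matching of a Gaussian mixture. A hedged sketch follows; here the weights `w` stand for the normalized products \({{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}{{\hat {P}}_{{in}}}(k - 1)/{{\tilde {P}}_{{jm}}}(k)\), and the component means and covariances stand for the \({{{\mathbf{\Phi }}}_{{zz}}}\)-propagated quantities (these identifications are assumptions for illustration):

```python
import numpy as np

def mix_moments(w, means, covs):
    """Moment-match a Gaussian mixture, cf. Eqs. (A.18), (A.19):
    mean = sum_l w_l m_l,
    cov  = sum_l w_l (P_l + (m_l - mean)(m_l - mean)^T)."""
    w = np.asarray(w, dtype=float)
    means = np.asarray(means, dtype=float)   # shape (L, n)
    covs = np.asarray(covs, dtype=float)     # shape (L, n, n)
    m = w @ means                            # mixture mean
    d = means - m                            # deviations of component means
    P = np.einsum('l,lij->ij', w, covs) + np.einsum('l,li,lj->ij', w, d, d)
    return m, P
```

The second term of the covariance is the spread-of-means contribution, which is exactly the outer-product term in (A.19).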

We denote

$$\begin{gathered} \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}} + {{n}_{y}}}}}\det {{{{\mathbf{\tilde {R}}}}}_{{zz}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}} \right. \\ \times \,\,\left. {{\mathbf{\tilde {R}}}_{{zz}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{z}}\left( k \right) - {\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\} \\ = f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right). \\ \end{gathered} $$

Then, instead of (A.9), we have

$${{\hat {f}}_{{jm}}}\left( {{{{\mathbf{z}}}_{k}}} \right) = {{\theta }_{1}}f\left( {\left. {{\mathbf{z}}\left( k \right)} \right|{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right),$$
(A.20)

where \(\theta _{1}^{{ - 1}} = P\left( {{\mathbf{d}}\left( k \right)\left| {{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)\). Considering that \({\mathbf{z}} \triangleq {{\left[ {{{{\mathbf{x}}}^{T}},{{{\mathbf{y}}}^{T}}} \right]}^{T}}\) and applying the multiplication rule for probabilities, we write

$$\begin{gathered} f\left( {{\mathbf{x}}\left( k \right),\left. {{\mathbf{y}}\left( k \right)} \right|{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right) \\ = f{\kern 1pt} \left( {\left. {{\mathbf{y}}\left( k \right)} \right|{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right) \\ \times \,\,f{\kern 1pt} \left( {\left. {{\mathbf{x}}\left( k \right)} \right|{\mathbf{y}}\left( k \right),{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right). \\ \end{gathered} $$

The first conditional probability density is Gaussian [17]

$$\begin{gathered} f\left( {\left. {{\mathbf{y}}\left( k \right)} \right|{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right) \\ = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{y}}}}}\det {{{{\mathbf{\hat {R}}}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{y}}\left( k \right) - {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}} \right. \\ \times \,\,\left. {{\mathbf{\hat {R}}}_{{yy}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{y}}\left( k \right) - {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\}. \\ \end{gathered} $$
(A.21)

The second conditional probability density has the form

$$\begin{gathered} f\left( {\left. {{\mathbf{x}}\left( k \right)} \right|{\mathbf{y}}\left( k \right),{{a}_{j}},{{b}_{m}},{\mathbf{\tilde {z}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right) \\ = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{x}}}}}\det {{{{\mathbf{\hat {R}}}}}_{{xx}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}{{{\left( {{\mathbf{x}}\left( k \right) - {\mathbf{\hat {x}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)}}^{T}}} \right. \\ \times \,\,\left. {{\mathbf{\hat {R}}}_{{xx}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{x}}\left( k \right) - {\mathbf{\hat {x}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\}, \\ \end{gathered} $$
(A.22)

where \({\mathbf{\hat {x}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\) and \({{{\mathbf{\hat {R}}}}_{{xx}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\) are the conditional filtering estimates and the correlation matrices of their errors, determined as in [19]. Using the technique of [17] and taking into account (A.21) and (A.22), we write the conditional digital-filtering estimates and the correlation matrices of their errors:

$$\begin{gathered} {\mathbf{\hat {x}}}(k,{{a}_{j}},{{b}_{m}}) = {\mathbf{\tilde {x}}}(k,{{a}_{j}},{{b}_{m}}) + {{{{\mathbf{\tilde {R}}}}}_{{xy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) \\ \times \,\,{\mathbf{\tilde {R}}}_{{yy}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{\hat {y}}}(k,{{a}_{j}},{{b}_{m}}) - {\mathbf{\tilde {y}}}(k,{{a}_{j}},{{b}_{m}})} \right). \\ \end{gathered} $$
(A.23)
$$\begin{gathered} {{{{\mathbf{\hat {R}}}}}_{{xx}}}(k,{{a}_{j}},{{b}_{m}}) = {{{{\mathbf{\tilde {R}}}}}_{{xx}}}(k,{{a}_{j}},{{b}_{m}}) \\ + \,\,{{{{\mathbf{\tilde {R}}}}}_{{xy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){\mathbf{\tilde {R}}}_{{yy}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) \\ \times \,\,\left[ {{\mathbf{I}} - {{{\mathbf{E}}}_{{kb}}}} \right]{\mathbf{\tilde {R}}}_{{xy}}^{T}(k,{{a}_{j}},{{b}_{m}}). \\ \end{gathered} $$
(A.24)
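Equation (A.23) is a Kalman-type correction with gain \({{{\mathbf{\tilde {R}}}}_{{xy}}}{\mathbf{\tilde {R}}}_{{yy}}^{{ - 1}}\). A minimal sketch of this step, under the assumption that the conditional moments are available as dense arrays (the function and argument names are illustrative):

```python
import numpy as np

def conditional_update(x_pred, Rxy, Ryy, y_hat, y_pred):
    """Conditional filtering estimate, cf. Eq. (A.23):
    x^ = x~ + R~_xy R~_yy^{-1} (y^ - y~)."""
    gain = Rxy @ np.linalg.inv(Ryy)          # Kalman-type gain
    return x_pred + gain @ (y_hat - y_pred)
```

In practice one would solve the linear system rather than invert `Ryy`, but the explicit inverse mirrors the written formula.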

Based on the optimization of the quantization procedure, we write [18]

$$\begin{gathered} {\mathbf{\hat {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) = {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right) \\ + \,\,{{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){{{\mathbf{Y}}}_{{{\text{opt}}}}}\left( {k,{\mathbf{d}}\left( k \right)} \right). \\ \end{gathered} $$
(A.25)

Substituting (A.25) into (A.23), we finally obtain

$$\begin{gathered} {\mathbf{\hat {x}}}(k,{{a}_{j}},{{b}_{m}}) = {\mathbf{\tilde {x}}}(k,{{a}_{j}},{{b}_{m}}) \\ + \,\,{\mathbf{K}}(k,{{a}_{j}},{{b}_{m}}){{{\mathbf{Y}}}_{{{\text{opt}}}}}\left( {k,{\mathbf{d}}\left( k \right)} \right), \\ \end{gathered} $$
(A.26)

where \({{{\mathbf{Y}}}_{{{\text{opt}}}}}\left( {k,{\mathbf{d}}\left( k \right)} \right) = {{\left[ {y_{{{\text{opt}}}}^{{(1)}},...,y_{{{\text{opt}}}}^{{(u)}},...,y_{{{\text{opt}}}}^{{({{n}_{y}})}}} \right]}^{T}}\), \(u = \overline {1,{{n}_{y}}} \), is the vector of optimal quantization levels. For optimal uniform quantization with step \({{{{\delta }}}_{{{\text{opt}}}}}\), the normalized optimal levels \(y_{{{\text{opt}}}}^{{{\text{(}}u{\text{)}}}}\) and quantization thresholds \({{\eta }}_{{{\text{opt}}}}^{{(u)}}\) are determined by the formulas [13]

$$y_{{{\text{opt}}}}^{{(u)}} = \left( {{{l}_{u}} - \frac{{{{L}_{u}} + 1}}{2}} \right){{\delta }}_{{{\text{opt}}}}^{{(u)}},\,\,\,\,{{\eta }}_{{{\text{opt}}}}^{{(u)}} = \left( {{{l}_{u}} - \frac{{{{L}_{u}}}}{2}} \right){{\delta }}_{{{\text{opt}}}}^{{(u)}}.$$
(A.27)

Values \({{\delta }}_{{{\text{opt}}}}^{{(u)}}\) are given in [13, Table 3.2], along with values \({{\varepsilon }}_{{{\text{opt}}}}^{{(u)}}\), the minimum mean-square quantization errors, which determine the normalized correlation matrix of quantization errors [17]

$${{{\mathbf{E}}}_{{kb}}} = \left[ {{{{{\varepsilon }}}_{{{\text{opt}}}}}{{{{\delta }}}_{{uw}}}} \right],\,\,\,\,u,w = \overline {1,{{n}_{y}}} .$$
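The level and threshold grid of (A.27) is straightforward to compute. A sketch for a single channel (the step `delta` would be taken from [13, Table 3.2]; the function name is illustrative):

```python
def uniform_quantizer(L, delta):
    """Normalized optimal levels and interior thresholds of a uniform
    L-level quantizer with step delta, cf. Eq. (A.27):
    y^(u) = (l - (L + 1)/2) delta, eta^(u) = (l - L/2) delta."""
    levels = [(l - (L + 1) / 2.0) * delta for l in range(1, L + 1)]
    thresholds = [(l - L / 2.0) * delta for l in range(1, L)]
    return levels, thresholds
```

For a two-level (one-bit) quantizer with unit step this gives levels at ±δ/2 and a single threshold at zero, matching (A.27).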

Let the digital signals be formed with constant quantization thresholds, and let the difference \({\mathbf{y}}\left( k \right) - {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\) be quantized at the inputs of the ADC quantizers, where

$$\begin{gathered} {\mathbf{\tilde {y}}}(k,{{a}_{j}},{{b}_{m}}) \\ = \sum\limits_i {\sum\limits_n {\frac{{{{\pi }}_{{ij}}^{a}{{\pi }}_{{nm}}^{b}{{{\hat {P}}}_{{in}}}(k - 1)}}{{{{{\tilde {P}}}_{{jm}}}(k)}}} } {{{\mathbf{\Phi }}}_{{yx}}}({{a}_{j}},{{b}_{m}}){\mathbf{\hat {x}}}(k - 1,{{a}_{i}},{{b}_{n}}), \\ \end{gathered} $$

is the optimal predicted value of vector \({\mathbf{y}}\left( k \right)\). Then, from these conditions, we can write

$$\begin{gathered} {{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){{{\mathbf{H}}}_{{{\text{opt}}}}}(k,{{l}_{u}} - 1) < {\mathbf{y}}\left( k \right) \\ - \,\,{\mathbf{\tilde {y}}}(k,{{a}_{j}},{{b}_{m}}) \leqslant {{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){{{\mathbf{H}}}_{{{\text{opt}}}}}(k,{{l}_{u}}), \\ \end{gathered} $$
(A.28)

where

$${{{\mathbf{H}}}_{{{\text{opt}}}}} = {{[{{\eta }}_{{{\text{opt}}}}^{{(1)}},...,{{\eta }}_{{{\text{opt}}}}^{{(u)}},...,{{\eta }}_{{{\text{opt}}}}^{{({{n}_{y}})}}]}^{T}},\,\,\,\,u = \overline {1,{{n}_{y}}} ,$$

is the vector of optimal quantization thresholds determined from (A.27).

Taking into account condition (3) \({\mathbf{y}}\left( k \right) = \int {{\mathbf{v}}\left( {{\tau }} \right)} d{{\tau }}\), \({{\tau }} \in \left[ {{{t}_{{k - 1}}},{{t}_{k}}} \right]\), expression (A.28) takes the form

$$\begin{gathered} {{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){{{\mathbf{H}}}_{{{\text{opt}}}}}(k,{{l}_{u}} - 1) \\ < \int\limits_{{{t}_{{k - 1}}}}^{{{t}_{k}}} {{\mathbf{v}}(\tau )d\tau - {\mathbf{\tilde {y}}}(k,{{a}_{j}},{{b}_{m}})} \leqslant {{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right){{{\mathbf{H}}}_{{{\text{opt}}}}}(k,{{l}_{u}}). \\ \end{gathered} $$

To derive conditional probability (14) of the one-step prediction of the observations from (A.13), we define the probability of the digital signal

$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) \\ = \int\limits_\Omega {f\left( {{\mathbf{y}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right)} d{\mathbf{y}}\left( k \right). \\ \end{gathered} $$
(A.29)

Considering expressions (A.13), (A.20), and (A.21), we conclude that the integrand in (A.29) is Gaussian:

$$\begin{gathered} f\left( {{\mathbf{y}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) \\ = \frac{1}{{\sqrt {{{{\left( {2\pi } \right)}}^{{{{n}_{y}}}}}\det {{{{\mathbf{\tilde {R}}}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} }} \\ \times \,\,\exp \left\{ { - \frac{1}{2}\left( {{\mathbf{y}}\left( k \right) - {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right. \\ \times \,\,\left. {{\mathbf{\tilde {R}}}_{{yy}}^{{ - 1}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\left( {{\mathbf{y}}\left( k \right) - {\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)} \right)} \right\}. \\ \end{gathered} $$
(A.30)

Taking this into account and substituting (A.30) into (A.29), we obtain

$$\begin{gathered} P\left( {{\mathbf{d}}\left( k \right)\left| {{{a}_{j}},{{b}_{m}},{\mathbf{D}}_{1}^{{k - 1}}} \right.} \right) \\ = \prod\limits_{u = 1}^{{{n}_{y}}} {\left\{ {F\left\{ {\frac{{{{\eta }}_{{{\text{opt}}}}^{{(u)}}({{l}_{u}}) - {{{\tilde {y}}}_{u}}(k,{{a}_{j}},{{b}_{m}})}}{{{{{{\tau }}}_{u}}(k)}}} \right\}} \right.} \\ \left. { - \,\,F\left\{ {\frac{{{{\eta }}_{{{\text{opt}}}}^{{(u)}}({{l}_{u}} - 1) - {{{\tilde {y}}}_{u}}(k,{{a}_{j}},{{b}_{m}})}}{{{{{{\tau }}}_{u}}(k)}}} \right\}} \right\}, \\ \end{gathered} $$

where \({{\tilde {y}}_{u}}(k,{{a}_{j}},{{b}_{m}})\) is the \(u\)th element of vector \({\mathbf{\tilde {y}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\); \({{{{\tau }}}_{u}}\) is the diagonal element of lower triangular matrix \({{{\mathbf{T}}}_{{yy}}}\left( {k,{{a}_{j}},{{b}_{m}}} \right)\); and \(F\left\{ \cdot \right\}\) is the probability integral (the Gaussian cumulative distribution function).
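The final product of probability-integral differences can be evaluated channel by channel. A sketch with \(F\left\{ \cdot \right\}\) taken as the standard normal CDF; the per-channel threshold lists passed in are assumed already scaled by \({{{\mathbf{T}}}_{{yy}}}\) (argument names are illustrative):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, the probability integral F{.}."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def digital_signal_likelihood(eta_hi, eta_lo, y_pred, tau):
    """Product over channels of CDF differences (final display above):
    each factor is F((eta_hi - y~)/tau) - F((eta_lo - y~)/tau)."""
    p = 1.0
    for hi, lo, y, t in zip(eta_hi, eta_lo, y_pred, tau):
        p *= norm_cdf((hi - y) / t) - norm_cdf((lo - y) / t)
    return p
```

When the observed cell covers essentially the whole real line, the factor tends to 1; for a cell symmetric about the prediction, it equals \(2F(\eta /\tau ) - 1\).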

Cite this article

Detkov, A.N. Optimal Evaluation of Discrete Continuous Markov Processes from Observed Digital Signals. J. Commun. Technol. Electron. 66, 914–925 (2021). https://doi.org/10.1134/S1064226921080027