Abstract
This paper focuses on optimal minimum mean square error (MMSE) estimation of a nonlinear function of state (NFS) in linear Gaussian continuous-time stochastic systems. The NFS is a multivariate function of the state variables that carries information about a target system useful for control. The proposed optimal estimation algorithm consists of two stages: the optimal Kalman estimate of the state vector, computed at the first stage, is nonlinearly transformed at the second stage according to the NFS and the MMSE criterion. Challenging theoretical aspects of the analytic calculation of the optimal MMSE estimate are resolved using multivariate Gaussian integrals for special NFSs such as the Euclidean norm, the maximum, and the absolute value. Polynomial functions are studied in detail; in this case the polynomial MMSE estimator has a simple closed form and is easy to implement in practice. We derive effective matrix formulas for the true mean square error of the optimal and suboptimal quadratic estimators. The results are demonstrated on theoretical and practical examples with different types of NFS, and a comparative analysis of the optimal and suboptimal nonlinear estimators confirms their effectiveness.
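As a concrete illustration of the second-stage transformation for one of the special NFSs mentioned above, consider the absolute-value function z = |x| with a scalar Gaussian posterior x | y ~ N(μ, σ²); the MMSE estimate is then the mean of a folded normal distribution. The sketch below is our illustration (the function name `folded_mean` and the standalone setting are not from the paper):

```python
import math

def folded_mean(mu: float, sigma: float) -> float:
    """MMSE estimate of z = |x| when x | y ~ N(mu, sigma^2).

    E|x| = sigma*sqrt(2/pi)*exp(-mu^2/(2 sigma^2)) + mu*erf(mu/(sigma*sqrt(2)))
    """
    return (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2.0 * sigma**2))
            + mu * math.erf(mu / (sigma * math.sqrt(2.0))))

# For mu = 0 the estimate reduces to sigma * sqrt(2/pi) ~ 0.7979 * sigma.
print(folded_mean(0.0, 1.0))
```

Note that the estimate approaches |μ| as σ → 0, as the second-stage transformation must reduce to plugging the Kalman estimate into the NFS when the filtering error vanishes.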
REFERENCES
N. Davari and A. Gholami, “An asynchronous adaptive direct Kalman filter algorithm to improve underwater navigation system performance,” IEEE Sens. J. 17, 1061–1068 (2017).
T. Rajaram, J. M. Reddy, and Y. Xu, “Kalman filter based detection and mitigation of subsynchronous resonance with SSSC,” IEEE Trans. Power Syst. 32, 1400–1409 (2017).
X. Deng and Z. Zhang, “Automatic multihorizons recognition for seismic data based on Kalman filter tracker,” IEEE Geosci. Remote Sensing Lett. 14, 319–323 (2017).
M. S. Grewal, A. P. Andrews, and C. G. Bartone, Global Navigation Satellite Systems, Inertial Navigation, and Integration (Wiley, NJ, 2013).
D. Simon, Optimal State Estimation (Wiley, NJ, 2006).
Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation (Wiley, New York, 2001).
V. I. Arnold, Mathematical Methods of Classical Mechanics (Springer, New York, 1989).
T. T. Cai and M. G. Low, “Optimal adaptive estimation of a quadratic functional,” Ann. Stat. 34, 2298–2325 (2006).
J. Robins, L. Li, E. Tchetgen, and A. Vaart, “Higher order influence functions and minimax estimation of nonlinear functionals,” Prob. Stat. 2, 335–421 (2008).
J. Jiao, K. Venkat, Y. Han, and T. Weissman, “Minimax estimation of functionals of discrete distributions,” IEEE Trans. Inform. Theory 61, 2835–2885 (2015).
J. Jiao, K. Venkat, Y. Han, and T. Weissman, “Maximum likelihood estimation of functionals of discrete distributions,” IEEE Trans. Inform. Theory 63, 6774–6798 (2017).
Y. Amemiya and W. A. Fuller, “Estimation for the nonlinear functional relationship,” Ann. Stat. 16, 147–160 (1988).
D. L. Donoho and M. Nussbaum, “Minimax quadratic estimation of a quadratic functional,” J. Complexity 6, 290–323 (1990).
D. S. Grebenkov, “Optimal and suboptimal quadratic forms for noncentered Gaussian processes,” Phys. Rev. E 88, 032140 (2013).
B. Laurent and P. Massart, “Adaptive estimation of a quadratic functional by model selection,” Ann. Stat. 28, 1302–1338 (2000).
I. G. Vladimirov and I. R. Petersen, “Directly coupled observers for quantum harmonic oscillators with discounted mean square cost functionals and penalized back-action,” in Proceedings of the IEEE Conference on Norbert Wiener in the 21st Century, Melbourne, Australia, 2016, pp. 78–83.
K. Sricharan, R. Raich, and A. O. Hero, “Estimation of nonlinear functionals of densities with confidence,” IEEE Trans. Inform. Theory 58, 4135–4159 (2012).
A. Wisler, V. Berisha, A. Spanias, and A. O. Hero, “Direct estimation of density functionals using a polynomial basis,” IEEE Trans. Signal Process. 66, 558–588 (2018).
M. Taniguchi, “On estimation of parameters of Gaussian stationary processes,” J. Appl. Prob. 16, 575–591 (1979).
C. Zhao-Guo and E. J. Hannan, “The distribution of periodogram ordinates,” J. Time Ser. Anal. 1, 73–82 (1980).
D. Janas and R. Sachs, “Consistency for non-linear functions of the periodogram of tapered data,” J. Time Ser. Anal. 16, 585–606 (1995).
G. Fay, E. Moulines, and P. Soulier, “Nonlinear functionals of the periodogram,” J. Time Ser. Anal. 23, 523–553 (2002).
C. Noviello, G. Fornaro, P. Braca, and M. Martorella, “Fast and accurate ISAR focusing based on a Doppler parameter estimation algorithm,” IEEE Geosci. Remote Sens. Lett. 14, 349–353 (2017).
Y. Wu and P. Yang, “Minimax rates of entropy estimation on large alphabets via best polynomial approximation,” IEEE Trans. Inform. Theory 62, 3702–3720 (2016).
Y. Wu and P. Yang, “Optimal entropy estimation on large alphabets via best polynomial approximation,” in Proceedings of the IEEE International Symposium on Information Theory, Hong Kong, 2015, pp. 824–828.
S. O. Haykin, Adaptive Filtering (Prentice Hall, NJ, 2013).
T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing (Prentice Hall, NJ, 2000).
A. Coluccia, “On the expected value and higher-order moments of the Euclidean norm for elliptical normal variates,” IEEE Commun. Lett. 17, 2364–2367 (2013).
S. Nadarajah and S. Kotz, “Exact distribution of the max/min of two Gaussian random variables,” IEEE Trans. Very Large Scale Integr. Syst. 16, 210–212 (2008).
V. S. Pugachev and I. N. Sinitsyn, Stochastic Differential Systems. Analysis and Filtering, 2nd ed. (Nauka, Moscow, 1990) [in Russian].
V. S. Pugachev, “Assessment of the state and parameters of continuous nonlinear systems,” Avtom. Telemekh., No. 6, 63–79 (1979).
E. A. Rudenko, “Optimal structure of continuous nonlinear reduced-order Pugachev filter,” J. Comput. Syst. Sci. Int. 52, 866 (2013).
S. J. Julier and J. K. Uhlmann, “Unscented filtering and nonlinear estimation,” Proc. IEEE 92, 401–422 (2004).
K. Ito and K. Xiong, “Gaussian filters for nonlinear filtering problems,” IEEE Trans. Autom. Control 45, 910–927 (2000).
A. Doucet, N. D. Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice (Springer, London, 2001).
E. S. Armstrong and J. S. Tripp, “An application of multivariable design techniques to the control of the national transonic facility,” NASA Technical Paper, No. 1887 (NASA, Washington, DC, 1981), pp. 1–36.
R. Kan, “From moments of sum to moments of product,” J. Multivar. Anal. 99, 542–554 (2008).
B. Holmquist, “Expectations of products of quadratic forms in normal variables,” Stoch. Anal. Appl. 14, 149–164 (1996).
This work was supported by the Incheon National University Research Grant in 2015–2016.
APPENDIX
Proof of Theorem 1. The derivation of the polynomial estimators (3.2) is based on Lemma 1 below.
Lemma 1. Let \(x \in {{\mathbb{R}}^{n}}\) be a Gaussian random vector, \(x \sim \mathbb{N}(\mu ,S)\), and let \(A,B \in {{\mathbb{R}}^{{n \times n}}}\) be arbitrary matrices. Then it holds that
The derivation of the formulas (A.1) is based on their scalar versions given in [37, 38] and on standard transformations of random vectors.
This completes the proof of Lemma 1.
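The statement (A.1) itself is not reproduced in this excerpt; a standard identity of the kind Lemma 1 covers is \({\mathbf{E}}(x^{\text{T}}Ax) = \operatorname{tr}(AS) + \mu^{\text{T}}A\mu\), with an analogous fourth-order formula for \({\mathbf{E}}(x^{\text{T}}Ax \cdot x^{\text{T}}Bx)\). A quick Monte Carlo check of the second-order identity (a NumPy sketch of ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
mu = np.array([1.0, -0.5, 2.0])
L = rng.standard_normal((n, n))
S = L @ L.T + n * np.eye(n)          # a valid (positive-definite) covariance
A = rng.standard_normal((n, n))      # arbitrary, not necessarily symmetric

# Closed form: E[x^T A x] = tr(A S) + mu^T A mu  for x ~ N(mu, S)
closed = np.trace(A @ S) + mu @ A @ mu

# Monte Carlo estimate of the same expectation
x = rng.multivariate_normal(mu, S, size=200_000)
mc = np.mean(np.einsum('bi,ij,bj->b', x, A, x))

print(closed, mc)   # the two values agree to Monte Carlo accuracy
```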
Next, replacing the unconditional expectation and covariance in (A.1) with their conditional versions, e.g., \(\mu \to {\mathbf{E}}(x\,{\text{|}}\,y_{0}^{t}) = \hat {x}\) and \(S \to {\text{cov}}(x,x\,{\text{|}}\,y_{0}^{t}) = P\), we obtain (3.2).
This completes the proof of Theorem 1.
Proof of Theorem 2. The derivation of the MSEs is based on Lemma 2 below.
Lemma 2. Let \(X \in {{\mathbb{R}}^{{3n}}}\) be a composite multivariate Gaussian vector, \({{X}^{{\text{T}}}} = [{{U}^{{\text{T}}}}\;{{V}^{{\text{T}}}}\;{{W}^{{\text{T}}}}]\):
Then the third- and fourth-order vector moments of the composite random vector are given by
The derivation of the vector formulas (A.3) is based on their scalar versions and on standard matrix manipulations,
where \({{\mu }_{h}} = {\mathbf{E}}({{x}_{h}})\), \({{S}_{{pq}}} = {\text{cov(}}{{x}_{p}},{{x}_{q}}{\text{)}}\).
This completes the proof of Lemma 2.
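The scalar versions referred to above are the classical Gaussian moment factorizations (Isserlis/Wick formulas): for a zero-mean Gaussian vector, \({\mathbf{E}}({{x}_{i}}{{x}_{j}}{{x}_{k}}{{x}_{l}}) = {{S}_{{ij}}}{{S}_{{kl}}} + {{S}_{{ik}}}{{S}_{{jl}}} + {{S}_{{il}}}{{S}_{{jk}}}\). A quick numerical check of this fourth-order rule (our NumPy illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
L = rng.standard_normal((n, n))
S = L @ L.T + n * np.eye(n)                  # covariance of a zero-mean Gaussian
x = rng.multivariate_normal(np.zeros(n), S, size=500_000)

# Fourth moment E[x_0^2 x_1^2] versus the Wick pairing formula
i, j, k, l = 0, 0, 1, 1
mc = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])
wick = S[i, j] * S[k, l] + S[i, k] * S[j, l] + S[i, l] * S[j, k]
print(mc, wick)   # agree to Monte Carlo accuracy
```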
Further, we derive formula (3.8). Using (3.5) and (3.6), the estimation error can be written as
Next, using the unbiasedness and orthogonality properties of the Kalman estimate, \({\mathbf{E}}\left( e \right) = {\mathbf{E}}(e{{\hat {x}}^{{\text{T}}}}) = 0,\) we obtain
Using Lemma 2, we can calculate the higher-order moments in (A.6). We have
where
Substituting (A.7) into (A.6) and simplifying, we obtain the optimal MSE (3.8).
In the case of the suboptimal estimate \(\tilde {z},\) the derivation of the MSE (3.9) is similar.
This completes the proof of Theorem 2.
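For the quadratic NFS \(z = {{x}^{{\text{T}}}}Ax\), the optimal MSE reduces to the conditional variance of a Gaussian quadratic form, \(2\operatorname{tr}({{({{A}_{s}}P)}^{2}}) + 4{{\hat {x}}^{{\text{T}}}}{{A}_{s}}P{{A}_{s}}\hat {x}\) with \({{A}_{s}} = (A + {{A}^{{\text{T}}}}){\text{/}}2\), a standard identity consistent with the structure of the proof above (the exact matrix form of (3.8) is not reproduced in this excerpt). A Monte Carlo check of this identity, with illustrative values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
xhat = np.array([0.5, -1.0, 2.0])            # filtered estimate (illustrative)
L = rng.standard_normal((n, n))
P = L @ L.T + np.eye(n)                      # filtering error covariance (illustrative)
A = rng.standard_normal((n, n))
As = 0.5 * (A + A.T)                         # symmetrized form

# Closed-form conditional variance of z = x^T A x given x ~ N(xhat, P)
mse = 2.0 * np.trace(As @ P @ As @ P) + 4.0 * xhat @ As @ P @ As @ xhat

# Monte Carlo: sample variance of the quadratic form
x = rng.multivariate_normal(xhat, P, size=300_000)
z = np.einsum('bi,ij,bj->b', x, A, x)
print(mse, z.var())   # agree to Monte Carlo accuracy
```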
Choi, W., Song, I.Y. & Shin, V. Two-Stage Algorithm for Estimation of Nonlinear Functions of State Vector in Linear Gaussian Continuous Dynamical Systems. J. Comput. Syst. Sci. Int. 58, 869–882 (2019). https://doi.org/10.1134/S1064230719060169