
Component and the Least Square Estimation of Mean and Covariance Functions of Biperiodically Correlated Random Signals

  • Conference paper
Nonstationary Systems: Theory and Applications (WNSTA 2021)

Abstract

The component and the least square (LS) estimators of the mean and covariance functions of biperiodically correlated random processes (BPCRPs), taken as a model of signals with binary stochastic recurrence, are analyzed. Formulae for the biases and variances of these estimators are obtained, and sufficient conditions for the mean square consistency of the mean function estimator and of the covariance function estimator of a Gaussian BPCRP are given. It is shown that leakage errors, present for the component estimators, are absent for the LS estimators. The biases and variances of the component and LS estimators are compared for a particular case of a BPCRP.






Appendices

Appendix A

Proof of Proposition 2.1

For the mathematical expectation of (4), we have

$$ E\hat{m}\left( t \right) = \sum\limits_{{l,n = - N_{1} }}^{{N_{1} }} {e^{{i\lambda _{{ln}} t}} \left[ {\frac{1}{T}\int\limits_{0}^{T} {m\left( s \right)e^{{ - i\lambda _{{ln}} s}} ds} } \right]} . $$

Taking into account representation (8) and integrating, we obtain

$$ \frac{1}{T}\int\limits_{0}^{T} {m\left( s \right)e^{{ - i\lambda _{{ln}} s}} ds} = m_{{ln}} + \sum\limits_{\begin{subarray}{l} k,r = - N_{1} \\ k \ne l,r \ne n \end{subarray} }^{{N_{1} }} {m_{{kr}} \varphi \left( {\lambda _{{k - l,r - n}} T} \right)} . $$

Hence, we have (9) and (10).
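To make the leakage mechanism concrete, here is a small numerical sketch (a hypothetical example, not taken from the paper): the component estimator of one Fourier coefficient of a biperiodic mean function is computed by finite-time averaging, and the leakage contribution of the other coefficients, weighted by \(\varphi\), decays as the averaging interval \(T\) grows.

```python
import numpy as np

# Hypothetical illustration of component estimation of a Fourier coefficient
# m_{ln} of a biperiodic mean m(t) with incommensurate frequencies w1, w2.
# On a finite record [0, T] the time average picks up leakage from the other
# coefficients, weighted by phi(x) = (exp(ix) - 1)/(ix), which decays as 1/T.

w1, w2 = 2 * np.pi * 1.0, 2 * np.pi * np.sqrt(2.0)  # incommensurate frequencies

def m(t):
    # true mean with m_00 = 1, m_{1,0} = m_{-1,0} = 0.5, m_{0,1} = m_{0,-1} = 0.25
    return 1.0 + np.cos(w1 * t) + 0.5 * np.cos(w2 * t)

def m_hat(l, n, T, npts=200_000):
    # (1/T) * integral_0^T m(s) * exp(-i*(l*w1 + n*w2)*s) ds, midpoint rule
    s = (np.arange(npts) + 0.5) * (T / npts)
    lam = l * w1 + n * w2
    return np.mean(m(s) * np.exp(-1j * lam * s))

for T in (10.0, 100.0, 1000.0):
    print(f"T={T:7.1f}  leakage error |m_hat_10 - 0.5| = {abs(m_hat(1, 0, T) - 0.5):.2e}")
```

For incommensurate frequencies the off-frequency terms never vanish exactly on a finite record; they only decay, roughly as \(O(1/T)\).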

The variance of estimator (4) is equal to

$$ Var\left[ {\hat{m}\left( t \right)} \right] = \sum\limits_{{l,n = - N_{1} }}^{{N_{1} }} {\sum\limits_{{k,r = - N_{1} }}^{{N_{1} }} {e^{{i\lambda _{{l - k,n - r}} t}} } } \left[ {\frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)e^{{i\left( {\lambda _{{ln}} s_{1} - \lambda _{{kr}} s_{2} } \right)}} ds_{1} ds_{2} } } } \right]. $$

Introduce a new variable \(u = s_{2} - s_{1}\), change the order of integration, and take into consideration the equality \(b\left( {s, - u} \right) = b\left( {s - u,u} \right)\). Then,

$$ \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)e^{{i\left( {\lambda _{{ln}} s_{1} - \lambda _{{kr}} s_{2} } \right)}} ds_{1} ds_{2} } = \frac{1}{{T^{2} }}} \int\limits_{0}^{T} {\int\limits_{{ - s}}^{{T - s}} {b\left( {s,u} \right)e^{{i\lambda _{{l - k,n - r}} s}} e^{{ - i\lambda _{{kr}} u}} duds} } $$
$$ = \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{{T - u}} {b\left( {s,u} \right)e^{{i\lambda _{{l - k,n - r}} s}} \left( {e^{{i\lambda _{{ln}} u}} + e^{{ - i\lambda _{{kr}} u}} } \right)dsdu} } . $$

After substituting into the last expression the representation

$$ b\left( {s,u} \right) = \sum\limits_{{p,q = - N_{2} }}^{{N_{2} }} {B_{{pq}} \left( u \right)e^{{i\lambda _{{pq}} s}} } $$

and integrating over time in the first approximation, we arrive at (11). It follows from (11) that \(Var\left[ {\hat{m}\left( t \right)} \right] \to 0\) as \(T \to \infty\), i.e., estimator (4) is mean square consistent.
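The consistency claim can also be checked empirically. The sketch below uses an assumed discretized model (a biperiodic mean plus unit-variance white noise on a sampling grid, not the paper's exact setting) and shows the variance of the time-average estimate shrinking with record length:

```python
import numpy as np

# Illustrative Monte Carlo check (assumed discretized model) that the
# variance of the time-average estimator of the zeroth mean coefficient
# decreases as the record length T grows, i.e. mean square consistency.

rng = np.random.default_rng(1)
w1, w2 = 2 * np.pi, 2 * np.pi * np.sqrt(2.0)

def var_of_time_average(T, dt=0.01, trials=300):
    n = int(T / dt)
    s = np.arange(n) * dt
    mean_s = 1.0 + np.cos(w1 * s) + 0.5 * np.cos(w2 * s)  # deterministic part
    estimates = [(mean_s + rng.standard_normal(n)).mean() for _ in range(trials)]
    return np.var(estimates)

print(var_of_time_average(10.0), var_of_time_average(100.0))
```

The deterministic mean is identical across realizations, so only the noise contributes to the variance, which falls roughly as \(1/T\) here.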

Appendix B

Proof of Proposition 2.2

Rewrite statistic (7) in the form

$$ \hat{B}_{{ln}} \left( u \right) = \frac{1}{T}\int\limits_{0}^{T} {\left[ {\mathop \xi \limits^{ \circ } \left( s \right)\mathop \xi \limits^{ \circ } \left( {s + u} \right) - \mathop {\hat{m}}\limits^{ \circ } \left( s \right)\mathop \xi \limits^{ \circ } \left( {s + u} \right) - \mathop \xi \limits^{ \circ } \left( s \right)\mathop {\hat{m}}\limits^{ \circ } \left( {s + u} \right) + \mathop {\hat{m}}\limits^{ \circ } \left( s \right)\mathop {\hat{m}}\limits^{ \circ } \left( {s + u} \right)} \right]} e^{{ - i\lambda _{{ln}} s}} ds, $$

where \(\mathop {\hat{m}}\limits^{ \circ } \left( s \right) = \hat{m}\left( s \right) - m\left( s \right)\). The mathematical expectation of every component is equal to

$$ E\left\{ {\frac{1}{T}\int\limits_{0}^{T} {\mathop \xi \limits^{ \circ } \left( s \right)\mathop \xi \limits^{ \circ } \left( {s + u} \right)e^{{ - i\lambda _{{ln}} s}} ds} } \right\} = B_{{ln}} \left( u \right) + \varepsilon \left[ {B_{{ln}} \left( u \right)} \right], $$
$$ E\left\{ {\frac{1}{T}\int\limits_{0}^{T} {\mathop {\hat{m}}\limits^{ \circ } \left( s \right)\mathop \xi \limits^{ \circ } \left( {s + u} \right)e^{{ - i\lambda _{{ln}} s}} ds} } \right\} = \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} + u} \right)q_{{ln}} \left( {N_{1} ,s_{1} ,s_{2} } \right)ds_{1} ds_{2} } } , $$
(B.1)
$$ E\left\{ {\frac{1}{T}\int\limits_{0}^{T} {\mathop {\hat{m}}\limits^{ \circ } } \left( {s + u} \right)\mathop \xi \limits^{ \circ } \left( s \right)e^{{ - i\lambda _{{ln}} s}} ds} \right\} = \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)h_{{ln}} \left( {N_{1} ,s_{1} ,s_{2} ,u} \right)ds_{1} ds_{2} } } , $$
(B.2)
$$ E\left\{ {\frac{1}{T}\int\limits_{0}^{T} {\mathop {\hat{m}}\limits^{ \circ } \left( s \right)\mathop {\hat{m}}\limits^{ \circ } \left( {s + u} \right)e^{{ - i\lambda _{{ln}} s}} ds} } \right\} $$
$$ = \frac{1}{T}\sum\limits_{{m,k = - N_{1} }}^{{N_{1} }} {\sum\limits_{{r,s = - N_{1} }}^{{N_{1} }} {\varphi \left( {\lambda _{{m + r - l,k + s - n}} T} \right)} } \int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)e^{{ - i\left( {\lambda _{{mk}} s_{1} + \lambda _{{rs}} \left( {s_{2} + u} \right)} \right)}} ds_{1} ds_{2} } } . $$
(B.3)

Taking representation (12) into consideration, after transformation we obtain formulas (13) and (14) in the first approximation. Since the functions \(h\left( {N_{1} ,u_{2} } \right)\) and \(\tilde{h}\left( {N_{1} ,u,u_{1} } \right)\) are bounded, \(\varepsilon \left[ {\hat{b}\left( {t,u} \right)} \right] \to 0\) as \(T \to \infty\) when conditions (8) are satisfied.

Appendix C

Proof of Proposition 2.3

For a Gaussian BPCRP,

$$ G\left( {s_{1} ,s_{1} + u,s_{2} ,s_{2} + u} \right) = b\left( {s_{1} ,s_{2} - s_{1} } \right)b\left( {s_{1} + u,s_{2} - s_{1} } \right) $$
$$ + b\left( {s_{1} ,s_{2} - s_{1} + u} \right)b\left( {s_{1} + u,s_{2} - s_{1} - u} \right). $$
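This factorization is Isserlis' theorem for zero-mean jointly Gaussian variables: \(E\left\{ x_{1} x_{2} x_{3} x_{4} \right\} = E\left\{ x_{1} x_{2} \right\}E\left\{ x_{3} x_{4} \right\} + E\left\{ x_{1} x_{3} \right\}E\left\{ x_{2} x_{4} \right\} + E\left\{ x_{1} x_{4} \right\}E\left\{ x_{2} x_{3} \right\}\). A quick Monte Carlo sanity check on a generic 4-dimensional Gaussian vector (illustration only, not the paper's process):

```python
import numpy as np

# Monte Carlo check of the Gaussian fourth-moment factorization used above:
# for zero-mean jointly Gaussian x1..x4,
#   E[x1 x2 x3 x4] - E[x1 x2] E[x3 x4] = E[x1 x3] E[x2 x4] + E[x1 x4] E[x2 x3],
# which is the two-term structure of G(s1, s1 + u, s2, s2 + u).

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
C = A @ A.T                               # a valid covariance matrix
d = np.sqrt(np.diag(C))
C = C / np.outer(d, d)                    # normalize to a correlation matrix
x = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

lhs = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]).mean() - C[0, 1] * C[2, 3]
rhs = C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(lhs, rhs)                           # agree up to Monte Carlo error
```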

Introduce a new variable \(u_{1} = s_{2} - s_{1}\). The function \(G\left( {s,s + u,s + u_{1} ,s + u + u_{1} } \right)\) is biperiodic in \(s\) and can be represented by the Fourier series

$$ G\left( {s,s + u,s + u_{1} ,s + u_{1} + u} \right) = \sum\limits_{{k,r = - 2N_{2} }}^{{2N_{2} }} {\tilde{B}_{{kr}} \left( {u_{1} ,u} \right)e^{{i\lambda _{{kr}} s}} } . $$
(C.1)

Proceeding from (12), we get

$$ \tilde{B}_{{kr}} \left( {u_{1} ,u} \right) = \left\{ {\begin{array}{*{20}l} {\sum\limits_{{p_{1} = k - N_{2} }}^{{N_{2} }} {\sum\limits_{{q_{1} = r - N_{2} }}^{{N_{2} }} {B_{{p_{1} q_{1} }}^{{\left( {k,r} \right)}} \left( {u_{1} ,u} \right)} } ,} \hfill & {k \le 0,\;r \le 0,} \hfill \\ {\sum\limits_{{p_{1} = k - N_{2} }}^{{N_{2} }} {\sum\limits_{{q_{1} = - N_{2} }}^{{N_{2} - r}} {B_{{p_{1} q_{1} }}^{{\left( {k,r} \right)}} \left( {u_{1} ,u} \right)} } ,} \hfill & {k \le 0,\;r > 0,} \hfill \\ {\sum\limits_{{p_{1} = - N_{2} }}^{{N_{2} - k}} {\sum\limits_{{q_{1} = r - N_{2} }}^{{N_{2} }} {B_{{p_{1} q_{1} }}^{{\left( {k,r} \right)}} \left( {u_{1} ,u} \right)} } ,} \hfill & {k > 0,\;r \le 0,} \hfill \\ {\sum\limits_{{p_{1} = - N_{2} }}^{{N_{2} - k}} {\sum\limits_{{q_{1} = - N_{2} }}^{{N_{2} - r}} {B_{{p_{1} q_{1} }}^{{\left( {k,r} \right)}} \left( {u_{1} ,u} \right)} ,} \hfill & {k > 0,\;r > 0,} \hfill \\ \end{array} } \right. $$
(C.2)

where

$$ B_{{pq}}^{{\left( {k,r} \right)}} \left( {u_{1} ,u} \right) = \left[ {B_{{k + p,r + q}} \left( {u_{1} } \right)\overline{B} _{{pq}} \left( {u_{1} } \right) + B_{{k + p,r + q}} \left( {u_{1} + u} \right)\overline{B} _{{pq}} \left( {u_{1} - u} \right)} \right]e^{{ - i\lambda _{{pq}} u}} . $$
(C.3)

After reducing the double integral

$$ \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {G\left( {s_{1} ,s_{1} + u,s_{2} ,s_{2} + u} \right)} e^{{i\left( {\lambda _{{pq}} s_{2} - \lambda _{{ln}} s_{1} } \right)}} ds_{1} ds_{2} } $$

to an iterated integral and integrating with respect to \(s\) in the first approximation, we obtain formula (15). If conditions (3) are satisfied, then \(\tilde{B}_{{pq}} \left( {u_{1} ,u} \right) \to 0\) as \(\left| {u_{1} } \right| \to \infty\). Thus, \(Var\left[ {\hat{b}\left( {t,u} \right)} \right] \to 0\) as \(T \to \infty\), i.e., estimator (5) is mean square consistent.

Appendix D

Proof of Proposition 3.1

Taking into account the property

$$ \sum\limits_{{j = 1}}^{{2L_{1} + 1}} {m_{{jr}} M_{{jk}} } = \left\{ {\begin{array}{*{20}l} {\left| {\mathbf{M}} \right|,} \hfill & {r = k,} \hfill \\ {0,} \hfill & {r \ne k,} \hfill \\ \end{array} } \right. $$

we conclude that the mathematical expectations of the elements of the matrix \({\hat{\mathbf{m}}}\) are \(E\hat{m}_{j} = m_{j}\), \(j \in \left[ {0,2L_{1} } \right]\), i.e., the estimators of the Fourier coefficients of the mean function are unbiased. Hence, estimator (19) is also unbiased.
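The cofactor property above is the column expansion of the determinant, with the "alien cofactor" sums vanishing for \(r \ne k\). It can be checked numerically on a generic square matrix, recovering the cofactors from the inverse via \(\operatorname{adj}\left( \mathbf{M} \right) = \left| \mathbf{M} \right|\mathbf{M}^{-1}\) (an illustration, not tied to the paper's specific matrix \(\mathbf{M}\)):

```python
import numpy as np

# Numerical check of the cofactor identity: for a square matrix M with
# elements m_{jr} and cofactors M_{jk}, sum_j m_{jr} M_{jk} equals det(M)
# for r = k and 0 otherwise. Cofactors come from adj(M) = det(M) * inv(M),
# where the cofactor matrix is the transpose of the adjugate.

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
det = np.linalg.det(M)
cof = det * np.linalg.inv(M).T            # cof[j, k] = cofactor M_{jk}

S = M.T @ cof                             # S[r, k] = sum_j m_{jr} * M_{jk}
print(np.allclose(S, det * np.eye(5)))    # True
```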

Proceeding from (19), for the variance of the estimator we obtain

$$Var\left[ {\hat{m}\left( t \right)} \right] = \frac{1}{{\left| {\mathbf{M}} \right|^{2} }}\sum\limits_{{l,n = 0}}^{{2L_{1} }} {R_{{\tilde{m}_{l} \tilde{m}_{n} }} f_{l} \left( t \right)f_{n} \left( t \right)} ,$$

where

$$ R_{{\tilde{m}_{l} \tilde{m}_{n} }} = \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)} } \left\{ {\begin{array}{*{20}l} {\cos \omega _{l} s_{1} } \hfill & {\cos \omega _{n} s_{2} } \hfill \\ {\sin \omega _{l} s_{1} } \hfill & {\sin \omega _{n} s_{2} } \hfill \\ \end{array} } \right\}ds_{1} ds_{2} . $$
(D.1)

After transformation of the double integral (D.1), we obtain expression (20). Since the inner integral is bounded, \(R_{{\tilde{m}_{l} \tilde{m}_{n} }} \to 0\) as \(T \to \infty\), i.e., the least square estimator of the BPCRP mean function is mean square consistent.
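The leakage-free character of the LS approach is easy to see numerically. In the sketch below (a hypothetical setup, not from the paper), a noiseless biperiodic mean is fitted by least squares on the trigonometric basis \(\left\{ 1,\cos \omega _{k} t,\sin \omega _{k} t \right\}\); the Fourier coefficients are recovered exactly even on a short record, where plain harmonic averaging would still leak:

```python
import numpy as np

# Hypothetical illustration: least squares fit of a biperiodic mean on the
# basis {1, cos w1 t, sin w1 t, cos w2 t, sin w2 t}. Because the true mean
# lies in the span of the basis, the LS solution is exact on any finite
# record, so there is no leakage error.

w1, w2 = 2 * np.pi * 1.0, 2 * np.pi * np.sqrt(2.0)
T, npts = 5.0, 4000                       # deliberately short record
t = np.linspace(0.0, T, npts)
m_true = 1.0 + 0.8 * np.cos(w1 * t) + 0.3 * np.sin(w2 * t)

# design matrix with columns f_l(t)
F = np.column_stack([np.ones_like(t),
                     np.cos(w1 * t), np.sin(w1 * t),
                     np.cos(w2 * t), np.sin(w2 * t)])

coef, *_ = np.linalg.lstsq(F, m_true, rcond=None)
print(coef)   # recovers [1.0, 0.8, 0.0, 0.0, 0.3] up to rounding
```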

Appendix E

Proof of Proposition 3.2

Taking into account the series

$$ b\left( {t,u} \right) = B_{0} \left( u \right) + \sum\limits_{{k = 1}}^{{L_{c} }} {\left[ {B_{k}^{c} \left( u \right)\cos \omega _{k} t + B_{k}^{s} \left( u \right)\sin \omega _{k} t} \right]} $$

and the property

$$ \sum\limits_{{r = 1}}^{{2L_{2} + 1}} {d_{{rk}} D_{{rj}} } = \left\{ {\begin{array}{*{20}l} {\left| {\mathbf{D}} \right|,} \hfill & {k = j,} \hfill \\ {0,} \hfill & {k \ne j,} \hfill \\ \end{array} } \right. $$

we have

$$ \frac{{\left[ {D_{{1k}} } \right]^{T} }}{{\left| {\mathbf{D}} \right|}}\left[ {\frac{1}{\theta }\int\limits_{0}^{\theta } {b\left( {t,u} \right)dt} } \right] = B_{0} \left( u \right), $$
$$ \frac{{\left[ {D_{{p + 1,k}} } \right]^{T} }}{{\left| {\mathbf{D}} \right|}}\left[ {\frac{1}{\theta }\int\limits_{0}^{\theta } {b\left( {t,u} \right)\cos \omega _{r} tdt} } \right] = B_{p}^{c} \left( u \right), $$
$$ \frac{{\left[ {D_{{p + L_{2} + 1,k}} } \right]^{T} }}{{\left| {\mathbf{D}} \right|}}\left[ {\frac{1}{\theta }\int\limits_{0}^{\theta } {b\left( {t,u} \right)\sin \omega _{r} tdt} } \right] = B_{p}^{s} \left( u \right),\;p \in \left[ {1,L_{2} } \right]. $$

Thus, the leakage error is absent.

It follows from (B.1) and (B.2) that the remaining components of the biases (24) and (25) of the covariance component estimators are defined by integrals of the form

$$ I\left( T \right) = \frac{1}{{T^{2} }}\int\limits_{0}^{T} {\int\limits_{0}^{T} {b\left( {s_{1} ,s_{2} - s_{1} } \right)} } \left\{ {\begin{array}{*{20}l} {\cos \omega _{p} s_{1} } \hfill & {\cos \omega _{q} s_{2} } \hfill \\ {\sin \omega _{p} s_{1} } \hfill & {\sin \omega _{q} s_{2} } \hfill \\ \end{array} } \right\}ds_{1} ds_{2} . $$

If conditions (3) are satisfied, these integrals vanish as \(T \to \infty\). Hence, the LS estimators of the covariance components are asymptotically unbiased.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Javorskyj, I., Yuzefovych, R., Dzeryn, O. (2022). Component and the Least Square Estimation of Mean and Covariance Functions of Biperiodically Correlated Random Signals. In: Chaari, F., Leskow, J., Wylomanska, A., Zimroz, R., Napolitano, A. (eds) Nonstationary Systems: Theory and Applications. WNSTA 2021. Applied Condition Monitoring, vol 18. Springer, Cham. https://doi.org/10.1007/978-3-030-82110-4_8


  • DOI: https://doi.org/10.1007/978-3-030-82110-4_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82191-3

  • Online ISBN: 978-3-030-82110-4

