
Asymptotic Analysis of Regression Quantile Estimators for Real-Valued Chirp Signal Model


Abstract

In this paper, we consider the problem of robust estimation of the parameters of a one-dimensional chirp signal model. We propose regression quantile estimators (RQE) for parameter estimation and establish asymptotic theoretical properties of the RQE. We establish that the RQE are strongly consistent estimators and also derive their asymptotic normal distribution. We perform extensive simulation studies and real signal analysis to assess the performance of the proposed estimators and also to empirically validate the theoretical asymptotic results. We also present the performance of the RQE at different signal-to-noise ratio levels. It is observed that the RQE provide robust estimates when the noise distribution is heavy-tailed and when outliers are present in the data. It is also observed that the performance of the RQE is much better than that of the least squares estimators. Real signal analysis provides evidence of the practical applicability and usefulness of the proposed methodology.
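For readers who want a concrete picture of the estimator being analysed, the following is a minimal numerical sketch of the one-dimensional chirp model \(y_t=A\cos (\omega t+\eta t^2)+B\sin (\omega t+\eta t^2)+\epsilon _t\) and of a regression quantile fit obtained by minimizing the average check loss \(\psi _{\beta }\) of the residuals. It is not the estimation algorithm used in the paper; the optimizer, its starting value, the simplex steps and the simulated noise are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

def chirp(theta, t):
    """One-dimensional chirp: A*cos(w*t + eta*t^2) + B*sin(w*t + eta*t^2)."""
    A, B, w, eta = theta
    phase = w * t + eta * t ** 2
    return A * np.cos(phase) + B * np.sin(phase)

def check_loss(r, beta):
    """Check (quantile) loss psi_beta: beta*r for r >= 0, (beta-1)*r for r < 0."""
    return r * (beta - (r < 0))

def rqe(y, t, x0, beta=0.5):
    """Regression quantile estimate: minimize the mean check loss of the residuals.
    The objective is sharply oscillatory in (w, eta), so the simplex steps for those
    coordinates are kept very small; x0 is assumed to be a reasonable starting value
    (in practice it would come from a grid search or a preliminary estimator)."""
    obj = lambda th: np.mean(check_loss(y - chirp(th, t), beta))
    steps = np.array([1e-2, 1e-2, 1e-4, 1e-7])
    simplex = np.vstack([x0, x0 + np.diag(steps)])
    return minimize(obj, x0, method="Nelder-Mead",
                    options={"initial_simplex": simplex, "xatol": 1e-12,
                             "fatol": 1e-14, "maxfev": 100000}).x

# Simulated example with heavy-tailed (Student-t, 3 d.f.) noise.
rng = np.random.default_rng(0)
T = 500
t = np.arange(1, T + 1, dtype=float)
theta0 = np.array([2.0, 1.0, 0.5, 0.01])                 # (A0, B0, omega0, eta0)
y = chirp(theta0, t) + rng.standard_t(df=3, size=T)
print(rqe(y, t, x0=theta0 + np.array([1e-2, 1e-2, 1e-4, 1e-7])))  # should be close to theta0
```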


Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  1. T.J. Abatzoglou, Fast maximum likelihood joint estimation of frequency and frequency rate. IEEE Trans. Aerosp. Electron. Syst. AES 22, 708–715 (1986)

  2. S. Barbarossa, A. Scaglione, G.B. Giannakis, Product high-order ambiguity function for multicomponent polynomial-phase signal modeling. IEEE Trans. Signal Process. 46(3), 691–708 (1998)

  3. P. Djuric, S. Kay, Parameter estimation of chirp signals. IEEE Trans. Acoust. Speech Signal Process. 38(12), 2118–2126 (1990)

  4. R. Grover, D. Kundu, A. Mitra, On approximate least squares estimators of parameters of a one-dimensional chirp signal. Statistics 52(5), 1060–1085 (2018)

  5. R. Grover, D. Kundu, A. Mitra, Approximate least squares estimators of a two-dimensional chirp model and their asymptotic properties. J. Multivar. Anal. 168, 211–220 (2018)

  6. R.I. Jennrich, Asymptotic properties of non-linear least squares estimators. Ann. Math. Stat. 40, 633–643 (1969)

  7. T.S. Kim, H.K. Kim, S. Hur, Asymptotic properties of a particular nonlinear regression quantile estimation. Stat. Probab. Lett. 60, 387–394 (2002)

  8. R. Koenker, G. Bassett, Regression quantiles. Econometrica 46(1), 33–50 (1978)

  9. D. Kundu, S. Nandi, Parameter estimation of chirp signals in presence of stationary noise. Stat. Sin. 18(1), 187–201 (2008)

  10. A. Lahiri, D. Kundu, A. Mitra, On least absolute deviation estimators for one-dimensional chirp model. Statistics 48(2), 405–420 (2014)

  11. A. Lahiri, D. Kundu, A. Mitra, Efficient algorithm for estimating the parameters of chirp signal. J. Multivar. Anal. 108, 15–27 (2012)

  12. X. Li, Z. Sun, W. Yi, G. Cui, L. Kong, X. Yang, Computationally efficient coherent detection and parameter estimation algorithm for manoeuvring target. Signal Process. 155, 130–142 (2019)

  13. C.C. Lin, P.M. Djuric, Estimation of chirp signals by MCMC, in 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 00CH37100), vol 1 (2000), pp. 265–268

  14. A. Mittal, R. Grover, D. Kundu, A. Mitra, Estimation of the elementary chirp model parameters. IEEE Trans. Aerosp. Electron. Syst. (2023). https://doi.org/10.1109/TAES.2023.3254527

  15. M. Mujawar, J.A. Alzubi, Fundamentals of Mobile Communication (Redshine Publications, Lunawada, 2022). (ISBN: 978-1-4583-0356-1)

  16. S. Nandi, D. Kundu, Asymptotic properties of the least squares estimators of the parameters of the chirp signals. Ann. Inst. Stat. Math. 56(3), 529–544 (2004)

  17. S. Nandi, D. Kundu, Statistical Signal Processing (Springer, Berlin, 2020)

  18. W. Oberhofer, The consistency of nonlinear regression minimizing the L1-norm. Ann. Stat. 10(1), 316–319 (1982)

  19. P. O’Shea, A fast algorithm for estimating the parameters of a quadratic FM signal. IEEE Trans. Signal Process. 52(2), 385–393 (2004)

  20. C. Pang, S. Liu, Y. Han, Coherent detection algorithm for radar maneuvering targets based on discrete polynomial-phase transform. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 12(9), 3412–3422 (2019)

  21. A. Shukla, D. Kundu, A. Mitra, R. Grover, On estimating parameters of a multi-component chirp model with equal chirp rates. IEEE Trans. Aerosp. Electron. Syst. (2023). https://doi.org/10.1109/TAES.2023.3247974

  22. G.W. Stewart, Introduction to Matrix Computations (Academic Press, New York, 1973)

  23. A.W. Van der Vaart, Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics (Cambridge University Press, Cambridge, 1998)

  24. I.M. Vinogradov, The Method of Trigonometrical Sums in the Theory of Numbers (Dover Publications, Inc., Mineola, NY, 2004). (Translated from the Russian, revised and annotated by K. F. Roth and Anne Davenport, Reprint of the 1954 translation)

  25. A.M. Walker, On the estimation of a harmonic component in a time series with stationary independent residuals. Biometrika 58(1), 21–36 (1971)

  26. P. Wang, H. Li, B. Himed, Parameter estimation of linear frequency-modulated signals using integrated cubic phase function, in IEEE 42nd Asilomar Conference on Signals, Systems and Computers, 2008, October (2008), pp. 487–491

  27. H. White, Nonlinear regression on cross-section data. Econometrica 48(3), 721–746 (1980)


Acknowledgements

The authors would like to thank the Associate Editor and four anonymous referees for their constructive comments and suggestions, which have considerably improved the quality of the paper. The work of the second author is partially supported by Grant Number MTR/2020/000599 of the Science & Engineering Research Board, Department of Science & Technology, Government of India, and by the JJM Professor Chair grant, Ministry of Jal Shakti, Government of India.

Author information

Corresponding author

Correspondence to Amit Mitra.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests, other conflicts of interest, or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proof of Lemma 1

Proof

Realize that minimizing \(Q_T(\theta )\) with respect to \(\theta \) is equivalent to minimizing the following:

$$\begin{aligned} D_T(\theta )&=Q_T(\theta )-Q_T(\theta _0)\nonumber \\&=\frac{1}{T}\sum _{t=1}^T\left\{ \psi _{\beta }( Y_t-A\cos (\omega t+\eta t^2)-B\sin (\omega t+\eta t^2))-\psi _{\beta }(\epsilon _t)\right\} \nonumber \\&=\frac{1}{T}\sum _{t=1}^T\left\{ \psi _{\beta }(h_t(\theta )+\epsilon _t)-\psi _{\beta }(\epsilon _t)\right\} , \end{aligned}$$
(11)

where

$$\begin{aligned} h_t(\theta )=A_0\cos (\omega _0 t+\eta _0t^2)+B_0\sin (\omega _0t+\eta _0t^2)-A\cos (\omega t+\eta t^2)-B\sin (\omega t+\eta t^2). \end{aligned}$$
(12)

From (11), we get the following:

$$\begin{aligned} D_T(\theta )=\frac{1}{T}\sum _{t=1}^TX_t(\theta ) \end{aligned}$$

where \(X_t(\theta )=\psi _{\beta }(h_t(\theta )+\epsilon _t)-\psi _{\beta }(\epsilon _t)\).

To show that \({\mathbb {E}}[X_t(\theta )] < \infty \), we first consider the case of \(h_t(\theta )\ge 0\).

$$\begin{aligned} {\mathbb {E}}(X_t(\theta ))&=\int \left\{ \psi _{\beta }(h_t(\theta )+\epsilon _t)-\psi _{\beta }(\epsilon _t)\right\} \textrm{d}G(\epsilon _t)\\&= \int _{-\infty }^0\left( \psi _{\beta }(h_t(\theta )+\epsilon _t)-\psi _{\beta }(\epsilon _t)\right) \textrm{d}G(\epsilon _t) +\int _0^{\infty }\left( \beta (h_t(\theta )+\epsilon _t)-\beta \epsilon _t\right) \textrm{d}G(\epsilon _t)\\&=\int _{-\infty }^{-h_t(\theta )}\left( (\beta -1)(h_t(\theta )+\epsilon _t)-(\beta -1)\epsilon _t\right) \textrm{d}G(\epsilon _t)+\beta \int _{0}^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)\\&\qquad +\int _{-h_t(\theta )}^0\left( \beta (h_t(\theta )+\epsilon _t)-(\beta -1)\epsilon _t\right) \textrm{d}G(\epsilon _t)\\&= (\beta -1)\int _{-\infty }^{-h_t(\theta )}h_t(\theta )\textrm{d}G(\epsilon _t)+\int _{-h_t(\theta )}^0(\beta h_t(\theta )+\epsilon _t)\textrm{d}G(\epsilon _t)+\beta \int _0^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)\\&= \beta \left\{ \int _{-\infty }^{-h_t(\theta )}h_t(\theta )\textrm{d}G(\epsilon _t)+\int _{-h_t(\theta )}^{0}h_t(\theta )\textrm{d}G(\epsilon _t)+\int _0^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)\right\} \\&\qquad -\int _{-\infty }^{-h_t(\theta )}h_t(\theta )\textrm{d}G(\epsilon _t)+\int _{-h_t(\theta )}^0\epsilon _t\textrm{d}G(\epsilon _t)\\&=(\beta -1)\int _{-\infty }^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)+\int _{-h_t(\theta )}^0(h_t(\theta )+\epsilon _t)\textrm{d}G(\epsilon _t)+\int _0^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)\\&={\mathcal {I}}_1+{\mathcal {I}}_2+{\mathcal {I}}_3. \end{aligned}$$

We have by integration by parts and Mean Value Theorem (MVT)

$$\begin{aligned}&{\mathcal {I}}_2 =\int _{-h_t(\theta )}^0(h_t(\theta )+\epsilon _t)\textrm{d}G(\epsilon _t)\nonumber \\&\quad =\bigg [(h_t(\theta )+\epsilon _t)G(\epsilon _t)\bigg ]_{-h_t(\theta )}^0-\int _{-h_t(\theta )}^0G(\epsilon _t)\textrm{d}\epsilon _t\nonumber \\&\quad = h_t(\theta )G(0)-h_t(\theta )G(-h_t^*(\theta ))\nonumber \\&\quad = h_t(\theta )[\beta -G(-h_t^*(\theta ))], \end{aligned}$$
(13)

where \( -h_t^*(\theta ) \in [-h_t(\theta ),0] \).

Further,

$$\begin{aligned}&{\mathcal {I}}_1 =(\beta -1)\int _{-\infty }^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)=(\beta -1)h_t(\theta )\nonumber \\&{\mathcal {I}}_3=\int _0^{\infty }h_t(\theta )\textrm{d}G(\epsilon _t)=h_t(\theta )(1-\beta ) \end{aligned}$$
(14)

Thus, for \(h_t(\theta )\ge 0\), from (13) and (14) we get

$$\begin{aligned} {\mathbb {E}}(X_t(\theta ))&=(\beta -1)h_t(\theta )+h_t(\theta )[\beta -G(-h_t^*(\theta ))]+h_t(\theta )[1-\beta ]\nonumber \\&=\left\{ \beta -G(-h_t^*(\theta ))\right\} h_t(\theta ).\ \end{aligned}$$
(15)

(9) follows using the fact that \( G(0)=\beta \) and \( G(-h_t^*(\theta ))<\infty \). For the case of \(h_t(\theta )<0\),

$$\begin{aligned} {\mathbb {E}}(X_t(\theta ))=\left\{ \beta -G(-h_t^{**}(\theta ))\right\} h_t(\theta ),\ \end{aligned}$$
(16)

with \(-h_t^{**}(\theta )\in [0,-h_t(\theta )] \).

Compactness of the parameter space ensures that \(|h_t(\theta )|<4K\) and it follows from (15) and (16) that \({\mathbb {E}}(X_t(\theta ))<\infty \). Proceeding similarly, we can show that \(\texttt {Var}(X_t(\theta ))<\infty \) and the bounds are independent of t.

Further, since \(\Theta \) is compact, using an argument similar to that of Lahiri et al. [10], for any given \(\epsilon >0\) there exist \(\Theta _1,\Theta _2,\dots ,\Theta _k\) such that \(\Theta =\cup _{i=1}^k\Theta _i\) and \(\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-\underset{\theta \in \Theta _i}{\inf }X_t(\theta )<\frac{\epsilon }{4^t}\). For \(\theta \in \Theta _i\),

$$\begin{aligned} D_T(\theta )-\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}[D_T(\theta )]&=\left[ \frac{1}{T}\sum _{t=1}^TX_t(\theta )-\frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta )\right] \\&\quad +\left[ \frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}[D_T(\theta )]\right] ={\mathfrak {A}}(\theta )+{\mathfrak {B}}(\theta ). \end{aligned}$$

where

$$\begin{aligned} {\mathfrak {A}}(\theta )&=\frac{1}{T}\sum _{t=1}^TX_t(\theta )-\frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta )\\&\le \underset{\theta \in \Theta _i}{\sup }\left[ \frac{1}{T}\sum _{t=1}^TX_t(\theta )\right] -\frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta )\\&\le \frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-\frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta ). \end{aligned}$$

We see that the \(\underset{\theta \in \Theta _i}{\sup }X_t(\theta )\) are independent (though not identically distributed) random variables with finite means and variances, whose bounds do not depend on \(t\). Choosing \(T_{0i}\) large enough and applying Kolmogorov’s strong law of large numbers, we have \({\mathfrak {A}}(\theta )<\frac{\epsilon }{3}\ a.s.\) for all \(T\ge T_{0i}\), uniformly in \(\theta \in \Theta _i\), for any \(\epsilon >0\).

Let \({\mathfrak {B}}(\theta )={\mathfrak {C}}(\theta )+{\mathfrak {D}}(\theta )\), where

$$\begin{aligned} {\mathfrak {C}}(\theta )=\frac{1}{T}\sum _{t=1}^{T}{\mathbb {E}}\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-{\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\sup }X_t(\theta ) \end{aligned}$$

and

$$\begin{aligned} {\mathfrak {D}}(\theta )&={\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-{\mathbb {E}}\underset{T\rightarrow \infty }{\lim }D_T(\theta ). \end{aligned}$$

There exists \(T_{*i}\), large enough, such that for all \(T\ge T_{*i}\), \({\mathfrak {C}}(\theta )<\frac{\epsilon }{3}\). Further,

$$\begin{aligned} {\mathfrak {D}}(\theta )&={\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-{\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^TX_t(\theta )\\ {}&\le {\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\sup }X_t(\theta )-{\mathbb {E}}\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\underset{\theta \in \Theta _i}{\inf }X_t(\theta )\\&\le \underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\frac{\epsilon }{4^t}=0. \end{aligned}$$

Combining the above, we get \(D_T(\theta )-\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}[D_T(\theta )]\rightarrow 0\ a.s.\) uniformly \(\forall \theta \in \Theta \). \(\square \)

1.2 Proof of Lemma 2

Proof

Define \(A(\theta )=\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}[D_T(\theta )]\) and observe that \(A(\theta _0)=0\). Realize that Assumption A2 implies \(G(-h_t^*(\theta ))<\beta \) and \(G(-h_t^{**}(\theta ))>\beta \). The proof will be complete if we can show that \(A(\theta )>0 \) for all \(\theta \ne \theta _0\).

Now, for all \(\theta \ne \theta _0\),

$$\begin{aligned} A(\theta )&=\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}(D_T(\theta ))\\&=\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left( \frac{1}{T}\sum _{t=1}^TX_t(\theta )\right) \\&=\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\left\{ \sum _{t:\ h_t(\theta )\ge 0}h_t(\theta )\{\beta -G(-h_t^*(\theta ))\}+\sum _{t:\ h_t(\theta )<0}h_t(\theta )\{\beta -G(-h_t^{**}(\theta ))\}\right\} . \end{aligned}$$

From the above expression, we get the following,

$$\begin{aligned} A(\theta )\ge \underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\big |h_t(\theta )\big |\cdot \min \left\{ \beta -G(-h_t^{*}(\theta )),G(-h_t^{**}(\theta ))-\beta \right\} . \end{aligned}$$
(17)

Now,

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T\big |h_t(\theta )\big |^2&= \frac{1}{T}\sum _{t=1}^T \bigg |A_0\cos (\omega _0 t+\eta _0 t^2)+B_0\sin (\omega _0 t+\eta _0 t^2)-A\cos (\omega t+\eta t^2)-B\sin (\omega t+\eta t^2)\bigg |^2\\&=\frac{1}{T}\sum _{t=1}^T\bigg \{A_0^2\cos ^2(\omega _0t+\eta _0t^2)+B_0^2\sin ^2(\omega _0t+\eta _0t^2)+A^2\cos ^2(\omega t+\eta t^2)\\&\quad +B^2\sin ^2(\omega t+\eta t^2)\\&\quad +2A_0B_0\cos (\omega _0t+\eta _0t^2)\sin (\omega _0t+\eta _0t^2)-2AA_0\cos (\omega _0t+\eta _0t^2)\cos (\omega t+\eta t^2)\\&\quad -2BA_0\cos (\omega _0t+\eta _0t^2)\sin (\omega t+\eta t^2)-2B_0B\sin (\omega _0t+\eta _0t^2)\sin (\omega t+\eta t^2)\\&\quad -2AB_0\cos (\omega t+\eta t^2)\sin (\omega _0t+\eta _0t^2)+2AB\cos (\omega t+\eta t^2)\sin (\omega t+\eta t^2)\bigg \}. \end{aligned}$$

From the above expression, using Lemma 1 of Lahiri et al. [11], we get \(\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\big |h_t(\theta )\big |^2=\frac{1}{2}A_0^2+\frac{1}{2}B_0^2+\frac{1}{2}A^2+\frac{1}{2}B^2>0\) whenever \((\omega ,\eta )\ne (\omega _0,\eta _0)\), and \(\underset{T\rightarrow \infty }{\lim }\frac{1}{T}\sum _{t=1}^T\big |h_t(\theta )\big |^2=\frac{1}{2}(A-A_0)^2+\frac{1}{2}(B-B_0)^2>0\) when \((\omega ,\eta )=(\omega _0,\eta _0)\) but \((A,B)\ne (A_0,B_0)\). Thus, by Lemma 4 of Oberhofer [18], we can conclude that \(A(\theta )\) has a unique minimizer at \(\theta _0\in \Theta \). \(\square \)
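As an aside, the trigonometric averages borrowed from Lemma 1 of Lahiri et al. [11], which are used here and again in the proofs of Lemmas 5 and 6, are easy to check numerically. The short sketch below, with arbitrarily chosen frequency and chirp-rate values, is only an illustration and not part of the proof.

```python
import numpy as np

# Numerical illustration of the averages (1/T) sum cos^2(w t + eta t^2) -> 1/2,
# (1/T) sum cos(.) sin(.) -> 0 and (1/T^{k+1}) sum t^k cos^2(.) -> 1/(2(k+1)),
# for arbitrarily chosen (w, eta).
T = 200000
t = np.arange(1, T + 1, dtype=float)
w, eta = 1.3, 0.4
phase = w * t + eta * t ** 2
print(np.mean(np.cos(phase) ** 2))                    # approx 0.5
print(np.mean(np.cos(phase) * np.sin(phase)))         # approx 0.0
print(np.sum(t * np.cos(phase) ** 2) / T ** 2)        # approx 0.25
print(np.sum(t ** 2 * np.cos(phase) ** 2) / T ** 3)   # approx 1/6
```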

1.3 Proof of Lemma 3

Proof

Note that

$$\begin{aligned} Q_T^*(\theta )-Q_T(\theta )&=\frac{1}{T}\sum _{t=1}^T\left\{ \rho _T(h_t(\theta )+\epsilon _t)-\psi _{\beta }(h_t(\theta )+\epsilon _t)\right\} \\&=\frac{1}{T}\sum _{t=1}^T\left\{ \rho _T(\epsilon _t-h_t^*(\theta ))-\psi _{\beta }(\epsilon _t-h_t^*(\theta ))\right\} , \end{aligned}$$

where \(h_t^*(\theta )=f_t(\theta )-f_t(\theta _0)\), with \(f_t(\theta _0)=A_0\cos (\omega _0 t+\eta _0t^2)+B_0\sin (\omega _0t+\eta _0t^2)\) and \(f_t(\theta )=A\cos (\omega t+\eta t^2)+B\sin (\omega t+\eta t^2)\).

Let \(x(t;\theta )=\epsilon _t-h_t^*(\theta )\); then, from the above expression, we get

$$\begin{aligned} Q_T^*(\theta )-Q_T(\theta )&=\frac{1}{T}\sum _{t=1}^T\left\{ -\frac{a_T^3}{16}x^4(t;\theta )+\frac{3a_T}{8}x^2(t;\theta )+\frac{2\beta -1}{2}x(t;\theta )+\frac{3}{16a_T}\right\} \\&\quad {\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }+(\beta -1)x(t;\theta ){\mathbb {I}}_{\left\{ x(t;\theta )<-\frac{1}{a_T}\right\} }+\beta x(t;\theta ){\mathbb {I}}_{\left\{ x(t;\theta )>\frac{1}{a_T}\right\} }\\&\quad -\beta x(t;\theta ){\mathbb {I}}_{\left\{ x(t;\theta )>0\right\} }-(\beta -1)x(t;\theta ){\mathbb {I}}_{\left\{ x(t;\theta )<0\right\} }\\&=\frac{1}{T}\sum _{t=1}^T Z_t. \end{aligned}$$

where

$$\begin{aligned} Z_t&=\left\{ -\frac{a_T^3}{16}x^4(t;\theta )+\frac{3a_T}{8}x^2(t;\theta )+\frac{2\beta -1}{2}x(t;\theta )+\frac{3}{16a_T}\right\} {\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} } \\&\qquad \qquad -\beta x(t;\theta ){\mathbb {I}}_{\left\{ 0< x(t;\theta )< \frac{1}{a_T}\right\} }-(\beta -1)x(t;\theta ){\mathbb {I}}_{\left\{ -\frac{1}{a_T}< x(t;\theta )\le 0\right\} }. \end{aligned}$$

Observe that,

$$\begin{aligned} |Z_t|&\le \bigg |-\frac{a_T^3}{16}x^4(t;\theta )+\frac{3a_T}{8}x^2(t;\theta )+\frac{2\beta -1}{2}x(t;\theta )+\frac{3}{16a_T}\nonumber \\&\quad -\beta x(t;\theta )-(\beta -1)x(t;\theta )\bigg |{\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }\nonumber \\&=\bigg |-\frac{a_T^3}{16}x^4(t;\theta )+\frac{3a_T}{8}x^2(t;\theta )+\frac{x(t;\theta )}{2}-\beta x(t;\theta )+\frac{3}{16a_T}\bigg |{\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }\nonumber \\&\le \bigg |-\frac{a_T^3}{16}\frac{1}{a_T^4}+\frac{3a_T}{8}\frac{1}{a_T^2}+\frac{3}{16a_T}+\left( \frac{1}{2}-\beta \right) x(t;\theta )\bigg |{\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }\nonumber \\&\le \bigg |\frac{1}{2a_T}+\left( \frac{1}{2}-\beta \right) x(t;\theta )\bigg |{\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }. \end{aligned}$$
(18)

Realize that

$$\begin{aligned} 0<\beta<1&\implies -\frac{1}{2}<\frac{1}{2}-\beta<\frac{1}{2},\nonumber \\ \text{ and }\quad -\frac{1}{a_T}\le x(t;\theta )\le \frac{1}{a_T}&\implies -\frac{1}{2a_T}<\left( \frac{1}{2}-\beta \right) x(t;\theta )<\frac{1}{2a_T}. \end{aligned}$$
(19)

Thus, using (19) in (18), we get,

$$\begin{aligned} |Z_t|&\le \left( \bigg |\frac{1}{2a_T}\bigg |+\bigg |\left( \frac{1}{2}-\beta \right) x(t;\theta )\bigg |\right) {\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }\\&\le \left( \frac{1}{2a_T}+\frac{1}{2a_T}\right) {\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }\\&=\frac{1}{a_T}{\mathbb {I}}_{\left\{ |x(t;\theta )|\le \frac{1}{a_T}\right\} }. \end{aligned}$$

Thus, for any fixed \(\epsilon >0\),

$$\begin{aligned}&{\mathbb {P}}\left[ T\big |Q_T^*(\theta )-Q_T(\theta )\big |>\epsilon \right] \le \frac{{\mathbb {E}}\left[ T\big |Q_T^*(\theta )-Q_T(\theta )\big |\right] }{\epsilon } \,\,\, \text {(using Markov's inequality)}\\&\quad =\frac{1}{\epsilon }{\mathbb {E}}\left[ \bigg |\sum _{t=1}^TZ_t\bigg |\right] \\&\quad \le \frac{1}{\epsilon }{\mathbb {E}}\left[ \sum _{t=1}^T\big |Z_t\big |\right] \\&\quad \le \frac{1}{\epsilon a_T}\sum _{t=1}^T{\mathbb {E}}\left[ {\mathbb {I}}_{\left\{ \big |x(t;\theta )\big |\le \frac{1}{a_T}\right\} }\right] \\&\quad =\frac{1}{\epsilon a_T}\sum _{t=1}^T{\mathbb {P}}\left[ \big |x(t;\theta )\big |\le \frac{1}{a_T}\right] \\&\quad =\frac{1}{\epsilon a_T}\sum _{t=1}^T{\mathbb {P}}\left[ h_t^*(\theta )-\frac{1}{a_T}\le \epsilon _t\le h_t^*(\theta )+\frac{1}{a_T}\right] \\&\quad =\frac{1}{\epsilon a_T}\sum _{t=1}^T\int _{h_t^*(\theta )-\frac{1}{a_T}}^{h_t^*(\theta )+\frac{1}{a_T}} \textrm{d}G(\epsilon _t)\\&\quad =\frac{1}{\epsilon a_T}\sum _{t=1}^T\frac{2}{a_T}g(\alpha _T^*)\qquad \text {(using the integral mean value theorem, with } \big |\alpha _T^*-h_t^*(\theta )\big |\le \tfrac{1}{a_T})\\&\quad =\frac{2T}{\epsilon a_T^2}g(\alpha _T^*)\\&\quad =\frac{2}{\epsilon }\left( \frac{T^2}{a_T^3}\right) \left( \frac{a_T}{T}\right) g(\alpha _T^*)\longrightarrow 0\ \text {as}\ T\rightarrow \infty \qquad (\text {using}\ (7)). \end{aligned}$$

We can thus conclude that \(T\left\{ Q_T^*(\theta )-Q_T(\theta )\right\} =o_p(1)\), where \(R_T=o_p(1)\) indicates \(R_T \rightarrow 0,\) in probability, as \(T\rightarrow \infty \). Further, since the parameter space \(\Theta \) is compact and \(Q_T^*(\theta )-Q_T(\theta )\) is continuous, we can say that there exists \(\theta ^*\) such that \(T\{Q_T^*(\theta ^*)-Q_T(\theta ^*)\}=\underset{\theta \in \Theta }{\sup }\ T\{Q_T^*(\theta )-Q_T(\theta )\}\). We thus obtain the following result,

$$\begin{aligned} \underset{\theta \in \Theta }{\sup }\ T\{Q_T^*(\theta )-Q_T(\theta )\}=o_p(1) \end{aligned}$$
(20)

We also note that \({\mathbb {P}}[|Q_T^*(\theta )-Q_T(\theta )|>\epsilon ]\le 2\,g(\alpha _T^*)/(\epsilon a_T^2)\le {\mathcal {C}}/a_T^2\), where \({\mathcal {C}}\) is a constant. We get

$$\begin{aligned} \sum _{T=1}^{\infty }{\mathbb {P}}\left( |Q_T^*(\theta )-Q_T(\theta )|>\epsilon \right) \le {\mathcal {C}}\sum _{T=1}^{\infty }\frac{1}{a_T^2}<\infty , \end{aligned}$$

which implies that \(|Q_T^*(\theta )-Q_T(\theta )|\rightarrow 0\ a.s.\) and hence \(\sup _{\theta \in \Theta }|Q_T^*(\theta )-Q_T(\theta )|\rightarrow 0\ a.s.\) \(\square \)

1.4 Proof of Lemma 4

Proof

Since Lemma 3 holds, we proceed as in the proof of Theorem 1 and take \({W}^*(\theta )=\rho _T(h_t(\theta )+\epsilon _t)-\rho _T(\epsilon _t)\). Define \({D}^*_T(\theta )={Q}^*_T(\theta )-{Q}^*_T(\theta _0)\). Clearly, \(\theta _T^*\) is also the minimizer of \(D_T^*(\theta )\). Proceeding as in the proof of consistency of \({\hat{\theta }}_T\), we have \(D_T^{*}(\theta )-\lim _{T\rightarrow \infty }{\mathbb {E}}[D_T^*(\theta )]\rightarrow 0\ a.s.\) uniformly for all \( \theta \in \Theta \). Again, at \(\theta _0\), the value of \(\lim _{T\rightarrow \infty }{\mathbb {E}}[D_T^*(\theta )]\) is zero. Further, for \(\theta \ne \theta _0\),

$$\begin{aligned} \underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ D_T^*(\theta )\right]&=\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ D_T^*(\theta )-D_T(\theta )\right] +\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ D_T(\theta )\right] \\&=\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ Q_T^*(\theta )-Q_T(\theta )\right] -\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ Q_T^*(\theta _0)-Q_T(\theta _0)\right] \\&\quad +\underset{T\rightarrow \infty }{\lim }{\mathbb {E}}\left[ D_T(\theta )\right] . \end{aligned}$$

The first two terms above converge to zero and using Lemma 2, we finally have for \(\theta \ne \theta _0\),

$$\begin{aligned} \lim _{T\rightarrow \infty }{\mathbb {E}}\left[ D_T^*(\theta )\right] >0. \end{aligned}$$

Thus, \(\theta _T^*\) is a strongly consistent estimator of \(\theta _0\).\(\square \)

1.5 Proof of Lemma 5

Proof

Let \(\nabla Q_T^*(\theta ;\beta )\) denote the \(4\times 1\) vector of first derivatives and \(\nabla ^2Q_T^*(\theta ;\beta )\) the \(4\times 4\) matrix of second derivatives, where

$$\begin{aligned} \nabla Q_T^*(\theta ;\beta )&=\frac{\partial Q_T^*(\theta ;\beta )}{\partial \theta },\nonumber \\ \nabla ^2Q_T^*(\theta ;\beta )&=\frac{\partial ^2 Q_T^*(\theta ;\beta )}{\partial \theta \,\partial \theta '}=((d_{ij}(T,\theta ;\beta )))_{4\times 4}\ \text {(say).} \end{aligned}$$
(21)

To derive explicit expressions for \(\nabla Q_T^*(\theta )\) and \(\nabla ^2Q_T^*(\theta )\), we note the following consequences of the definition of the pseudo-function in (6):

$$\begin{aligned} \rho _T'(x)&=\left[ -\frac{a_T^3}{4}x^3+\frac{3a_T}{4}x+\frac{2\beta -1}{2}\right] {\mathbb {I}}_{\left\{ |x|<\frac{1}{a_T}\right\} }+\beta \ {\mathbb {I}}_{\left\{ x\ge \frac{1}{a_T}\right\} }+(\beta -1){\mathbb {I}}_{\left\{ x\le -\frac{1}{a_T}\right\} }, \end{aligned}$$
(22)
$$\begin{aligned} \rho _T''(x)&=\left[ -\frac{3a_T^3}{4}x^2+\frac{3a_T}{4}\right] {\mathbb {I}}_{\left\{ |x|<\frac{1}{a_T}\right\} }. \end{aligned}$$
(23)
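As an aside, the piecewise form of the pseudo-function \(\rho _T\) that appears in the proof of Lemma 3, together with the derivative (22), can be checked with a short numerical sketch; the quantile level and smoothing parameter below are arbitrary choices.

```python
import numpy as np

def psi(x, beta):
    """Check function: beta*x for x >= 0 and (beta - 1)*x for x < 0."""
    return np.where(x >= 0, beta * x, (beta - 1) * x)

def rho(x, beta, a):
    """Smoothed (pseudo) check function rho_T on the three regimes used in Lemma 3."""
    inner = -a ** 3 / 16 * x ** 4 + 3 * a / 8 * x ** 2 + (2 * beta - 1) / 2 * x + 3 / (16 * a)
    return np.where(x > 1 / a, beta * x,
                    np.where(x < -1 / a, (beta - 1) * x, inner))

def drho(x, beta, a):
    """rho_T'(x) as in (22)."""
    inner = -a ** 3 / 4 * x ** 3 + 3 * a / 4 * x + (2 * beta - 1) / 2
    return np.where(np.abs(x) < 1 / a, inner,
                    np.where(x >= 1 / a, beta, beta - 1))

beta, a = 0.25, 10.0                 # arbitrary quantile level and smoothing parameter a_T
x = np.linspace(-0.5, 0.5, 1001)
# rho_T deviates from psi_beta by at most 3/(16 a_T), attained at x = 0 ...
print(np.max(np.abs(rho(x, beta, a) - psi(x, beta))))      # 3/(16*a) = 0.01875
# ... and its numerical derivative matches (22) up to finite-difference error.
num = np.gradient(rho(x, beta, a), x)
print(np.max(np.abs(num[2:-2] - drho(x, beta, a)[2:-2])))  # small (finite-difference error)
```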

Since \({\hat{\theta }}_T\) is the minimizer of \(Q_T(\theta )\), we have \(Q_T({\hat{\theta }}_T)\le Q_T(\theta _T^*)\). Therefore,

$$\begin{aligned} T\{Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)\}&\le T\{Q_T^*({\hat{\theta }}_T)-Q_T({\hat{\theta }}_T)+Q_T(\theta _T^*)-Q_T^*(\theta _T^*)\}\nonumber \\&=o_p(1) \qquad \text {(using (20))}\nonumber \\ \implies T\{Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)\}&=o_p(1). \end{aligned}$$
(24)

Now, by Taylor series expansion up to second order, we get,

$$\begin{aligned} Q_T^*({\hat{\theta }}_T;\beta )-Q_T^*(\theta _T^*;\beta )&=\nabla Q_T^*(\theta _T^*;\beta )({\hat{\theta }}_T-\theta _T^*)\nonumber \\&\quad +\frac{1}{2}({\hat{\theta }}_T-\theta _T^*)^T\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )({\hat{\theta }}_T-\theta _T^*), \end{aligned}$$
(25)

where \({\bar{\theta }}_T=({\bar{A}}_T,{\bar{B}}_T,{\bar{\omega }}_T,{\bar{\eta }}_T)=\gamma {\hat{\theta }}_T+(1-\gamma )\theta _T^*\) for some \(0<\gamma <1\). Note that \(\nabla Q_T^*(\theta _T^*)=0\). Further, \(\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )\) is a symmetric matrix; let \(\lambda _{\texttt {min}}(T;\beta )\) be its smallest eigenvalue. Then, by the Courant–Fischer minimax characterization [22], we have

$$\begin{aligned} ({\hat{\theta }}_T-\theta _T^*)^T\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )({\hat{\theta }}_T-\theta _T^*)\ge \lambda _{\texttt {min}}(T;\beta )({\hat{\theta }}_T-\theta _T^*)^T({\hat{\theta }}_T-\theta _T^*). \end{aligned}$$

Using this inequality in (25), we get,

$$\begin{aligned} T({\hat{\theta }}_T-\theta _T^*)^T({\hat{\theta }}_T-\theta _T^*)\le \frac{2T}{\lambda _{\texttt {min}}(T;\beta )}\{Q_T^*({\hat{\theta }}_T;\beta )-Q_T^*(\theta _T^*;\beta )\}. \end{aligned}$$
(26)

Given (24), in order to show that \(\sqrt{T}\big \Vert {\hat{\theta }}_T-\theta _T^*\big \Vert =o_p(1)\), it is sufficient to show that \(\lambda _{\texttt {min}}(T;\beta )>0\); in other words, it needs to be shown that \(\underset{T\rightarrow \infty }{\lim }\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )\) is a positive definite matrix.

Now,

$$\begin{aligned} {\mathbb {E}}\left[ \frac{1}{T}\sum _{t=1}^T\rho _T''(x(t;{\bar{\theta }}_T);\beta )\right]&=\frac{1}{T}\sum _{t=1}^T{\mathbb {E}}\left[ \rho _T''(x(t;{\bar{\theta }}_T);\beta )\right] \nonumber \\&=\frac{1}{T}\sum _{t=1}^T\int _{h_t^*({\bar{\theta }}_T)-\frac{1}{a_T}}^{h_t^*({\bar{\theta }}_T)+\frac{1}{a_T}}\left[ -\frac{3a_T^3}{4}\left( \epsilon _t-h_t^*({\bar{\theta }}_T)\right) ^2+\frac{3}{4}a_T\right] g(\epsilon _t)\textrm{d}\epsilon _t\nonumber \\&=\left[ g(\epsilon _t)\left\{ -\frac{3}{4}a_T^3\frac{(\epsilon _t-h_t^*({\bar{\theta }}_T))^3}{3}+\frac{3}{4}a_T\epsilon _t\right\} \right] _{h_t^*(\theta )-\frac{1}{a_T}}^{h_t^*(\theta )+\frac{1}{a_T}}-{\mathcal {I}}_4\nonumber \\&=g\left( h_t^*({\bar{\theta }}_T)+\frac{1}{a_T}\right) \left\{ -\frac{1}{4}+\frac{3}{4}a_T\left( h_t^*({\bar{\theta }}_T)+\frac{1}{a_T}\right) \right\} \nonumber \\&\qquad -g\left( h_t^*({\bar{\theta }}_T)-\frac{1}{a_T}\right) \left\{ \frac{1}{4}+\frac{3}{4}a_T\left( h_t^*({\bar{\theta }}_T)-\frac{1}{a_T}\right) \right\} -{\mathcal {I}}_4, \end{aligned}$$
(27)

where

$$\begin{aligned} {\mathcal {I}}_4&=\int _{h_t^*({\bar{\theta }}_T)-\frac{1}{a_T}}^{h_t^*({\bar{\theta }}_T)+\frac{1}{a_T}}g'(\epsilon _t)\left\{ -\frac{3}{4}a_T^3\frac{(\epsilon _t-h_t^*({\bar{\theta }}_T))^3}{3}+\frac{3}{4}a_T\epsilon _t\right\} \textrm{d}\epsilon _t\nonumber \\&\le M\int _{h_t^*(\theta )-\frac{1}{a_T}}^{h_t^*(\theta )+\frac{1}{a_T}}\left\{ -\frac{3}{4}a_T^3\frac{(\epsilon _t-h_t^*({\bar{\theta }}_T))^3}{3}+\frac{3}{4}a_T\epsilon _t\right\} \textrm{d}\epsilon _t \nonumber \\ {}&\qquad (\text {using (A2})\ \text {that } g'(.) \ \text {is bounded by, (say)}\ M)\nonumber \\&=M\left[ -\frac{a_T^3}{4}\frac{(\epsilon _t-h_t^*({\bar{\theta }}_T))^4}{4}\right] _{h_t^*(\theta )-\frac{1}{a_T}}^{h_t^*(\theta )+\frac{1}{a_T}}+M\left[ \frac{3a_T}{4}\frac{\epsilon _t^2}{2}\right] _{h_t^*(\theta )-\frac{1}{a_T}}^{h_t^*(\theta )+\frac{1}{a_T}}\nonumber \\&=\frac{3}{8}Mh_t^*({\bar{\theta }}_T). \end{aligned}$$
(28)

\({\bar{\theta }}_T=\gamma {\hat{\theta }}_T+(1-\gamma )\theta _T^*\), where \(\gamma \in (0,1)\). Since, we have \({\hat{\theta }}_T\xrightarrow {a.s.}\theta _0\) as \(T\rightarrow \infty \) and by Lemma 4, \(\theta _T^*\xrightarrow {a.s.}\theta _0\), therefore, as \(T\rightarrow \infty \), \({\bar{\theta }}_T\xrightarrow {a.s.}\theta _0\) which further implies that as \(T\rightarrow \infty \), \(h_t^*({\bar{\theta }}_T)\xrightarrow {a.s.}h_t^*(\theta _0)\). By definition of \(h_t^*(\theta )\), we note that \(h_t^*(\theta _0)=0\). Using this result in (28), we get,

$$\begin{aligned} {\mathcal {I}}_4\rightarrow 0\ \text {as}\ T\rightarrow \infty . \end{aligned}$$

Thus, from (27), we get

$$\begin{aligned} {\mathbb {E}}\left[ \frac{1}{T}\sum _{t=1}^T\rho _T''(x(t;{\bar{\theta }}_T);\beta )\right] \longrightarrow g(0)\left\{ -\frac{1}{2}+\frac{3}{2}\right\} =g(0)\qquad \text {as}\ T\rightarrow \infty . \end{aligned}$$

Therefore, we have

$$\begin{aligned} {\mathbb {E}}\left[ \frac{1}{T}\sum _{t=1}^T\rho _T''(x(t;{\bar{\theta }}_T);\beta )-g(0)\right] =o(1). \end{aligned}$$
(29)

Similarly, we can show that

$$\begin{aligned} \texttt {Var}\left[ \frac{1}{T}\sum _{t=1}^T\rho _T''(x(t;{\bar{\theta }}_T);\beta )\right] =o(1), \end{aligned}$$
(30)

where Var[.] denotes variance of a random variable. Using Chebyshev’s Inequality, we have from (29) and (30),

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T\rho _T''(x(t;{\bar{\theta }}_T);\beta )=g(0)+o_p(1). \end{aligned}$$
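As an aside, this limit is easy to check by simulation at the true parameter value (where \(h_t^*=0\)). The sketch below assumes standard normal errors, for which \(G(0)=\beta =0.5\) and \(g(0)=1/\sqrt{2\pi }\), and uses an illustrative growth rate for \(a_T\) rather than restating condition (7).

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200000
a_T = T ** 0.3                     # a_T -> infinity, a_T / T -> 0 (illustrative rate only)
eps = rng.standard_normal(T)       # errors with G(0) = 0.5 and g(0) = 1/sqrt(2*pi)
# rho_T''(x) from (23), evaluated at x = eps_t (i.e. at theta = theta_0, where h_t^* = 0)
rho2 = np.where(np.abs(eps) < 1 / a_T, -0.75 * a_T ** 3 * eps ** 2 + 0.75 * a_T, 0.0)
print(rho2.mean(), 1 / np.sqrt(2 * np.pi))   # both approx 0.3989
```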

Further, using Lemma 1 of Lahiri et al. [11] (see also Vinogradov [24]), we calculate the following expressions, as defined in (21):

$$\begin{aligned} d_{11}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T}\sum _{t=1}^T\cos ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )\nonumber \\&=\frac{g(0)}{2}+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ d_{22}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T}\sum _{t=1}^T\sin ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )\nonumber \\&=\frac{g(0)}{2}+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ d_{12}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T}\sum _{t=1}^T\sin (\omega t+\eta t^2)\cos (\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )\nonumber \\&=o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T}d_{13}(T,{\bar{\theta }}_T;\beta )&=\frac{B}{T^2}\sum _{t=1}^Tt\cos ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{13}\nonumber \\&=\frac{B_0}{4}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T^2}d_{14}(T,{\bar{\theta }}_T;\beta )&=\frac{B}{T^3}\sum _{t=1}^Tt^2\cos ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{14}\nonumber \\&=\frac{B_0}{6}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T}d_{23}(T,{\bar{\theta }}_T;\beta )&=-\frac{A}{T^2}\sum _{t=1}^Tt\sin ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{23}\nonumber \\&=-\frac{A_0}{4}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T^2}d_{24}(T,{\bar{\theta }}_T;\beta )&=-\frac{A}{T^3}\sum _{t=1}^Tt^2\sin ^2(\omega t+\eta t^2)\rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{24}\nonumber \\&=-\frac{A_0}{6}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T^2}d_{33}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T^3}\sum _{t=1}^Tt^2\left( A^2\sin ^2(\omega t+\eta t^2)+B^2\cos ^2(\omega t+\eta t^2)\right) \nonumber \\&\qquad \rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{33}\nonumber \\&=\frac{A_0^2+B_0^2}{6}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \end{aligned}$$
(31)
$$\begin{aligned} \frac{1}{T^3}d_{34}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T^4}\sum _{t=1}^T\left( A^2t^3\sin ^2(\omega t+\eta t^2)+B^2t^3\cos ^2(\omega t+\eta t^2)\right) \nonumber \\&\quad \rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{34}\nonumber \\&=\frac{A_0^2+B_0^2}{8}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \nonumber \\ \frac{1}{T^4}d_{44}(T,{\bar{\theta }}_T;\beta )&=\frac{1}{T^5}\sum _{t=1}^T\left( A^2t^4\sin ^2(\omega t+\eta t^2)+B^2t^4\cos ^2(\omega t+\eta t^2)\right) \nonumber \\&\quad \rho _T''(x_t({\bar{\theta }}_T);\beta )+{\mathcal {D}}_{44}\nonumber \\&=\frac{A_0^2+B_0^2}{10}g(0)+o_p(1),\ \text {as}\ T\rightarrow \infty ; \end{aligned}$$
(32)

where \({\mathcal {D}}_{13},\ {\mathcal {D}}_{14},\ {\mathcal {D}}_{23},\ {\mathcal {D}}_{24},\ {\mathcal {D}}_{33},\ {\mathcal {D}}_{34},\ {\mathcal {D}}_{44}\) in the above equations are terms which go to 0 as \(T\rightarrow \infty \).

Defining \(D=diag\left( 1,1,T^{-1},T^{-2}\right) \), we get,

$$\begin{aligned} D\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )D{\mathop {=}\limits ^{p}}g(0)\begin{pmatrix} \frac{1}{2}&{}0&{}\frac{B_0}{4}&{}\frac{B_0}{6}\\ 0&{}\frac{1}{2}&{}-\frac{A_0}{4}&{}-\frac{A_0}{6}\\ \frac{B_0}{4}&{}-\frac{A_0}{4}&{}\frac{A_0^2+B_0^2}{6}&{}\frac{A_0^2+B_0^2}{8}\\ \frac{B_0}{6}&{}-\frac{A_0}{6}&{}\frac{A_0^2+B_0^2}{8}&{}\frac{A_0^2+B_0^2}{10} \end{pmatrix}+o_p(1)=g(0)\Sigma +o_p(1). \end{aligned}$$
(33)

Clearly, \(D\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )D\) converges in probability to \(g(0)\Sigma \), where \(\Sigma \) is positive definite. It is now easy to show that the matrix \(\underset{T\rightarrow \infty }{\lim }\nabla ^2Q_T^*({\bar{\theta }}_T;\beta )\) is positive definite. Hence, by (26), we have

$$\begin{aligned} \sqrt{T}\left( {\hat{\theta }}_T-\theta _T^*\right) =o_p(1). \end{aligned}$$
(34)

By Taylor series expansion of \(Q_T^*(\theta )\) about only \(\omega \) and about only \(\eta \), we obtain the following expressions:

$$\begin{aligned} Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)&=\frac{1}{2}\left( {\hat{\omega }}_T-\omega _T^*\right) ^2\frac{\partial ^2 Q_T^*({\bar{\theta }}_T)}{\partial \omega ^2}, \end{aligned}$$
(35)
$$\begin{aligned} Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)&=\frac{1}{2}\left( {\hat{\eta }}_T-\eta _T^*\right) ^2\frac{\partial ^2 Q_T^*({\bar{\theta }}_T)}{\partial \eta ^2}. \end{aligned}$$
(36)

Using (31), (32), (35) & (36), we obtain the following

$$\begin{aligned} \frac{2T\{Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)\}}{\frac{1}{T^2}d_{33}(T,{\bar{\theta }}_T;\beta )}&=T^3\left( {\hat{\omega }}_T-\omega _T^*\right) ^2, \\ \frac{2T\{Q_T^*({\hat{\theta }}_T)-Q_T^*(\theta _T^*)\}}{\frac{1}{T^4}d_{44}(T,{\bar{\theta }}_T;\beta )}&=T^5\left( {\hat{\eta }}_T-\eta _T^*\right) ^2. \end{aligned}$$

Using the above equations and the result obtained in (24), we finally get the following two important results

$$\begin{aligned} T^{3/2}\left( {\hat{\omega }}_T-\omega _T^*\right)&=o_p(1) \end{aligned}$$
(37)
$$\begin{aligned} \text {and} \, T^{5/2}\left( {\hat{\eta }}_T-\eta _T^*\right)&=o_p(1). \end{aligned}$$
(38)

From (34), (37) & (38), the result of Lemma 5 follows.\(\square \)

1.6 Proof of Lemma 6

Proof

Let \(Q_T^*(\theta ;\beta )=Q_T^*\) (say) attain its minimum at \(\theta _T^*\). We use the multivariate mean value theorem to obtain the following:

$$\begin{aligned} (Q_T^*)_{A_0}&=(Q_T^*)_{{\bar{A}}{\bar{A}}}(A_0-A_T^*)+(Q_T^*)_{{\bar{A}}{\bar{B}}}(B_0-B_T^*)\nonumber \\&\quad +(Q_T^*)_{{\bar{A}}{\bar{\omega }}}(\omega _0-\omega _T^*)+(Q_T^*)_{{\bar{A}}{\bar{\eta }}}(\eta _0-\eta _T^*), \end{aligned}$$
(39)
$$\begin{aligned} (Q_T^*)_{B_0}&=(Q_T^*)_{{\bar{B}}{\bar{A}}}(A_0-A_T^*)+(Q_T^*)_{{\bar{B}}{\bar{B}}}(B_0-B_T^*)\nonumber \\&\quad +(Q_T^*)_{{\bar{B}}{\bar{\omega }}}(\omega _0-\omega _T^*)+(Q_T^*)_{{\bar{B}}{\bar{\eta }}}(\eta _0-\eta _T^*), \end{aligned}$$
(40)
$$\begin{aligned} (Q_T^*)_{\omega _0}&=(Q_T^*)_{{\bar{\omega }}{\bar{A}}}(A_0-A_T^*)+(Q_T^*)_{{\bar{\omega }}{\bar{B}}}(B_0-B_T^*)\nonumber \\&\quad +(Q_T^*)_{{\bar{\omega }}{\bar{\omega }}}(\omega _0-\omega _T^*)+(Q_T^*)_{{\bar{\omega }}{\bar{\eta }}}(\eta _0-\eta _T^*), \end{aligned}$$
(41)
$$\begin{aligned} (Q_T^*)_{\eta _0}&=(Q_T^*)_{{\bar{\eta }}{\bar{A}}}(A_0-A_T^*)+(Q_T^*)_{{\bar{\eta }}{\bar{B}}}(B_0-B_T^*)\nonumber \\&\quad +(Q_T^*)_{{\bar{\eta }}{\bar{\omega }}}(\omega _0-\omega _T^*)+(Q_T^*)_{{\bar{\eta }}{\bar{\eta }}}(\eta _0-\eta _T^*), \end{aligned}$$
(42)

where

$$\begin{aligned} (Q_T^*)_{A_0}=\frac{\partial Q_T^*(\theta ;\beta )}{\partial A}\bigg |_{\theta _0=(A_0,B_0,\omega _0,\eta _0)}\qquad \text {and}\qquad (Q_T^*)_{{\bar{A}}{\bar{B}}}=\frac{\partial ^2 Q_T^*(\theta ;\beta )}{\partial A\partial B}\bigg |_{{\bar{\theta }}=({\bar{A}},{\bar{B}},{\bar{\omega }},{\bar{\eta }})}. \end{aligned}$$

Here, \({\bar{\theta }}=({\bar{A}},{\bar{B}},{\bar{\omega }},{\bar{\eta }})\) denotes a point on the line joining \(\theta _0=(A_0,B_0,\omega _0,\eta _0)\) and \(\theta _T^*=(A_T^*,B_T^*,\omega _T^*,\eta _T^*)\); it is not, in general, the same point as that defined in (25), but to simplify the notation we denote it in the same fashion. Now the system of equations (39)–(42) can be rewritten as follows:

$$\begin{aligned}&\left( T^{1/2}(Q_T^*)_{A_0},T^{1/2}(Q_T^*)_{B_0},T^{-1/2}(Q_T^*)_{\omega _0},T^{-3/2}(Q_T^*)_{\eta _0}\right) \nonumber \\&\quad = -\left( T^{1/2}(A_T^*-A_0),T^{1/2}(B_T^*-B_0),T^{3/2}(\omega _T^*-\omega _0),T^{5/2}(\eta _T^*-\eta _0)\right) \times Z_T, \end{aligned}$$
(43)

where

$$\begin{aligned} Z_T=\begin{pmatrix} (Q_T^*)_{{\bar{A}}{\bar{A}}} &{} \quad (Q_T^*)_{{\bar{A}}{\bar{B}}} &{} \quad T^{-1}(Q_T^*)_{{\bar{A}}{\bar{\omega }}} &{} \quad T^{-2}(Q_T^*)_{{\bar{A}}{\bar{\eta }}}\\ (Q_T^*)_{{\bar{B}}{\bar{A}}} &{} \quad (Q_T^*)_{{\bar{B}}{\bar{B}}} &{} \quad T^{-1}(Q_T^*)_{{\bar{B}}{\bar{\omega }}} &{} \quad T^{-2}(Q_T^*)_{{\bar{B}}{\bar{\eta }}}\\ T^{-1}(Q_T^*)_{{\bar{\omega }}{\bar{A}}} &{} \quad T^{-1}(Q_T^*)_{{\bar{\omega }}{\bar{B}}} &{} \quad T^{-2}(Q_T^*)_{{\bar{\omega }}{\bar{\omega }}} &{} \quad T^{-3}(Q_T^*)_{{\bar{\omega }}{\bar{\eta }}}\\ T^{-2}(Q_T^*)_{{\bar{\eta }}{\bar{A}}} &{} \quad T^{-2}(Q_T^*)_{{\bar{\eta }}{\bar{B}}} &{} \quad T^{-3}(Q_T^*)_{{\bar{\eta }}{\bar{\omega }}} &{} \quad T^{-4}(Q_T^*)_{{\bar{\eta }}{\bar{\eta }}}\\ \end{pmatrix}. \end{aligned}$$

Using \(h_t^*(\theta _0)=0\) and (22), we get

$$\begin{aligned} T^{1/2}(Q_T^*)_{A_0}=T^{-1/2}\sum _{t=1}^T\left( -\cos (\omega _0t+\eta _0t^2)\right) \left[ k_T(\epsilon _t)+l_T(\epsilon _t)\right] , \end{aligned}$$

where

$$\begin{aligned} k_T(\epsilon _t)&=\beta \ {\mathbb {I}}_{\left\{ \epsilon _t\ge \frac{1}{a_T}\right\} }+(\beta -1){\mathbb {I}}_{\left\{ \epsilon _t\le -\frac{1}{a_T}\right\} }\\ \text {and} \,\, l_T(\epsilon _t)&=\left[ -\frac{a_T^3}{4}\epsilon _t^3+\frac{3a_T}{4}\epsilon _t+\frac{2\beta -1}{2}\right] {\mathbb {I}}_{\left\{ |\epsilon _t|<\frac{1}{a_T}\right\} }. \end{aligned}$$

Using the fact that \(T^{-1/2}\sum _{t=1}^T\cos (\omega _0t+\eta _0t^2)\rightarrow 0\) as \(T\rightarrow \infty \), an application of Markov’s inequality yields the following result:

$$\begin{aligned} T^{-1/2}\sum _{t=1}^{T}-\cos (\omega _0t+\eta _0t^2)l_T(\epsilon _t)=o_p(1). \end{aligned}$$

On using the above, we get the following

$$\begin{aligned} T^{1/2}(Q_T^*)_{A_0}&=T^{-1/2}\sum _{t=1}^T(-\cos (\omega _0t+\eta _0t^2))k_T(\epsilon _t)+o_p(1), \end{aligned}$$
(44)
$$\begin{aligned} T^{1/2}(Q_T^*)_{B_0}&=T^{-1/2}\sum _{t=1}^T(-\sin (\omega _0t+\eta _0t^2))k_T(\epsilon _t)+o_p(1),\end{aligned}$$
(45)
$$\begin{aligned} T^{-1/2}(Q_T^*)_{\omega _0}&=T^{-3/2}\sum _{t=1}^T(A_0t\sin (\omega _0t+\eta _0t^2)-B_0t\cos (\omega _0t+\eta _0t^2))\nonumber \\&\quad k_T(\epsilon _t)+o_p(1) \end{aligned}$$
(46)
$$\begin{aligned} \text {and} \,\, T^{-3/2}(Q_T^*)_{\eta _0}&=T^{-5/2}\sum _{t=1}^T(A_0t^2\sin (\omega _0t+\eta _0t^2)-B_0t^2\cos (\omega _0t+\eta _0t^2))\nonumber \\&\quad k_T(\epsilon _t)+o_p(1). \end{aligned}$$
(47)

The sums in (44)–(47) are of the form \(\sum _{t=1}^TU_t\), with \(U_t\) appropriately defined. Using (7), we obtain from (44):

$$\begin{aligned} {\mathbb {E}}(U_t)&={\mathbb {E}}\left[ -T^{-1/2}\cos (\omega _0t+\eta _0t^2)k_T(\epsilon _t)\right] \nonumber \\&=-T^{-1/2}\cos (\omega _0t+\eta _0t^2)\left\{ \beta \int _{\frac{1}{a_T}}^{\infty }\textrm{d}G(\epsilon _t)+(\beta -1)\int _{-\infty }^{-\frac{1}{a_T}}\textrm{d}G(\epsilon _t)\right\} \nonumber \\&=-T^{-1/2}\cos (\omega _0t+\eta _0t^2)\left\{ \beta \left( 1-G\left( \frac{1}{a_T}\right) \right) +(\beta -1)G\left( -\frac{1}{a_T}\right) \right\} \nonumber \\&=o(1)\qquad \text {as}\ T\rightarrow \infty . \end{aligned}$$
(48)

Further, for (44),

$$\begin{aligned} {\mathbb {E}}(U_t^2)&={\mathbb {E}}\left[ T^{-1}\cos ^2(\omega _0t+\eta _0t^2)k_T^2(\epsilon _t)\right] \\&=T^{-1}\cos ^2(\omega _0t+\eta _0t^2)\left\{ \beta ^2\left( 1-G\left( \frac{1}{a_T}\right) \right) +(\beta -1)^2G\left( -\frac{1}{a_T}\right) \right\} \\ \implies \texttt {Var}(U_t)&=T^{-1}\cos ^2(\omega _0t+\eta _0t^2)\left\{ \beta ^2\left( 1-G \left( \frac{1}{a_T}\right) \right) +(\beta -1)^2G\left( -\frac{1}{a_T}\right) \right\} +o(1) \qquad (\text {from}\ (48)). \end{aligned}$$

Thus,

$$\begin{aligned} \sum _{t=1}^T\texttt {Var}(U_t)=\frac{\beta (1-\beta )}{2}+o(1). \end{aligned}$$

The same result follows for (45). The sums in (46) and (47) are handled analogously, with the weights \(-T^{-1/2}\cos (\omega _0t+\eta _0t^2)\) replaced by \(T^{-3/2}\left( A_0t\sin (\omega _0t+\eta _0t^2)-B_0t\cos (\omega _0t+\eta _0t^2)\right) \) and \(T^{-5/2}\left( A_0t^2\sin (\omega _0t+\eta _0t^2)-B_0t^2\cos (\omega _0t+\eta _0t^2)\right) \), respectively. Summarizing the above results, we get:

$$\begin{aligned} B_T^2=\sum _{t=1}^T\texttt {Var}(U_t)={\left\{ \begin{array}{ll} \frac{\beta (1-\beta )}{2}+o(1), &{} \text {for (44) and (45)},\\ \frac{A_0^2+B_0^2}{6}\beta (1-\beta )+o(1), &{} \text {for (46)},\\ \frac{A_0^2+B_0^2}{10}\beta (1-\beta )+o(1), &{} \text {for (47)}. \end{array}\right. } \end{aligned}$$

Realize that for any \(\epsilon >0\), \(\lim _{T\rightarrow \infty } B_T^{-2}\sum _{t=1}^T{\mathbb {E}}\left( U_t^2{\mathbb {I}}_{\{|U_t|\ge \epsilon B_T\}}\right) =0\). Thus, applying the Lindeberg–Feller central limit theorem [23], we can conclude that \(T^{1/2}(Q_T^*)_{A_0}\), \(T^{1/2}(Q_T^*)_{B_0}\), \(T^{-1/2}(Q_T^*)_{\omega _0}\) and \(T^{-3/2}(Q_T^*)_{\eta _0}\) converge in law to \(N\left( 0,\frac{\beta (1-\beta )}{2}\right) \), \(N\left( 0,\frac{\beta (1-\beta )}{2}\right) \), \(N\left( 0,\frac{(A_0^2+B_0^2)}{6}\beta (1-\beta )\right) \) and \(N\left( 0,\frac{(A_0^2+B_0^2)}{10}\beta (1-\beta )\right) \), respectively.
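As an aside, the first of these marginal variances can be checked by simulation: replacing \(k_T(\epsilon _t)\) by its limiting form \(\beta -{\mathbb {I}}_{\{\epsilon _t<0\}}\) in (44), the sample variance of \(T^{1/2}(Q_T^*)_{A_0}\) over independent replications should be close to \(\beta (1-\beta )/2\). The sketch below uses arbitrary parameter values and standard normal errors.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, T, R = 0.5, 2000, 2000
w0, eta0 = 0.5, 0.01
t = np.arange(1, T + 1, dtype=float)
c = -np.cos(w0 * t + eta0 * t ** 2)            # weights appearing in (44)
eps = rng.standard_normal((R, T))              # R independent replications of the errors
k = beta - (eps < 0)                           # limiting form of k_T(eps_t)
stat = (k * c).sum(axis=1) / np.sqrt(T)        # T^{1/2} (Q_T^*)_{A_0}, up to o_p(1)
print(stat.var(), beta * (1 - beta) / 2)       # both approx 0.125
```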

For the limiting joint distribution, we consider the following random variable:

$$\begin{aligned} W_T(\delta _1,\delta _2,\delta _3,\delta _4)&=\delta _1T^{1/2}(Q_T^*)_{A_0}+\delta _2T^{1/2}(Q_T^*)_{B_0}+\delta _3T^{-1/2}(Q_T^*)_{\omega _0}\\&\quad +\delta _4T^{-3/2}(Q_T^*)_{\eta _0}, \end{aligned}$$

where \(\delta _i,\ i=1,2,3,4,\) are arbitrary real numbers. As derived earlier, using (44)–(47),

$$\begin{aligned} W_T(\delta _1,\delta _2,\delta _3,\delta _4)&=\frac{1}{T}\sum _{t=1}^T\left\{ \delta _1 T^{1/2}(-\cos (\omega _0t+\eta _0t^2))+\delta _2T^{1/2}(-\sin (\omega _0t+\eta _0t^2))\right. \\&\quad +\left. \delta _3T^{-1/2}T_{\omega _0}(t)+\delta _4T^{-3/2}T_{\eta _0}(t)\right\} k_T(\epsilon _t)+o_p(1), \end{aligned}$$

where \(T_{\omega _0}(t)=A_0t\sin (\omega _0t+\eta _0t^2)-B_0t\cos (\omega _0t+\eta _0t^2)\) and \(T_{\eta _0}(t)=A_0t^2\sin (\omega _0t+\eta _0t^2)-B_0t^2\cos (\omega _0t+\eta _0t^2)\).

Clearly, \(W_T\) is also of the form \(\sum _{t=1}^TU_t\), where \({\mathbb {E}}(U_t)=o(1)\) and

$$\begin{aligned} B_T^2&=\sum _{t=1}^T\texttt {Var}(U_t)\\&=\left( \frac{\delta _1^2}{2}+\frac{\delta _2^2}{2}+\delta _3^2\frac{A_0^2+B_0^2}{6}+\delta _4^2\frac{A_0^2+B_0^2}{10}+\delta _1\delta _3\frac{B_0}{2}+\delta _1\delta _4\frac{B_0}{3} -\delta _2\delta _3\frac{A_0}{2}\right. \\&\quad \left. -\delta _2\delta _4\frac{A_0}{3}+\delta _3\delta _4\frac{A_0^2+B_0^2}{4}\right) \beta (1-\beta )+o(1) \end{aligned}$$

Thus, by the Lindeberg–Feller CLT applied to the sum given above, we can say that \(W_T(\delta _1,\delta _2,\delta _3,\delta _4)\) converges in law to a normal distribution with mean 0 and variance given by

$$\begin{aligned}&\left( \frac{\delta _1^2}{2}+\frac{\delta _2^2}{2}+\delta _3^2\frac{A_0^2+B_0^2}{6}+\delta _4^2\frac{A_0^2+B_0^2}{10}+\delta _1\delta _3\frac{B_0}{2}+\delta _1\delta _4\frac{B_0}{3}\right. \\&\left. \quad -\delta _2\delta _3\frac{A_0}{2}-\delta _2\delta _4\frac{A_0}{3}+\delta _3\delta _4\frac{A_0^2+B_0^2}{4}\right) \beta (1-\beta ). \end{aligned}$$

Hence, using the Cramér–Wold device, we have that

$$\begin{aligned} \left( T^{1/2}(Q_T^*)_{A_0},\ T^{1/2}(Q_T^*)_{B_0},\ T^{-1/2}(Q_T^*)_{\omega _0},\ T^{-3/2}(Q_T^*)_{\eta _0}\right) \end{aligned}$$

converges in law to \(N_4\left( (0,0,0,0),\beta (1-\beta )\Sigma \right) \), where

$$\begin{aligned} \Sigma =\begin{pmatrix} \frac{1}{2}&{} \quad 0&{} \quad \frac{B_0}{4}&{} \quad \frac{B_0}{6}\\ 0&{} \quad \frac{1}{2}&{} \quad -\frac{A_0}{4}&{} \quad -\frac{A_0}{6}\\ \frac{B_0}{4}&{} \quad -\frac{A_0}{4}&{} \quad \frac{A_0^2+B_0^2}{6}&{} \quad \frac{A_0^2+B_0^2}{8}\\ \frac{B_0}{6}&{} \quad -\frac{A_0}{6}&{} \quad \frac{A_0^2+B_0^2}{8}&{} \quad \frac{A_0^2+B_0^2}{10} \end{pmatrix}. \end{aligned}$$

We note that \(\Sigma ^{-1}=\Sigma ^*\), where \(\Sigma ^*\) is the matrix defined in (8).

From (43), and using the result in (33), we get, \(\lim _{T\rightarrow \infty }Z_T=g(0)\Sigma \).

Therefore,

$$\begin{aligned}&\left( T^{1/2}(A_T^*-A_0),T^{1/2}(B_T^*-B_0),T^{3/2}(\omega _T^*-\omega _0),T^{5/2}(\eta _T^*-\eta _0)\right) \nonumber \\&\qquad = -\left( T^{1/2}(Q_T^*)_{A_0},T^{1/2}(Q_T^*)_{B_0},T^{-1/2}(Q_T^*)_{\omega _0},T^{-3/2}(Q_T^*)_{\eta _0}\right) \times Z_T^{-1} \end{aligned}$$
(49)

converges in law to \(N_4\left( (0,0,0,0),\frac{\beta (1-\beta )}{g^2(0)}\Sigma ^*\right) \). \(\square \)
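As an aside, the limiting covariance matrix can be evaluated numerically for given \((A_0,B_0)\), \(\beta \) and \(g(0)\). The sketch below builds \(\Sigma \), confirms that it is positive definite (the property used in the proof of Lemma 5) and computes \(\frac{\beta (1-\beta )}{g^2(0)}\Sigma ^{-1}\), whose diagonal entries are the asymptotic variances of the suitably scaled estimators; the amplitude values and the standard normal error density are illustrative choices only.

```python
import numpy as np

def sigma_matrix(A0, B0):
    """The matrix Sigma appearing in (33) and in the limiting distribution above."""
    s = A0 ** 2 + B0 ** 2
    return np.array([
        [1 / 2,       0,       B0 / 4,  B0 / 6],
        [0,       1 / 2,      -A0 / 4, -A0 / 6],
        [B0 / 4, -A0 / 4,      s / 6,    s / 8],
        [B0 / 6, -A0 / 6,      s / 8,    s / 10],
    ])

A0, B0, beta = 2.0, 1.0, 0.5
g0 = 1 / np.sqrt(2 * np.pi)        # g(0) for standard normal errors (illustrative)
Sigma = sigma_matrix(A0, B0)
print(np.linalg.eigvalsh(Sigma).min() > 0)                 # True: Sigma is positive definite
acov = beta * (1 - beta) / g0 ** 2 * np.linalg.inv(Sigma)  # beta(1-beta)/g(0)^2 * Sigma^{-1}
# Asymptotic variances of T^{1/2}(A*-A0), T^{1/2}(B*-B0), T^{3/2}(w*-w0), T^{5/2}(eta*-eta0):
print(np.diag(acov))
```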

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Banerjee, S.S., Mitra, A. & Mondal, R. Asymptotic Analysis of Regression Quantile Estimators for Real-Valued Chirp Signal Model. Circuits Syst Signal Process 43, 1053–1100 (2024). https://doi.org/10.1007/s00034-023-02504-1
