
An efficient methodology to estimate the parameters of a two-dimensional chirp signal model

Abstract

Two-dimensional (2-D) chirp models have received considerable attention in many areas of statistical signal processing, particularly in image processing, where they are used to model gray-scale and texture images, magnetic resonance images, optical images, etc. In this paper we address the problem of estimating the unknown parameters of a 2-D chirp model under the assumption that the errors are independently and identically distributed (i.i.d.). The key attribute of the proposed estimation procedure is that it is computationally more efficient than the least squares estimation method. Moreover, the proposed estimators are shown to have the same asymptotic properties as the least squares estimators, thus providing computational savings with no compromise in the efficiency of the estimators. We extend the proposed method to a sequential procedure for estimating the unknown parameters of a 2-D chirp model with multiple components, and under the assumption of i.i.d. errors we study the large sample properties of these sequential estimators. Simulation studies and two synthetic data analyses are presented to demonstrate the effectiveness of the proposed estimators.


References

  • Abatzoglou, T. J. (1986). Fast maximum likelihood joint estimation of frequency and frequency rate. IEEE Transactions on Aerospace and Electronic Systems, 6, 708–715.

  • Barbarossa, S. (1995). Analysis of multicomponent LFM signals by a combined Wigner-Hough transform. IEEE Transactions on Signal Processing, 43(6), 1511–1515.

  • Djuric, P. M., & Kay, S. M. (1990). Parameter estimation of chirp signals. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(12), 2118–2126.

  • Djurović, I. (2017). Quasi ML algorithm for 2-D PPS estimation. Multidimensional Systems and Signal Processing, 28(2), 371–387.

  • Djurović, I., & Simeunović, M. (2018). Parameter estimation of 2D polynomial phase signals using NU sampling and 2D CPF. IET Signal Processing, 12(9), 1140–1145.

  • Djurović, I., Wang, P., & Ioana, C. (2010). Parameter estimation of 2-D cubic phase signal using cubic phase function with genetic algorithm. Signal Processing, 90(9), 2698–2707.

  • Francos, J. M., & Friedlander, B. (1998). Two-dimensional polynomial phase signals: Parameter estimation and bounds. Multidimensional Systems and Signal Processing, 9(2), 173–205.

  • Francos, J. M., & Friedlander, B. (1999). Parameter estimation of 2-D random amplitude polynomial-phase signals. IEEE Transactions on Signal Processing, 47(7), 1795–1810.

  • Friedlander, B., & Francos, J. M. (1995). Estimation of amplitude and phase parameters of multicomponent signals. IEEE Transactions on Signal Processing, 43(4), 917–926.

  • Friedlander, B., & Francos, J. M. (1996). An estimation algorithm for 2-D polynomial phase signals. IEEE Transactions on Image Processing, 5(6), 1084–1087.

  • Grover, R., Kundu, D., & Mitra, A. (2018). Approximate least squares estimators of a two-dimensional chirp model and their asymptotic properties. Journal of Multivariate Analysis, 168, 211–220.

  • Kundu, D., & Nandi, S. (2008). Parameter estimation of chirp signals in presence of stationary noise. Statistica Sinica, 18(1), 187–201.

  • Lahiri, A. (2013). Estimators of parameters of chirp signals and their properties. PhD thesis, Indian Institute of Technology, Kanpur.

  • Lahiri, A., & Kundu, D. (2017). On parameter estimation of two-dimensional polynomial phase signal model. Statistica Sinica, 27, 1779–1792.

  • Lahiri, A., Kundu, D., & Mitra, A. (2013). Efficient algorithm for estimating the parameters of two dimensional chirp signal. Sankhya B, 75(1), 65–89.

  • Lahiri, A., Kundu, D., & Mitra, A. (2015). Estimating the parameters of multiple chirp signals. Journal of Multivariate Analysis, 139, 189–206.

  • Peleg, S., & Porat, B. (1991). Linear FM signal parameter estimation from discrete-time observations. IEEE Transactions on Aerospace and Electronic Systems, 27(4), 607–616.

  • Prasad, A., Kundu, D., & Mitra, A. (2008). Sequential estimation of the sum of sinusoidal model parameters. Journal of Statistical Planning and Inference, 138(5), 1297–1313.

  • Richards, F. S. (1961). A method of maximum-likelihood estimation. Journal of the Royal Statistical Society, Series B, 23(2), 469–475.

  • Saha, S., & Kay, S. M. (2002). Maximum likelihood parameter estimation of superimposed chirps using Monte Carlo importance sampling. IEEE Transactions on Signal Processing, 50(2), 224–230.

  • Simeunović, M., & Djurović, I. (2016). Parameter estimation of multicomponent 2D polynomial-phase signals using the 2D PHAF-based approach. IEEE Transactions on Signal Processing, 64(3), 771–782.

  • Simeunović, M., Djurović, I., & Djukanović, S. (2014). A novel refinement technique for 2-D PPS parameter estimation. Signal Processing, 94, 251–254.

  • Stoica, P., Jakobsson, A., & Li, J. (1997). Cisoid parameter estimation in the colored noise case: Asymptotic Cramér-Rao bound, maximum likelihood, and nonlinear least-squares. IEEE Transactions on Signal Processing, 45(8), 2048–2059.

  • Wang, P., Li, H., Djurović, I., & Himed, B. (2010). Integrated cubic phase function for linear FM signal analysis. IEEE Transactions on Aerospace and Electronic Systems, 46(3), 963–977.

  • Wood, J. C., & Barry, D. T. (1994). Radon transformation of time-frequency distributions for analysis of multicomponent signals. IEEE Transactions on Signal Processing, 42(11), 3166–3177.

  • Wu, C. F. (1981). Asymptotic theory of nonlinear least squares estimation. The Annals of Statistics, 9, 501–513.

  • Xia, X. G. (2000). Discrete chirp-Fourier transform and its application to chirp rate estimation. IEEE Transactions on Signal Processing, 48(11), 3122–3133.

  • Zhang, K., Wang, S., & Cao, F. (2008). Product cubic phase function algorithm for estimating the instantaneous frequency rate of multicomponent two-dimensional chirp signals. In 2008 Congress on Image and Signal Processing (Vol. 5, pp. 498–502).


Acknowledgements

The authors would like to thank the Editor and the two anonymous reviewers for their positive assessment of the manuscript and for their constructive comments.

Author information

Corresponding author

Correspondence to Rhythm Grover.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Henceforth, \(\varvec{\theta }(n_0) = (A(n_0), B(n_0), \alpha , \beta )\) will denote the parameter vector and \(\varvec{\theta }^0(n_0) = (A^0(n_0), B^0(n_0), \alpha ^0, \beta ^0)\) the true parameter vector of the 1-D chirp model (9).

To prove Theorem 1, we need the following lemma:

Lemma 1

Consider the set \(S_c = \{(\alpha , \beta ) : |\alpha - \alpha ^0| \geqslant c \text { or } |\beta - \beta ^0| \geqslant c\}\). If for any c > 0,

$$\begin{aligned} \liminf \inf \limits _{(\alpha , \beta ) \in S_c} \frac{1}{M N}\bigg [R^{(1)}_{MN}(\alpha , \beta ) - R^{(1)}_{MN}(\alpha ^0, \beta ^0)\bigg ] > 0 \ a.s. \end{aligned}$$
(21)

then, \({\hat{\alpha }} \rightarrow \alpha ^0\) and \({\hat{\beta }} \rightarrow \beta ^0\) almost surely as \(M \rightarrow \infty \). Note that \(\liminf \) stands for the limit inferior and \(\inf \) for the infimum.

Proof

This proof follows along the same lines as that of Lemma 1 of Wu (1981). \(\square \)

Proof of Theorem 1:

Let us consider the following:

$$\begin{aligned}&\liminf \inf \limits _{(\alpha , \beta ) \in S_c} \frac{1}{M N}\bigg [R^{(1)}_{MN}(\alpha , \beta ) - R^{(1)}_{MN}(\alpha ^0, \beta ^0)\bigg ]&\\&\quad = \liminf \inf \limits _{(\alpha , \beta ) \in S_c} \frac{1}{M N}\bigg [\sum _{{n_0} = 1}^{N}R_{M}(\alpha , \beta , n_0) - \sum _{{n_0} = 1}^{N} R_{M}(\alpha ^0, \beta ^0, n_0)\bigg ]&\\&\quad = \liminf \inf \limits _{(\alpha , \beta ) \in S_c} \frac{1}{M N}\bigg [\sum _{{n_0} = 1}^{N}Q_{M}({\hat{A}}(n_0), {\hat{B}}(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{M}({\hat{A}}(n_0), {\hat{B}}(n_0), \alpha ^0, \beta ^0)\bigg ]&\\&\quad \geqslant \liminf \inf \limits _{(\alpha , \beta ) \in S_c} \frac{1}{M N}\bigg [\sum _{{n_0} = 1}^{N}Q_{M}({\hat{A}}(n_0), {\hat{B}}(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{M}(A^0(n_0) , B^0(n_0), \alpha ^0, \beta ^0)\bigg ]&\\&\quad \geqslant \liminf \inf \limits _{ \varvec{\theta }(n_0) \in M_c^{n_0}} \frac{1}{M N}\bigg [\sum _{{n_0} = 1}^{N}Q_{M}(A(n_0), B(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{M}(A^0(n_0) , B^0(n_0), \alpha ^0, \beta ^0)\bigg ]&\\&\quad \geqslant \frac{1}{N}\sum _{n_0 = 1}^{N} \liminf \inf \limits _{\varvec{\theta }(n_0) \in M_c^{n_0}} \frac{1}{M}\bigg [Q_M(\varvec{\theta }(n_0)) - Q_M(\varvec{\theta }^0(n_0))\bigg ] > 0.&\end{aligned}$$

The last inequality follows from the proof of Theorem 1 of Kundu and Nandi (2008). Here, \(Q_{M}(A(n_0), B(n_0), \alpha , \beta ) = {{\varvec{Y}}}^{\top }_{n_0}({{\varvec{\textit{I}}}} - {{\varvec{Z}}}_M(\alpha , \beta )({{\varvec{Z}}}_M(\alpha , \beta )^{\top }{{\varvec{Z}}}_M(\alpha , \beta ))^{-1}{{\varvec{Z}}}_M(\alpha , \beta )^{\top }){{\varvec{Y}}}_{n_0}\). Also note that the set \(M_c^{n_0} = \{\varvec{\theta }(n_0) : |A(n_0) - A^0(n_0)| \geqslant c \text { or } |B(n_0) - B^0(n_0)| \geqslant c \text { or } |\alpha - \alpha ^0| \geqslant c \text { or } |\beta - \beta ^0| \geqslant c\}\) satisfies \(S_c \subset M_c^{n_0}\) for all \(n_0 \in \{1, \ldots , N\}\). Thus, using Lemma 1, \({\hat{\alpha }} \xrightarrow {a.s.} \alpha ^0\) and \({\hat{\beta }} \xrightarrow {a.s.} \beta ^0\). \(\square \)
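
To make the objects in this proof concrete, the following minimal Python sketch (function names are ours) evaluates \(Q_M\) and \(R^{(1)}_{MN}\). It assumes, in line with model (9), that \({{\varvec{Z}}}_M(\alpha , \beta )\) is the \(M \times 2\) matrix with columns \(\cos (\alpha m + \beta m^2)\) and \(\sin (\alpha m + \beta m^2)\), \(m = 1, \ldots , M\), and that \({{\varvec{Y}}}_{n_0}\) is the \(n_0\)-th column of the \(M \times N\) data matrix.

```python
import numpy as np

def design_matrix(alpha, beta, M):
    # Z_M(alpha, beta): M x 2 matrix with chirp cosine and sine columns
    m = np.arange(1, M + 1)
    phase = alpha * m + beta * m ** 2
    return np.column_stack([np.cos(phase), np.sin(phase)])

def profiled_rss(alpha, beta, y_col):
    # Q_M at the profiled linear parameters:
    # y' (I - Z (Z'Z)^{-1} Z') y, computed stably via least squares
    Z = design_matrix(alpha, beta, len(y_col))
    coef, *_ = np.linalg.lstsq(Z, y_col, rcond=None)
    resid = y_col - Z @ coef
    return float(resid @ resid)

def r1_mn(alpha, beta, Y):
    # R^(1)_MN(alpha, beta): the profiled criterion summed over n_0
    return sum(profiled_rss(alpha, beta, Y[:, n0]) for n0 in range(Y.shape[1]))
```

The proposed estimators \({\hat{\alpha }}\) and \({\hat{\beta }}\) minimise this reduced criterion over \((\alpha , \beta )\) alone, which is the source of the computational savings over a full least squares search.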

Proof of Theorem 3:

Let us denote \(\varvec{\xi } = (\alpha , \beta )\) and \(\hat{\varvec{\xi }} = ({\hat{\alpha }}, {\hat{\beta }})\), the estimator of \(\varvec{\xi }^0 = (\alpha ^0, \beta ^0)\) obtained by minimising the function \(R_{MN}^{(1)}(\varvec{\xi }) = R_{MN}^{(1)}(\alpha , \beta )\) defined in (10).

Using multivariate Taylor series, we expand the \(1 \times 2\) gradient vector \({{\varvec{R}}}_{MN}^{(1)'}(\hat{\varvec{\xi }})\) of the function \(R_{MN}^{(1)}(\varvec{\xi })\), around the point \(\varvec{\xi }^0\) as follows:

$$\begin{aligned} {{\varvec{R}}}_{MN}^{(1)'}(\hat{\varvec{\xi }}) - {{\varvec{R}}}_{MN}^{(1)'}(\xi ^0) = (\hat{\varvec{\xi }} - \varvec{\xi }^0){{\varvec{R}}}_{MN}^{(1)''}(\bar{\varvec{\xi }}), \end{aligned}$$

where \(\bar{\varvec{\xi }}\) is a point between \(\hat{\varvec{\xi }}\) and \(\varvec{\xi }^0\) and \({{\varvec{R}}}_{MN}^{(1)''}(\bar{\varvec{\xi }})\) is the \(2 \times 2\) Hessian matrix of the function \(R_{MN}^{(1)}(\varvec{\xi })\) at the point \(\bar{\varvec{\xi }}\). Since \(\hat{\varvec{\xi }}\) minimises the function \(R_{MN}^{(1)}(\varvec{\xi })\), \({{\varvec{R}}}_{MN}^{(1)'}(\hat{\varvec{\xi }}) = 0\). Thus, we have

$$\begin{aligned} (\hat{\varvec{\xi }} - \varvec{\xi }^0) = - {{\varvec{R}}}_{MN}^{(1)'}(\varvec{\xi }^0)[{{\varvec{R}}}_{MN}^{(1)''}(\bar{\varvec{\xi }})]^{-1}. \end{aligned}$$

Multiplying both sides by the diagonal matrix \({{\varvec{D}}}_1^{-1}\), where \({{\varvec{D}}}_1 = \text {diag}(M^{-3/2}N^{-1/2},\; M^{-5/2}N^{-1/2})\), we get:

$$\begin{aligned} (\hat{\varvec{\xi }} - \varvec{\xi }^0){{\varvec{D}}}_1^{-1} = - {{\varvec{R}}}_{MN}^{(1)'}(\varvec{\xi }^0){{\varvec{D}}}_1[{{\varvec{D}}}_1R_{MN}^{(1)''}(\bar{\varvec{\xi }}){{\varvec{D}}}_1]^{-1}. \end{aligned}$$
(22)

Consider the vector,

$$\begin{aligned} {{\varvec{R}}}_{MN}^{(1)'}(\varvec{\xi }^0){{\varvec{D}}}_1 = \begin{bmatrix} \frac{1}{M^{3/2}N^{1/2}}\frac{\partial R_{MN}^{(1)}(\varvec{\xi }^0)}{\partial \alpha },&\frac{1}{M^{5/2}N^{1/2}}\frac{\partial R_{MN}^{(1)}(\varvec{\xi }^0)}{\partial \beta } \end{bmatrix}. \end{aligned}$$

On computing the elements of this vector and using preliminary result (5) (see Sect. 2.1) and the definition of the function:

$$\begin{aligned} R_{MN}^{(1)}(\alpha , \beta ) = \sum \limits _{{n_0} = 1}^{N}R_M(\alpha , \beta , n_0) \end{aligned}$$

we obtain the following result:

$$\begin{aligned} -{{\varvec{R}}}_{MN}^{(1)'}(\varvec{\xi }^0){{\varvec{D}}}_1 \xrightarrow {d} \varvec{{\mathcal {N}}}_2(\mathbf{0 }, 2\sigma ^2\varvec{\Sigma }^{-1}) \text { as } M \rightarrow \infty . \end{aligned}$$
(23)

Since \(\hat{\varvec{\xi }} \xrightarrow {a.s.} \varvec{\xi }^0\), and as each element of the matrix \({{\varvec{R}}}_{MN}^{(1)''}(\varvec{\xi })\) is a continuous function of \(\varvec{\xi }\), we have

$$\begin{aligned} \lim _{M \rightarrow \infty }{{\varvec{D}}}_1{{\varvec{R}}}_{MN}^{(1)''}(\bar{\varvec{\xi }}){{\varvec{D}}}_1 = \lim _{M \rightarrow \infty }{{\varvec{D}}}_1{{\varvec{R}}}_{MN}^{(1)''}(\varvec{\xi }^0){{\varvec{D}}}_1. \end{aligned}$$

Now using preliminary result (6) (see Sect. 2.1), it can be seen that:

$$\begin{aligned} \lim _{M \rightarrow \infty }{{\varvec{D}}}_1{{\varvec{R}}}_{MN}^{(1)''}(\varvec{\xi }^0){{\varvec{D}}}_1 = \varvec{\Sigma }^{-1}. \end{aligned}$$
(24)

On combining (22), (23) and (24), we have the desired result. \(\square \)
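
The limit (24) can also be examined numerically. The sketch below (illustrative only; the model constants, noise level, and difference step sizes are our choices) approximates the Hessian of \(R^{(1)}_{MN}\) at \((\alpha ^0, \beta ^0)\) by central finite differences and scales it by \({{\varvec{D}}}_1\); the scaled matrix visibly stabilises as \(M\) grows.

```python
import numpy as np

def r1(xi, Y):
    # reduced criterion R^(1)_MN: the chirp columns are profiled out of
    # every column of the M x N data matrix Y simultaneously
    a, b = xi
    m = np.arange(1, Y.shape[0] + 1)
    Z = np.column_stack([np.cos(a * m + b * m ** 2), np.sin(a * m + b * m ** 2)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return float(np.sum((Y - Z @ coef) ** 2))

def scaled_hessian(Y, xi0, h=(1e-6, 1e-8)):
    # D_1 R''(xi0) D_1 with D_1 = diag(M^{-3/2} N^{-1/2}, M^{-5/2} N^{-1/2});
    # the Hessian is approximated by central finite differences
    M, N = Y.shape
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei = np.eye(2)[i] * h[i]
            ej = np.eye(2)[j] * h[j]
            H[i, j] = (r1(xi0 + ei + ej, Y) - r1(xi0 + ei - ej, Y)
                       - r1(xi0 - ei + ej, Y) + r1(xi0 - ei - ej, Y)) / (4 * h[i] * h[j])
    D1 = np.diag([M ** -1.5 * N ** -0.5, M ** -2.5 * N ** -0.5])
    return D1 @ H @ D1

rng = np.random.default_rng(0)
a0, b0, N = 1.5, 0.1, 25
for M in (100, 200, 400):
    m, n = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1), indexing="ij")
    phi = a0 * m + b0 * m ** 2 + 2.0 * n + 0.2 * n ** 2
    Y = 2 * np.cos(phi) + np.sin(phi) + 0.5 * rng.standard_normal((M, N))
    print(M, scaled_hessian(Y, np.array([a0, b0])).round(3))
```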

Appendix B

To prove Theorem 5, we need the following lemmas:

Lemma 2

Consider the set \(S_c^{1} = \{(\alpha , \beta ) : |\alpha - \alpha _1^0| \geqslant c \text { or } |\beta - \beta _1^0| \geqslant c\}\). If for any c > 0,

$$\begin{aligned} \liminf \inf \limits _{(\alpha , \beta ) \in S_c^{1}} \frac{1}{M N}[R^{(1)}_{1,MN}(\alpha , \beta ) - R^{(1)}_{1,MN}(\alpha _1^0, \beta _1^0)] > 0 \ a.s. \end{aligned}$$
(25)

then, \({\hat{\alpha }}_1 \rightarrow \alpha _1^0\) and \({\hat{\beta }}_1 \rightarrow \beta _1^0\) almost surely as \(M \rightarrow \infty \).

Proof

This proof follows along the same lines as the proof of Lemma 1. \(\square \)

Lemma 3

If Assumptions 1–3 and P4 are satisfied, then:

$$\begin{aligned}\begin{aligned}&M({\hat{\alpha }}_1 - \alpha _1^0) \xrightarrow {a.s.} 0,\\&M^2({\hat{\beta }}_1 - \beta _1^0) \xrightarrow {a.s.} 0. \end{aligned} \end{aligned}$$

Proof

Let us denote \({{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi })\) as the \(1 \times 2\) gradient vector and \({{\varvec{R}}}_{1, MN}^{(1)''}(\varvec{\xi })\) as the \(2 \times 2\) Hessian matrix of the function \(R_{1, MN}^{(1)}(\varvec{\xi })\). Using multivariate Taylor series expansion, we expand the function \({{\varvec{R}}}_{1, MN}^{(1)'}(\hat{\varvec{\xi }}_1)\) around the point \(\varvec{\xi }_1^0\) as follows:

$$\begin{aligned} {{\varvec{R}}}_{1, MN}^{(1)'}(\hat{\varvec{\xi }}_1) - {{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0) = (\hat{\varvec{\xi }}_1 - \varvec{\xi }_1^0){{\varvec{R}}}_{1, MN}^{(1)''}(\bar{\varvec{\xi }}_1) \end{aligned}$$

where \(\bar{\varvec{\xi }}_1\) is a point between \(\hat{\varvec{\xi }}_1\) and \(\varvec{\xi }_1^0\). Note that \({{\varvec{R}}}_{1, MN}^{(1)'}(\hat{\varvec{\xi }}_1) = 0\). Thus, we have:

$$\begin{aligned} (\hat{\varvec{\xi }}_1 - \varvec{\xi }_1^0) = - {{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0)[{{\varvec{R}}}_{1, MN}^{(1)''}(\bar{\varvec{\xi }}_1)]^{-1}. \end{aligned}$$
(26)

Multiplying both sides by \(\frac{1}{\sqrt{MN}}{{\varvec{D}}}_1^{-1}\), we get:

$$\begin{aligned} (\hat{\varvec{\xi }}_1 - \varvec{\xi }_1^0)(\sqrt{MN}{{\varvec{D}}}_1)^{-1} = - \frac{1}{\sqrt{MN}}{{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0){{\varvec{D}}}_1[{{\varvec{D}}}_1{{\varvec{R}}}_{1, MN}^{(1)''}(\bar{\varvec{\xi }}_1){{\varvec{D}}}_1]^{-1}. \end{aligned}$$
(27)

Since each of the elements of the matrix \({{\varvec{R}}}_{1, MN}^{(1)''}(\varvec{\xi })\) is a continuous function of \(\varvec{\xi },\)

$$\begin{aligned} \lim _{M \rightarrow \infty } {{\varvec{D}}}_1{{\varvec{R}}}_{1, MN}^{(1)''}(\bar{\varvec{\xi }}_1){{\varvec{D}}}_1 = \lim _{M \rightarrow \infty } {{\varvec{D}}}_1{{\varvec{R}}}_{1, MN}^{(1)''}(\varvec{\xi }_1^0){{\varvec{D}}}_1. \end{aligned}$$

By definition,

$$\begin{aligned} R_{1, MN}^{(1)}(\varvec{\xi }) = \sum _{{n_0} = 1}^{N} R_{1,M}(\varvec{\xi }, n_0). \end{aligned}$$
(28)

Using this and preliminary results (13) and (15) (see Sect. 3.1), it can be seen that:

$$\begin{aligned}&- \frac{1}{\sqrt{MN}}{{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0){{\varvec{D}}}_1 \xrightarrow {a.s.} 0 \text { as } M \rightarrow \infty . \end{aligned}$$
(29)
$$\begin{aligned}&{{\varvec{D}}}_1{{\varvec{R}}}_{1, MN}^{(1)''}(\varvec{\xi }_1^0){{\varvec{D}}}_1 \xrightarrow {a.s.} \varvec{\Sigma }_1^{-1} \text { as } M \rightarrow \infty . \end{aligned}$$
(30)

On combining (27), (29) and (30), we have the desired result. \(\square \)

Proof of Theorem 5:

Consider the left-hand side of (25), that is,

$$\begin{aligned}&\liminf \inf \limits _{(\alpha , \beta ) \in S_c^{1}} \frac{1}{M N}\left[ R^{(1)}_{1,MN}(\alpha , \beta ) - R^{(1)}_{1,MN}(\alpha _1^0, \beta _1^0)\right]&\\&\quad = \liminf \inf \limits _{(\alpha , \beta ) \in S_c^{1}} \frac{1}{M N}\left[ \sum _{{n_0} = 1}^{N}Q_{1,M}({\hat{A}}_1(n_0), {\hat{B}}_1(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{1,M}({\hat{A}}_1(n_0), {\hat{B}}_1(n_0), \alpha _1^0, \beta _1^0)\right]&\\&\quad \geqslant \liminf \inf \limits _{(\alpha , \beta ) \in S_c^{1}} \frac{1}{M N}\left[ \sum _{{n_0} = 1}^{N}Q_{1,M}({\hat{A}}_1(n_0), {\hat{B}}_1(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{1,M}(A_1^0(n_0) , B_1^0(n_0), \alpha _1^0, \beta _1^0)\right]&\\&\quad \geqslant \liminf \inf \limits _{ \varvec{\theta }_1(n_0) \in M_c^{1,n_0}} \frac{1}{M N}\left[ \sum _{{n_0} = 1}^{N}Q_{1,M}(A_1(n_0), B_1(n_0), \alpha , \beta ) - \sum _{{n_0} = 1}^{N} Q_{1,M}(A_1^0(n_0) , B_1^0(n_0), \alpha _1^0, \beta _1^0)\right]&\\&\quad \geqslant \frac{1}{N}\sum _{n_0 = 1}^{N} \liminf \inf \limits _{\varvec{\theta }_1(n_0) \in M_c^{1,n_0}} \frac{1}{M}\bigg [Q_{1,M}(\varvec{\theta }_1(n_0)) - Q_{1,M}(\varvec{\theta }_1^0(n_0))\bigg ] > 0.&\end{aligned}$$

Here, \(Q_{1,M}(A(n_0), B(n_0), \alpha , \beta ) = {{\varvec{Y}}}^{\top }_{n_0}({{\varvec{\textit{I}}}} - {{\varvec{Z}}}_M(\alpha , \beta )({{\varvec{Z}}}_M(\alpha , \beta )^{\top }{{\varvec{Z}}}_M(\alpha , \beta ))^{-1}{{\varvec{Z}}}_M(\alpha , \beta )^{\top }){{\varvec{Y}}}_{n_0}\), and \(M_c^{1,n_0}\) is obtained by replacing \(\alpha ^0\) and \(\beta ^0\) by \(\alpha _1^0\) and \(\beta _1^0\), respectively, in the set \(M_c^{n_0}\) defined in the proof of Theorem 1. The last step follows from the proof of Theorem 2.4.1 of Lahiri et al. (2015). Thus, using Lemma 2, \({\hat{\alpha }}_1 \xrightarrow {a.s.} \alpha _1^0\) and \({\hat{\beta }}_1 \xrightarrow {a.s.} \beta _1^0\) as \(M \rightarrow \infty \).

Following similar arguments, one can obtain the consistency of \({\hat{\gamma }}_1\) and \({\hat{\delta }}_1\) as \(N \rightarrow \infty \). Also,

$$\begin{aligned} \begin{aligned}&N({\hat{\gamma }}_1 - \gamma _1^0) \xrightarrow {a.s.} 0,\\&N^2({\hat{\delta }}_1 - \delta _1^0) \xrightarrow {a.s.} 0. \end{aligned} \end{aligned}$$

The proof of the above equations follows along the same lines as the proof of Lemma 3. From Theorem 7, it follows that as min\(\{M, N\} \rightarrow \infty \):

$$\begin{aligned} \begin{aligned}&({\hat{A}}_1 - A_1^0) \xrightarrow {a.s.} 0,\\&({\hat{B}}_1 - B_1^0) \xrightarrow {a.s.} 0. \end{aligned} \end{aligned}$$

Thus, we have the following relationship between the first component of model (1) and its estimate:

$$\begin{aligned} \begin{aligned}&{\hat{A}}_1 \cos ({\hat{\alpha }}_1 m + {\hat{\beta }}_1 m^2 + {\hat{\gamma }}_1 n + {\hat{\delta }}_1 n^2) + {\hat{B}}_1 \sin ({\hat{\alpha }}_1 m + {\hat{\beta }}_1 m^2 + {\hat{\gamma }}_1 n + {\hat{\delta }}_1 n^2) \\&\quad = A_1^0 \cos (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n + \delta _1^0 n^2) + B_1^0 \sin (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n + \delta _1^0 n^2) + o(1). \end{aligned} \end{aligned}$$
(31)

Here a function \(g\) is \(o(1)\) if \(g \rightarrow 0\) almost surely as \(\min \{M, N\} \rightarrow \infty \).

Using (31) and following the same arguments as above for the consistency of \({\hat{\alpha }}_1\), \({\hat{\beta }}_1\), \({\hat{\gamma }}_1\) and \({\hat{\delta }}_1\), we can show that \({\hat{\alpha }}_2\), \({\hat{\beta }}_2\), \({\hat{\gamma }}_2\) and \({\hat{\delta }}_2\) are strongly consistent estimators of \(\alpha _2^0\), \(\beta _2^0\), \(\gamma _2^0\) and \(\delta _2^0\), respectively. The same argument extends to any \(k \leqslant p.\) Hence the result. \(\square \)
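
The sequential scheme analysed in this theorem can be summarised in a few lines of code. In the sketch below (our notation; `estimate_one` is a hypothetical stand-in for the single-component estimation method studied above), each stage fits one 2-D chirp component to the current residual surface and then removes it, in the spirit of the data transformation (18).

```python
import numpy as np

def component(m, n, A, B, a, b, c, d):
    # one 2-D chirp component A cos(phi) + B sin(phi),
    # with phi = a*m + b*m^2 + c*n + d*n^2
    phi = a * m + b * m ** 2 + c * n + d * n ** 2
    return A * np.cos(phi) + B * np.sin(phi)

def sequential_fit(Y, p, estimate_one):
    # fit p components one at a time: estimate, subtract, repeat;
    # estimate_one(resid) should return (A, B, a, b, c, d)
    M, N = Y.shape
    m, n = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1), indexing="ij")
    resid = np.asarray(Y, dtype=float).copy()
    fits = []
    for _ in range(p):
        theta = estimate_one(resid)
        resid = resid - component(m, n, *theta)
        fits.append(theta)
    return fits, resid
```

Theorem 5 is precisely the statement that the estimators produced at every stage \(k \leqslant p\) of such a scheme are strongly consistent.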

Proof of Theorem 7:

We consider the following two cases, which together cover both scenarios, namely underestimation and overestimation of the number of components:

  • Case 1 When \(k = 1\):

    $$\begin{aligned} \begin{aligned} \begin{bmatrix} {\hat{A}}_1 \\ {\hat{B}}_1 \end{bmatrix} = [{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)^{\top }{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)]^{-1}{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)^{\top } {{\varvec{Y}}}_{MN \times 1} \end{aligned} \end{aligned}$$
    (32)

    Using Lemma 1 of Lahiri et al. (2015), it can be seen that:

    $$\begin{aligned} \frac{1}{MN}[{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)^{\top }{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)] \rightarrow \frac{1}{2}{{\varvec{\textit{I}}}}_{2 \times 2} \text { as } \min \{M, N\} \rightarrow \infty . \end{aligned}$$

    Substituting this result in (32), we get:

    $$\begin{aligned}\begin{aligned} \begin{bmatrix} {\hat{A}}_1 \\ {\hat{B}}_1 \end{bmatrix}&= \frac{2}{MN}{{\varvec{W}}}({\hat{\alpha }}_1, {\hat{\beta }}_1, {\hat{\gamma }}_1, {\hat{\delta }}_1)^{\top } {{\varvec{Y}}}_{MN \times 1} + o(1)\\&= \begin{bmatrix} \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y(m,n)\cos ({\hat{\alpha }}_1 m + {\hat{\beta }}_1 m^2 + {\hat{\gamma }}_1 n + {\hat{\delta }}_1 n^2) + o(1) \\ \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y(m,n)\sin ({\hat{\alpha }}_1 m + {\hat{\beta }}_1 m^2 + {\hat{\gamma }}_1 n + {\hat{\delta }}_1 n^2) + o(1) \end{bmatrix}. \end{aligned} \end{aligned}$$

    Now consider the estimate \({\hat{A}}_1\). Using a multivariate Taylor series, we expand \(\cos ({\hat{\alpha }}_1 m + {\hat{\beta }}_1 m^2 + {\hat{\gamma }}_1 n + {\hat{\delta }}_1 n^2)\) around the point \((\alpha _1^0, \beta _1^0, \gamma _1^0, \delta _1^0)\) to obtain:

    $$\begin{aligned} \begin{aligned}&{\hat{A}}_1 \\&\quad = \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y(m,n)\bigg \{\cos (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n + \delta _1^0 n^2) \\&\qquad - m ({\hat{\alpha }}_1 - \alpha _1^0)\sin (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n \\&\qquad + \delta _1^0 n^2) - m^2({\hat{\beta }}_1 - \beta _1^0)\sin (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n + \delta _1^0 n^2) \\&\qquad - n ({\hat{\gamma }}_1 - \gamma _1^0)\sin (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n \\&\qquad + \delta _1^0 n^2) - n^2({\hat{\delta }}_1 - \delta _1^0)\sin (\alpha _1^0 m + \beta _1^0 m^2 + \gamma _1^0 n + \delta _1^0 n^2) \bigg \} \\&\qquad \rightarrow 2 \times \frac{A_1^0}{2} = A_1^0 \text { almost surely as } \min \{M, N\} \rightarrow \infty , \end{aligned} \end{aligned}$$

    using (1) and Lemmas 1 and 2 of Lahiri et al. (2015). Similarly, it can be shown that \({\hat{B}}_1 \rightarrow B_1^0\) almost surely as \(\min \{M, N\} \rightarrow \infty \).

    For the linear parameter estimates of the second component, consider:

    $$\begin{aligned}\begin{aligned} \begin{bmatrix} {\hat{A}}_2 \\ {\hat{B}}_2 \end{bmatrix} = \begin{bmatrix} \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y_1(m,n)\cos ({\hat{\alpha }}_2 m + {\hat{\beta }}_2 m^2 + {\hat{\gamma }}_2 n + {\hat{\delta }}_2 n^2) + o(1)\\ \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y_1(m,n)\sin ({\hat{\alpha }}_2 m + {\hat{\beta }}_2 m^2 + {\hat{\gamma }}_2 n + {\hat{\delta }}_2 n^2) + o(1) \end{bmatrix}. \end{aligned} \end{aligned}$$

    Here, \(y_1(m,n)\) is the data obtained at the second stage after eliminating the effect of the first component from the original data as defined in (18). Using the relationship (31) and following the same procedure as for the consistency of \({\hat{A}}_1\), it can be seen that:

    $$\begin{aligned} {\hat{A}}_2 \xrightarrow {a.s.} A_2^0 \quad \text {and} \quad {\hat{B}}_2 \xrightarrow {a.s.} B_2^0 \text { as } \min \{M, N\} \rightarrow \infty . \end{aligned}$$
    (33)

    It is evident that the result can be easily extended for any \(2 \leqslant k \leqslant p\).

  • Case 2 When \(k = p+1\):

    $$\begin{aligned} \begin{aligned} \begin{bmatrix} {\hat{A}}_{p+1} \\ {\hat{B}}_{p+1} \end{bmatrix} = \begin{bmatrix} \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y_p(m,n)\cos ({\hat{\alpha }}_{p+1} m + {\hat{\beta }}_{p+1} m^2 + {\hat{\gamma }}_{p+1} n + {\hat{\delta }}_{p+1} n^2) + o(1)\\ \frac{2}{MN}\sum \limits _{n=1}^{N}\sum \limits _{m=1}^{M}y_p(m,n)\sin ({\hat{\alpha }}_{p+1} m + {\hat{\beta }}_{p+1} m^2 + {\hat{\gamma }}_{p+1} n + {\hat{\delta }}_{p+1} n^2) + o(1) \end{bmatrix}, \end{aligned} \end{aligned}$$
    (34)

    where

    $$\begin{aligned} \begin{aligned}&y_p(m,n)\\&\quad = y(m,n) - \sum \limits _{j=1}^{p}\bigg \{{\hat{A}}_j \cos ({\hat{\alpha }}_j m + {\hat{\beta }}_j m^2 + {\hat{\gamma }}_j n + {\hat{\delta }}_j n^2) \\&\qquad + {\hat{B}}_j \sin ({\hat{\alpha }}_j m + {\hat{\beta }}_j m^2 + {\hat{\gamma }}_j n + {\hat{\delta }}_j n^2) \bigg \} \\&\quad = X(m,n) + o(1), \text { using }(31) \text { and case 1 results.} \end{aligned} \end{aligned}$$

    From here, it is not difficult to see that (34) implies the following result:

    $$\begin{aligned} {\hat{A}}_{p+1} \xrightarrow {a.s.} 0 \quad \text {and} \quad {\hat{B}}_{p+1} \xrightarrow {a.s.} 0 \text { as } \min \{M, N\} \rightarrow \infty . \end{aligned}$$

    This is obtained using Lemma 2 of Lahiri et al. (2015). It is apparent that the result holds true for any \(k > p.\)

\(\square \)
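
The limiting form of (32) used throughout this proof is a pair of sample averages: given the estimated nonlinear parameters, \({\hat{A}}_k\) and \({\hat{B}}_k\) are obtained by correlating the (residual) data with the fitted cosine and sine surfaces. A minimal sketch, with our function names:

```python
import numpy as np

def linear_params(Y, a, b, c, d):
    # A_hat = (2/MN) sum y(m,n) cos(phi) and B_hat = (2/MN) sum y(m,n) sin(phi),
    # with phi evaluated at the estimated nonlinear parameters
    M, N = Y.shape
    m, n = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1), indexing="ij")
    phi = a * m + b * m ** 2 + c * n + d * n ** 2
    A_hat = 2.0 / (M * N) * np.sum(Y * np.cos(phi))
    B_hat = 2.0 / (M * N) * np.sum(Y * np.sin(phi))
    return A_hat, B_hat
```

Applied to the residual surface \(y_p(m,n)\), which contains no remaining chirp component, these averages converge to zero almost surely; this is the overestimation safeguard established in Case 2.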

Proof of Theorem 8:

Consider (26) and multiply both sides of the equation by the diagonal matrix \({{\varvec{D}}}_1^{-1}\):

$$\begin{aligned} (\hat{\varvec{\xi }}_1 - \varvec{\xi }_1^0){{\varvec{D}}}_1^{-1} = - {{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0){{\varvec{D}}}_1[{{\varvec{D}}}_1{{\varvec{R}}}_{1, MN}^{(1)''}(\bar{\varvec{\xi }}_1){{\varvec{D}}}_1]^{-1}. \end{aligned}$$
(35)

Computing the elements of the vector \(- {{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0){{\varvec{D}}}_1\) and using definition (28) and the preliminary result (14) (Sect. 3.1), we obtain the following result:

$$\begin{aligned} - {{\varvec{R}}}_{1, MN}^{(1)'}(\varvec{\xi }_1^0){{\varvec{D}}}_1 \xrightarrow {d} \varvec{{\mathcal {N}}}_2(\mathbf{0 }, 2\sigma ^2 \varvec{\Sigma }_1^{-1}) \text { as } M \rightarrow \infty . \end{aligned}$$
(36)

On combining (35), (36) and (30), we have:

$$\begin{aligned} (\hat{\varvec{\xi }}_1 - \varvec{\xi }_1^0){{\varvec{D}}}_1^{-1} \xrightarrow {d} \varvec{{\mathcal {N}}}_2(\mathbf{0 }, 2\sigma ^2 \varvec{\Sigma }_1). \end{aligned}$$

This result can be extended for \(k = 2\) using the relation (31) and following the same argument as above. Similarly, we can continue to extend the result for any \(k \leqslant p\). \(\square \)
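
The rates \(M^{3/2}N^{1/2}\) and \(M^{5/2}N^{1/2}\) implied by this theorem can be visualised with a small Monte Carlo experiment such as the sketch below. It is illustrative only: the model constants are our choices, and the optimisation is a local Nelder-Mead refinement started at the true values (the global grid search needed in practice is outside the scope of this sketch). The empirical spreads of the scaled errors remain stable as \(M\) grows.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2021)

def r1(xi, Y):
    # reduced criterion R^(1)_MN, as in the earlier sketches
    a, b = xi
    m = np.arange(1, Y.shape[0] + 1)
    Z = np.column_stack([np.cos(a * m + b * m ** 2), np.sin(a * m + b * m ** 2)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return float(np.sum((Y - Z @ coef) ** 2))

def scaled_errors(M, N, reps=100, a0=1.5, b0=0.1, sigma=0.5):
    m, n = np.meshgrid(np.arange(1, M + 1), np.arange(1, N + 1), indexing="ij")
    phi = a0 * m + b0 * m ** 2 + 2.0 * n + 0.2 * n ** 2
    # initial simplex kept inside the basin of the true local minimum
    simplex = np.array([[a0, b0], [a0 + 0.3 / M, b0], [a0, b0 + 0.3 / M ** 2]])
    out = []
    for _ in range(reps):
        Y = 2 * np.cos(phi) + np.sin(phi) + sigma * rng.standard_normal((M, N))
        est = minimize(r1, x0=simplex[0], args=(Y,), method="Nelder-Mead",
                       options={"initial_simplex": simplex, "xatol": 1e-10,
                                "fatol": 1e-12, "maxiter": 1000}).x
        out.append([(est[0] - a0) * M ** 1.5 * N ** 0.5,
                    (est[1] - b0) * M ** 2.5 * N ** 0.5])
    return np.asarray(out).std(axis=0)

for M in (25, 50, 100):
    print(M, scaled_errors(M, N=25))  # scaled spreads stay roughly constant
```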

Cite this article

Grover, R., Kundu, D. & Mitra, A. An efficient methodology to estimate the parameters of a two-dimensional chirp signal model. Multidim Syst Sign Process 32, 49–75 (2021). https://doi.org/10.1007/s11045-020-00728-x
