Direction Finding of a Source in an Acoustic Waveguide and the Angular Resolution Limit

Abstract

A reduced-rank adaptive algorithm is constructed that makes it possible to estimate the directions of sound sources with a horizontal array operating in a waveguide with inaccurately known parameters. Statistical simulation results are presented, demonstrating the high resolution of the method and acceptable bearing-estimation accuracy without prior knowledge of the source ranges and depths. The Smith criterion is used to find the theoretical limit of angular resolution as a function of the input signal-to-noise ratio and the input sample size.

REFERENCES

  1. V. L. Eliseevnin, Akust. Zh. 29 (1), 44 (1983).

  2. M. J. Buckingham, IEEE Proc. F 131 (3), 298 (1984).

  3. V. L. Eliseevnin, Acoust. Phys. 42 (2), 182 (1996).

  4. S. Lakshmipath and G. V. Anand, Signal Process. 84, 1367 (2004).

  5. J. Pang, J. Lin, A. Zhang, and X. Huang, in Proc. ICASSP (Las Vegas, NV, March 31–Apr. 4, 2008), p. 2433.

  6. H. Cox, J. Acoust. Soc. Am. 54 (3), 771 (1973).

  7. Z. Liu and A. Nehorai, IEEE Trans. Signal Process. 55 (11), 5521 (2007).

  8. A. Amar and A. J. Weiss, IEEE Trans. Signal Process. 56, 5309 (2008).

  9. M. N. El Korso, R. Boyer, A. Renaux, and S. Marcos, Signal Process. 92, 2471 (2012).

  10. S. T. Smith, IEEE Trans. Signal Process. 53 (5), 1597 (2005).

  11. Robust Adaptive Beamforming, Ed. by J. Li and P. Stoica (John Wiley & Sons, Hoboken, NJ, 2006).

  12. A. G. Sazontov and A. I. Malekhanov, Acoust. Phys. 61 (2), 213 (2015).

  13. R. O. Schmidt, IEEE Trans. Antennas Propag. 34 (3), 276 (1986).

  14. M. Pesavento, A. B. Gershman, and K. M. Wong, IEEE Trans. Signal Process. 50 (9), 2103 (2002).

  15. C. M. S. See and A. B. Gershman, IEEE Trans. Signal Process. 52 (2), 329 (2004).

  16. A. G. Sazontov, I. P. Smirnov, and A. S. Chashchin, Izv. Vyssh. Uchebn. Zaved., Radiofiz. 59 (2), 99 (2016).

  17. A. G. Sazontov and I. P. Smirnov, Acoust. Phys. 65 (4), 450 (2019).

  18. P. Stoica and A. Nehorai, IEEE Trans. Acoust., Speech, Signal Process. 37 (5), 720 (1989).

  19. M. N. El Korso, R. Boyer, A. Renaux, and S. Marcos, in Proc. ICASSP (Dallas, TX, 2010), p. 3602.

  20. S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, Upper Saddle River, NJ, 1993).

  21. P. Stoica and E. G. Larsson, IEEE Trans. Signal Process. 49 (12), 3168 (2001).

Funding

The work was supported by the Russian Science Foundation, grant no. 20-19-00383.

Author information

Correspondence to A. G. Sazontov.

APPENDIX:

DERIVATION OF EQ. (10)

For the formulation of the problem considered here, the observation snapshot vector \(\mathbf{x}_l\) is a complex Gaussian vector with mean \(\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\,\mathbf{s}_l\) and covariance matrix \(\sigma_n^2\mathbf{I}_N\).
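
As an illustration of this observation model, the following minimal numpy sketch draws snapshots \(\mathbf{x}_l\) from the assumed distribution; the dimensions and the matrix standing in for \(\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\) are arbitrary placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed placeholder dimensions: N sensors, J sources, L snapshots.
N, J, L = 16, 2, 50
sigma_n = 1.0  # noise standard deviation

def crandn(*shape):
    """Circularly symmetric complex Gaussian samples with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

G = crandn(N, J)                      # stand-in for the transfer matrix G(phi, theta)
S = crandn(J, L)                      # deterministic amplitudes s_l stacked as columns
X = G @ S + sigma_n * crandn(N, L)    # x_l ~ CN(G s_l, sigma_n^2 I_N), one snapshot per column
```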

Let \(\boldsymbol{\alpha}\) denote the vector containing the \((1 + 2M + 2L)J\) unknown parameters of the signal component of the observation vector:

$$\boldsymbol{\alpha} = \left[\boldsymbol{\varphi}^T,\,\boldsymbol{\eta}^T,\,\operatorname{Re}\{\mathbf{s}_1\}^T,\ldots,\operatorname{Re}\{\mathbf{s}_L\}^T,\,\operatorname{Im}\{\mathbf{s}_1\}^T,\ldots,\operatorname{Im}\{\mathbf{s}_L\}^T\right]^T,$$

with \(\boldsymbol{\eta} = (\boldsymbol{\xi}^T,\boldsymbol{\zeta}^T)^T\), where \(\boldsymbol{\xi} = (\boldsymbol{\xi}_1^T,\ldots,\boldsymbol{\xi}_M^T)^T\) and \(\boldsymbol{\zeta} = (\boldsymbol{\zeta}_1^T,\ldots,\boldsymbol{\zeta}_M^T)^T\) are \(MJ \times 1\) vectors containing the real and imaginary parts of the complex mode amplitudes:

$$\begin{gathered} \boldsymbol{\xi}_m = \left[\operatorname{Re}\{b_m(\boldsymbol{\theta}_1)\},\ldots,\operatorname{Re}\{b_m(\boldsymbol{\theta}_J)\}\right]^T, \\ \boldsymbol{\zeta}_m = \left[\operatorname{Im}\{b_m(\boldsymbol{\theta}_1)\},\ldots,\operatorname{Im}\{b_m(\boldsymbol{\theta}_J)\}\right]^T, \quad m = 1,\ldots,M. \end{gathered}$$

As is known (see, e.g., [20]), when a deterministic signal is received against a white-noise background, the Cramér–Rao lower bound, which determines the potentially achievable accuracy of the estimate of the vector \(\boldsymbol{\alpha}\), does not depend on whether the noise level is known and is given by the expression

$$\mathbf{CRB}(\boldsymbol{\alpha}) = \frac{\sigma_n^2}{2}\left\{\sum_{l=1}^{L}\operatorname{Re}\left(\mathbf{W}_l^{+}\mathbf{W}_l\right)\right\}^{-1},$$
(A1)

in which

$$\begin{gathered} \mathbf{W}_l = \left[\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\varphi}^T},\;\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}^T},\;\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\zeta}^T},\;\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\operatorname{Re}\{\mathbf{s}_l\}^T},\;\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\operatorname{Im}\{\mathbf{s}_l\}^T}\right], \\ \mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta}) = \left[\mathbf{U}(\varphi_1)\mathbf{b}(\boldsymbol{\theta}_1)\,\cdots\,\mathbf{U}(\varphi_J)\mathbf{b}(\boldsymbol{\theta}_J)\right]. \end{gathered}$$
(A2)

Let us calculate the derivatives entering (A2). For the first block on its right-hand side, we have

$$\begin{gathered} \frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\varphi}^T} = \left[\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\varphi_1},\ldots,\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\varphi_J}\right], \\ \frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\varphi_j} = \mathbf{d}_j s_j(l),\quad \mathbf{d}_j = \frac{d\mathbf{U}(\varphi_j)}{d\varphi_j}\mathbf{b}(\boldsymbol{\theta}_j), \end{gathered}$$

and, therefore,

$$\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\varphi}^T} = \left[\mathbf{d}_1 s_1(l),\ldots,\mathbf{d}_J s_J(l)\right] \equiv \mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l,$$

where \(\mathbf{D}_{\boldsymbol{\varphi}}\) is defined by Eq. (12) and \(\mathbf{S}_l = \operatorname{diag}(\mathbf{s}_l)\).

Then,

$$\begin{gathered} \frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}^T} = \left[\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}_1^T},\ldots,\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}_M^T}\right], \\ \frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}_m^T} = \left[\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\operatorname{Re}\{b_m(\boldsymbol{\theta}_1)\}},\ldots,\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\operatorname{Re}\{b_m(\boldsymbol{\theta}_J)\}}\right]. \end{gathered}$$

Taking into account that \(\partial\mathbf{b}(\boldsymbol{\theta}_j)/\partial\operatorname{Re}\{b_m(\boldsymbol{\theta}_j)\} = \mathbf{e}_m\), where \(\mathbf{e}_m\) is the \(m\)th column of the \(M \times M\) identity matrix, we obtain

$$\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}_m^T} = \left[\mathbf{U}(\varphi_1)\mathbf{e}_m s_1(l),\ldots,\mathbf{U}(\varphi_J)\mathbf{e}_m s_J(l)\right] \equiv \mathbf{D}_m\mathbf{S}_l,$$

where \(\mathbf{D}_m\) is given by Eq. (13). Then,

$$\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\xi}^T} = \left[\mathbf{D}_1\mathbf{S}_l,\ldots,\mathbf{D}_M\mathbf{S}_l\right] \equiv \mathbf{H}_l.$$

Similar calculations for the third block in (A2) lead to the result

$$\frac{\partial\,\mathbf{G}(\boldsymbol{\varphi},\boldsymbol{\theta})\mathbf{s}_l}{\partial\boldsymbol{\zeta}^T} = i\mathbf{H}_l.$$

Finally, for the last two blocks in (A2), it follows that

$$\frac{\partial\,\mathbf{G}\mathbf{s}_l}{\partial\operatorname{Re}\{\mathbf{s}_l\}^T} = \mathbf{G},\quad \frac{\partial\,\mathbf{G}\mathbf{s}_l}{\partial\operatorname{Im}\{\mathbf{s}_l\}^T} = i\mathbf{G}.$$

Thus, the matrix \(\mathbf{W}_l\) can be represented as

$$\mathbf{W}_l = \left[\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l,\,\mathbf{H}_l,\,i\mathbf{H}_l,\,\mathbf{G},\,i\mathbf{G}\right].$$
(A3)
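
The block structure of \(\mathbf{W}_l\) in (A3) is easy to check numerically. In the sketch below (a continuation of the toy model above, with randomly drawn placeholders for \(\mathbf{D}_{\boldsymbol{\varphi}}\) and the \(\mathbf{D}_m\) rather than the actual mode quantities of Eqs. (12) and (13)), \(\mathbf{W}_l\) has the expected \(N \times (3 + 2M)J\) size, and its per-snapshot Fisher contribution \(\operatorname{Re}(\mathbf{W}_l^{+}\mathbf{W}_l)\) entering (A1) is a real symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, J = 16, 3, 2    # assumed sizes: sensors, modes, sources

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

G     = crandn(N, J)                      # placeholder for [U(phi_j) b(theta_j)], j = 1..J
D_phi = crandn(N, J)                      # placeholder for [d_1 ... d_J]
D_m   = [crandn(N, J) for _ in range(M)]  # placeholders for D_1, ..., D_M of Eq. (13)
s_l   = crandn(J)                         # one amplitude snapshot s_l

S_l = np.diag(s_l)
H_l = np.hstack([Dm @ S_l for Dm in D_m])                  # H_l = [D_1 S_l ... D_M S_l]
W_l = np.hstack([D_phi @ S_l, H_l, 1j * H_l, G, 1j * G])   # Eq. (A3)

assert W_l.shape == (N, (3 + 2 * M) * J)
FIM_l = np.real(W_l.conj().T @ W_l)    # per-snapshot term of the sum in Eq. (A1)
assert np.allclose(FIM_l, FIM_l.T)
```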

In what follows, we are interested in the minimum variance of the estimates of the angular coordinates of the sources, for which it is convenient to reduce matrix (A1) to block-diagonal form. To this end, note that for a fixed snapshot index \(l\) the matrix \(\mathbf{W}_l\) depends on the unknown vector \(\boldsymbol{\alpha}_l = [\boldsymbol{\varphi}^T,\boldsymbol{\eta}^T,\operatorname{Re}\{\mathbf{s}_l\}^T,\operatorname{Im}\{\mathbf{s}_l\}^T]^T\). Following the idea of [14, 21], it is convenient to introduce a new parameter vector \(\tilde{\boldsymbol{\alpha}}_l\) of the form

$$\tilde{\boldsymbol{\alpha}}_l = \left[\boldsymbol{\varphi}^T,\,\boldsymbol{\eta}^T,\,\operatorname{Re}\{\mathbf{V}_l\}\boldsymbol{\varphi} + \operatorname{Re}\{\mathbf{T}_l\}\boldsymbol{\eta} + \operatorname{Re}\{\mathbf{s}_l\},\,\operatorname{Im}\{\mathbf{V}_l\}\boldsymbol{\varphi} + \operatorname{Im}\{\mathbf{T}_l\}\boldsymbol{\eta} + \operatorname{Im}\{\mathbf{s}_l\}\right]^T,$$

where \(\mathbf{V}_l = (\mathbf{G}^{+}\mathbf{G})^{-1}\mathbf{G}^{+}\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l\), \(\mathbf{T}_l = (\mathbf{G}^{+}\mathbf{G})^{-1}\mathbf{G}^{+}\boldsymbol{\Delta}_l\), and \(\boldsymbol{\Delta}_l = [\mathbf{H}_l,\,i\mathbf{H}_l]\).

The vector \(\tilde{\boldsymbol{\alpha}}_l\) is related to \(\boldsymbol{\alpha}_l\) by

$$\tilde{\boldsymbol{\alpha}}_l = \mathbf{C}_l\boldsymbol{\alpha}_l,\quad\text{where}\quad \mathbf{C}_l = \begin{pmatrix} \mathbf{I} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{I} & \mathbf{0} & \mathbf{0} \\ \operatorname{Re}\{\mathbf{V}_l\} & \operatorname{Re}\{\mathbf{T}_l\} & \mathbf{I} & \mathbf{0} \\ \operatorname{Im}\{\mathbf{V}_l\} & \operatorname{Im}\{\mathbf{T}_l\} & \mathbf{0} & \mathbf{I} \end{pmatrix}.$$

For the new vector

$$\tilde{\boldsymbol{\alpha}} = \left[\boldsymbol{\varphi}^T,\,\boldsymbol{\eta}^T,\,\operatorname{Re}\{\tilde{\mathbf{s}}_1\}^T,\ldots,\operatorname{Re}\{\tilde{\mathbf{s}}_L\}^T,\,\operatorname{Im}\{\tilde{\mathbf{s}}_1\}^T,\ldots,\operatorname{Im}\{\tilde{\mathbf{s}}_L\}^T\right]^T,$$

where \(\tilde{\mathbf{s}}_l = \mathbf{s}_l + \mathbf{V}_l\boldsymbol{\varphi} + \mathbf{T}_l\boldsymbol{\eta}\), the Cramér–Rao bound can be calculated as

$$\mathbf{CRB}(\tilde{\boldsymbol{\alpha}}) = \frac{\sigma_n^2}{2}\left\{\sum_{l=1}^{L}\left(\mathbf{C}_l^{-1}\right)^T\operatorname{Re}\{\mathbf{W}_l^{+}\mathbf{W}_l\}\,\mathbf{C}_l^{-1}\right\}^{-1}.$$
(A4)

Since the matrix \(\mathbf{C}_l\) is nonsingular, its inverse exists and is equal to

$$\mathbf{C}_l^{-1} = \begin{pmatrix} \mathbf{I} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{I} & \mathbf{0} & \mathbf{0} \\ -\operatorname{Re}\{\mathbf{V}_l\} & -\operatorname{Re}\{\mathbf{T}_l\} & \mathbf{I} & \mathbf{0} \\ -\operatorname{Im}\{\mathbf{V}_l\} & -\operatorname{Im}\{\mathbf{T}_l\} & \mathbf{0} & \mathbf{I} \end{pmatrix}.$$
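
The stated inverse follows from the unit block-triangular structure of \(\mathbf{C}_l\) and can be confirmed numerically; the sketch below uses random placeholders for \(\operatorname{Re}\{\mathbf{V}_l\}\), \(\operatorname{Im}\{\mathbf{V}_l\}\), \(\operatorname{Re}\{\mathbf{T}_l\}\), and \(\operatorname{Im}\{\mathbf{T}_l\}\), with the block sizes \(J\), \(2MJ\), \(J\), \(J\) that follow from the parameter ordering assumed above.

```python
import numpy as np

rng = np.random.default_rng(2)
J, M = 2, 3                           # assumed sizes
n_phi, n_eta, n_s = J, 2 * M * J, J   # block sizes: phi, eta = (xi, zeta), Re{s_l} (= Im{s_l})

ReV, ImV = rng.standard_normal((n_s, n_phi)), rng.standard_normal((n_s, n_phi))
ReT, ImT = rng.standard_normal((n_s, n_eta)), rng.standard_normal((n_s, n_eta))

def c_matrix(sign):
    """C_l for sign=+1 and the claimed inverse for sign=-1."""
    I, Z = np.eye, np.zeros
    return np.block([
        [I(n_phi),          Z((n_phi, n_eta)), Z((n_phi, n_s)), Z((n_phi, n_s))],
        [Z((n_eta, n_phi)), I(n_eta),          Z((n_eta, n_s)), Z((n_eta, n_s))],
        [sign * ReV,        sign * ReT,        I(n_s),          Z((n_s, n_s))],
        [sign * ImV,        sign * ImT,        Z((n_s, n_s)),   I(n_s)],
    ])

C, C_inv = c_matrix(+1), c_matrix(-1)
assert np.allclose(C @ C_inv, np.eye(C.shape[0]))
```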

Then, using (A3), it is easy to find that

$$\begin{gathered} \mathbf{W}_l\mathbf{C}_l^{-1} = \left[\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l - \mathbf{G}\mathbf{V}_l,\,\boldsymbol{\Delta}_l - \mathbf{G}\mathbf{T}_l,\,\mathbf{G},\,i\mathbf{G}\right] \\ \equiv \left[\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l,\,\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{H}_l,\,i\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{H}_l,\,\mathbf{G},\,i\mathbf{G}\right], \end{gathered}$$
(A5)

where \(\boldsymbol{\Pi}_{\mathbf{G}}^{\bot} = \mathbf{I}_N - \mathbf{G}(\mathbf{G}^{+}\mathbf{G})^{-1}\mathbf{G}^{+}\). Substituting (A5) into (A4) and taking into account that \(\mathbf{G}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot} = 0\), we obtain for \(\mathbf{CRB}(\tilde{\boldsymbol{\alpha}})\)

$$\mathbf{CRB}(\tilde{\boldsymbol{\alpha}}) = \frac{\sigma_n^2}{2L}\begin{pmatrix} \mathbf{J}_{\boldsymbol{\varphi\varphi}} & \mathbf{J}_{\boldsymbol{\varphi\eta}} & \mathbf{0} \\ \mathbf{J}_{\boldsymbol{\varphi\eta}}^T & \mathbf{J}_{\boldsymbol{\eta\eta}} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{J}_{\mathbf{ss}} \end{pmatrix}^{-1}.$$
(A6)

Here, \(\mathbf{J}_{\boldsymbol{\varphi\varphi}} = \operatorname{Re}\{\mathbf{F}\}\), \(\mathbf{J}_{\boldsymbol{\varphi\eta}} = \left[\operatorname{Re}\{\mathbf{K}\}\;\;-\operatorname{Im}\{\mathbf{K}\}\right]\), and \(\mathbf{J}_{\boldsymbol{\eta\eta}} = \begin{pmatrix} \operatorname{Re}\{\boldsymbol{\Sigma}\} & -\operatorname{Im}\{\boldsymbol{\Sigma}\} \\ \operatorname{Im}\{\boldsymbol{\Sigma}\} & \operatorname{Re}\{\boldsymbol{\Sigma}\} \end{pmatrix}\), where

$$\begin{gathered} \mathbf{F} = L^{-1}\sum_{l=1}^{L}\mathbf{S}_l^{+}\mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l \in \mathbb{C}^{J \times J}, \\ \mathbf{K} = L^{-1}\sum_{l=1}^{L}\mathbf{S}_l^{+}\mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{H}_l \in \mathbb{C}^{J \times MJ}, \\ \boldsymbol{\Sigma} = L^{-1}\sum_{l=1}^{L}\mathbf{H}_l^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{H}_l \in \mathbb{C}^{MJ \times MJ}, \end{gathered}$$

and \(\mathbf{J}_{\mathbf{ss}} = \begin{pmatrix} \operatorname{Re}\{\mathbf{G}^{+}\mathbf{G}\} & -\operatorname{Im}\{\mathbf{G}^{+}\mathbf{G}\} \\ \operatorname{Im}\{\mathbf{G}^{+}\mathbf{G}\} & \operatorname{Re}\{\mathbf{G}^{+}\mathbf{G}\} \end{pmatrix}\).

Using the well-known matrix relation \(\operatorname{diag}\{\mathbf{a}\}\,\mathbf{P}\operatorname{diag}\{\mathbf{b}\} = \mathbf{P} \circ (\mathbf{a}\mathbf{b}^T)\), in which the symbol \(\circ\) denotes the Hadamard (elementwise) product of matrices, \([\mathbf{A} \circ \mathbf{B}]_{ij} = [\mathbf{A}]_{ij}[\mathbf{B}]_{ij}\), the expression for \(\mathbf{F}\) can be represented in the equivalent form

$$\mathbf{F} = \frac{1}{L}\sum_{l=1}^{L}\mathbf{S}_l^{+}\mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\varphi}}\mathbf{S}_l = \mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\varphi}} \circ \frac{1}{L}\sum_{l=1}^{L}\mathbf{s}_l^{*}\mathbf{s}_l^{T} \equiv \mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\varphi}} \circ \hat{\mathbf{R}}_s^{T},$$
(A7)

where \(\hat{\mathbf{R}}_s = (1/L)\sum_{l=1}^{L}\mathbf{s}_l\mathbf{s}_l^{+}\) is the signal sample covariance matrix. In writing (A7), it was taken into account that the diagonal matrix \(\mathbf{S}_l\) satisfies \(\mathbf{S}_l = \mathbf{S}_l^T\) and therefore \(\mathbf{S}_l^{+} = \mathbf{S}_l^{*}\), where \((\cdot)^{*}\) denotes complex conjugation.
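
Both the elementwise identity and the resulting Hadamard form (A7) are easy to verify numerically. The sketch below does so with random placeholders for \(\mathbf{D}_{\boldsymbol{\varphi}}\), \(\mathbf{G}\), and the amplitude snapshots; only the algebraic structure, not the physics, is being checked.

```python
import numpy as np

rng = np.random.default_rng(3)
N, J, L = 16, 2, 50   # assumed sizes

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# The elementwise identity diag(a) P diag(b) = P o (a b^T).
a, b, P = crandn(J), crandn(J), crandn(J, J)
assert np.allclose(np.diag(a) @ P @ np.diag(b), P * np.outer(a, b))

# Hadamard form (A7) of F versus its definition as a snapshot sum.
G, D_phi, S = crandn(N, J), crandn(N, J), crandn(J, L)
Pi  = np.eye(N) - G @ np.linalg.solve(G.conj().T @ G, G.conj().T)   # projector onto the orthogonal complement of range(G)
R_s = (S @ S.conj().T) / L                                          # signal sample covariance matrix
A   = D_phi.conj().T @ Pi @ D_phi

F_sum = sum(np.diag(S[:, l]).conj().T @ A @ np.diag(S[:, l]) for l in range(L)) / L
assert np.allclose(F_sum, A * R_s.T)
```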

Similarly, for the matrices \(\mathbf{K}\) and \(\boldsymbol{\Sigma}\) we have

$$\begin{gathered} \mathbf{K} = \left[\mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_1 \circ \hat{\mathbf{R}}_s^{T}\,\cdots\,\mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_M \circ \hat{\mathbf{R}}_s^{T}\right] \equiv \mathbf{D}_{\boldsymbol{\varphi}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\eta}} \circ \left(\mathbf{1}_M^{T} \otimes \hat{\mathbf{R}}_s^{T}\right), \\ \boldsymbol{\Sigma} = \begin{pmatrix} \mathbf{D}_1^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_1 \circ \hat{\mathbf{R}}_s^{T} & \cdots & \mathbf{D}_1^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_M \circ \hat{\mathbf{R}}_s^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{D}_M^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_1 \circ \hat{\mathbf{R}}_s^{T} & \cdots & \mathbf{D}_M^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_M \circ \hat{\mathbf{R}}_s^{T} \end{pmatrix} \equiv \mathbf{D}_{\boldsymbol{\eta}}^{+}\boldsymbol{\Pi}_{\mathbf{G}}^{\bot}\mathbf{D}_{\boldsymbol{\eta}} \circ \left(\mathbf{1}_M\mathbf{1}_M^{T} \otimes \hat{\mathbf{R}}_s^{T}\right), \end{gathered}$$

where \(\mathbf{1}_M\) is an \(M \times 1\) vector of ones, \(\mathbf{D}_{\boldsymbol{\eta}} = [\mathbf{D}_1 \cdots \mathbf{D}_M]\), and the symbol \(\otimes\) stands for the Kronecker product.
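
Continuing the same toy setup, the sketch below assembles \(\mathbf{K}\) and \(\boldsymbol{\Sigma}\) in their compact Kronecker form and cross-checks them against the snapshot-sum definitions given above; \(\mathbf{G}\), \(\mathbf{D}_{\boldsymbol{\varphi}}\), and the \(\mathbf{D}_m\) are again random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, J, L = 16, 3, 2, 50   # assumed sizes

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

G, D_phi = crandn(N, J), crandn(N, J)
D_m   = [crandn(N, J) for _ in range(M)]
D_eta = np.hstack(D_m)                     # D_eta = [D_1 ... D_M]
S     = crandn(J, L)
Pi    = np.eye(N) - G @ np.linalg.solve(G.conj().T @ G, G.conj().T)
R_s   = (S @ S.conj().T) / L

ones_M = np.ones((M, 1))
K     = (D_phi.conj().T @ Pi @ D_eta) * np.kron(ones_M.T, R_s.T)            # J x MJ
Sigma = (D_eta.conj().T @ Pi @ D_eta) * np.kron(ones_M @ ones_M.T, R_s.T)   # MJ x MJ

# Cross-check against the snapshot-sum definitions of K and Sigma.
def H_l(s):
    return np.hstack([Dm @ np.diag(s) for Dm in D_m])

K_sum   = sum(np.diag(S[:, l]).conj().T @ D_phi.conj().T @ Pi @ H_l(S[:, l]) for l in range(L)) / L
Sig_sum = sum(H_l(S[:, l]).conj().T @ Pi @ H_l(S[:, l]) for l in range(L)) / L
assert np.allclose(K, K_sum) and np.allclose(Sigma, Sig_sum)
```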

The error covariance matrix of the source bearing estimates, which is of primary interest here, is found from the upper left \(J \times J\) block of (A6):

$$\mathbf{CRB}(\boldsymbol{\varphi}) = \frac{\sigma_n^2}{2L}\left(\mathbf{J}_{\boldsymbol{\varphi\varphi}} - \mathbf{J}_{\boldsymbol{\varphi\eta}}\mathbf{J}_{\boldsymbol{\eta\eta}}^{-1}\mathbf{J}_{\boldsymbol{\varphi\eta}}^{T}\right)^{-1} = \frac{\sigma_n^2}{2L}\left[\operatorname{Re}\left(\mathbf{F} - \mathbf{K}\boldsymbol{\Sigma}^{-1}\mathbf{K}^{+}\right)\right]^{-1}.$$
(A8)
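
A direct numerical transcription of (A8), assuming \(\mathbf{F}\), \(\mathbf{K}\), and \(\boldsymbol{\Sigma}\) have been built as in the sketches above, might look as follows.

```python
import numpy as np

def crb_phi(F, K, Sigma, sigma_n2, L):
    """Bearing-error bound of Eq. (A8): (sigma_n^2 / 2L) [Re(F - K Sigma^{-1} K^+)]^{-1}."""
    schur = F - K @ np.linalg.solve(Sigma, K.conj().T)
    return (sigma_n2 / (2 * L)) * np.linalg.inv(np.real(schur))
```

The diagonal of the resulting \(J \times J\) matrix lower-bounds the variances of the individual unbiased bearing estimates.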

In obtaining (A8), we used the result of [18], according to which

$$\left[\operatorname{Re}\{\mathbf{K}\}\;\;-\operatorname{Im}\{\mathbf{K}\}\right]\begin{pmatrix} \operatorname{Re}\{\boldsymbol{\Sigma}\} & -\operatorname{Im}\{\boldsymbol{\Sigma}\} \\ \operatorname{Im}\{\boldsymbol{\Sigma}\} & \operatorname{Re}\{\boldsymbol{\Sigma}\} \end{pmatrix}^{-1}\begin{bmatrix} \operatorname{Re}\{\mathbf{K}\}^{T} \\ -\operatorname{Im}\{\mathbf{K}\}^{T} \end{bmatrix} = \operatorname{Re}\left(\mathbf{K}\boldsymbol{\Sigma}^{-1}\mathbf{K}^{+}\right).$$
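
This real-block identity (i.e., \(\mathbf{J}_{\boldsymbol{\varphi\eta}}\mathbf{J}_{\boldsymbol{\eta\eta}}^{-1}\mathbf{J}_{\boldsymbol{\varphi\eta}}^{T} = \operatorname{Re}(\mathbf{K}\boldsymbol{\Sigma}^{-1}\mathbf{K}^{+})\)) can be checked numerically for an arbitrary complex \(\mathbf{K}\) and Hermitian positive-definite \(\boldsymbol{\Sigma}\), as in the following sketch with assumed sizes.

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 2, 6   # assumed sizes: K is p x q, Sigma is q x q

K = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
A = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
Sigma = A @ A.conj().T + q * np.eye(q)        # Hermitian positive definite

J_pe = np.hstack([K.real, -K.imag])           # J_{phi eta}
J_ee = np.block([[Sigma.real, -Sigma.imag],
                 [Sigma.imag,  Sigma.real]])  # J_{eta eta}

lhs = J_pe @ np.linalg.solve(J_ee, J_pe.T)
rhs = np.real(K @ np.linalg.solve(Sigma, K.conj().T))
assert np.allclose(lhs, rhs)
```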

Cite this article

Sazontov, A.G., Smirnov, I.P. Direction Finding of a Source in an Acoustic Waveguide and the Angular Resolution Limit. Acoust. Phys. 67, 183–192 (2021). https://doi.org/10.1134/S1063771021020068
