
Refined Analysis of Sparse MIMO Radar

Journal of Fourier Analysis and Applications

Abstract

We analyze a multiple-input multiple-output (MIMO) radar model and provide recovery results for a compressed sensing (CS) approach. In MIMO radar, different pulses are emitted by several transmitters and the echoes are recorded at several receiver nodes. Under reasonable assumptions, the transformation from emitted pulses to the received echoes can approximately be regarded as linear. For the considered model, and for many radar tasks in general, sparsity of targets within the considered angle-range-Doppler domain is a natural assumption. Therefore, it is possible to apply methods from CS in order to reconstruct the parameters of the targets. Assuming Gaussian random pulses, the resulting measurement matrix becomes a highly structured random matrix. Our first main result provides an estimate for the well-known restricted isometry property (RIP) ensuring stable and robust recovery. We require more measurements than standard results from CS, such as those for Gaussian random measurements. Nevertheless, we show that due to the special structure of the considered measurement matrix our RIP result is in fact optimal (up to possibly logarithmic factors). Our two further main results on nonuniform recovery (i.e., for a fixed sparse target scene) reveal how the fine structure of the support set—not only its size—affects the (nonuniform) recovery performance. We show that for certain “balanced” support sets reconstruction with essentially the optimal number of measurements is possible. Indeed, we introduce a parameter measuring the well-behavedness of the support set and recover standard results from CS for near-optimal parameter choices. We prove recovery results both for perfect recovery of the support set in the case of exactly sparse vectors and an \(\ell _2\)-norm approximation result for reconstruction under a sparsity defect. Our analysis complements earlier work by Strohmer and Friedlander and deepens the understanding of the considered MIMO radar model.
Thereby—and apparently for the first time in CS theory—we prove theoretical results in which the difference between nonuniform and uniform recovery consists of more than just logarithmic factors.


Notes

  1. Note that there is a typo in [27, Theorem 5] where an additional factor \(N_t\) on the left-hand side of (21) shows up.

  2. Note that due to the comment after the formulation of the original result in [25] the extra factor \(2^{1/2n}\) appearing on the right-hand side of the assertion in [25, Theorem 6.22] can be removed. In the original version of [25, Theorem 6.22], the quantity \(\Vert \widetilde{\varvec{ F }}\Vert _{S_{2m}}^{2m}\) was missing; this has been corrected in a new version which can be obtained from the personal website of HR.

  3. Alternatively, more modern estimates based on moment generating function bounds [30] could be used, see also [14, Problem 8.6(d)].

References

  1. Baraniuk, R.G., Steeghs, P.: Compressive radar imaging. Proc. IEEE Radar Conf. 2007, 128–133 (2007)


  2. Becker, S., Candès, E.J., Grant, M.: Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput. 3(3), 165–218 (2011)


  3. Buchholz, A.: Operator Khintchine inequality in non-commutative probability. Math. Ann. 319, 1–16 (2001)


  4. Cai, T., Zhang, A.: Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inform. Theory 60(1), 122–132 (2014)


  5. Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci. Paris Sér. I Math. 346, 589–592 (2008)


  6. Candès, E.J., Plan, Y.: Near-ideal model selection by \(\ell _{1}\) minimization. Ann. Stat. 37(5A), 2145–2177 (2009)


  7. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)


  8. Chen, C.-Y., Vaidyanathan, P.: Compressed sensing in MIMO radar. In: 2008 42nd Asilomar Conference on Signals, Systems and Computers, pp. 41–44 (2008)

  9. Chi, Y., Scharf, L., Pezeshki, A., Calderbank, A.: Sensitivity to basis mismatch in compressed sensing. IEEE Trans. Signal Process. 59(5), 2182–2195 (2011)


  10. Cohen, A., Dahmen, W., DeVore, R.A.: Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009)


  11. Dirksen, S.: Tail bounds via generic chaining. Electron. J. Probab. 20(53), 1–29 (2015)


  12. Ender, J.: On compressive sensing applied to radar. Signal Process. 90(5), 1402–1414 (2010)


  13. Fishler, E., Haimovich, A., Blum, R., Chizhik, D., Cimini, L., Valenzuela, R.: MIMO radar: an idea whose time has come. In: Proceedings of the IEEE Radar Conference 2004, pp. 71–78 (2004)

  14. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)

  15. Friedlander, B.: On the relationship between MIMO and SIMO radars. IEEE Trans. Signal Process. 57(1), 394–398 (2009)


  16. Heckel, R., Morgenshtern, V.I., Soltanolkotabi, M.: Super-resolution radar. arXiv:1411.6272v2 (2014)

  17. Herman, M., Strohmer, T.: High resolution radar via compressed sensing. IEEE Trans. Signal Process. 57(6), 2275–2284 (2009)


  18. Herman, M., Strohmer, T.: General deviants: An analysis of perturbations in compressed sensing. IEEE J. Select. Top. Signal Process. 4(2), 342–349 (2010)


  19. Hügel, M., Rauhut, H., Strohmer, T.: Remote sensing via \(\ell _1\)-minimization. Found. Comput. Math. 14, 115–150 (2014)


  20. Krahmer, F., Mendelson, S., Rauhut, H.: Suprema of chaos processes and the restricted isometry property. Commun. Pure Appl. Math. 67(11), 1877–1904 (2014)


  21. Li, J., Stoica, P.: MIMO radar with colocated antennas. Signal Process. Mag. IEEE 24(5), 106–114 (2007)


  22. Li, J., Stoica, P. (Eds.): MIMO Radar Signal Processing. Wiley, New York (2009)

  23. Lust-Piquard, F.: Inégalités de Khintchine dans \(C_p\) \((1< p < \infty )\). C. R. Acad. Sci. Paris Sér. I Math. 303, 289–292 (1986)

  24. Pisier, G., Lust-Piquard, F.: Noncommutative Khintchine and Paley inequalities. Ark. Mat. 29(2), 241–260 (1991)


  25. Rauhut, H.: Compressive sensing and structured random matrices. In: Fornasier, M. (ed.) Theoretical Foundations and Numerical Methods for Sparse Recovery. Radon Series of Computational and Applied Mathematics, vol. 9, pp. 1–92. deGruyter, Berlin (2010)

  26. Strohmer, T., Friedlander, B.: Compressed sensing for MIMO radar—algorithms and performance. In: 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, pp. 464–468 (2009)

  27. Strohmer, T., Friedlander, B.: Analysis of sparse MIMO radar. Appl. Comput. Harm. Anal. 37, 361–388 (2014)


  28. Talagrand, M.: The Generic Chaining. Springer Monographs in Mathematics. Springer-Verlag, Berlin (2005)

  29. Talagrand, M.: Upper and Lower Bounds for Stochastic Processes. Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge, vol. 60. Springer, Heidelberg (2014)

  30. Tropp, J.A.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12(4), 389–434 (2012)


  31. Yu, Y., Petropulu, A., Poor, H.: Measurement matrix design for compressive sensing-based MIMO radar. IEEE Trans. Signal Process. 59(11), 5338–5352 (2011)


  32. Yu, Y., Petropulu, A., Poor, H.: CSSF MIMO RADAR: Compressive-Sensing and Step-Frequency Based MIMO Radar. IEEE Trans. Aerosp. Electron. Syst. 48(2), 1490–1504 (2012)



Acknowledgments

The authors acknowledge funding from the European Research Council through the Starting Grant StG 258926.

Corresponding author

Correspondence to Holger Rauhut.

Additional information

Communicated by Joel Tropp.

Appendices

Appendix 1: Orthogonality of the Matrices \(\varvec{\mathcal {X}}_{\varTheta }\)

Due to the definition in (31), for \((i,j) \in [N_R] \times [N_T]\), the (ij)th \(N_t \times N_t\) block of \(\varvec{\mathcal {X}}_{\varTheta }\) is given by

$$\begin{aligned} \varvec{\mathcal {X}}_{\varTheta }^{[i,j]} = e^{\imath 2\pi \cdot d_R \beta \Delta _\beta (i-1)} e^{\imath 2\pi \cdot d_T \beta \Delta _\beta (j-1)} \varvec{ M }_{f} \varvec{ T }_{\tau } . \end{aligned}$$

Lemma 26

The set of matrices \(\left\{ \frac{1}{\sqrt{N_t N_R N_T}} \varvec{\mathcal {X}}_{\varTheta } ,\, \varTheta \in \mathcal {G}\right\} \) forms an orthonormal basis.

Proof

We calculate the inner product \(\langle \varvec{\mathcal {X}}_{\varTheta '} , \varvec{\mathcal {X}}_{\varTheta } \rangle = \text {Tr} ( \varvec{\mathcal {X}}_{\varTheta ^\prime }^* \varvec{\mathcal {X}}_{\varTheta } )\). To this end we calculate the \((j^\prime ,j)\)th block of the product \(\varvec{\mathcal {X}}_{\varTheta ^\prime }^* \varvec{\mathcal {X}}_{\varTheta }\) (recall that \(\varvec{\mathcal {X}}_{\varTheta ^\prime }^* \varvec{\mathcal {X}}_{\varTheta }\) is an \(N_T \times N_T\) block matrix consisting of \(N_t \times N_t\) blocks),

$$\begin{aligned}&[ \varvec{\mathcal {X}}_{\varTheta '}^* \varvec{\mathcal {X}}_{\varTheta } ]^{[j^\prime ,j]} = \sum _{k=1}^{N_R} [ \varvec{\mathcal {X}}_{\varTheta '}^* ]^{[j^\prime ,k]} \varvec{\mathcal {X}}_{\varTheta }^{[k,j]} = \sum _{k=1}^{N_R} [ \varvec{\mathcal {X}}_{\varTheta '}^{[k,j^\prime ]} ]^* \varvec{\mathcal {X}}_{\varTheta }^{[k,j]} \nonumber \\&= \sum _{k=1}^{N_R} \overline{e^{\imath 2\pi \cdot d_R \beta ^\prime \Delta _\beta (k -1)} e^{\imath 2\pi \cdot d_T \beta ^\prime \Delta _\beta (j^\prime -1)}} e^{\imath 2\pi \cdot d_R \beta \Delta _\beta (k-1)} e^{\imath 2\pi \cdot d_T \beta \Delta _\beta (j-1)} \varvec{ T }_{\tau ^\prime }^* \varvec{ M }_{f^\prime }^* \varvec{ M }_{f} \varvec{ T }_{\tau } \nonumber \\&= e^{-\imath 2\pi \cdot d_T \beta ^\prime \Delta _\beta (j^\prime -1)} e^{\imath 2\pi \cdot d_T \beta \Delta _\beta (j - 1)} e^{\imath 2\pi \cdot \frac{(f-f')}{N_t} \tau ^\prime } \sum _{k=1}^{N_R} e^{\imath 2\pi \cdot d_R (\beta - \beta ^\prime ) \Delta _\beta (k-1)} \varvec{ M }_{f-f'} \varvec{ T }_{\tau - \tau '} . \end{aligned}$$
(74)

For the last equality we used that

$$\begin{aligned} \varvec{ T }_{\tau ^\prime }^* \varvec{ M }_{f^\prime }^* \varvec{ M }_{f} \varvec{ T }_{\tau } = e^{\imath 2\pi \cdot \frac{(f-f')}{N_t} \tau ^\prime } \varvec{ M }_{f-f'} \varvec{ T }_{\tau -\tau '} , \end{aligned}$$

which follows directly from the definitions of the operators \(\varvec{ M }_{f}\) and \(\varvec{ T }_{\tau }\) (see (6)). Due to (74), the Frobenius inner product between two matrices is given as

$$\begin{aligned}&\langle \varvec{\mathcal {X}}_{\varTheta '} , \varvec{\mathcal {X}}_{\varTheta } \rangle = \sum _{j=1}^{N_T} \text {Tr} ( [ \varvec{\mathcal {X}}_{\varTheta '}^* \varvec{\mathcal {X}}_{\varTheta } ]^{[j,j]} ) \\&= \bigg ( \sum _{j=1}^{N_T} e^{\imath 2\pi \cdot d_T (\beta - \beta ') \Delta _\beta (j-1)} \bigg ) e^{\imath 2\pi \cdot \frac{(f-f')}{N_t} \tau ^\prime } \sum _{k=1}^{N_R} e^{\imath 2\pi \cdot d_R (\beta -\beta ') \Delta _\beta (k-1)} \text {Tr} ( \varvec{ M }_{f-f'} \varvec{ T }_{\tau - \tau '} ) . \end{aligned}$$

Since \(\varvec{ M }_{f-f'}\) is a diagonal matrix, the trace of the product \(\varvec{ M }_{f-f'} \varvec{ T }_{\tau -\tau '}\) can only be nonzero if at least one of the diagonal entries of the matrix \(\varvec{ T }_{\tau -\tau '}\) is nonzero, i.e., if \(\tau = \tau '\) so that \(\varvec{ T }_{\tau -\tau '} = \varvec{{{\mathrm{Id}}}}\). This means that

$$\begin{aligned} \text {Tr} ( \varvec{ M }_{f-f'} \varvec{ T }_{\tau -\tau '} ) = \text {Tr} ( \varvec{ M }_{f-f'} ) = \sum _{k=1}^{N_t} e^{\imath 2\pi \cdot \frac{f-f'}{N_t} (k-1)} = {\left\{ \begin{array}{ll} N_t &{} \text {if } f = f', \\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Recalling the formula for \(\langle \varvec{\mathcal {X}}_{\varTheta '} , \varvec{\mathcal {X}}_{\varTheta } \rangle \) from above implies that for this inner product to be nonzero it necessarily has to hold that \(\varTheta ' = \varTheta \). Indeed, this follows from the appearance of the factor \( \sum _{j=1}^{N_T} e^{\imath 2\pi \cdot d_T (\beta - \beta ') \Delta _\beta (j-1)} \) which (recalling that \(d_T = 1/2\) and \(\Delta _\beta = 2/ N_T N_R\), see (1), (4)) is only nonzero (and equal to \(N_T\)) if \(\beta ^\prime = \beta \). Finally, we can conclude

$$\begin{aligned} \langle \varvec{\mathcal {X}}_{\varTheta '} , \varvec{\mathcal {X}}_{\varTheta } \rangle = \delta _{\varTheta ',\varTheta } N_T N_R N_t . \end{aligned}$$

The normalization yields the result. \(\square \)
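As a numerical sanity check of the two computational ingredients of this proof, the following sketch (assuming the standard definitions \([\varvec{ M }_{f}]_{k,k} = e^{\imath 2\pi f (k-1)/N_t}\) and \([\varvec{ T }_{\tau } \varvec{ x }]_k = x_{(k-\tau ) \bmod N_t}\) consistent with (6)) verifies the commutation identity used for (74) and the trace formula for a few parameter choices:

```python
import numpy as np

N_t = 8  # small signal length for the check

def M(f):
    # modulation operator: diagonal entries e^{2*pi*i*f*(k-1)/N_t}
    return np.diag(np.exp(2j * np.pi * f * np.arange(N_t) / N_t))

def T(tau):
    # cyclic translation: (T_tau x)_k = x_{(k - tau) mod N_t}
    return np.roll(np.eye(N_t), tau, axis=0)

for f, fp, tau, taup in [(3, 5, 2, 6), (1, 1, 4, 4), (0, 7, 5, 5)]:
    # commutation identity: T_{tau'}^* M_{f'}^* M_f T_tau
    #   = e^{2*pi*i*(f-f')*tau'/N_t} M_{f-f'} T_{tau-tau'}
    lhs = T(taup).conj().T @ M(fp).conj().T @ M(f) @ T(tau)
    rhs = np.exp(2j * np.pi * (f - fp) * taup / N_t) * M(f - fp) @ T(tau - taup)
    assert np.allclose(lhs, rhs)

    # Tr(M_{f-f'} T_{tau-tau'}) = N_t if (f, tau) = (f', tau'), and 0 otherwise
    tr = np.trace(M(f - fp) @ T(tau - taup))
    assert np.isclose(tr, N_t if (f == fp and tau == taup) else 0.0)
```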

Appendix 2: Proof of Lemma 11

For small u the first term on the right-hand side of (37) can be obtained by a volumetric argument. To this end, for \(S\subset \{1, \ldots , N\}\), let \(B_S \subset \mathbb {C}^N\) denote the set of all vectors \(\varvec{ x }\) with \(\Vert \varvec{ x }\Vert _2 \le 1\) and support in S. Introducing \(| \Vert \varvec{ x } |\Vert := \Vert \widetilde{\varvec{ V }}_{ \varvec{ x } } \Vert _{2\rightarrow 2}\), we find, using (35) and a volumetric estimate (see e.g. [14, Proposition C.3]),

$$\begin{aligned} \mathcal {N}( B_S, |\Vert \cdot |\Vert , u ) \le \mathcal {N}\big ( B_S, \sqrt{s/N_t} \Vert \cdot \Vert _2, u \big ) \le \bigg (1+ 2 \frac{\sqrt{s/N_t}}{u} \bigg )^{2s} \le \bigg (3 \frac{\sqrt{s/N_t}}{u} \bigg )^{2s}, \end{aligned}$$

where for the last inequality we used the assumption \(u \le \sqrt{s/N_t}\). Since \(\mathcal {A}\) is the union of all sets \(B_S\) with \(S \subset [N]\), \(|S| = s\), and there are \(\genfrac(){0.0pt}{}{N}{s} \le (eN / s)^s\) possible choices for S, it holds that

$$\begin{aligned} \mathcal {N}( \mathcal {A}, \Vert \cdot \Vert _{2\rightarrow 2}, u ) \le (eN / s)^s (3 \sqrt{s/N_t} / {u})^{2s}, \end{aligned}$$

which implies the first bound in (37), namely

$$\begin{aligned} \log \mathcal {N}(\mathcal {A}, \Vert \cdot \Vert _{2\rightarrow 2}, u) \le 2s \left( \log (eN / s) + \log \bigg (\frac{2\sqrt{s}}{u\sqrt{N_t}} \bigg ) \right) \lesssim s \log \bigg ( \frac{N}{u^2 N_t} \bigg ) . \end{aligned}$$
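The volumetric covering estimate behind this step can be illustrated numerically. The sketch below works with the plain Euclidean norm on the complex unit ball (i.e., with the bound before the rescaling by \(\sqrt{s/N_t}\)): a greedy maximal t-separated set is, by maximality, a t-covering of the sampled cloud, and its cardinality is guaranteed to obey the volumetric bound \((1+2/t)^{2s}\).

```python
import numpy as np

rng = np.random.default_rng(4)
s, t = 2, 0.5  # complex dimension and separation/covering radius

def sample_ball(m):
    # uniform samples from the unit ball of C^s, viewed as R^{2s}
    z = rng.normal(size=(m, 2 * s))
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    r = rng.random(m) ** (1 / (2 * s))  # radius distribution for uniform ball
    return z * r[:, None]

# greedy maximal t-separated subset of the sampled points
centers = []
for p in sample_ball(20_000):
    if all(np.linalg.norm(p - c) > t for c in centers):
        centers.append(p)

# any t-separated subset of the unit ball obeys the volumetric bound
assert len(centers) <= (1 + 2 / t) ** (2 * s)
```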

For the second bound from the assertion we exploit the fact that

$$\begin{aligned} \{ \varvec{ x } \in \mathbb {C}^{N} {}:{}\varvec{ x } \text { is } s\text {-sparse}, \Vert \varvec{ x }\Vert _2 \le 1\} \subset \sqrt{2s} {{\mathrm{conv}}}\bigcup _{\varTheta \in \mathcal {G}} \{ \varvec{e}_{\varTheta } ,\, \imath \varvec{e}_{\varTheta } ,\, -\varvec{e}_{\varTheta } ,\, -\imath \varvec{e}_{\varTheta } \} =: \widetilde{D}_s \end{aligned}$$

and, hence, the set \(\mathcal {A}\) from (33) is contained in the set \(\{ \widetilde{\varvec{ V }}_{ \varvec{ x } }{}:{}\varvec{ x } \in \widetilde{D}_s \}\). The following is a version of Maurey’s lemma. For a proof see, e.g., [20].

Lemma 27

There exists an absolute constant c for which the following holds. Let X be a normed space, consider a finite set \(\mathcal {U}\subset X\) of cardinality N, and assume that for every \(L\in \mathbb {N}\) and \((\varvec{ u }_1 , \ldots , \varvec{ u }_L ) \in \mathcal {U}^L\), \(\mathbb {E}\Vert \sum _{j=1}^L {\epsilon }_j \varvec{ u }_j \Vert _X \le A \sqrt{L}\), where \((\epsilon _1 , \ldots , \epsilon _L )\) denotes a Rademacher vector. Then for every \(u>0\),

$$\begin{aligned} \log \mathcal {N}( {{\mathrm{conv}}}( \mathcal {U}) , \Vert \cdot \Vert _X , u) \le c (A/u)^2 \log (N). \end{aligned}$$

In order to apply Lemma 27, we need to estimate the quantity \(\mathbb {E}_{\varvec{ \epsilon }} \Vert \sum _{k=1}^L \epsilon _k \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2}\), where \((\varvec{ u }_1 , \ldots , \varvec{ u }_L )\) is a sequence of extreme points of \(\widetilde{D}_s\) and \(\varvec{ \epsilon } = (\epsilon _1 , \ldots , \epsilon _L )\) is a Rademacher vector. The noncommutative Khintchine inequality [3, 25] (see also Note 3)—originally due to Lust-Piquard [23, 24]—yields

$$\begin{aligned} \mathbb {E}_{\varvec{ \epsilon }} \Vert \sum _{k=1}^L \epsilon _k \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2} \lesssim \sqrt{\log ( N_{\text {max}} )} \max \bigg \{ \Vert \sum _{k=1}^L \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \widetilde{\varvec{ V }}_{ \varvec{ u }_k }^* \Vert _{2\rightarrow 2}, \Vert \sum _{k=1}^L \widetilde{\varvec{ V }}_{ \varvec{ u }_k }^* \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2} \bigg \}^{{1}/{2}},\nonumber \\ \end{aligned}$$
(75)

where \(N_{\text {max}}\) stands for the maximum of the dimensions of the matrices \(\widetilde{\varvec{ V }}_{ \varvec{ u }_k } \widetilde{\varvec{ V }}_{ \varvec{ u }_k }^*\) and \(\widetilde{\varvec{ V }}_{ \varvec{ u }_k }^* \widetilde{\varvec{ V }}_{ \varvec{ u }_k }\). Note that \(N_{\text {max}}\) can be estimated by \(\max \{ N_R N_t , N_T N_t \} \le N\). Using the estimate (35) for \(\Vert \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2}\),

$$\begin{aligned} \Vert \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \widetilde{\varvec{ V }}_{ \varvec{ u }_k }^* \Vert _{2\rightarrow 2} = \Vert \widetilde{\varvec{ V }}_{ \varvec{ u }_k }^* \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2} = \Vert \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2}^2 \le \frac{1}{N_t} \Vert \varvec{ u }_k \Vert _1^2 = \frac{2s}{N_t}. \end{aligned}$$

An application of the triangle inequality yields, using the Khintchine inequality (75),

$$\begin{aligned} \mathbb {E}_{\varvec{ \epsilon }} \Vert \sum _{k=1}^L \epsilon _k \widetilde{\varvec{ V }}_{ \varvec{ u }_k } \Vert _{2\rightarrow 2} \lesssim \sqrt{\log (N)} \sqrt{\frac{2s}{N_t}} \sqrt{L} . \end{aligned}$$

Finally, we can apply Lemma 27, yielding

$$\begin{aligned} \log \mathcal {N}(\mathcal {A}, \Vert \cdot \Vert _{2\rightarrow 2}, u) \lesssim \frac{s}{u^2 N_t} \log ^2 (N) . \end{aligned}$$

This establishes the second bound in (37). \(\square \)
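The \(\sqrt{\log }\)-type scaling of the noncommutative Khintchine inequality (75) can also be observed empirically. The Monte Carlo sketch below uses generic random matrices as stand-ins for the \(\widetilde{\varvec{ V }}_{ \varvec{ u }_k }\) (the actual radar matrices are not reconstructed here) and checks that the empirical mean of \(\Vert \sum _k \epsilon _k \varvec{ A }_k \Vert _{2\rightarrow 2}\) stays below the bound with a modest explicit constant.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, trials = 30, 40, 200

# generic fixed matrix family standing in for the V_{u_k}
A = rng.standard_normal((L, d, d)) / np.sqrt(d)

# variance parameter: max(||sum A_k A_k^*||, ||sum A_k^* A_k||)^{1/2}
S1 = np.linalg.norm(np.einsum('kij,klj->il', A, A), 2)
S2 = np.linalg.norm(np.einsum('kji,kjl->il', A, A), 2)
sigma = max(S1, S2) ** 0.5

# Monte Carlo estimate of E || sum_k eps_k A_k ||_{2->2}
vals = []
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=L)
    vals.append(np.linalg.norm(np.einsum('k,kij->ij', eps, A), 2))
emp = np.mean(vals)

# Khintchine-type bound with an explicit (non-optimized) constant
assert emp <= 3 * np.sqrt(np.log(2 * d)) * sigma
```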

Appendix 3: Basic Calculations for Proposition 13

The proof of Proposition 13 is based on the fact that

$$\begin{aligned} \widetilde{\varvec{ A }}_{S_{[\beta ]}}^* \widetilde{\varvec{ A }}_{S_{[\beta ]}} = \sum _{i,j = 1}^{N_T} \sum _{a,b = 1}^{N_t} \overline{{s}_{(i,a)}} {s}_{(j,b)} \varvec{ Y }^{(i,a),(j,b)} , \end{aligned}$$
(76)

where we write \(s_{(i,a)}\) for the a-th entry of the signal vector \(\varvec{ s }_i\) and where for \(\varTheta , \varTheta ^\prime \in S_{[\beta ]}\) the corresponding entry of a given matrix \(\varvec{ Y }^{(i,a),(j,b)}\) is given by

$$\begin{aligned}{}[\varvec{ Y }^{(i,a),(j,b)}]_{\varTheta ,\varTheta '} = \delta ^{\sim _{N_t}}_{a-b , \tau ^\prime -\tau } (N_T N_t)^{-1} e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (\tau +a-1)} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1)-\beta (i-1)]}.\nonumber \\ \end{aligned}$$
(77)

To see this we recall that, according to (22), the inner products \(\langle \varvec{ A }_{\varTheta } , \varvec{ A }_{\varTheta ^\prime } \rangle \) satisfy

$$\begin{aligned}{}[ \widetilde{\varvec{ A }}_{S_{[\beta ]}}^* \widetilde{\varvec{ A }}_{S_{[\beta ]}} ]_{\varTheta , \varTheta ^\prime }&= \langle \widetilde{\varvec{ A }}_{\varTheta } , \widetilde{\varvec{ A }}_{\varTheta ^\prime } \rangle = (N_T N_R N_t)^{-1} \langle \varvec{ A }_{\varTheta } , \varvec{ A }_{\varTheta ^\prime } \rangle \\&= (N_T N_t)^{-1} \sum _{i,j=1}^{N_T} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1) - \beta (i-1)]} \left\langle \varvec{ M }_{f} \varvec{ T }_{\tau } \varvec{ s }_i , \varvec{ M }_{f^\prime } \varvec{ T }_{\tau ^\prime } \varvec{ s }_j \right\rangle , \end{aligned}$$

where we used that both \(\varTheta , \varTheta ^\prime \in S_{[\beta ]}\). Recalling the definitions of the operators \(\varvec{ M }_{f}\), \(\varvec{ T }_{\tau }\) (see (6)) one obtains for the latter inner product,

$$\begin{aligned} \left\langle \varvec{ M }_{f} \varvec{ T }_{\tau } \varvec{ s }_i , \varvec{ M }_{f^\prime } \varvec{ T }_{\tau ^\prime } \varvec{ s }_j \right\rangle&= \sum _{k=1}^{N_t} \overline{[\varvec{ M }_{f} \varvec{ T }_{\tau } \varvec{ s }_i]_k} [\varvec{ M }_{f^\prime } \varvec{ T }_{\tau ^\prime } \varvec{ s }_j]_k\\&\quad = \sum _{k=1}^{N_t} e^{\imath 2\pi \cdot \frac{f^\prime -f}{N_t}(k-1)} \overline{[\varvec{ s }_i]_{k - \tau }} [\varvec{ s }_j]_{k - \tau ^\prime } \\&= \sum _{a,b=1}^{N_t} \delta ^{\sim _{N_t}}_{a-b , \tau ^\prime -\tau } e^{\imath 2\pi \cdot \frac{f^\prime -f}{N_t} (a+\tau -1)} \overline{[\varvec{ s }_i]_{a}} [\varvec{ s }_j]_{b} . \end{aligned}$$

By combining the identities from above one finds

$$\begin{aligned}&[ \widetilde{\varvec{ A }}_{S_{[\beta ]}}^* \widetilde{\varvec{ A }}_{S_{[\beta ]}} ]_{\varTheta , \varTheta ^\prime } = \\&\sum _{i,j=1}^{N_T} \sum _{a,b=1}^{N_t} \overline{s_{(i,a)}} s_{(j,b)} \underbrace{\delta ^{\sim _{N_t}}_{a-b , \tau ^\prime -\tau } (N_T N_t)^{-1} e^{\imath 2\pi \cdot \frac{f^\prime -f}{N_t} (\tau +a-1)} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1) - \beta (i-1)]}}_{= [\varvec{ Y }^{(i,a),(j,b)}]_{\varTheta , \varTheta ^\prime },\, \text {see (77)}} , \end{aligned}$$

which shows (76).

The matrices \(\varvec{ Y }^{(i,a),(j,b)}\) allow for a simple formula for their adjoints, namely

$$\begin{aligned}{}[\varvec{ Y }^{(i,a),(j,b)}]^* = \varvec{ Y }^{(j,b),(i,a)} . \end{aligned}$$
(78)

1.1 The Product \(\varvec{ F }^* \varvec{ F }\)

The matrix \(\varvec{ F }\) consists of the blocks \(\varvec{ Y }^{(i,a),(j,b)}\) given by (77). Due to (78), \(\varvec{ F }\) is self-adjoint, so that \(\varvec{ F }^* \varvec{ F } = \varvec{ F }^2\). Like \(\varvec{ F }\), the product \(\varvec{ F }^2\) consists of blocks, and the block in the (i, a)th block row and the (j, b)th block column is given by

$$\begin{aligned}{}[\varvec{ F }^2]^{(i,a),(j,b)} = \sum _{r=1}^{N_T} \sum _{q = 1}^{N_t} \varvec{ Y }^{(i,a),(r,q)} \varvec{ Y }^{(r,q),(j,b)} . \end{aligned}$$
(79)

Recalling (77), the appearing summands \(\varvec{ Y }^{(i,a),(r,q)} \varvec{ Y }^{(r,q),(j,b)}\) are given entrywise by

$$\begin{aligned}&[ \varvec{ Y }^{(i,a),(r,q)} \varvec{ Y }^{(r,q),(j,b)} ]_{\varTheta , \varTheta ^\prime } = \sum _{\tilde{\varTheta } \in S_{[\beta ]}} \varvec{ Y }_{\varTheta ,\tilde{\varTheta }}^{(i,a),(r,q)} \varvec{ Y }_{\tilde{\varTheta },{\varTheta }^\prime }^{(r,q),(j,b)} \nonumber \\&= (N_T N_t)^{-2} \sum _{\tilde{\varTheta } \in S_{[\beta ]}} \delta ^{\sim _{N_t}}_{a-q , \tilde{\tau }-\tau } \delta ^{\sim _{N_t}}_{q-b , \tau ^\prime -\tilde{\tau }} e^{\imath 2\pi \cdot \frac{\tilde{f} -f}{N_t} (\tau +a-1)}\nonumber \\&\quad e^{\imath 2\pi \cdot \frac{f^\prime - \tilde{f}}{N_t} (\tilde{\tau }+q-1)} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1) - \beta (i-1)]} \nonumber \\&= (N_T N_t)^{-2} \delta ^{\sim _{N_t}}_{\tau ^\prime + b , \tau + a} |S_{[\beta ]}^{\tau + a - q}| e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (a+\tau -1)} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1)-\beta (i-1)]} . \end{aligned}$$
(80)

Combining this with (79) yields

$$\begin{aligned}{}[\varvec{ F }^2]^{(i,a),(j,b)}_{\varTheta ,\varTheta ^\prime }&= \sum _{r=1}^{N_T} \sum _{q = 1}^{N_t} \delta ^{\sim _{N_t}}_{\tau ^\prime + b , \tau + a} (N_T N_t)^{-2} |S_{[\beta ]}^{\tau + a - q}| e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (a+\tau -1)}\\&\quad e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1)-\beta (i-1)]} \\&= \delta ^{\sim _{N_t}}_{\tau ^\prime + b , \tau + a} N_T^{-1} N_t^{-2} |S_{[\beta ]}| e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (a+\tau -1)} e^{\imath 2\pi \cdot d_T \Delta _\beta [\beta ^\prime (j-1)-\beta (i-1)]} . \end{aligned}$$

1.2 Properties of the Matrices \(\varvec{ Y }^{(i,a),(j,b)}\)

The proof of Proposition 13 uses the identities

$$\begin{aligned} \sum _{i,j=1}^{N_T} \sum _{a,b=1}^{N_t} [\varvec{ Y }^{(i,a),(j,b)}]^* \varvec{ Y }^{(i,a),(j,b)} = \sum _{i,j=1}^{N_T} \sum _{a,b=1}^{N_t} \varvec{ Y }^{(i,a),(j,b)} [\varvec{ Y }^{(i,a),(j,b)}]^* = \frac{|S_{[\beta ]}|}{N_t} \varvec{{{\mathrm{Id}}}}.\nonumber \\ \end{aligned}$$
(81)

Due to (78), and by plugging in the identity we used in the second step of (80), the second sum is given entrywise by

$$\begin{aligned}&\sum _{i,j=1}^{N_T} \sum _{a,b=1}^{N_t} [\varvec{ Y }^{(i,a),(j,b)} \varvec{ Y }^{(j,b),(i,a)}]_{\varTheta ,\varTheta ^\prime }\\&\quad = \sum _{i,j=1}^{N_T} \sum _{a,b=1}^{N_t} \delta _{\tau ^\prime , \tau } \frac{|S_{[\beta ]}^{\tau + a - b}|}{(N_T N_t)^{2}} e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (\tau +a-1)} e^{\imath 2\pi \cdot d_T \Delta _\beta (\beta ^\prime -\beta ) (i-1)} \\&= \delta _{\tau ^\prime , \tau } \frac{N_T |S_{[\beta ]}|}{(N_T N_t)^{2}} e^{\imath 2\pi \cdot \frac{f'-f}{N_t} \tau } \sum _{a=1}^{N_t} e^{\imath 2\pi \cdot \frac{f'-f}{N_t} (a-1)} \sum _{i=1}^{N_T} e^{\imath 2\pi \cdot d_T \Delta _\beta (\beta ^\prime -\beta ) (i-1)}= \delta _{\varTheta , \varTheta ^\prime } \frac{|S_{[\beta ]}|}{N_t} , \end{aligned}$$

which establishes the second equality in (81). The first equality follows due to symmetry.

Appendix 4: Basics from Probability Theory

A complex-valued random variable \(\xi \) is standard complex Gaussian iff it has (complex) density \(\frac{1}{\pi } e^{-|\xi |^2}\), or, equivalently, \(\xi \) can be written as \(\xi = x + \imath y\), where \(x,y \sim N (0,1/2)\) are independent standard Gaussian random variables. More generally, a mean-zero complex Gaussian random variable with variance \(\sigma ^2\) is of the form \(\sigma \xi \), where \(\xi \) is a standard complex Gaussian. A Steinhaus sequence is a sequence of independent random variables which are all distributed uniformly on the complex unit circle \(\{ z \in \mathbb {C}{}:{}|z|=1 \}\).
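The construction above is easy to check by simulation. The following sketch samples \(\xi = x + \imath y\) with \(x, y \sim N(0,1/2)\) independent, and confirms the normalization \(\mathbb {E}|\xi |^2 = 1\) as well as the exact tail \(\mathbb {P}(|\xi | \ge t) = e^{-t^2}\), which follows because \(|\xi |^2\) is standard exponentially distributed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# standard complex Gaussian samples: xi = x + i*y, x, y ~ N(0, 1/2) independent
xi = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)

assert abs(xi.mean()) < 0.01                       # mean zero
assert abs(np.mean(np.abs(xi) ** 2) - 1) < 0.01    # normalization: E|xi|^2 = 1

# |xi|^2 is standard exponential, so P(|xi| >= t) = e^{-t^2} (cf. Lemma 28)
t = 1.2
assert abs(np.mean(np.abs(xi) >= t) - np.exp(-t ** 2)) < 0.01
```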

Lemma 28

For a standard complex Gaussian random variable \(\xi \) there holds

$$\begin{aligned} \mathbb {P}( | \xi | \ge t ) \le e^{-t^2} . \end{aligned}$$

For a standard complex Gaussian random vector \(\varvec{ \xi }\) (having independent, standard complex Gaussian entries) and a (deterministic) complex vector \(\varvec{ a }\) of the same dimension, the random variable \(z := \langle \varvec{ a } , \varvec{ \xi } \rangle \) is mean-zero complex Gaussian with variance \(\Vert \varvec{ a }\Vert _2^2\). This implies the next statement.

Lemma 29

For a standard complex Gaussian random vector \(\varvec{ \xi }\) and a complex vector \(\varvec{ a }\) of the same dimension there holds

$$\begin{aligned} \mathbb {P}( | \langle \varvec{ a } , \varvec{ \xi } \rangle | \ge t ) = \mathbb {P}( | \xi | \ge t / \Vert \varvec{ a } \Vert _2 ) \le e^{-t^2 / \Vert \varvec{ a } \Vert _2^2} . \end{aligned}$$
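A Monte Carlo check of Lemma 29 for a generic (arbitrarily chosen) vector \(\varvec{ a }\); since \(|\xi |^2\) is standard exponential, the stated bound in fact holds with equality, which the simulation confirms up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 16, 100_000
a = rng.normal(size=d) + 1j * rng.normal(size=d)   # an arbitrary fixed vector

# n independent standard complex Gaussian vectors in C^d
xi = rng.normal(0, np.sqrt(0.5), (n, d)) + 1j * rng.normal(0, np.sqrt(0.5), (n, d))
z = xi @ a.conj()                                  # samples of <a, xi>

t = 1.5 * np.linalg.norm(a)
emp = np.mean(np.abs(z) >= t)
bound = np.exp(-t ** 2 / np.linalg.norm(a) ** 2)

# the tail bound of Lemma 29, which here is an equality up to sampling error
assert emp <= bound + 0.01
assert emp >= bound - 0.01
```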

For a 2n-dimensional standard Gaussian random vector \(\varvec{ g }\) we have, due to [14, (8.89)],

$$\begin{aligned} \mathbb {P}( \Vert \varvec{ g } \Vert _2 \ge \sqrt{2n} + t ) \le e^{-t^2 / 2} . \end{aligned}$$

Since an n-dimensional standard complex Gaussian random vector \(\varvec{ \xi }\) can be considered as a (real-valued) 2n-dimensional standard Gaussian random vector \(\varvec{ g }\) with independent entries from \(\mathcal {N}(0,1/2)\), we have the following lemma.

Lemma 30

For an n-dimensional standard complex Gaussian random vector \(\varvec{ \xi }\) there holds

$$\begin{aligned} \mathbb {P}( \Vert \varvec{ \xi } \Vert _2 \ge \sqrt{n} + t ) = \mathbb {P}( \Vert 2^{-1/2} \varvec{ g } \Vert _2 \ge \sqrt{n} + t ) = \mathbb {P}( \Vert \varvec{ g } \Vert _2 \ge \sqrt{2n} + \sqrt{2} t ) \le e^{-t^2} . \end{aligned}$$

Finally, the following lemma states a Hoeffding-type inequality for Steinhaus sequences.

Lemma 31

([14, Cor. 8.10]) Let \(\varvec{ a } \in \mathbb {C}^L\) and \(\varvec{ \epsilon } = (\epsilon _1,\ldots ,\epsilon _L)\) be a Steinhaus sequence. Then

$$\begin{aligned} \mathbb {P}( | \langle \varvec{ a } , \varvec{ \epsilon } \rangle | \ge u \Vert \varvec{ a }\Vert _2) \le 2 e^{-u^2 / 2} . \end{aligned}$$
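The same kind of simulation applies to the Steinhaus case; the sketch below checks the Hoeffding-type bound of Lemma 31 for a generic coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n = 24, 100_000
a = rng.normal(size=L) + 1j * rng.normal(size=L)   # an arbitrary fixed vector

# n independent Steinhaus vectors: i.i.d. entries uniform on the unit circle
eps = np.exp(2j * np.pi * rng.random((n, L)))
z = eps @ a                                        # samples of <a, epsilon>

u = 2.0
emp = np.mean(np.abs(z) >= u * np.linalg.norm(a))
assert emp <= 2 * np.exp(-u ** 2 / 2)              # the bound of Lemma 31
```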


Cite this article

Dorsch, D., Rauhut, H. Refined Analysis of Sparse MIMO Radar. J Fourier Anal Appl 23, 485–529 (2017). https://doi.org/10.1007/s00041-016-9477-7

