Adventures in Compressive Sensing Based MIMO Radar

Excursions in Harmonic Analysis, Volume 3

Part of the book series: Applied and Numerical Harmonic Analysis (ANHA)

Abstract

While radar has been around for many decades, novel developments in recent years have led to significant breakthroughs as well as to exciting new mathematical challenges. In this chapter, we consider a multiple-input-multiple-output (MIMO) radar system. Using sparsity as a key ingredient of our approach and tools from compressive sensing, we derive a mathematical framework for the imaging of targets in the azimuth-range-Doppler domain. Our analysis comprises uniformly spaced linear arrays with random waveforms, as well as random sensor arrays with deterministic waveforms. We also derive results that do not require the “on-the-grid” assumption often used in compressive sensing radar. Algorithmic aspects and numerical simulations are presented as well.


Notes

  1.

    Actually the received signal will have a somewhat larger bandwidth \(B_1> B\) due to the Doppler effect. Our results could be easily modified to incorporate this increased bandwidth. Since in practice this increase in bandwidth is small, for convenience we simply assume \(B \approx B_1\).

  2.

    This poor performance is caused by Property (ii) in Theorem 2.

  3.

    SISO stands for single-input-single-output radar, and SIMO for single-input-multiple-output radar (i.e., a radar with one transmit and multiple receive antennas).

  4.

    The attentive reader will have noticed that \({\bf U}_{(0)}\) is just the \(p \times p\) DFT matrix \({\bf F}_p\).

References

  1. Becker S, Candes E, Grant M. Templates for convex cone problems with applications to sparse signal recovery. Math Program Comput. 2011;3(3):165–218.

  2. Calderbank AR, Cameron PJ, Kantor WM, Seidel JJ. Z4-Kerdock codes, orthogonal spreads, and extremal Euclidean line-sets. Proc London Math Soc (3). 1997;75(2):436–80.

  3. Candès E, Fernandez-Granda C. Super-resolution from noisy data. J Fourier Anal Appl. 2013;19(6):1229–54.

  4. Candès E, Fernandez-Granda C. Towards a mathematical theory of super-resolution. Commun Pure Appl Math. (to appear).

  5. Candès EJ, Plan Y. Near-ideal model selection by ℓ1 minimization. Ann Stat. 2009;37(5A):2145–77.

  6. Carin L. On the relationship between compressive sensing and random sensor arrays. IEEE Antennas Propag Mag. 2009;51(5):72–81.

  7. Cevher V, Boufounos PT, Baraniuk RG, Gilbert AC, Strauss MJ. Near-optimal Bayesian localization via incoherence and sparsity. Proceedings of the 2009 International Conference on Information Processing in Sensor Networks (IPSN), 13–16 April; 2009. p. 205–16.

  8. Chi Y, Scharf LL, Pezeshki A, Calderbank AR. Sensitivity to basis mismatch in compressed sensing. IEEE Trans Signal Process. 2011;59(5):2182–95.

  9. Fannjiang A, Liao W. Coherence pattern—guided compressive sensing with unresolved grids. SIAM J Imaging Sci. 2012;5:179–202.

  10. Fenn AJ, Temme DH, Delaney WP, Courtney WE. The development of phased-array radar technology. Lincoln Lab J. 2000;12(2):321–40.

  11. Friedlander B. Adaptive signal design for MIMO radar. In Li J, Stoica P, editors. MIMO radar signal processing, chapter 5. Wiley; 2009.

  12. Friedlander B. On the relationship between MIMO and SIMO radars. IEEE Trans Signal Process. 2009;57(1):394–8.

  13. Haupt J, Bajwa W, Raz G, Nowak R. Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans Inform Theory. 2010;56(11):5862–75.

  14. Heath R, Strohmer T, Paulraj A. On quasi-orthogonal signatures for CDMA systems. IEEE Trans Info Theory. 2006;52(3):1217–26.

  15. Herman M, Strohmer T. High-resolution radar via compressed sensing. IEEE Trans Signal Process. 2009;57(6):2275–84.

  16. Herman M, Strohmer T. General deviants: an analysis of perturbations in compressed sensing. IEEE J Sel Top Signal Process Special Issue Compress Sens. 2010;4(2):342–49.

  17. Howard SD, Calderbank AR, Moran W. The finite Heisenberg-Weyl groups in radar and communications. EURASIP J Appl Signal Process. 2006:1–12.

  18. Hügel M, Rauhut H, Strohmer T. Remote sensing via ℓ1-minimization. Found Comput Math. (to appear).

  19. Inoue T, Heath RW. Kerdock codes for limited feedback MIMO systems. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing; 2008. p. 3113–6.

  20. Kerdock A. Studies of low-rate binary codes (Ph.D. thesis abstr.). IEEE Trans Inf Theory. 1972;18(2):316.

  21. König H. Isometric embeddings of Euclidean spaces into finite-dimensional ℓp-spaces. Banach Cent Publ. 1995;34:79–87.

  22. Levenstein VI. Bounds on the maximal cardinality of a code with bounded modulus of the inner product. Soviet Math Dokl. 1982;25:526–31.

  23. Li J, Stoica P. MIMO radar with colocated antennas: review of some recent work. IEEE Signal Process Mag. 2007;24(5):106–14.

  24. Li J, Stoica P, editors. MIMO radar signal processing. Wiley; 2009.

  25. Lo Y. A mathematical theory of antenna arrays with randomly spaced elements. IEEE Trans Antennas Propag. 1964;12(3):257–68.

  26. Lo Y. A probabilistic approach to the problem of large antenna arrays. J Res Nat Bur Stand. 1964;68D(5):1011–9.

  27. Pfander GE, Rauhut H, Tanner J. Identification of matrices having a sparse representation. IEEE Trans Signal Process. 2008;56(11):5376–88.

  28. Potter LC, Ertin E, Parker JT, Cetin M. Sparsity and compressed sensing in radar imaging. Proc IEEE. 2010;98(6):1006–20.

  29. Rauhut H, Schnass K, Vandergheynst P. Compressed sensing and redundant dictionaries. IEEE Trans Inf Theory. 2008;54(5):2210–9.

  30. Rihaczek AW. High-resolution radar. Boston: Artech House; 1996. (originally published: McGraw-Hill, NY, 1969).

  31. Strohmer T, Friedlander B. Analysis of sparse MIMO radar. Appl Comput Harmon Anal. 2014;37:361–88.

  32. Strohmer T, Heath R. Grassmannian frames with applications to coding and communications. Appl Comput Harmon Anal. 2003;14(3):257–75.

  33. Strohmer T, Wang H. Accurate imaging of moving targets via random sensor arrays and Kerdock codes. Inverse Prob. 2013;29:085001.

  34. Tang G, Bhaskar BN, Shah P, Recht B. Compressed sensing off the grid. Preprint, arXiv:1207.6053; 2012.

  35. Tang G, Bhaskar BN, Recht B. Sparse recovery over continuous dictionaries: just discretize. Asilomar Conference on Signals, Systems and Computers; 2013.

  36. Tibshirani R. Regression shrinkage and selection via the lasso. J Roy Statist Soc Ser B. 1996;58(1):267–88.

  37. van der Vaart AW, Wellner JA. Weak convergence and empirical processes. Springer Series in Statistics. New York: Springer-Verlag; 1996. (With applications to statistics).

  38. Vershynin R. Introduction to the non-asymptotic analysis of random matrices. In Eldar CY, Kutyniok G, editors, Compressed sensing: theory and applications. Cambridge University Press; 2012.

  39. Wootters WK, Fields BD. Optimal state-determination by mutually unbiased measurements. Ann Phys. 1989;191(2):363–81.

Acknowledgments

T. Strohmer and H. Wang acknowledge support from the NSF via grant DTRA-DMS 1042939, and from DARPA via grant N66001-11-1-4090.

Author information

Correspondence to Thomas Strohmer.

Appendices

Appendix A

In this appendix we collect some auxiliary results.

Lemma 7.

[38, Proposition 34] Let \({\bf x} \in{\mathbb C}^n\) be a vector with independent entries \(x_k \sim{\cal CN}(0,\sigma^2)\). Then for every \(t>0\) one has

$${\mathbb P} \Big( \|{\bf x}\|_2 - {\mathbb E} \|{\bf x}\|_2> t \Big) \le e^{-\frac{t^2}{2\sigma^2}}.$$
(113)
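As a quick numerical illustration (ours, not part of the chapter), the following Python sketch estimates the left-hand side of (113) by Monte Carlo and compares it with the right-hand side; NumPy is assumed, and \({\mathbb E}\|{\bf x}\|_2\) is replaced by its empirical mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 64, 1.0, 20000

# Draw vectors with x_k ~ CN(0, sigma^2): real and imaginary
# parts are each N(0, sigma^2 / 2).
x = (rng.normal(0, sigma / np.sqrt(2), (trials, n))
     + 1j * rng.normal(0, sigma / np.sqrt(2), (trials, n)))
norms = np.linalg.norm(x, axis=1)
mean_norm = norms.mean()           # Monte Carlo proxy for E||x||_2

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(norms - mean_norm > t)   # left-hand side of (113)
    bound = np.exp(-t**2 / (2 * sigma**2))       # right-hand side of (113)
    print(f"t={t}: empirical {empirical:.4f} <= bound {bound:.4f}")
```

The empirical tail probabilities sit well below the bound, reflecting the strong concentration of \(\|{\bf x}\|_2\) around its mean (here close to \(\sqrt{n}\)).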

The following lemma is a rescaled version of Lemma 3.1 in [29].

Lemma 8.

Let \({\bf A} \in{\mathbb C}^{n \times m}\) be a Gaussian random matrix with \(A_{i,j} \sim{\cal CN}(0,\sigma^2)\) . Then for all \({\bf x}, {\bf y} \in{\mathbb C}^m\) with \(\|{\bf x}\|_2 = \|{\bf y}\|_2=\sqrt{m}\) and all \(t>0\)

$${\mathbb P} \Big\{ |\frac{\sigma^2}{n}\langle{\bf A}{\bf x}, {\bf A}{\bf y} \rangle - \langle{\bf x},{\bf y} \rangle|> tm \Big\} \le 2 {\rm exp} \Big(-n\frac{t^2}{C_1+C_2t}\Big),$$
(114)

with \(C_1 = \frac{4e}{\sqrt{6\pi}}\) and \(C_2 = \sqrt{8}e\).

For convenience we state the following version of Bernstein’s inequality, which will be used in the proof of Lemma 10.

Lemma 9 (See e.g. [37])

Let \(X_1,\dots,X_n\) be independent random variables with zero mean such that

$${\mathbb E} |X_i|^p \le \frac{1}{2} p! K^{p-2} v_i, \qquad \text{for all $i=1,\dots,n; p\in{\mathbb N}, p\ge 2$},$$
(115)

for some constants \(K>0\) and \(v_i> 0, i=1,\dots,n\) . Then, for all \(t>0\)

$${\mathbb P}\Big( \Big|\sum_{i=1}^{n} X_i \Big| \ge t \Big) \le 2 {\rm exp}\Big( - \frac{t^2}{2v + Kt}\Big),$$
(116)

where \(v:=\sum_{i=1}^{n} v_i\).

We also need the following deviation inequality for unbounded random variables. It is a complex-valued and slightly sharpened version of Lemma 6 in [13]; our proof strategy differs from that of [13] at certain steps and is a bit shorter.

Lemma 10.

Let \(x_i\) and \(y_i\), \(i=1,\dots,n\), be sequences of i.i.d. complex Gaussian random variables with variance \(\sigma^2\). Then,

$${\mathbb P}\Big( \big|\sum_{i=1}^{n} \bar{x}_i y_i \big|> t \Big) \le 2 {\rm exp} \big(-\frac{t^2}{\sigma^2 (n \sigma^2 + 2t)}\big).$$
(117)

Proof

In order to apply Bernstein’s inequality, we need to compute the moments \({\mathbb E} |x_i y_i|^p\). Since \(x_i\) and \(y_i\) are independent, there holds

$${\mathbb E} (|x_i y_i|^p) = {\mathbb E} (|x_i|^p) {\mathbb E} (|y_i|^p) =({\mathbb E} (|x_i|^p ))^2.$$
(118)

The moments of \(x_i\) are well-known:

$${\mathbb E} |x_i|^{2p} = p! \, \sigma^{2p},$$
(119)

hence

$$({\mathbb E} |x_i|^{2p})^2 = (p!)^2 (\sigma^{2p})^2 \le \frac{1}{4} (2p)! (\sigma^{2})^{2p} \le \frac{1}{2} (2p)! (\sigma^{2})^{2p-2} \frac{(\sigma^{2})^2}{2}.$$
(120)

We apply Bernstein’s inequality (116) with \(K= \sigma^2\) and \(v_i = \frac{(\sigma^{2})^2}{2}, i=1,\dots,n\) and obtain (117).
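The tail bound (117) can be checked empirically; the Python sketch below (again our own illustration, assuming NumPy) simulates \(\sum_i \bar{x}_i y_i\) and compares the observed tail with the right-hand side of (117).

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 32, 1.0, 20000

def cgauss(size):
    # CN(0, sigma^2): independent N(0, sigma^2/2) real and imaginary parts
    return (rng.normal(0, sigma / np.sqrt(2), size)
            + 1j * rng.normal(0, sigma / np.sqrt(2), size))

x, y = cgauss((trials, n)), cgauss((trials, n))
sums = np.abs(np.sum(np.conj(x) * y, axis=1))   # |sum_i conj(x_i) y_i|

for t in (5.0, 10.0, 20.0):
    empirical = np.mean(sums > t)
    bound = 2 * np.exp(-t**2 / (sigma**2 * (n * sigma**2 + 2 * t)))
    print(f"t={t}: empirical {empirical:.4f} <= bound {bound:.4f}")
```

Note the mixed Gaussian/exponential behavior of the bound: quadratic in \(t\) for small deviations, linear for large ones, as is typical for Bernstein-type inequalities.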

Lemma 11.

Suppose M is an \(m\times m\) matrix, and α and β are two jointly independent random vectors in \({\mathbb C}^m\) with zero means and \(|\alpha_k|=|\beta_k|=1\) for \(k=1,\dots, m\). If n is a positive constant, then for any \(t>0\) and \(s>0\),

  1. 1.

    if \(|m_{kj}|\le\frac{1}{\sqrt{n}}\) for all \(k, j\) , then

    $$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \beta\rangle|\le mt \Big)\ge1-4m{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big).\end{aligned}$$
    (121)

    and

    $$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \alpha\rangle|\le 2mt \Big)\ge1-8m{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big),\end{aligned}$$
    (122)
  2. 2.

    if \(|m_{kj}|\le \frac{1}{\sqrt{n}}\) for \(k\neq j\) and \(m_{jj}=1\) , then

    $$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \beta\rangle|\le s+mt \Big)\ge1-4{\rm exp}\Big(-\frac{s^2}{4m}\Big)-4m{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$
    (123)

    and

    $$\begin{aligned}\mathbb P\Big( m(1-2t)\le|\langle M\alpha, \alpha\rangle|\le m(1+2t) \Big)\ge1-8m{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big).\end{aligned}$$
    (124)

Proof

Note that

$$\begin{aligned} \langle M\alpha, \beta\rangle&=\sum_{k,j=1}^mm_{kj}\alpha_j\bar\beta_k\\ &=\sum_{l=1}^m\sum_{j=1}^mm_{j\oplus l,j}\alpha_j\bar\beta_{j\oplus l},\end{aligned}$$

where ⊕ denotes addition modulo m.

Let us first assume that \(|m_{kj}|\le\frac{1}{\sqrt{n}}\).

Since α and β are jointly independent, for any l the terms in \(\sum_{j=1}^m m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}\) are jointly independent, and it is easy to check that \({\mathbb E}(m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l})=0\) and \(|m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}|= |m_{j\oplus l, j}|\). Hence Theorem 4.5 in [18] gives

$$\begin{aligned} {\mathbb P}\Big( |\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}|\le t \Big)&\ge 1-4{\rm exp}\Big(-\frac{t^2}{4\sum_{j}|m_{j\oplus l, j}|^2}\Big)\nonumber\\ &\ge1-4{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big).\end{aligned}$$
(125)

Taking the union bound over all m different choices of l, we obtain

$$\begin{aligned}\mathbb P\Big( |\sum_{l=1}^m\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}|\le mt \Big)\ge1-4m{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$
(126)

which proves (121).

Now consider

$$ \langle M\alpha, \alpha\rangle=\sum_{l=1}^m\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l},$$

but, in contrast to the situation above, the terms in \(\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}\) are no longer all jointly independent. However, as in the proof of Theorem 5.1 in [27] and Lemma 3 in [31], we observe that for any l we can split the index set \(\{1,\dots,m\}\) into two subsets \(T_l^1,T_l^2\subset \{1,\dots,m\}\), each of size \(m/2\), such that the \(m/2\) variables \(\alpha_j\bar\alpha_{j\oplus l}\), \(j\in T^1_l\), are jointly independent, and analogously for \(T^2_l\). (For convenience we assume here that m is even; a negligible modification makes the argument work for odd m as well.) In other words, each of the sums \(\sum_{j\in T^r_l} m_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}\), \(r=1,2\), contains only jointly independent terms.

So for each l,

$${\mathbb P}\Big(|\sum_{j\in T^r_l} m_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}| \le t \Big) \ge 1-4{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big),$$
(127)

which implies that

$$\begin{aligned} {\mathbb P}\Big(|\sum_{j} m_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}| \le 2t\Big) & \ge 1-8 {\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big).\end{aligned}$$
(128)

Again, taking the union bound over all m different choices of l, we obtain

$$\begin{aligned} {\mathbb P}\Big( |\sum_{l=1}^m\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}|\le 2mt \Big)\ge1-8m{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big),\end{aligned}$$
(129)

which proves (122).

Now let us assume that \(|m_{kj}|\le \frac{1}{\sqrt{n}}\) for \(k\neq j\) and \(m_{jj}=1\).

$$\begin{aligned} \langle M\alpha, \beta\rangle&=\sum_{j=1}^mm_{jj}\alpha_j\bar\beta_{j}+\sum_{l=1}^{m-1}\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}\\ &=\sum_{j=1}^m\alpha_j\bar\beta_{j}+\sum_{l=1}^{m-1}\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}.\end{aligned}$$

Since α and β are jointly independent and \(|\alpha_j\bar\beta_j|=1\),

$$\begin{aligned}\mathbb P\Big( |\sum_{j=1}^m\alpha_j\bar\beta_{j}|\le s \Big)\ge1-4{\rm exp}\Big(-\frac{s^2}{4m}\Big).\end{aligned}$$
(130)

Similarly to the proof of (126) above, we have

$$\begin{aligned} {\mathbb P}\Big( |\sum_{l=1}^{m-1}\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}|\le (m-1)t \Big)\ge1-4(m-1){\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$

Together with (130), it follows that

$$\begin{aligned} {\mathbb P}\Big( |\langle M\alpha, \beta\rangle|\le s+(m-1)t \Big)\ge1-4{\rm exp}\Big(-\frac{s^2}{4m}\Big)-4(m-1){\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$

which proves (123).

Finally,

$$ \langle M\alpha, \alpha\rangle=\sum_{j=1}^mm_{jj}+\sum_{l=1}^{m-1}\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}=m+\sum_{l=1}^{m-1}\sum_{j=1}^mm_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l},$$

and (124) follows by the same argument as in the proof of (122), combined with the triangle inequality.
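The index-set splitting used in this proof can be made concrete. The Python sketch below (our illustration, using 0-based indices rather than the \(1,\dots,m\) of the text) 2-colors each cycle of the map \(j \mapsto j\oplus l\), so that within each color class the index pairs \(\{j, j\oplus l\}\) are pairwise disjoint and the corresponding products \(\alpha_j\bar\alpha_{j\oplus l}\) are jointly independent; the even-cycle-length assumption plays the role of the "m even" assumption above.

```python
def split_indices(m, l):
    """Split {0,...,m-1} into T1, T2 by 2-coloring each cycle of j -> (j+l) mod m.

    Within each subset the pairs {j, (j+l) mod m} are pairwise disjoint
    (assuming every cycle has even length, e.g. m even with m/gcd(m,l) even).
    """
    T1, T2, seen = [], [], [False] * m
    for start in range(m):
        if seen[start]:
            continue
        j, color = start, 0
        while not seen[j]:
            seen[j] = True
            (T1 if color == 0 else T2).append(j)
            j, color = (j + l) % m, 1 - color
    return T1, T2

def pairs_disjoint(T, m, l):
    # Check that no index occurs in two pairs {j, (j+l) mod m}, j in T.
    used = set()
    for j in T:
        pair = {j, (j + l) % m}
        if used & pair:
            return False
        used |= pair
    return True

T1, T2 = split_indices(8, 2)
print(T1, T2)   # two subsets of size m/2 = 4 each
```

For m = 8 and l = 2 the map has two cycles of length 4, and alternating the coloring along each cycle yields the desired two subsets of size \(m/2\).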

Appendix B

We consider a general linear system of equations \({\Psi} {\bf x} = {\bf y}\), where \({\Psi} \in{\mathbb C}^{n \times m}\), \({\bf x} \in{\mathbb C}^m\) and \(n \le m\). We introduce the following generic K-sparse model:

  • The support \(I \subset \{1,\dots,m\}\) of the K nonzero coefficients of \({\bf x}\) is selected uniformly at random.

  • The non-zero entries of \({\operatorname{sgn}}({\bf x})\) form a Steinhaus sequence, i.e., \({\operatorname{sgn}}({\bf x}_k):={\bf x}_k/|{\bf x}_k|, k\in I,\) is a complex random variable that is uniformly distributed on the unit circle.
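A draw from the generic K-sparse model is straightforward to implement; the following Python sketch (our own, assuming NumPy; the magnitude of the nonzero entries is a free parameter and is simply set to a constant here) picks a uniformly random support and Steinhaus signs.

```python
import numpy as np

def generic_k_sparse(m, K, rng, magnitude=1.0):
    """Draw x from the generic K-sparse model: support uniform at random,
    nonzero signs a Steinhaus sequence (uniform on the unit circle)."""
    x = np.zeros(m, dtype=complex)
    support = rng.choice(m, size=K, replace=False)   # uniform random support I
    phases = np.exp(2j * np.pi * rng.random(K))      # Steinhaus signs sgn(x_k)
    x[support] = magnitude * phases
    return x, support

x, I = generic_k_sparse(m=256, K=10, rng=np.random.default_rng(0))
```

Note that only the support and the signs are prescribed by the model; the magnitudes are arbitrary, subject in Theorem 5 below only to the minimum-magnitude condition (133).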

The following theorem is a slightly extended version of Theorem 1.3 in [5], see [31] for its proof.

Theorem 5.

Given \({\bf y} = {\Psi} {\bf x} + {\bf w}\), where \({\Psi}\) has columns of unit ℓ2-norm, \({\bf x}\) is drawn from the generic K-sparse model, and \({\bf w}_i \sim{\cal CN}(0,\sigma^2)\). Assume that

$$\mu({\Psi}) \le \frac{C_0}{\log m},$$
(131)

where \(C_0>0\) is a constant independent of \(n,m\) . Furthermore, suppose

$$K \le \frac{c_0 m}{\|{\Psi} \|_{\text{op}}^2 \log m}$$
(132)

for some constant \(c_0> 0\) and that

$$\underset{k\in I}{\min}\, |{\bf x}_k|> 8 \sigma \sqrt{2 \log m}.$$
(133)

Then the solution \(\hat{\bf x}\) to the debiased lasso computed with \(\lambda = 2 \sigma \sqrt{2 \log m}\) obeys

$$\operatorname{supp} (\hat{\bf x}) = {\operatorname{supp}} ({\bf x}),$$
(134)

and

$$\frac{\|\hat{\bf x} - {\bf x} \|_2}{\|{\bf x}\|_2} \le \frac{\sigma \sqrt{3 n}}{\|{\bf y}\|_2}$$
(135)

with probability at least

$$1 - 2m^{-1}(2\pi \log m + Km^{-1}) - {\cal O}(m^{-2 \log 2}).$$
(136)
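Conditions (131) and (132) involve only the coherence \(\mu({\Psi})\) (the largest off-diagonal entry of the Gram matrix in modulus) and the operator norm of \({\Psi}\). The Python sketch below (our illustration, for an arbitrary random \({\Psi}\) with normalized columns) computes both quantities and the resulting sparsity scale; the constants \(C_0, c_0\) are left unspecified, as in the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 128, 512

# Random complex Gaussian matrix with columns normalized to unit l2-norm.
Psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
Psi /= np.linalg.norm(Psi, axis=0)

# Mutual coherence: largest off-diagonal Gram entry in modulus, cf. (131).
G = np.abs(Psi.conj().T @ Psi)
np.fill_diagonal(G, 0)
mu = G.max()

op = np.linalg.norm(Psi, 2)            # operator norm, enters (132)
k_scale = m / (op**2 * np.log(m))      # sparsity scale m / (||Psi||_op^2 log m)

print(f"mu = {mu:.3f}, 1/log m = {1 / np.log(m):.3f}")
print(f"||Psi||_op^2 = {op**2:.1f}, K scale = {k_scale:.1f}")
```

For such a random \({\Psi}\) the coherence is on the order of \(\sqrt{(\log m)/n}\) and \(\|{\Psi}\|_{\text{op}}^2\) concentrates around \((\sqrt{n}+\sqrt{m})^2/n\), so (132) permits sparsity levels proportional to \(n/\log m\) up to constants.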

Copyright information

© 2015 Springer International Publishing Switzerland

Cite this chapter

Strohmer, T., Wang, H. (2015). Adventures in Compressive Sensing Based MIMO Radar. In: Balan, R., Begué, M., Benedetto, J., Czaja, W., Okoudjou, K. (eds) Excursions in Harmonic Analysis, Volume 3. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-13230-3_13
