Abstract
While radar has been around for many decades, novel developments in recent years have led to significant breakthroughs as well as to exciting new mathematical challenges. In this chapter, we consider a multiple-input-multiple-output (MIMO) radar system. Using sparsity as a key ingredient of our approach and tools from compressive sensing, we derive a mathematical framework for the imaging of targets in the azimuth-range-Doppler domain. Our analysis comprises uniformly spaced linear arrays with random waveforms, as well as random sensor arrays with deterministic waveforms. We also derive results that do not require the “on-the-grid” assumption often used in compressive sensing radar. Algorithmic aspects and numerical simulations are presented as well.
Notes
- 1.
Actually the received signal will have a somewhat larger bandwidth \(B_1> B\) due to the Doppler effect. Our results could be easily modified to incorporate this increased bandwidth. Since in practice this increase in bandwidth is small, for convenience we simply assume \(B \approx B_1\).
- 2.
This poor performance is caused by Property (ii) in Theorem 2.
- 3.
SISO stands for single-input-single-output radar, and SIMO for single-input-multiple-output radar (i.e., a radar with one transmit and multiple receive antennas).
- 4.
The attentive reader will have noticed that \({\bf U}_{(0)}\) is just the \(p \times p\) DFT matrix \({\bf F}_p\).
References
Becker S, Candès E, Grant M. Templates for convex cone problems with applications to sparse signal recovery. Math Program Comput. 2011;3(3):165–218.
Calderbank AR, Cameron PJ, Kantor WM, Seidel JJ. Z4-Kerdock codes, orthogonal spreads, and extremal Euclidean line-sets. Proc London Math Soc (3). 1997;75(2):436–80.
Candès E, Fernandez-Granda C. Super-resolution from noisy data. J Fourier Anal Appl. 2013;19(6):1229–54.
Candès E, Fernandez-Granda C. Towards a mathematical theory of super-resolution. Commun Pure Appl Math. (to appear).
Candès EJ, Plan Y. Near-ideal model selection by ℓ1 minimization. Ann Stat. 2009;37(5A):2145–77.
Carin L. On the relationship between compressive sensing and random sensor arrays. IEEE Antennas Propag Mag. 2009;51(5):72–81.
Cevher V, Boufounos PT, Baraniuk RG, Gilbert AC, Strauss MJ. Near-optimal Bayesian localization via incoherence and sparsity. Proceedings of the 2009 International Conference on Information Processing in Sensor Networks (IPSN), 13–16 April; 2009. p. 205–16.
Chi Y, Scharf LL, Pezeshki A, Calderbank AR. Sensitivity to basis mismatch in compressed sensing. IEEE Trans Signal Process. 2011;59(5):2182–95.
Fannjiang A, Liao W. Coherence pattern—guided compressive sensing with unresolved grids. SIAM J Imaging Sci. 2012;5:179–202.
Fenn AJ, Temme DH, Delaney WP, Courtney WE. The development of phased-array radar technology. Lincoln Lab J. 2000;12(2):321–40.
Friedlander B. Adaptive signal design for MIMO radar. In Li J, Stoica P, editors. MIMO radar signal processing, chapter 5. Wiley; 2009.
Friedlander B. On the relationship between MIMO and SIMO radars. IEEE Trans Signal Process. 2009;57(1):394–8.
Haupt J, Bajwa W, Raz G, Nowak R. Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans Inform Theory. 2010;56(11):5862–75.
Heath R, Strohmer T, Paulraj A. On quasi-orthogonal signatures for CDMA systems. IEEE Trans Inf Theory. 2006;52(3):1217–26.
Herman M, Strohmer T. High-resolution radar via compressed sensing. IEEE Trans Signal Process. 2009;57(6):2275–84.
Herman M, Strohmer T. General deviants: an analysis of perturbations in compressed sensing. IEEE J Sel Top Signal Process Special Issue Compress Sens. 2010;4(2):342–49.
Howard SD, Calderbank AR, Moran W. The finite Heisenberg-Weyl groups in radar and communications. EURASIP J Appl Signal Process. 2006:1–12.
Hügel M, Rauhut H, Strohmer T. Remote sensing via ℓ1-minimization. Found Comput Math. (to appear).
Inoue T, Heath RW. Kerdock codes for limited feedback MIMO systems. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing; 2008. p. 3113–6.
Kerdock A. Studies of low-rate binary codes (Ph.D. thesis abstract). IEEE Trans Inf Theory. 1972;18(2):316.
König H. Isometric embeddings of Euclidean spaces into finite-dimensional ℓp-spaces. Banach Cent Publ. 1995;34:79–87.
Levenstein VI. Bounds on the maximal cardinality of a code with bounded modulus of the inner product. Soviet Math Dokl. 1982;25:526–31.
Li J, Stoica P. MIMO radar with colocated antennas: review of some recent work. IEEE Signal Process Mag. 2007;24(5):106–14.
Li J, Stoica P, editors. MIMO radar signal processing. Wiley; 2009.
Lo Y. A mathematical theory of antenna arrays with randomly spaced elements. IEEE Trans Antennas Propag. 1964;12(3):257–68.
Lo Y. A probabilistic approach to the problem of large antenna arrays. J Res Nat Bur Stand. 1964;68D(5):1011–9.
Pfander GE, Rauhut H, Tanner J. Identification of matrices having a sparse representation. IEEE Trans Signal Process. 2008;56(11):5376–88.
Potter LC, Ertin E, Parker JT, Cetin M. Sparsity and compressed sensing in radar imaging. Proc IEEE. 2010;98(6):1006–20.
Rauhut H, Schnass K, Vandergheynst P. Compressed sensing and redundant dictionaries. IEEE Trans Inf Theory. 2008;54(5):2210–9.
Rihaczek AW. High-resolution radar. Boston: Artech House; 1996. (originally published: McGraw-Hill, NY, 1969).
Strohmer T, Friedlander B. Analysis of sparse MIMO radar. Appl Comput Harmon Anal. 2014;37:361–88.
Strohmer T, Heath R. Grassmannian frames with applications to coding and communications. Appl Comput Harmon Anal. 2003;14(3):257–75.
Strohmer T, Wang H. Accurate imaging of moving targets via random sensor arrays and Kerdock codes. Inverse Probl. 2013;29:085001.
Tang G, Bhaskar BN, Shah P, Recht B. Compressed sensing off the grid. Preprint. arXiv:1207.6053; 2012.
Tang G, Bhaskar BN, Recht B. Sparse recovery over continuous dictionaries: just discretize. Asilomar Conference on Signals, Systems, and Computers; 2013.
Tibshirani R. Regression shrinkage and selection via the lasso. J Roy Statist Soc Ser B. 1996;58(1):267–88.
van der Vaart AW, Wellner JA. Weak convergence and empirical processes. Springer Series in Statistics. New York: Springer-Verlag; 1996. (With applications to statistics).
Vershynin R. Introduction to the non-asymptotic analysis of random matrices. In Eldar YC, Kutyniok G, editors. Compressed sensing: theory and applications. Cambridge University Press; 2012.
Wootters WK, Fields BD. Optimal state-determination by mutually unbiased measurements. Ann Phys. 1989;191(2):363–81.
Acknowledgments
T. Strohmer and H. Wang acknowledge support from the NSF via grant DTRA-DMS 1042939, and from DARPA via grant N66001-11-1-4090.
Appendices
Appendix A
In this appendix we collect some auxiliary results.
Lemma 7.
[38, Proposition 34] Let \({\bf x} \in{\mathbb C}^n\) be a vector with \(x_k \sim{\cal CN}(0,\sigma^2)\). Then for every \(t>0\) one has
The following lemma is a rescaled version of Lemma 3.1 in [29].
Lemma 8.
Let \({\bf A} \in{\mathbb C}^{n \times m}\) be a Gaussian random matrix with \(A_{i,j} \sim{\cal CN}(0,\sigma^2)\) . Then for all \({\bf x}, {\bf y} \in{\mathbb C}^m\) with \(\|{\bf x}\|_2 = \|{\bf y}\|_2=\sqrt{m}\) and all \(t>0\)
with \(C_1 = \frac{4e}{\sqrt{6\pi}}\) and \(C_2 = \sqrt{8}e\).
For convenience we state the following version of Bernstein’s inequality, which will be used in the proof of Lemma 10.
Lemma 9 (See e.g. [37])
Let \(X_1,\dots,X_n\) be independent random variables with zero mean such that
for some constants \(K>0\) and \(v_i> 0, i=1,\dots,n\) . Then, for all \(t>0\)
where \(v:=\sum_{i=1}^{n} v_i\).
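Bernstein's inequality appears in several equivalent forms; one standard two-sided version with these parameters reads \(\mathbb P(|\sum_i X_i| \ge t) \le 2\exp\big(-\frac{t^2}{2(v+Kt)}\big)\). A minimal Monte Carlo sanity check (the bounded uniform variables and the particular constant in the exponent are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, trials = 50, 1.0, 20000
# centered uniform on [-1, 1]: |X_i| <= K = 1 and Var(X_i) = 1/3
X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = X.sum(axis=1)
v = n / 3.0                                  # v = sum of the variances v_i

t = 10.0
empirical = np.mean(np.abs(S) >= t)          # Monte Carlo tail estimate
bound = 2.0 * np.exp(-t**2 / (2.0 * (v + K * t)))
```

Here the empirical tail probability (about 1.4%) sits well below the Bernstein bound (about 31%), as expected for a worst-case inequality.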
We also need the following deviation inequality for unbounded random variables. It is a complex-valued and slightly sharpened version of Lemma 6 in [13]. Our proof strategy differs at certain steps from that of Lemma 6 in [13] (and our proof is a bit shorter).
Lemma 10.
Let \(x_i\) and \(y_i\), \(i=1,\dots,n\), be sequences of i.i.d. complex Gaussian random variables with variance \(\sigma^2\). Then,
Proof
In order to apply Bernstein’s inequality, we need to compute the moments \({\mathbb E} |x_i y_i|^p\). Since \(x_i\) and \(y_i\) are independent, we have
The moments of \(x_i\) are well known:
hence
We apply Bernstein’s inequality (116) with \(K= \sigma^2\) and \(v_i = \frac{\sigma^4}{2}\), \(i=1,\dots,n\), and obtain (117).
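The moment computation in this proof can be checked numerically: for \(x \sim {\cal CN}(0,\sigma^2)\) one has \(|x|^2 = \sigma^2 \cdot \mathrm{Exp}(1)\) in distribution, hence \({\mathbb E}|x|^p = \sigma^p \, \Gamma(p/2+1)\), and by independence \({\mathbb E}|x_i y_i|^p = \big(\sigma^p \, \Gamma(p/2+1)\big)^2\). A short Monte Carlo sketch (sample size and tolerance are arbitrary choices):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
sigma, p, N = 1.5, 3, 400_000
# x ~ CN(0, sigma^2): real and imaginary parts are N(0, sigma^2 / 2)
x = rng.normal(0, sigma / np.sqrt(2), N) + 1j * rng.normal(0, sigma / np.sqrt(2), N)
y = rng.normal(0, sigma / np.sqrt(2), N) + 1j * rng.normal(0, sigma / np.sqrt(2), N)

# |x|^2 / sigma^2 ~ Exp(1), hence E|x|^p = sigma^p * Gamma(p/2 + 1)
theory_single = sigma**p * gamma(p / 2 + 1)
theory_product = theory_single**2            # independence: E|xy|^p = E|x|^p E|y|^p
emp = np.mean(np.abs(x * y)**p)
```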
Lemma 11.
Suppose M is an \(m\times m\) matrix, and α and β are two jointly independent random vectors in \({\mathbb C}^m\) with zero means and \(|\alpha_k|=|\beta_k|=1\) for \(k=1,\dots, m\). If n is a positive constant, then for any \(t>0\) and \(s>0\),
-
1.
if \(|m_{kj}|\le\frac{1}{\sqrt{n}}\) for all \(k, j\) , then
$$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \beta\rangle|\le mt \Big)\ge1-4m{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$(121)
and
$$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \alpha\rangle|\le 2mt \Big)\ge1-8m{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big),\end{aligned}$$(122)
-
2.
if \(|m_{kj}|\le \frac{1}{\sqrt{n}}\) for \(k\neq j\) and \(m_{jj}=1\) , then
$$\begin{aligned}\mathbb P\Big( |\langle M\alpha, \beta\rangle|\le s+mt \Big)\ge1-4{\rm exp}\Big(-\frac{s^2}{4m}\Big)-4m{\rm exp}\Big(-\frac{t^2}{4\frac{m}{n}}\Big),\end{aligned}$$(123)
and
$$\begin{aligned}\mathbb P\Big( m(1-2t)\le|\langle M\alpha, \alpha\rangle|\le m(1+2t) \Big)\ge1-8m{\rm exp}\Big(-\frac{t^2}{2\frac{m}{n}}\Big).\end{aligned}$$(124)
Proof
Note that
where ⊕ denotes addition modulo m.
Let us first assume that \(|m_{kj}|\le\frac{1}{\sqrt{n}}\).
Since α and β are jointly independent, for any l the entries of \(\sum_{j=1}^m m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}\) are jointly independent, and it is easy to check that \({\mathbb E}(m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l})=0\) and \(|m_{j\oplus l, j}\alpha_j\bar\beta_{j\oplus l}|= |m_{j\oplus l, j}|\). Theorem 4.5 in [18] then gives
Taking the union bound over all m choices of l then gives
which proves (121).
Now consider
In contrast to the case above, the entries of \(\sum_{j=1}^m m_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}\) are no longer all jointly independent. However, similar to the proofs of Theorem 5.1 in [27] and Lemma 3 in [31], we observe that for any l we can split the index set \(\{1,\dots,m\}\) into two subsets \(T_l^1,T_l^2\subset \{1,\dots,m\}\), each of size \(m/2\), such that the \(m/2\) variables \(\alpha_j\bar\alpha_{j\oplus l}\) are jointly independent for \(j\in T^1_l\), and analogously for \(T^2_l\). (For convenience we assume here that m is even; with a negligible modification the argument also applies for odd m.) In other words, each of the sums \(\sum_{j\in T^r_l} m_{j\oplus l, j}\alpha_j\bar\alpha_{j\oplus l}\), \(r=1,2\), contains only jointly independent terms.
So for each l,
which implies that
Again, taking the union bound over all m choices of l gives
which proves (122).
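The even/odd splitting along the orbits of \(j \mapsto j \oplus l\) used in the proof of (122) can be made concrete. The following sketch (0-based indices; function names are ours, not from the chapter) 2-colors each orbit of the shift alternately and verifies that within each color class no two terms \(\alpha_j\bar\alpha_{j\oplus l}\) share an index, which is exactly what makes them jointly independent:

```python
from math import gcd

def split_indices(m, l):
    """2-color each orbit of j -> (j + l) mod m alternately, so that within
    each color class the index pairs {j, (j + l) mod m} are pairwise disjoint.
    This works whenever every orbit has even length, e.g. m even and l odd
    (each orbit then has length m / gcd(m, l), which is even)."""
    color = {}
    for start in range(gcd(m, l)):       # one starting point per orbit
        j, c = start, 0
        while j not in color:
            color[j] = c
            j = (j + l) % m
            c ^= 1                       # alternate the color along the orbit
    T1 = [j for j in range(m) if color[j] == 0]
    T2 = [j for j in range(m) if color[j] == 1]
    return T1, T2

def pairs_disjoint(T, m, l):
    # term j involves indices j and (j + l) mod m; check no index repeats
    used = set()
    for j in T:
        pair = {j, (j + l) % m}
        if used & pair:
            return False
        used |= pair
    return True

m, l = 8, 3
T1, T2 = split_indices(m, l)
```

For m = 8, l = 3 this yields the even and odd indices, each class of size m/2 with pairwise disjoint index pairs.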
Now let us assume that \(|m_{kj}|\le \frac{1}{\sqrt{n}}\) for \(k\neq j\) and \(m_{jj}=1\).
Since α and β are jointly independent and \(|\alpha_j\bar\beta_j|=1\),
Similar to the proof of (126) above, we have that
Together with (130), it follows that
which proves (123).
Finally,
Then (124) follows from an argument similar to the proof of (122), combined with the triangle inequality.
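Bounds such as (121) are easy to sanity-check by simulation. The sketch below (dimensions, the choice of M, and the threshold t are illustrative assumptions) estimates the probability on the left-hand side by Monte Carlo and compares it with the claimed lower bound:

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 64
trials = 2000
# M with |m_kj| <= 1/sqrt(n): entries are random phases of magnitude 1/sqrt(n)
M = np.exp(2j * np.pi * rng.random((m, m))) / np.sqrt(n)

t = 5.0                                  # large enough that the bound is nontrivial
hits = 0
for _ in range(trials):
    alpha = np.exp(2j * np.pi * rng.random(m))   # Steinhaus vectors
    beta = np.exp(2j * np.pi * rng.random(m))
    if abs(np.vdot(beta, M @ alpha)) <= m * t:   # event in (121)
        hits += 1
empirical = hits / trials
bound = 1 - 4 * m * np.exp(-t**2 / (4 * m / n))  # right-hand side of (121)
```

In this regime the empirical probability is essentially 1, comfortably above the bound of roughly 0.5.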
Appendix B
We consider a general linear system of equations \({\Psi} {\bf x} = {\bf y}\), where \({\Psi} \in{\mathbb C}^{n \times m}\), \({\bf x} \in{\mathbb C}^m\) and \(n \le m\). We introduce the following generic K-sparse model:
-
The support \(I \subset \{1,\dots,m\}\) of the K nonzero coefficients of \({\bf x}\) is selected uniformly at random.
-
The nonzero entries of \({\operatorname{sgn}}({\bf x})\) form a Steinhaus sequence, i.e., the \({\operatorname{sgn}}({\bf x}_k):={\bf x}_k/|{\bf x}_k|\), \(k\in I\), are i.i.d. complex random variables uniformly distributed on the unit circle.
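A generator for the generic K-sparse model might look as follows (a sketch with 0-based indexing; the unit default magnitudes are an arbitrary choice, since the model only constrains the support and the phases):

```python
import numpy as np

def generic_k_sparse(m, K, rng, magnitudes=None):
    """Draw x in C^m from the generic K-sparse model: support chosen
    uniformly at random, phases (signs) forming a Steinhaus sequence.
    Magnitudes are left free by the model; default to 1 here."""
    x = np.zeros(m, dtype=complex)
    support = rng.choice(m, size=K, replace=False)   # uniform random support
    phases = np.exp(2j * np.pi * rng.random(K))      # uniform on unit circle
    mags = np.ones(K) if magnitudes is None else np.asarray(magnitudes, float)
    x[support] = mags * phases
    return x, support

rng = np.random.default_rng(0)
x, I = generic_k_sparse(m=64, K=5, rng=rng)
```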
The following theorem is a slightly extended version of Theorem 1.3 in [5], see [31] for its proof.
Theorem 5.
Given \({\bf y} = {\Psi} {\bf x} + {\bf w}\) , where \({\Psi}\) has columns of unit \(\ell_2\)-norm, \({\bf x}\) is drawn from the generic K-sparse model and \({\bf w}_i \sim{\cal CN}(0,\sigma^2)\) . Assume that
where \(C_0>0\) is a constant independent of \(n,m\) . Furthermore, suppose
for some constant \(c_0> 0\) and that
Then the solution \(\hat{\bf x}\) to the debiased lasso computed with \(\lambda = 2 \sigma \sqrt{2 \log m}\) obeys
and
with probability at least
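For readers who want to experiment with Theorem 5, the sketch below implements one common reading of the debiased lasso: solve the ℓ1-penalized least-squares problem, then refit by unpenalized least squares on the estimated support. The solver (plain ISTA), all dimensions, the value of λ, and the noiseless demo are our own illustrative assumptions, not the authors' setup:

```python
import numpy as np

def soft_threshold(z, tau):
    # complex soft thresholding: shrink magnitudes by tau, keep phases
    mag = np.abs(z)
    return (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-30)) * z

def debiased_lasso(Psi, y, lam, iters=1000):
    """Lasso via plain ISTA, followed by a least-squares refit on the
    estimated support (the 'debiasing' step)."""
    n, m = Psi.shape
    L = np.linalg.norm(Psi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(m, dtype=complex)
    for _ in range(iters):
        grad = Psi.conj().T @ (Psi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    I = np.flatnonzero(np.abs(x) > 1e-8)     # estimated support
    x_hat = np.zeros(m, dtype=complex)
    if I.size:
        x_hat[I] = np.linalg.lstsq(Psi[:, I], y, rcond=None)[0]
    return x_hat, I

# tiny noiseless demo: the refit is exact once the support is identified
rng = np.random.default_rng(3)
n, m, K = 40, 80, 3
Psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
Psi /= np.linalg.norm(Psi, axis=0)           # unit-norm columns, as in Theorem 5
x_true = np.zeros(m, dtype=complex)
sup = rng.choice(m, K, replace=False)
x_true[sup] = np.exp(2j * np.pi * rng.random(K))   # Steinhaus signs
y = Psi @ x_true
x_hat, I = debiased_lasso(Psi, y, lam=0.05)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The least-squares refit removes the shrinkage bias of the lasso, which is the point of the debiasing step: on the (correctly identified) support the estimate is no longer pulled toward zero by λ.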
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this chapter
Strohmer, T., Wang, H. (2015). Adventures in Compressive Sensing Based MIMO Radar. In: Balan, R., Begué, M., Benedetto, J., Czaja, W., Okoudjou, K. (eds) Excursions in Harmonic Analysis, Volume 3. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-13230-3_13
Print ISBN: 978-3-319-13229-7
Online ISBN: 978-3-319-13230-3