Generalized compressive detection of stochastic signals using Neyman–Pearson theorem

Compressive sensing (CS) enables reconstructing a sparse signal from fewer samples than required by the classic Nyquist sampling theorem. In general, CS signal recovery algorithms have high computational complexity. However, several signal processing problems, such as signal detection and classification, can be tackled directly in the compressive measurement domain, so recovering the original signal from its compressive measurements is unnecessary in these applications. In this paper, we consider detecting stochastic signals with known probability density function from their compressive measurements. We refer to this as the compressive detection problem to highlight that the detection task is achieved by exploring the compressive measurements directly. The Neyman–Pearson (NP) theorem is applied to derive the NP detectors for Gaussian and non-Gaussian signals. Our work is more general than much of the existing literature in that we do not require orthonormality of the measurement matrix, and the compressive detection problem for stochastic signals is generalized from the case of Gaussian signals to the case of non-Gaussian signals. Theoretical performance results for the proposed NP detectors, in terms of their detection probability and false alarm rate averaged over the random measurement matrix, are established and verified via extensive computer simulations.


Introduction
Compressive sensing (CS) is an important innovation in the field of signal processing. With CS, if the representation of a signal in a particular linear basis is sparse, we can sample it at a rate significantly smaller than that dictated by the classic Nyquist sampling theorem. To ensure that the obtained compressive measurements preserve sufficient information so that signal reconstruction is feasible, the measurement matrix needs to satisfy the well-known restricted isometry property (RIP). Fortunately, measurement matrices whose elements are independently drawn from a sub-Gaussian distribution meet this requirement with high probability [1][2][3][4][5][6].
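To make the energy-preservation intuition behind the RIP concrete, here is a small Python sketch (not from the paper; the entry variance 1/M used here is one common normalization, whereas the experiments later in this paper use variance 1/N):

```python
import math
import random

random.seed(0)
N, M, TRIALS = 128, 32, 300

# A K-sparse test vector with K = 4 nonzero entries.
x = [0.0] * N
for idx in (5, 40, 77, 120):
    x[idx] = 1.0
x_energy = sum(v * v for v in x)

def measure(x):
    """y = Phi x with i.i.d. entries Phi_ij ~ N(0, 1/M), so E||y||^2 = ||x||^2."""
    sigma = 1.0 / math.sqrt(M)
    return [sum(random.gauss(0.0, sigma) * v for v in x) for _ in range(M)]

# The ratio ||Phi x||^2 / ||x||^2 concentrates around 1 over random draws of Phi.
ratios = [sum(v * v for v in measure(x)) / x_energy for _ in range(TRIALS)]
mean_ratio = sum(ratios) / TRIALS
print(round(mean_ratio, 2))  # concentrates near 1.0
```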
To reconstruct the original signal from its compressive measurements, various algorithms have been proposed in the literature. Roughly speaking, they belong to three categories: relaxation-based algorithms, pursuit algorithms, and Bayesian algorithms. Relaxation-based algorithms approximate the non-smooth and non-convex ℓ0-norm employed in the signal reconstruction-oriented optimization problem with functions that are easier to handle; the relaxed problem is then solved via standard numerical techniques. Well-known instances of relaxation-based algorithms include basis pursuit (BP) [7] and FOCUSS (focal underdetermined system solver) [8], which approximate the ℓ0-norm using the ℓ1-norm and the ℓp-norm with p < 1, respectively. Pursuit algorithms search for a solution to the sparse signal recovery problem by taking a sequence of greedy decisions on the signal support. Algorithms belonging to this family include, but are not limited to, matching pursuit (MP) [9], orthogonal matching pursuit (OMP) [10], stagewise OMP (StOMP) [11], iterative hard thresholding (IHT) [12], hard thresholding pursuit (HTP) [13], compressive sampling matching pursuit (CoSaMP) [14] and subspace pursuit (SP) [15]. Bayesian algorithms formulate the signal reconstruction problem as a Bayesian inference problem and apply statistical tools to solve it. Typical Bayesian algorithms are Bayesian compressive sensing (BCS) [16] and the Laplace prior-based BCS [17].
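As a concrete illustration of the pursuit family, the following is a minimal OMP sketch in Python, assuming NumPy is available; it is a simplified rendering of the greedy support-selection idea, not the exact algorithm of [10]:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily grow the support, then
    re-fit all selected coefficients by least squares each iteration."""
    M, N = Phi.shape
    support, residual = [], y.copy()
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

# Noiseless recovery of a 4-sparse vector from 40 random measurements.
N, M, K = 128, 40, 4
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = omp(Phi, Phi @ x, K)
print(float(np.linalg.norm(x - x_hat)))
```

With noiseless measurements and a well-conditioned random Φ, the reconstruction error is at numerical-precision level.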
Despite the great efforts devoted to developing efficient signal reconstruction methods for CS, researchers have also noticed that there are many signal processing applications, such as detection, classification and parameter estimation, where recovering the original signal is not necessary, given that its compressive measurements are available. In [18], the technique of compressive signal processing (CSP) was advocated, where the deterministic signal detection problem is addressed in the compressive measurement domain. Duarte et al. [19] proposed another deterministic signal detection algorithm based on OMP. The developed technique can complete the signal detection task using fewer compressive measurements and iterations than those needed by the OMP algorithm to recover the original signal. However, this work did not establish the detection probability and the false alarm rate analytically, and the detection threshold was found by experiments.
For stochastic signals, the compressive detection problem (i.e., detecting stochastic signals in the compressive measurement domain) can be formulated as the following binary hypothesis testing problem:

H0: y = Φn,   H1: y = Φ(θ + n),   (1)

where y = [y1, y2, ..., yM]^T is the compressive measurement vector; Φ is the M × N measurement matrix (M < N) whose elements are assumed to be drawn independently from a Gaussian distribution, and which should satisfy the restricted isometry property (RIP) [18]; θ is the original stochastic signal with known probability density function (PDF); and n represents the additive noise. The above stochastic signal detection problem has been considered in [20], where the authors proposed to design the projection matrices by maximizing the mutual information between projected signals and signal class labels. Sparse event detection in sensor networks under a CS framework was considered in [21]. The problem of detecting spectral targets using noisy incoherent projections was investigated in [22,23]. By employing a linear subspace model for sparse signals, Wimalajeewa et al. [24] solved the stochastic detection problem with a reduced number of measurements for a given performance. Several detection algorithms, depending on the availability of information about the subspace model and the sparse signal, have also been developed. Specifically, Wang et al. [25] and Rao et al. [26] considered the compressive detection problem under the assumption that the row vectors of the measurement matrix are orthonormal. Wang et al. [25] derived the theoretical performance limits for detecting an arbitrary random signal from compressive measurements. Rao et al. [26] studied the problem of detecting sparse random signals using compressive measurements and also discussed the problem of detecting a stochastic signal with known PDF. Recently, in [27], the authors applied a sparse random projection matrix in place of dense projection matrices and developed Neyman–Pearson (NP) detectors for stochastic signals, eliminating the constraint on the orthonormality of the measurement matrix. However, [27] did not consider the case of correlated signal detection.
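The hypothesis testing model above can be simulated directly. The sketch below (illustrative parameter values only) draws y = Φ(θ + n) under H1 and y = Φn under H0 and compares the average measurement energies, which is essentially the quantity the NP detectors derived later threshold on:

```python
import math
import random

random.seed(1)
N, M = 300, 60
alpha, beta = 1.0, 1.0  # signal precision alpha, noise precision beta

# Measurement matrix with i.i.d. N(0, 1/N) entries, as assumed later in the paper.
Phi = [[random.gauss(0.0, 1.0 / math.sqrt(N)) for _ in range(N)] for _ in range(M)]

def draw_y(h1):
    """One measurement vector: y = Phi(theta + n) under H1, y = Phi n under H0."""
    x = [
        (random.gauss(0.0, 1.0 / math.sqrt(alpha)) if h1 else 0.0)
        + random.gauss(0.0, 1.0 / math.sqrt(beta))
        for _ in range(N)
    ]
    return [sum(p * v for p, v in zip(row, x)) for row in Phi]

def avg_energy(h1, runs=20):
    return sum(sum(v * v for v in draw_y(h1)) for _ in range(runs)) / runs

e0, e1 = avg_energy(False), avg_energy(True)
print(e1 > e0)  # with alpha = beta, H1 roughly doubles the expected energy
```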
The contribution of this paper is as follows. First, we tackle the compressive detection problem with the elements of the measurement matrix drawn independently from a Gaussian PDF using the NP theorem; the context of this work is therefore both CS and random projection. As in [27], we do not assume that the row vectors of the measurement matrix are orthonormal to one another (i.e., the constraint ΦΦ^T = I_M is not required, but E(ΦΦ^T) = I_M is assumed when obtaining the average performance of the proposed NP detectors). In fact, the assumption that the measurement matrix consists of orthonormal rows is not appropriate in some cases. This work considers detecting Gaussian signals with independent samples and with correlated samples. The average performance of the proposed NP detectors under Gaussian random measurement matrices is found. The second contribution of this work is that the compressive detection problem for stochastic signals is generalized from the case of Gaussian signals to the case of non-Gaussian signals by invoking the central limit theorem; in this case, the performance analysis of the proposed NP detectors is carried out asymptotically. In other words, we consider in this paper a generalized compressive detection problem: in applications, the measurement matrix is not necessarily orthonormal and the distribution of the signal is not necessarily Gaussian, and the detectors we develop apply to these cases. We illustrate the performance of the developed signal detectors using extensive computer experiments.
It is worthwhile to point out that although the development of the NP detectors proposed in this paper assumes the availability of the PDF of the stochastic signal of interest, the practical application of the signal detectors only requires the mean and the covariance of the original stochastic signal to be known a priori. As a result, the proposed techniques can be used to detect non-stochastic signals by utilizing their sample mean and sample covariance in place of the true mean and covariance. We will illustrate the detection of non-stochastic signals in the computer experiment section (see Sect. 5).
The rest of the paper is organized as follows. Section 2 presents the NP detector and its performance analysis for Gaussian signals with diagonal covariance (i.e., signals with independent samples). Section 3 extends the results of Section 2 to the case where the Gaussian signal has non-diagonal covariance (i.e., signals with correlated samples). Section 4 considers the case of non-Gaussian signals and resorts to asymptotic analysis to establish the detector. Computer experiment results are given in Section 5, and Section 6 concludes the paper.

Compressive detection of zero-mean Gaussian signals with diagonal covariance
We consider the binary hypothesis testing problem in (1).
Let the noise vector n be Gaussian distributed with mean 0_{N×1}, where 0_{N×1} is an N × 1 zero vector, and covariance β⁻¹ I_N, where I_N is an N × N identity matrix. In contrast to existing works such as [25,26], the measurement matrix Φ in (1) may not satisfy ΦΦ^T = I_M, i.e., its row vectors are not necessarily orthonormal. Assuming that the elements in θ = [θ1, θ2, ..., θN]^T are independent and identically distributed (i.i.d.) Gaussian random variables with variance α⁻¹, we have

θ ∼ N(0_{N×1}, α⁻¹ I_N).   (2)

We assume that the values of α and β are known; in other words, the PDFs of the signal vector and the noise are completely specified. Let P_F denote the probability of choosing hypothesis H1 when hypothesis H0 is true, i.e., the false alarm rate, and let P_D be the probability of choosing hypothesis H1 when hypothesis H1 is true, i.e., the detection probability. We approach the detection problem (1) by applying the NP theorem in the compressive measurement domain.
First, we find the PDF of y under hypotheses H0 and H1. Since n is Gaussian, under hypothesis H0 we have

p0(y) = (2π)^{-M/2} |β⁻¹ ΦΦ^T|^{-1/2} exp(-(β/2) y^T (ΦΦ^T)⁻¹ y),   (3)

where | · | denotes the matrix determinant. On the other hand, using (2), we can express the distribution of (θ + n) as

(θ + n) ∼ N(0_{N×1}, (α⁻¹ + β⁻¹) I_N).   (4)

Then, when hypothesis H1 holds, the PDF of y becomes

p1(y) = (2π)^{-M/2} |(α⁻¹ + β⁻¹) ΦΦ^T|^{-1/2} exp(-(1/(2(α⁻¹ + β⁻¹))) y^T (ΦΦ^T)⁻¹ y).   (5)

Let r(y) be the likelihood ratio r(y) = p1(y)/p0(y). The NP theorem says that we decide hypothesis H1 when r(y) > η and hypothesis H0 when r(y) < η, where η is the detection threshold. η is obtained by solving

P(r(y) > η | H0) = α_F,   (6)

where α_F is the user-defined false alarm rate.
Taking the logarithm of r(y), we obtain an equivalent NP test given by

y^T D y > γ (decide H1),   y^T D y < γ (decide H0),   (7)

where D = (β²/(α + β)) (ΦΦ^T)⁻¹ and γ = M log((α + β)/α) + 2 log η. In the following development, we find the average performance of the proposed NP detector in (7).
From (7), we can define the following test statistic using the compressive measurements y:

t = y^T D y = (β²/(α + β)) y^T (ΦΦ^T)⁻¹ y.   (8)

t is a sufficient statistic for detecting independent Gaussian signals. Under H0, y = Φn, so

t = (β²/(α + β)) n^T M0 n,   (9)

where M0 = Φ^T (ΦΦ^T)⁻¹ Φ. As a projection matrix, M0 is idempotent, and it can be decomposed as

M0 = [U_s U_n] diag(I_{r_M}, 0_{(N−r_M)×(N−r_M)}) [U_s U_n]^T,   (10)

where r_M is the rank of M0 (equal to M with probability one for a Gaussian random Φ); I_{r_M} is an r_M × r_M identity matrix; the off-diagonal blocks 0_{r_M×(N−r_M)} and 0_{(N−r_M)×r_M} and the diagonal block 0_{(N−r_M)×(N−r_M)} are zero matrices of the indicated dimensions; U_s is the matrix of eigenvectors corresponding to the eigenvalues equal to one; and U_n is the matrix of eigenvectors corresponding to the eigenvalues equal to zero. Letting ω = U_s^T n, we can verify using (10) and the definition of ω that the mean of ω is

E(ω) = 0_{M×1},   (11)

and the covariance of ω is

D(ω) = β⁻¹ I_M.   (12)

Thus, ω has the PDF N(0_{M×1}, β⁻¹ I_M). As a result, β n^T M0 n = β ω^T ω is the sum of squares of M i.i.d. standard Gaussian random variables, so β n^T M0 n has a chi-square distribution with M degrees of freedom. In other words, under hypothesis H0, t(α + β)/β has a chi-square distribution with M degrees of freedom.
Under hypothesis H1, y = Φ(θ + n), so

t = (β²/(α + β)) (θ + n)^T M0 (θ + n).   (13)

Following the derivation that leads to the distribution of t under hypothesis H0, and noting that (θ + n) has covariance ((α + β)/(αβ)) I_N, we can show that

(αβ/(α + β)) (θ + n)^T M0 (θ + n) ∼ χ²(M).   (14)

In summary, we have

t(α + β)/β ∼ χ²(M) under H0,   t α/β ∼ χ²(M) under H1,   (15)

where χ²(M) denotes the chi-square distribution with M degrees of freedom. Combining (7), (8) and (15) yields

P_F = Q_{χ²(M)}(γ(α + β)/β),   P_D = Q_{χ²(M)}(γ α/β),   (16)

where Q_{χ²(M)}(·) denotes the right-tail probability (survival function) of the chi-square distribution with M degrees of freedom. When the false alarm rate P_F is set to α_F, according to (16), we obtain the detection threshold using

γ = (β/(α + β)) Q⁻¹_{χ²(M)}(α_F).   (17)

The detection probability in this case is

P_D = Q_{χ²(M)}((α/(α + β)) Q⁻¹_{χ²(M)}(α_F)).   (18)
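The threshold (17) and detection probability (18) are straightforward to evaluate numerically. The sketch below does so in pure Python using the closed-form chi-square survival function, which holds for even M (a simplifying assumption of this sketch):

```python
import math

def chi2_sf(x, m):
    """Right-tail probability of a chi-square with even m degrees of freedom
    (closed form via the Poisson sum; sufficient for this sketch)."""
    assert m % 2 == 0
    term, total = 1.0, 0.0
    for j in range(m // 2):
        if j > 0:
            term *= (x / 2.0) / j
        total += term
    return math.exp(-x / 2.0) * total

def chi2_isf(p, m, lo=0.0, hi=1e4):
    """Invert the survival function by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_sf(mid, m) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha, beta, M, a_F = 1.0, 1.0, 40, 0.1
# Threshold from (17): gamma = (beta/(alpha+beta)) * Qinv(alpha_F)
gamma = beta / (alpha + beta) * chi2_isf(a_F, M)
# Detection probability from (18): P_D = Q((alpha/(alpha+beta)) * Qinv(alpha_F))
P_D = chi2_sf(alpha / (alpha + beta) * chi2_isf(a_F, M), M)
print(round(P_D, 2))  # close to 0.96 for these illustrative values
```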

Compressive detection of zero-mean Gaussian signals with non-diagonal covariance
In this section, we generalize the results of the previous section to the case where the Gaussian signals have covariances that are no longer diagonal. In other words, the elements in θ may be correlated (compare, e.g., [27]). Assume that θ now has a covariance C_N, which is positive semidefinite. The noise is still zero-mean Gaussian distributed as in Sect. 2. The PDFs of the compressive measurement vector y under hypotheses H0 and H1 are

y ∼ N(0_{M×1}, β⁻¹ ΦΦ^T) under H0,   y ∼ N(0_{M×1}, Φ C Φ^T) under H1,   (19)

where C = C_N + β⁻¹ I_N is the covariance of (θ + n). According to the NP theorem, we decide hypothesis H1 if r(y) = p1(y)/p0(y) > η. The equivalent NP test is

y^T (β (ΦΦ^T)⁻¹ − (Φ C Φ^T)⁻¹) y > γ (decide H1),   (20)

where γ = 2 log η + log(|Φ C Φ^T| / |β⁻¹ ΦΦ^T|). Note that although C_N is only positive semidefinite and may be singular, it can be shown as follows that C is invertible. Since β⁻¹ > 0, every eigenvalue of C = C_N + β⁻¹ I_N is at least β⁻¹, so the rank of C is N. C therefore has full rank and, as a result, is positive definite. It admits the Cholesky decomposition C = R R^T, where R is an N × N non-singular matrix.
From (20), we can define t = y^T (β(ΦΦ^T)⁻¹ − (Φ C Φ^T)⁻¹) y as the test statistic. Under hypothesis H1, y = Φ(θ + n), and we have

t = p^T Q p,   (21)

where p = R⁻¹(θ + n) is a Gaussian random vector with zero mean and covariance I_N, and Q = R^T Φ^T (β(ΦΦ^T)⁻¹ − (Φ C Φ^T)⁻¹) Φ R is a real symmetric matrix with rank less than or equal to M. Applying the eigendecomposition Q = A diag(λ1, ..., λM, 0, ..., 0) A^T, we have

t = s^T diag(λ1, ..., λM, 0, ..., 0) s,   (22)

where λ1, λ2, ..., λM are the largest M eigenvalues of Q arranged in descending order, A is orthonormal, and s = A^T p ∼ N(0_{N×1}, I_N), so that the elements of s are independent of one another. Putting (22) into (21) gives the test statistic t under H1 as

t = Σ_{k=1}^{M} λk sk².   (23)

We next derive the detection probability of the NP detector given in (20), which is

P_D = P(t > γ | H1) = ∫_γ^∞ p(t) dt,   (24)

where in this case t is defined in (23). We find P_D by evaluating the cumulative distribution function (CDF) of t = Σ_{k=1}^{M} λk sk². For this purpose, the characteristic function of t, which is the Fourier transform of the PDF p(t), is computed first: φt(ω) = E(exp(jωt)). Applying the fact that the sk, k = 1, 2, ..., M, are mutually independent and each sk² has a chi-square distribution with one degree of freedom, we have

φt(ω) = Π_{k=1}^{M} (1 − 2jωλk)^{−1/2}.   (27)

Performing the inverse Fourier transform of φt(ω) in (27) and putting the obtained p(t) into (24) gives

P_D = ∫_γ^∞ [ (1/2π) ∫_{−∞}^{∞} φt(ω) exp(−jωt) dω ] dt,   (28)

which completes the derivation of P_D, the detection probability of the NP detector in (20). We proceed to consider the false alarm rate of the detector (20). Following the process that leads to (23), we can show that the test statistic under H0 is

t = Σ_{k=1}^{M} λ'k s'k²,   (29)

where s' = A'^T p' with p' = √β n ∼ N(0_{N×1}, I_N), A' diag(λ'1, ..., λ'M, 0, ..., 0) A'^T is the eigendecomposition of β⁻¹ Φ^T (β(ΦΦ^T)⁻¹ − (Φ C Φ^T)⁻¹) Φ, and λ'1, λ'2, ..., λ'M are its M largest eigenvalues. We can also verify that the s'k, k = 1, 2, ..., M, are mutually independent and each s'k² has a chi-square distribution with one degree of freedom. The false alarm rate can then be calculated using

P_F = P(t > γ | H0) = ∫_γ^∞ p'(t) dt.   (30)

With (30), the detection threshold γ can be found by first setting the false alarm rate to a given value α_F and then solving for γ. Numerical methods are available for solving such Volterra integral equations [28].
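Rather than inverting the characteristic function, the weighted-chi-square characterization of t can also be exploited by Monte Carlo. The following sketch (assuming NumPy; the AR(1)-style covariance and all dimensions are illustrative choices, not from the paper) draws t under both hypotheses from the eigenvalues derived above, sets the threshold for a target false alarm rate, and estimates P_D:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, beta, a_F = 60, 20, 1.0, 0.1

# Toy correlated signal covariance (AR(1)-like), positive semidefinite.
C_N = 0.9 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
C = C_N + np.eye(N) / beta                     # covariance of theta + n
Phi = rng.standard_normal((M, N)) / np.sqrt(N)

# Kernel of the test statistic in (20).
B = beta * np.linalg.inv(Phi @ Phi.T) - np.linalg.inv(Phi @ C @ Phi.T)

# Eigenvalues under H1 (via C = R R^T) and under H0; t is then a weighted
# sum of independent chi-square(1) variables under each hypothesis.
R = np.linalg.cholesky(C)
lam1 = np.linalg.eigvalsh(R.T @ Phi.T @ B @ Phi @ R)
lam0 = np.linalg.eigvalsh(Phi.T @ B @ Phi / beta)

trials = 50000
t0 = (rng.standard_normal((trials, N)) ** 2) @ lam0
t1 = (rng.standard_normal((trials, N)) ** 2) @ lam1
gamma = np.quantile(t0, 1.0 - a_F)   # threshold for the target false alarm rate
P_D = float(np.mean(t1 > gamma))
print(P_D > a_F)  # the detector beats the chance level a_F
```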

Compressive detection of zero-mean non-Gaussian signals
This section considers detecting zero-mean stochastic signals that have non-Gaussian distributions. The corresponding hypothesis testing problem is still given by (1), where in this case the additive noise is also non-Gaussian. The measurement matrix contains i.i.d. random Gaussian entries with zero mean and variance 1/N. The signal of interest θ and the noise n have covariances C_N and β⁻¹ I_N, respectively, where C_N is positive semidefinite and β > 0. We further assume that θ and n are independent of each other. The proposed NP signal detector for this scenario is derived as follows using asymptotic (large-sample) analysis.
Let E(x) and D(x) denote the statistical expectation and the covariance of the random quantity x. It can be shown that in the non-Gaussian scenario, we still have under hypothesis H1 that E(θ + n) = 0_{N×1} and D(θ + n) = C, where C = C_N + β⁻¹ I_N. The zero-mean property of the compressive measurements can also be established using y_i = Σ_{j=1}^{N} Φ_{i,j}(θ_j + n_j) and E(θ_j + n_j) = 0, where i = 1, 2, ..., M. Let σ1² = D(y_i) = Σ_{j=1}^{N} D(Φ_{i,j}(θ_j + n_j)) and σ2³ = Σ_{j=1}^{N} ρ_j, where ρ_j = E(|Φ_{i,j}(θ_j + n_j)|³). The central limit theorem [29] tells us that if σ2/σ1 → 0 as N → ∞, then y_i follows a Gaussian distribution, y_i → N(0, D(y_i)). We next evaluate lim_{N→∞} σ2/σ1 to establish the PDF of the compressive measurements under hypothesis H1. We have from (1)

D(y_i) = Σ_{j=1}^{N} Σ_{k=1}^{N} Φ_{i,j} Φ_{i,k} C_{j,k} = Φ_{i,•} C Φ_{i,•}^T,   (31)

where Φ_{i,•} denotes the ith row of the measurement matrix Φ, and C_{j,k} is the (j, k)th element of the covariance C. Using (31), σ2/σ1 can be bounded from above. Since the elements of Φ are drawn independently from a Gaussian distribution with zero mean and variance 1/N, the bound yields lim_{N→∞} σ2/σ1 = 0. Applying the central limit theorem [29] then gives, as N → ∞,

y_i → N(0, Φ_{i,•} C Φ_{i,•}^T).   (34)

On the other hand, the covariance between two compressive measurements is

E(y_i y_k) = Φ_{i,•} C Φ_{k,•}^T.   (35)

Combining (34) and (35), and using (31), we have that under hypothesis H1, as N → ∞, the PDF of the compressive measurements y converges asymptotically to

y ∼ N(0_{M×1}, Φ C Φ^T).   (36)

Following a similar procedure, it can be shown that under hypothesis H0, as N → ∞,

y ∼ N(0_{M×1}, β⁻¹ ΦΦ^T).   (37)

Comparing (36) and (37) with (19) reveals that in the non-Gaussian signal scenario, as N → ∞, the NP signal detector is given by (20) and its performance in terms of the detection probability and false alarm rate has been derived in (28) and (30).
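The asymptotic Gaussianity claimed above can be checked empirically. The sketch below (illustrative parameters) forms compressive measurements of a uniformly distributed signal in Gaussian noise and verifies that their sample kurtosis is close to 3, the Gaussian value:

```python
import math
import random

random.seed(4)
N, TRIALS = 300, 3000

def one_measurement():
    """y_i = sum_j Phi_ij (theta_j + n_j), with Phi_ij ~ N(0, 1/N),
    theta_j uniform on [-2, 2] (non-Gaussian) and n_j standard Gaussian."""
    row = [random.gauss(0.0, 1.0 / math.sqrt(N)) for _ in range(N)]
    x = [random.uniform(-2.0, 2.0) + random.gauss(0.0, 1.0) for _ in range(N)]
    return sum(r * v for r, v in zip(row, x))

ys = [one_measurement() for _ in range(TRIALS)]
m = sum(ys) / TRIALS
var = sum((v - m) ** 2 for v in ys) / TRIALS
kurt = sum((v - m) ** 4 for v in ys) / TRIALS / var ** 2
print(round(kurt, 1))  # close to 3, the Gaussian value
```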
We now consider the special case where the signal θ has covariance α⁻¹ I_N, so that C = (α⁻¹ + β⁻¹) I_N. In this case, (36) and (37) become

y ∼ N(0_{M×1}, (α⁻¹ + β⁻¹) ΦΦ^T) under H1,   (38)

y ∼ N(0_{M×1}, β⁻¹ ΦΦ^T) under H0.   (39)

The corresponding NP detector is given in (7), and its performance results are summarized in (16).

Computer experiments
In this section, we illustrate the performance of the proposed NP detectors for Gaussian and non-Gaussian signals via computer experiments. The obtained results reveal the relationship between the detection performance and factors such as the false alarm rate, the number of compressive measurements used for detection and the signal-to-noise ratio (SNR). In all the experiments, the elements of the compressive measurement matrix are drawn independently from a Gaussian distribution with zero mean and variance 1/N. For clarity, we show the performance of the NP detectors developed in this paper only.

Compressive detection of Gaussian random signals with independent samples
We first consider the NP detector (7) for Gaussian signals with diagonal covariance. In Fig. 1, we show the receiver operating characteristic (ROC) curves that illustrate the relationship between the detection probability of the NP detector and the false alarm rate. We include in the figure the theoretical detection probability given by (18) (denoted as 'predict'), and the one from the Monte Carlo simulation of 5,000 independent runs using the M × N measurement matrix whose elements are drawn from a Gaussian distribution (denoted as 'statistic'). For comparison, we also plot in the figure the detection probability of the detector in (7) (denoted as 'statistic-I') when only the first M elements of the original signal are used for detection (in this case, a signal truncation instead of a signal compression is performed).
To generate Fig. 1, the original signal has N = 500 samples. We consider two cases, one with SNR = 0 dB and the other with SNR = −5 dB. The SNR is defined as 10 log10(α⁻¹/β⁻¹), where α⁻¹ and β⁻¹ are the variances of the original signal and the noise, respectively. In each SNR case, the number of compressive measurements M is varied as M = 0.05N, 0.1N, 0.2N, 0.4N. The obtained ROC curves are summarized in Fig. 1a for SNR = −5 dB and Fig. 1b for SNR = 0 dB.
We can see that the simulation results closely match the theoretical ones, which validates our analytical derivation of the detection probability P_D of the NP detector in (7) (see (18)). Besides, comparing Fig. 1a with Fig. 1b shows that a higher SNR leads to a larger P_D, as expected. We can also see that P_D gradually improves as the number of compressive measurements M increases, which is also expected.
Interestingly, we also find from Fig. 1 that the detection performance using the Gaussian compressive measurement matrix and the signal truncation is the same. In other words, under the i.i.d. Gaussian signal model, the performance of compressive signal detection using M compressive measurements is identical to the case where only the first M samples of the original signal are used for the detection task. The underlying reasons are as follows. First, the obtained simulation results can be explained theoretically by examining (18), which reveals that the detection probability of the detector in (7) depends only on the rank of the measurement matrix. In this simulation, the measurement matrices of the compressive signal detector and the signal truncation-based detector both have a rank of M, which renders their detection performance identical. Another important insight is that the obtained simulation results may stem from the signal model adopted in the development of the compressive detector (7). In particular, the original signal θ and the noise n are both assumed to be composed of i.i.d. Gaussian samples and to be independent of each other. As a result, roughly speaking, compressing the noise-corrupted original signal with the compressive measurement matrix and directly truncating the signal yield the same detection performance observed in Fig. 1.
The compressive signal detection technique may outperform the direct signal truncation-based approach when, e.g., the original signal has correlated samples or some of its samples have significantly larger variance than others. In these cases, due to its averaging nature, the use of the compressive measurement matrix may provide better detection performance, because direct signal truncation may not retain all the signal samples with large power. As the original signal no longer has a diagonal covariance matrix with identical diagonal elements, the compressive detectors proposed in Sects. 3 and 4 need to be invoked to address the signal detection task. We will further investigate the impact of the properties of the stochastic signals to be detected on the performance of the proposed compressive detectors in future work.
To better illustrate the impact of the number of compressive measurements on the detection performance, we repeat the simulation experiment that produced Fig. 1. This time, we fix the false alarm rate at 0.01 and 0.1, and generate two sets of results, shown in Fig. 2a for α_F = 0.1 and Fig. 2b for α_F = 0.01. In each subfigure, we plot the detection probability as a function of M/N. Other settings are the same as in the previous experiment. We observe that increasing the number of compressive measurements does lead to improved detection probability, but the amount of improvement is not as significant as that brought by increasing the SNR.

Compressive detection of non-Gaussian signals
We consider the compressive detection of non-Gaussian signals in this subsection. Two cases are simulated, one with the signal of interest having a uniform distribution on [−2, 2], and the other a t-distribution with 8 degrees of freedom. In both cases, the elements of the signal of interest θ are i.i.d. with mean 0 and variance 4/3. The noise n has a Gaussian distribution with covariance β⁻¹ I_N. Since both θ and n have diagonal covariance, as shown in Sect. 4 via asymptotic analysis [see (38) and (39)], the NP detector for this case is given in (7) and its detection probability and false alarm rate are given in (16).
In the first experiment of this subsection, we set N = 500 and fix the false alarm rate at 0.1. We plot the detection probability as a function of M/N in Fig. 3a. The detection probabilities P_D from a simulation of 5,000 runs for the uniform and t-distributed signals are denoted as 'uni' and 't' in the figure, while 'predict' represents the theoretical results obtained by evaluating (18) with the false alarm rate set to 0.1. It can be observed from Fig. 3a that under different SNR levels, the theoretical detection probability from (18) is very close to the simulation results. This observation holds for both uniformly distributed and t-distributed signals, which justifies the validity of the asymptotic analysis used in Sect. 4 to derive the NP detector for non-Gaussian signals and establish its performance. We repeat the experiment that leads to Fig. 3a, but this time we fix the SNR at −2 dB and vary the false alarm rate to show the ROCs of the NP detector for non-Gaussian signals. Again, it can be seen that the asymptotic analysis we applied does lead to NP detectors for non-Gaussian signals whose performance can be predicted precisely using (16).
In the second experiment of this subsection, we reduce the length of the original signal to N = 50. The other settings are the same as in the previous experiment. The results are summarized in Fig. 4. Comparing Fig. 4 with Fig. 3 shows performance degradation in terms of lower detection probability. This comes from the fact that with reduced N, the number of compressive measurements used for detection is also decreased, which leads to poorer detector performance. Another important observation is that as N decreases, the performance of the NP detector from simulation no longer matches the theoretical values well. This is expected because, as indicated in Sect. 4, the NP detector for non-Gaussian signals converges to the one given in (7) only as N increases to infinity.

Compressive detection of Gaussian random signals corrupted by correlated noise
In this subsection, we consider the compressive detection of Gaussian random signals corrupted by correlated noise. The colored noise is generated by a second-order moving average (MA) model driven by a zero-mean white Gaussian process e(k), namely n(k) = e(k) + 0.5e(k − 1) + 0.2e(k − 2). To detect the presence of the original signal, the detector given in (7) is applied with the noise variance β⁻¹ set to the variance of the correlated noise. Note that in the case of colored noise, the detector (7) is only suboptimal, as it assumes a noise vector with independent samples. The simulation setup is very similar to the one used to generate Fig. 3, with the length of the original signal set to N = 500. In Fig. 5a, the detection probability as a function of M/N is shown for different SNR levels. In Fig. 5b, we fix the SNR at 0 dB and plot the detector ROC. We also apply the detector to a recorded FM signal; the other simulation settings are the same as those used to generate Fig. 5. The obtained simulation results are plotted in Fig. 6. It can be seen that the simulated detection probability follows closely the trend of the theoretical values given in (18), thanks to the use of a relatively large number of data samples (see also Sect. 4). The difference between the simulation and the theoretical results probably comes from our assumption that the recorded FM signal has a diagonal covariance.
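For reference, the colored noise described above can be generated as follows (a minimal sketch; the unit variance of the driving white process is an assumption for illustration):

```python
import random

random.seed(5)

def ma2_noise(K, b1=0.5, b2=0.2):
    """Colored noise from the second-order MA model
    n(k) = e(k) + 0.5 e(k-1) + 0.2 e(k-2), with e(k) white Gaussian."""
    e = [random.gauss(0.0, 1.0) for _ in range(K + 2)]
    return [e[k + 2] + b1 * e[k + 1] + b2 * e[k] for k in range(K)]

n = ma2_noise(200000)
mean = sum(n) / len(n)
var = sum((v - mean) ** 2 for v in n) / len(n)
print(round(var, 1))  # → 1.3 (theory: 1 + 0.5**2 + 0.2**2 = 1.29)
```

This sample variance is what the suboptimal detector (7) would use as β⁻¹ in this experiment.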

Conclusion
In this paper, we considered the problem of detecting stochastic signals using compressive measurements. Different from most existing literature, we investigated the case where the measurement matrix is no longer orthonormal, which makes this work more general. Under the condition that the PDFs of the signal of interest and the noise are known, NP detectors have been developed for Gaussian and non-Gaussian signals. Explicit expressions for the detection probability and the false alarm rate of the proposed detectors were established and verified via extensive computer experiments.

Fig. 2 Detection probability of the detector in (7) as a function of M/N. a α_F = 0.1. b α_F = 0.01