1 Introduction

High-quality signal detection is of great importance for the development of advanced technical systems. Complex signal processing systems are used, for example, in radar, communication, wireless, sonar, acoustic and navigation systems, and their performance deteriorates because of the effect of noise. Generally, the development of such systems is based on classical methods derived from the theory of statistical hypothesis testing [23]. These methods impose no constraints on the probability density functions (PDFs) of the stochastic processes involved. In applications, however, the Gaussian PDF is by far the most widely used. The assumption of a Gaussian PDF is in many cases a convenient mathematical idealization of a real stochastic process, although it does not describe real processes with satisfactory accuracy [8]. The synthesis and analysis of optimal signal detection algorithms in non-Gaussian noise are generally very difficult. Classical methods are characterized by significant limitations associated with the complexity of the algorithmic implementation and the requisite increase in computational resources.

In the scientific literature, two different approaches are typically applied toward the solution of this problem. One approach is based on the use of a PDF for the description of random processes from which the signal detection methods are developed [4,5,6, 21]. Despite the fact that PDFs provide a complete description of stochastic processes, these methods have some limitations, and the computational complexity associated with non-Gaussian processes is notable.

Another approach toward describing random processes is based on the use of the moment and cumulant functions. In this case, the properties of decision functions can be described using other characteristics, such as the mean and variance of decision rules (DRs). For example, a deflection criterion was developed for a class of linear-quadratic (L-Q) systems [3, 18, 19]. Further development of this criterion is shown in [2]. It is worth noting that the deflection criterion and its modifications are weakly connected with the classical criteria that are based on the use of PDFs.

This second approach can be represented in the form of higher-order statistics (HOS), see [12], such as moment and cumulant (semi-invariant) functions [14, 20]. Such functions allow the description of the statistical properties of non-Gaussian processes with reasonable accuracy [10, 14]. HOS techniques are used for the development of signal detection methods [15, 22]. However, these methods have some restrictions, for instance being limited to the detection of deterministic signals or to the use of third-order statistics only. A new method was proposed for signal detection based on the use of the moment quality criterion for decision making [13]. This approach has led to improvements in the accuracy of non-Gaussian signal processing relative to traditional methods, along with a reduction in the complexity of signal detection algorithms [16, 17]. In those papers, the signal detection methods and algorithms were proposed only for uncorrelated non-Gaussian noise. However, in practical cases, the signal often propagates through turbulent media or along multiple paths. With that in mind, one should model the observed process more rigorously as a correlated non-Gaussian process.

The main objective of this paper is the synthesis and analysis of signal detection methods in correlated non-Gaussian noise based on the moment-cumulant description of random variables and on polynomial DRs that are optimal in the sense of the adapted moment quality criterion of statistical hypothesis testing. Such an approach provides an opportunity to create effective algorithms for the operation of data receiving and processing systems.

2 Mathematical Models of Correlated Non-Gaussian Processes Using Higher-Order Statistics

A multidimensional (MD) probability density function (PDF) is a complete mathematical description of a statistically dependent stochastic process \(\xi (t)\). However, the PDF is not always known, or the estimation of its parameters \(\left( {\vartheta _1 , \vartheta _2 ,\;\ldots ,\vartheta _n } \right) \) may pose a challenge. A different approach is based on the moment and cumulant characteristics [10, 12, 14, 20].

One way to describe statistically dependent random variables involves the use of MD moments and cumulants. The MD moments are defined as the coefficients of the characteristic function \(f_{\xi _1 ,\xi _2 ,\ldots ,\xi _n } (\vartheta _1 ,\vartheta _2 ,\ldots ,\vartheta _n )\)

$$\begin{aligned} m_{i_1 ,i_2 ,\ldots ,i_n } =(-j)^{i_1 +i_2 +\cdots +i_n }\times \left[ \frac{\partial ^{i_1 +i_2 +\cdots +i_n }f_{\xi _1 ,\xi _2 ,\ldots ,\xi _n } (\vartheta _1 ,\vartheta _2 ,\ldots ,\vartheta _n )}{\partial \vartheta _1^{i_1 } \partial \vartheta _2^{i_2 } \ldots \partial \vartheta _n^{i_n } }\right] _{\vartheta _1 =\cdots =\vartheta _n =0} \quad \end{aligned}$$
(1)

and MD cumulants are defined as

$$\begin{aligned} \chi _{i_1 ,i_2 ,\ldots ,i_n } =(-j)^{i_1 +i_2 +\cdots +i_n }\times \left[ \frac{\partial ^{i_1 +i_2 +\cdots +i_n }\ln f_{\xi _1 ,\xi _2 ,\ldots ,\xi _n } (\vartheta _1 ,\vartheta _2 ,\ldots ,\vartheta _n )}{\partial \vartheta _1^{i_1 } \partial \vartheta _2^{i_2 } \ldots \partial \vartheta _n^{i_n } }\right] _{\vartheta _1 =\cdots =\vartheta _n =0} .\qquad \end{aligned}$$
(2)
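As a minimal one-dimensional illustration of definitions (1) and (2), the sketch below (in Python with SymPy; the symbolic setup is an assumption made only for illustration) recovers the cumulants of a Gaussian characteristic function by differentiating its logarithm at the origin; as expected, all cumulants above the second vanish.

```python
# Minimal 1D illustration of (1)-(2), assuming SymPy is available: cumulants are the
# derivatives of the log-characteristic function at the origin (times (-j)^k).
import sympy as sp

theta, m, chi2 = sp.symbols('theta m chi2', real=True)
f = sp.exp(sp.I * m * theta - chi2 * theta**2 / 2)   # Gaussian characteristic function

def cumulant(f, order):
    # chi_k = (-j)^k * d^k ln f / d theta^k, evaluated at theta = 0
    return sp.simplify(((-sp.I)**order * sp.diff(sp.log(f), theta, order)).subs(theta, 0))

print([cumulant(f, k) for k in range(1, 5)])         # -> [m, chi2, 0, 0]
```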

As noted above, the MD characteristic function, including the MD PDF, is a complete description of the statistically dependent random variables. However, a two-dimensional (2D) PDF is often sufficient for describing characteristics of the statistical relationship between random variables [9].

Suppose there are sample values of a stationary random process that can be considered to be different random variables. Then the relationship between two random variables is a simple and widely used example of statistically dependent variables. This is equivalent to using a 2D PDF.

Let us consider the case when two random variables \(\xi \) and \(\eta \) are statistically independent of each other with respective PDFs \(p_\xi \) and \(p_\eta \), described by the initial moments \(m_i^{\left( \xi \right) } \hbox { and }m_i^{\left( \eta \right) } \). If the random variables \(\xi \) and \(\eta \) are statistically dependent on each other, then they have joint moments of dimension (i, j). The joint moments of these two random variables are the values that equal the mathematical expectation of the product of the two variables \(\xi ^{i}, \eta ^{j}\) and are defined as

$$\begin{aligned} m_{i,j}^{\left( {\xi ,\eta } \right) } =E\xi ^{i}\eta ^{j}=\int \limits _{-\infty }^{+\infty } {x^{i}y^{j}} p\left( {x,y} \right) \mathrm{d}x\mathrm{d}y, \end{aligned}$$
(3)

where \(p\left( {x,y} \right) \) is the joint PDF of the random variables \(\xi \hbox { and }\eta \).

Joint moments of dimension (i, j) will be used for the description of various statistical relationships.

It was shown [10, 14] that for non-Gaussian statistically independent random variables with zero mean and variance \(\chi _2 \) the relationship between the initial moments \(m_i \) and cumulants \(\chi _i \) up to the fourth order is as follows:

$$\begin{aligned} m_1 =0,\, m_2 =\chi _2 ,\, m_3 =\chi _3 ,\, m_4 =\chi _4 +3\chi _2^2 . \end{aligned}$$
(4)

Often it is convenient to use dimensionless cumulants called cumulant coefficients \(\gamma _n ={\chi _n }/{\chi _2^{n/2} }\). In particular, the skewness \(\gamma _3 \) and kurtosis \(\gamma _4 \) coefficients are used to describe the characteristics of non-Gaussian processes. Let us note that for Gaussian random variables the cumulant coefficients of third and higher order \(\left( {\gamma _3 ,\gamma _4 ,\;\ldots } \right) \) are equal to zero.
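For illustration, the cumulant coefficients can be estimated directly from data via the relations (4). The sketch below uses plain sample-moment estimators without bias correction, which is an assumption made only to keep the example short.

```python
# Sketch: estimate gamma_3 and gamma_4 of a zero-mean sample from relation (4):
# chi_2 = m_2, chi_3 = m_3, chi_4 = m_4 - 3*chi_2**2 (plain sample moments).
import numpy as np

def cumulant_coefficients(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                   # enforce the zero-mean convention of (4)
    m2, m3, m4 = (np.mean(x**k) for k in (2, 3, 4))
    chi2, chi3, chi4 = m2, m3, m4 - 3.0 * m2**2
    return chi3 / chi2**1.5, chi4 / chi2**2            # gamma_3, gamma_4

rng = np.random.default_rng(0)
print(cumulant_coefficients(rng.exponential(1.0, 100_000)))   # roughly (2, 6) for an exponential law
```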

The relationship between the joint moments \(m_{i,j} \) and cumulants \(\chi _{i,j} \) up to the fourth order for the correlated non-Gaussian random variables is defined as

$$\begin{aligned} m_{1,1} =\chi _{1,1} ,\, m_{1,2} =\chi _{1,2} ,\, m_{1,3} =\chi _{1,3} +3\chi _2 \chi _{1,1} ,\, m_{2,2} =\chi _{2,2} +\chi _2^2 +2\chi _{1,1}^2.\nonumber \\ \end{aligned}$$
(5)

If the random variables are statistically independent, the 2D moments factor into products of the corresponding 1D moments.

It is known that the joint second-order cumulant \(\chi _{1,1} \) (covariance) describes the linear statistical relationship (i.e., the correlation) between the random variables and is defined as

$$\begin{aligned} \mathrm{Cov}(\xi _1 ,\xi _2 )= & {} E\left( {\xi _1 -m_{\xi _1 } } \right) \left( {\xi _2 -m_{\xi _2 } } \right) , \end{aligned}$$
(6)
$$\begin{aligned} \chi _{1,1}= & {} m_{1,1} -m_{\xi 1} m_{\xi 2} , \end{aligned}$$
(7)

where \(m_{\xi 1} =E(\xi _1 ), m_{\xi 2} =E(\xi _2 )\).
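The empirical counterparts of (3), (6) and (7) for a stationary sequence can be sketched as follows; pairing the samples as \((x_t, x_{t+\mathrm{lag}})\) is an assumption about how the two random variables are formed from the record, not something fixed by the text.

```python
# Sketch of the empirical joint moment (3) and covariance (6)-(7) for a stationary sequence.
import numpy as np

def joint_moment(x, i, j, lag=1):
    x = np.asarray(x, dtype=float)
    return np.mean(x[:-lag]**i * x[lag:]**j)           # m_{i,j} of Eq. (3) at the given lag

def covariance(x, lag=1):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return joint_moment(x, 1, 1, lag) - m * m          # chi_{1,1} of Eq. (7)
```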

The moment-cumulant description of the correlated non-Gaussian processes requires additional research and development. For this purpose it will be convenient to work with the additional (new) concept of punched random variables developed in [10].

The cumulants of the series expansion of any characteristic function can be separated into various classes in which the characteristic function has similar properties. Non-Gaussian random variables classified in this way are called punched random variables. Mathematical models based on punched random variables (in which the moment-cumulant description retains only a subset of all possible cumulants that can match a real existing process) were proposed and validated for classes of uncorrelated non-Gaussian random processes. Within this classification there are, among others, skewness, kurtosis, and skewness-kurtosis statistically independent random variables [10].

Definition 1

A variable is said to be a punched random variable if in its cumulant description one part of cumulant coefficients of the 3rd order is distinct from zero and another part is strictly equal to zero. The remaining higher order cumulant coefficients can assume arbitrary values [10].

In this paper we propose development of new moment and cumulant models of the non-Gaussian statistically dependent random variables. On the basis of our approach, it is then possible to create signal detection DRs using the adapted new moment quality criterion for statistical hypothesis testing.

The classification of new mathematical models is obtained from 2D moments and cumulants of non-Gaussian correlated processes. Cumulants and 2D moments depend on the form of the correlation function and samples at times \(t_v \) and \(t_k \):

$$\begin{aligned} m_{1,1}^{\left( {v,k} \right) } =m_{1,1}^{\left( \tau \right) } ,\, \chi _{1,1}^{\left( {v,k} \right) } =\chi _{1,1}^{\left( \tau \right) } , \end{aligned}$$
(8)

where \(\tau =|t_v -t_k |\) is a sampling interval.

Definition 2

A variable is said to be a Gaussian statistically dependent random variable if its 1D and joint cumulants of the 2nd order are not equal to zero \((\chi _2 \ne 0, \chi _{1,1}^{\left( \tau \right) } \ne 0)\) while all other cumulants and joint cumulants of third and higher orders are equal to zero.

In this case the initial moments up to the fourth order are as follows:

$$\begin{aligned} m_1 =\chi _1 ,\, m_2 =\chi _2 ,\, m_3 =0,\, m_4 =3\chi _2^2 ,\, \end{aligned}$$
(9)

and joint moments and cumulants are defined as:

$$\begin{aligned} m_{1,1}^{\left( \tau \right) } =\chi _{1,1}^{\left( \tau \right) } =\chi _2 \cdot \rho ^{\left( \tau \right) } ,\, m_{1,2}^{\left( \tau \right) } =\chi _{1,2}^{\left( \tau \right) } =0,\, m_{2,2}^{\left( \tau \right) } =\chi _2^2 +2\left( \chi _{1,1}^{\left( \tau \right) }\right) ^{2} =\chi _2^2 \left( {1+2\rho ^{\left( \tau \right) ^{2}} } \right) ,\nonumber \\ \end{aligned}$$
(10)

where \(\rho ^{\left( \tau \right) } \) is a correlation function of a given form. For example, the correlation function could be presented in either an exponential \(\rho ^{\left( \tau \right) } =e^{-A\left| \tau \right| }\) or an alternative form.

Definition 3

A variable is said to be a skewness dependent random variable of the first type and the first kind if the 1D cumulants \((\chi _2 \ne 0,\chi _3 \ne 0)\) and joint cumulants \((\chi _{1,1}^{\left( \tau \right) } \ne 0, \chi _{1,2}^{\left( \tau \right) } \ne 0)\) are not equal to zero, while the other cumulants and joint cumulants of the higher orders are equal to zero.

In this case the initial moments up to the fourth order are as follows:

$$\begin{aligned} m_1 =\chi _1 ,\, m_2 =\chi _2 ,\, m_3 =\chi _3 ,\, m_4 =3\chi _2^2 , \end{aligned}$$
(11)

and joint moments and cumulants are defined as:

$$\begin{aligned} m_{1,1}^{\left( \tau \right) }= & {} \chi _2 \cdot \rho ^{\left( \tau \right) } ,\, m_{1,2}^{\left( \tau \right) } =\chi _{1,2}^{\left( \tau \right) } =\gamma _3 \chi _2^{3/2} \rho ^{\left( \tau \right) ^{3/2}},\nonumber \\ m_{2,2}^{\left( \tau \right) }= & {} \chi _2^2 +2\left( \chi _{1,1}^{\left( \tau \right) }\right) ^{2} =\chi _2^2 \left( {1+2\rho ^{\left( \tau \right) ^{2}} } \right) . \end{aligned}$$
(12)

Definition 4

A variable is called a kurtosis dependent random variable of the first type and the first kind if the 1D cumulants \((\chi _2 \ne 0,\chi _4 \ne 0)\) and joint cumulants \((\chi _{1,1}^{\left( \tau \right) } \ne 0, \chi _{1,3}^{\left( \tau \right) } \ne 0, \chi _{2,2}^{\left( \tau \right) } \ne 0)\) are not equal to zero. The other cumulants and joint cumulants of higher orders must equal zero.

In this case the initial moments up to the fourth order are as follows:

$$\begin{aligned} m_1 =\chi _1 ,\, m_2 =\chi _2 ,\, m_3 =0,\, m_4 =\chi _4 +3\chi _2^2 , \end{aligned}$$
(13)

and joint moments and cumulants are defined as:

$$\begin{aligned} m_{1,1}^{\left( \tau \right) }= & {} \chi _2 \cdot \rho ^{\left( \tau \right) } ,\, m_{1,2}^{\left( \tau \right) } =\chi _{1,2}^{\left( \tau \right) } =0,\, m_{1,3}^{\left( \tau \right) } =\gamma _4 \left( {\chi _{1,1}^{\left( \tau \right) } } \right) ^{2}+3\chi _2 \chi _{1,1}^{\left( \tau \right) } ,\, m_{2,2}^{\left( \tau \right) }\nonumber \\= & {} \chi _2^2 \left( {\gamma _4 \rho ^{\left( \tau \right) ^{2}} +1+2\rho ^{\left( \tau \right) ^{2}} } \right) . \end{aligned}$$
(14)

Definition 5

A variable is called a skewness-kurtosis dependent random variable of the second type and the first kind if the 1D cumulants \((\chi _2 \ne 0, \chi _3 \ne 0, \chi _4 \ne 0)\) and joint cumulants \((\chi _{1,1}^{\left( \tau \right) } \ne 0,\chi _{1,2}^{\left( \tau \right) } \ne 0, \chi _{1,3}^{\left( \tau \right) } \ne 0, \chi _{2,2}^{\left( \tau \right) } \ne 0)\) are not equal to zero. The other cumulants and joint cumulants of the higher orders must equal zero.

In this case, the initial moments up to fourth order are as follows:

$$\begin{aligned} m_1 =\chi _1 ,\, m_2 =\chi _2 ,\, m_3 =\chi _3 ,\, m_4 =\chi _4 +3\chi _2^2 , \end{aligned}$$
(15)

and joint moments and cumulants are defined as:

$$\begin{aligned}&m_{1,1}^{\left( \tau \right) } =\chi _2 \cdot \rho ^{\left( \tau \right) } ,\, m_{1,2}^{\left( \tau \right) } =\chi _{1,2}^{\left( \tau \right) } =\gamma _3 \chi _2^{3/2} \rho ^{\left( \tau \right) ^{3/2}},\, \nonumber \\&m_{1,3}^{\left( \tau \right) } =\gamma _4 \left( {\chi _{1,1}^{\left( \tau \right) } } \right) ^{2}+3\chi _2 \chi _{1,1}^{\left( \tau \right) } ,\, m_{2,2}^{\left( \tau \right) } =\chi _2^2 \left( {\gamma _4 \rho ^{\left( \tau \right) ^{2}} +1+2\rho ^{\left( \tau \right) ^{2}} } \right) . \end{aligned}$$
(16)

It should be noted that 1D and MD cumulant coefficients cannot take arbitrary values. This condition is determined by the positive definiteness of the characteristic functions [10].

The proposed models are different from well-known models as they account for the properties of non-Gaussian correlation random processes using higher order cumulant coefficients. These models will be used for the development and adaptation of the new moment quality criterion for statistical hypothesis testing and signal detection methods.
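As an illustration of how these models enter subsequent computations, the following sketch evaluates the joint moments of the skewness-kurtosis model of Definition 5 (Eqs. (15)-(16)) for the exponential correlation function used as an example above; the packaging into a small class and the names chosen are illustrative assumptions.

```python
# Sketch of the skewness-kurtosis model (Definition 5, Eqs. (15)-(16)) with the
# exponential correlation function rho(tau) = exp(-A*|tau|).
import numpy as np
from dataclasses import dataclass

@dataclass
class SkewKurtosisModel:
    chi2: float      # variance
    gamma3: float    # skewness coefficient
    gamma4: float    # kurtosis coefficient
    A: float         # decay rate of the exponential correlation function

    def rho(self, tau):
        return np.exp(-self.A * np.abs(tau))

    def joint_moments(self, tau):
        """Return m_{1,1}, m_{1,2}, m_{1,3}, m_{2,2} at lag tau, per Eq. (16)."""
        c2, g3, g4, r = self.chi2, self.gamma3, self.gamma4, self.rho(tau)
        chi11 = c2 * r
        m11 = chi11
        m12 = g3 * c2**1.5 * r**1.5
        m13 = g4 * chi11**2 + 3.0 * c2 * chi11
        m22 = c2**2 * (g4 * r**2 + 1.0 + 2.0 * r**2)
        return m11, m12, m13, m22

print(SkewKurtosisModel(chi2=1.0, gamma3=0.8, gamma4=0.8, A=0.1).joint_moments(tau=2.0))
```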

3 Adaptation of the Moment Quality Criterion for Decision Making

Assume that a random signal \(\xi \left( t \right) \) is observed in some time interval \(\left( {0, T} \right) \). We need to define the signal processing algorithms and their characteristics for deciding, from the input stochastic process \(\xi \left( t \right) \), whether a signal S(t) is present (“Yes”, hypothesis \(H_1 \)), in which case \(\xi (t)=S(t)+\eta (t)\), or absent (“No”, hypothesis \(H_0 \)), in which case \(\xi (t)=\eta (t)\). Here \(\eta (t)\) is a non-Gaussian stationary random process described by a sequence of moments and cumulants, and S(t) is the useful signal to be detected.

Let us use the moment-cumulant description of \(\xi \left( t \right) \), which is represented as a finite sequence of moments and cumulants. Let us assume that such a sequence has the form \((m_i^{\left( v \right) } , m_{i,j}^{\left( \tau \right) } )\) under the assumption \(H_1 \) and the form \((u_i^{\left( v \right) } , u_{i,j}^{\left( \tau \right) } )\) under the assumption \(H_0 \). Here, \(\{u_i^{\left( v \right) } ,m_i^{\left( v \right) } \}\) are the 1D moments of the i-th order at the time \(t_v \), and \(\{u_{i,j}^{\left( \tau \right) } ,m_{i,j}^{\left( \tau \right) } \}\) are the 2D joint moments of dimension (i, j).

If the sampling signal is \(\xi \left( t \right) \), and its discrete values at the time \(t_v \) are \(\mathbf{X}=\left\{ {x_1 ,\;x_2 ,\ldots ,x_n } \right\} \), then we have:

$$\begin{aligned} \xi _v =S_v \left( {\alpha _k } \right) +\eta _v \left( {\gamma _k ,\chi _{i,j}^{\left( \tau \right) } } \right) \hbox { if }H_1 \hbox { holds}, \end{aligned}$$
(17)
$$\begin{aligned} \xi _v =\eta _v \left( {\gamma _k ,\chi _{i,j}^{\left( \tau \right) } } \right) , \quad v=\overline{1, n} \hbox { if }H_0 \hbox { holds}. \end{aligned}$$
(18)

Here, \(S_v \left( {\alpha _k } \right) \) are signals with known parameters \(\alpha _k \); \(\eta _v \left( {\gamma _k ,\chi _{i,j}^{\left( \tau \right) } } \right) \) are non-Gaussian random variables with known parameters in the form of the cumulant coefficients \(\gamma _k \) and joint cumulants \(\chi _{i,j}^{\left( \tau \right) } \), \(k=\overline{1, \mu } \).

In the classical approach, the optimal Bayesian algorithm of signal detection is determined by minimizing the average risk [23]. The minimal sufficient statistic for simple hypothesis testing is the likelihood ratio, which can be calculated from

$$\begin{aligned} \Lambda (\mathbf{X})=P\left( {\mathbf{X}|H_1 } \right) /P\left( {\mathbf{X}|H_0 } \right) . \end{aligned}$$
(19)

The solution of such problems is mainly done under the assumption of a Gaussian PDF of the random variables. In other cases, defining the PDF and finding solutions in the form (19) is difficult. Therefore, it is more convenient to use a different approach when the likelihood ratio is represented as a power polynomial function [16].

Assume that the likelihood ratio is a continuous function that is represented as a stochastic power polynomial of degree s for independent random samples \(x_v \):

$$\begin{aligned} \ln \Lambda (\mathbf{X})=\sum _{v=1}^n {\sum _{i=1}^s {k_{iv} } } x_v^i +k_0 . \end{aligned}$$
(20)

The unknown coefficients \(k_{iv} \) and \(k_0 \) in (20) can be determined by minimizing one of the well-known probabilistic quality criteria (Bayes, Neyman–Pearson, etc.), but generally it is not possible to obtain them explicitly. Therefore, a new moment quality criterion for statistical hypothesis testing was proposed in [13].

Let us assume that there is a decision function

$$\begin{aligned} f\left( \mathbf{X} \right) =\gamma \left( \mathbf{X} \right) +k_0 {\begin{array}{l} {H_1 } \\ > \\ < \\ {H_0 } \\ \end{array} }0, \end{aligned}$$
(21)

where \({\gamma }\left( \mathbf{X} \right) \) is a function of the sample values \(\mathbf{X}\), and \(k_0 \) is chosen so that

$$\begin{aligned} M_0= & {} E\left[ {f\left( \mathbf{X} \right) /H_0 } \right] =\int \limits _{-\infty }^\infty {f\left( \mathbf{X} \right) P\left( {\mathbf{X}/{H_0 }} \right) \;\mathrm{d}x} <0, \end{aligned}$$
(22)
$$\begin{aligned} M_1= & {} E\left[ {f\left( \mathbf{X} \right) /H_1 } \right] =\int \limits _{-\infty }^\infty {f\left( \mathbf{X} \right) P\left( {\mathbf{X}/{H_1 }} \right) \;\mathrm{d}x} \ge 0. \end{aligned}$$
(23)

According to the Chebyshev inequality, the probabilities of errors of the first and the second kind for the DR (21) are bounded as:

$$\begin{aligned} \alpha =P\left[ {f\left( \mathbf{X} \right) \ge 0/H_0 } \right] \le {G_0 }/{M_0^2 }=\alpha _0 , \end{aligned}$$
(24)
$$\begin{aligned} \beta =P\left[ {f\left( \mathbf{X} \right) <0/H_1 } \right] \le {G_1 }/{M_1^2 }=\beta _0 , \end{aligned}$$
(25)

where \(G_i (\gamma )=\int \limits _{-\infty }^\infty {\left[ {f\left( \mathbf{X} \right) -M_i } \right] ^{2}P\left( {\mathbf{X}/{H_i }} \right) \;\mathrm{d}x} \) is the variance of the decision function \(\gamma \left( \mathbf{X} \right) \) under hypothesis \(H_i , i=0,1\), respectively.

Then the criterion of the sum of probability errors can be written as the following inequality

$$\begin{aligned} F_1 \left( {\alpha ,\beta } \right) =\alpha +\beta \le \alpha _0 +\beta _0 =\frac{G_0 }{M_0^2 }+\frac{G_1 }{M_1^2 }=\Phi _1 \left( {M,G} \right) . \end{aligned}$$
(26)

Let us assume that for \(M_0 \) and \(M_1 \) the coefficient \(k_0 \) is defined as

$$\begin{aligned} k_0 =-0.5\left( {E_0 +E_1 } \right) , \end{aligned}$$
(27)

where \(E_i (\gamma )=E\left[ {\gamma \left( \mathbf{X} \right) \left| {H_i } \right. } \right] \) is the mean of the decision function \(\gamma \left( \mathbf{X} \right) \) under hypothesis \(H_i , i=0,1\), respectively. Then the composite function \(\Phi _1 \left( {M,G} \right) \), for the coefficient \(k_0 \) defined in (27), is determined as \(\Phi _1 \left( {M,G} \right) =4\hbox {Ku1}(E,G)\), where

$$\begin{aligned} \mathrm{Ku}1(E,G)=\frac{G_0 \left[ \gamma \right] +G_1 \left[ \gamma \right] }{\left( {E_1 \left[ \gamma \right] -E_0 \left[ \gamma \right] } \right) ^{2}}. \end{aligned}$$
(28)

The functional \(\mathrm{Ku}1(E,G)\) is the quality criterion for decision making by the DR (21). This criterion is called “the moment quality criterion of upper bounds of error probability” or, for short, the “Ku criterion”.
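For illustration, the Ku criterion (28) can be estimated from Monte Carlo realizations of a decision statistic under both hypotheses; replacing the exact means \(E_i\) and variances \(G_i\) by sample estimates, and the particular sample-mean statistic used here, are assumptions made only for this sketch.

```python
# Sketch: Monte Carlo estimate of Ku1 (Eq. (28)) for a sample-mean decision statistic.
import numpy as np

def ku1(gamma_h0, gamma_h1):
    """gamma_h0, gamma_h1: realizations of the decision statistic under H0 and H1."""
    g0, g1 = np.var(gamma_h0), np.var(gamma_h1)
    e0, e1 = np.mean(gamma_h0), np.mean(gamma_h1)
    return (g0 + g1) / (e1 - e0) ** 2

rng = np.random.default_rng(1)
a, n = 0.5, 100                                        # constant signal in unit-variance noise
h0 = rng.standard_normal((10_000, n)).mean(axis=1)
h1 = (a + rng.standard_normal((10_000, n))).mean(axis=1)
print(ku1(h0, h1))                                     # close to 2/(n*q) = 0.08, cf. Eq. (53) below
```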

If we take into account the probability of occurrence of hypotheses \(H_i \) then (26) can be written as the following inequality

$$\begin{aligned} F_2 \left( {\alpha ,\beta } \right) =p_0 \alpha +p_1 \beta \le p_0 \frac{G_0 }{M_0^2 }+p_1 \frac{G_1 }{M_1^2 }=\Phi _2 \left( {M,G} \right) , \end{aligned}$$
(29)

where \(p_i =P\left\{ {H_i } \right\} \) is the probability of occurrence of hypotheses \(H_i \), \(\sum \limits _{i=0}^1 {p_i =1} \), \(i=0,1\).

Then the composite function \(\Phi _2 \left( {M,G} \right) \) for such a coefficient \(k_0 \) is determined as \(\Phi _2 \left( {M,G} \right) =4\hbox {Ku}2(E,G)\), where

$$\begin{aligned} \mathrm{Ku}2(E,G)=\frac{p_0 G_0 \left[ \gamma \right] +p_1 G_1 \left[ \gamma \right] }{\left( {E_1 \left[ \gamma \right] -E_0 \left[ \gamma \right] } \right) ^{2}}. \end{aligned}$$
(30)

Let us now take into account the probabilities of occurrence of the hypotheses \(p_i \) \((i=0,1)\) and the costs \(C_{ij} , i,j=0,1\). Here, the cost \(C_{ij} \) is associated with decision \(\mathrm{Dec}_i \), given that \(H_j \) is the true hypothesis [1]. Then the criterion of the sum of error probabilities given in (26) can be written as the following inequality

$$\begin{aligned} F_3 \left( {\alpha ,\beta } \right)\le & {} p_0 C_{00} +p_1 C_{11} +p_0 (C_{01} -C_{00} )\frac{G_0 }{M_0^2 }+p_1 (C_{10} -C_{11} )\frac{G_1 }{M_1^2 }\nonumber \\= & {} \Phi _3 \left( {M,G} \right) . \end{aligned}$$
(31)

Using the coefficient \(k_0 \) given in (27), the composite function \(\Phi _3 \left( {M,G} \right) \) is defined as

$$\begin{aligned} \Phi _3 \left( {M,G} \right) =p_0 C_{00} +p_1 C_{11} +4\mathrm{Ku}3(E,G), \end{aligned}$$
(32)

where

$$\begin{aligned} \mathrm{Ku}3(E,G)=\frac{bG_0 \left[ \gamma \right] +dG_1 \left[ \gamma \right] }{\left( {E_1 \left[ \gamma \right] -E_0 \left[ \gamma \right] } \right) ^{2}}, \end{aligned}$$
(33)

where \(b=p_0 \left( {C_{01} -C_{00} } \right) , \,d=p_1 \left( {C_{10} -C_{11} } \right) \).

Let us consider the moment quality criterion \(\mathrm{Ku}1(E,G)\) to demonstrate the efficiency of signal detection. Minimizing the Ku criterion minimizes the upper bound on the sum of the probabilities of errors of the first and second kind. The decision rule (DR) will be optimal if the sum of the variances under the hypothesis and the alternative is minimal while the distance between the means is as large as possible.

The new moment quality criterion (28) is different from the well-known probabilistic quality criteria, but it is definitely related to them. The Ku criterion has been used to create effective methods and algorithms for signal detection in uncorrelated non-Gaussian noise [16, 17]. In order to solve the problem of signal processing in correlated non-Gaussian noise, the moment quality criterion (28) needs to be adapted; the adapted form is derived below in (36). This can be done using the new moment and cumulant models that were obtained previously in Sect. 2.

In this case, the adapted criterion should take into account the correlation of sample values. This can be achieved by using 2D cumulant models of dimension (i, j), from which the unknown coefficients \(k_{iv} \) and \(k_0 \) of the DR (20) are determined. Then the mean and the variance of the polynomial stochastic DR (20), under the hypothesis and the alternative, are defined as:

$$\begin{aligned} E_{0(sn)}= & {} \sum _{i=1}^s {\sum _{v=1}^n {k_{iv} u_i^{\left( v \right) } } }, \, E_{1(sn)} =\sum _{i=1}^s {\sum _{v=1}^n {k_{iv} m_i^{\left( v \right) } } } \end{aligned}$$
(34)
$$\begin{aligned} G_{r(sn)}= & {} \sum _{i=1}^s {\sum _{j=1}^s {\sum _{v=1}^n {\sum _{k=1}^n {k_{iv} k_{jk} F_{(i,j)}^{\left( {v,k} \right) } (H_r )} } } } ,\quad r=0,1,\, \end{aligned}$$
(35)

where the parameters that take into account the correlation relationship are defined as:

$$\begin{aligned} F_{(i,j)}^{\left( {v,k} \right) } (H_0 )=u_{\left( {i,j} \right) }^{\left( {v,k} \right) } -u_i^{\left( v \right) } u_j^{\left( k \right) } ,\, F_{(i,j)}^{\left( {v,k} \right) } (H_1 )=m_{\left( {i,j} \right) }^{\left( {v,k} \right) } -m_i^{\left( v \right) } m_j^{\left( k \right) } . \end{aligned}$$

Taking into account the above equations we conclude that the adapted new moment quality criterion for statistical hypothesis testing is:

$$\begin{aligned} \mathrm{Ku}1(E,G,\rho )=\frac{\sum \limits _{v=1}^n {\sum \limits _{k=1}^n {\sum \limits _{i=1}^s {\sum \limits _{j=1}^s {k_{iv} k_{jk} \left( {F_{i,j}^{\left( {v,k} \right) } (H_0 )+F_{i,j}^{\left( {v,k} \right) } (H_1 )} \right) } } } } }{\left( \sum \limits _{v=1}^n {\sum \limits _{i=1}^s {k_{iv} \left( {m_i^{\left( v \right) } -u_i^{\left( v \right) } } \right) } } \right) ^{2}}. \end{aligned}$$
(36)

The optimal coefficients \(k_{iv} \) of the polynomial stochastic DR (20) have to minimize the adapted new moment quality criterion \(\mathrm{Ku}1(E,G,\rho )\) and are defined as

$$\begin{aligned} \sum \limits _{k=1}^n {\sum \limits _{j=1}^s {k_{jk} \left( {F_{i,j}^{\left( {v,k} \right) } (H_0) +F_{i,j}^{\left( {v,k} \right) } (H_1 )} \right) } } =m_i^{\left( v\right) } -u_i^{\left( v \right) } , \quad v=\overline{1,n} , \quad i=\overline{1,s}.\nonumber \\ \end{aligned}$$
(37)

Numerical methods and the Schur complement of a block matrix are used to solve the system of algebraic equations (37). This is possible since the 2D joint moments \(u_{\left( {i,j} \right) }^{\left( \tau \right) } \) and \(m_{\left( {i,j} \right) }^{\left( \tau \right) } \) depend on the correlation function \(\rho ^{\left( \tau \right) } \), whose matrix is defined as

$$\begin{aligned} \rho ^{(\tau )}=\left( {{\begin{array}{llll} 1&{} {\rho ^{(\tau _{1,2} )}}&{} {\ldots }&{} {\rho ^{(\tau _{1,n} )}} \\ {\rho ^{(\tau _{2,1} )}}&{} 1&{} {\ldots }&{} {\rho ^{(\tau _{2,n} )}} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\rho ^{(\tau _{n,1} )}}&{} {\rho ^{(\tau _{n,2} )}}&{} {\ldots }&{} 1 \\ \end{array} }} \right) , \end{aligned}$$
(38)

where, for example, the correlation function has an exponential form \(\rho ^{(\tau _{v,k} )} =e^{-A\left| {t_v -t_k } \right| }, v,k=\overline{1,n} \).
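A minimal numerical sketch of the simplest case \(s=1\) is given below: the correlation matrix of (38) is built for the exponential correlation function and the system (37) is solved with a standard linear solver in place of the Cramer/Schur formulation. The concrete value of the block \(F_{1,1}(H_0)+F_{1,1}(H_1)=2\chi_2\rho\) and of the moment difference anticipate the constant-signal example of Sect. 4 and are assumptions made only so that the sketch runs.

```python
# Sketch of the coefficient system (37) for s = 1 with the exponential correlation matrix (38).
import numpy as np

n, chi2, A, d = 20, 1.0, 0.1, 0.5                       # d stands for m_1 - u_1 (assumption)
t = np.arange(n)
rho = np.exp(-A * np.abs(t[:, None] - t[None, :]))      # Eq. (38), exponential form
F = 2.0 * chi2 * rho                                    # F_{1,1}(H_0) + F_{1,1}(H_1)
k = np.linalg.solve(F, d * np.ones(n))                  # system (37) specialized to s = 1

# Adapted criterion (36) evaluated at the obtained coefficients; at the optimum it
# equals the inverse of the extracted information, cf. (39)-(40) below.
ku1 = (k @ F @ k) / (k @ (d * np.ones(n))) ** 2
print(ku1, 1.0 / (d * k.sum()))                          # the two values coincide
```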

In parameter estimation theory, the quality criterion is the variance of the parameter estimate. In [11] it was shown that the minimum variance is inversely proportional to the Fisher information, which is defined in terms of the PDF of the sample values. It is easy to show that the mean and variance of the polynomial stochastic DR (20) can be related to the Kullback–Leibler information number computed from the PDFs under the hypothesis and the alternative. The moment quality criterion \(\mathrm{Ku}1(E,G,\rho )\) for decision making can therefore also be expressed using a PDF. Accordingly, it is appropriate to introduce the concept of the information extracted from a sample of size n about the difference between hypotheses \(H_1 \) and \(H_0 \), just as is done in parameter estimation theory.

Definition 6

The inverse of the moment quality criterion \(\mathrm{Ku}1(E,G,\rho )\) will be called the information extracted from the samples for discriminating between hypotheses \(H_0 \) and \(H_1 \), and it will be denoted by

$$\begin{aligned} \mathrm{Ku}1(E,G,\rho )=I_{\mathrm{Ku}s}^{-1} . \end{aligned}$$
(39)

It can be shown that \(I_{\mathrm{Ku}s} \) can also be defined as

$$\begin{aligned} I_{\mathrm{Ku}s} =G_{1(sn)} +G_{0(sn)} =E_{1(sn)} -E_{0(sn)} . \end{aligned}$$
(40)

Our new method of signal detection in correlated non-Gaussian noise is developed based on new moment-cumulant models and an adapted moment quality criterion for statistical hypotheses testing. This method will be used for synthesis and analysis of the non-linear polynomial stochastic DR.

4 Synthesis of the Polynomial Algorithms of Signal Detection in Correlated Non-Gaussian Noise

Let us consider the efficiency of the method presented in this paper by using an example of signal detection. Assume that a random signal \(\xi (t)\) is observed in the time interval [0, T] and that it consists of the useful fully known signal a [7] and noise \(\eta (t)\)

$$\begin{aligned} \xi (t)=a+\eta (t)\;, \end{aligned}$$
(41)

where \(\eta (t)\) is a non-Gaussian statistically dependent random process with zero mean and variance \(\chi _2 \), and that it is described by a sequence of moments and joint cumulants.

If the sampling signal is \(\xi \left( t \right) \) and discrete values at the time \(t_v \) are \(\mathbf{X}=\left\{ {x_1 ,\;x_2 ,\ldots ,x_n } \right\} \) then we have

$$\begin{aligned} \xi _v= & {} a+\eta _v \left( {\gamma _k ,\chi _{i,j}^{\left( \tau \right) } } \right) ,\hbox { if }H_1 \hbox { holds}, \end{aligned}$$
(42)
$$\begin{aligned} \xi _v= & {} \eta _v \left( {\gamma _k ,\chi _{i,j}^{\left( \tau \right) } } \right) , \quad v=\overline{1, n} ,\hbox { if }H_0 \hbox { holds}. \end{aligned}$$
(43)

Let us consider asymmetrical-excess non-Gaussian statistically dependent random variables. Then the initial and joint moments up to the fourth order for the hypothesis \(H_0 \) are as follows:

$$\begin{aligned} u_1 =0,\, u_2 =\chi _2 ,\, u_3 =\chi _3 ,\, u_4 =\chi _4 +3\chi _2^2 , \end{aligned}$$
(44)
$$\begin{aligned}&u_{1,1}^{(v,k)} =\chi _{1,1}^{(v,k)} =\chi _2 \rho ^{(v,k)},\, u_{1,2}^{\left( {v,k} \right) } =\chi _{1,2}^{\left( {v,k} \right) } =\gamma _3 \chi _2^{3/2} \rho ^{\left( {v,k} \right) ^{3/2}},\, \\&u_{2,2}^{\left( {v,k} \right) } =\chi _{2,2}^{\left( {v,k} \right) } +\chi _2^2 +2\left( \chi _{1,1}^{\left( {v,k} \right) }\right) ^{2} =\chi _2^2 \left( {\gamma _4 \rho ^{\left( {v,k} \right) ^{2}} +1+2\rho ^{\left( {v,k} \right) ^{2}} } \right) , \end{aligned}$$

and assuming hypothesis \(H_1 \) are defined as

$$\begin{aligned} m_1= & {} a,\, m_2 =a^2 +\chi _2 ,\, m_3 =a^3 +3a\chi _2 +\chi _3 ,\nonumber \\ m_4= & {} a^4 +4a\chi _3 +6a^{2}\chi _2 +3\chi _2^2 +\chi _4 , \end{aligned}$$
(45)
$$\begin{aligned} m_{1,1}^{(v,k)}= & {} a^{2}+\chi _2 \rho ^{(v,k)}, m_{1,2}^{(v,k)} =a^{3}+a\chi _2 +2a\chi _2 \rho ^{(v,k)}+\gamma _3 \chi _2^{3/2} \left( {\rho ^{(v,k)}} \right) ^{3/2},\\ m_{2,2}^{(v,k)}= & {} a^{4}+2a^{2}\chi _2 +4a^{2}\chi _2 \rho ^{(v,k)}+4a\gamma _3 \chi _2^{3/2} \left( {\rho ^{(v,k)}} \right) ^{3/2}\\&+\,\chi _2^2 \left( {\gamma _4 \left( {\rho ^{(v,k)}} \right) ^{2}+1+2\left( {\rho ^{(v,k)}} \right) ^{2}} \right) . \end{aligned}$$

The polynomial DR (20) of degree \(s=1\) for signal detection in a correlated non-Gaussian noise is defined as

$$\begin{aligned} \Lambda (\mathbf{X})=\sum _{v=1}^n {k_{1v} } x_v +k_0 {\begin{array}{l} {H_1 } \\ > \\ < \\ {H_0 } \\ \end{array} }0, \end{aligned}$$
(46)

where the unknown coefficient \(k_0 \) is defined in (27), and it can be written as

$$\begin{aligned} k_0 =-\frac{1}{2}\sum _{v=1}^n {k_{1v} } a. \end{aligned}$$
(47)

Furthermore, using the adapted new moment quality criterion, the unknown coefficients \(k_{iv} \) are obtained from the system of equations (37), and can be written as

$$\begin{aligned} \sum _{k=1}^n {k_{1k} \left( {F_{1,1}^{\left( {v,k} \right) } (H_0 )+F_{1,1}^{\left( {v,k} \right) } (H_1 )} \right) } =m_1 -u_1 , \quad v=\overline{1,n} , \end{aligned}$$
(48)

where \(F_{\left( {i,j} \right) }^{\left( {v,k} \right) } \left( {H_0 } \right) =u_{\left( {i,j} \right) }^{\left( {v,k} \right) } -u_i u_j , F_{\left( {i,j} \right) }^{\left( {v,k} \right) } \left( {H_1 } \right) =m_{\left( {i,j} \right) }^{\left( {v,k} \right) } -m_i m_j \).

Note that Eq. (48) is then transformed into the form

$$\begin{aligned} \sum _{k=1}^n {k_{1k}\, 2\chi _2 \rho ^{(v,k)}} =a, \quad v=\overline{1,n} . \end{aligned}$$
(49)

For a linear DR (46), the optimal coefficient is defined as \(k_{1v} =\frac{q^{0.5}A_v }{\Delta _1 }\), where \( q={a^{2}}/{\chi _2 }\) is a signal-to-noise ratio (SNR), \(A_v \) is a determinant obtained from \(\Delta _1 \) when the v-th column is replaced by a column consisting of ones only; \(\Delta _1 \) is as follows:

$$\begin{aligned} \Delta _1= & {} \det \left\| {F_{\left( {1,1} \right) }^{\left( {v,k} \right) } } \right\| =2\chi _2 \det \left\| {\rho ^{(\tau _{v,k} )}} \right\| \nonumber \\ {}= & {} 2\chi _2 \det \left\| {{\begin{array}{llll} 1&{} {\rho ^{(\tau _{1,2} )}}&{} {\ldots }&{} {\rho ^{(\tau _{1,n} )}} \\ {\rho ^{(\tau _{2,1} )}}&{} 1&{} {\ldots }&{} {\rho ^{(\tau _{2,n} )}} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\rho ^{(\tau _{n,1} )}}&{} {\rho ^{(\tau _{n,2} )}}&{} {\ldots }&{} 1 \\ \end{array} }} \right\| , v,k=\overline{1,n} . \end{aligned}$$
(50)

Here, \(F_{\left( {1,1} \right) }^{\left( {v,k} \right) } =F_{\left( {1,1} \right) }^{\left( {v,k} \right) } \left( {H_0 } \right) +F_{\left( {1,1} \right) }^{\left( {v,k} \right) } \left( {H_1 } \right) =2\chi _2 \rho ^{(v,k)}\hbox { and }\rho ^{\left( {v,\hbox {k}} \right) } =\rho ^{\left( {\tau _{v,\mathrm{k}} } \right) } =e^{-A\left| {v-k} \right| }\) is the exponential correlation function.

Then the linear DR (46) becomes

$$\begin{aligned} \sum _{v=1}^n {A_v } \left( {x_v -\frac{a}{2}} \right) {\begin{array}{l} {H_1 } \\ > \\ < \\ {H_0 } \\ \end{array} }0. \end{aligned}$$
(51)

If one takes into account (39) and (40), the value of the information extracted from the samples based on hypotheses \(H_0 \), \(H_1 \), is defined as

$$\begin{aligned} I_1 =\frac{q}{\Delta _1 }\sum _{v=1}^n {A_v ,} \end{aligned}$$
(52)

and the quality criterion value \(\mathrm{Ku}1(E,G,\rho )\) is defined as the inverse of (52).

It is easy to show that if the sample values are independent, i.e \(\rho ^{(\tau _{v,k} )}=0\, (\hbox {when }v\ne k)\) and \(\rho ^{(\tau _{v,k} )}=1\, (\hbox {when }v=k)\), then the moment quality criterion \(\mathrm{Ku}1(E,G,\rho )\) can be written as

$$\begin{aligned} \mathrm{Ku}1\left( {E,G} \right) =2/{nq} \end{aligned}$$
(53)

and the DR (51) transforms to the well-known classical form

$$\begin{aligned} \frac{1}{n}\sum _{v=1}^n {x_v -\frac{a}{2}} {\begin{array}{l} {H_1 } \\ > \\ < \\ {H_0 } \\ \end{array} }0. \end{aligned}$$
(54)
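The reduction of the adapted DR (51) to the classical DR (54) for weakly correlated samples can also be checked numerically: for a large decay parameter A of the exponential correlation function the weights \(A_v /\Delta _1 \) become nearly equal, so the weighted sum in (51) is proportional to the sample mean in (54). A short sketch follows; the numerical solve stands in for the determinant formulation and is an assumption of the sketch.

```python
# Sketch: for large A the exponential correlation matrix approaches the identity,
# the adapted weights become (almost) equal, and DR (51) reduces to DR (54).
import numpy as np

def adapted_weights(n, A, chi2=1.0):
    t = np.arange(n)
    rho = np.exp(-A * np.abs(t[:, None] - t[None, :]))
    return np.linalg.solve(2.0 * chi2 * rho, np.ones(n))   # proportional to A_v / Delta_1

for A in (0.1, 5.0):
    w = adapted_weights(50, A)
    print(A, w.std() / w.mean())      # noticeable spread for A = 0.1, essentially zero for A = 5
```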

It has been shown in [2] that the DR (54) can be obtained from the likelihood ratio under the assumption of non-correlated Gaussian noise. The first and second order moments and joint cumulant \(\chi _{1,1}^{\left( \tau \right) } \) are used for the synthesis of the linear DR (51). These parameters are typical characteristics of a Gaussian PDF. Note that the DR (51) does not take into account the non-Gaussian noise distribution. Let us consider the case when the degree of the polynomial is equal to\(\,s=2\). The DR will then be non-linear, and in the general case it takes the form

$$\begin{aligned} \sum _{v=1}^n {k_{1v} x_v +\sum _{v=1}^n {k_{2v} x_v^2 +k_0 {\begin{array}{l} {H_1 } \\ > \\ < \\ {H_0 } \\ \end{array} }} 0} , \end{aligned}$$
(55)

where the optimal coefficients, defined from the equation systems (37), are equal to

$$\begin{aligned} k_{1v} =\frac{B_v }{\Delta _2 }, v=\overline{1,n} ,\, k_{2v} =\frac{C_v }{\Delta _2 },\, v=\overline{n+1,\;2n} . \end{aligned}$$
(56)

Here, \(B_v \) is the determinant obtained from \(\Delta _2 \) when the v-th column \((v=\overline{1,n} )\) is replaced by a column with elements \(\left( {q^{0.5},q^{0.5},\ldots , q^{0.5}, q, q,\ldots ,q} \right) \); \(C_v \) is defined in a similar way for \(v=\overline{n+1,\;2n} \), and \(\Delta _2 \) can be written as

$$\begin{aligned} \Delta _2 =\det \left| { {\begin{array}{ll} {\left\| {F_{1,1}^{\left( {\tau _{v,k} } \right) } } \right\| }&{} {\left\| {F_{1,2}^{\left( {\tau _{v,k} } \right) } } \right\| } \\ {\left\| {F_{2,1}^{\left( {\tau _{v,k} } \right) } } \right\| }&{} {\left\| {F_{2,2}^{\left( {\tau _{v,k} } \right) } } \right\| } \\ \end{array} } } \right| , \quad v,k=\overline{1,n} . \end{aligned}$$
(57)

In general, the threshold \(k_0 \) of DR (55) with the coefficients \(k_{1v} \) and \(k_{2v} \) is defined as

$$\begin{aligned} k_0 =-\frac{1}{2\Delta _2 }\sum _{v=1}^n {\left( {q^{0.5}B_v +C_v \left( {q+1} \right) } \right) } . \end{aligned}$$
(58)

The DR (55) takes into account the correlated non-Gaussian noise distribution in the form of the skewness \(\gamma _3 \) and kurtosis \(\gamma _4 \) coefficients as well as the joint cumulants \(\chi _{i,j}^{\left( \tau \right) } , i,j=\overline{1,2} \).

Thus, for the coefficients \(k_{1v} \hbox { and }k_{2v} \), the value of the information extracted from the samples for discriminating between hypotheses \(H_0 , H_1 \), using DR (55), equals

$$\begin{aligned} I_2 =\frac{1}{\Delta _2 }\sum _{v=1}^n {\left( {q^{0.5}B_v +qC_v } \right) } . \end{aligned}$$
(59)

Similarly, it is possible to synthesize a non-linear polynomial DR with a higher degree s. The block diagram of the general DR (55) of order \(s=2\) is shown in Fig. 1.

Fig. 1

Block diagram of implementation of a power polynomial DR

The analysis of the efficiency of the algorithm is obtained from the values of the moment quality criterion \(\mathrm{Ku}1(E,G,\rho )\) or from the comparison of the extracted information \(I_{\mathrm{Ku}s} \) for the linear and nonlinear DRs of degree \(s=1,2\). It will be shown later that the nonlinear processing of samples by the nonlinear DR \((s=2)\) can increase the signal detection efficiency in correlated non-Gaussian noises.
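For completeness, the degree-2 case can be assembled and solved numerically as follows: the blocks of (57) are formed from the moment models (44)-(45) for the exponential correlation function and the system (37) is solved with a standard linear solver. The ordering of the 2n unknowns and the use of a numerical solve in place of the Cramer/Schur formulation are assumptions of this sketch.

```python
# Sketch of the degree-2 polynomial DR (55): assemble the block matrix of Eq. (57)
# from the moment models (44)-(45) and solve the coefficient system (37).
import numpy as np

def quadratic_dr_coefficients(n, a, chi2, gamma3, gamma4, A):
    t = np.arange(n)
    rho = np.exp(-A * np.abs(t[:, None] - t[None, :]))
    c32 = gamma3 * chi2**1.5
    # Blocks F_{i,j}(H0) + F_{i,j}(H1), obtained from (44)-(45)
    F11 = 2.0 * chi2 * rho
    F12 = 2.0 * c32 * rho**1.5 + 2.0 * a * chi2 * rho
    F22 = (2.0 * chi2**2 * (gamma4 + 2.0) * rho**2
           + 4.0 * a**2 * chi2 * rho + 4.0 * a * c32 * rho**1.5)
    F = np.block([[F11, F12], [F12.T, F22]])
    rhs = np.concatenate([np.full(n, a), np.full(n, a**2)])   # m_i - u_i, i = 1, 2
    k = np.linalg.solve(F, rhs)                               # Eq. (37)
    k1, k2 = k[:n], k[n:]
    k0 = -0.5 * np.sum(k1 * a + k2 * (a**2 + 2.0 * chi2))     # threshold, from Eq. (27)
    return k1, k2, k0

def quadratic_dr(x, k1, k2, k0):
    return np.sum(k1 * x + k2 * x**2) + k0 > 0.0              # DR (55)

k1, k2, k0 = quadratic_dr_coefficients(n=50, a=1.0, chi2=1.0, gamma3=0.8, gamma4=0.8, A=0.1)
print(np.sum(k1 * 1.0 + k2 * 1.0**2))                         # extracted information I_2, cf. (59)
```

The printed value corresponds to \(I_2 =E_1 -E_0 \) for the obtained coefficients and can be compared with \(I_1 \) of the linear case, in the spirit of the efficiency comparison described above.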

5 Results and Discussion

Using the new concept of the moment quality criterion and the new methods, we have developed non-linear algorithms, computer tools and a new strategy for addressing the problem of signal detection in correlated non-Gaussian noise. In addition, we have developed a new generator of correlated non-Gaussian processes to carry out the simulation. The generator is based on a Gaussian Mixture Model (GMM) adapted to correlated processes. The GMM is based on the use of multiple Gaussian generators with given parameters of the PDF and is defined as:

$$\begin{aligned} P\left( {x/{\vec {\vartheta }}} \right) =\sum _{i=1}^r {\frac{\delta _i }{\sqrt{2\pi \sigma _i^2 }}\exp \left\{ {-\frac{\left( {x-m_i } \right) ^{2}}{2\sigma _i^2 }} \right\} } , \end{aligned}$$
(60)

where \(m_i , \sigma _i^2 \) are the mean and the variance of the Gaussian components, respectively, and \(\delta _i \) are the weighting coefficients of the Gaussian components, subject to the condition

$$\begin{aligned} \sum _{i=1}^r {\delta _i } =1. \end{aligned}$$
(61)

Consider the case of the two Gaussian components mentioned above \((r=2)\). The resulting multidimensional non-Gaussian PDF is then defined as follows

$$\begin{aligned} \begin{array}{l} P_n (\xi (t_1 ),\ldots ,\xi (t_n ))=\frac{\delta }{\sqrt{(2\pi \sigma _1^2 )^{n}D}}\exp \left[ -\frac{1}{2\sigma _1^2 D}\sum _{\mu ,v=1}^n {D_{\mu v} (\xi _\mu -m_1 )(\xi _v -m_1 )} \right] \\ \quad +\,\frac{1-\delta }{\sqrt{(2\pi \sigma _2^2 )^{n}D}}\exp \left[ -\frac{1}{2\sigma _2^2 D}\sum _{\mu ,v=1}^n {D_{\mu v} (\xi _\mu -m_2 )(\xi _v -m_2 )} \right] \\ \end{array}, \end{aligned}$$
(62)

where D is the n-th order determinant whose elements are the correlation coefficients, and \(D_{\mu \nu } \) is the algebraic complement (cofactor) of D.

A block diagram of the Gaussian mixture generator is presented in Fig. 2. The output random sequence \((\vec {x}_{kor} )\) is generated based on the values of the initial input moments \((\alpha _1 , \alpha _2 , \ldots , \alpha _n )\), the number of samples (n), and the correlation function \(R_{ij} \). The blocks of the diagram perform the following functions: (1) is an arithmetic unit for calculating the distribution parameters of the Gaussian generators, with the number of samples \((1-\delta )n\) going into unit (2) and the number \(\delta n\) going into block (3); (4) and (5) are multipliers of the random sequences by \(\sigma _1 \) and \(\sigma _2 \), respectively; (6) and (7) are blocks for adding the means \(m_1 \) and \(m_2 \) to the random sequences, respectively; (8) is a mixer of the two random sequences; (9) is a multiplier of the random sequence and the correlation function; (10) is a generator of the correlation relationships of the samples; (11) is a cumulative adder; and (12) forms the output random non-Gaussian correlated sequence.

Fig. 2

Block diagram of correlated Gaussian mixture generator
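A simplified sketch of the generator idea is given below: samples are drawn from the two-component mixture (60)-(61) and the prescribed exponential correlation is then imposed by a Cholesky coloring step. The coloring shortcut is an assumption made for brevity; it reproduces the target correlation matrix exactly but preserves the mixture's higher-order cumulants only approximately, whereas the actual generator follows the block structure of Fig. 2.

```python
# Simplified sketch of a correlated two-component Gaussian-mixture generator:
# i.i.d. mixture samples (Eqs. (60)-(61)) are colored by the Cholesky factor of the
# exponential correlation matrix.
import numpy as np

def correlated_mixture(n, delta, m, sigma, A, seed=None):
    """n samples; delta, m, sigma are length-2 sequences of mixture weights, means, std devs."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(2, size=n, p=delta)                     # which Gaussian component
    x = rng.normal(np.asarray(m)[comp], np.asarray(sigma)[comp])
    x = (x - x.mean()) / x.std()                              # centre and normalize
    t = np.arange(n)
    R = np.exp(-A * np.abs(t[:, None] - t[None, :]))          # exponential correlation matrix
    return np.linalg.cholesky(R) @ x                          # impose the correlation

x = correlated_mixture(1000, delta=[0.3, 0.7], m=[-1.0, 0.4], sigma=[1.5, 0.6], A=0.01)
```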

The simulation results for the uncorrelated (a) and correlated (b) non-Gaussian processes are shown in Fig. 3. The correlation fields of these processes are shown in Fig. 3c, d, respectively. It can be seen that the apparent randomness of the correlated non-Gaussian process is reduced, which is the result of the relationship between sample values.

Fig. 3

Uncorrelated (a) and correlated (b) non-Gaussian asymmetrical-excess processes and correlation fields (c, d), respectively, if \(n=1000, A=0.01\) (b, d), \(\gamma _3 =0.8, \gamma _4 =0.8\)

In this paper we have developed a linear DR for signal detection in correlated non-Gaussian noise. In addition, values of the moment quality criterion \(\mathrm{Ku}1\left( {E,G,\rho } \right) \), obtained as the inverse of (52), were computed. It is shown that the linear adapted DR (51) is transformed into the classical linear DR (54) under the assumption of uncorrelated noise (when the parameter A of the exponential correlation function satisfies \(A>5)\). The values of the criteria \(\mathrm{Ku}1\left( {E,G,\rho } \right) \) and \(\mathrm{Ku}1\left( {E,G} \right) \) are then also the same as in uncorrelated noise (Fig. 4).

Using adequate mathematical models of random processes and methods of signal processing allows us to improve the efficiency of signal detection in correlated non-Gaussian noise. The efficiency of the adapted (51) and classical (54) DRs is shown in Fig. 5. It is clear that the error probabilities of signal detection for the adapted DR are smaller than for the classical DR. The compared results depend on the correlation of the random process. The efficiency of both DRs is the same for uncorrelated processes, for example when A>5.

Fig. 4

The values of the moment quality criterion of the linear adapted (*) and linear classic \((\circ )\) DR signal detection from SNR (q) when \(A=0.1\) and \(A=5\)

Fig. 5

Simulation of the probability errors of the linear adapted \((\circ )\) and linear classic (*) DR in correlated non-Gaussian noise from SNR (q) when \(A=0.1\) and \(A=1\)
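A compact Monte Carlo sketch in the spirit of the comparison of Fig. 5 is given below. The noise generator (a Cholesky-colored two-component mixture) and all parameter values are illustrative assumptions and do not reproduce the simulation setup used for the figures.

```python
# Sketch: empirical error rates of the adapted DR (51) and the classical DR (54)
# in correlated, skewed noise (colored two-component mixture, unit variance).
import numpy as np

rng = np.random.default_rng(3)
n, a, A, trials = 50, 0.6, 0.1, 4000
t = np.arange(n)
R = np.exp(-A * np.abs(t[:, None] - t[None, :]))
L = np.linalg.cholesky(R)
w = np.linalg.solve(2.0 * R, np.ones(n))                   # adapted weights (scale-free), cf. (49)

def skewed_correlated_noise():
    comp = rng.choice(2, size=n, p=[0.3, 0.7])
    means, sigmas = np.array([-1.4, 0.6]), np.array([1.3, 0.6])
    x = rng.normal(means[comp], sigmas[comp])
    return L @ ((x - x.mean()) / x.std())                  # zero mean, unit variance, correlated

adapted_err = classical_err = 0
for _ in range(trials):
    eta = skewed_correlated_noise()
    for x, signal_present in ((eta, False), (a + eta, True)):
        adapted = np.sum(w * (x - a / 2.0)) > 0.0          # DR (51)
        classical = np.mean(x) - a / 2.0 > 0.0             # DR (54)
        adapted_err += (adapted != signal_present)
        classical_err += (classical != signal_present)

print(adapted_err / (2 * trials), classical_err / (2 * trials))
```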

Figures 6 and 7 show the ratio of the extracted information values \(I_1 /I_2 \) for the discrimination of hypotheses \(H_0 \) and \(H_1 \) as a function of the SNR and of the skewness coefficient \(\gamma _3 \). We have used the exponential correlation function for the simulation. It can be seen that the ratio is less than one; therefore, the extracted information value \(I_2 \) (59) for the non-linear DR is greater than the extracted information value \(I_1 \) (52) for the linear DR. Accordingly, the value of the moment quality criterion \(\mathrm{Ku}1\left( {E,G,\rho } \right) _{s=2} \) (DR of degree \(s=2)\) is less than \(\mathrm{Ku}1\left( {E,G,\rho } \right) _{s=1} \) (DR of degree \(s=1)\), and the efficiency of the non-linear DR (55) is better than that of the linear DR (51). Smaller values of the criterion correspond to smaller error probabilities of the polynomial DR.

As can be seen, for Gaussian noise, when the skewness and kurtosis coefficients are equal to zero \((\gamma _3 =\gamma _4 =0)\), the values of the criterion for \(s=1\) and \(s=2\) are the same \((I_1 /I_2 =1)\). Taking into account the asymmetry of the distribution of the random variables \((\gamma _3 \ne 0)\), the values of the criterion \(\mathrm{Ku}1\left( {E,G,\rho } \right) _{s=2} \) for the nonlinear DR are lower in comparison with the values of the criterion \(\mathrm{Ku}1\left( {E,G,\rho } \right) _{s=1} \). Smaller values of the criterion \(\mathrm{Ku}1\left( {E,G,\rho } \right) _{s=2} \) correspond to a decrease of the probabilities of errors of the first and second kind for the DR. For example, for \(\gamma _3 =1.3\) the error probability of the nonlinear DR decreased approximately 2 times \((q=1\) or SNR = 0 dB, kurtosis coefficient \(\gamma _4 =0\) and \(n=100)\) and 1.4 times if \(\gamma _4 =2\) (Figs. 6, 7). The efficiency of signal processing improves as the degree of the polynomial DR increases. For convenience we write “gama3” and “gama4” instead of \(\gamma _3 \) and \(\gamma _4 \) in Figs. 6 and 7.

Fig. 6

Comparison of the ratio of the extracted information \(I_1 /I_2 \) about discrimination of hypotheses \(H_0 \) and \(H_1 \) from q (SNR, dB) using the polynomial DR of order \(s=1,2\), where \(A=0.1, \gamma _4 =0\) and \(\gamma _4 =2\)

Fig. 7

Comparison of the ratio of the extracted information \(I_1 /I_2 \) about discrimination of hypotheses \(H_0 \) and \(H_1 \) from the skewness coefficient \(\gamma _3 \) (gama3) using the polynomial DR of order \(s=1,2\), where \(A=0.1, \gamma _4 =0\) and \(\gamma _4 =2\)

It has been shown in this paper that a significant improvement in signal processing efficiency is obtained for small values of parameter q (SNR) and for the boundary values of the asymmetry coefficient \(\gamma _3 (\gamma _3 \le \sqrt{\gamma _4 +2})\) [10, 14]. Research also focused on other types of correlated noise: asymmetrical, excess and asymmetrical-excess non-Gaussian. For these cases, the efficiency of signal processing improved in comparison to the well-known results under the assumption of Gaussian noise.

6 Conclusions

The complexity associated with the description of non-Gaussian processes in the theory of signal processing requires a new approach toward solving the problems of signal detection. The approach described herein is based on the application of the moment-cumulant description of random processes and the moment quality criterion for statistical hypotheses testing. New mathematical models of correlated non-Gaussian processes have been developed. An adaptation of the moment quality criterion of upper bounds of error probability was also proposed. Furthermore, power polynomial algorithms were developed for the correlated non-Gaussian processes based on the new method. This approach enables description of the characteristics of correlated non-Gaussian stochastic processes while taking into account the cumulant coefficients of the third and higher orders as well as the joint cumulants. An appropriate description of such processes and non-linear polynomial processing of the sample values by the DR \((s=2)\) allow the signal detection efficiency to be increased relative to the well-known results.