1 Introduction

Traditional direction of arrival (DOA) estimation originates in the 1960s and is widely used in radar [1,2,3,4,5], underwater detection [6,7,8], and mobile communication [9,10,11,12,13,14,15]. Generally speaking, most direction finding algorithms require accurate knowledge of the array manifold and are very sensitive to errors in the sensor channels. However, with present manufacturing and processing technology, perturbations are often inevitable in applications, caused for example by temperature, humidity, vibration, and device aging, all of which degrade estimation performance. The main errors in array signal processing include mutual coupling, gain-phase uncertainty, and sensor position errors, so the array needs to be calibrated.

Existing calibration methods can be categorized as active correction and self-correction. The former needs a correction source in a known orientation; it has low computational cost and a wide calibration range, but there is often some deviation between the direction of the actual correction source and the preset value. Self-correction does not need a correction source; it usually estimates the DOA and the array errors simultaneously according to some criterion. This kind of algorithm has low cost and great application potential: Hawes introduced a Gibbs sampling approach based on a Bayesian compressive sensing Kalman filter for DOA estimation with mutual coupling effects; it is shown to be useful when the target moves into the endfire region of the array [16]. Rocca calculated the DOA of multiple sources by processing the data collected in a single time snapshot at the terminals of a linear antenna array with mutual coupling [17]. Based on sparse signal reconstruction, Basikolo developed a simple mutual coupling compensation method for nested sparse circular arrays, which differs from previous calibrations for the uniform linear array (ULA) [18]. Elbir offered a new data transformation algorithm applicable to three-dimensional arrays via decomposition of the mutual coupling matrix [19, 20].

For the gain-phase error, Lee used the covariance approximation technique for spatial spectrum estimation with a ULA; it estimates the DOA together with the gain-phase uncertainty of the array channels [21]. A. F. Liu introduced a calibration algorithm based on the eigendecomposition of the covariance matrix; it is independent of the phase error and performs well in spite of array errors [22]. Cao addressed a direction finding method based on the fourth-order cumulant (FOC); it is suitable for spatially colored noise backgrounds [23]. In [24], the Toeplitz structure of the array is employed to deal with the gain error, and then sparse least squares is utilized to estimate the phase error. In recent years, spatial spectrum estimation in the presence of multiple types of array errors has also been researched; Z. M. Liu described an eigenstructure-based algorithm that estimates the DOA as well as the mutual coupling and gain-phase corrections of every channel [25]. References [26, 27] respectively discuss calibration techniques for three kinds of errors existing in the array simultaneously. For the same problem, Boon obtained the mutual coupling, gain-phase, and sensor position errors through maximum likelihood estimation, but this requires several calibration sources in known orientations [28].

In the past few years, DOA estimation for mixtures of far-field and near-field sources (FS and NS) has attracted increasing attention and developed rapidly. Liang developed a two-stage cumulant-based MUSIC algorithm that avoids parameter pairing and aperture loss [29]. In [30], based on FOC and the estimation of signal parameters via rotational invariance techniques (ESPRIT), K. Wang proposed a new localization algorithm for mixed signals. In [31, 32], two localization methods based on sparse signal reconstruction are provided by Ye and B. Wang, respectively; they achieve improved accuracy and can resolve signals that are close to each other. However, the calibration methods above apply only to scenarios with FS, and there are few published works on DOA estimation for mixed signals in the presence of more than one kind of array error.

This paper considers the problem of DOA estimation of the FS in mixed sources received by an array with mutual coupling and gain-phase errors. By a matrix transformation, the array errors are separated from the spatial spectrum function, and the DOAs can then be obtained by searching the peaks of the modified spatial spectrum, so that explicit array calibration is avoided. Moreover, the approach remains applicable when the FS and NS are close to each other.

2 Methods

Before modeling, we assume that the array signal satisfies the following conditions:

  1. The incident signals are narrowband, mutually independent, zero-mean stationary processes.

  2. The noise on each sensor is a zero-mean white Gaussian process, independent across sensors and independent of the incident signals.

  3. The sensor array is isotropic.

  4. To ensure that every column of the array manifold is linearly independent of the others, the numbers of FS K1 and NS K2 are known beforehand, where K1 satisfies K1 < M and K1 + K2 < 2M + 1, with 2M + 1 being the number of sensors.

2.1 Data model

The data model is given in Fig. 1. Consider K1 far-field signals \( {s}_{k_1}\left({k}_1=1,2,\cdots, {K}_1\right) \) and K2 near-field signals \( {s}_{k_2}\left({k}_2=1,2,\cdots, {K}_2\right) \) impinging on a (2M + 1)-element array from directions \( \left[{\theta}_1,\cdots, {\theta}_{K_1},{\theta}_{K_1+1},\cdots, {\theta}_K\right] \), and define the 0th element as the reference sensor. Here, K = K1 + K2, d is the inter-element spacing, equal to half of the signal wavelength, and the range between \( {s}_{k_2} \) and the reference sensor is \( {l}_{k_2} \). The received data can then be written as

$$ \mathbf{X}(t)=\mathbf{A}\left(\theta \right)\mathbf{S}(t)+\mathbf{N}(t) $$
(1)

where

$$ \mathbf{X}(t)={\left[{X}_{-M}(t),\cdots, {X}_{-m}(t),\cdots, {X}_0(t),\cdots, {X}_m(t),\cdots, {X}_M(t)\right]}^{\mathrm{T}} $$
(2)

here, Xm(t) is the received data on the mth channel, and A(θ) is the array manifold

$$ {\displaystyle \begin{array}{l}\mathbf{A}\left(\theta \right)=\Big[{\mathbf{a}}_{FS}\left({\theta}_1\right),\cdots, {\mathbf{a}}_{FS}\left({\theta}_{k_1}\right),\cdots, {\mathbf{a}}_{FS}\left({\theta}_{K_1}\right),\\ {}\kern3.25em {\mathbf{a}}_{NS}\left({\theta}_{K_1+1}\right),\cdots, {\mathbf{a}}_{NS}\left({\theta}_{k_2}\right),\cdots, {\mathbf{a}}_{NS}\left({\theta}_K\right)\Big]\\ {}\kern2em =\left[{\mathbf{A}}_{FS},{\mathbf{A}}_{NS}\right]\end{array}} $$
(3)

where \( {\mathbf{A}}_{FS}=\left[{\mathbf{a}}_{FS}\left({\theta}_1\right),\cdots, {\mathbf{a}}_{FS}\left({\theta}_{k_1}\right),\cdots, {\mathbf{a}}_{FS}\left({\theta}_{K_1}\right)\right] \) is the array manifold of the FS for the ideal case, and \( {\mathbf{a}}_{FS}\left({\theta}_{k_1}\right) \) is the steering vector of \( {s}_{k_1} \); \( {\mathbf{A}}_{NS}=\left[{\mathbf{a}}_{NS}\left({\theta}_{K_1+1}\right),\cdots, {\mathbf{a}}_{NS}\left({\theta}_{k_2}\right),\cdots, {\mathbf{a}}_{NS}\left({\theta}_K\right)\right] \) is the array manifold of the NS for the ideal case, and \( {\mathbf{a}}_{NS}\left({\theta}_{k_2}\right) \) is the steering vector of \( {s}_{k_2} \). Therefore,

$$ {\displaystyle \begin{array}{r}{\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)=\Big[\exp \left(-\mathrm{j}2\uppi f{\tau}_{-M}\left({\theta}_{k_1}\right)\right),\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_{-m}\left({\theta}_{k_1}\right)\right),\\ {}\cdots, 1,\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_m\left({\theta}_{k_1}\right)\right),\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_M\left({\theta}_{k_1}\right)\right)\Big]{}^{\mathrm{T}}\\ {}\left({k}_1=1,2,\cdots, {K}_1\right)\end{array}} $$
(4)
Fig. 1. Data model

where f is the frequency, and

$$ {\displaystyle \begin{array}{l}{\tau}_m\left({\theta}_{k_1}\right)=m\frac{d}{c}\sin {\theta}_{k_1}\\ {}\Big(m=-M,\cdots, -m,\cdots, 0,\cdots, m,\cdots, M;\\ {}{k}_1=1,2,\cdots, {K}_1\Big)\end{array}} $$
(5)

is the propagation delay of the k1-th (k1 = 1, 2, ⋯, K1) FS at sensor m with respect to sensor 0. In the same way, we have

$$ {\displaystyle \begin{array}{r}{\mathbf{a}}_{NS}\left({\theta}_{k_2}\right)=\Big[\exp \left(-\mathrm{j}2\uppi f{\tau}_{-M}\left({\theta}_{k_2}\right)\right),\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_{-m}\left({\theta}_{k_2}\right)\right),\\ {}\cdots, 1,\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_m\left({\theta}_{k_2}\right)\right),\cdots, \exp \left(-\mathrm{j}2\uppi f{\tau}_M\left({\theta}_{k_2}\right)\right)\Big]{}^{\mathrm{T}}\\ {}\left({k}_2=1,2,\cdots, {K}_2\right)\end{array}} $$
(6)

By examining the geometry in Fig. 1, we have

$$ {\tau}_m\left({\theta}_{k_2}\right)=\frac{l_{k_2}-\sqrt{\ {l}_{k_2}^2+{(md)}^2-2{l}_{k_2} md\sin {\theta}_{k_2}}}{c} $$
(7)

which is the propagation delay of the NS \( {s}_{k_2} \) at sensor m with respect to sensor 0. According to a second-order Taylor series expansion [33], Eq. (7) can be approximated as

$$ {\tau}_m\left({\theta}_{k_2}\right)=-\frac{m^2{d}^2}{4{l}_{k_2}c}\cos 2{\theta}_{k_2}+\frac{1}{c} md\sin {\theta}_{k_2}-\frac{m^2{d}^2}{4{l}_{k_2}c} $$
(8)
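To make Eqs. (4)-(8) concrete, a minimal numpy sketch of the far-field and near-field steering vectors is given below; the propagation speed constant, the array half-size M, the spacing d, and the carrier frequency f are inputs chosen by the user and are not fixed by the text.

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def a_fs(theta, M, d, f):
    """Far-field steering vector of Eqs. (4)-(5) for a (2M+1)-element ULA."""
    m = np.arange(-M, M + 1)                       # sensor indices -M, ..., M
    tau = m * d * np.sin(theta) / C                # Eq. (5): propagation delays
    return np.exp(-1j * 2 * np.pi * f * tau)       # Eq. (4); the m = 0 entry equals 1

def a_ns(theta, l, M, d, f, exact=True):
    """Near-field steering vector of Eq. (6) for a source at range l."""
    m = np.arange(-M, M + 1)
    if exact:                                      # exact delay, Eq. (7)
        tau = (l - np.sqrt(l**2 + (m * d)**2 - 2 * l * m * d * np.sin(theta))) / C
    else:                                          # Taylor (Fresnel) approximation, Eq. (8)
        tau = (m * d * np.sin(theta) / C
               - m**2 * d**2 * np.cos(2 * theta) / (4 * l * C)
               - m**2 * d**2 / (4 * l * C))
    return np.exp(-1j * 2 * np.pi * f * tau)
```

For ranges l that are large compared with the array aperture, the exact and approximated delays agree closely, so the two options give nearly identical steering vectors.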

In (1), the signal matrix is

$$ {\displaystyle \begin{array}{l}\mathbf{S}(t)={\left[{\mathbf{S}}_{FS},{\mathbf{S}}_{NS}\right]}^{\mathrm{T}}\\ {}\kern1.5em ={\left[{\mathbf{S}}_1,\cdots, {\mathbf{S}}_{k_1},\cdots, {\mathbf{S}}_{K_1},{\mathbf{S}}_{K_1+1},\cdots, {\mathbf{S}}_{k_2},\cdots, {\mathbf{S}}_K\right]}^{\mathrm{T}}\end{array}} $$
(9)

where \( {\mathbf{S}}_{FS}={\left[{\mathbf{S}}_1,\cdots, {\mathbf{S}}_{k_1},\cdots, {\mathbf{S}}_{K_1}\right]}^{\mathrm{T}} \) is the signal matrix of the FS, and \( {\mathbf{S}}_{NS}={\left[{\mathbf{S}}_{K_1+1},\cdots, {\mathbf{S}}_{k_2},\cdots, {\mathbf{S}}_K\right]}^{\mathrm{T}} \) is that of the NS. N(t) is the Gaussian white noise matrix, so the covariance of the received data for the ideal case is

$$ {\displaystyle \begin{array}{l}\mathbf{R}=\frac{1}{B}\mathbf{X}(t){\mathbf{X}}^{\mathrm{H}}(t)\\ {}\kern0.5em =\frac{1}{B}\mathbf{A}\left(\theta \right)\mathbf{S}(t){\mathbf{S}}^{\mathrm{H}}(t){\mathbf{A}}^{\mathrm{H}}\left(\theta \right)+{\sigma}^2\mathbf{I}\\ {}\kern0.5em ={\mathbf{R}}_{FS}+{\mathbf{R}}_{NS}+{\sigma}^2\mathbf{I}\end{array}} $$
(10)

where

$$ {\mathbf{R}}_{FS}=\frac{1}{B}{\mathbf{A}}_{FS}{\mathbf{S}}_{FS}{\mathbf{S}}_{FS}^{\mathrm{H}}{\mathbf{A}}_{FS}^{\mathrm{H}} $$
(11)
$$ {\mathbf{R}}_{NS}=\frac{1}{B}{\mathbf{A}}_{NS}{\mathbf{S}}_{NS}{\mathbf{S}}_{NS}^{\mathrm{H}}{\mathbf{A}}_{NS}^{\mathrm{H}} $$
(12)

where B is the number of snapshots and I is the (2M + 1) × (2M + 1) identity matrix.
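To illustrate how the snapshots of Eq. (1) lead to the sample covariance of Eq. (10), a small simulation sketch follows; the unit-power complex Gaussian source model and the seed are assumptions made only for this example.

```python
import numpy as np

def sample_covariance(A, B, sigma2, seed=0):
    """Simulate B snapshots of X(t) = A S(t) + N(t) (Eq. (1)) and return R of Eq. (10).

    A: (2M+1) x K array manifold [A_FS, A_NS] as in Eq. (3).
    B: number of snapshots, sigma2: noise power.
    """
    rng = np.random.default_rng(seed)
    n_sensors, K = A.shape
    # zero-mean, unit-power, mutually independent sources (assumption 1)
    S = (rng.standard_normal((K, B)) + 1j * rng.standard_normal((K, B))) / np.sqrt(2)
    # zero-mean white Gaussian noise of power sigma2 (assumption 2)
    N = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_sensors, B))
                               + 1j * rng.standard_normal((n_sensors, B)))
    X = A @ S + N                                  # Eq. (1)
    return X @ X.conj().T / B                      # Eq. (10)
```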

2.2 Array error model

The mutual coupling of a ULA can be expressed by the following banded symmetric Toeplitz matrix W(1)

$$ {\mathbf{W}}_{(1)}=\left[\begin{array}{ccccccc} 1 & {c}_1 & \cdots & {c}_Q & & & \\ {c}_1 & 1 & {c}_1 & \ddots & \ddots & & \\ \vdots & {c}_1 & 1 & \ddots & & \ddots & \\ {c}_Q & \ddots & \ddots & \ddots & \ddots & \ddots & {c}_Q \\ & \ddots & & \ddots & \ddots & \ddots & \vdots \\ & & \ddots & \ddots & \ddots & 1 & {c}_1 \\ & & & {c}_Q & \cdots & {c}_1 & 1 \end{array}\right] $$
(13)

here, cq (q = 1, 2, ⋯, Q) denotes the qth mutual coupling coefficient, and Q represents the degree of freedom of the coupling, i.e., coupling between sensors more than Q spacings apart is neglected.
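A minimal numpy sketch of the banded symmetric Toeplitz structure of Eq. (13); the function name and the example coefficients are placeholders introduced here for illustration.

```python
import numpy as np

def coupling_matrix(c, n_sensors):
    """Mutual coupling matrix W_(1) of Eq. (13) for a (2M+1)-element ULA.

    c: coupling coefficients [c_1, ..., c_Q]; coupling beyond lag Q is zero.
    """
    col = np.zeros(n_sensors, dtype=complex)
    col[0] = 1.0
    col[1:len(c) + 1] = c
    # symmetric Toeplitz: entry (i, j) depends only on |i - j|
    lag = np.abs(np.arange(n_sensors)[:, None] - np.arange(n_sensors)[None, :])
    return col[lag]

# e.g. coupling_matrix([0.3 + 0.2j, 0.1 - 0.05j], 11) gives an 11 x 11 matrix with Q = 2
```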

The gain-phase perturbation is usually expressed as

$$ {\mathbf{W}}_{(2)}=\operatorname{diag}\left(\ {\left[\ {W}_{-M},\cdots, {W}_{-m},\cdots, 1,\cdots, {W}_m,\cdots, {W}_M\right]}^{\mathrm{T}}\right) $$
(14)

where

$$ {W}_m={\rho}_m{\mathrm{e}}^{\mathrm{j}{\phi}_m} $$
$$ m=-M,\cdots, -m,\cdots, 0,\cdots, m,\cdots, M $$
(15)

where ρm and ϕm are the gain and phase errors of the mth channel, respectively, and they are independent of each other.

Therefore, the steering vector of the kth signal with mutual coupling and gain-phase errors is

$$ {\mathbf{a}}^{\hbox{'}}\left({\theta}_k\right)={\mathbf{W}}_{(1)}{\mathbf{W}}_{(2)}\mathbf{a}\left({\theta}_k\right)=\mathbf{Wa}\left({\theta}_k\right) $$
(16)

here

$$ \mathbf{W}={\mathbf{W}}_{(1)}{\mathbf{W}}_{(2)} $$
(17)
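As a companion to the coupling sketch above, a minimal construction of the gain-phase matrix of Eqs. (14)-(15) and of the combined perturbation W of Eq. (17); the coefficient values in the usage comment are placeholders, not values from the paper.

```python
import numpy as np

def gain_phase_matrix(rho, phi):
    """Diagonal gain-phase matrix W_(2) of Eqs. (14)-(15).

    rho, phi: length-(2M+1) gain and phase errors; the reference channel
    (index M, i.e. sensor 0) has rho = 1 and phi = 0, as in Eq. (14).
    """
    return np.diag(rho * np.exp(1j * phi))

# Usage (placeholder values), with coupling_matrix() from the earlier sketch:
#   M = 5
#   W1 = coupling_matrix([0.3 + 0.2j, 0.1 - 0.05j], 2 * M + 1)
#   W2 = gain_phase_matrix(rho, phi)
#   W = W1 @ W2                      # Eq. (17)
```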

Then the array manifold with array errors can be written

$$ {\displaystyle \begin{array}{l}{\mathbf{A}}^{\hbox{'}}\left(\theta \right)=\Big[{\mathbf{a}}_{FS}^{\prime}\left({\theta}_1\right),\cdots, {\mathbf{a}}_{FS}^{\prime}\left({\theta}_{k_1}\right),\cdots, {\mathbf{a}}_{FS}^{\prime}\left({\theta}_{K_1}\right),\\ {}\kern3.25em {\mathbf{a}}_{NS}^{\prime}\left({\theta}_{K_1+1}\right),\cdots, {\mathbf{a}}_{NS}^{\prime}\left({\theta}_{k_2}\right),\cdots, {\mathbf{a}}_{NS}^{\prime}\left({\theta}_K\right)\Big]\\ {}\kern2.25em =\left[{\mathbf{A}}_{FS}^{\prime },{\mathbf{A}}_{NS}^{\prime}\right]\\ {}\kern2.25em =\mathbf{WA}\left(\theta \right)\end{array}} $$
(18)

where

$$ \mathbf{A}{\hbox{'}}_{FS}={\mathbf{WA}}_{FS}=\left[{\mathbf{a}}_{FS}^{\prime}\left({\theta}_1\right),\cdots, {\mathbf{a}}_{FS}^{\prime}\left({\theta}_{k_1}\right),\cdots, {\mathbf{a}}_{FS}^{\prime}\left({\theta}_{K_1}\right)\right] $$
(19)

\( {\mathbf{a}}_{FS}^{\prime}\left({\theta}_{k_1}\right) \) is the steering vector of \( {s}_{k_1} \), and

$$ \mathbf{A}{\hbox{'}}_{NS}={\mathbf{WA}}_{NS}=\left[{\mathbf{a}}_{NS}^{\prime}\left({\theta}_{K_1+1}\right),\cdots, {\mathbf{a}}_{NS}^{\prime}\left({\theta}_{k_2}\right),\cdots, {\mathbf{a}}_{NS}^{\prime}\left({\theta}_K\right)\right] $$
(20)

\( {\mathbf{a}}_{NS}^{\prime}\left({\theta}_{k_2}\right) \) is the steering vector of \( {s}_{k_2}(t) \), thus the received data with array errors is

$$ {\mathbf{X}}^{\hbox{'}}(t)={\mathbf{A}}^{\hbox{'}}\left(\theta \right)\mathbf{S}(t)+\mathbf{N}(t)=\mathbf{WA}\left(\theta \right)\mathbf{S}(t)+\mathbf{N}(t) $$
(21)

For convenience in the derivation below, we also define the combined array-error vector as

$$ {\displaystyle \begin{array}{l}\mathbf{w}={\mathbf{W}}_{(1)}\Big[\ {\rho}_{-M}{\mathrm{e}}^{\mathrm{j}{\phi}_{-M}},\cdots, {\rho}_{-m}{\mathrm{e}}^{\mathrm{j}{\phi}_{-m}},\cdots, \\ {}\kern3em 1,\cdots, {\rho}_m{\mathrm{e}}^{\mathrm{j}{\phi}_m},\cdots, {\rho}_M{\mathrm{e}}^{\mathrm{j}{\phi}_M}\Big]{}^{\mathrm{T}}\end{array}} $$
(22)

2.3 Constructing spatial spectrum

The covariance with the two kinds of array imperfections is

$$ {\displaystyle \begin{array}{l}{\mathbf{R}}^{\hbox{'}}=\frac{1}{B}{\mathbf{X}}^{\prime }(t){\left({\mathbf{X}}^{\prime }(t)\right)}^{\mathrm{H}}\\ {}\kern1em =\frac{1}{B}{\mathbf{A}}^{\hbox{'}}\left(\theta \right)\mathbf{S}(t){\mathbf{S}}^{\mathrm{H}}(t){\left({\mathbf{A}}^{\prime}\left(\theta \right)\right)}^{\mathrm{H}}+{\sigma}^2\mathbf{I}\\ {}\kern1em =\frac{1}{B}\mathbf{WA}\left(\theta \right)\mathbf{S}(t){\mathbf{S}}^{\mathrm{H}}(t){\mathbf{A}}^{\mathrm{H}}\left(\theta \right){\mathbf{W}}^{\mathrm{H}}+{\sigma}^2\mathbf{I}\\ {}\kern1em =\mathbf{R}{\hbox{'}}_{FS}+\mathbf{R}{\hbox{'}}_{NS}+{\sigma}^2\mathbf{I}\end{array}} $$
(23)

where the covariance of the FS is

$$ {\mathbf{R}}_{FS}^{\prime }=\frac{1}{B}{\mathbf{W}\mathbf{A}}_{FS}{\mathbf{S}}_{FS}{\mathbf{S}}_{FS}^{\mathrm{H}}{\mathbf{A}}_{FS}^{\mathrm{H}}{\mathbf{W}}^{\mathrm{H}} $$
(24)

that of the NS is

$$ {\mathbf{R}}_{NS}^{\prime }=\frac{1}{B}{\mathbf{W}\mathbf{A}}_{NS}{\mathbf{S}}_{NS}{\mathbf{S}}_{NS}^{\mathrm{H}}{\mathbf{A}}_{NS}^{\mathrm{H}}{\mathbf{W}}^{\mathrm{H}} $$
(25)

so the noise eigenvector matrix U′ can be acquired by eigendecomposing R′, and then we are able to plot the spatial spectrum [34] as a function of the DOA of the FS

$$ {\displaystyle \begin{array}{l}{P}_{MU-F}\left(\theta \right)=\frac{1}{{\left({\mathbf{a}}_{FS}^{\prime}\left(\theta \right)\right)}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}{\mathbf{a}}_{FS}^{\prime}\left(\theta \right)}\\ {}\kern3.5em =\frac{1}{{\mathbf{a}}_{FS}^{\mathrm{H}}\left(\theta \right){\mathbf{W}}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}{\mathbf{W}\mathbf{a}}_{FS}\left(\theta \right)}\\ {}\kern3.5em =\frac{1}{Y}\end{array}} $$
(26)
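To make the quantities in Eqs. (23) and (26) concrete, a small numpy sketch that extracts the noise subspace U′ from R′ and evaluates the conventional spectrum; note that Eq. (26) still involves the unknown perturbation W, which is exactly what the transformation in Section 2.4 removes. The function a_fs is the far-field steering vector sketched earlier, and all parameter values are assumptions.

```python
import numpy as np

def noise_subspace(R, K):
    """Noise eigenvector matrix U' from the perturbed covariance R' of Eq. (23).

    K = K1 + K2 sources; the 2M+1-K eigenvectors associated with the smallest
    eigenvalues span the noise subspace.
    """
    _, eigvec = np.linalg.eigh(R)        # eigenvalues returned in ascending order
    return eigvec[:, :R.shape[0] - K]

def p_mu_f(theta_grid, U, W, a_fs, M, d, f):
    """Conventional spectrum of Eq. (26); it still requires the (unknown) W."""
    Pn = U @ U.conj().T                  # projector onto the noise subspace
    P = np.empty(len(theta_grid))
    for i, th in enumerate(theta_grid):
        v = W @ a_fs(th, M, d, f)        # a'_FS(theta) = W a_FS(theta), Eq. (16)
        P[i] = 1.0 / np.real(v.conj() @ Pn @ v)
    return P
```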

2.4 Transforming spectrum function

Evaluated at the signal directions, the denominator of (26) gives

$$ Y=\sum \limits_{k_1=1}^{K_1}\left({\mathbf{a}}_{FS}^{\mathrm{H}}\left({\theta}_{k_1}\right){\mathbf{W}}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}{\mathbf{W}\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\right) $$
(27)

Transforming (27) into another form, we obtain

$$ {\displaystyle \begin{array}{l}Y=\sum \limits_{k_1=1}^{K_1}\ {\mathbf{a}}_{FS}^{\mathrm{H}}\left({\theta}_{k_1}\right){\mathbf{W}}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}{\mathbf{W}\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\\ {}\kern0.5em =\sum \limits_{k_1=1}^{K_1}{\mathbf{w}}^{\mathrm{H}}\left\{{\left(\operatorname{diag}\left({\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\right)\ \right)}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}\operatorname{diag}\left({\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\right)\right\}\mathbf{w}\\ {}\kern0.5em ={\mathbf{w}}^{\mathrm{H}}\mathbf{D}\left(\theta \right)\mathbf{w}\end{array}} $$
(28)

where

$$ \mathbf{D}\left(\theta \right)=\sum \limits_{k_1=1}^{K_1}\left\{\ {\left(\operatorname{diag}\left({\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\right)\ \right)}^{\mathrm{H}}{\mathbf{U}}^{\hbox{'}}{\left({\mathbf{U}}^{\hbox{'}}\right)}^{\mathrm{H}}\operatorname{diag}\left({\mathbf{a}}_{FS}\left({\theta}_{k_1}\right)\right)\ \right\} $$
(29)

Searching for the peaks of (26) amounts to minimizing (28). Since w ≠ 0, wHD(θ)w can be zero only if the determinant of D(θ) is 0, which happens when θ coincides with an actual signal direction; therefore, \( {\theta}_1,\cdots, {\theta}_{K_1} \) can be estimated by plotting the modified spatial spectrum as a function of the DOA of the FS

$$ \kern0.75em {P}_{MMU-F}\left(\theta \right)=\frac{1}{\left|\mathbf{D}\left(\theta \right)\right|} $$
(30)

where |D(θ)| stands for the determinant of D(θ). Since the addressed approach targets the FS in mixed signals, it is called FM for short; as the deduction above shows, the step of estimating the array errors has been avoided. According to the derivation, the numbers of signals and sensors must satisfy K < 2M + 1, but there is no restriction on the specific numbers of far-field and near-field signals. The FM procedure is summarized in Fig. 2, and a sketch of its final peak-search step is given after the figure:

Fig. 2. Steps of the proposed FM
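The final step of Fig. 2 is a one-dimensional search for the K1 highest peaks of the modified spectrum of Eq. (30) over the scan grid defined in Section 3. The sketch below is a generic peak picker for any sampled spectrum such as P_MMU-F(θ); the grid bounds and K1 are inputs assumed known (assumption 4 in Section 2).

```python
import numpy as np

def find_doa_peaks(theta_grid, spectrum, K1):
    """Return the K1 angles at the highest local maxima of a sampled 1-D spectrum.

    theta_grid: scan angles, e.g. np.arange(alpha, beta, delta_theta).
    spectrum: the modified spectrum P_MMU-F evaluated on theta_grid.
    """
    theta = np.asarray(theta_grid)
    s = np.asarray(spectrum)
    # interior samples strictly larger than both neighbours are local maxima
    peaks = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    # keep the K1 largest peaks and return the corresponding DOAs in ascending order
    best = peaks[np.argsort(s[peaks])[::-1][:K1]]
    return np.sort(theta[best])
```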

3 Computation

Assume the DOA region is limited to \( 0<\alpha <\theta <\beta <\frac{\uppi}{2} \), and the scanning step size is Δθ. The proposed FM approach involves computing a (2M + 1) × (2M + 1) covariance matrix, determining its eigenvectors, evaluating a one-dimensional spatial spectrum, and locating the local maxima for the FS. Counting only the primary procedures for simplicity, the computation is about \( {\left(2M+1\right)}^2B+\frac{8}{3}{\left(2M+1\right)}^3+\frac{2\left(\beta -\alpha \right){\left(2M+1\right)}^2}{\Delta_{\theta }} \). In comparison, the mixed near-field and far-field source localization based on uniform linear array partition (MULAP) [30] needs to form three 2M × 2M fourth-order cumulant matrices, decompose a 4M × 4M matrix, and then use ESPRIT to estimate the DOA by decomposing two 2(M − 1) × 2(M − 1) matrices, so its computation is nearly \( 3{(2M)}^2B+\frac{4}{3}{(4M)}^3+\frac{8}{3}{\left(2M-1\right)}^3 \). A rough numeric evaluation of both estimates is sketched below.
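For a rough sense of scale, the snippet below evaluates both operation-count expressions with the parameter values used later in Section 4; these inputs are illustrative and the counts are only order-of-magnitude figures.

```python
# Evaluate the two estimates above with the Section 4 simulation values as assumed
# inputs: M = 5 (eleven sensors), B = 25 snapshots, alpha = 0, beta = 90, step 0.1 (deg).
M, B = 5, 25
alpha, beta, d_theta = 0.0, 90.0, 0.1

n = 2 * M + 1
fm = n ** 2 * B + (8 / 3) * n ** 3 + 2 * (beta - alpha) * n ** 2 / d_theta
mulap = 3 * (2 * M) ** 2 * B + (4 / 3) * (4 * M) ** 3 + (8 / 3) * (2 * M - 1) ** 3

print(f"FM    : about {fm:,.0f} operations")
print(f"MULAP : about {mulap:,.0f} operations")
```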

4 Results and discussion

In this section, simulation results are presented for the proposed approach. First, consider four uncorrelated FS and three NS impinging on an eleven-element array from (13°, 35°, 50°, 68°) and (25°, 60°, 85°), respectively. Their frequencies are 3 GHz, the array signal model is that of Fig. 1, and the sixth sensor is taken as the reference. In view of the complexity of the array imperfections, the error model is simplified by setting Q = 2 with c1 = a1 + b1j and c2 = a2 + b2j, where a1 and b1 are drawn uniformly from (−0.5, 0.5) and a2 and b2 from (−0.25, 0.25). The gain and phase errors are chosen randomly in [0, 1.6] and [−24, 24], respectively, α = 0, β = 90, Δθ = 0.1, and 500 independent trials are run for each scenario. The estimation error is defined as

$$ \varepsilon =\sum \limits_{i=1}^{K_1}\left|{\theta}_i-{\widehat{\theta}}_i\right| $$
(31)

where θi is the true DOA of the ith FS, and \( {\widehat{\theta}}_i \) is the corresponding estimate. Sparse Bayesian array calibration (SBAC) [25], MULAP, and FM are compared in the simulations; a sketch of how the random array errors are drawn and how ε is computed in each trial follows.
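As an illustration of one Monte Carlo trial, the sketch below draws the random array errors with the ranges stated above and computes the error metric of Eq. (31); interpreting the phase range as degrees and pairing estimates with true DOAs by sorting are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng()

def draw_array_errors(M):
    """Draw mutual coupling and gain-phase errors with the ranges given above (Q = 2)."""
    c1 = rng.uniform(-0.5, 0.5) + 1j * rng.uniform(-0.5, 0.5)
    c2 = rng.uniform(-0.25, 0.25) + 1j * rng.uniform(-0.25, 0.25)
    rho = rng.uniform(0.0, 1.6, 2 * M + 1)                    # gain errors
    phi = np.deg2rad(rng.uniform(-24.0, 24.0, 2 * M + 1))     # phase errors (degrees assumed)
    rho[M], phi[M] = 1.0, 0.0                                 # reference channel, as in Eq. (14)
    return [c1, c2], rho, phi

def estimation_error(theta_true, theta_hat):
    """Eq. (31): sum of absolute DOA errors over the K1 far-field sources."""
    return np.sum(np.abs(np.sort(theta_true) - np.sort(theta_hat)))
```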

First, Fig. 3 shows the modified spatial spectrum of the uncorrelated FS; it can be observed that the four peaks correspond to the actual DOAs. Figure 4 illustrates the estimation accuracy versus the signal-to-noise ratio (SNR) when the number of snapshots B is 25, and Fig. 5 shows the accuracy versus the number of snapshots B when the SNR is 8 dB. As seen in Figs. 4 and 5, all three algorithms fail to produce reliable estimates at low SNR; they perform better as the SNR or the number of snapshots increases and finally converge to certain values. As MULAP is not designed for super-resolution direction finding in the presence of array imperfections, a large error remains even when the SNR is high or the number of snapshots is large. SBAC needs array calibration before estimating the DOA, but the procedure of estimating the mutual coupling and gain-phase uncertainty also introduces some error. Comparatively speaking, FM avoids the array correction process before determining the FS, so it outperforms SBAC and MULAP in most cases; however, when the SNR is lower than −6 dB, the signal subspace is not completely orthogonal to the noise subspace, and its performance is poorer than that of SBAC.

Fig. 3. Spatial spectrum

Fig. 4. Estimation errors versus SNR

Fig. 5. Estimation errors versus number of snapshots

In the second experiment, we discuss the performance of far-field DOA estimation when the FS and NS are close to each other. Consider four FS and three NS impinging on an eleven-element array from (3°, 12°, 20°, 28°) and (8°, 17°, 33°), respectively; the other conditions are the same as in the trial above.

Figure 6 shows the modified spatial spectrum when the FS and NS are close to each other; it can be seen that the DOAs of the FS are still resolved successfully by the proposed FM. Figure 7 gives the estimation accuracy versus SNR when the number of snapshots B is 25, and Fig. 8 shows the accuracy versus the number of snapshots B when the SNR is 8 dB. The three algorithms perform almost the same as in the earlier scenario even though the FS and NS are close to each other, and their performance can be further improved by increasing the SNR or the number of snapshots.

Fig. 6. Spatial spectrum when FS and NS are close to each other

Fig. 7. Estimation errors versus SNR when FS and NS are close to each other

Fig. 8. Estimation errors versus number of snapshots when FS and NS are close to each other

5 Conclusions

This paper addresses the DOA estimation problem of the FS in mixed FS and NS received by an array with mutual coupling and gain-phase errors. The approach avoids array calibration by transforming the spectrum function according to the structure of the array, which greatly lessens the computational load. In future work, we will concentrate on estimating the array imperfection parameters and on locating the NS.