1 Introduction

Distributed fusion [1] refers to combining the information of decentralized sensors [2, 3], in which each sensor processes its observations independently. Compared with centralized fusion, distributed fusion significantly saves time and storage resources in the fusion center. The conventional solution to distributed fusion is the Bar-Shalom-Campo algorithm [4]. Later, Chang et al. [5] proved that this algorithm is not optimal in terms of the root-mean-square error (RMSE), and they further presented an optimal fusion algorithm. Nevertheless, their algorithm can only be used in the two-sensor fusion scenario. Hence, Li et al. [6] proposed a unified fusion architecture, which can be used for two or more sensors. However, all aforementioned algorithms are designed for linear state space models (SSM), which results in inferior fusion performance in nonlinear tracking.

Recently, some intelligent techniques, such as the adaptive fuzzy backstepping control technique [7] and the adaptive neural network technique [8] with nonlinear model predictive control, have been proposed to advance the application of nonlinear models in multi-sensor and sensor network control [9]. Hence, optimal distributed fusion with nonlinear models has become one of the most important directions in signal processing. To the best of our knowledge, the most effective fusion algorithms are based on covariance intersection (CI) [10, 11]. These CI-based algorithms can be easily combined with nonlinear filters to form state-of-the-art solutions to nonlinear problems, such as UKF-SCI [12] and DPF-ICI [13]. The former is more accurate because the unscented Kalman filter (UKF) performs better in typical nonlinear tracking, while the latter has an advantage in non-Gaussian scenarios, benefiting from the particle filter (PF). However, these CI-based algorithms require additional fractional powers to calculate the fusion results [14], which increases the estimation error. Although many applications utilize a feedback structure1 to improve the fusion accuracy [15], this paper proves that the sub-optimality caused by this increased estimation error still exists in CI. To the best of our knowledge, few algorithms have been proposed to overcome this sub-optimality. Hence, the present study aims to achieve a fusion algorithm that outperforms the CI algorithm. In summary, the novelty of this paper is that our algorithm overcomes the sub-optimality problem in CI fusion, which has not been addressed in previous work.

In this paper, a Monte Carlo Bayesian (MCB) algorithm is proposed to overcome the sub-optimality of the CI algorithm, such that the fusion performance can be improved. Specifically, a distributed fusion architecture is designed based on the Bayesian tracking framework (BTF). This architecture utilizes the law of total probability to distribute the computation, thus avoiding the increase in estimation error. Furthermore, Monte Carlo sampling [16] is incorporated into our distributed architecture. This sampling method offers a direct approximate inference of the expectation of a target function with respect to a probability distribution [17]. Benefiting from this sampling method, the intractable nonlinear estimation problem in the BTF is circumvented by means of an approximation with sampled particles. Finally, based on this approximation, the fusion results are obtained in the form of the mean and variance of the estimate at each step. The meanings of the notation used in this paper are listed in Table 1. The contributions of this work are as follows:

  • A novel distributed fusion architecture is proposed, which makes full use of the information of different sensors to produce robust fusion estimation.

  • A MCB algorithm is developed by means of Monte Carlo sampling, which solves the nonlinear fusion problem based on numerical approximation.

Table 1 Notation list

2 Distributed fusion architecture based on BTF

In this section, our distributed fusion architecture is developed based on the BTF. Consider an I-sensor fusion scenario with feedback structure. The state and the ith observation sequences are denoted as \(\left \{\boldsymbol {x}_{k};k\in \mathbb {N},\boldsymbol {x}_{k}\in \mathbb {R}^{d_{x}}\right \}\) and \(\left \{\boldsymbol {z}^{i}_{k};k\in \mathbb {N},\boldsymbol {z}^{i}_{k}\in \mathbb {R}^{d_{z^{i}}}\right \}\), where k is the time step, i∈{1,2,…,I} is the sensor index, and \(d_{x}\) and \(d_{z^{i}}\) are the dimensions of the state and the ith observation vectors, respectively. Their relationship can be represented by the SSM as follows:

$$ \mathbf{Transition~equation:}~~p(\boldsymbol{x}_{k}|\boldsymbol{x}_{k-1}),~~k \geq 0, $$
(1)
$$ \mathbf{Observation~equation:}~~q(\boldsymbol{z}^{i}_{k} |\boldsymbol{x}_{k}),~~k\geq0, $$
(2)

where p(·) is the transition distribution, and q(·) is the likelihood of the ith sensor's observation given \(\boldsymbol{x}_{k}\). According to the BTF, the fusion process obtains the posterior distribution \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k}\right)\) based on the SSM, where \(\boldsymbol {z}^{f}_{1:k} = \left \{\boldsymbol {z}^{i}_{1:k}\right \}^{I}_{i=1}\) is the integrated observation set combining all sensor information from step 1 to k. Notice that, in the feedback structure, we normally have \(\boldsymbol {z}^{i}_{1:k}=\left \{\boldsymbol {z}^{i}_{k},\boldsymbol {z}^{f}_{1:k-1}\right \}\) because the prior observation information has been shared among sensors. Then, the fusion is calculated iteratively in two steps: the prediction and update steps.

The prediction step:

$$ p(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}) = \int p(\boldsymbol{x}_{k}|\boldsymbol{x}_{k-1})p\left(\boldsymbol{x}_{k-1}|\boldsymbol{z}^{f}_{1:k-1}\right)d\boldsymbol{x}_{k-1}. $$
(3)

The update step:

$$\begin{array}{@{}rcl@{}} p(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}) = \frac{q\left(\boldsymbol{z}^{f}_{k}|\boldsymbol{x}_{k}\right)p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)}{p\left(\boldsymbol{z}^{f}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)}, \end{array} $$
(4)

where

$$\begin{array}{@{}rcl@{}} p\left(\boldsymbol{z}^{f}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right) = \int q\left(\boldsymbol{z}^{f}_{k}|\boldsymbol{x}_{k}\right)p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)d\boldsymbol{x}_{k} \end{array} $$
(5)

is the marginal likelihood.
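As a concrete illustration, the prediction-update recursion (3)-(5) can be run numerically for a scalar state by discretizing the state space. The model below, a linear-Gaussian toy SSM with illustrative parameters, is an assumption for this sketch, not the paper's model:

```python
import numpy as np

# Toy scalar linear-Gaussian SSM (illustrative): x_k = 0.9 x_{k-1} + w,  z_k = x_k + v.
grid = np.linspace(-10.0, 10.0, 1001)   # discretized state space
dx = grid[1] - grid[0]

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def predict(posterior, q_var=0.5):
    # Prediction step (3): integrate the transition kernel against the old posterior.
    kernel = gauss(grid[:, None], 0.9 * grid[None, :], q_var)
    return kernel @ posterior * dx

def update(prior, z, r_var=1.0):
    # Update step (4)-(5): multiply by the likelihood and renormalize by the
    # marginal likelihood (5).
    unnorm = gauss(z, grid, r_var) * prior
    return unnorm / (unnorm.sum() * dx)

posterior = gauss(grid, 0.0, 1.0)       # initial belief p(x_0)
for z in (1.2, 0.8, 1.1):               # synthetic observations
    posterior = update(predict(posterior), z)

mean = (grid * posterior).sum() * dx    # posterior mean after three steps
```

The grid-based integration makes the abstract recursion tangible; the MCB algorithm of Section 4 replaces this exhaustive discretization with Monte Carlo particles.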

In normal cases, the current observations of the sensors are conditionally independent of one another given the state. Thus, the posterior distribution of (4) can be rewritten according to the law of total probability as follows,

$$\begin{array}{@{}rcl@{}} p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right)&\propto& p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right). \end{array} $$
(6)

Then, our distributed architecture is obtained by utilizing (6) to decompose the computation of (4) across the sensors, as summarized in Fig. 1.

Fig. 1
figure 1

The distributed fusion architecture. The figure describes the fusion architecture in which sensors and fusion center are linked with information flow

Figure 1 shows that, at time step k, the fusion center first makes a prediction with (3). The prediction is then sent to each distributed sensor, where the likelihood is calculated. Afterwards, each sensor sends its likelihood back to the fusion center, which yields the final result with (6). Finally, the fusion result is fed back as the prior information for the next step.
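For intuition, one fusion cycle of this architecture can be sketched for a scalar state with linear Gaussian sensors, where the product of likelihoods in (6) has a closed form (precisions and precision-weighted means add). The observation values and variances below are illustrative assumptions:

```python
# Hedged sketch of one fusion cycle of Fig. 1 for a scalar state with linear
# Gaussian sensors (illustrative numbers; the paper's general case is nonlinear).
prior_mean, prior_var = 0.0, 4.0                 # prediction from (3)
sensors = [(1.1, 1.0), (0.9, 2.0), (1.3, 0.5)]   # (observation z_i, noise var sigma_i^2)

# Each sensor contributes its likelihood q(z_i | x); for Gaussian factors the
# product in (6) reduces to summing precisions and precision-weighted means.
precision = 1.0 / prior_var
weighted = prior_mean / prior_var
for z, var in sensors:
    precision += 1.0 / var
    weighted += z / var

fused_var = 1.0 / precision
fused_mean = weighted * fused_var
```

Note that the fused variance is smaller than every individual sensor variance: fusing full likelihoods, without fractional powers, only tightens the estimate.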

3 Sub-optimality in CI algorithm with feedback structure

This section analyzes the sub-optimality of the existing CI algorithm with feedback structure by comparing it with the distributed fusion of Section 2. This paper concentrates on the most common scenarios, in which the distributions in the SSM are Gaussian [18]. Without loss of generality, fusion with I sensors is considered, where I≥2. According to [19], for any ω∈(0,1), the posterior distribution of two-sensor fusion in the CI algorithm satisfies:

$$ p\left(\boldsymbol{x}_{k}|\left\{\boldsymbol{z}^{1}_{1:k},\boldsymbol{z}^{2}_{1:k}\right\}\right) \propto p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{1}_{1:k}\right)^{\omega}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{2}_{1:k}\right)^{1 - \omega}. $$
(7)

Lemma 1.

Let \(\boldsymbol {x}_{k}\in \mathbb {R}^{d_{x}}\), \(\boldsymbol {z}^{i}_{k}\in \mathbb {R}^{d_{z^{i}}}\), where \(k\in \mathbb {N}\); \(i\in\{1,2,\ldots,I\},I\in \mathbb {N},I\geq 2\); \(d_{x}\) and \(d_{z^{i}}\) are the dimensions of the vectors \(\boldsymbol{x}_{k}\) and \(\boldsymbol{z}^{i}_{k}\). Let \(\boldsymbol {z}^{f}_{1:k} = \left \{\boldsymbol {z}^{i}_{1:k}\right \}^{I}_{i=1}\), where \(\boldsymbol {z}^{i}_{1:k} = \left \{\boldsymbol {z}^{i}_{k},\boldsymbol {z}^{f}_{1:k-1}\right \}\). Given (7), the posterior distribution \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k}\right)\) can be derived as follows,

$$ p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right) \propto p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)\prod_{i=1}^{I} q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right)^{\omega_{i}}, $$
(8)

where \(\omega_{i}\in(0,1)\) and \(\sum _{i=1}^{I}\omega_{i}=1\).

Proof.

We decompose \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k}\right)\) into a product form according to (7) as follows

$$\begin{array}{@{}rcl@{}} &&p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right)= p\left(\boldsymbol{x}_{k}|\{\boldsymbol{z}^{i}_{1:k}\}^{I}_{i=1}\right)\\ &\propto&\! \left(\prod_{i=1}^{I}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{i}_{1:k}\right)^{\omega'_{i}\prod_{j=0}^{i-1}(1 - \omega'_{j})}\!\right)p\!\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{I}_{1:k}\right)^{\prod_{l=1}^{I}(1 - \omega'_{l})}, \end{array} $$
(9)

where \(\omega'_{0}\equiv 0\), and \(\omega^{\prime }_{i}\in (0,1)\) for \(i=1,2,\ldots,I\). Then, (9) can be rewritten as

$$ p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right) \propto \prod_{i=1}^{I}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{i}_{1:k}\right)^{\omega_{i}}, $$
(10)

where \(\omega _{i} \triangleq \omega '_{i}\prod _{j=0}^{i-1}(1 - \omega '_{j})\) for \(i=1,2,\ldots,I-1\), and, since the I-th factor appears twice in (9), \(\omega _{I} \triangleq \omega'_{I}\prod_{j=0}^{I-1}(1 - \omega'_{j}) + \prod _{l=1}^{I}(1 - \omega '_{l}) = \prod _{l=1}^{I-1}(1 - \omega '_{l})\). It follows that \(\omega_{i}\in(0,1)\) and \(\sum _{i=1}^{I}\omega_{i}=1\). Based on (10), we finally have

$$\begin{array}{@{}rcl@{}} p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right) &\propto& \prod_{i=1}^{I}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{i}_{k},\boldsymbol{z}^{f}_{1:k-1}\right)^{\omega_{i}} \\ &\propto& p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right)^{\omega_{i}}. \end{array} $$
(11)

This completes the proof of Lemma 1.

In the feedback structure, Lemma 1 provides a generalized form of the posterior distribution \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k}\right)\) calculated in CI fusion. Comparing (8) with (6), the CI algorithm raises each likelihood to a fractional power ω i .
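The weight construction in the proof of Lemma 1 can be checked numerically: taking arbitrary ω′ i and forming the exponents exactly as written in (9), with the I-th sensor carrying both its own exponent and the trailing product, yields weights that lie in (0,1) and sum to 1. A small sketch, assuming nothing beyond (9):

```python
import numpy as np

rng = np.random.default_rng(0)
I = 5
w_prime = rng.uniform(0.05, 0.95, size=I)   # arbitrary omega'_i in (0,1)

# Exponents exactly as in (9): factor i carries omega'_i * prod_{j<i}(1 - omega'_j)
# (with omega'_0 = 0), and the I-th sensor additionally carries the trailing
# product prod_{l=1}^{I}(1 - omega'_l).
omega = np.array([w_prime[i] * np.prod(1.0 - w_prime[:i]) for i in range(I)])
omega[-1] += np.prod(1.0 - w_prime)
```

The telescoping structure guarantees the convex combination regardless of how the ω′ i are chosen.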

Lemma 2.

Assume that each likelihood is Gaussian, i.e., \(q\left (\boldsymbol {z}^{i}_{k}|\boldsymbol {x}_{k}\right) = \mathcal {N}\left (\boldsymbol {z}^{i}_{k}; h^{i}(\boldsymbol {x}_{k}),{\sigma ^{2}_{i}}\right)\) for each \(i\in\{1,2,\ldots,I\}\), where \(h^{i}(\cdot)\) is the mapping from \(\mathbb {R}^{d_{x}}\) to \(\mathbb {R}^{d_{z^{i}}}\), and \({\sigma ^{2}_{i}}\) is the corresponding variance. Then, the variance of the likelihood of each sensor becomes \({\sigma ^{2}_{i}}/\omega _{i}\) in the CI algorithm.

Proof.

According to Lemma 1, the posterior distribution of CI is calculated with (8) as

$$\begin{array}{@{}rcl@{}} &&p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right)\\ &\propto& p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)\prod_{i=1}^{I}\exp\left(-\frac{(\boldsymbol{z}^{i}_{k}-h^{i}(\boldsymbol{x}_{k}))^{2}}{2{\sigma^{2}_{i}}/\omega_{i}}\right). \end{array} $$
(12)

Hence, the variance of each likelihood in CI fusion becomes \({\sigma ^{2}_{i}}/\omega _{i}\) in the process of calculating the posterior distribution. This completes the proof of Lemma 2.

Since

$$\begin{array}{@{}rcl@{}} \omega_{i}\in(0,1) \Rightarrow {\sigma^{2}_{i}}/\omega_{i} > {\sigma^{2}_{i}}, \end{array} $$

ω i increases the variance of each likelihood in CI fusion. Hence, compared with (6) in the distributed fusion of Section 2, the estimate (8) calculated by CI contains more uncertainty under the same observations. This uncertainty increases the estimation error. As a result, the sub-optimality still exists in CI fusion with feedback structure.
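This inequality can be made concrete with a scalar Gaussian example: inflating each likelihood variance to σ 2 i /ω i strictly enlarges the fused posterior variance relative to the Bayesian product (6). The numbers below are illustrative:

```python
# Scalar Gaussian illustration of the variance inflation in (12): CI replaces each
# likelihood variance sigma_i^2 by sigma_i^2 / omega_i (illustrative numbers).
prior_var = 2.0
sigma2 = [1.0, 0.5]            # two sensors
omega = [0.5, 0.5]             # CI weights, sum to 1

# Fused posterior variances via precision addition (linear-Gaussian case).
post_var_bayes = 1.0 / (1.0 / prior_var + sum(1.0 / s for s in sigma2))
post_var_ci = 1.0 / (1.0 / prior_var + sum(w / s for w, s in zip(omega, sigma2)))
```

Here `post_var_bayes` ≈ 0.286 while `post_var_ci` = 0.5: the CI posterior is strictly more uncertain under the same observations.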

4 Monte Carlo Bayesian algorithm

To avoid this increase in estimation error, this paper develops the fusion algorithm upon the distributed fusion architecture of Section 2. In most common applications, the observation is nonlinear [20]. Therefore, the posterior distribution in (6) cannot simply be calculated by the joint Gaussian method with completing the square [21]. To solve this problem, our MCB algorithm is proposed. This algorithm incorporates Monte Carlo sampling into the distributed fusion architecture, by which the posterior distribution can be directly approximated with particles.

4.1 Monte Carlo sampling in MCB algorithm

At time k, N independent random particles are drawn from the prediction distribution \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k-1}\right)\) in (3):

$$ \left\{\boldsymbol{X}^{n}_{k} \sim p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right)\right\}^{N}_{n=1}, $$
(13)

where \(\boldsymbol {X}^{n}_{k}\) is the n-th particle drawn from \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k-1}\right)\). According to the approximation inference method of numerical sampling [17], the prediction distribution \(p\left (\boldsymbol {x}_{k}|\boldsymbol {z}^{f}_{1:k-1}\right)\) can be approximated as follows,

$$ p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k-1}\right) \approx \frac{1}{N}\sum\limits_{n=1}^{N}\delta_{\boldsymbol{X}^{n}_{k}}(\boldsymbol{x}_{k}). $$
(14)

In (14), \(\delta _{\boldsymbol {X}^{n}_{k}}(\cdot)\) denotes the Dirac delta function, a unit point mass located at \(\boldsymbol{X}^{n}_{k}\):

$$\begin{array}{@{}rcl@{}} \delta_{\boldsymbol{X}^{n}_{k}}(\boldsymbol{x}_{k}) = 0~\text{for}~\boldsymbol{x}_{k} \neq \boldsymbol{X}^{n}_{k}, \quad \int\delta_{\boldsymbol{X}^{n}_{k}}(\boldsymbol{x}_{k})\,d\boldsymbol{x}_{k} = 1. \end{array} $$
(15)

According to (14), the posterior distribution in (6) can be approximated as,

$$\begin{array}{@{}rcl@{}} p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right) &\approx& \left[\frac{C}{N}\sum_{n=1}^{N}\delta_{\boldsymbol{X}^{n}_{k}}(\boldsymbol{x}_{k})\right]\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right)\\ &=& \frac{C}{N}\sum_{n=1}^{N}\left[\prod_{i=1}^{I}q(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k})\right]\delta_{\boldsymbol{X}^{n}_{k}}(\boldsymbol{x}_{k}), \end{array} $$
(16)

where C is the normalization constant.
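The approximation (13)-(16) can be verified on a case with a known answer: for a linear-Gaussian toy model (an assumption for testing only), the particle estimate of the posterior mean converges to the closed-form value. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
prior_mean, prior_var = 0.0, 4.0
sensors = [(1.1, 1.0), (0.9, 2.0)]               # (z_i, sigma_i^2), linear h(x) = x

# (13): draw particles from the prediction distribution.
particles = rng.normal(prior_mean, np.sqrt(prior_var), size=N)

# (16): weight each particle by the product of the sensor likelihoods.
log_w = np.zeros(N)
for z, var in sensors:
    log_w += -0.5 * (z - particles) ** 2 / var   # log of each Gaussian likelihood
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()                          # absorbs the constant C/N

mc_mean = np.sum(weights * particles)

# Exact posterior mean for this linear-Gaussian case, for comparison.
prec = 1 / prior_var + sum(1 / v for _, v in sensors)
exact_mean = (prior_mean / prior_var + sum(z / v for z, v in sensors)) / prec
```

Working with log-likelihoods before normalizing, as above, avoids underflow when the product over sensors becomes small.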

4.2 Fusion with MCB algorithm

According to the distributed fusion architecture in Section 2, the estimate of the state x k is calculated iteratively with (3) and (6).

First, in the fusion center at time step k, given the previous state estimate with mean \(\boldsymbol {\bar {x}}_{k-1}\) and variance P k−1, the unscented sigma point set χ k−1 is calculated as follows,

$$ \boldsymbol{\chi}_{k-1} = \left[\boldsymbol{\bar{x}}_{k-1}, \boldsymbol{\bar{x}}_{k-1}\pm\sqrt{(I_{x}+\lambda)\boldsymbol{P}_{k-1}}\right]. $$
(17)

In (17), \(\chi_{i,k-1}\in\boldsymbol{\chi}_{k-1}\) denotes the ith point in the sigma point set, \(I_{x}=d_{x}\) is the state dimension, and λ is the scaling parameter [22]. Then, the prediction (3) is calculated in the form of the mean \(\boldsymbol {\bar {x}}_{k|k-1}\) and variance P k|k−1 [22] as

$$ \boldsymbol{\bar{x}}_{k|k-1} = \sum\limits_{i=0}^{2I_{x}}{W^{m}_{i}}f(\chi_{i,k-1}), $$
(18)
$${} \boldsymbol{P}_{k|k-1} = \boldsymbol{Q} + \sum\limits_{i=0}^{2I_{x}}{W^{c}_{i}}\left[f(\chi_{i,k-1}) - \boldsymbol{\bar{x}}_{k|k-1}\right]\left[f(\chi_{i,k-1}) - \boldsymbol{\bar{x}}_{k|k-1}\right]^{\mathrm{T}}, $$
(19)

where f(·) is the state transition function, Q is the covariance of the process noise, and \({W^{m}_{i}}\in \boldsymbol {W}^{m}\) and \({W^{c}_{i}}\in \boldsymbol {W}^{c}\) are unscented transformation weights calculated before estimation [22]. Then, N particles are drawn from the Gaussian approximation of the prediction:

$$ \{\boldsymbol{X}^{n}_{k} \sim \mathcal{N}(\boldsymbol{x}_{k};~\boldsymbol{\bar{x}}_{k|k-1},~\boldsymbol{P}_{k|k-1})\}^{N}_{n=1}. $$
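The prediction step above, i.e., the sigma-point construction (17), the propagated moments (18)-(19), and the final particle draw, can be sketched as follows. The helper `ut_predict` and its parameter defaults (α, β, κ) are our illustrative choices, not values prescribed by the paper:

```python
import numpy as np

def ut_predict(x_mean, P, f, Q, alpha=1.0, beta=2.0, kappa=1.0):
    # Sketch of (17)-(19); f is the transition function, Q the process-noise
    # covariance. The alpha/beta/kappa defaults are illustrative choices.
    n = x_mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                    # matrix square root in (17)
    sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])  # the 2n+1 sigma points

    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))         # mean weights W^m
    wc = wm.copy()                                           # covariance weights W^c
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    prop = np.array([f(s) for s in sigma])                   # propagate through f, (18)
    x_pred = wm @ prop
    diff = prop - x_pred
    P_pred = Q + (wc[:, None] * diff).T @ diff               # (19)
    return x_pred, P_pred

# Linear sanity check: for f(x) = F x the unscented transform is exact.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
x0, P0, Q = np.array([1.0, 2.0]), np.eye(2), 0.01 * np.eye(2)
x_pred, P_pred = ut_predict(x0, P0, lambda s: F @ s, Q)

# Draw N particles from the Gaussian approximation of the prediction.
particles = np.random.default_rng(0).multivariate_normal(x_pred, P_pred, size=200)
```

For a linear transition the sketch recovers the Kalman prediction F P Fᵀ + Q exactly, which is a convenient correctness check before using a nonlinear f.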

Second, the particles are sent to each sensor node. After obtaining the observation \(\boldsymbol {z}^{i}_{k}\), the ith sensor approximates its likelihood by the set \(\left \{q\left (\boldsymbol {z}^{i}_{k}|\boldsymbol {X}^{n}_{k}\right)\right \}^{N}_{n=1}\). Then, each sensor sends its approximated likelihood back to the fusion center.

Finally, the fusion center receives all the approximated likelihoods, and calculates the fusion results in the form of the mean and variance with (16) as follows,

$$\begin{array}{@{}rcl@{}} &&\boldsymbol{\bar{x}}_{k} = \int \boldsymbol{x}_{k}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right)d\boldsymbol{x}_{k}\\ &\approx& \int \boldsymbol{x}_{k}\left\{\frac{C}{N}\sum_{n=1}^{N}\left[\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right)\right]\delta_{\boldsymbol{X}^{n}_{k}}\left(\boldsymbol{x}_{k}\right)\right\}d\boldsymbol{x}_{k}\\ &=& \frac{C}{N}\sum_{n=1}^{N} \left[\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{X}^{n}_{k}\right)\right]\boldsymbol{X}^{n}_{k}, \end{array} $$
(20)
$$\begin{array}{@{}rcl@{}} &&\boldsymbol{P}_{k} = \int \left(\boldsymbol{x}_{k}-\boldsymbol{\bar{x}}_{k}\right)\left(\boldsymbol{x}_{k}-\boldsymbol{\bar{x}}_{k}\right)^{\mathrm{T}}p\left(\boldsymbol{x}_{k}|\boldsymbol{z}^{f}_{1:k}\right)d\boldsymbol{x}_{k}\\ &\approx& \int \left(\boldsymbol{x}_{k}-\boldsymbol{\bar{x}}_{k}\right)\left(\boldsymbol{x}_{k}-\boldsymbol{\bar{x}}_{k}\right)^{\mathrm{T}} \left\{\frac{C}{N}\sum_{n=1}^{N}\left[\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{x}_{k}\right)\right]\delta_{\boldsymbol{X}^{n}_{k}}\left(\boldsymbol{x}_{k}\right)\right\} d\boldsymbol{x}_{k}\\ &=& \frac{C}{N}\sum_{n=1}^{N} \left[\prod_{i=1}^{I}q\left(\boldsymbol{z}^{i}_{k}|\boldsymbol{X}^{n}_{k}\right)\right]\left[\boldsymbol{X}^{n}_{k} - \boldsymbol{\bar{x}}_{k}\right]\left[\boldsymbol{X}^{n}_{k} - \boldsymbol{\bar{x}}_{k}\right]^{\mathrm{T}}. \end{array} $$
(21)
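Given the particles and the likelihood arrays returned by the sensors, the fused moments (20)-(21) reduce to a weighted mean and weighted covariance over the particles. A minimal sketch with synthetic likelihood values (the dimensions and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 500, 2
particles = rng.normal(0.0, 1.0, size=(N, d))        # X_k^n from the prediction

# Per-particle likelihood values q(z_k^i | X_k^n), as returned by I = 3 sensors;
# synthetic values here just to exercise (20)-(21).
liks = rng.uniform(0.1, 1.0, size=(3, N))

w = liks.prod(axis=0)                                 # product over sensors in (16)
w /= w.sum()                                          # absorbs the constant C/N

x_bar = w @ particles                                 # fused mean, (20)
diff = particles - x_bar
P = (w[:, None] * diff).T @ diff                      # fused covariance, (21)
```

Because the weights are non-negative, the resulting P is symmetric positive semi-definite by construction.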

5 Simulation results

In this section, simulation results are provided to validate the performance of our MCB algorithm. A classic two-dimensional (2-D) fusion scenario with the nonlinear observations of one active and two passive radars is considered. In this scenario, the fusion process is simulated by tracking a single target under several cases. For each case, a different noise level and kinematic model of the transition equation are applied. Finally, 100 Monte Carlo simulations are run for each case, and the fusion performance of nonlinear tracking is compared between our MCB algorithm and the UKF-SCI algorithm [12] with feedback structure. Note that this paper concentrates on Gaussian tracking scenarios, in which UKF-SCI outperforms DPF-ICI [13].

5.1 Simulation setup

In a common radar system, the transition equation (1) takes the form

$$ \boldsymbol{x}_{k} = \boldsymbol{F} \boldsymbol{x}_{k-1} + \boldsymbol{n}_{k-1}, $$
(22)

where \(\boldsymbol{x}_{k} =[d_{x,k}, d_{y,k}, v_{x,k}, v_{y,k}]^{\mathrm{T}}\) is the column vector of the 2-D position and velocity of a single target in the xy plane at time step k. F is the state transition matrix with sampling interval \(s_{T}\). In this paper, three state transition matrices are taken into account for three kinematic models: constant-velocity (CV) and constant-turn (CT) with known turn rates ω=2.5°/s and ω=5°/s [23]:

$$\begin{array}{@{}rcl@{}} \boldsymbol{F}= \left[ \begin{array}{cccc} 1&0&s_{T}&0\\ 0&1&0&s_{T}\\ 0&0&1&0\\ 0&0&0&1\\ \end{array} \right] \end{array} $$
(23)

for CV model and

$$\begin{array}{@{}rcl@{}} \boldsymbol{F}= \left[ \begin{array}{cccc} 1&0&\frac{\sin(\omega s_{T})}{\omega}&\frac{\cos(\omega s_{T})-1}{\omega}\\ 0&1&\frac{1-\cos(\omega s_{T})}{\omega}&\frac{\sin(\omega s_{T})}{\omega}\\ 0&0&\cos(\omega s_{T})&-\sin(\omega s_{T})\\ 0&0&\sin(\omega s_{T})&\cos(\omega s_{T})\\ \end{array} \right] \end{array} $$
(24)

for CT model [24]. The trajectories of three kinematic models in our scenario are shown in Fig. 2.
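The matrices (23) and (24) can be used directly to generate noise-free reference trajectories; the initial state below is an illustrative assumption. Note that the CT matrix rotates the velocity, so the target speed stays constant while turning:

```python
import numpy as np

def transition_matrix(s_T, omega_deg=None):
    # F from (23) for CV, or (24) for CT with turn rate omega in deg/s.
    if omega_deg is None:
        return np.array([[1, 0, s_T, 0],
                         [0, 1, 0, s_T],
                         [0, 0, 1, 0],
                         [0, 0, 0, 1]], dtype=float)
    w = np.deg2rad(omega_deg)
    s, c = np.sin(w * s_T), np.cos(w * s_T)
    return np.array([[1, 0, s / w, (c - 1) / w],
                     [0, 1, (1 - c) / w, s / w],
                     [0, 0, c, -s],
                     [0, 0, s, c]], dtype=float)

# Noise-free propagation of x_k = [d_x, d_y, v_x, v_y]^T (illustrative start state).
x0 = np.array([0.0, 0.0, 10.0, 0.0])
F = transition_matrix(0.1, omega_deg=5.0)
traj = [x0]
for _ in range(100):
    traj.append(F @ traj[-1])
traj = np.array(traj)

# The CT target keeps constant speed while turning.
speeds = np.linalg.norm(traj[:, 2:], axis=1)
```

Adding the Gaussian transition noise of (22) to each step turns these reference trajectories into the simulated ground truth.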

Fig. 2
figure 2

The trajectories of the target for the three kinematic models. There are three sub-figures a, b, and c in this figure. a is the trajectory drawn according to the CV model; b is the trajectory drawn according to the CT model with turn rate ω=2.5°/s; c is the trajectory drawn according to the CT model with turn rate ω=5°/s

In addition, \(\boldsymbol{n}_{k} =[n_{d}, n_{d}, n_{v}, n_{v}]^{\mathrm{T}}\) represents the state transition noise in position and velocity. For a fair comparison, this paper uses the same Gaussian noise for the different models, i.e., \(n_{d} \sim \mathcal {N}(n_{d}; 0, {\sigma ^{2}_{d}})\) and \(n_{v} \sim \mathcal {N}(n_{v}; 0, {\sigma ^{2}_{v}})\). Here, \(\sigma_{d}\) and \(\sigma_{v}=2\sigma_{d}/s_{T}\) are the standard deviations of the transition noise in position and velocity, respectively.

In our simulation, the observation equation (2) is instantiated by the active and passive radar observation equations defined as,

$$\begin{array}{@{}rcl@{}} &&\mathbf{Active:} \boldsymbol{z}_{k} = \left[ \begin{array}{l} \arctan\frac{d_{y,k}}{d_{x,k}}\\ \sqrt{d^{2}_{x,k}+d^{2}_{y,k}} \end{array} \right] + \left[ \begin{array}{l} n_{\theta}\\ n_{r} \end{array} \right], \end{array} $$
(25)
$$\begin{array}{@{}rcl@{}} &&\mathbf{Passive:} \boldsymbol{z}_{k} = \arctan\frac{d_{y,k}}{d_{x,k}} + n_{\theta}, \end{array} $$
(26)

where \(n_{\theta } \sim \mathcal {N}(n_{\theta }; 0,\sigma ^{2}_{\theta })\) and \(n_{r} \sim \mathcal {N}(n_{r}; 0,{\sigma ^{2}_{r}})\); \(\sigma_{\theta}\) and \(\sigma_{r}\) are the standard deviations of the azimuth and range observations. Then, we evaluate the fusion performance with three radars, i.e., one active and two passive radars. Following [25], the aforementioned parameters are set as \(s_{T}=0.1\) (s), \(\sigma_{\theta}=0.0001\) (rad), \(\sigma_{r}=0.5\) (m), and \(\sigma_{d}=\{0.25,0.3,0.35,0.4\}\) (m).
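The observation models (25) and (26) can be simulated as below; we use `arctan2` rather than a plain arctangent to resolve the quadrant ambiguity, and the target state is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_theta, sigma_r = 1e-4, 0.5       # rad and m, from the setup above

def active_obs(x):
    # (25): noisy azimuth and range of the target position (d_x, d_y).
    d_x, d_y = x[0], x[1]
    theta = np.arctan2(d_y, d_x)       # arctan2 avoids quadrant ambiguity
    r = np.hypot(d_x, d_y)
    return np.array([theta + rng.normal(0.0, sigma_theta),
                     r + rng.normal(0.0, sigma_r)])

def passive_obs(x):
    # (26): noisy azimuth only.
    return np.arctan2(x[1], x[0]) + rng.normal(0.0, sigma_theta)

x = np.array([300.0, 400.0, 10.0, 0.0])         # illustrative target state
z_active = active_obs(x)
z_passive = [passive_obs(x) for _ in range(2)]  # two passive radars
```

These functions play the role of the sensor-side likelihood evaluation in the MCB architecture: each sensor scores the broadcast particles against its own noisy measurement.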

The simulation setting may be applied to a composite guidance scenario [26], in which there are one active and two passive radars [27, 28]. Although we simplify the observation equation in a 2-D scenario, the simulation scenario is still suitable for the practical application of tracking and surveillance in network centric warfare (NCW) [29].

5.2 Evaluation

In our simulation, single-target tracking is performed with the aforementioned kinematic models. For each model, both the MCB and UKF-SCI algorithms are used to fuse the information of the three radars (one active and two passive) and obtain the final tracking result. Furthermore, in the tracking process of each model, the fusion performance is evaluated with four standard deviations of transition noise (i.e., {0.25,0.3,0.35,0.4}(m)). Note that 100 Monte Carlo runs are applied for each scenario, and the particle number N in our MCB algorithm is set to 200.
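For reference, the per-step RMSE over Monte Carlo runs used in the evaluation can be sketched as follows (the helper name and toy numbers are ours):

```python
import numpy as np

def rmse_over_runs(est, truth):
    # RMSE at each time step k: sqrt of the mean squared error over the M runs.
    # est: (M, K) array of estimates; truth: (K,) or (M, K) ground truth.
    err = est - truth
    return np.sqrt(np.mean(err ** 2, axis=0))

est = np.array([[1.0, 2.0], [3.0, 2.0]])   # two runs, two time steps (toy numbers)
truth = np.array([2.0, 2.0])
step_rmse = rmse_over_runs(est, truth)
```

Averaging `step_rmse` over a window of time steps gives the per-scenario summary values reported in the figures and tables below.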

The simulation results of the CV model are shown in Figs. 3 and 4. In Fig. 3, two scenarios with σ d ={0.3,0.4} are selected to show the fusion RMSE, averaged over all 100 Monte Carlo simulations, along the tracking time. Figure 3 shows that our MCB algorithm reduces the RMSE of both azimuth and distance throughout the tracking time, and this RMSE reduction grows as the transition noise increases. Furthermore, Fig. 4 shows the average RMSE (from tracking time 3 to 50 s) of the MCB and UKF-SCI algorithms for all four transition noises. As seen from this figure, our MCB algorithm outperforms the UKF-SCI algorithm in all cases except the azimuth case of σ d =0.25. In addition, the improvement of our MCB algorithm increases when the transition noise becomes larger.

Fig. 3
figure 3

The fusion RMSE of azimuth and distance for both MCB and UKF-SCI algorithms, with the CV model. In the upper figures, σ d =0.3(m), and in the bottom figures, σ d =0.4(m). There are four sub-figures a, b, c, and d, in which, under the CV model, the fusion RMSE of azimuth and distance is plotted against the time steps. Sub-figures a and b show the comparison between the MCB and UKF-SCI algorithms for the standard deviation of distance σ d =0.3(m); c and d show the comparison for σ d =0.4(m)

Fig. 4
figure 4

The comparison of azimuth and distance RMSE at different standard deviations of distance, with the CV model. There are two sub-figures a and b, in which, under the CV model, the fusion RMSE of azimuth and distance is plotted against the standard deviation of distance. The fusion RMSE in a and b is averaged over all fusion time steps

The simulation results of the CT model are shown in Figs. 5, 6, 7, and 8. Figures 5 and 6 show the fusion RMSE comparison of the MCB and UKF-SCI algorithms with the known turn rate ω=2.5°/s; Figs. 7 and 8 show the same comparison with ω=5°/s. All results in these figures are similar to those in Figs. 3 and 4. Hence, our MCB algorithm also outperforms the UKF-SCI algorithm with the CT models. In addition, our MCB algorithm performs better as the turn rate increases. The details are shown in Tables 2 and 3.

Fig. 5
figure 5

The fusion RMSE of azimuth and distance for both MCB and UKF-SCI algorithms, with the CT model of turn rate 2.5°/s. In the upper figures, σ d =0.3(m), and in the bottom figures, σ d =0.4(m). There are four sub-figures a, b, c, and d, in which, under the CT model of turn rate ω=2.5°/s, the fusion RMSE of azimuth and distance is plotted against the time steps. Sub-figures a and b show the comparison between the MCB and UKF-SCI algorithms for the standard deviation of distance σ d =0.3(m); c and d show the comparison for σ d =0.4(m)

Fig. 6
figure 6

The comparison of azimuth and distance RMSE at different standard deviations of distance, with the CT model of turn rate 2.5°/s. There are two sub-figures a and b, in which, under the CT model of turn rate 2.5°/s, the fusion RMSE of azimuth and distance is plotted against the standard deviation of distance. The fusion RMSE in a and b is averaged over all fusion time steps

Fig. 7
figure 7

The fusion RMSE of azimuth and distance for both MCB and UKF-SCI algorithms, with the CT model of turn rate 5°/s. In the upper figures, σ d =0.3(m), and in the bottom figures, σ d =0.4(m). There are four sub-figures a, b, c, and d, in which, under the CT model of turn rate ω=5°/s, the fusion RMSE of azimuth and distance is plotted against the time steps. Sub-figures a and b show the comparison between the MCB and UKF-SCI algorithms for the standard deviation of distance σ d =0.3(m); c and d show the comparison for σ d =0.4(m)

Fig. 8
figure 8

The comparison of azimuth and distance RMSE at different standard deviations of distance, with the CT model of turn rate 5°/s. There are two sub-figures a and b, in which, under the CT model of turn rate 5°/s, the fusion RMSE of azimuth and distance is plotted against the standard deviation of distance. The fusion RMSE in a and b is averaged over all fusion time steps

Table 2 The proportion of azimuth RMSE reduced by MCB algorithm over UKF-SCI, with four standard deviations of transition noise
Table 3 The proportion of distance RMSE reduced by MCB algorithm over UKF-SCI, with four standard deviations of transition noise

Tables 2 and 3 depict the proportions of RMSE reduction for the different kinematic models and noise levels. As seen in these two tables, the RMSE reduction becomes larger when the transition noise increases, which is consistent with the results in the aforementioned figures. Moreover, the RMSE reduction with the CT model of turn rate 5°/s is the largest among the three kinematic models. This means that our MCB algorithm has a greater advantage for highly maneuvering targets.

In summary, based on the BTF, our MCB algorithm makes full use of the information of all observations and fuses it to obtain a more accurate estimate of the target track. Hence, compared with the CI algorithm, our MCB algorithm has two advantages: (1) when the transition noise is large, the fusion RMSE of azimuth and distance remains small; (2) when the turn rate is large, the fusion RMSE of azimuth and distance remains small as well. In other words, in highly maneuvering cases such as large transition noise and turn rate, our MCB outperforms the state-of-the-art CI algorithm in terms of the fusion RMSE.

5.3 Computational complexity

In this section, the computational complexity of our MCB algorithm is analyzed. According to Section 4.2, in the fusion center, the algorithm contains two steps per iteration: the prediction and update steps. In the prediction step, according to (18) and (19), the mean and variance are calculated as sums whose computational complexity is proportional to the dimension of the tracking state. Moreover, N particles are drawn by a sampling process whose complexity is O(N). In the update step, we first multiply together the information of all I sensors for each particle to form a fused likelihood, with complexity O(I) per particle. Then, the likelihood is used to compute the fusion results with complexity O(N), according to (20) and (21). In summary, the computational complexity of the update step is O(I·N).

To further evaluate the computational complexity of our MCB algorithm, we recorded the computational time of the prediction and update steps for one iteration in the simulation. Specifically, the test computer has an Intel Core i7-3770 CPU at 3.4 GHz and 4 GB RAM. In the tracking case of Sections 5.1 and 5.2, the dimension of the tracking state is 4, the particle number is 200, and the sensor number is 3. In the simulation, the prediction and update steps take around 0.563 and 2.3 ms per iteration, respectively. In other words, only 2.863 ms is needed to compute the tracking results in the fusion center at each iteration. Hence, this algorithm is fast enough for radar tracking and fusion.

6 Conclusions

In this paper, we have proposed a novel MCB algorithm to achieve the distributed fusion estimation of nonlinear tracking. First, the distributed fusion architecture is set up based on BTF. Second, the sub-optimality in CI algorithms is proved. Then, to solve the estimation problem of nonlinear tracking, the Monte Carlo sampling method is incorporated into the distributed architecture. Benefiting from this sampling method, the approximation of fusion results is obtained through random particles. Simulation results verify that our MCB algorithm outperforms the state-of-the-art CI algorithm.

In summary, there are three directions for future work. (1) Our MCB algorithm only offers a distributed calculation of the update step. Hence, a fully distributed fusion structure is needed to further reduce the computation and communication overhead. (2) The discrete-time SSM used in our paper is actually a special case of the continuous-time SSM. Therefore, we can extend our method to the exponential tracking scenario, in which the filtering can be processed with partially unknown and uncertain transition probabilities [30–33] in a Markovian jump system. (3) Unknown inputs representing faults can be added into the SSM and the residuals calculated [34]. Hence, fault detection algorithms [35] and the fuzzy model [36] can also be incorporated into our fusion system to strengthen the reliability of the fusion process in future work.

7 Endnote

1 The feedback structure is a structure that feeds the tracking result back to each sensor as the prior knowledge for the next step.