1 Introduction

The distributed detection system offers high positioning accuracy, a wide detection range, and strong anti-interference ability, and therefore plays an important role in underwater target detection and tracking. It represents an inevitable development trend of information warfare and three-dimensional warfare.

Multi-sensor data fusion is one of the key algorithms for distributed systems, and its main research results are concentrated in the field of synchronous data fusion. Based on the Kalman filter, the authors of [1,2,3] established adaptive filtering, sequential filtering, and other methods to perform fusion estimation on multi-sensor measurements. Paper [4] proposed a distributed fusion estimator in the least-squares sense to solve the problem of data packet loss. Paper [5] proposed a distributed tracking method for maneuvering targets based on cross-correlation, which has high value in engineering applications.

However, differences in sensor sampling rate, measurement accuracy, and processing step size lead to data asynchrony in the fusion system [6]. For the problem of data asynchrony in distributed systems, Professor Han Chongzhao classified the multi-sensor asynchronous fusion problem in detail. On this basis, Blair used the least-squares method to estimate the equivalent synchronous state from the asynchronous measurements and then applied the synchronous fusion method to obtain the global estimation result [7, 8]. The literature [9] proposed an estimation algorithm based on the covariance intersection (CI) fusion algorithm. Bar-Shalom proposed the A1, B1, and C1 fusion methods and derived the Bl, Al1, and Bl1 fusion methods for the asynchronous problem of lagged measurements [10]. In addition, the distributed cubature information filter [11, 12], particle filter [13], strong tracking unscented filter [14], and other algorithms have also been used to deal with asynchronous problems caused by different sampling and transmission rates. Although many scholars have devoted themselves to research on asynchronous information fusion, most of the existing methods rely on accurate prior information, which is difficult to obtain in engineering practice.

For the underwater multi-sensor distributed detection system, because of the finite sound velocity and the spatial separation of the sensors, the target sound signals received by the distributed system at the same time originate from different times. Because underwater acoustic channels vary in both time and space, the exact acoustic propagation time cannot be obtained accurately, which makes it difficult to estimate the asynchrony of multi-sensor data [15]. Referring to the sound ray correction algorithm commonly used in long-baseline underwater acoustic positioning systems, this paper proposes a modified sequential Kalman filter algorithm for asynchronous track fusion to handle the data asynchrony caused by sound propagation. When the observation of one sensor is selected as the fusion benchmark, the asynchronous sequential Kalman filter based on constant sound velocity feedback (AS-KF-CSV) or the asynchronous sequential Kalman filter based on effective sound velocity feedback (AS-KF-ESV) is used to update the asynchronous information of each sensor on the same target, and the track fusion of the asynchronous information is performed after adjusting the algorithm parameters. Simulation results show that the algorithm achieves higher estimation accuracy on the same measurements.

The structure of the paper is as follows: Section 2 introduces the asynchronous measurement model of multiple sensors. Section 3 presents the main result of this paper, namely the sequential Kalman filter algorithm based on sound propagation time feedback. Section 4 gives the performance analysis of the algorithm under various application conditions. Finally, concluding remarks are given in Sect. 5.

2 Multi-sensor asynchronous measurement model

Figure 1 shows a multi-sensor detection scenario. For a discrete-time system, the motion state of the target can be expressed as:

$${\varvec{X}}({\text{t}} + 1) ={\varvec{\varPhi}}(t){\varvec{X}}(t) + {\varvec{G}}(t){\varvec{V}}(t)$$
(1)

\({\varvec{\varPhi}}(t) \in {\mathbf{R}}^{n,n}\) is the state transition matrix of the target, \({\varvec{X}}(t) \in {\mathbf{R}}^{n}\) is the state vector of the target at time \(t\); \({\varvec{G}}(t) \in {\mathbf{R}}^{n,h}\) is the process noise distribution matrix; \({\varvec{V}}(t) \in {\mathbf{R}}^{h}\) is a Gaussian white noise sequence with zero mean and covariance \(Q(t)\); \({\mathbf{R}}\) is the set of real numbers, \(n\) is the dimension of the state vector \(X\), and \(h\) is the dimension of the noise vector \(V\).
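As an illustration, the state model of Eq. (1) can be instantiated with a nearly constant-acceleration discretization. This is a sketch under assumed conventions (state ordering \([x, \dot{x}, \ddot{x}, y, \dot{y}, \ddot{y}]\), noise entering through acceleration); the paper does not fix a particular \(\varPhi\) and \(G\):

```python
import numpy as np

def ca_model(T):
    """Constant-acceleration blocks for a state [x, vx, ax, y, vy, ay].

    Hypothetical discretization used only for illustration; the paper's
    Phi(t) and G(t) may differ.
    """
    F1 = np.array([[1.0, T, 0.5 * T**2],
                   [0.0, 1.0, T],
                   [0.0, 0.0, 1.0]])
    Phi = np.kron(np.eye(2), F1)               # block-diagonal for x and y axes
    G1 = np.array([[0.5 * T**2], [T], [1.0]])
    G = np.kron(np.eye(2), G1)                 # noise drives the acceleration
    return Phi, G

def step(X, Phi, G, q, rng):
    """One step of Eq. (1): X(t+1) = Phi X(t) + G V(t), with V ~ N(0, q I)."""
    V = rng.normal(0.0, np.sqrt(q), size=(G.shape[1],))
    return Phi @ X + G @ V
```

With \(q = 0\) the step is purely deterministic, which is convenient for checking the transition matrix.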

Fig. 1

Multi-target motion model diagram

Given the real coordinates of the target and the sensors, combined with the corresponding acoustic environment, the state sequence \(\left\{ {X_{ij} ;t} \right\}\) of the radiated signal from target \(i\) to sensor \(j\) can be calculated by the gamma acoustic model. The observation of sensor \(j\) on target \(i\) can then be expressed as:

$$Z_{{t_{1} }}^{ij} = \left\{ \begin{array}{ll} HX_{{\min (t - t_{1} )}}^{ij} + v_{{t_{1} }} + a_{{t_{1} }} & \, rand(1) < Pf_{j} \hfill \\ \emptyset &\, rand(1) > Pf_{j} \hfill \\ \end{array} \right.$$
(2)

where \(Z_{{t_{1} }}^{ij}\) is the observation of target \(i\) by sensor \(j\) at time \(t_{1}\), \(X_{{\min (t - t_{1} )}}^{ij} \in \left\{ {X_{ij} ;t} \right\}\) is the target state closest in time to \(t_{1}\), and \(v_{{t_{1} }}\) and \(a_{{t_{1} }}\) are the system observation error and the environmental noise at \(t_{1}\), respectively; \(v_{{t_{1} }}\) and \(a_{{t_{1} }}\) are uncorrelated. \(Pf_{j}\) is the detection probability of sensor \(j\). For a multi-target moving scene, the observation \(Z_{{t_{1} }}^{j}\) of sensor \(j\) at \(t_{1}\) is the union of the multi-target observations and false tracks. To highlight the problem and simplify the model, this paper assumes that no false tracks are generated by any sensor:

$$Z_{{t_{1} }}^{j} = \bigcup {Z_{{t_{1} }}^{ij} }$$
(3)
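The observation rule of Eq. (2), with its delayed state lookup and probabilistic detection, can be sketched as follows. All names here are illustrative rather than from the paper, and `pd` plays the role of \(Pf_j\):

```python
import numpy as np

def observe(track, t_obs, H, meas_std, pd, rng):
    """Sketch of Eq. (2): a sensor observes the delayed target state.

    `track` is a list of (time, state) pairs describing the signal arriving
    at this sensor; the state whose timestamp is closest to t_obs is the one
    measured.  With probability 1 - pd the frame is missed and None is
    returned, modeling the empty-set branch of Eq. (2).
    """
    if rng.random() >= pd:          # missed detection -> empty observation
        return None
    times = np.array([t for t, _ in track])
    state = track[int(np.argmin(np.abs(times - t_obs)))][1]
    noise = rng.normal(0.0, meas_std, size=H.shape[0])
    return H @ state + noise        # observation error added per Eq. (2)
```

Setting `meas_std = 0` and `pd = 1` makes the function deterministic, which is useful for verifying the nearest-time lookup.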

3 Modified sequential Kalman filter algorithm in asynchronous track fusion

3.1 Sequential Kalman filter

The essence of sequential filtering is recursion. Because of its strong adaptability to the number of sensors and the form of the data, it is often used in multi-sensor information fusion. As a linear form of the Bayesian filter, the Kalman filter overcomes the lack of an analytical solution in the general Bayesian filter and is a common method for target-tracking problems. In a distributed detection system, each sensor estimates the target state to obtain an incomplete posterior estimate of the current target state, and the sequential filter then updates the incomplete posterior estimates to obtain the complete posterior estimate.

Suppose there is a distributed detection system composed of \(n\) sensors, and a target enters the monitoring area. Taking sensor \(j_{1}\) as the fusion benchmark, the state prediction and covariance matrix at \(t_{1}\) are calculated from the posterior estimates \(x_{{t_{1} - 1}}^{{j_{1} }}\) and \(P_{{t_{1} - 1}}^{{j_{1} }}\) at \(t_{1} - 1\):

$$x_{t_{1}|t_{1}-1}^{j_{1}} = Fx_{t_{1}-1}^{j_{1}}$$
(4)
$$P_{t_{1}|t_{1}-1}^{j_{1}} = FP_{t_{1}-1}^{j_{1}} F^{T} + Q_{t_{1}}^{j_{1}}$$
(5)

Then, Kalman gain can be calculated by the covariance matrix at \(t_{1}\):

$$K_{t_{1}}^{j_{1}} = P_{t_{1}|t_{1}-1}^{j_{1}} H_{t_{1}}^{j_{1}T} \left( H_{t_{1}}^{j_{1}} P_{t_{1}|t_{1}-1}^{j_{1}} H_{t_{1}}^{j_{1}T} + R_{t_{1}}^{j_{1}} \right)^{-1}$$
(6)

Then, the observation value \(Z_{{t_{1} }}^{{ij_{1} }}\) of the sensor \(j_{1}\) is processed to calculate the incomplete posterior estimate:

$$x_{t_{1}}^{j_{1}} = x_{t_{1}|t_{1}-1}^{j_{1}} + K_{t_{1}}^{j_{1}} \left(Z_{t_{1}}^{j_{1}} - Hx_{t_{1}|t_{1}-1}^{j_{1}}\right)$$
(7)
$$P_{t_{1}}^{j_{1}} = \left(I - K_{t_{1}}^{j_{1}} H_{t_{1}}^{j_{1}}\right)P_{t_{1}|t_{1}-1}^{j_{1}}$$
(8)

Based on the estimation results of the sensor \(j_{1}\), the estimation of other sensors in the distributed system is predicted:

$$x_{t_{1}|t_{1}-1}^{j_{1},k_{n}} = E\left\{ x_{t_{1}}^{k_{n}} \mid x_{t_{1}}^{j_{1}} \right\} = F\left(\tau_{t_{1}}^{k_{n}}, \tau_{t_{1}}^{j_{1}}\right)x_{t_{1}}^{j_{1}}$$
(9)

Then, the latest Kalman gain can be expressed as:

$$K_{{t_{1} }}^{{j_{1} ,k_{n} }} = P_{{t_{1} }}^{{j_{1} ,k_{n - 1} }} H_{{t_{1} }}^{{j_{1} ,k_{n} T}} \left( {H_{{t_{1} }}^{{j_{1} ,k_{n} }} P_{{t_{1} }}^{{j_{1} ,k_{n - 1} }} H_{{t_{1} }}^{{j_{1} ,k_{n} T}} + R_{{t_{1} }}^{{k_{n} }} } \right)^{ - 1}$$
(10)

Then, calculate the predicted estimates of all probe nodes based on the reference nodes:

$$x_{t_{1}}^{j_{1},k_{n}} = x_{t_{1}|t_{1}-1}^{j_{1},k_{n}} + K_{t_{1}}^{j_{1},k_{n}} \left(z_{t_{1}}^{k_{n}} - Hx_{t_{1}|t_{1}-1}^{j_{1},k_{n}}\right)$$
(11)
$$P_{t_{1}}^{j_{1},k_{n}} = \left(I - K_{t_{1}}^{j_{1},k_{n}} H_{t_{1}}^{j_{1},k_{n}}\right)P_{t_{1}|t_{1}-1}^{j_{1},k_{n}}$$
(12)

After iterative calculation of the measurement \(z_{{t_{1} }}^{{k_{n} }}\) of all sensors, a complete posterior estimate of the target state is obtained:

$$x_{{t_{1} }} = x_{{t_{1} }}^{{j_{1} ,k_{{{\text{end}}}} }}$$
(13)
$$P_{{t_{1} }} = P_{{t_{1} }}^{{j_{1} ,k_{{{\text{end}}}} }}$$
(14)

This completes the theoretical derivation of the sequential Kalman filter.
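The derivation in Eqs. (4)-(14) can be condensed into a short sketch. For readability, a single \(F\) and \(H\) are shared by all sensors, which is a simplifying assumption: the paper uses per-sensor \(H\) and the time-alignment transition \(F(\tau)\) of Eq. (9). Names are illustrative:

```python
import numpy as np

def sequential_kf_step(x, P, F, Q, H, measurements, R_list):
    """One fusion cycle of the sequential Kalman filter, Eqs. (4)-(14).

    The reference sensor's measurement comes first in `measurements`; the
    remaining sensors' measurements are folded in one by one, each update
    refining the incomplete posterior of the previous step.
    """
    # Eqs. (4)-(5): time prediction from the previous complete posterior
    x = F @ x
    P = F @ P @ F.T + Q
    # Eqs. (6)-(8) and then (10)-(12): sequential measurement updates
    for z, R in zip(measurements, R_list):
        if z is None:               # missed frame: skip this sensor
            continue
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    # Eqs. (13)-(14): the last incomplete posterior is the complete one
    return x, P
```

Note that the order of the updates does not change the final estimate for linear Gaussian models; it only changes which intermediate (incomplete) posteriors are produced.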

3.2 Asynchronous information correction based on sound propagation time

In underwater acoustic positioning systems, the acoustic environment is often assumed to have a constant sound velocity gradient; that is, a constant sound velocity is used for approximate calculation. Thus, the propagation time \(\tau_{{t_{1} }}^{{ik_{n} }}\) of the radiated sound from the target to each sensor can be obtained, and the asynchronous times of the sensors' observation outputs can be updated successively:

$$\tau_{{t_{1} }}^{{k_{n} }} = \tau_{{t_{1} }}^{{ik_{n} }} + \tau_{{t_{1} }}^{{j_{1} k_{n} }}$$
(15)

However, in the marine environment, the sound velocity varies in both time and space with changes in the underwater acoustic environment, and the bending of sound rays also introduces errors into the estimation of the sound propagation delay.

For the scenario in Fig. 1, the radiated signal from target \(i\) reaches sensors \(j_{1}\) and \(j_{2}\) at times \(t_{1}\) and \(t_{2}\), respectively, and the average sound speeds during propagation are \(c_{1}\) and \(c_{2}\), respectively. Then:

$$\left\{ \begin{gathered} \left( {i_{x} - j_{1x} } \right)^{2} + \left( {i_{y} - j_{1y} } \right)^{2} = (c_{1} t_{1} )^{2} \hfill \\ \left( {i_{x} - j_{2x} } \right)^{2} + \left( {i_{y} - j_{2y} } \right)^{2} = (c_{2} t_{2} )^{2} \hfill \\ \end{gathered} \right.$$
(16)

Transforming the coordinate system so that \(j_{1x} = - j_{2x}\) and \(j_{1y} = j_{2y}\), then:

$$i_{x} = \frac{{(c_{1} t_{1} )^{2} - (c_{2} t_{2} )^{2} }}{{4j_{1x} }}$$
(17)

If \(c_{1}\) is chosen as the common sound speed, the positioning error is:

$$\Delta = \frac{\left( c_{1}^{2} - c_{2}^{2} \right)}{4j_{1x}}t_{2}^{2}$$
(18)

It follows that the positioning error of the distributed system grows with the sound velocity error and the propagation time. According to Snell's law in layered media, calculating the accurate sound propagation time requires the source depth, the initial grazing angle, and the vertical distribution of the sound velocity. These are often unavailable during target localization, so this paper adopts the effective sound velocity method to correct the propagation time of the sound ray.
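A quick numeric check of Eq. (18) illustrates this sensitivity. The sound speeds, propagation time, and baseline below are assumed values chosen only for illustration, not figures from the paper:

```python
def positioning_error(c1, c2, t2, j1x):
    """Eq. (18): x-coordinate error when c1 is used in place of the true
    average speed c2 for the second sensor's range."""
    return (c1**2 - c2**2) * t2**2 / (4.0 * j1x)

# Illustrative numbers (assumed): a 20 m/s sound-speed mismatch over a 2 s
# propagation, with the sensors at x = +/-500 m.
err = positioning_error(1500.0, 1480.0, 2.0, 500.0)
```

Even this modest speed mismatch yields an error on the order of a hundred meters, showing why a propagation time correction is needed.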

The effective sound velocity is defined as the ratio of the geometric distance between transmitter and receiver to the actual sound propagation time [15]. For the application scenario of a distributed underwater detection system, the effective sound velocity table of the monitoring area relative to each sensor can be measured easily. In practical engineering applications, the sound velocity value is looked up according to the prior information of the target, and then the propagation time \(\tau_{{t_{1} }}^{{ik_{n} }}\) of the target sound wave relative to each sensor is updated. In the simulations in this paper, the Bellhop model is used to traverse the monitoring area and generate an effective sound velocity table for each sensor. The sound velocity gradient used in the calculation and the sound ray tracks at a certain time are shown in Figs. 2 and 3.

Fig. 2

Sound velocity gradient

Fig. 3

Ray traces
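The table lookup described above can be sketched as follows. The grid layout, the ESV values, and the nearest-node lookup are illustrative assumptions; in the paper the table entries come from Bellhop runs:

```python
import numpy as np

def build_esv_table(xs, ys, esv_values):
    """Pre-built effective-sound-velocity (ESV) table for one sensor.

    esv_values[i, j] holds the effective sound velocity (geometric range
    divided by true travel time) from grid point (xs[i], ys[j]) to the
    sensor.  Here the values are simply supplied, so this is a lookup
    sketch rather than an acoustic model.
    """
    return xs, ys, esv_values

def propagation_time(table, sensor_xy, target_xy):
    """Look up the ESV at the grid node nearest the target and return the
    corrected propagation time tau = range / ESV."""
    xs, ys, esv = table
    i = int(np.argmin(np.abs(xs - target_xy[0])))
    j = int(np.argmin(np.abs(ys - target_xy[1])))
    rng_m = float(np.hypot(target_xy[0] - sensor_xy[0],
                           target_xy[1] - sensor_xy[1]))
    return rng_m / esv[i, j]
```

A finer grid (smaller node spacing) raises lookup accuracy at the cost of table memory, which is the trade-off examined later in the complexity analysis.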

3.3 Algorithm adaptive adjustment

The AS-KF-ESV algorithm has been derived in detail above. In the actual marine environment, however, noise and other factors often cause missing track frames and track loss, which make algorithm convergence difficult. Therefore, fusion rules are designed on top of the algorithm to improve its adaptability and robustness.

In principle, we hope to use the measurements of the same sensor as the fusion benchmark throughout target tracking. Therefore, the sensor \(j_{1}\) closest to the target is selected as the fusion benchmark at the beginning of fusion, and markers are set to record the missed-frame probability \(p_{m}\) of each sensor in tracking the target within a sliding time window. When the benchmark node misses a frame, the node with the lowest missed-frame probability is selected as the temporary fusion benchmark. When the benchmark node misses three consecutive frames, the sensor is considered to have lost the target, and the temporary fusion benchmark is promoted to the fusion benchmark.
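These benchmark-switching rules can be summarized in a small decision function. This is a sketch: the sliding-window bookkeeping for \(p_m\) and the miss counters are assumed to be maintained elsewhere, and all names are illustrative:

```python
def choose_benchmark(current, miss_probs, consecutive_misses, max_miss=3):
    """Fusion-benchmark rules of Sect. 3.3.

    `miss_probs[j]` is sensor j's missed-frame probability over the sliding
    time window; `consecutive_misses[j]` is its current run of missed
    frames.  Returns (benchmark, active_node): if the benchmark misses a
    frame, the sensor with the lowest miss probability stands in
    temporarily; after `max_miss` consecutive misses the stand-in is
    promoted permanently.
    """
    temp = min(miss_probs, key=miss_probs.get)    # lowest missed-frame rate
    if consecutive_misses.get(current, 0) == 0:
        return current, current                   # benchmark is healthy
    if consecutive_misses[current] >= max_miss:
        return temp, temp                         # target lost: promote
    return current, temp                          # temporary stand-in
```

The permanent promotion after three misses prevents the fusion chain from repeatedly re-anchoring on a sensor that has lost the target.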

4 Numerical simulation analysis

4.1 Performance evaluation method

Because of the discrete nature of the simulation system, performance evaluation based on the root-mean-squared error (RMSE) can be misleading in practice. Therefore, the optimal subpattern assignment (OSPA) distance is used to evaluate performance. The OSPA distance is defined between two sets. Let the target state estimation set be \(X = \left\{ {x_{1} ,x_{2} , \ldots x_{n} } \right\}\) and the target state truth set be \(Y = \left\{ {y_{1} ,y_{2} , \ldots y_{m} } \right\}\) with \(0 < n \le m\). The OSPA distance is defined as:

$$d_{p,c}^{{\text{OSPA}}} (X,Y) = \left( {\mathop {\min }\limits_{\pi } \frac{1}{m}\sum\limits_{i = 1}^{n} {d_{c} \left( {x_{i} ,y_{\pi (i)} } \right)^{p} + \frac{{c^{p} }}{m}\left( {m - n} \right)} } \right)^{1/p}$$
(19)

where \(d_{c} \left( {x_{i} ,y_{\pi (i)} } \right)\) is the cutoff distance measuring the degree of association between sets \(X\) and \(Y\), expressed as:

$$d_{c} \left( {x,y} \right) = \min \left\{ {c,d(x,y)} \right\}$$
(20)

In the OSPA distance, \(c\) is the association cutoff radius and \(p\) is a dimensionless order parameter. From the definition it follows that:

$$0 \le d_{p,c}^{{{\text{OSPA}}}} \le c$$
(21)

In the simulation performance analysis of this paper, the OSPA parameter is set as \(c = 100,p = 1\).
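With \(n \le m\) and the small target counts simulated here, Eq. (19) can be evaluated by brute force over permutations, as in this sketch (names are illustrative; the defaults match the parameters above):

```python
import itertools
import numpy as np

def ospa(X, Y, c=100.0, p=1):
    """OSPA distance of Eq. (19) between estimate set X and truth set Y.

    Assumes len(X) <= len(Y), as in the text.  The minimum over
    assignments is found by enumerating permutations, which is adequate
    for a handful of targets.
    """
    n, m = len(X), len(Y)
    if m == 0:
        return 0.0
    best = min(
        sum(min(c, float(np.linalg.norm(np.asarray(x) - np.asarray(y))))**p
            for x, y in zip(X, perm))           # cutoff distance, Eq. (20)
        for perm in itertools.permutations(Y, n)
    )
    # cardinality penalty c^p (m - n), then normalize and take the p-th root
    return ((best + c**p * (m - n)) / m) ** (1.0 / p)
```

Because each per-pair distance is capped at \(c\) and the cardinality term contributes at most \(c^p\) per unmatched target, the result is bounded by \(c\), consistent with Eq. (21).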

4.2 Simulation analysis of algorithm performance

(1) Algorithm validity analysis

Situation 1: Assume that four sensors in the region form a distributed detection system for cooperative detection, and the observation output of each sensor is the target track under an interference background. The measurement standard deviations are set to \(\left[ {25\;{\text{m}},20\;{\text{m}},20\;{\text{m}},20\;{\text{m}}} \right]\), the detection period to \(T = 5\;{\text{s}}\), and the startup times to \([10\;{\text{s}},11\;{\text{s}},12\;{\text{s}},13\;{\text{s}}]\). The target appears at time 0 with initial state \([3000\;{\text{m}},8\;{\text{m/s}},0.1\;{\text{m/s}}^{2} ,1000\;{\text{m}},1\;{\text{m/s}}, - 0.01\;{\text{m/s}}^{2} ]\). Each sensor samples 157 times during the whole simulation. The number of Monte Carlo runs is \(N = 100\), and the time resolution of the target motion is 1 ms; that is, the simulation error of the sound propagation time is less than 1 ms. It is assumed that the whole motion of the target can be observed by all four sensors. The AS-KF-ESV algorithm is used to process the asynchronous measurements. The simulation results are illustrated in Fig. 4; the x and y coordinates in the figure are in the geodetic coordinate system, in meters. The algorithm performance, evaluated with OSPA, is illustrated in Fig. 5.

Fig. 4

The estimations of the algorithm

Fig. 5

OSPA performance evaluation

In this simulation experiment, the AS-KF-ESV algorithm realizes asynchronous measurement fusion for the distributed detection system, which verifies the effectiveness of the algorithm. Because the initial value is the sensor's direct output of the target location, the initial estimation accuracy of the AS-KF-ESV algorithm is low. After several iterations, the algorithm converges to the optimal state. Near frame 20, both the sensor observations and the AS-KF-ESV estimates fluctuated to some extent, but the algorithm soon re-converged and returned to a stable state.

The simulation conditions remain unchanged. The direct fusion method (D-KF), AS-KF, AS-KF-CSV, and AS-KF-ESV are each used for fusion processing. The simulation results are shown in Fig. 6.

Fig. 6

Performance comparison of different algorithms

In this simulation experiment, the D-KF algorithm failed to converge because of the time mismatch of the observation data of the four sensors, and the target position could not be estimated. The other three asynchronous fusion algorithms all completed the target estimation. At the beginning of the simulation, the accuracy curves of AS-KF-ESV and AS-KF-CSV were close because the target was near the sensors. As the target moved away from the sensors, the estimation accuracy of AS-KF-ESV became better than that of AS-KF-CSV. Overall, the estimation accuracy with propagation time correction is better than without it.

(2) Analysis of algorithm influence factors

Situation 2: Assume that four sensors in the region form a distributed detection system for cooperative detection. The observation output of each sensor is the target track under an interference background, the measurement standard deviations are set to \(\left[ {25\;{\text{m}},20\;{\text{m}},20\;{\text{m}},20\;{\text{m}}} \right]\), the detection period to \(T = 5\;{\text{s}}\), and the startup times to \([10\;{\text{s}},11\;{\text{s}},12\;{\text{s}},13\;{\text{s}}]\). There are two uniformly moving targets in the region, both appearing at time 0. The initial state of target one is \([ - 1000\;{\text{m}},25\;{\text{m/s}},1000\;{\text{m}},13\;{\text{m/s}}]\), and that of target two is \([ - 1000\;{\text{m}},11\;{\text{m/s}}, - 1000\;{\text{m}},13\;{\text{m/s}}]\). Each sensor samples 157 times during the simulation. The number of Monte Carlo runs is \(N = 100\), and the time resolution of the target motion is 1 ms. It is assumed that the four sensors can observe the whole motion of the targets.

Figure 7 shows the simulation results of situation 2; the x and y coordinates in the figure are in the geodetic coordinate system, in meters. Figure 8 compares the performance analysis of situation 1 and situation 2. In this simulation experiment, the AS-KF-ESV algorithm completes the asynchronous track fusion of targets with three different motion situations. There is no direct linear or nonlinear relationship between fusion accuracy and target velocity.

Fig. 7

The estimations of situation 2

Fig. 8

OSPA performance evaluation

The other elements of situation 1 remain unchanged, and the standard deviations of the sensors are successively changed to \([5\;{\text{m}},5\;{\text{m}},5\;{\text{m}},5\;{\text{m}}]\), \([15\;{\text{m}},20\;{\text{m}},25\;{\text{m}},30\;{\text{m}}]\), \([15\;{\text{m}},20\;{\text{m}},25\;{\text{m}},30\;{\text{m}}]\) and \([35\;{\text{m}},35\;{\text{m}},35\;{\text{m}},35\;{\text{m}}]\). The simulation results are shown in Fig. 9.

Fig. 9

Influence of measurement quality on estimation results

In this simulation experiment, the accuracy of the AS-KF-ESV algorithm's fused target localization fluctuates with sensor performance. The AS-KF-ESV algorithm gives more accurate target location results when all sensors are consistent. Conversely, when the consistency among sensors is poor, the fusion result of the AS-KF-ESV algorithm tends toward the higher-precision measurements.

Keeping the other elements of situation 1 unchanged, the startup times of the sensors are changed to \([10\;{\text{s}},10.5\;{\text{s}},11\;{\text{s}},11.5\;{\text{s}}]\) and \([10\;{\text{s}},11.5\;{\text{s}},13\;{\text{s}},14.5\;{\text{s}}]\). The simulation results are shown in Fig. 10.

Fig. 10

Influence of asynchronous time on estimation results

In this simulation experiment, when the time delays among the sensors are small, the accuracy of the fusion estimate is relatively high. Figure 11 shows the fusion results for different numbers of sensors in the same situation. It can be seen that the more sensors are fused, the higher the accuracy of the fusion results.

Fig. 11

Comparison of fusion results of different numbers of sensors

(3) Robustness analysis of the algorithm

To simulate the frame loss that occurs during actual detection and analyze the algorithm's robustness, three control experiments were designed based on situation 1: each of the four sensors randomly lost 10, 20, or 30 frames. For each case, 100 Monte Carlo simulation experiments were performed. The simulation results are shown in Fig. 12.

Fig. 12

Effect of missing measurement on estimation results

In this simulation experiment, missing sensor frames affect the fusion accuracy: the more frames are missing, the lower the fusion estimation accuracy, which is consistent with the actual situation. Meanwhile, in all three control experiments, AS-KF-ESV completed the track fusion of multiple sensors, and the fusion accuracy was higher than that of the original measurements. In particular, when the number of missing frames is bounded, the fusion precision loss of the AS-KF-ESV algorithm stays within a controllable range, which verifies the robustness and practical value of the algorithm.

(4) Algorithm complexity analysis

To analyze the complexity of the proposed algorithm, the AS-KF-ESV algorithm is compared with the benchmark algorithm in time and space resource occupation based on situation 1. Effective sound velocity tables with resolutions of 5 m, 10 m, and 15 m are built for the AS-KF-ESV lookups. This experiment performed 10,000 Monte Carlo simulation runs, and the comparison results are shown in Table 1.

Table 1 Analysis results of algorithm complexity

From the data in the table, it can be seen that under the same simulation conditions, the running time of the AS-KF-ESV algorithm is about 8% longer than that of the basic algorithm, and there is no obvious linear relationship between the running time and the resolution of the effective sound velocity table. Compared with the basic algorithm, the AS-KF-ESV algorithm occupies noticeably more computing resources, and as the resolution of the effective sound velocity table improves, its memory occupation increases significantly. In addition, computing the effective sound velocity table itself takes considerable time, but because the table is pre-built, the time complexity analysis in this paper excludes its construction.

5 Conclusion

This paper proposes a modified sequential Kalman filter for asynchronous track fusion based on sound propagation time to solve the signal asynchrony caused by different sound propagation times in distributed underwater acoustic detection systems. After each sensor of the distributed system locates and stably tracks the target, the constant sound velocity method or the effective sound velocity method is used to calculate the sound propagation time of the target signal from the source to the receiving sensor. The asynchronous information of the tracked target is updated, and the asynchronous tracks are then fused by the sequential Kalman filter. During fusion, the members of the fusion group are dynamically updated according to the detection results of each sensor, which handles the situations in which a sensor loses the target or the target escapes from the detection range of a certain sensor during actual detection. The simulation results show that the algorithm can effectively realize asynchronous measurement fusion for a distributed detection system, and it achieves higher estimation accuracy than the original algorithm under the same conditions. The robustness analysis also verifies the applicability of the algorithm in practical situations.