1 Introduction

Low-cost, easy-to-use, and readily available unmanned aircraft systems (UASs) are advancing in capability and attracting both military and commercial industries. These advanced capabilities also enable adversaries to distract, desensitize, and disrupt military operations [1]. A drone operating near a populated or restricted area can record video or audio conversations, crash deliberately into people or objects, and could even drop malicious biological or chemical weapons [2, 3]. The threat is not limited to single drones: swarms of tens or hundreds of drones may spy or strike [4]. As a result, defense agencies and industry experts are looking at a system-of-systems approach that incorporates a broad set of capabilities, including electronic warfare, electro-optic and infrared cameras, RADARs, and user displays [5,6,7,8,9,10].

By 2020, the Federal Aviation Administration (FAA) expects as many as 30,000 UAVs to be flying in the US. This figure is disturbing given the recent number of incidents involving UAVs flying over or landing on critical infrastructure. As the proliferation of small consumer drones has raised much recent interest in their regulation and monitoring, one potential way to detect and identify these drones is ground-based radar [11]. However, composite UAVs have a small radar cross section (RCS), and because they fly slowly and at low altitudes they easily blend into the surrounding clutter, which makes it difficult for a typical radar to see them [12,13,14]. DoD officials are particularly concerned about UASs with a low radar cross section. A target is typically detected by electronic systems such as electro-optic/IR systems or ground- or air-based RF (radio frequency) systems. Both of these systems exploit the target's signatures, such as its minimum resolvable temperature difference or its RCS, so a target must reduce these signatures in order not to be detected [15,16,17]. The mission effectiveness of RADAR can be degraded by electronic attacks and by RADAR Absorbing Materials (RAMs), which decrease the target signature. A low RCS tends to increase the measurement covariance of the RF sensor and hence the sensed tracking position errors. The RADAR is an active sensor with a narrow beamwidth, and its beams are prone to electromagnetic interference. The forward-looking infrared (FLIR) sensor is a passive system; it is quite sensitive to atmospheric conditions but is not susceptible to electromagnetic interference. With thermal imaging technology, it is difficult for a drone to go unnoticed: a 360° thermal sensor detects any object, hot or cold, day and night, even one designed to be stealthy [10]. An example of the DJI Phantom 3 UAV detected by RADAR, IR night vision, and a thermal camera is shown in Fig. 1.

Fig. 1. RADAR and IR signatures (from [10, 15])

FLIR sensors provide good azimuth resolution but poor range resolution; radar sensors provide good range resolution but poor azimuth resolution. Fusing the detections from both FLIR and radar can therefore significantly enhance performance by exploiting the strengths of each [18, 19]. With these insights, the main objective of this paper is to mitigate the high degree of uncertainty associated with the target dynamics model and with the origin of the measurements using data association and robust filtering algorithms. The paper is organized as follows: Sect. 2 presents the mathematical modeling and formulation of the two sensors along with the state estimation techniques. Section 3 presents the experimental results and analysis, followed by conclusions and future scope in Sect. 4.

2 Mathematical Modeling and Formulation

A three-dimensional spherical coordinate system [20] with a flat-earth approximation [21], shown in Fig. 2, is used to describe the target information sensed by the RADAR and FLIR, where point P is the position of the target in the Cartesian coordinate system.

Fig. 2. 3D spherical coordinate system

Equation 1 describes the target's position, velocity, and acceleration in the x, y, and z directions in a finite-dimensional representation [22].

$$ \left[ {x \,\dot{x}\, \ddot{x} \,y \,\dot{y}\,\ddot{y}\, z \,\dot{z}\, \ddot{z}} \right]^{T} $$
(1)

Using the target's position information, the line-of-sight measurements [23] are given by

$$ r = \sqrt {\Delta x^{2} + \Delta y^{2} + \Delta z^{2} } $$
(2)
$$ \phi = \tan^{ - 1} \left( {\frac{\Delta y}{\Delta x}} \right) $$
(3)
$$ \theta = \tan^{ - 1} \left( {\frac{ - \Delta z}{{\sqrt {\Delta x^{2} + \Delta y^{2} } }}} \right) $$
(4)

The target is modeled as a point target, which fits readily into state-variable form, and the target motions are generated in MATLAB. The first trajectory is a projectile motion [24]; the second is a special case in which the target maneuvers aggressively in a nonlinear fashion [25], given by

$$ x_{k} = 0.5x_{k - 1} + \frac{{25x_{k - 1} }}{{1 + x_{k - 1}^{2} }} + 8\cos \left( {1.2\left( {k - 1} \right)} \right) $$
(5)
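Equation 5 is a widely used scalar benchmark for nonlinear filters. As a concrete illustration, the maneuvering trajectory can be generated as sketched below (the paper's simulations are in MATLAB; this is an equivalent Python/NumPy sketch, with the initial state, noise level, and horizon chosen as illustrative assumptions):

```python
import numpy as np

def nonlinear_trajectory(n_steps, x0=0.1, q_std=0.1, seed=0):
    """Generate the aggressively maneuvering trajectory of Eq. (5),
    with optional additive Gaussian process noise of std q_std."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        x[k] = (0.5 * x[k - 1]
                + 25.0 * x[k - 1] / (1.0 + x[k - 1] ** 2)
                + 8.0 * np.cos(1.2 * (k - 1))
                + q_std * rng.standard_normal())
    return x

trajectory = nonlinear_trajectory(200)
```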

2.1 State Estimation

A state-variable approach [26] is adopted to represent the state vector of the dynamic system; for a given kinematic model the generic representation is

$$ \hat{X}_{k} \left( - \right) = F\hat{X}_{k - 1} \left( + \right) + Gw_{k - 1} $$
(6)
$$ Z_{k} = H\hat{X}_{k - 1} \left( + \right) + v_{k} $$
(7)

where Eq. 6 is the system dynamic model, Eq. 7 is the measurement model, and vk is the measurement noise for both the RADAR and the FLIR. The measurement model describes the observed characteristics of the target, modeled as a point target, assuming passive detection. The measurement error model [27] for the RADAR, whose variance is modeled as Gaussian noise that varies with the signal-to-noise ratio, is written as

$$ \sigma_{range} = \frac{\tau }{{2.5\sqrt {2SNR} }} $$
(8)
$$ \sigma_{az - el} = \frac{{\theta_{3db} }}{{2.5\sqrt {2SNR} }} $$
(9)

where SNR is given by

$$ SNR = \frac{{nP_{T} G_{T} G_{R} \sigma \lambda^{2} }}{{\left( {4\pi } \right)^{3} kTBF(R_{max} )^{4} L}} $$
(10)
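For illustration, Eqs. 8–10 can be combined to compute the range- and angle-noise standard deviations as a function of target range. The following minimal Python/NumPy sketch takes Eqs. 8 and 9 literally; all parameter values (powers, gains, bandwidth, pulse width, beamwidth) are illustrative assumptions, not the paper's specifications:

```python
import numpy as np

# Illustrative RADAR parameters (assumed, not the paper's values)
n_p = 10               # integrated pulses n
P_T = 1e3              # transmit power [W]
G_T = G_R = 10**3.0    # antenna gains (30 dB)
lam = 0.03             # wavelength [m]
sig = 0.01             # target RCS sigma [m^2]
k_B = 1.38e-23         # Boltzmann constant [J/K]
T   = 290.0            # system temperature [K]
B   = 1e6              # bandwidth [Hz]
F   = 10**0.3          # noise figure (3 dB)
L   = 10**0.4          # losses (4 dB)

def snr(R):
    """Eq. (10): signal-to-noise ratio at range R [m]."""
    return (n_p * P_T * G_T * G_R * sig * lam**2) / (
        (4 * np.pi)**3 * k_B * T * B * F * R**4 * L)

def meas_std(R, tau=1e-6, theta_3db=np.deg2rad(2.0)):
    """Eqs. (8)-(9): range and azimuth/elevation measurement
    standard deviations as functions of SNR."""
    s = snr(R)
    sigma_range = tau / (2.5 * np.sqrt(2.0 * s))
    sigma_az_el = theta_3db / (2.5 * np.sqrt(2.0 * s))
    return sigma_range, sigma_az_el
```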

The RADAR signature σ is predicted using POFACETS [28], a convenient tool based on the physical-optics approximation. A built-in CAD-library airplane model, shown in Fig. 3 together with its predicted RCS, is considered for the simulation.

Fig. 3. Triangular surface of the airplane model and its predicted RCS

The reduced RCS considered for the simulation is only a predicted value, obtained by applying RADAR absorbing material in the POFACETS tool. Similarly, noise in IR systems, caused by distortions or statistical fluctuations in the electrical current, is classified and characterized as Johnson noise, shot noise, generation–recombination noise, and photon noise. Using blackbody radiance theory and detector theory [29], these noises are calculated using Eqs. 13–16, and the total noise current is given by

$$ \sigma_{noise}\,or\, n\left( R \right)_{Photovoltaic} = \sqrt {jn\left( r \right) + sn\left( r \right) + pn\left( r \right)} $$
(11)
$$ \sigma_{noise}\,or\, n\left( R \right)_{Photoconductive} = \sqrt {jn\left( r \right) + grn\left( r \right) + pn\left( r \right)} $$
(12)

where

$$ Johnson\, noise\, jn\left( r \right) = \frac{{4kT_{d} B\left( R \right)}}{{R_{d} }} $$
(13)
$$ Dark\, current\, shot \,noise\, sn\left( r \right) = 2qi_{D} B\left( R \right) $$
(14)
$$ {\text{generation}} - {\text{recombination }}\,noise \,grn\left( r \right) = 4qGi_{D} B\left( R \right) $$
(15)
$$ Photon\, noise\, pn\left( r \right) = \frac{{2\eta q^{2} \left( {P_{s} + P_{b} } \right)B\lambda }}{hc} $$
(16)
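As an illustration, the total noise current of a photovoltaic detector (Eq. 11) can be assembled from Eqs. 13, 14, and 16. The sketch below uses Python/NumPy; the default detector parameters (temperature, resistance, dark current, quantum efficiency, incident powers, wavelength) are illustrative assumptions:

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
q   = 1.602e-19      # electron charge [C]
h   = 6.626e-34      # Planck constant [J s]
c   = 2.998e8        # speed of light [m/s]

def photovoltaic_noise(B, T_d=300.0, R_d=1e6, i_D=1e-9,
                       eta=0.7, P_s=1e-9, P_b=1e-8, lam=4e-6):
    """Eq. (11): total noise current of a photovoltaic detector,
    combining Johnson (Eq. 13), shot (Eq. 14), and photon (Eq. 16)
    terms. All default parameter values are illustrative assumptions."""
    jn = 4.0 * k_B * T_d * B / R_d                           # Johnson noise
    sn = 2.0 * q * i_D * B                                   # dark-current shot noise
    pn = 2.0 * eta * q**2 * (P_s + P_b) * B * lam / (h * c)  # photon noise
    return np.sqrt(jn + sn + pn)
```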

2.2 Sensor Fusion

An Extended Kalman Filter (EKF) based centralized fusion architecture is considered, in which the data from both sensors are synchronized at the same sampling interval T. Two widely used fusion methods, Measurement Fusion (MF) and State Vector Fusion (SVF) [30, 31], are implemented. Measurement fusion begins by fusing the measurement noise variances and the measured values:

$$ \sigma_{f} = \sigma_{ir} - \sigma_{ir} \left[ {\sigma_{ir} + \sigma_{r} } \right]^{ - 1} \sigma_{ir} $$
(17)
$$ Z_{f} = Z_{ir} + \sigma_{ir} \left[ {\sigma_{ir} + \sigma_{r} } \right]^{ - 1} \left[ {Z_{r} - Z_{ir} } \right] $$
(18)
$$ Z = \left[ {Z_{az}\, Z_{el}\, Z_{r} } \right] $$
(19)

Equation 19 gives the final fused measurement vector; the fused covariances and fused measurements are then fed into the EKF. In state vector fusion, each sensor's measurements are first estimated with a separate EKF, and the resulting state estimates and state error covariances are fused. The fusion steps are given by the following equations (a code sketch follows Eq. 21):

$$ Z_{f} = Z_{ir} + P_{ir} \left[ {P_{ir} + P_{r} } \right]^{ - 1} \left[ {Z_{r} - Z_{ir} } \right] $$
(20)
$$ P_{f} = P_{ir} - P_{ir} \left[ {P_{ir} + P_{r} } \right]^{ - 1} P_{r}^{T} $$
(21)
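To make the two schemes concrete, the following is a minimal Python/NumPy sketch of Eqs. 17–18 (measurement fusion) and Eqs. 20–21 (state vector fusion); the function and variable names are assumptions for illustration:

```python
import numpy as np

def measurement_fusion(z_ir, z_r, R_ir, R_r):
    """Eqs. (17)-(18): fuse IR and RADAR measurements and their noise
    covariances before filtering; the result feeds a single EKF."""
    K = R_ir @ np.linalg.inv(R_ir + R_r)
    z_f = z_ir + K @ (z_r - z_ir)   # Eq. (18)
    R_f = R_ir - K @ R_ir           # Eq. (17)
    return z_f, R_f

def state_vector_fusion(x_ir, x_r, P_ir, P_r):
    """Eqs. (20)-(21): fuse the per-sensor EKF state estimates and
    their error covariances after filtering."""
    K = P_ir @ np.linalg.inv(P_ir + P_r)
    x_f = x_ir + K @ (x_r - x_ir)   # Eq. (20)
    P_f = P_ir - K @ P_r.T          # Eq. (21)
    return x_f, P_f
```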

When using the EKF, the measurement sensitivity (Jacobian) matrix, whose purpose is to linearize the measurement model, is obtained by the finite-difference method [31, 32]. It is given by

$$ H_{k} = H_{ij} = \left. {\frac{{\partial h_{i} }}{{\partial x_{j} }}} \right|_{X = \hat{X}_{k - 1} \left( + \right)} = \frac{{h_{i} \left( {x_{j} + \Delta x_{j} } \right) - h_{i} \left( {x_{j} } \right)}}{{\Delta x_{j} }} $$
(22)
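A possible implementation of Eq. 22, applied to the spherical measurement model of Eqs. 2–4, is sketched below (the perturbation size and the sample state are illustrative choices):

```python
import numpy as np

def numerical_jacobian(h, x, dx=1e-6):
    """Eq. (22): forward-difference approximation of the measurement
    sensitivity matrix H = dh/dx at the current state estimate x."""
    z0 = np.atleast_1d(h(x))
    H = np.zeros((z0.size, x.size))
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] += dx
        H[:, j] = (np.atleast_1d(h(x_pert)) - z0) / dx
    return H

def h_spherical(p):
    """Spherical measurements of Eqs. (2)-(4) from relative position p."""
    dx, dy, dz = p
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    phi = np.arctan2(dy, dx)
    theta = np.arctan2(-dz, np.sqrt(dx**2 + dy**2))
    return np.array([r, phi, theta])

H_k = numerical_jacobian(h_spherical, np.array([1000.0, 2000.0, -500.0]))
```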

2.3 Particle Filter

The particle filter (PF) is a class of efficient Bayesian filters, also known as sequential Monte Carlo filters, introduced for nonlinear estimation [33]. The purpose of exploiting the PF here is to assess its estimation performance on the nonlinear case alongside the Extended Kalman filter. The steps involved in the particle filter, sketched in code after this list, are:

  • Initialize the state estimate vector \( \hat{X}_{k - 1} \left( - \right) \)

  • Choose the process noise variance value for \( Q_{k} \) as 0.1

  • Randomly generate N particles \( \left( {x_{0,i}^{+}\, where\, i = 1, \ldots N} \right) \) assuming a known Gaussian probability density function

  • Obtain a priori particles \( \hat{X}_{k} \left( - \right) \) by performing the time propagation step for k = 1, 2, …: \( \hat{X}_{k} \left( - \right) = f_{k - 1} \left( {\hat{X}_{k - 1,i} \left( + \right),w_{k - 1}^{i} } \right) \), where i = 1, …, N

  • Compute the relative likelihood \( q_{i} \) of each particle \( \hat{X}_{k} \left( - \right) \), conditioned on the measurement, by evaluating the pdf \( p\left( {y_{k} |\hat{X}_{k - 1,i} } \right) \)

  • Normalize the relative likelihoods as \( q_{i} = \frac{{q_{i} }}{{\mathop \sum \nolimits_{j = 1}^{N} q_{j} }} \) so that they sum to one.

  • Perform the resampling step: based on these relative likelihoods, generate a set of a posteriori particles \( \hat{X}_{k} \left( + \right) \).
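The steps above correspond to a bootstrap particle filter. A minimal Python/NumPy sketch for a scalar model such as Eq. 5 follows; the quadratic measurement function is a common benchmark pairing for Eq. 5 and, like the values of N, Q, and R here, is an illustrative assumption rather than the paper's exact setup:

```python
import numpy as np

def particle_filter(z, f, h, Q=0.1, R=1.0, N=500, seed=0):
    """Bootstrap PF implementing the listed steps for a scalar state."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, N)      # initialize N particles
    estimates = np.zeros(len(z))
    for k in range(len(z)):
        # time propagation: a priori particles with process noise
        particles = f(particles, k) + np.sqrt(Q) * rng.standard_normal(N)
        # relative likelihood of each particle given the measurement
        q = np.exp(-0.5 * (z[k] - h(particles))**2 / R) + 1e-300
        q /= q.sum()                          # normalize to sum to one
        # resampling: draw a posteriori particles by relative likelihood
        particles = particles[rng.choice(N, size=N, p=q)]
        estimates[k] = particles.mean()
    return estimates

# Dynamics of Eq. (5); the quadratic measurement is an assumed pairing
f = lambda x, k: 0.5*x + 25.0*x/(1.0 + x**2) + 8.0*np.cos(1.2*k)
h = lambda x: x**2 / 20.0
```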

3 Simulation Analysis

For the purpose of the analysis, it is assumed that the RADAR and FLIR are co-located, with the origin as their center, and that the operating range is 25 km. The simulation uses a sampling interval of 0.01 s. The RADAR and FLIR specifications are taken from [16, 29].

Other than the PF, widely used nonlinear state estimators include the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). However, from Figs. 4 and 5 it can be seen that when the model nonlinearities are severe and difficult to linearize, the EKF is prone to tuning difficulties that can result in unpredictable estimates [34]. Although the UKF provides a significant increase in estimation accuracy, it is still only an approximate nonlinear estimator [35], whereas the PF is a complete nonlinear state estimator. Moreover, this recursive Bayesian estimator provides better estimation accuracy for non-Gaussian distributions. The PF algorithm estimates the probability distribution function (PDF) of the state variables given the measurements, and the number of particles generated is a trade-off between accuracy and computational cost. The key feature of the PF over linear filtering is resampling, which generates a posteriori particles on the basis of relative likelihood. Both fusion methods, which are functionally equivalent, perform better than a single sensor. In MF, a weighted or combined measurement is obtained by fusing the sensor measurements directly, and the EKF then operates on the fused measurement to produce the final state estimate; the combined measurement noise variance must be fed into the EKF. SVF first estimates each sensor's measurements using an EKF and then fuses the smoothed, final estimates.

Fig. 4. Sensor and filter estimates for non-linear motion

Fig. 5. Position errors in range for nonlinear trajectory

The superiority of the two fusion methods is shown for two cases. The RMS and RSSP errors in range and angles are given in Table 1 to evaluate the performance of the methods for the two target motions.

Table 1. Comparison of RMS and RSSP errors

4 Conclusion and Future Scope

This paper presented the performance evaluation of, and the necessity for, the fusion methods for two different target motions. Besides data fusion, nonlinear filtering theory was also presented, in which the particle filter is a recent development toward a general solution of the nonlinear problem. From the simulation analysis, it can be concluded that adopting suitable state estimation techniques and choosing filters/estimators appropriately can mitigate the dominant sources of uncertainty in the sensor or target and thereby improve tracking accuracy. Since all systems are ultimately nonlinear, the high degree of nonlinearity and uncertainty associated with the system dynamics and the measurement model makes tracking difficult. Future work involves real-time target detection and tracking in a multi-target, multi-sensor scenario. This involves signal and image processing algorithms to extract target features in the presence of radar absorbing materials using RADAR, night vision, and thermal cameras. The predicted RCS considered for the simulation here must be investigated in more technical depth. It also remains to be seen how FLIR thermal sensors can augment target information when reduced RCS introduces larger position errors and the target blends into surrounding clutter. This research is therefore termed detection and tracking of sUAS in the presence of surrounding clutter using novel filtering and image acquisition techniques.