1 Introduction

The number of resident space objects (RSOs) is increasing year after year and, to cope with that, the sensing capabilities of space surveillance and tracking (SST) sensor networks are also growing [1]. SST systems are composed of sensors and on-ground processing infrastructure devoted to the build-up, maintenance and exploitation of a catalogue of RSOs: a robust automated database that contains information on every detected and maintained object. During surveillance, large areas of the sky are scanned to obtain data for both catalogue build-up and maintenance activities. The catalogue build-up process consists of detecting and cataloguing new objects to include them in the catalogue without any previous information (i.e., initialising them), while the maintenance process entails updating the information of already catalogued objects. Hence, catalogue build-up depends on the capability of the processing infrastructure to detect and initialise new objects from measurements, packed as tracks, provided by a sensor network. Besides, the orbital information of the objects must be initialised with enough accuracy to ensure successful correlation of subsequent tracks as part of the maintenance of the catalogue.

The catalogue is used for the provision of space situational awareness (SSA) products (e.g., high-risk collision, upcoming re-entry, and fragmentation detection warnings) based on the information available in it. Therefore, it is crucial to develop methods that enable the detection and initialisation of new RSOs. One of the features that makes the observation association problem so challenging is the coupling between detection and estimation: to identify a new RSO its orbit must be estimated, yet only observations belonging to that same RSO should be used in the estimation, so the association must already be solved.

Before addressing the definition of the problem, the authors find it necessary to clarify some terms that are used extensively throughout this paper:

Measurement: Single value of a geometrical or physical property of an object observed by a sensor at a certain epoch, t:

$$\begin{aligned} z\left( t\right) . \end{aligned}$$

In this paper the following measurements are used: azimuth, elevation, range and range-rate.

Observation: Set of m measurements taken from a single sensor at a common epoch, t, and originating from the same object:

$$\begin{aligned} {\mathcal {O}} = {\varvec{z}}\left( t\right) = \lbrace z_1\left( t\right) , z_2\left( t\right) , \ldots , z_m\left( t\right) \rbrace . \end{aligned}$$

Track: Set of n observations taken by a single sensor during a period, \(\Delta {t}=t_n-t_1\), of continuous observation of an object:

$$\begin{aligned} {\mathcal {T}} = \bigcup \limits _{i=1}^{n} {\mathcal {O}}_{i} = \lbrace {\mathcal {O}}_1, {\mathcal {O}}_2, \ldots , {\mathcal {O}}_n \rbrace . \end{aligned}$$

It is also referred to as tracklet, too short arc (TSA) or very short arc (VSA) if there is not enough data to reliably estimate an orbit [2]. Surveillance sensors provide uncorrelated tracks (UCTs), while tracking sensors normally know with certain confidence the object being observed.

Hypothesis: Association of N tracks assumed to have been originated from a common RSO:

$$\begin{aligned} {\mathcal {H}} = \bigcup \limits _{i=1}^{N} {\mathcal {T}}_{i} = \lbrace {\mathcal {T}}_1, {\mathcal {T}}_2, \ldots , {\mathcal {T}}_N \rbrace . \end{aligned}$$

Throughout the paper, set-theory notation is used. Accordingly, the number of elements of a set A, i.e., its cardinality, is denoted \(\vert A \vert\); the union and intersection of two sets A and B are denoted \(A \cup B\) and \(A \cap B\), respectively; and the set of all members of A that are not members of B is denoted \(A \setminus B\).

There are several association/correlation problems present in cataloguing activities: (1) track-to-track, (2) track-to-orbit and (3) orbit-to-orbit. The first arises during the detection and initialisation of new RSOs with survey sensing data (catalogue build-up), since a single track is usually not enough to reliably estimate the orbit of an RSO (the uncertainty is too high for cataloguing purposes). The second is required for the orbital information update (catalogue maintenance), i.e., to correlate incoming tracks against the orbits of already catalogued RSOs. The third is needed to compare two catalogues of RSOs for tagging or quality assessment purposes, as well as for self-correlation tasks such as the detection of duplicated objects. The terminology used by authors from the SST community [3,4,5,6] may differ from that of the classical tracking community [7,8,9,10]. In the latter, the term track refers to a well-established orbit in an RSO catalogue, while for the former a track is a set of observations. They are only analogous if the set of observations contained in the track is enough to reliably estimate an orbit. In the tracking community, the three correlation problems stated above are known as: (1) observation-to-observation association (OTOA), (2) observation-to-track association (OTTA) and (3) track-to-track association (TTTA). This paper tackles the first problem, and for that the terminology from the SST community will be used. This terminology may also differ from that of the near-Earth object (NEO) community, but the concepts and methodologies remain valid. Besides, the term RSO detection is used throughout this paper in the context of SST, i.e., referring to the initialisation of a new RSO in the catalogue, which includes not only the generation of observations but also a sufficiently accurate orbit estimation for cataloguing purposes.

Track association is an NP-hard (non-deterministic polynomial-time hard) combinatorial optimisation problem [11], i.e., the computational cost increases exponentially with the number of RSOs. Besides, it is an active area of research in data fusion, with different strategies to tackle the problem. In the absence of automated track association, manual operations are required and an important fraction of survey sensing data is missed out [12].

Multi-target tracking (MTT) methods, traditionally applied to sensing, guidance, navigation and air traffic control, among others [13], have also been used to tackle the track association and correlation problems. Joint probabilistic data association (JPDA) [14], multiple hypothesis tracking (MHT) [10] and random finite sets (RFS) [15] are very promising frameworks for the build-up and maintenance of catalogues of RSOs, although they are still under research and often computationally unaffordable.

In the MHT framework, associations of tracks, or hypotheses, are generated, evaluated, pruned and promoted in such a way that the involved observations have a high likelihood of belonging to a common RSO. The choice of the figure of merit drives the whole association process, since it represents the criterion used to evaluate the likelihood of an association of tracks belonging to the same object and, thus, the decision-making process. Usual approaches [3, 4] rely on projecting orbital differences into the covariance space, i.e., using the Mahalanobis distance [16] in the orbit domain:

$$\begin{aligned} \left( \hat{{\varvec{x}}}_1 - \hat{{\varvec{x}}}_2 \right) ^{\text {T}} \left( {\varvec{P}}_1 + {\varvec{P}}_2 \right) ^{-1} \left( \hat{{\varvec{x}}}_1 - \hat{{\varvec{x}}}_2 \right) , \end{aligned}$$
(1)

where \(\hat{{\varvec{x}}}_1\) and \(\hat{{\varvec{x}}}_2\) are the state vectors estimated from the available observations to be correlated and \({\varvec{P}}_1\) and \({\varvec{P}}_2\) their associated covariance matrices. The Mahalanobis distance reliably represents distances in the uncertainty space, but only if uncertainty realism holds, i.e., the covariance properly characterises the state uncertainty, always under the assumption of a Gaussian probability density function [17]. Accordingly, there are two main drawbacks in these approaches: (1) the orbit estimated from one or two tracks is not accurate enough to obtain a sufficiently reliable state and covariance [18, 19] for correlation/association, and (2) this figure of merit favours the association of tracks with unrealistically large covariance. The latter situation is related to the so-called dilution of probability, depicted in Fig. 1. In this case, using the Mahalanobis distance could lead to the association of \(\hat{{\varvec{x}}}_0\) with \(\hat{{\varvec{x}}}_2\) instead of \(\hat{{\varvec{x}}}_1\), due to the greater covariance volume of the former, \(\hat{{\varvec{x}}}_2\), compared to the latter, \(\hat{{\varvec{x}}}_1\). This could be mitigated by introducing an additional penalty term (related to the determinant of the covariance matrix, known as differential entropy [20]) in Eq. 1 to avoid favouring cases with large covariances. These concerns also apply to similar approaches, such as the intersection of two families of confidence ellipsoids [21], the Bhattacharyya distance [22] or the Kullback–Leibler (KL) divergence [23], which have also been applied to track association [24]. In contrast, [25] proposes the use of the radial velocity, i.e., association in the measurement space, for double fence radar systems.
Besides, admissible regions [26] are frequently used in track-to-track association with optical observations to constrain the range and range-rate search domain [6, 27,28,29].
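The dilution-of-probability effect discussed above can be reproduced numerically. The following sketch (a one-dimensional toy case with made-up numbers, not data from the paper) evaluates Eq. 1 for two candidate associations and shows how an inflated covariance lowers the Mahalanobis distance:

```python
# Illustrative sketch (not the paper's implementation): the Mahalanobis
# distance of Eq. 1 in one dimension, exhibiting "dilution of probability".
def mahalanobis_sq(x1, x2, p1, p2):
    # (x1 - x2)^T (P1 + P2)^-1 (x1 - x2), scalar case
    return (x1 - x2) ** 2 / (p1 + p2)

x0, p0 = 0.0, 1.0    # estimate to be correlated
x1, p1 = 2.0, 1.0    # close candidate with a tight covariance
x2, p2 = 6.0, 25.0   # farther candidate with an inflated covariance

d01 = mahalanobis_sq(x0, x1, p0, p1)   # 4 / 2   = 2.0
d02 = mahalanobis_sq(x0, x2, p0, p2)   # 36 / 26 ≈ 1.38
assert d02 < d01  # the farther but "fuzzier" candidate wins the association
```

A covariance-volume penalty such as the differential-entropy term mentioned above would counteract exactly this effect.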

Fig. 1 Limiting association case affected by dilution of probability

Although there is extensive literature on track association theory, especially from the tracking community, only a few works describe the details and provide results in operational scenarios from the SST perspective. Regarding radar observations, Singh et al. [30] present a simulated catalogue maintenance of low earth orbit (LEO) objects during a single day with both radar and optical observations. [4] provides simulated results for subsets of nearly circular (\(e<0.01\)) LEO objects observed by a radar during a single night. Siminski [31] analyses the association of pairs during 2 weeks of simulated radar observations with a single sensor and a theoretically visible population taken from the Meteoroid and Space Debris Terrestrial Environment Reference (MASTER) model. The situation is similar on the optical side, e.g., Gadaleta et al. [24] provide simulation results concerning the effectiveness of pair gating for optical tracks from 11 objects, and Yanez et al. [5] consider 3 geostationary earth orbit (GEO) objects during 9 days with real optical observations. In these papers, the different strategies are applied to particular scenarios, i.e., the focus is not on an operational catalogue build-up environment, and reduced problems are tackled instead of a complete catalogue build-up from scratch. Besides, most do not tackle the complete association procedure from observation reception to the initialisation of new RSOs in the catalogue. Accordingly, this paper intends to fill these gaps by: (1) presenting a novel track-to-track association methodology that uses the observation residuals to score hypotheses of RSO detections and includes several filtering and complexity reduction techniques to enable efficient and robust performance in operational scenarios; and (2) providing results in a representative and massive operational survey radar catalogue build-up scenario, using clear association performance metrics.

This paper is organised as follows. In Sect. 2, we present the proposed association methodology, justifying the overall strategy and describing each of the steps involved in detail. In Sect. 3, we provide and discuss the results obtained in a simulated scenario representative of an operational case to evaluate the performance of the proposed methodology. Finally, in Sect. 4, we summarise and discuss the conclusions of this paper.

2 Methodology

The proposed track association methodology generates hypotheses of two, three and four tracks by sequentially applying the different association steps, or filters, depicted in Fig. 2, for each new track. This avoids a brute-force approach, i.e., analysing every possible combination of tracks. Accordingly, a much smaller set of hypotheses is analysed and maintained until there is enough evidence of their validity or they are proven false, at which point a hard decision is made.

Fig. 2 Steps of the proposed methodology

In the methodology, tracks are associated sequentially on a first-come, first-served basis. When a new track arrives at the system, the first step is the generation of new hypotheses, involving not only this track but also previous hypotheses, i.e., associations with previous tracks. Therefore, depending on whether the hypotheses were generated during the association of the current or previous tracks, they are classified into one of the following subsets:

Previous hypotheses (\({\mathcal {D}}\)): Subset of hypotheses generated during the association of previous tracks. They represent the hypotheses tree history of the association problem.

New hypotheses (\({\mathcal {N}}\)): Subset of hypotheses that have been generated during the association of the current track.

The second step is the scoring, in which the likelihood that all tracks of each association belong to the same object is assessed. The third and fourth steps are the pruning and promotion, in which hypotheses may be discarded or promoted, respectively. Accordingly, depending on whether the hypotheses have been promoted or not, they are included in one of the following subsets:

Promoted hypotheses (\({\mathcal {P}}\)): Subset of hypotheses that have been promoted, i.e., their associated tracks are expected to belong to the same RSO. A new potential object is generated from each promoted hypothesis.

Hypotheses under analysis (\({\mathcal {U}}\)): Subset of hypotheses that have not been yet promoted, but still could be promoted in the future or lead to the generation of additional hypotheses.

Superscript will be used to denote the number of tracks associated, e.g., \({\mathcal {N}}^i \cap {\mathcal {U}}^i\) is the subset of new hypotheses under analysis involving i tracks. Note that the set of all existing hypotheses is \({\mathcal {D}} \cup {\mathcal {N}} = {\mathcal {P}} \cup {\mathcal {U}}\).

Finally, there is an optional step, merging, intended to avoid the promotion of duplicated objects. The hypotheses tree is stored for the next track to arrive and the sequence of steps is repeated. By doing so, the remaining uncorrelated tracks and non-promoted hypotheses can be considered in the future, since past uncorrelated tracks may be associated with future tracks. Besides, this approach also retains previous hypotheses that may lead to the generation of new hypotheses with subsequent tracks.
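The subset bookkeeping above can be sketched with plain set operations. In this minimal illustration (the representation of hypotheses as frozensets of track identifiers is our choice, not the paper's), the stated invariant \({\mathcal {D}} \cup {\mathcal {N}} = {\mathcal {P}} \cup {\mathcal {U}}\) is checked directly:

```python
# Minimal bookkeeping sketch: hypotheses as frozensets of track identifiers,
# partitioned into the four subsets defined in the text.
previous = {frozenset({1, 2}), frozenset({1, 3})}   # D: from earlier tracks
new = {frozenset({1, 2, 3})}                        # N: built for current track
promoted = set()                                    # P: nothing promoted yet
under_analysis = previous | new                     # U: everything still open

# Invariant from the text: D ∪ N = P ∪ U
assert previous | new == promoted | under_analysis

# Superscript notation: N^i ∩ U^i, i.e., new hypotheses under analysis
# involving i tracks.
i = 3
n_i_u_i = {h for h in new & under_analysis if len(h) == i}
assert n_i_u_i == {frozenset({1, 2, 3})}
```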

Each association step depends on the available information of each hypothesis, i.e., the number of associated tracks. For instance, only hypotheses with four tracks are allowed to be promoted (this is discussed in Sect. 2.2) and the pruning process becomes more restrictive with the number of associated tracks. Since there is no previous knowledge on the object to which the tracks belong, there is a huge number of possible combinations. The rationale behind including different gating processes and complexity reduction techniques on the association steps is to limit the growth of the hypotheses tree, as well as to trim it as much as possible, to keep the association problem affordable.

A preliminary step in the processing of associations of tracks is observation compression, consisting of the transformation of a track into a single observation, known as an attributable [2, 26], at the middle epoch of the track. The benefits of observation compression are threefold: mitigating the effect of measurement noise, reducing the number of measurements and estimating measurement rates. It is a basic pre-processing technique used in data association and correlation problems [14]. This compression can be achieved by a least-squares or low-degree polynomial fit to the observations of the track. The rate of change of a measurement is obtained by simply differentiating the interpolating polynomial. If the track is long enough, it might even be possible to extract more than one meaningful fitted observation from a single track. Observation compression is required for Initial Orbit Determination (IOD) (details in Sect. 2.2) and to obtain the range-rate if the radar sensor does not provide it. A simple polynomial fit of degree three has been used, but further considerations could be applied to the observation compression [32].
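The compression described above can be sketched as follows: a degree-3 least-squares polynomial fit to one measurement channel of a track, evaluated (value and rate) at the middle epoch. The pure-Python normal-equation solver and the sample signal are ours, for illustration only; a numerical library would normally be used:

```python
# Sketch of observation compression: cubic least-squares fit to one
# measurement channel, then value and rate at the track's middle epoch.
def polyfit3(ts, zs):
    # Build and solve the 4x4 normal equations A^T A c = A^T z for a cubic.
    a = [[t ** k for k in range(4)] for t in ts]
    ata = [[sum(a[r][i] * a[r][j] for r in range(len(ts))) for j in range(4)]
           for i in range(4)]
    atz = [sum(a[r][i] * zs[r] for r in range(len(ts))) for i in range(4)]
    m = [row + [b] for row, b in zip(ata, atz)]
    for col in range(4):                      # Gaussian elimination, pivoting
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 4):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    c = [0.0] * 4
    for r in range(3, -1, -1):                # back substitution
        c[r] = (m[r][4] - sum(m[r][j] * c[j] for j in range(r + 1, 4))) / m[r][r]
    return c

def compress(ts, zs):
    c = polyfit3(ts, zs)
    tm = 0.5 * (ts[0] + ts[-1])               # middle epoch of the track
    value = sum(c[k] * tm ** k for k in range(4))
    rate = sum(k * c[k] * tm ** (k - 1) for k in range(1, 4))  # derivative
    return tm, value, rate

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
zs = [2.0 + 0.5 * t + 0.1 * t ** 2 for t in ts]  # noiseless sample signal
tm, z_mid, zdot_mid = compress(ts, zs)
assert abs(z_mid - 3.4) < 1e-9    # 2 + 0.5*2 + 0.1*4 at t = 2
assert abs(zdot_mid - 0.9) < 1e-9 # 0.5 + 0.2*2 at t = 2
```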

Complexity reduction techniques and multiple filtering steps are required because evaluating all possible combinations, i.e., brute-force approach, is not an option. Therefore, simple and fast methods are applied first, leaving the more accurate and computationally expensive methods and dynamical models for the last stages, when most of the false hypotheses have been already filtered out. The different association steps are now introduced.

2.1 Generation

The generation step is in charge of creating new hypotheses by combining two hypotheses under analysis. Therefore, from two hypotheses of N tracks, \({\mathcal {H}}_A\) and \({\mathcal {H}}_B\), also known as parent hypotheses, a new one, \({\mathcal {H}}_A \cup {\mathcal {H}}_B\), of \(N+1\) tracks is generated, i.e.:

$$\begin{aligned} \begin{aligned} &{\mathcal {H}}_A \cup {\mathcal {H}}_B= \left\{ \bigcup \limits _{k=1}^{N-1} {\mathcal {T}}_{k} \right\} \cup \left\{ {\mathcal {T}}_{A,N} \right\} \cup \left\{ {\mathcal {T}}_{B,N} \right\} \\& {\mathcal {T}}_{k}= {\mathcal {T}}_{A,k} = {\mathcal {T}}_{B,k} \qquad \forall k = 1,\, \ldots ,\, N-1, \end{aligned} \end{aligned}$$
(2)

where \({\mathcal {T}}_{\alpha ,k}\) is the k-th associated track of \({\mathcal {H}}_\alpha\). Note that, according to the condition imposed above on the number of tracks of the new hypothesis, \({\mathcal {H}}_A\) and \({\mathcal {H}}_B\) must have all but one track (the N-th) in common. Figure 3 depicts the association tree generated until a hypothesis of four tracks is obtained.
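The merge condition of Eq. 2 can be expressed compactly when hypotheses are represented as sets of track identifiers (our representation, for illustration): two parents of N tracks may only merge if they share all but one track, yielding a child of \(N+1\) tracks:

```python
# Sketch of the Eq. 2 merge condition on frozensets of track identifiers.
def can_merge(ha, hb):
    # Parents must have equal size and share all but one track.
    return len(ha) == len(hb) and len(ha & hb) == len(ha) - 1

ha = frozenset({"T1", "T2", "TA"})
hb = frozenset({"T1", "T2", "TB"})
hc = frozenset({"T3", "T4", "TB"})

assert can_merge(ha, hb)
child = ha | hb                    # new hypothesis H_A ∪ H_B
assert len(child) == len(ha) + 1   # N + 1 = 4 tracks
assert not can_merge(ha, hc)       # only one track in common: rejected
```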

Fig. 3 Sample generation step

During the generation of new associations, it is important to retain the traceability of the parent hypotheses to be able to reconstruct the hypotheses tree. Only by using this association tree is it possible to identify and discard false hypotheses in future association steps and stages. This traceability is denoted by \({\mathcal {F}}\), where \({\mathcal {F}} \left( {\mathcal {H}}_A \cup {\mathcal {H}}_B \right) = \left\{ {\mathcal {H}}_A , {\mathcal {H}}_B \right\}\).

Note that not all possible track combinations are considered feasible hypotheses, since that would lead to a computationally unaffordable growth of the hypotheses tree. Instead, not every possible combination of \(N>1\) tracks is generated, but only those coming from combinations of two hypotheses meeting the following gating criterion:

$$\begin{aligned} g_G \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right) = \prod _{i=1}^{3} g_{G,i} \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right) , \end{aligned}$$
(3)

where \(g_{G,i}\) are the generation gating functions, given by:

$$\begin{aligned} g_{G,1} \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right)= & {} {\left\{ \begin{array}{ll} 1 &{} \text {if } t_{j+1}^{-} - t_{j}^{+} > \Delta {t}_{{\text {min}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\\ &{} \quad \forall j=1,\,\ldots ,\,\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert - 1\\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$
(4)
$$\begin{aligned} g_{G,2} \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right)= & {} {\left\{ \begin{array}{ll} 1 &{} \text {if } t_{j+1}^{-} - t_{j}^{+} < \Delta {t}_{{\text {max}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\\ &{} \quad \forall j=1,\,\ldots ,\,\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert - 1\\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$
(5)
$$\begin{aligned} g_{G,3} \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right)= & {} {\left\{ \begin{array}{ll} 1 &{} \text {if } \Vert \hat{{\varvec{x}}}_{B} - \hat{{\varvec{x}}}_{A} \Vert < \Delta {\hat{{\varvec{x}}}}_{{\text {max}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$
(6)

where \(t_{j}^{-}\) and \(t_{j}^{+}\) are the epochs of the first and last observations, respectively, of the j-th associated track of the potential new hypothesis \({\mathcal {H}}_A \cup {\mathcal {H}}_B\), and \(\hat{{\varvec{x}}}_{\alpha }\) is the state estimated from the hypothesis \({\mathcal {H}}_\alpha\) by means of the orbit determination process detailed in Sect. 2.2. Note that to generate a new hypothesis \({\mathcal {H}}_A \cup {\mathcal {H}}_B\), it is required that \(g_{G} \left( {\mathcal {H}}_A , {\mathcal {H}}_B \right) =1\). The conditions involved in Eq. 3 are:

  • Lower bound time span threshold (Eq. 4): the time span between consecutive associated tracks must be greater than a certain fraction of the average orbital period, \(\Delta {t}_{{\text {min}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\). The rationale behind this criterion is to avoid associating tracks that are not sufficiently spaced in time, since this is an undesirable situation in terms of orbit observability. This criterion becomes less relevant as more tracks are associated.

  • Upper bound time span threshold (Eq. 5): the time span between consecutive associated tracks must be lower than a certain number of days, \(\Delta {t}_{{\text {max}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\). This is required to prevent the dynamical model error, which grows over time, from exceeding an acceptable level that may harm the association procedure.

  • Estimated state difference threshold (Eq. 6): the difference between the estimated states is compared against \(\Delta {\hat{{\varvec{x}}}}_{\text {max}}\) to avoid combining two associations that clearly belong to two very different orbits. The difference can be evaluated in terms of semi-major axis, \(\Delta {{\hat{a}}}= |{\hat{a}}_A - {\hat{a}}_B |\), eccentricity, \(\Delta {{\hat{e}}}= |{\hat{e}}_A - {\hat{e}}_B |\), and the absolute value of the angle between the two angular momentum vectors, \(\Delta {{\hat{\theta }}}\). Since the estimation of the orbit may not be accurate enough, particularly for associations of one or two tracks, sufficiently high threshold values should be considered.

Note that the three threshold values, \(\Delta {t}_{{\text {max}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\), \(\Delta {t}_{{\text {min}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\) and \(\Delta {\hat{{\varvec{x}}}}_{{\text {max}},\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert }\), may depend on the number of tracks involved, \(\vert {{\mathcal {H}}_A\cup {\mathcal {H}}_B}\vert\), since the orbit observability tends to increase, and the estimated state uncertainty to decrease, with the number of tracks. Suggested values are presented in Sect. 3.
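The three gates can be sketched as follows. The data layout and all threshold values are ours, for illustration only; in the methodology they would depend on the number of tracks, as noted above:

```python
# Hedged sketch of the gate of Eq. 3: a merge of two parent hypotheses is
# allowed only if all three gating functions of Eqs. 4-6 evaluate to 1.
def g_generation(track_spans, state_diff, dt_min, dt_max, dx_max):
    # track_spans: time-ordered (t_first, t_last) per track of H_A ∪ H_B
    gaps = [track_spans[j + 1][0] - track_spans[j][1]
            for j in range(len(track_spans) - 1)]
    g1 = int(all(g > dt_min for g in gaps))         # Eq. 4: enough spacing
    g2 = int(all(g < dt_max for g in gaps))         # Eq. 5: model error bounded
    da, de, dtheta = state_diff
    ta, te, tth = dx_max
    g3 = int(da < ta and de < te and dtheta < tth)  # Eq. 6: compatible orbits
    return g1 * g2 * g3                             # Eq. 3: product of gates

spans = [(0.0, 0.1), (5.0, 5.1), (23.0, 23.1)]      # hours since epoch, say
diff = (20.0, 0.005, 0.5)                           # Δa [km], Δe [-], Δθ [deg]
assert g_generation(spans, diff, dt_min=1.0, dt_max=48.0,
                    dx_max=(50.0, 0.02, 2.0)) == 1
assert g_generation(spans, diff, dt_min=1.0, dt_max=10.0,
                    dx_max=(50.0, 0.02, 2.0)) == 0  # second gap too long
```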

The generation step, summarised in Algorithm 1 below, creates a set of new hypotheses, \({\mathcal {N}}\), but does not perform any further operation. The loop in line 5 can be easily parallelised, since each potential hypothesis can be independently considered and generated if the criteria are met. Besides, note that the condition in line 7 ensures that the new hypothesis has not been previously generated. This may happen in situations in which several hypotheses have all but one track in common, e.g., the following three hypotheses: \({\mathcal {H}}_A = \left\{ {\mathcal {T}}_1 , {\mathcal {T}}_2 \right\}\), \({\mathcal {H}}_B = \left\{ {\mathcal {T}}_1 , {\mathcal {T}}_3 \right\}\) and \({\mathcal {H}}_C = \left\{ {\mathcal {T}}_2 , {\mathcal {T}}_3 \right\}\). If not handled with care, up to three hypotheses containing the same tracks could be generated (\({\mathcal {H}}_A \cup {\mathcal {H}}_B\), \({\mathcal {H}}_A \cup {\mathcal {H}}_C\) and \({\mathcal {H}}_B \cup {\mathcal {H}}_C\)). To avoid this, a unique hypothesis, \({\mathcal {H}}_\star\), is generated while maintaining traceability, i.e., \({\mathcal {F}} \left( {\mathcal {H}}_\star \right) =\left\{ {\mathcal {H}}_A , {\mathcal {H}}_B , {\mathcal {H}}_C \right\}\).

Algorithm 1
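The duplicate handling discussed above can be sketched with a parent map playing the role of the traceability function \({\mathcal {F}}\) (the gating of Eq. 3 is omitted here for brevity; names and layout are ours):

```python
# Illustrative sketch of duplicate handling during generation: when several
# parent pairs would produce the same child, a single hypothesis H* is kept
# and the parent map F accumulates all of its parents.
from itertools import combinations

under_analysis = [frozenset({"T1", "T2"}), frozenset({"T1", "T3"}),
                  frozenset({"T2", "T3"})]
parents = {}                                  # traceability map F

for ha, hb in combinations(under_analysis, 2):
    if len(ha & hb) != len(ha) - 1:
        continue                              # parents must share all but one track
    child = ha | hb                           # candidate hypothesis of N + 1 tracks
    parents.setdefault(child, set()).update({ha, hb})

# All three pairs produce {T1, T2, T3}: one hypothesis, three parents.
assert set(parents) == {frozenset({"T1", "T2", "T3"})}
assert len(parents[frozenset({"T1", "T2", "T3"})]) == 3
```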

2.2 Scoring

The scoring step assigns a figure of merit to each hypothesis, representing the likelihood of the association of tracks. To do so, an orbit determination (OD) method is used to estimate the orbit using the observations from the associated tracks. Apart from providing the estimation itself, i.e., the state and covariance \(\left\{ \hat{{\varvec{x}}} , {\varvec{P}}_{x} \right\}\) of the associated data, it provides the difference between the predicted (computed) and actual (observed) measurements, i.e., the residuals. These represent the goodness of fit of the dynamical model to the associated observation data.

The proposed figure of merit consists of the difference between the actual observations, \({\varvec{z}}\), and the a-posteriori computed observations, \(\hat{{\varvec{z}}}\), projected onto the a-priori measurement covariance \({\varvec{P}}_{z}^0\), i.e.:

$$\begin{aligned} d^2 \left( {\mathcal {H}} \right) = \frac{1}{\vert {{\mathcal {H}}}\vert } \sum _{{\mathcal {T}}\in {\mathcal {H}}} \frac{1}{\vert {{\mathcal {T}}}\vert } \sum _{{\varvec{z}}\in {\mathcal {T}}} \left( {\varvec{z}} - \hat{{\varvec{z}}} \right) ^{\text {T}} \left( {\varvec{P}}_{z}^0 \right) ^{-1} \left( {\varvec{z}} - \hat{{\varvec{z}}} \right) , \end{aligned}$$
(7)

where \({\varvec{P}}_{z}^0\) is the a-priori covariance of the measurements, a diagonal matrix containing the squared sigma of the expected noise of each measurement of the corresponding observation, assumed to be zero-mean Gaussian. Note that this figure of merit is a reduced chi-squared statistic (when the number of observations is much greater than the number of estimated parameters) and can also be seen as a Mahalanobis distance, but evaluated in the measurement space rather than in the orbit space, and projected onto the a-priori covariance space.
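Since \({\varvec{P}}_{z}^0\) is diagonal, Eq. 7 reduces to a sigma-normalised sum of squared residuals, averaged first over the observations of each track and then over the tracks. A minimal sketch (the data layout and sigma values are ours, for illustration):

```python
# Sketch of Eq. 7: each hypothesis is a list of tracks; each track a list of
# (z, z_hat) observation pairs; sigmas holds the a-priori noise per channel.
def score(hypothesis, sigmas):
    total = 0.0
    for track in hypothesis:
        s = 0.0
        for z, z_hat in track:                       # one observation
            s += sum(((zi - zhi) / sig) ** 2
                     for zi, zhi, sig in zip(z, z_hat, sigmas))
        total += s / len(track)                      # average over observations
    return total / len(hypothesis)                   # average over tracks

sigmas = (0.01, 0.01, 10.0)       # e.g., az [deg], el [deg], range [m], say
track = [((0.02, 0.0, 5.0), (0.0, 0.0, 0.0)),
         ((0.0, 0.01, 0.0), (0.0, 0.0, 0.0))]
d2 = score([track], sigmas)
# first obs: (0.02/0.01)^2 + (5/10)^2 = 4.25; second: 1.0; mean = 2.625
assert abs(d2 - 2.625) < 1e-12
```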

This figure of merit corresponds to the weighted Root Mean Square (RMS), the loss function of a batch least-squares OD. It takes into account every observation of each associated track and allows each track to contribute equally. In other words, the contribution of each track to the information matrix is clear [33], and so are the covariance and residual contributions when performing OD on the associated tracks. This is not the case for sequential estimators, which require additional smoothing processes to achieve similar results [14]. Besides, sequential estimation using large sets of observations may result in overly optimistic (close to zero) covariance matrices, leading to insensitivity to upcoming observations [34]. Accordingly, we propose the use of a non-linear batch least-squares estimator to perform the OD. Moreover, we also suggest the use of the following features to improve the OD performance:

  • A relaxation factor, applied to the computed estimation correction, may improve the convergence rate, and its use is desirable when far from the linear regime. The goal is to find a single scaling factor that, applied to the correction computed during the least-squares iteration, leads to a minimum value of the loss function.

  • The Levenberg–Marquardt algorithm [35], to solve the non-linear system, since it improves the radius of convergence with respect to the classical Gauss–Newton solver during the least-squares process [36]. This is particularly relevant for evaluating hypotheses with a low number of tracks, since the initial guess is usually far from the solution. The Levenberg–Marquardt solver can be used during the first iterations, until a certain loss function (weighted RMS) value is reached, switching then to the classical Gauss–Newton solver, which is less computationally demanding.
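The role of the damping term can be illustrated on a toy one-parameter problem (fitting \(z = e^{at}\); the problem, the damping schedule and all values are ours, not the paper's OD): the damping \(\lambda\) interpolates between a Gauss–Newton step (\(\lambda \rightarrow 0\)) and a short gradient-like step, which is what enlarges the radius of convergence from a poor initial guess:

```python
# Minimal Levenberg-Marquardt sketch on a one-parameter non-linear fit.
import math

ts = [0.0, 0.5, 1.0, 1.5]
zs = [math.exp(0.8 * t) for t in ts]          # truth: a = 0.8

def loss(a):
    return sum((z - math.exp(a * t)) ** 2 for t, z in zip(ts, zs))

a, lam = -2.0, 1.0                            # poor initial guess, high damping
for _ in range(100):
    r = [z - math.exp(a * t) for t, z in zip(ts, zs)]   # residuals
    j = [t * math.exp(a * t) for t in ts]               # Jacobian d(model)/da
    jtj = sum(ji * ji for ji in j)
    jtr = sum(ji * ri for ji, ri in zip(j, r))
    step = jtr / (jtj + lam * jtj)            # damped normal equation
    if loss(a + step) < loss(a):
        a, lam = a + step, lam * 0.5          # accept: drift toward Gauss-Newton
    else:
        lam *= 10.0                           # reject: increase damping

assert abs(a - 0.8) < 1e-6                    # converged despite the bad guess
```

A plain Gauss–Newton step from the same initial guess badly overshoots on this problem, which is the behaviour the damping is meant to suppress.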

The choice of the dynamical model for the OD should consider two important aspects of the track association problem: (1) sensing data starvation: a small amount of sensing data, i.e., associated tracks, limits the attainable OD accuracy; and (2) hypotheses filtering: as the association evolves, there are more hypotheses involving a low number of tracks than a higher number of tracks, thanks to the filtering process. Therefore, a trade-off between computational burden and dynamics accuracy is required. On the one hand, a two-body motion model, although computationally cheap, would not allow the association of tracks separated by several days for objects in LEO, where the drag perturbation is not negligible, nor in GEO, due to Earth's oblateness and third-body perturbations [33]. On the other hand, high-fidelity numerical models used for the orbit maintenance of object catalogues, such as the Special Perturbations (SP) orbit propagation model [37], would allow an accurate characterisation of the dynamics, although they increase the computational burden, since a numerical integration of the equations of motion is required.

To solve this trade-off, we propose the use of analytical or semi-analytical propagators, as they are a very suitable choice for rapid pre-processing assessments, such as the scoring of associations with a low number of tracks. It is relevant to mention that additional dynamical parameters, such as drag or solar radiation pressure coefficients, may soak up force model errors during batch estimation. However, this should not pose a problem when performing the estimation with a low number of tracks, since the role of the OD is the evaluation of the figure of merit.

Regarding the initial solution required for the OD, we propose the use of the State Vector Fitting method presented in [38] for hypotheses of a single track, although any other IOD method could be used. This IOD method uses attributables, so the track observations have to be compressed first. Note that in the radar case, the attributable provides a full description of the state (position and velocity). For hypotheses of two or more tracks, we propose the use of the solutions of the parent hypotheses. If the resulting figure of merit of a hypothesis is greater than a certain value \(d_{\text {max}}\) (the pruning threshold, see Sect. 2.3), the OD is repeated using as initial solution the estimated solution from another parent hypothesis.

The scoring step, summarised in Algorithm 2, is the most demanding in terms of computational cost. However, the loop in line 2 can be parallelised, as each association can be independently evaluated.

Algorithm 2

To assess the suitability of the figure of merit, a subset of 2882 radar tracks from 321 RSOs with semi-major axes between 7140 and 7180 km (the highest RSO density region of the complete RSO population presented in Sect. 3) has been associated via brute force and then scored. Figure 4 shows the distribution of the figure of merit of true (green) and false (red) hypotheses of two (top left), three (top right) and four (bottom) tracks. To alleviate the brute-force procedure, hypotheses with \(d>10^{2}\) have been removed. Note that the number of true hypotheses of two and three tracks is lower than the number of true hypotheses of four tracks. Besides, the figure of merit associated with most false hypotheses of four tracks is higher than \(10^{2}\) and, therefore, there is a clear separation between true and false hypotheses. This conclusion is aligned with the requirement from [18] of associating three or four tracks so that a meaningful state can be estimated before adding a new object to the catalogue. Besides, it is clear that the proposed figure of merit can be used to distinguish between true and false hypotheses.

Fig. 4 Distribution of the figure of merit for true hypotheses (green) and false hypotheses (red), for hypotheses of two (top left), three (top right) and four (bottom) tracks, in a simulated brute-force association scenario

2.3 Pruning

The pruning step aims at discarding those hypotheses that are clearly false, to avoid considering them in subsequent steps. Otherwise, these hypotheses would spawn more false hypotheses and overpopulate the hypotheses tree, increasing the computational cost. It consists in a simple gating criterion:

$$\begin{aligned} g_R \left( {\mathcal {H}} \right) = {\left\{ \begin{array}{ll} 1 &{}\text {if } d \left( {\mathcal {H}} \right) > d_{\text {max}} \\ 0 &{}\text {otherwise} \end{array}\right. }, \end{aligned}$$
(8)

where \(d_{\text {max}}\) is the pruning threshold for the figure of merit, intended to discard most of the false hypotheses but no true hypothesis. Therefore, a high enough value, aligned with Fig. 4, is recommended. Note that to prune a hypothesis \({\mathcal {H}}\), it is required that \(g_{R} \left( {\mathcal {H}} \right) =1\).
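Equation (8) translates directly into code. A minimal sketch, with hypothetical function names:

```python
def prune_gate(d, d_max):
    """g_R of Eq. (8): True (1) if the hypothesis is to be pruned."""
    return d > d_max

def prune(hypotheses, d_max):
    """Drop every hypothesis flagged by the gate.

    `hypotheses` maps each hypothesis to its figure of merit d.
    """
    return {h: d for h, d in hypotheses.items() if not prune_gate(d, d_max)}
```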

The pruning step, summarised in Algorithm 3, is the least demanding in terms of computational cost, but crucial to keep the dimension of the association problem within practical limits.

Algorithm 3

2.4 Promotion

The promotion step is in charge of identifying those hypotheses with enough information (i.e., number of associated tracks) and whose figure of merit is such that the associated tracks are expected to belong to the same object. This step performs a re-evaluation of the hypotheses with a high fidelity numerical model, enabling (1) the refinement of the figure of merit and final confirmation of the hypothesis validity and (2) the initialisation of the orbit of the newly detected object in the catalogue.

During the scoring step, the figure of merit is evaluated using the estimation from an OD. As described in Sect. 2.2, the choice of a semi-analytical propagator implies a limitation on the dynamics accuracy to reduce the computational burden. The use of the semi-analytical propagator is justified by the large number of hypotheses expected to pass through the filter and the limitation on the estimation accuracy given the relatively low amount of observations. However, only a few hypotheses of \(N_{\text {P}}\) tracks are expected to become potential candidates for promotion, compared to the total number of hypotheses generated, and thus the increase in the computational burden due to the use of a numerical propagator in the promotion step is acceptable. The improvement in the dynamical model is aimed at increasing the gap between true and false hypotheses, i.e., lowering the figure of merit of hypotheses involving tracks from the same object while increasing that of those involving tracks from different objects. This is particularly relevant for the OD of hypotheses involving many tracks, where the lower accuracy of the semi-analytical propagator could be insufficient to distinguish two close objects. Accordingly, the use of a high fidelity numerical model is proposed only for the promotion, in favour of accuracy. The resulting estimation (orbit and covariance) is ready to be added to the catalogue. Therefore, three steps follow: selection of candidates, re-scoring with a high fidelity numerical propagator and promotion.

First, candidate hypotheses are selected, by means of the following figure of merit gating criterion:

$$\begin{aligned} g_{\text {PC}} \left( {\mathcal {H}} \right) = {\left\{ \begin{array}{ll} 1 &{}\text {if } d \left( {\mathcal {H}} \right)< d_{\text {max,PC}} \text { and } \vert {\mathcal {H}} \vert \geq N_{\text {P}}\\ 0 &{}\text {otherwise} \end{array}\right. }, \end{aligned}$$
(9)

where \(d_{\text {max,PC}}\) is the promotion candidates threshold for the figure of merit, used to retrieve candidates to be promoted. Note that to consider a hypothesis \({\mathcal {H}}\) as a candidate for promotion, it is required that \(g_{\text {PC}} \left( {\mathcal {H}} \right) =1\). Second, the hypotheses undergo a re-scoring process using a high fidelity numerical propagator and the candidates are filtered through a final figure of merit gating criterion:

$$\begin{aligned} g_{\text {P}} \left( {\mathcal {H}} \right) = {\left\{ \begin{array}{ll} 1 &{}\text {if } d \left( {\mathcal {H}} \right)< d_{\text {max,P}} \text { and } \vert {\mathcal {H}} \vert \geq N_{\text {P}}\\ 0 &{}\text {otherwise} \end{array}\right. }, \end{aligned}$$
(10)

where \(d_{\text {max,P}}\) is the promotion threshold for the figure of merit, used to promote candidates. Note that to promote a hypothesis \({\mathcal {H}}\), it is required that \(g_{\text {P}} \left( {\mathcal {H}} \right) =1\).
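The two gates of Eqs. (9) and (10) share the same structure and transcribe directly into code. An illustrative sketch, with hypothetical function names:

```python
def candidate_gate(d, n_tracks, d_max_pc, n_p):
    """g_PC of Eq. (9): select candidates for promotion."""
    return d < d_max_pc and n_tracks >= n_p

def promotion_gate(d, n_tracks, d_max_p, n_p):
    """g_P of Eq. (10): confirm promotion after high fidelity re-scoring."""
    return d < d_max_p and n_tracks >= n_p
```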

Third, the remaining candidates are sorted in order of increasing figure of merit. Then, the best hypothesis, in terms of figure of merit, is promoted. Those hypotheses with at least one track in common with the recently promoted one, i.e., incompatible hypotheses, are invalidated, since a single track is not allowed to belong to two different RSOs. This simple approach continues until all the candidates have been promoted or invalidated. Figure 5 depicts a sample promotion step, where \({\mathcal {H}}_A\) is promoted first, thus leading to the invalidation of any hypothesis containing tracks \({\mathcal {T}}_1\), \({\mathcal {T}}_2\), \({\mathcal {T}}_5\) or \({\mathcal {T}}_9\) (grayed out). Second, hypothesis \({\mathcal {H}}_B\) is promoted.
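The greedy promotion loop described above can be sketched as follows. This is a simplified illustration, not the authors' implementation; each candidate is represented as a figure-of-merit/track-set pair:

```python
def promote(candidates):
    """Greedy NN-style promotion.

    `candidates` is a list of (figure_of_merit, frozenset_of_track_ids)
    pairs. At each step the best (lowest-d) compatible hypothesis is
    promoted and every candidate sharing a track with it is invalidated.
    """
    promoted, used_tracks = [], set()
    for d, tracks in sorted(candidates, key=lambda c: c[0]):
        if tracks & used_tracks:
            continue  # incompatible: shares a track with a promoted hypothesis
        promoted.append((d, tracks))
        used_tracks |= tracks
    return promoted
```

Mirroring the example of Fig. 5, a candidate sharing a track with the best-scoring hypothesis is invalidated, while a disjoint one is promoted next.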

Fig. 5
figure 5

Sample promotion step

The promotion step, summarised in Algorithm 4, where \({\mathcal {C}}\) is used to denote the set of candidate hypotheses, may be demanding from the computational cost point of view, depending on the number of candidate hypotheses, due to the loop (line 2) involving the ODs, although it can be parallelised. It resembles a Nearest Neighbour (NN) algorithm, since a hard decision is made recursively considering only the best hypothesis. Although not optimal, it is computationally fast and simple. For particularly cluttered scenarios, such as break-up events, it could make sense to replace this strategy with a global nearest neighbour (GNN) one, in which the total figure of merit of the promoted hypotheses is minimised [39]. However, note that the number of candidate hypotheses during a promotion step is usually low and, thus, the solutions obtained with the proposed approach are not expected to differ greatly from those of a GNN approach in terms of association performance.

Algorithm 4

2.5 Merging

The merging step is optional and aimed at combining already promoted hypotheses to avoid the addition of duplicated objects to the catalogue, i.e., two different promoted hypotheses corresponding to the same object. It consists in merging two already promoted hypotheses into a new one that contains all the tracks from the involved hypotheses. The two hypotheses are then replaced by the new one only if its figure of merit, obtained after the corresponding scoring, passes a gating test similar to that of the promotion.
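The merge test can be sketched in a few lines. This is an illustrative sketch under stated assumptions: `try_merge` and `rescore` are hypothetical names, and `rescore` stands in for the OD plus figure-of-merit evaluation on the combined track set:

```python
def try_merge(tracks_1, tracks_2, rescore, d_max_p):
    """Attempt to merge two promoted hypotheses.

    Returns the merged track set if the combined hypothesis passes a
    promotion-like gate on its re-scored figure of merit, else None
    (in which case the two hypotheses are kept separate).
    """
    merged = tracks_1 | tracks_2
    d = rescore(merged)  # OD + figure of merit on the combined tracks
    return merged if d < d_max_p else None
```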

The merging step, summarised in Algorithm 5, is not mandatory since the orbits estimated during the promotion step are expected to be accurate enough to perform an orbit-to-orbit correlation process against the existing catalogue to detect any potential duplicated object.

Algorithm 5

3 Results

A representative population of 4384 catalogable RSOs has been simulated. The distribution of the semi-major axis (a), altitude of perigee (p), eccentricity (e) and inclination (i) of the orbits of the simulated RSOs is presented in Fig. 6, together with the subset of 19,628 RSOs with altitude of perigee between 205 and 10,920 km present in Space-Track’s Satellite Catalogue (SATCAT) [40]. The primary axis shows the values of the histogram bins and the secondary axis represents the cumulative distribution function (CDF) of the distribution (solid lines). For the sake of comparison, each bin displays the bin’s raw count divided by the total number of counts. The two RSO populations are similar in relative terms, although in absolute terms the simulated population is around one fourth of the SATCAT population.
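The normalisation used in Fig. 6 (bin count over total count on the primary axis, CDF on the secondary axis) can be reproduced with a few lines of NumPy. An illustrative sketch; `hist_with_cdf` is a hypothetical helper, not part of the authors' tooling:

```python
import numpy as np

def hist_with_cdf(values, bins):
    """Normalised histogram (bin count / total count) and its CDF,
    matching the primary/secondary axes described for Fig. 6."""
    counts, edges = np.histogram(values, bins=bins)
    frac = counts / counts.sum()      # relative frequency per bin
    return edges, frac, np.cumsum(frac)  # CDF ends at 1 by construction
```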

Fig. 6

Distribution of the semi-major axis (top-left), altitude of perigee (top-right), eccentricity (bottom-left) and inclination (bottom-right) of the simulated RSOs (black) and Space-Track’s SATCAT population with altitude of perigee between 205 and 10,920 (green)

The orbits of this RSO population have been propagated for 2 weeks and measurements for a single monostatic surveillance radar in mainland Europe have been simulated, including zero-mean Gaussian noise (details in Table 1). A total of 75,184 tracks are generated during the 2 weeks. The distribution of the track duration, normalised with the orbital period of the corresponding RSO, is shown in Fig. 7. The mean track duration is 0.34% of the orbital period, and most observed track lengths are shorter than 0.65%, of the order of what is usually considered a TSA or VSA [6]. Note that the track duration is related to the information content of the track, and it is always possible to obtain an initial state estimate since the state is fully observable. Besides, although the methodology is not restricted to a single sensor, the simulation only considers one sensor, as this is the most critical situation: fewer observations per RSO and potential observability problems.
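For reference, the normalisation used in Fig. 7 follows from the Keplerian orbital period, \(T = 2\pi \sqrt{a^3/\mu}\). A minimal sketch (the function names are illustrative; `MU_EARTH` is the standard Earth gravitational parameter):

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def orbital_period(a_km):
    """Keplerian orbital period in seconds from the semi-major axis."""
    return 2.0 * math.pi * math.sqrt(a_km ** 3 / MU_EARTH)

def normalised_duration_pct(track_s, a_km):
    """Track duration as a percentage of the orbital period."""
    return 100.0 * track_s / orbital_period(a_km)
```

For a semi-major axis of about 7160 km, the period is roughly 100 min, so a mean track of 0.34% of the period lasts of the order of 20 s.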

Fig. 7

Distribution of the track duration, normalised with the orbital period of the corresponding RSO

The methodology presented in Sect. 2 has been applied to this simulated scenario. The different threshold parameters are compiled in Table 2. The threshold values used for the generation (lower and upper time span bounds, \(\Delta {t}_{\text {min}}\) and \(\Delta {t}_{\text {max}}\), respectively, and the estimated state difference threshold, \(\Delta {\hat{{\varvec{x}}}}_{\text {max}}=\lbrace \Delta {{\hat{a}}}_{\text {max}}, \Delta {{\hat{e}}}_{\text {max}}, \Delta {{\hat{\theta }}}_{\text {max}} \rbrace\)), introduced in Sect. 2.1, are rough values intended to avoid associating tracks corresponding to RSOs in different orbit regimes. The pruning threshold, \(d_{\text {max}}\), presented in Sect. 2.3, was set in line with the brute-force analysis (Fig. 4, discussed in Sect. 2.2) as a conservative value to discard false hypotheses. Note that we require four tracks to promote hypotheses (\(N_{\text {P}}=4\)) and the optional merging step has not been considered. The promotion thresholds, \(d_{\text {max,PC}}\) and \(d_{\text {max,P}}\), explained in Sect. 2.4, correspond to a figure of merit value that retains more than 97% of the true hypotheses and less than 1% of the false hypotheses of the brute-force analysis. Note that none of these threshold values has been finely tuned for the simulated scenario.

Regarding the dynamical models, the propagation of the reference population has been performed with a high fidelity propagator (\(16 \times 16\) Earth gravitational field, Moon and Sun third body perturbations, cannonball model for the solar radiation pressure (SRP) and NRLMSISE-00 atmospheric density model), while the one used for the association methodology is a semi-analytical propagator considering zonal gravity field up to fourth order (secular, long- and short-periodic effects), atmospheric drag with static and exponential density decay (secular effects) and third body perturbations (secular and long-periodic effects), to avoid dynamical model matching. Furthermore, a track-to-orbit simulator has been added to avoid processing more tracks from an object once it has been already detected and catalogued. This allows emulating a typical cataloguing process by skipping tracks from objects that have been already detected (i.e., perfect success rate).

Table 1 Details of the simulated radar surveillance sensor
Table 2 Thresholds and parameters used for the association

3.1 Association performance metrics

To evaluate the performance of the association process, we have considered the following sets of hypotheses, depicted in Fig. 8:

  • True positives (TP): promoted hypotheses containing tracks that belong to the same object. They correspond to detected objects if there are no duplicated objects among the hypotheses promoted.

  • False positives (FP): promoted hypotheses containing tracks that do not belong to the same object.

  • False negatives (FN): detectable objects whose tracks are not present in any promoted hypothesis. They correspond to non-detected objects.

  • True negatives (TN): hypotheses under analysis containing tracks that do not belong to the same object. They have been neither pruned nor promoted.
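Under these definitions, counting TP, FP and FN from a set of promoted hypotheses and the simulation ground truth reduces to simple bookkeeping. An illustrative sketch with hypothetical names, not the authors' evaluation code:

```python
def classify(promoted, track_owner, detectable):
    """Count TP/FP/FN as defined above.

    `promoted`   : list of track-id sets (promoted hypotheses)
    `track_owner`: maps track id -> true object id (ground truth)
    `detectable` : set of objects with enough tracks (four)
    """
    tp, fp, detected = 0, 0, set()
    for tracks in promoted:
        owners = {track_owner[t] for t in tracks}
        if len(owners) == 1:   # all tracks from the same object: TP
            tp += 1
            detected |= owners
        else:                  # mixed-object hypothesis: FP
            fp += 1
    fn = len(detectable - detected)  # detectable but never promoted
    return tp, fp, fn
```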

Fig. 8

Sketch of the association metrics considered

Note that the concept of detectable objects refers to objects with a sufficient number of tracks (four).

Table 3 compiles the relevant association metrics after a complete track association execution of the simulated scenario. The methodology provides excellent results for the track association problem, since most of the objects are detected (98.05%) while the rate of false detections is kept very low (0.03%, a single hypothesis). Besides, the number of undetected objects is also low (1.95%). This is important during catalogue build-up, since the addition of wrong objects is highly undesirable.

Table 3 Association metrics

3.2 Hypotheses breakdown

The number of hypotheses at the end of the simulation is presented in Table 4. The number of hypotheses under analysis (3074) is lower than the number of promoted hypotheses (3877). This is possible thanks to the removal of incompatible hypotheses after a promotion and is an indication of the convergence of the association process. Besides, there are more true hypotheses than false ones for any number of tracks, thanks to the different filters that allow pruning most of the false hypotheses from the tree. Finally, as expected, the number of hypotheses under analysis decreases with the number of tracks. Figures 9 and 10 show the distribution of the figure of merit of the hypotheses of four tracks (last column of Table 4) as a function of the semi-major axis and eccentricity of the corresponding RSO, and as a function of the time span between the first and last associated tracks, \(\Delta {t_{1,4}}\), respectively. True positives, false positives and false negatives are depicted as green circles, red crosses and purple triangles, respectively. The true positives are spread around a figure of merit of \(d \sim 0.44\). The figure of merit of the false positive is \(d=5.35\) and, in fact, it is the 13th highest among all the promoted hypotheses (i.e., there are only 12 promoted hypotheses with a higher figure of merit). The nine false negatives are located in the upper region of the plots, since they were not promoted due to the figure of merit threshold. There is a noticeable correlation between the semi-major axis and the figure of merit, particularly for hypotheses with \(a > 10{,}000\) km (see Fig. 9, left). This is related to the eccentricity, as shown in Fig. 9, right, as hypotheses with high eccentricity (higher semi-major axis, since we are dealing with low orbits) have greater figure of merit values.
This is a consequence of the more complex dynamics of these RSOs and the differences between the dynamical models used for the propagation of the simulated RSO population and for the track association. The observations from high eccentricity objects usually correspond to the perigees and, thus, the orbit observability is affected. Regarding the time span, Fig. 10 shows clusters of hypotheses every half day, the typical revisit time in low orbits, but there is no noticeable degradation of the figure of merit with the time between the associated tracks. Although most of the objects are detected within the first week of tracks, and thus the hypotheses concentrate on time span values lower than 7 days, the association of tracks separated even up to 2 weeks is possible in some cases.

Fig. 9

Distribution of the figure of merit of each hypothesis of four tracks as a function of the semi-major axis (top) and eccentricity (bottom) of the corresponding RSO

Fig. 10

Distribution of the figure of merit of each hypothesis of four tracks as a function of the time span between first and last associated tracks

Table 4 Number of hypotheses at the end of the simulation

3.3 Objects breakdown

Figure 11 shows the semi-major axis and eccentricity distribution of the detected (3876) and undetected (77) objects. Only detectable objects (at least four tracks) have been considered in the undetected set. Among them, it is worth noting that there are objects with low eccentricity (\(e < 0.1\)) and low semi-major axis (\(a < 7250\) km) that remain undetected (17). These are objects with only 4 or 5 tracks during the 2 weeks, which could not be associated mainly due to the long time separation between them. As in Fig. 9, there is a relevant cluster of undetected objects around \(a \sim 16{,}000\) km, which corresponds to high eccentricity RSOs. Detecting eccentric RSOs is challenging due to their more complex dynamics and limited observability. Besides, almost half (38) of the undetected objects have only four tracks, which implies that if one of the tracks has any particular issue (too noisy or bad observability with respect to the other available tracks), the association of four tracks may be unattainable. In principle, the detection of these RSOs could be achieved by including tracks from additional sensors or simply by waiting longer for new tracks to arrive.

Fig. 11

Distribution of the semi-major axis (top-left) and eccentricity (top-right) of detected (green circles) and undetected (purple triangles) RSOs

Fig. 12

Evolution of the number of non-promoted and detected objects (promoted hypotheses) during the track association procedure (number of tracks processed)

3.4 Association effectiveness and cost

To illustrate the impact of the complexity reduction and filtering techniques of the methodology, which reduce the number of hypotheses of the association tree, Fig. 12 presents how the number of hypotheses grows during the association process. The abscissa axis represents the number of non-promoted hypotheses, while the ordinate axis gathers the number of promoted hypotheses. Both the proposed association methodology and a brute-force approach are shown for comparison. The results of the latter were obtained for baseline purposes only, by sequentially generating combinations of two, three and four tracks and promoting hypotheses as soon as a combination of four tracks is created, without any figure of merit scoring or filtering. The association process starts from scratch (zero promoted hypotheses) and the number of non-promoted hypotheses grows as more tracks are processed. The first hypothesis promotion happens after processing track #4453 in the methodology case and after track #2234 in the brute-force one. The number of non-promoted hypotheses in the brute-force case grows continuously and reaches a value of the order of \(10^{18}\). On the other hand, with our methodology the number of non-promoted hypotheses first grows, then stabilises (at a maximum of 37,127 non-promoted hypotheses) and finally decreases. This is possible thanks to the pruning of incompatible hypotheses after promotion and the different filtering steps, which reduce the number of false hypotheses under analysis. The fact that the final number of promoted hypotheses in the methodology case (3899) almost reaches that of the brute-force baseline (3953) is an indicator of the association effectiveness of the methodology. Besides, as the number of non-promoted hypotheses is directly related to the computational cost, the proposed methodology is much more computationally efficient than a brute-force approach.
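The \(10^{18}\) figure quoted above is consistent with a simple combinatorial count of all unfiltered 2-, 3- and 4-track combinations of the 75,184 simulated tracks, as a quick back-of-the-envelope check:

```python
from math import comb

# Unfiltered brute-force hypothesis count: every combination of
# 2, 3 and 4 tracks out of the 75,184 simulated tracks.
n_tracks = 75_184
total = comb(n_tracks, 2) + comb(n_tracks, 3) + comb(n_tracks, 4)
# total is on the order of 10^18, dominated by the 4-track combinations
```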

Thanks to this association effectiveness and cost, the track association process of the whole simulated scenario can be completed within 3 h (using 24 threads on an Intel(R) Xeon(R) Gold 6142 @ 2.60 GHz). Solving the same track association problem with a brute-force approach would take several days and require a substantial amount of memory. As a matter of fact, the reduced scenario set up to generate the brute-force analysis presented in Fig. 4, consisting of 2882 tracks and 321 RSOs (7.3% and 3.8% of the tracks and RSOs of the complete simulation, respectively), took 15 h on the same machine. Finally, since this analysis corresponds to a completely cold start from measurement data of a single radar during 2 weeks, the methodology is expected to be suitable for real-time processing.

4 Conclusions

This paper has presented a novel track-to-track association methodology for operational RSO catalogue build-up applications. The figure of merit used for the pruning and promotion of hypotheses is evaluated in the measurement space and allows us to effectively filter out most false hypotheses. The proposed filters avoid an excessive growth of the association tree (as opposed to brute-force approaches). Besides, the performance of the track association procedure on a simulated survey radar scenario, representative of a real operational environment, has been analysed. We have shown how our methodology can be applied to detect most of the RSOs by associating tracks, with the undetected ones corresponding either to very particular high eccentricity cases or to objects with limited observability. The achieved success rate (true positives) is higher than 98% and the false positive rate lower than 0.03%, while the number of missed objects (false negatives) is kept below 2%. These association performance metrics support the suitability of the methodology.

The methodology can be applied to solve the track-to-track association problem with other types of survey data, such as optical observations [41], since the figure of merit can account for any kind of measurement. In the optical case, one of the main differences lies in the IOD algorithm required to obtain the initial solution for the OD. Besides, highly-eccentric orbits, such as Geostationary Transfer Orbit (GTO), pose a challenge to track-to-track association and orbit determination [42]. The methodology was conceived as a key component for catalogue build-up and maintenance. Although the main application of this track-to-track association algorithm is to detect new RSOs, the methodology can be extended to study the manoeuvre detection and estimation problem [43] and even the fragmentation detection problem that arises after a break-up event [44].