1 Introduction

Traditional ground-based navigation methods are becoming unsustainable as the space sector evolves. Orbit determination techniques exploiting radiometric tracking are the most reliable and accurate methods, but they can process only one spacecraft per tracking window [23]. This is because a two-way signal is exchanged between a ground facility and the satellite to determine the relative range and range-rate, which requires a considerable amount of time for antenna utilization, signal processing, and the signal round trip in space. Moreover, the costs related to flight dynamics teams and facility utilization account for a large share of the mission budget [12]. This is in contrast with the current wave of deep-space missions enabled by system miniaturization, which promotes fast, diverse, low-cost, and highly autonomous deep-space exploration [18, 19, 24].

Autonomous navigation methods exploit observations of the environment to estimate components of the spacecraft state in deep space [22]. Many autonomous navigation methods have been proposed so far. X-ray pulsar navigation relies on the repetitive signals coming from X-ray pulsars to estimate the observer distance with respect to the solar system barycenter [2, 21]. Optical navigation in proximity of a celestial object exploits knowledge of the target to determine the observer position, finding applications in the lunar environment (e.g., full-disk navigation [6, 8, 15], terrain-relative navigation [7]), small-body proximity (e.g., landmark navigation [3]), and satellite proximity (e.g., pose estimation [17]). Optical navigation in deep space leverages the acquisition of the line-of-sight (LoS) directions to deep-space objects with known ephemerides to determine the observer position [4, 10, 11, 14].

This work focuses on autonomous optical navigation in deep space. Previous works from the authors have shown that a minimum of two beacon line-of-sight directions is required for the deep-space navigation solution, and a selection criterion to identify the optimal couple of beacons yielding the highest navigation accuracy has been derived [9]. Building on this result, the authors investigate the quality of the navigation solution in the presence of multiple beacons and compare it with the two optimal beacons case. Thus, this paper formulates the deep-space optical navigation problem in the presence of multiple beacons and derives its least squares solution. Perturbation models for the objects' line-of-sight directions and ephemerides are introduced to derive the analytical solution covariance. The geometrical interpretation of the perturbation models is presented to visualize the observer solution covariance in the presence of multiple beacons. The sensitivity of the solution accuracy to the number of tracked beacons is validated by means of a test case, where the analytical and the numerical solutions are compared when tracking a number of planets and asteroids as deep-space beacons. Finally, the navigation accuracy exploiting multiple beacons is compared to the one obtained with the two optimal beacons.

The paper is structured as follows. The deep-space optical navigation problem, its solution, and its covariance are described in Sect. 2. The geometrical interpretation of the perturbation models and their impact on the navigation covariance are elaborated in Sect. 3. The validation of the navigation covariance for an increasing number of beacons and in the case of perturbed inputs is reported in Sect. 4 by means of a test case. Section 5 compares the navigation accuracy exploiting multiple beacons with the two optimal beacons case. Final remarks are given in Sect. 6.

2 Deep-Space Optical Navigation

2.1 Problem Formulation

The deep-space optical navigation problem consists of estimating the observer position exploiting the line-of-sight directions to a number of objects, or beacons, acquired by on-board optical sensors such as navigation cameras and star trackers. The problem geometry is shown in Fig. 1. Here, at a given epoch and in an inertial frame, an observer is located at an unknown position \(\varvec{r}\) with respect to the Solar System Barycenter (SSB). A number n of known deep-space objects is present as well. Given the i–th object, its inertial position is denoted \(\varvec{r}_i\), while its relative position with respect to the observer is denoted \(\varvec{\rho }_i\). Thus, the observer inertial position can be written as

$$\begin{aligned} \varvec{r} = \varvec{r}_i - \varvec{\rho }_i \qquad \qquad \ i = 1,.., n \end{aligned}$$
(1)

Note that, in Eq. 1, \(\varvec{r}\) is unknown, \(\varvec{r}_i\) are known from ephemeris models, and \(\varvec{\rho }_i\) are unknown. The latter can be expanded as \(\varvec{\rho }_i = \rho _i \varvec{\hat{\rho }}_i\), where \(\rho _i\) is the unknown range between the observer and the i–th object, and \(\varvec{\hat{\rho }}_i\) is the observer-to-object LoS direction, which can be measured on board. The objective of deep-space optical navigation is to compute \(\varvec{r}\) given \(\varvec{\hat{\rho }}_i\), \(i = 1, .., n\).

Fig. 1
figure 1

Deep-space optical navigation geometry

2.2 Least Squares Solution

Considering beacons i and j in Eq. 1 it is possible to write

$$\begin{aligned} \varvec{r}_i - \rho _i \ \varvec{\hat{\rho }}_i = \varvec{r}_j - \rho _j \ \varvec{\hat{\rho }}_j \qquad \ {i = 1,.., n, \quad j = 1, .., n} \end{aligned}$$
(2)

Equation 2 is a system of \(n^2\) equations. Without any loss of generality, we restrict the analysis to unique couples of beacons; therefore

$$\begin{aligned} \varvec{r}_i - \rho _i \ \varvec{\hat{\rho }}_i = \varvec{r}_j - \rho _j \ \varvec{\hat{\rho }}_j \qquad \qquad {i = 1,.., n, \quad j > i} \end{aligned}$$
(3)

Equation 3 is a system of \(n (n-1)/2\) equations. It can be pre-multiplied by \(\varvec{\hat{\rho }}_i^\top\) and \(\varvec{\hat{\rho }}_j^\top\) yielding

$$\begin{aligned} \begin{aligned} \varvec{\hat{\rho }}_i^\top \varvec{r}_i - \rho _i \ \varvec{\hat{\rho }}_i^\top \varvec{\hat{\rho }}_i = \varvec{\hat{\rho }}_i^\top \varvec{r}_j - \rho _j \ \varvec{\hat{\rho }}_i^\top \varvec{\hat{\rho }}_j \\ \varvec{\hat{\rho }}_j^\top \varvec{r}_i - \rho _i \ \varvec{\hat{\rho }}_j^\top \varvec{\hat{\rho }}_i = \varvec{\hat{\rho }}_j^\top \varvec{r}_j - \rho _j \ \varvec{\hat{\rho }}_j^\top \varvec{\hat{\rho }}_j \end{aligned} \qquad {i = 1,.., n \quad j > i} \end{aligned}$$
(4)

Equation 4 is a system of \(n(n-1)\) equations. Now, note that \(\varvec{\hat{\rho }}_i^\top \varvec{\hat{\rho }}_i = \varvec{\hat{\rho }}_j^\top \varvec{\hat{\rho }}_j = 1\), and denoting \(\gamma _{ij}\) the angle between \(\varvec{\hat{\rho }}_i\) and \(\varvec{\hat{\rho }}_j\), we have

$$\begin{aligned} \varvec{\hat{\rho }}_i^\top \varvec{\hat{\rho }}_j = \varvec{\hat{\rho }}_j^\top \varvec{\hat{\rho }}_i = \cos \gamma _{ij} \end{aligned}$$
(5)

Thus, plugging Eq. 5 into Eq. 4 and rearranging terms, we have

$$\begin{aligned} \begin{aligned} - \rho _i + \cos \gamma _{ij} \ \rho _j = \varvec{\hat{\rho }}_i^\top (\varvec{r}_j - \varvec{r}_i) \\ \cos \gamma _{ij} \ \rho _i - \rho _j = \varvec{\hat{\rho }}_j^\top (\varvec{r}_i - \varvec{r}_j) \end{aligned} \qquad \ {i = 1,.., n, \quad j > i} \end{aligned}$$
(6)

Equation 6 can be put in matrix form as

$$\begin{aligned} \underbrace{\begin{bmatrix} - 1 &{} \cos \gamma _{ij}\\ \cos \gamma _{ij} &{} - 1 \\ \end{bmatrix}}_{\varvec{ H}_{ij}} \begin{bmatrix}{\rho _i}\\ {\rho _j}\end{bmatrix} = \underbrace{\begin{bmatrix}{\varvec{\hat{\rho }}_i^\top \, (\varvec{r}_j - \varvec{r}_i)}\\ {\varvec{\hat{\rho }}_j^\top \, (\varvec{r}_i - \varvec{r}_j)}\end{bmatrix}}_{\varvec{b}_{ij}} \qquad \ i = 1,.., n \quad \quad j > i \end{aligned}$$
(7)

Stacking together the equations from Eq. 7, we arrive at

$$\begin{aligned} \underbrace{\begin{bmatrix} \varvec{H}_{12} \ \varvec{\varLambda }_{12} \\ \vdots \\ \varvec{H}_{ij} \ \varvec{\varLambda }_{ij} \\ \vdots \\ \varvec{H}_{n-1,n} \ \varvec{\varLambda }_{n-1,n} \\ \end{bmatrix}}_{\varvec{H}} \underbrace{\begin{bmatrix}{\rho _1}\\ {\vdots }\\ {\rho _i}\\ {\vdots }\\ {\rho _j}\\ {\vdots }\\ {\rho _n}\end{bmatrix}}_{\varvec{x}} = \underbrace{\begin{bmatrix}{\varvec{b}_{12}}\\ {\vdots }\\ {\varvec{b}_{ij}}\\ {\vdots }\\ {\varvec{b}_{n-1,n}}\end{bmatrix}}_{\varvec{b}} \end{aligned}$$
(8)

where \(\varvec{\varLambda }_{ij}\) is the \(2 \times n\) selection matrix that maps \(\varvec{H}_{ij}\) into \(\varvec{H}\)

$$\begin{aligned} \varvec{\varLambda }_{ij} = \begin{bmatrix} \delta _{1i} &{} \cdots &{} \delta _{ci} &{} \cdots &{} \delta _{ni}\\ \delta _{1j} &{} \cdots &{} \delta _{cj} &{} \cdots &{} \delta _{nj} \end{bmatrix} \end{aligned}$$
(9)

where, denoting c the column index, \(\delta _{ci} = 1\) if \(c=i\), and \(\delta _{cj} = 1\) if \(c=j\), while they are 0 otherwise. Now, note that \(\varvec{H}\) is a rectangular matrix of size \(n(n-1) \times n\), \(\varvec{x}\) is the unknown vector of size n, and \(\varvec{b}\) is the input vector of size \(n(n-1)\). Pre-multiplying Eq. 8 by \(\varvec{H}^\top\) leads to

$$\begin{aligned} \varvec{H}^\top \varvec{H} \ \varvec{x} = \varvec{H}^\top \varvec{b} \end{aligned}$$
(10)

from which the least squares solution can be determined as

$$\begin{aligned} \varvec{x} = \left( \varvec{H}^\top \varvec{H}\right) ^{-1} \varvec{H}^\top \varvec{b} \end{aligned}$$
(11)

Note that the solution in Eq. 11 is a function of the observation geometry (\(\cos \gamma _{ij}\) in \(\varvec{H}\)), the line-of-sight knowledge (\(\varvec{\hat{\rho }}_i, \varvec{\hat{\rho }}_j\) in \(\varvec{b}\)), and the ephemeris knowledge (\(\varvec{r}_i, \varvec{r}_j\) in \(\varvec{b}\)).
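As an illustrative aid, not part of the original formulation, the construction of \(\varvec{H}\) and \(\varvec{b}\) in Eqs. 7–9 and the least squares solution of Eq. 11 can be sketched in a few lines of Python with NumPy; function and variable names are ours and purely illustrative.

```python
import numpy as np
from itertools import combinations

def solve_ranges(rhos, r_objs):
    """Least squares ranges from unit LoS directions and beacon positions (Eqs. 7-11).

    rhos   : (n, 3) array of unit observer-to-object LoS directions
    r_objs : (n, 3) array of beacon inertial positions
    Returns the range vector x (size n) and the stacked H and b.
    """
    n = len(rhos)
    H_rows, b_rows = [], []
    for i, j in combinations(range(n), 2):            # unique couples, j > i
        cg = rhos[i] @ rhos[j]                        # cos(gamma_ij), Eq. 5
        H_ij = np.array([[-1.0, cg], [cg, -1.0]])     # 2x2 block of Eq. 7
        b_ij = np.array([rhos[i] @ (r_objs[j] - r_objs[i]),
                         rhos[j] @ (r_objs[i] - r_objs[j])])
        Lam = np.zeros((2, n)); Lam[0, i] = 1.0; Lam[1, j] = 1.0   # selection matrix, Eq. 9
        H_rows.append(H_ij @ Lam)
        b_rows.append(b_ij)
    H = np.vstack(H_rows)                             # n(n-1) x n
    b = np.concatenate(b_rows)                        # n(n-1)
    x, *_ = np.linalg.lstsq(H, b, rcond=None)         # equivalent to Eq. 11
    return x, H, b

def observer_position(rhos, r_objs):
    """Observer position from Eq. 1, here simply averaged over all beacons."""
    x, _, _ = solve_ranges(rhos, r_objs)
    return np.mean(r_objs - x[:, None] * rhos, axis=0)
```

Solving via `np.linalg.lstsq` is numerically preferable to forming the normal equations explicitly, but it returns the same solution as Eq. 11 when \(\varvec{H}\) has full column rank.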

2.3 Perturbation Models

The solution to the deep-space optical navigation problem in Eq. 11 is affected by uncertainties in the line-of-sight measurements and in the objects' ephemerides. Thus, we need to model their perturbations to derive the solution covariance. Regarding the perturbed LoS modeling, in the presence of small perturbations, the QUEST measurement model can be used to account for the errors in the line-of-sight directions [13]. The QUEST measurement model is a linear additive model, i.e., it expresses the perturbed LoS direction as the sum of the true line-of-sight direction and a white-noise process, that is

$$\begin{aligned} \varvec{\hat{\rho }}_{i}^{\epsilon } = \varvec{\hat{\rho }}{_i} + \varvec{v}{_i} \qquad \ {i = 1, .., n} \end{aligned}$$
(12)

where \(\varvec{\hat{\rho }}_{i}^{\epsilon }\) is the perturbed LoS direction, \(\varvec{\hat{\rho }}{_i}\) is the true one, and \(\varvec{v}{_i}\) is a white-noise process whose components have zero mean and standard deviation \(\sigma {_i}\). Denoting \(\text {E}\) the expected value operator, then [5]

$$\begin{aligned} \text {E}\left[ \varvec{v}{_i}\right] = \varvec{0} \qquad \qquad \text {E}\left[ \varvec{v}{_i} \varvec{v}_{i}^\top \right] = \sigma {_i}^2 \left[ \varvec{I} - \varvec{\hat{\rho }}{_i} \varvec{\hat{\rho }}_{i}^\top \right] = \sigma ^2_{i} \varvec{L}{_i} \qquad \ {i = 1, .., n} \end{aligned}$$
(13)

Equations 12 and 13 hold for small rotations, where the spherical surface generated by a rotation of the tip of \(\varvec{\hat{\rho }}{_i}\) is locally approximated by the tangent plane. Thus, \(\varvec{v}{_i}\) lies on this plane and is orthogonal to \(\varvec{\hat{\rho }}{_i}\), i.e.,

$$\begin{aligned} \varvec{\hat{\rho }}_{i}^\top \varvec{v}{_i} = 0 \end{aligned}$$
(14)

Note that this model alters the unitary norm of the LoS direction, yet it is a viable approximation in the presence of small angles and is widely exploited in the literature [5]. When dealing with large angles, a multiplicative model can be exploited [16].
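A minimal sketch of the QUEST additive model of Eqs. 12–14, continuing the illustrative code above (NumPy assumed): the noise is drawn in the plane orthogonal to the true direction, so that its covariance matches Eq. 13.

```python
def perturb_los(rho_hat, sigma, rng=np.random.default_rng()):
    """Sample a perturbed LoS direction per the QUEST additive model (Eqs. 12-14)."""
    L = np.eye(3) - np.outer(rho_hat, rho_hat)   # tangent-plane projector, Eq. 13
    v = L @ rng.normal(0.0, sigma, 3)            # zero mean, orthogonal to rho_hat (Eq. 14)
    return rho_hat + v                           # Eq. 12; norm is only approximately unitary
```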

Regarding the modeling of the objects' ephemerides, note that the ephemerides of every object in the Solar System are known only up to a given accuracy. The planets' ephemerides are accurately known since they have been extensively observed in the past, while smaller bodies like asteroids and comets have larger uncertainties. Thus, without loss of generality, this work considers a spherical uncertainty model for the objects' positions in deep space. This leads to the definition of the perturbed object position \(\varvec{r}_{i}^{\epsilon }\) as

$$\begin{aligned} \varvec{r}_{i}^{\epsilon } = \varvec{r}{_i} + \varvec{w}{_i} \qquad \ {i = 1, .., n} \end{aligned}$$
(15)

where \(\varvec{w}{_i}\) is the perturbation on the i–th object inertial position \(\varvec{r}{_i}\) provided by the ephemerides. The uncertainty is modeled as equal in all directions, thus leading to a spherical perturbation. So,

$$\begin{aligned} \text {E}[\varvec{w}{_i}] = \varvec{0} \qquad \qquad \text {E}\left[ \varvec{w}{_i} \varvec{w}{_i}^\top \right] = w{_i}^2 \varvec{I} \qquad \ {i = 1, .., n} \end{aligned}$$
(16)

where \(w{_i}\) is the uncertainty radius and \(\varvec{I}\) the identity matrix.
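The corresponding sketch for the spherical ephemeris perturbation of Eqs. 15–16, under the same illustrative assumptions as before:

```python
def perturb_ephemeris(r_obj, w, rng=np.random.default_rng()):
    """Sample a perturbed beacon position per Eqs. 15-16: zero-mean noise
    with covariance w**2 * I added to the catalog position."""
    return r_obj + rng.normal(0.0, w, 3)
```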

2.4 Covariance Analysis

The perturbation models can be exploited to derive the solution covariance. In the presence of perturbed line-of-sight directions and ephemeris uncertainty, the perturbed input to the system in Eq. 7 reads

$$\begin{aligned} \varvec{b}_{ij}^{\epsilon } = \begin{bmatrix}{\varvec{\hat{\rho }}_i^{\epsilon ^\top } \, \left( \varvec{r}_j^\epsilon - \varvec{r}_i^\epsilon \right) }\\ {\varvec{\hat{\rho }}_j^{\epsilon ^\top } \, \left( \varvec{r}_i^\epsilon - \varvec{r}_j^\epsilon \right) }\end{bmatrix} = \underbrace{\begin{bmatrix}{\varvec{\hat{\rho }}_i^\top \, (\varvec{r}_j - \varvec{r}_i)}\\ \varvec{\hat{\rho }}_j^\top {(\varvec{r}_i - \varvec{r}_j)}\end{bmatrix}}_{\varvec{b}_{ij}} + \underbrace{\begin{bmatrix}{m_{ij}}\\ {m_{ji}}\end{bmatrix}}_{\varvec{\varDelta b}_{ij}} \quad \ i = 1,.., n \quad \quad j > i \end{aligned}$$
(17)

where

$$\begin{aligned} \varvec{\varDelta b}_{ij} = \begin{bmatrix}{m_{ij}}\\ {m_{ji}}\end{bmatrix} = \begin{bmatrix}{\varvec{\hat{\rho }}_i^\top (\varvec{w}_j - \varvec{w}_i) + \varvec{v}_i^\top (\varvec{r}_j - \varvec{r}_i) + \varvec{v}_i^\top (\varvec{w}_j - \varvec{w}_i)}\\ {\varvec{\hat{\rho }}_j^\top (\varvec{w}_i - \varvec{w}_j) + \varvec{v}_j^\top (\varvec{r}_i - \varvec{r}_j) + \varvec{v}_j^\top (\varvec{w}_i - \varvec{w}_j)}\end{bmatrix} \quad \ i = 1,.., n \quad j > i \end{aligned}$$
(18)

Thus, stacking together \(\varvec{b}_{ij}^{\epsilon }\) from Eq. 17, we arrive at

$$\begin{aligned} \varvec{b}^{\epsilon } = \begin{bmatrix}{\varvec{b}_{12}^\epsilon }\\ \vdots \\ \varvec{b}_{ij}^\epsilon \\ \vdots \\ {\varvec{b}_{n-1,n}^\epsilon }\end{bmatrix} = \underbrace{\begin{bmatrix}{\varvec{b}_{12}}\\ \vdots \\ \varvec{b}_{ij}\\ \vdots \\ {\varvec{b}_{n-1,n}}\end{bmatrix}}_{\varvec{b}} + \underbrace{\begin{bmatrix}{\varvec{\varDelta b}_{12}}\\ {\vdots }\\ \varvec{\varDelta b}_{ij}\\ \vdots \\ {\varvec{\varDelta b}_{n-1,n}}\end{bmatrix}}_{\varvec{\varDelta b}} \end{aligned}$$
(19)

where \(\varvec{b}^{\epsilon }\) is the perturbed input, \(\varvec{b}\) the exact input, and \(\varDelta \varvec{b}\) the input error. Note that the input error has null mean (\(\text {E}[\varvec{\varDelta b}] = \varvec{0}\)) and known covariance (\(\text {E}[\varvec{\varDelta b}\varvec{\varDelta b}^\top ] = \varvec{B}\)). The expression of the input error covariance \(\varvec{B}\) is developed in Appendix A.

Now, plugging Eqs. 19 into 11, we have

$$\begin{aligned} \varvec{x}^\epsilon = \underbrace{(\varvec{H}^\top \varvec{H})^{-1} \varvec{H}^\top \varvec{b}}_{\varvec{x}} + \underbrace{(\varvec{H}^\top \varvec{H})^{-1} \varvec{H}^\top \varDelta \varvec{b}}_{\varDelta \varvec{x}} \end{aligned}$$
(20)

where \(\varvec{x}^\epsilon\) is the perturbed solution and \(\varvec{\varDelta x}\) the solution error. Thus, the solution error covariance is

$$\begin{aligned} \varvec{P} = \text {E}\left[ \varDelta \varvec{x} \varDelta \varvec{x}^\top \right] = (\varvec{H}^\top \varvec{H})^{-1} \varvec{H}^\top \varvec{B} \ \varvec{H} \ (\varvec{H}^\top \varvec{H})^{-\top } \end{aligned}$$
(21)

Note that the solution covariance is a function of the observation geometry (\(\varvec{H}\)), the objects' ephemeris knowledge (w inside \(\varvec{B}\)), and the line-of-sight uncertainty (\(\sigma\) inside \(\varvec{B}\)).
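As a hedged illustration of Eq. 21, the sketch below reuses the helper functions introduced above and estimates the input-error covariance \(\varvec{B}\) numerically from samples of \(\varvec{\varDelta b}\) (Eqs. 17–19), rather than using the closed-form expression of Appendix A; the sample size and names are arbitrary.

```python
def solution_covariance(rhos, r_objs, sigmas, ws, n_samples=20000,
                        rng=np.random.default_rng()):
    """Solution covariance P of Eq. 21, with B estimated by Monte Carlo."""
    n = len(rhos)
    _, H, b = solve_ranges(rhos, r_objs)              # nominal geometry and input
    db = np.empty((n_samples, len(b)))
    for s in range(n_samples):
        rhos_e = np.array([perturb_los(r, sg, rng) for r, sg in zip(rhos, sigmas)])
        robj_e = np.array([perturb_ephemeris(r, w, rng) for r, w in zip(r_objs, ws)])
        b_e = np.concatenate([                        # perturbed input, Eq. 17
            [rhos_e[i] @ (robj_e[j] - robj_e[i]),
             rhos_e[j] @ (robj_e[i] - robj_e[j])]
            for i, j in combinations(range(n), 2)])
        db[s] = b_e - b                               # Delta-b sample, Eq. 19
    B = np.cov(db, rowvar=False)                      # E[Delta-b Delta-b^T]
    HtH_inv = np.linalg.inv(H.T @ H)
    return HtH_inv @ H.T @ B @ H @ HtH_inv            # Eq. 21
```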

3 Geometrical Interpretation

3.1 Perturbation Models

The geometrical interpretation of the perturbation models is shown in Fig. 2. In particular, Fig. 2a shows the acquisition of an object line-of-sight by an observer in deep-space. The acquired line-of-sight \(\hat{\varvec{\rho }}^{\epsilon }\) is a measurement of the true line-of-sight \(\hat{\varvec{\rho }}\) affected by a given angular error \(\sigma\). The measurement generates an uncertainty cone with aperture \(\sigma\) whose vertex is placed at the observer location. Thus, the object can be anywhere inside this cone, since its relative distance to the observer is still unknown. As depicted in Fig. 2a, the cone reduces to a triangle in the planar case.

When solving the navigation problem with exact ephemerides but a perturbed line-of-sight direction, the object position is known and the observer position is unknown. The uncertainty cone can then be seen as reversed, so that it originates at the known object position and emanates toward the observer, as shown in Fig. 2b. The observer can be at any point inside this cone.

Figure 2c shows instead the observer uncertainty region in the case of an exact line-of-sight but uncertain ephemerides. Here, the object position is known only up to a given accuracy (w). A spherical uncertainty, which reduces to a circular uncertainty in the planar case, is assumed for simplicity. The exact line-of-sight direction toward the observer, applied to every point of this sphere, generates a cylindrical uncertainty region whose radius equals the ephemeris uncertainty. The observer can be anywhere inside this cylinder.

Finally, Fig. 2d shows the observer uncertainty region in the case of combined line-of-sight and ephemeris errors. In the three-dimensional case, a cone due to the perturbed line-of-sight originates from each point of the ephemeris uncertainty sphere. The envelope of this volume, i.e., the grey area in Fig. 2d for the two-dimensional case, is the observer uncertainty region.

Fig. 2
figure 2

Geometrical interpretations: (a) Object acquisition; (b) LoS perturbation; (c) Ephemeris perturbation; (d) Combined perturbations

With respect to Fig. 2b, c, and d, the maximum observer uncertainty (here denoted \(\epsilon\)) grows with the object-observer distance (\(\rho\)), the object angular uncertainty (\(\sigma\)), and the object position uncertainty (w), which is a function of its ephemeris knowledge. From simple geometry and by inspection of Fig. 2d, the maximum observer error is:

$$\begin{aligned} \epsilon = \rho \tan \sigma + w \end{aligned}$$
(22)

Considering a deep-space scenario with \(\rho\) spanning between 0.1 and 1 AU and an angular uncertainty \(\sigma\) of 15 arcseconds, the contribution of the measurement error spans between \(10^3\) km and \(10^4\) km.

Considering a reference deep-space velocity of 30 km/s (Earth-like), ephemeris timing errors of 1 second and 10 seconds lead to \(w = 30\) km and \(w= 300\) km, respectively. In general, the planets' ephemerides are very accurate, thus w can be neglected in this case. This is not valid for asteroids, as their ephemerides can be refined only over short observation arcs. Thus, the measurement error is predominant with respect to the ephemeris error in the case of planet acquisitions; the same cannot be said in the case of asteroid acquisitions.
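As a rough numerical illustration of Eq. 22, using the values quoted above (\(\rho = 1\) AU \(\approx 1.496 \times 10^{8}\) km, \(\sigma = 15\) arcsec, and an asteroid-like \(w = 300\) km):

$$\begin{aligned} \epsilon = \rho \tan \sigma + w \approx 1.496 \times 10^{8} \ \text {km} \times 7.3 \times 10^{-5} + 300 \ \text {km} \approx 1.1 \times 10^{4} \ \text {km} + 3 \times 10^{2} \ \text {km} \end{aligned}$$

so even with an asteroid-like ephemeris error the line-of-sight term dominates at such distances, while at 0.1 AU the two contributions become closer in magnitude.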

3.2 Navigation Solution

Equation 8 requires at least two objects with non-parallel line-of-sight directions to admit a solution. From a geometrical point of view, this is because every line-of-sight is a half-line that originates from the observer position and points toward the object location, as shown in Fig. 3a. When reversing the problem, the half-lines originate from the objects' positions and ideally cross at the observer location. When the two directions are parallel (\(\gamma\) = 0 or \(\gamma\) = 180 deg in Fig. 3b), the observer can be at any point along this direction, thus leading to an undetermined solution. For non-parallel directions (\(\gamma \ne\) 0 and \(\gamma \ne\) 180 deg) the LoS directions intersect at the observer position, as happens in Fig. 3b.
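This degeneracy can also be read directly from the \(2 \times 2\) block of Eq. 7:

$$\begin{aligned} \det \varvec{H}_{ij} = 1 - \cos ^2 \gamma _{ij} = \sin ^2 \gamma _{ij} \end{aligned}$$

which vanishes for \(\gamma _{ij} = 0\) or 180 deg, so the pairwise ranges \(\rho _i\) and \(\rho _j\) cannot be resolved from parallel directions.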

Fig. 3
figure 3

Geometrical interpretations of the navigation solution. (a) Line-of-sight acquisitions; (b) Navigation solution

Figure 4 shows the navigation solution covariance when considering the uncertainties in the objects' ephemerides and in the line-of-sight directions for the two beacons case (Fig. 4a) and for the multiple beacons case (Fig. 4b). Each beacon casts an uncertainty region toward the observer like the one depicted in Fig. 2d. The uncertainty regions are functions of the objects' ephemeris knowledge and of the line-of-sight accuracy. The uncertainty regions meet in proximity of the observer position, and their intersection is the observer position uncertainty region. Figure 4a shows the observer uncertainty in the case of two beacons and, by comparing it with Fig. 4b, it can be seen how, in principle, multiple beacons can further bound the observer uncertainty region, provided that the beacons are well separated.

Fig. 4
figure 4

Observer position uncertainty region in case of perturbations: (a) Two beacons case; (b) Multiple beacons case

4 Test Case

The accuracy of deep-space navigation exploiting multiple beacons is assessed by means of a test case. The aim is to evaluate the navigation accuracy exploiting both the analytical and the numerical covariances for an increasing number of beacons. The analytical covariance is computed with Eq. 21, while the numerical covariance is computed by evaluating Eq. 11 with perturbed inputs and then computing the sample standard deviation of the solutions.
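A minimal sketch of the numerical covariance just described, again reusing the illustrative helpers from Sect. 2 (names and sample size are ours, not the authors' code):

```python
def numerical_position_statistics(rhos, r_objs, sigmas, ws, n_samples=1000,
                                  rng=np.random.default_rng()):
    """Monte Carlo counterpart of Eq. 21: solve Eq. 11 with perturbed inputs
    and take sample statistics of the estimated observer position."""
    pos = np.empty((n_samples, 3))
    for s in range(n_samples):
        rhos_e = np.array([perturb_los(r, sg, rng) for r, sg in zip(rhos, sigmas)])
        rhos_e /= np.linalg.norm(rhos_e, axis=1, keepdims=True)   # re-normalize LoS
        robj_e = np.array([perturb_ephemeris(r, w, rng) for r, w in zip(r_objs, ws)])
        pos[s] = observer_position(rhos_e, robj_e)
    return pos.mean(axis=0), pos.std(axis=0), np.cov(pos, rowvar=False)
```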

The test case scenario is explained in the following. An observer is assumed to be on a heliocentric deep-space orbit whose elements in terms of semi-major axis a, eccentricity e, inclination i, argument of pericenter \(\omega\), right ascension of the ascending node \(\varOmega\), and true anomaly \(\nu\) are reported in Table 1. Also, it is assumed that the observer can always acquire the line-of-sight directions of up to ten objects: five planets (Mercury, Venus, Earth, Mars, and Jupiter) and five asteroids (Ceres, Vesta, Kallisto, Eros, and Steins). These objects have been chosen arbitrarily. The objects' ephemerides are retrieved with the SPICE toolkit [1] for the 2025–2040 time frame.

Table 1 Observer heliocentric orbital parameters

During the deep-space orbit, the observer is assumed to acquire the beacons' line-of-sight directions once every two days. The beacons and the uncertainties related to the acquisitions are summarized in Table 2. The planets' line-of-sight directions have been perturbed with a 3\(\sigma\) uncertainty of 15 arcseconds, while the line-of-sight directions to the asteroids with a 3\(\sigma\) uncertainty of 30 arcseconds. Similarly, the planets' positions have been perturbed with a 3\(\sigma\) uncertainty of 1 km, while the asteroids' positions with a 3\(\sigma\) uncertainty of 100 km. These values have been assumed somewhat liberally to run the test case; more representative values should be used for mission-specific simulations. Note that the LoS accuracy accounts for contributions due to attitude determination, object centroiding error, and a small margin. The LoS directions to the deep-space objects are usually acquired via centroiding techniques on the same images acquired for attitude determination, which can be more accurate than 9 arcseconds even with miniaturized sensors [10]. Centroiding techniques achieve subpixel accuracy on the order of 0.1 pixels [20]; this translates to about 3.5 arcseconds for a typical star tracker (10 deg field-of-view, 1 Mpix sensor). A margin has been included for unmodeled effects (e.g., thermoelastic deformation of the spacecraft). Larger perturbations have been assumed for the asteroids owing to their smaller size with respect to planets, which affects both their detectability in deep-space images and their ephemeris determination by ground stations. Moreover, note that the beacons' apparent magnitudes and angular separations from the Sun have been neglected because the focus of the test case is on the covariance analysis as a function of the number of beacons.

Table 2 Beacons and related uncertainties
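For reference, the Table 2 uncertainty values can be encoded as 1\(\sigma\) inputs for the sketches of Sect. 2 as follows (an illustrative setup continuing the earlier snippets, not the authors' code; the 3\(\sigma\) values are divided by three and the angles converted to radians):

```python
ARCSEC = np.deg2rad(1.0 / 3600.0)                 # arcseconds to radians

# 1-sigma LoS and position uncertainties per beacon (Table 2 values / 3)
planets   = ["Mercury", "Venus", "Earth", "Mars", "Jupiter"]
asteroids = ["Ceres", "Vesta", "Kallisto", "Eros", "Steins"]
sigmas = np.array([15.0 / 3.0 * ARCSEC] * 5 + [30.0 / 3.0 * ARCSEC] * 5)   # rad
ws     = np.array([1.0 / 3.0] * 5 + [100.0 / 3.0] * 5)                     # km
```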

Figure 5 shows the observer position error \(\delta r\) (Fig. 5a) and standard deviation \(\sigma _r\) (Fig. 5b) over the whole trajectory as a function of the number of tracked beacons. The number of beacons follows the list in Table 2, so that the two beacons case considers Mercury and Venus, the three beacons case considers Mercury, Venus, and the Earth, and so on. While the curves share a similar trend owing to the problem geometry, the two limiting cases of two and ten beacons (2 B and 10 B, respectively) are separated by roughly two to three orders of magnitude in terms of both position error and standard deviation. Moreover, note how the multiple-beacon cases present smaller variations in position error and standard deviation with respect to the few-beacon cases. This is because the navigation solution is more robust to variations in the observation geometry, as sketched in Fig. 4b. Worth mentioning is the improvement of the navigation solution due to the addition of Jupiter, which is a well-posed navigation beacon for the considered spacecraft orbit. A different ordering of Table 2 would lead to different trends with the increasing number of beacons.

Fig. 5
figure 5

Observer uncertainties as function of the number of tracked beacons: (a) Observer position error; (b) Observer standard deviation

The mean of the position error across the whole trajectory (\(\bar{\delta r}\)) is shown in Fig. 6a as a function of the number of tracked beacons (\(n_b\)). It can be seen how the mean of the solution error decreases as the number of beacons increases. The beacon ordering in Table 2 affects the performance of the method; a different ordering with well-posed beacons first would directly lead to more accurate results, with the remaining beacons only slightly refining the solution accuracy. Similarly, Fig. 6b shows the standard deviation of the navigation solution across the whole trajectory (\(\sigma _r\)) as a function of the number of tracked beacons, which decreases as \(n_b\) increases. Also, the analytical covariance computed by Eq. 21 is shown and is in good agreement with the numerical covariance. All in all, given the same observation geometry, both the navigation error and the covariance decrease as the number of tracked beacons increases.

Fig. 6
figure 6

Observer position accuracy as a function of the number of tracked beacons: (a) Mean Error; (b) Standard Deviation

5 Least Squares and Optimal Beacons

The minimum number of beacons required to solve Eq. 8 is two; in this case, the navigation problem reduces to a simple triangulation problem. In the presence of n available beacons, it is beneficial to select the couple of beacons that yields the highest navigation accuracy. These are known as optimal beacons [9]. The trace of the solution covariance in Eq. 21 is used as a figure of merit to select the beacons. This is

$$\begin{aligned} {J_{kl} = \frac{2 \ \left( w_k^2 + w_l^2\right) }{\sin ^2 \gamma _{kl}} + \frac{1+\cos ^2 \gamma _{kl}}{\sin ^4 \gamma _{kl}} \varvec{z}_{kl}^\top \varvec{L}_{kl} \varvec{z}_{kl}} \end{aligned}$$
(23)

where k and l denote the k–th and l–th beacons, \(\varvec{z}_{kl} = \varvec{r}_l - \varvec{r}_k\), and \(\varvec{L}_{kl} = \sigma _k^2 \varvec{L}_k + \sigma _l^2 \varvec{L}_l\). The optimal beacons i and j are the ones that minimize \(J_{kl}\), thus

$$\begin{aligned} {\{i,j\} = \text {arg} \min _{\begin{array}{c} {k \, = \, 1, \cdots , n} \\ {l \ > \ k} \end{array}} J_{kl}} \end{aligned}$$
(24)
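A sketch of the optimal-pair selection of Eqs. 23–24, under the same illustrative conventions as the earlier snippets (near-parallel pairs are skipped to avoid the singular denominator):

```python
def optimal_pair(rhos, r_objs, sigmas, ws):
    """Return the beacon couple (i, j) minimizing J_kl (Eqs. 23-24)."""
    best, J_best = None, np.inf
    for k, l in combinations(range(len(rhos)), 2):
        cg = rhos[k] @ rhos[l]                        # cos(gamma_kl)
        s2 = 1.0 - cg**2                              # sin^2(gamma_kl)
        if s2 < 1e-12:                                # parallel LoS: J diverges
            continue
        z = r_objs[l] - r_objs[k]                     # z_kl
        L_kl = (sigmas[k]**2 * (np.eye(3) - np.outer(rhos[k], rhos[k]))
                + sigmas[l]**2 * (np.eye(3) - np.outer(rhos[l], rhos[l])))
        J = 2.0 * (ws[k]**2 + ws[l]**2) / s2 + (1.0 + cg**2) / s2**2 * (z @ L_kl @ z)
        if J < J_best:
            best, J_best = (k, l), J
    return best, J_best
```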

The triangulation problem is now solved exploiting fixed couples of beacons from Table 2 and the optimal couples of beacons from Eq. 24. Figure 7 shows the mean navigation error and the standard deviation of the navigation solution exploiting fixed and optimal couples of beacons. It can be seen how the optimal beacons selection yields the minimum of both the mean error and the standard deviation with respect to fixed couples. The tracking windows of the optimal couples are shown in Fig. 8. Jupiter, Mars, and the Earth are the most exploited beacons owing to the problem geometry. Moreover, the selection relies more on planets than on asteroids owing to the better knowledge of their line-of-sight directions and positions. The separation angles \(\gamma\) among the different couples of beacons are shown in Fig. 9 along the spacecraft trajectory. Here, the mean separation angle along the trajectory \(\bar{\gamma }\) shows that the couples involving Jupiter are well separated on average, thus yielding superior performance.

Fig. 7
figure 7

Simple triangulation solution exploiting fixed couples and optimal couples of beacons: (a) Mean error; (b) Standard deviation

Fig. 8
figure 8

Optimal beacons tracking windows

Fig. 9
figure 9

Separation angles among different couples of beacons

We can now compare the navigation solution exploiting couples of optimal beacons to the multiple-beacon solution with blind selection. This is shown in Fig. 10. It can be seen how the optimal beacons selection yields a navigation accuracy, in terms of mean error and standard deviation, comparable to the multiple-beacon solution with \(n_b > 4\). This is because the fifth beacon from Table 2, Jupiter, is a well-posed beacon for the problem at hand. So, having Jupiter among the primary beacons would lower the number of beacons required to reach an accuracy similar to the optimal beacons case. Thus, selecting optimal beacons is a smart way of extracting the navigation information employing just two beacons, and its accuracy is comparable to the multiple-beacon one when the latter includes well-posed beacons. However, the least squares solution with multiple well-posed beacons is slightly more accurate than the one with optimal beacons. This is because the uncertainty region generated by multiple beacons is bounded by the intersection of cones coming from various directions, while in the optimal beacons case the uncertainty region is cut by just two cones (see Fig. 4a and b).

Fig. 10
figure 10

Observer position accuracy as a function of the number of beacons compared to the optimal beacons: (a) Mean Error; (b) Standard Deviation

6 Conclusions

In this paper, the least squares solution and covariance of the deep-space optical navigation problem exploiting multiple beacons have been derived. The geometrical interpretation of the navigation solution and covariance as a function of the number of beacons has been elaborated, together with the perturbations involved in the estimation process. The analytical and numerical covariances have been evaluated for an observer on a deep-space trajectory as a function of the number of tracked beacons, showing the increased robustness and accuracy of the navigation solution with multiple beacons. A comparison between the navigation solutions exploiting optimal beacons and multiple beacons has shown comparable navigation accuracy between the two, with the multiple-beacon solution being slightly more accurate at the cost of an increased number of tracked beacons.