Asymptotic behaviour of orbit determination for hyperbolic maps

We deal with the orbit determination problem for hyperbolic maps. The problem consists in determining the initial conditions of an orbit and, possibly, other parameters of the model from some observations. We study the behaviour of the confidence region in the case of a simultaneous increase in the number of observations and the time span over which they are performed. More precisely, we describe the geometry of the confidence region for the solution, distinguishing whether a parameter is added to the estimate of the initial conditions or not. We prove that the inclusion of a dynamical parameter causes a change in the rate of decay of the uncertainties, as suggested by known numerical evidence.


Introduction
This paper is concerned with the behaviour of the confidence region coming from an orbit determination process as the number of observations increases.
We recall that orbit determination consists of recovering information on some parameters (initial conditions or dynamical parameters) of a model given some observations, and goes back to Gauss (1809). The solution, called the nominal solution, relies on the least squares algorithm, and the confidence region summarises the uncertainties coming from the intrinsic errors in the observational process.
The problem under investigation is suggested by the numerical results in Serra et al. (2018), Spoto and Milani (2016), where some estimates are given for a map depending on a parameter and presenting both ordered and chaotic zones: the Chirikov standard map (Chirikov 1979) (see also Siegel and Moser 1971; Celletti et al. 2010 for its importance in Celestial Mechanics). The authors of Serra et al. (2018), Spoto and Milani (2016) constructed the observations by adding some noise to a true orbit of the map. Then, they set up an orbit determination process to recover the true orbit and observed the decay of the uncertainties as the number of observations grows. The experiments show that the result crucially depends on the dynamics and on whether the parameter is included in the orbit determination process or not. More precisely, if the observations come from an ordered zone (an invariant curve), then the uncertainties decrease polynomially, whether the parameter is included or not. This behaviour was analytically proved, at least in the case where only the initial conditions are estimated, in Marò (2020), using KAM techniques.
The numerical results coming from the chaotic case are more delicate. From the practical point of view, the problem of the so-called computability horizon occurs. This prevents orbit determination from being performed if the time span of the observations is too large and more sophisticated techniques must be employed, such as the multi-arc approach (Serra et al. 2018). Moreover, at least until the computability horizon, the uncertainties on the sole initial conditions decrease exponentially, while a polynomial decay is observed when the parameter is included in the orbit determination process.
In this paper, we give an analytical proof of this last result. We will consider a class of hyperbolic maps depending on a parameter. The existence of chaotic orbits, in the future or in the past, for these systems is given by the fact that all the Lyapunov exponents are supposed to be nonzero. Despite the above-described practical problem in computing a solution, we will always suppose that the least squares algorithm converges and gives a nominal solution. Hence, the estimates on the decay of the uncertainty are given asymptotically as the number of observations goes to infinity. To state the result, we recall that the confidence region is an ellipsoid in the space of the fit parameters and the uncertainties strictly depend on the size of the axes of such an ellipsoid.
We will prove that, in the case of estimating only the initial conditions, there exists a full measure set of possible nominal solutions for which all the axes of the related confidence ellipsoid decay exponentially. On the other hand, if the parameter is included in the orbit determination process, then there exists a full measure set of initial conditions and parameters for which the related confidence ellipsoid has an axis that decays strictly slower than exponentially.
Our analytical results are consistent with the numerical results in Serra et al. (2018), Spoto and Milani (2016). In the case of estimating the parameter, we cannot prove polynomial decay of the uncertainties; however, we will show that this occurs for a class of affine maps depending on a parameter. Perturbed automorphisms of the torus are representative of this class, including the famous Arnold's Cat Map.
We conclude by stressing the fact that chaotic orbit determination is a challenge for both space missions and impact monitoring. Actually, the accurate determination of orbits of chaotic NEOs is essential in the impact monitoring activity (Milani and Valsecchi 1999). Another interesting case is given by satellites. When their operating life finishes, they are left without control in safe orbits, governed only by the natural forces. It has been noticed that many parts of this region are chaotic (Rosengren et al. 2015), due to the perturbed motion of the Moon. It is then important to track and determine orbits of non-operating satellites that could crash into operating ones. Finally, the targets of many space missions include the determination of some unknown parameter. Typical examples are the ESA/JAXA BepiColombo mission to Mercury, and the NASA JUNO and ESA JUICE missions to Jupiter, which are performed in a chaotic environment (Lari and Milani 2019).
Moreover, these results are related to a conjecture posed by Wisdom in 1987 (see Wisdom 1987). Discussing the chaotic rotation state of Hyperion, it was proposed that "the knowledge gained from measurements on a chaotic dynamical system grows exponentially with the time span covered by the observations". In particular, this was related to the information on dynamical parameters like the moments of inertia ratios.

The paper is organised as follows. In Sect. 2, we adapt the general description of the problem given in Marò (2020) to our situation and state our main results. Moreover, we briefly discuss our results as compared with the numerical simulations in Serra et al. (2018), Spoto and Milani (2016). Section 3 is dedicated to the proof of the result concerning the estimation of the sole initial conditions, while Sect. 4 is dedicated to the proof of the results in the case the parameter is included. Section 5 is dedicated to the study of a concrete example, and our conclusions are given in Sect. 6.

Notation and preliminaries on Lyapunov exponents
The main tool of our approach to the orbit determination problem is the notion of Lyapunov exponents of a differentiable map, of which we now recall the definition and the main properties as stated in the Oseledets Theorem. Let f : X → X be a diffeomorphism of a d-dimensional differentiable Riemannian manifold X endowed with a σ-algebra B and an f-invariant probability measure μ. We recall that a measure μ is f-invariant if μ(f^{-1}(B)) = μ(B) for every B ∈ B. We denote by F(x) the Jacobian matrix of f and, for n ∈ Z, by F^n(x) the Jacobian matrix of f^n which, by the chain rule, can be written as

F^n(x) = F(f^{n-1}(x)) ⋯ F(f(x)) F(x)  for n ≥ 1,
F^n(x) = 1  for n = 0,
F^n(x) = F^{-1}(f^n(x)) ⋯ F^{-1}(f^{-1}(x))  for n < 0.  (1)

Let us now introduce the Lyapunov exponents of f. Suppose that

log⁺ ‖F‖, log⁺ ‖F^{-1}‖ ∈ L¹(X, μ).  (2)

Given x ∈ X and a vector v ∈ T_x X, let us define

γ̄_±(x, v) = limsup_{n→±∞} (1/|n|) log |F^n(x) v|,  γ_±(x, v) = liminf_{n→±∞} (1/|n|) log |F^n(x) v|.  (3)

If the limsup and the liminf coincide, we denote the common limit by γ_±(x, v). In principle, the limits γ_±(x, v) depend on x and v. The following version of the classical theorem by Oseledets (1968) (see also Raghunathan 1979; Ruelle 1979) gives an answer to this problem. The interested reader can find more details on Lyapunov exponents in Barreira and Pesin (2013).
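The limits in (3) can be approximated numerically by accumulating the Jacobian product (1). The following sketch (ours, not part of the paper) estimates the largest Lyapunov exponent for the linear toral map with matrix A = [[2, 1], [1, 1]] (Arnold's Cat Map), whose Jacobian is the constant matrix A, so the estimate can be compared with the logarithm of the largest eigenvalue of A:

```python
import numpy as np

# Illustrative sketch: estimate gamma_1 = lim (1/n) log |F^n(x) v| for a
# generic v, using the Jacobian product of equation (1).  For this linear
# map F(x) = A for every x, so the product is just repeated multiplication.
A = np.array([[2.0, 1.0], [1.0, 1.0]])

def largest_lyapunov(n, v=np.array([1.0, 0.0])):
    log_norm = 0.0
    for _ in range(n):
        v = A @ v
        norm = np.linalg.norm(v)
        log_norm += np.log(norm)   # accumulate the growth, then renormalise
        v /= norm
    return log_norm / n

gamma1 = largest_lyapunov(1000)
exact = np.log(np.linalg.eigvalsh(A)[-1])   # log((3 + sqrt(5)) / 2)
print(gamma1, exact)
```

For a generic starting vector the renormalised product aligns with the expanding direction, and the two printed values agree up to a small transient error of order 1/n.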
Theorem 1 (Oseledets) Let f : X → X be a measure-preserving diffeomorphism of a d-dimensional differentiable Riemannian manifold X endowed with a σ-algebra B and an f-invariant probability measure μ, and assume that the Jacobian matrix F(x) satisfies (2). Then, for μ-almost every x ∈ X there exist numbers γ_1(x) > ⋯ > γ_{r(x)}(x), with r(x) ≤ d, and a decomposition

T_x X = E_1(x) ⊕ ⋯ ⊕ E_{r(x)}(x)

such that:
(i) for every i = 1, …, r(x) and every v ∈ E_i(x) \ {0}, the limits γ_±(x, v) exist and γ_+(x, v) = −γ_−(x, v) = γ_i(x);
(ii) the limit

Λ(x) = lim_{n→+∞} (F^n(x)^T F^n(x))^{1/(2n)}

exists and exp γ_1(x), …, exp γ_{r(x)}(x) are its eigenvalues.

Definition 1
The numbers γ_1(x), …, γ_{r(x)}(x) given in Theorem 1 are the Lyapunov exponents of f at x, and for each γ_i(x), i = 1, …, r(x), the dimension of the corresponding vector space E_i(x) is called the multiplicity of the exponent.
Without loss of generality, one can assume that the diffeomorphism f is ergodic, so that the Lyapunov exponents and their multiplicities do not depend on x. The case of non-ergodic maps can be treated by the standard procedure of ergodic decomposition, obtaining similar results depending on the ergodic component to which the initial condition x belongs.
Definition 2 A diffeomorphism f : X → X is called hyperbolic if it has no vanishing Lyapunov exponents.
In the particular case of Hamiltonian maps, or of maps preserving the volume form of a manifold, the Lyapunov exponents sum to zero; hence one can easily deduce that hyperbolic maps necessarily have positive Lyapunov exponents, so they are chaotic. Moreover, by Theorem 1-(i), it follows that for a hyperbolic map either f or f^{-1} is chaotic. This is an important remark for our main results.
In the following, we consider diffeomorphisms depending on a parameter k ∈ K ⊂ R and use the notation f_k : X → X. We also assume that the dependence on k is differentiable and that the probability measure μ of the manifold X is f_k-invariant and the map is ergodic for all k ∈ K. The Jacobian matrix of f_k^n with respect to x for n ∈ Z is denoted by F_k^n(x) and can be written as in (1). Assuming that (2) is satisfied, we can apply the Oseledets Theorem to f_k for all k ∈ K and find its Lyapunov exponents at μ-a.e. x ∈ X. Since the map f_k is differentiable also with respect to the parameter k, we can also consider the Jacobian matrix of f_k^n with respect to (x, k), which is denoted by F̃_k^n(x) for n ∈ Z. For the sake of simplicity, from now on we will only consider the case when X is an open domain of R^d. This restriction has the only purpose of simplifying the notation and the exposition. The results can be readily extended to the general setting by using the metric of a Riemannian manifold.
Since f_k is a diffeomorphism of a domain in R^d, we use the notation f_k(x) = (f_k^{(1)}(x), …, f_k^{(d)}(x)) for its components, so that its Jacobian matrix with respect to x reads

F_k(x) = ( ∂f_k^{(i)}/∂x_j (x) )_{i,j = 1, …, d}.

Analogously, the Jacobian matrix F̃_k(x) with respect to (x, k) reads

F̃_k(x) = ( F_k(x) | ∂f_k/∂k (x) ),  (4)

a d × (d+1) matrix, and we note that, for n ∈ Z, the matrix F̃_k^n(x) can be written as

F̃_k^n(x) = ( F_k^n(x) | ∂f_k^n/∂k (x) ).  (5)

Statement of the problem
A general statement of the problem can be found in Marò (2020). For the sake of completeness, here we recall and adapt it to the present notations.
Consider a map f_k as in the previous section. Given an initial condition x, its orbit is completely determined by the iterates f_k^n(x) for n ∈ Z. Suppose instead that we have been observing the evolution of the state of a system modelled by f_k and that we are given the observations (X_n) for |n| ≤ N. Following Milani and Gronchi (2010), we set up an orbit determination process to determine the unknown parameters. We consider two different scenarios.
(A) Only the initial conditions x are unknown.
(B) Both the initial conditions x and the parameter k are unknown.
In both cases, we search for the values of the parameters that best approximate, in the least squares sense, the given observations. We first define the residuals as

ξ_{n,k}(x) = X_n − f_k^n(x) in case (A),  ξ̃_n(x, k) = X_n − f_k^n(x) in case (B),  |n| ≤ N.

We stress that, even if the expressions coincide, in case (A) the residuals are defined in terms of a fixed k, whereas in case (B) the value of k is to be determined. Subsequently, we call the least squares solution x_0 in case (A), or (x_0, k_0) in case (B), the (local) minimiser of the target function

Q_k(x) = (1/(2N+1)) Σ_{|n|≤N} |ξ_{n,k}(x)|² in case (A),  Q̃(x, k) = (1/(2N+1)) Σ_{|n|≤N} |ξ̃_n(x, k)|² in case (B).

We will not be concerned with the existence and computation of the minima. This is a very delicate task, solved via iterative schemes such as the Gauss-Newton algorithm and the differential corrections, which crucially depend on the choice of the initial guess. See Gronchi et al. (2015), Ma et al. (2018) for some recent results on this topic for the asteroid and space debris cases. For the case of chaotic maps that we study in this paper, the problem is considered in Serra et al. (2018), Spoto and Milani (2016), where the computational problems that occur for large N are treated with advanced techniques. In the rest of this paper, we assume that the least squares solution x_0, or (x_0, k_0), exists, and we refer to it as the nominal solution.
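As a concrete illustration, here is a minimal sketch (our own code and notation, not from the cited papers) of the case (A) residuals and target function, taking as model a common form of the standard map and noise-free observations; the normalisation by the number of observations is an assumption made for illustration:

```python
import numpy as np

K = 0.5  # fixed parameter in case (A)

def f(x, n=1):
    """n-th iterate of a standard-map-like model (inverse used for n < 0)."""
    x = np.array(x, float)
    for _ in range(abs(n)):
        if n > 0:
            y = x[1] + K * np.sin(x[0])
            x = np.array([x[0] + y, y])
        else:
            x0 = x[0] - x[1]
            x = np.array([x0, x[1] - K * np.sin(x0)])
    return x

def target(x, observations):
    """Q_k(x) = (1/(2N+1)) sum_{|n|<=N} |X_n - f_k^n(x)|^2."""
    res = [obs - f(x, n) for n, obs in observations.items()]
    return sum(np.dot(r, r) for r in res) / len(observations)

truth = np.array([3.0, 0.0])
obs = {n: f(truth, n) for n in range(-5, 6)}   # noise-free observations
print(target(truth, obs))   # 0.0 at the true initial condition
```

With noisy observations the minimiser of `target` would be found by an iterative scheme such as Gauss-Newton, starting from a first guess; the sketch only shows the objects being minimised.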
In general, the observations (X_n) contain errors; hence, values of x, or of (x, k), that make the target function slightly bigger than the minimum value Q_k(x_0), or Q̃(x_0, k_0), are acceptable. This leads to the definition of the confidence region as

Z(σ) = { x : Q_k(x) ≤ Q_k(x_0) + σ²/(2N+1) } in case (A),  Z̃(σ) = { (x, k) : Q̃(x, k) ≤ Q̃(x_0, k_0) + σ²/(2N+1) } in case (B),

where σ > 0 is an empirical parameter chosen depending on statistical properties and bounding the acceptable errors; the value of σ is irrelevant for our purposes, hence in the next sections we will set σ = 1. Expanding the target functions Q_k(x) and Q̃(x, k) at the corresponding nominal solution up to second order, we get, using the notation introduced in (1) and (5), under the hypothesis that the residuals corresponding to the nominal solution are small so that we can neglect the terms containing ξ_{n,k}(x_0) and ξ̃_n(x_0, k_0), the normal matrices defined as

C_{N,k}(x) = Σ_{|n|≤N} (F_k^n(x))^T F_k^n(x),  C̃_N(x, k) = Σ_{|n|≤N} (F̃_k^n(x))^T F̃_k^n(x),  (8)

and the associated covariance matrices defined as

Γ_{N,k}(x) = [C_{N,k}(x)]^{-1},  Γ̃_N(x, k) = [C̃_N(x, k)]^{-1}.

Note that the matrices C_{N,k}(x) and Γ_{N,k}(x) defined for case (A) are d × d, while the matrices C̃_N(x, k) and Γ̃_N(x, k) are (d+1) × (d+1). Moreover, the normal matrices are symmetric and positive definite, since f_k is a diffeomorphism and the operators F_k(x) and F̃_k(x) have maximum rank. Hence, the confidence regions can be approximated by the confidence ellipsoids given by

E_{N,k}(x_0) = { x : (x − x_0)^T C_{N,k}(x_0)(x − x_0) ≤ σ² }  (9)

for case (A) and by

Ẽ_N(x_0, k_0) = { (x, k) : (x − x_0, k − k_0)^T C̃_N(x_0, k_0)(x − x_0, k − k_0) ≤ σ² }

for case (B). The covariance matrices Γ_{N,k}(x) and Γ̃_N(x, k) describe the corresponding confidence ellipsoids E_{N,k} and Ẽ_N, since the axes of the ellipsoids are proportional to the square roots of the eigenvalues of the corresponding matrix and are directed along the corresponding eigenvectors. Since the matrix Γ_{N,k}(x) is positive definite, its eigenvalues are all real and positive and we denote them by

0 < λ^(1)_{N,k} ≤ ⋯ ≤ λ^(d)_{N,k}.

Analogously, we denote by 0 < λ̃^(1)_N ≤ ⋯ ≤ λ̃^(d+1)_N the eigenvalues of Γ̃_N(x, k).
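For a map with constant Jacobian the normal matrix can be assembled in closed form, which makes the construction above very concrete. The following sketch (our code) builds C_N for the Cat-Map Jacobian and prints the eigenvalues of the covariance matrix C_N^{-1}, whose square roots are proportional to the ellipsoid axes:

```python
import numpy as np

# Sketch: C_N = sum_{|n|<=N} (F^n)^T F^n for a linear hyperbolic map,
# where F^n is just the n-th power of the constant Jacobian A.
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # cat-map Jacobian, constant in x

def covariance_eigenvalues(N):
    C = sum(np.linalg.matrix_power(A, n).T @ np.linalg.matrix_power(A, n)
            for n in range(-N, N + 1))          # negative n uses A^{-1}
    return np.linalg.eigvalsh(np.linalg.inv(C))  # ascending order

for N in (5, 10, 20):
    print(N, covariance_eigenvalues(N))   # both eigenvalues shrink with N
```

Because the two-sided sum contains both expanding forward products and expanding backward products, every direction of the normal matrix grows, so both covariance eigenvalues (hence both ellipsoid axes) shrink as N increases, in agreement with the hyperbolic case (A) discussed below.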
The regions Z(σ) and Z̃(σ) represent the uncertainty of the nominal solution: the values inside them are acceptable, and their projections on the coordinate axes represent the (marginal) uncertainties of the single coordinates. See Fig. 1, where the values σ_x, σ_y represent the marginal uncertainties of x_0, y_0, respectively, and depend on the value of σ.
We remark that the normal and covariance matrices also have a probabilistic interpretation, see Milani and Gronchi (2010).
From the point of view of the applications (e.g. impact monitoring Milani and Valsecchi 1999), it is of fundamental importance to know the shape and the size of the confidence ellipsoid E. Hence, the question that we here address, stated in a broad sense, is the following:

Problem 1
Given a map f k as in Sect. 2.1 and a nominal solution of the associated orbit determination process, describe the confidence ellipsoids for large N in cases (A) and (B).

Remark 1
The solution of the problem passes through the computation of the eigenvalues of the covariance matrices for large N . Note that they crucially depend on the dynamics, since we have to compute the linearisation of the system along an orbit.

Main results
In this paper, we consider Problem 1 for hyperbolic maps. We now state and comment on the results, giving the proofs in Sects. 3 and 4.
For all k ∈ K ⊂ R, let f k : X → X be an ergodic hyperbolic diffeomorphism of an open domain X ⊂ R d with f k -invariant probability measure μ, and assume that f k satisfies (2).
For case (A), we have the following result.

Theorem 2 Let γ_1, …, γ_r be the Lyapunov exponents of f_k, and let

γ_* = min_{i=1,…,r} |γ_i|,  γ^* = max_{i=1,…,r} |γ_i|.

For μ-almost every x ∈ X, the eigenvalues λ^(i)_{N,k}(x) of the covariance matrix Γ_{N,k}(x) satisfy

−2γ^* ≤ liminf_{N→∞} (1/N) log λ^(i)_{N,k}(x) ≤ limsup_{N→∞} (1/N) log λ^(i)_{N,k}(x) ≤ −2γ_*

for every i = 1, …, d.
Theorem 2 shows that the axes of the confidence ellipsoid defined in (9) shrink exponentially fast with the number of observations. In fact, the lengths of the axes of E_{N,k}(x) are proportional to the square roots of the eigenvalues of the corresponding covariance matrix Γ_{N,k}(x). Hence, the exponential rate of decay of the uncertainties is controlled by the Lyapunov exponents of the orbit corresponding to the nominal solution.
We now show how the result changes in case (B). We prove the following.

Theorem 3 For μ-almost every x ∈ X, the largest eigenvalue λ̃^(d+1)_N(x, k) of the covariance matrix Γ̃_N(x, k) is a positive number which decreases with N and satisfies

lim_{N→∞} (1/N) log λ̃^(d+1)_N(x, k) = 0.
Thus, by Theorem 3, if the orbit determination problem includes the determination of the parameter k, the confidence ellipsoid Ẽ_N has one axis which shrinks slower than any exponential. Since the uncertainties are the projections of the confidence ellipsoid on the directions of the parameters to be determined, in general the slow decay of this axis affects all the uncertainties, giving a lower bound on their speed of decay. In Sect. 5, we consider an example for which we can prove a more precise asymptotic behaviour for λ̃^(d+1)_N.

Remark 2

By the proof of Theorem 3, we cannot exclude that λ̃^(d+1)_N converges to a positive constant. However, this would imply the failure of the orbit determination process, since the confidence ellipsoid would not shrink to a point.

Remark 3
The methods used in Theorems 2 and 3 can be applied also to non-hyperbolic diffeomorphisms, showing a less than exponential decay of the uncertainties also in case (A). This problem was studied in Marò (2020) for nominal solutions living on invariant curves of exact symplectic twist maps of the cylinder, for which a sharp estimate for the rate of decay of the uncertainties was proved.

Comparison with the numerical results in Serra et al. (2018), Spoto and Milani (2016)
Our results in Theorems 2 and 3 are consistent with the numerical estimates in Serra et al. (2018), Spoto and Milani (2016). The authors considered a classical model in Celestial Mechanics, the well-known Chirikov standard map f_k : T² → T², defined as

f_k(x, y) = (x + y + k sin(x), y + k sin(x)).

The data of an orbit determination process were produced by adding random Gaussian noise to the orbit with initial condition (x_0, y_0) = (3, 0) and k = 0.5. This initial condition is close to the hyperbolic fixed point and likely gives rise to a chaotic orbit. The differential corrections algorithm was then performed both in cases (A) and (B).
Working in quadruple precision, numerical instability of the differential corrections occurs for a number of observations N ∼ 300. For the same number of iterations, the largest eigenvalue of the state transition matrix was computed; a linear fit gives a Lyapunov indicator of +0.086. It represents the largest Lyapunov exponent of the solution to which the differential corrections converge, i.e. the largest Lyapunov exponent of the nominal value.
To get a comparison with our results in case (A), we apply Theorem 2 to the standard map with initial condition given by the nominal value, assuming that the largest Lyapunov exponent coincides with the Lyapunov indicator, and obtain

−γ_1 = γ_2 = γ_* = γ^* = 0.086.

Hence, we expect the eigenvalues of the covariance matrix to shrink as

λ^(i)_{N,0.5} ∼ e^{−2(0.086)N}, i = 1, 2.  (12)

In the numerical simulations for case (A) performed in Serra et al. (2018), Spoto and Milani (2016), the standard deviations of the components x and y were computed at every iteration, corresponding to the values σ_x and σ_y in Fig. 1. By a linear fit in logarithmic scale, the authors got the slopes −0.084 for x and −0.083 for y, deducing numerically the following decay of the uncertainties as functions of N:

σ_x ∼ e^{−0.084 N},  σ_y ∼ e^{−0.083 N}.  (13)

Note that, since the uncertainties σ_x, σ_y are proportional to the square roots of the eigenvalues λ^(i)_{N,0.5} of the covariance matrix, the estimate (12) coming from Theorem 2 is in perfect accordance with the numerical results (13) obtained in Serra et al. (2018), Spoto and Milani (2016). Thus, Theorem 2 represents a proof of the following conjecture posed in Spoto and Milani (2016) for case (A): "exponentially improving determination of the initial conditions only is possible, and the exponent appears to be very close to the opposite of the Lyapunov exponent." Concerning case (B), the numerical simulations give a decrease in the uncertainties of the form N^α, with different values of α < −0.5 for the parameters x, y, k. No quantitative conjecture is posed on the rate of decrease apart from it being strictly slower than exponential. This is consistent with our results in Theorem 3, and with the lower bound that we obtain for the decay of the uncertainties for the systems studied in Sect. 5.
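The exponential decay of the largest covariance eigenvalue can be reproduced with a short computation (our own sketch, not the code of the cited papers; the map form is the one written above, and the finite-time rate over a short arc will only roughly match the asymptotic one):

```python
import numpy as np

# Accumulate the case-(A) normal matrix C_N = sum_{|n|<=N} (F^n)^T F^n
# along the standard-map orbit of (3, 0) with k = 0.5, then fit the decay
# rate of log(largest eigenvalue of C_N^{-1}) against N.
K = 0.5

def fwd(p):
    x, y = p
    y2 = y + K * np.sin(x)
    return (x + y2, y2)

def bwd(p):
    x, y = p
    x0 = x - y
    return (x0, y - K * np.sin(x0))

def J_fwd(p):
    c = K * np.cos(p[0])
    return np.array([[1 + c, 1.0], [c, 1.0]])

def J_bwd(p):                       # Jacobian of the inverse map
    c = K * np.cos(p[0] - p[1])
    return np.array([[1.0, -1.0], [-c, 1 + c]])

p_p = p_m = (3.0, 0.0)
F_p = F_m = np.eye(2)
C = np.eye(2)                       # n = 0 term: F^0 = identity
samples = []
for n in range(1, 80):
    F_p = J_fwd(p_p) @ F_p          # F^n(x) = F(f^{n-1}x) ... F(x)
    p_p = fwd(p_p)
    F_m = J_bwd(p_m) @ F_m          # F^{-n}(x), product of inverse Jacobians
    p_m = bwd(p_m)
    C += F_p.T @ F_p + F_m.T @ F_m
    lam_max = 1.0 / np.linalg.eigvalsh(C)[0]
    samples.append((n, np.log(lam_max)))

ns, logs = zip(*samples[10:])       # discard the initial transient
slope = np.polyfit(ns, logs, 1)[0]
print(slope)   # negative slope, to be compared with -2 * (Lyapunov indicator)
```

The fitted slope is negative and, for a chaotic nominal orbit, of the order of twice the finite-time Lyapunov exponent of the arc, which is the content of (12).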

Proof of Theorem 2
Let us fix k ∈ K and let x ∈ X be a point for which the Oseledets Theorem holds. Consider the normal matrix C_{N,k}(x) as in (8) and, recalling that it is positive definite, denote by

0 < δ^(1)_{N,k}(x) ≤ ⋯ ≤ δ^(d)_{N,k}(x)

its eigenvalues. Using the Oseledets Theorem, for every i = 1, …, r let E_i be the vector spaces of the decomposition of T_x X ≅ R^d, and write

v = v_1 + ⋯ + v_r, v_i ∈ E_i,  (14)

for a unit vector v ∈ R^d. By the Oseledets Theorem and (3), for every ε > 0 there exist n_+ = n_+(x) > 0 and n_− = n_−(x) < 0 such that

e^{2(|γ_i| − ε)|n|} |v_i|² ≤ |F_k^n(x) v_i|² ≤ e^{2(|γ_i| + ε)|n|} |v_i|²  for n > n_+ and n < n_−;

hence, from (14) we can choose ε ∈ (0, γ_*) and find N̄(x) := max{n_+(x), −n_−(x)} such that for all N > N̄(x)

c_−(x) e^{2(γ_* − ε)N} ≤ v^T C_{N,k}(x) v ≤ c_+(x) + Σ_{n=0}^{N} e^{2(γ^* + ε)n} = c_+(x) + (e^{2(γ^* + ε)(N+1)} − 1)/(e^{2(γ^* + ε)} − 1),

where in the left-hand side we have neglected the terms with e^{2(−|γ_i| − ε)|n|}, for which the series converge. Hence, recalling the variational characterisation of the eigenvalues of a symmetric matrix, given ε ∈ (0, γ_*) there exists N̄(x) such that for N > N̄(x)

δ^(1)_{N,k}(x) ≥ c_−(x) e^{2(γ_* − ε)N},  δ^(d)_{N,k}(x) ≤ c_+(x) + (e^{2(γ^* + ε)(N+1)} − 1)/(e^{2(γ^* + ε)} − 1).

Finally, using that by definition

λ^(i)_{N,k}(x) = [δ^(d+1−i)_{N,k}(x)]^{−1}, i = 1, …, d,

we obtain for N > N̄(x)

−2(γ^* + ε) ≤ (1/N) log λ^(i)_{N,k}(x) ≤ −2(γ_* − ε) + o(1).

Since ε is arbitrary, the theorem follows.

Proof of Theorem 3
Let us introduce the auxiliary diffeomorphism

g : X × K → X × K,  g(x, k) := (f_k(x), k).

Recalling (4), we can write the (d+1) × (d+1) Jacobian matrix G(x, k) of g with respect to (x, k) as follows

G(x, k) = ( F_k(x)  ∂f_k/∂k(x) ; 0  1 ),

and for n ∈ Z we denote by G^n(x, k) the Jacobian matrix of g^n with respect to (x, k). As in (1), by the chain rule it can be written as

G^n(x, k) = G(g^{n−1}(x, k)) G(g^{n−2}(x, k)) ⋯ G(x, k)  for n ≥ 1,
G^n(x, k) = 1  for n = 0,
G^n(x, k) = G^{−1}(g^n(x, k)) G^{−1}(g^{n+1}(x, k)) ⋯ G^{−1}(g^{−1}(x, k))  for n < 0.
Finally, we consider the auxiliary normal matrix

C^g_N(x, k) = Σ_{|n|≤N} (G^n(x, k))^T G^n(x, k),

which is related to the normal matrix C̃_N(x, k) as shown in the following lemma.
Lemma 1 For all n ∈ Z, it holds

G^n(x, k) = ( F_k^n(x)  ∂f_k^n/∂k(x) ; 0  1 ) = ( F̃_k^n(x) ; 0 ⋯ 0  1 ),  (17)

and

C^g_N(x, k) = C̃_N(x, k) + (2N + 1) e_{d+1} e_{d+1}^T,  (18)

where e_{d+1} = (0, …, 0, 1)^T ∈ R^{d+1}.

Proof Formula (17) comes from the definitions, noting that for n > 0

∂f_k^n/∂k(x) = Σ_{j=0}^{n−1} F_k^{n−1−j}(f_k^{j+1}(x)) ∂f_k/∂k(f_k^j(x)),

and a similar formula holds for n < 0. Formula (18) is a straightforward consequence of (17).
We now study the Lyapunov exponents of g. First, we consider the measure μ × δ_k on X × K, which is clearly g-invariant. Moreover, if f_k is ergodic, the same holds for g. Thus, under assumption (2) for f_k, we can apply the Oseledets Theorem to g and obtain that for μ-almost every x ∈ X the map g admits Lyapunov exponents γ̃_1, …, γ̃_r̃ and an associated decomposition

R^{d+1} = Ẽ_1 ⊕ ⋯ ⊕ Ẽ_r̃.

In the following lemma, we describe the relation between the Lyapunov exponents of g and those of f_k.

Lemma 2 Given g as above, we have r̃ = r + 1, and

γ̃_i = γ_i for i = 1, …, r,  γ̃_{r+1} = 0.

Proof First of all, if γ_i is a Lyapunov exponent of f_k, then it is also a Lyapunov exponent of g with multiplicity not smaller, in the sense that γ̃_i = γ_i and dim E_i ≤ dim Ẽ_i for all i = 1, …, r. Actually, for every v ∈ E_i and for μ-almost every x ∈ X, using (17) we have

|G^n(x, k)(v, 0)^T| = |F_k^n(x) v|.

Then, we prove that the last exponent of g is zero. To this end, we recall that by the Oseledets Theorem the eigenvalues of the matrix

Λ(x) = lim_{n→+∞} (F_k^n(x)^T F_k^n(x))^{1/(2n)}

are e^{γ_1}, e^{γ_2}, …, e^{γ_r} with multiplicities dim E_i, and the eigenvalues of the matrix

Λ̃(x, k) = lim_{n→+∞} (G^n(x, k)^T G^n(x, k))^{1/(2n)}

are e^{γ̃_1}, e^{γ̃_2}, …, e^{γ̃_r̃} with multiplicities dim Ẽ_i. Now, by (17), for every n ∈ Z

det G^n(x, k) = det F_k^n(x),

so that the Lyapunov exponents of g, counted with multiplicity, have the same sum as those of f_k. But since the Lyapunov exponents of f_k are all different from zero and they are also Lyapunov exponents of g with multiplicity not smaller, we must have r̃ = r + 1, γ̃_r̃ = 0 and dim Ẽ_r̃ = 1.
The conclusion of the proof of Theorem 3 is a consequence of the following lemma. To state it, let us consider the normal matrix C̃_N(x, k) and denote its eigenvalues by

0 < δ̃^(1)_N(x, k) ≤ ⋯ ≤ δ̃^(d+1)_N(x, k).

Lemma 3 For μ-almost every x ∈ X, the smallest eigenvalue δ̃^(1)_N(x, k) is a positive number which increases with N and satisfies

lim_{N→∞} (1/N) log δ̃^(1)_N(x, k) = 0.
Proof The sequence {δ̃^(1)_N(x, k)}_N is an increasing sequence of positive terms, since

δ̃^(1)_N(x, k) = min_{|v|=1} v^T C̃_N(x, k) v

and v^T C̃_N(x, k) v is the sum of 2N + 1 positive terms. Let us now denote by v_0 ∈ R^{d+1} the unit vector corresponding to the vanishing Lyapunov exponent of g, so that

lim_{n→±∞} (1/|n|) log |G^n(x, k) v_0| = 0.  (20)

Hence, for every ε > 0 there exists n̄ such that |G^n(x, k) v_0|² < e^{2ε|n|} for |n| > n̄, so that, for N > n̄,

v_0^T C^g_N(x, k) v_0 = Σ_{|n|≤N} |G^n(x, k) v_0|² ≤ c(x, k) + 2 (e^{2ε(N+1)} − 1)/(e^{2ε} − 1)

for some constant c(x, k) independent of N. We now use Lemma 1 to find an estimate for the eigenvalues of C̃_N(x, k). From the variational characterisation of the eigenvalues and (18),

δ̃^(1)_N(x, k) ≤ v_0^T C̃_N(x, k) v_0 = v_0^T C^g_N(x, k) v_0 − (2N + 1)(v_0 · e_{d+1})².

Hence, for every ε > 0 there exists a constant c_1(x, k) not depending on N such that

δ̃^(1)_N(x, k) ≤ c_1(x, k) e^{2εN}.

Since ε is arbitrary, the result is proved.
To finish the proof of Theorem 3, it is now enough to recall that λ̃^(d+1)_N(x, k) = [δ̃^(1)_N(x, k)]^{−1} and apply Lemma 3.

An example
In this section, we present a class of maps for which the estimates on the eigenvalues of Γ_{N,k} and Γ̃_N in Theorems 2 and 3 can be made explicit. Inspired by some computations presented in Milani and Baù, we consider the case of an affine hyperbolic diffeomorphism of the torus T^d = R^d/Z^d. One example of this class for d = 2 is the well-known Arnold's Cat Map.

Fixing a matrix A ∈ SL(d, Z) and a vector b ∈ R^d, we define

C_k : T^d → T^d,  C_k(x) := A x + k b  mod Z^d.

Since det A = 1, the Lebesgue measure m is C_k-invariant. Finally, we assume that A has no eigenvalues of modulus 1, since, as shown below, this implies that C_k is hyperbolic. We denote by δ_1, …, δ_d the eigenvalues of A. The orbits of the map C_k can be computed explicitly; more precisely, we have

Lemma 4 For every n ∈ Z, setting w = (1 − A)^{−1} b,

C_k^n(x) = A^n (x − k w) + k w  mod Z^d

for all x ∈ T^d.

Proof
The case n = 0 is trivial. For n ≥ 1, it follows by induction, noting that

C_k(A^n (x − k w) + k w) = A^{n+1} (x − k w) + k (A w + b) = A^{n+1} (x − k w) + k w,

since (1 − A) w = b implies A w + b = w; the case n < 0 is analogous, using the inverse map.

It is an easy consequence of this lemma that the matrices F_k^n(x) and F̃_k^n(x) introduced in (4) and (5) are constant and independent of x and k. For the same reason, the Lyapunov exponents of C_k are constant everywhere.
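The closed form of Lemma 4 can be checked numerically; the following sketch (our code, with an arbitrary illustrative choice of b and k) compares the iterated map with the closed-form expression, measuring distances modulo 1 in a wrap-safe way:

```python
import numpy as np

# Check of Lemma 4: C_k^n(x) = A^n (x - k w) + k w (mod 1),
# with w = (1 - A)^{-1} b.  A is the Cat-Map matrix.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
b = np.array([0.3, 0.7])    # illustrative choice
k = 0.5                     # illustrative choice
w = np.linalg.solve(np.eye(2) - A, b)

def C(x):                   # one step of the affine map, mod 1
    return (A @ x + k * b) % 1.0

x = np.array([0.123, 0.456])
z = x.copy()
for n in range(1, 8):
    z = C(z)
    closed = (np.linalg.matrix_power(A, n) @ (x - k * w) + k * w) % 1.0
    diff = (z - closed + 0.5) % 1.0 - 0.5   # wrap-safe distance on the torus
    assert np.all(np.abs(diff) < 1e-9)
print("closed form matches the iterated orbit for n = 1..7")
```

Since A has integer entries, reducing modulo 1 at every step or only once at the end gives the same point of the torus, which is why the two computations agree.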
We now give Theorem 2 for the maps C_k under the assumption that A is symmetric. In this case, the result is much sharper, giving the exact exponential rate of decrease for all the eigenvalues of the covariance matrix Γ_{N,k}(x).
Proposition 1 Let C_k : T^d → T^d be defined as above with A symmetric, and let γ_1, …, γ_d be its Lyapunov exponents counted with multiplicity, that is, the exponents are not necessarily different. Then, the eigenvalues λ^(i)_{N,k} of the covariance matrix Γ_{N,k} satisfy

lim_{N→∞} (1/N) log λ^(i)_{N,k} = −2|γ_i|  for every i = 1, …, d.

Proof Since A is symmetric, there exists an orthogonal matrix P such that A = P^T Λ P, Λ = diag(δ_1, …, δ_d), with δ_i ∈ R for all i = 1, …, d, and in particular the Lyapunov exponents of C_k are given by γ_i = log |δ_i|. Since A has no eigenvalues of modulus 1, the map C_k is hyperbolic (see Definition 2). Now, by Lemma 4, the normal matrix satisfies

C_{N,k} = Σ_{|n|≤N} (A^n)^T A^n = Σ_{|n|≤N} A^{2n} = P^T ( Σ_{|n|≤N} Λ^{2n} ) P,

and its eigenvalues are

δ^(i)_{N,k} = Σ_{|n|≤N} δ_i^{2n} = (δ_i^{2(N+1)} − δ_i^{−2N})/(δ_i^2 − 1),

where we recall that the eigenvalues δ_i are real. We conclude using that λ^(i)_{N,k} = [δ^(i)_{N,k}]^{−1} and that δ^(i)_{N,k} grows as e^{2|γ_i|N} as N → ∞.

We can say more also on the asymptotic behaviour of the largest eigenvalue λ̃^(d+1)_N of the covariance matrix Γ̃_N of the orbit determination problem in case (B). In Theorem 3, we proved that λ̃^(d+1)_N decreases slower than exponentially, the lack of a precise estimate being due to the uncertainty on the speed of convergence to zero in (20). In the class of maps we are studying in this section, we can be much more precise on the asymptotic behaviour of λ̃^(d+1)_N.

Proposition 2 Let C_k : T^d → T^d be defined as above. Then, the largest eigenvalue λ̃^(d+1)_N of the covariance matrix Γ̃_N satisfies

λ̃^(d+1)_N ≥ (|w|² + 1)/((2N + 1)|w|²),

so that it decays at most as N^{−1}.

Proof Using the same notation of Sect. 4, we consider the auxiliary map g : T^d × K → T^d × K, g(x, k) = (C_k(x), k), and, recalling (17) and Lemma 4, its Jacobian matrix G^n takes for all n ∈ Z the form

G^n = ( A^n  (1 − A^n) w ; 0  1 )

for all (x, k), where we recall that w = (1 − A)^{−1} b. We note that the eigenvalues of G^n are equal to δ_1^n, …, δ_d^n, 1, where δ_1, …, δ_d are the eigenvalues of A. Actually, choosing v_i such that A v_i = δ_i v_i, then G^n (v_i, 0)^T = (A^n v_i, 0)^T = δ_i^n (v_i, 0)^T, and, choosing the vector v_0 = (w, 1) ∈ R^{d+1}, we have

G^n v_0 = (A^n w + (1 − A^n) w, 1)^T = (w, 1)^T = v_0.

We thus get that for the normal matrix C^g_N(x, k) it holds

v_0^T C^g_N(x, k) v_0 = Σ_{|n|≤N} |G^n v_0|² = (2N + 1)|v_0|²,

and for the normal matrix C̃_N(x, k) of C_k relative to case (B) of the orbit determination problem, using (18),

v_0^T C̃_N(x, k) v_0 = (2N + 1)|v_0|² − (2N + 1) = (2N + 1)|w|².

Hence, from the variational characterisation of the eigenvalues of C̃_N(x, k), we obtain that its smallest eigenvalue satisfies

δ̃^(1)_N ≤ (2N + 1) |w|²/(|w|² + 1),

and the result follows recalling that λ̃^(d+1)_N = [δ̃^(1)_N]^{−1}.
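The Rayleigh-quotient bound behind Proposition 2 can also be verified numerically for small N (our sketch; for large N the entries of A^{-n} grow so fast that the smallest eigenvalue becomes numerically unreliable):

```python
import numpy as np

# Check that the smallest eigenvalue of the case-(B) normal matrix grows
# at most linearly in N: delta_min <= (2N+1)|w|^2 / (|w|^2 + 1).
# A is the Cat-Map matrix; b is an arbitrary illustrative choice.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
b = np.array([0.3, 0.7])
w = np.linalg.solve(np.eye(2) - A, b)

def normal_matrix(N):
    """C~_N = sum_{|n|<=N} (F~^n)^T F~^n with F~^n = [A^n | (1 - A^n) w]."""
    C = np.zeros((3, 3))
    for n in range(-N, N + 1):
        An = np.linalg.matrix_power(A, n)
        Fn = np.hstack([An, ((np.eye(2) - An) @ w).reshape(2, 1)])
        C += Fn.T @ Fn
    return C

for N in (5, 10, 15):
    delta_min = np.linalg.eigvalsh(normal_matrix(N))[0]
    bound = (2 * N + 1) * (w @ w) / (w @ w + 1)
    print(N, delta_min <= bound + 1e-6)   # prints True for each N
```

Inverting, the largest covariance eigenvalue decays at most as 1/N, which is the polynomial lower bound on the uncertainties announced in the introduction.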

Conclusions and future work
We have considered the problem of orbit determination under the assumption that the number of observations grows simultaneously with the time span over which they are performed. Following the numerical results in Serra et al. (2018), Spoto and Milani (2016), we have studied the asymptotic rate of decay of the uncertainties as the number of observations grows.
We have considered the problem for hyperbolic maps, for which all the Lyapunov exponents are not zero, depending on a parameter, and we have treated separately the cases in which the parameter is included or not in the orbit determination procedure. We have analytically proved that if the parameter is not included then the uncertainties decrease exponentially, while if the parameter is included, then the uncertainties decrease strictly slower than exponentially. This is consistent with the numerical results and gives a proof of one of the main questions posed in Serra et al. (2018), Spoto and Milani (2016).
Together with the results in Marò (2020), which considered the ordered case (KAM scenario), this paper is a step towards the complete understanding of the numerical results.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.