Measurements of $H_0$ and reconstruction of the dark energy properties from a model-independent joint analysis

Gaussian processes (GP) provide an elegant and model-independent method for extracting cosmological information from observational data. In this work, we employ GP to perform a joint analysis using geometrical cosmological probes, namely, Supernovae Type Ia (SN), Cosmic chronometers (CC), Baryon Acoustic Oscillations (BAO), and the H0LiCOW lenses sample, to constrain the Hubble constant $H_0$ and reconstruct some properties of dark energy (DE), viz., the equation of state parameter $w$, the sound speed of DE perturbations $c^2_s$, and the ratio of DE density evolution $X = \rho_{\rm de}/\rho_{\rm de,0}$. From the joint analysis SN+CC+BAO+H0LiCOW, we find that $H_0$ is constrained at 1.1\% precision with $H_0 = 73.78 \pm 0.84$ km s$^{-1}$Mpc$^{-1}$, which is in agreement with the SH0ES and H0LiCOW estimates, but in $\sim$6.2$\sigma$ tension with the current CMB measurement of $H_0$. With regard to the DE parameters, we find $c^2_s < 0$ at $\sim$2$\sigma$ at high $z$, and the possibility that $X$ becomes negative for $z > 1.5$. We compare our results with those obtained in the literature, and discuss the consequences of our main results for the DE theoretical framework.


I. INTRODUCTION
Several astronomical observations indicate that our Universe is currently in an accelerated expansion stage [1][2][3][4][5]. A cosmological scenario with cold dark matter (CDM) and dark energy (DE) mimicked by a positive cosmological constant, the so-called ΛCDM model, is considered the standard cosmological model, and it fits the observational data with great precision. However, the cosmological constant suffers from some theoretical problems [6][7][8], which motivate alternatives that can explain the data and have some theoretical appeal as well. In this regard, numerous cosmological models have been proposed in the literature, introducing some new dark fluid with negative pressure or modifications of the general relativity theory, where additional gravitational degree(s) of freedom can generate the accelerated stage of the Universe at late times (see [9][10][11] for a review). On the other hand, from an observational point of view, it is currently under discussion whether the ΛCDM model really is the best scenario to explain the observations, mainly in light of the current Hubble constant $H_0$ tension. Assuming the ΛCDM scenario, the Planck-CMB data analysis provides $H_0 = 67.4 \pm 0.5$ km s$^{-1}$ Mpc$^{-1}$ [12], which is in 4.4σ tension with the cosmological model-independent local measurement $H_0 = 74.03 \pm 1.42$ km s$^{-1}$ Mpc$^{-1}$ [13] from the Hubble Space Telescope (HST) observations of 70 long-period Cepheids in the Large Magellanic Cloud. Additionally, a combination of time-delay cosmography from the H0LiCOW lenses and the distance ladder measurements is in 5.2σ tension with the Planck-CMB constraints [14] (see also [15] for an update using a new hierarchical approach based on the H0LiCOW lenses, where the mass-sheet transform is only constrained by stellar kinematics).

* Electronic address: abonilla@fisica.ufjf.br
† Electronic address: suresh.kumar@pilani.bits-pilani.ac.in
‡ Electronic address: rafadcnunes@gmail.com
Another accurate and independent measurement was carried out in [16] using the Tip of the Red Giant Branch, obtaining $H_0 = 69.8 \pm 1.1$ km s$^{-1}$ Mpc$^{-1}$. Several other estimates of $H_0$ have been obtained in the recent literature (see [17][18][19][20][21]). It has been widely discussed in the literature whether a new physics beyond the standard cosmological model can solve the $H_0$ tension [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36]. The so-called $S_8$ tension is no less important: it is a discrepancy between the Planck-CMB data on one hand and weak lensing measurements and redshift surveys on the other, concerning the value of the matter energy density $\Omega_m$ and the amplitude or growth rate of structures ($\sigma_8$, $f\sigma_8$). We refer the reader to [37] and references therein for perspectives and discussions on the $S_8$ tension. Some other recent studies/developments [38][39][40][41][42][43][44][45][46][47][48] also suggest that the minimal ΛCDM model is in crisis.
A promising approach for investigating the cosmological parameters is to consider a model-independent analysis. In principle, this can be done via the cosmographic approach [49][50][51][52][53], which consists of performing a series expansion of a cosmological observable around z = 0, and then using the data to constrain the kinematic parameters. Such a procedure works well at lower values of z, but can be problematic at higher values of z. An interesting and robust alternative is to consider a Gaussian process (GP) to reconstruct cosmological parameters in a model-independent way. The GP approach is a generic method of supervised learning, implemented for regression problems and probabilistic classification. A GP is essentially a generalisation of the simple Gaussian distribution from random variables to probability distributions over functions of the independent variable. In principle, any stochastic process could be used, but the Gaussian case is much simpler and also the most common choice, specifically for regression, which is what we use in this study. The GP also provides a model-independent smoothing method that can further reconstruct derivatives from the data. In this sense, the GP is a non-parametric strategy, because it does not depend on a set of free parameters of a particular model to be constrained, although it does depend on the choice of the covariance function, which will be explained in more detail in the next section. The GP method has been used to reconstruct the dynamics of DE, modified gravity, cosmic curvature, estimates of the Hubble constant, and other quantities in cosmology by several authors [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71][72].
In this work, our main aim is to employ GP to perform a joint analysis using geometrical cosmological probes, namely, Supernovae Type Ia (SN), Cosmic chronometers (CC), Baryon Acoustic Oscillations (BAO), and the H0LiCOW lenses sample, to constrain the Hubble constant $H_0$ and reconstruct some properties of DE, viz., the equation of state parameter $w$, the sound speed of DE perturbations $c_s^2$, and the ratio of DE density evolution $X = \rho_{\rm de}/\rho_{\rm de,0}$. These are the main quantities that can represent the physical characteristics of DE, and possible deviations from the standard values $w = -1$, $c_s^2 = 1$ and $X = 1$ can be an indication of new physics beyond the ΛCDM model. To our knowledge, a model-independent joint analysis of the above-mentioned data sets, as presented here, has not previously been investigated in the literature. Indeed, a joint analysis with several observational probes is helpful to obtain tight constraints on the cosmological parameters. This paper is structured as follows. In Section II, we present the GP methodology as well as the data sets used in this work. In Section III, we describe the modelling framework providing the cosmological information, and discuss our main results in detail. In Section IV, we summarize the main findings of this study with some future perspectives.

II. METHODOLOGY AND DATA ANALYSIS
In this section, we summarize our methodology as well as the data sets used for obtaining our results.

A. Gaussian Processes
The main objective in a GP approximation is to reconstruct a function $f(x_i)$ from a set of its measured values $f(x_i) \pm \sigma_i$, where $x_i$ represent the training points or the positions of the observations. It assumes that the value of the function at any point $x_i$ follows a Gaussian distribution, and that the value of the function at $x_i$ is correlated with its value at any other point $x_j$. Therefore, we may write the GP as

$$f(x_i) \sim \mathcal{GP}\left(\mu(x_i), \mathrm{cov}[f(x_i), f(x_i)]\right),$$

where $\mu(x_i)$ and $\mathrm{cov}[f(x_i), f(x_i)]$ are the mean and the variance of the random variable at $x_i$, respectively. This method has been used in many studies in the context of cosmology (e.g. see [54][55][56]). For the reconstruction of the function $f(x_i)$, the covariance between the values of this function at different positions $x$ and $x'$ can be modeled as

$$\mathrm{cov}[f(x), f(x')] = k(x, x'),$$

where $k(x, x')$ is an a priori assumed covariance model (or kernel in GP language), and its choice is often very crucial for obtaining good results regarding the reconstruction of the function $f(x_i)$. The covariance model, in general, depends on the distance $|x - x'|$ between the input points $(x, x')$, and the covariance function $k(x, x')$ is expected to return large values when the input points are close to each other. The most popular and commonly used covariance functions in the literature are the standard Gaussian Squared-Exponential (SE) and the Matérn class of kernels ($M_\nu$). The SE kernel is defined as

$$k_{\rm SE}(x, x') = \sigma_f^2 \exp\left(-\frac{|x - x'|^2}{2 l^2}\right),$$

where $\sigma_f$ is the signal variance, which controls the strength of the correlation of the function, and $l$ is the length scale that determines the ability to model the main characteristics (global and local) in the evaluation region to be predicted (or the coherence length of the correlation in $x$). These two parameters are often called hyperparameters. They are not the parameters of the function, but of the covariance function. For convenience, in what follows, we redefine $\tau = |x - x'|$, which is consistent with all the kernels implemented here.
The SE kernel, however, is a very smooth covariance function which can reproduce global but not local characteristics very well. To avoid this, the Matérn class kernels are helpful, and their general functional form can be written as

$$k_{M_\nu}(\tau) = \sigma_f^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}\,\tau}{l}\right)^{\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,\tau}{l}\right),$$

where $K_\nu$ is the modified Bessel function of the second kind, $\Gamma(\nu)$ is the standard Gamma function and $\nu$ is a strictly positive parameter. An explicit analytic functional form for half-integer values $\nu = 1/2, 3/2, 5/2, 7/2, 9/2, \ldots$ is provided by the modified Bessel functions, and when $\nu \to \infty$, the $M_\nu$ covariance function tends to the SE kernel. Among other possibilities, the values $\nu = 7/2$ and $\nu = 9/2$ are of primary interest, since these correspond to smooth functions with high predictability of higher-order derivatives, although they are not very suitable for predicting rapid variations. These Matérn functions for GP in cosmology were first introduced in [56]. On the other hand, the hyperparameters $\Theta \equiv \{\sigma_f, l\}$ are learned by optimising the log marginal likelihood, which is defined as

$$\ln \mathcal{L} = -\frac{1}{2} \mathbf{y}^{T} C^{-1} \mathbf{y} - \frac{1}{2} \ln |C| - \frac{n}{2} \ln 2\pi,$$

where $\mathbf{y}$ is the vector of data and $C$ is the covariance matrix of the data for a set of $n$ observations, assuming mean $\mu = 0$. After optimizing for $\sigma_f$ and $l$, one can predict the mean and variance of the function $f(x_*)$ at chosen points $x_*$ through

$$\bar{f}(x_*) = K(x_*, \mathbf{x})\, C^{-1} \mathbf{y}, \qquad \mathrm{var}[f(x_*)] = k(x_*, x_*) - K(x_*, \mathbf{x})\, C^{-1} K(\mathbf{x}, x_*).$$

The GP predictions can also be extended to the derivatives of the function $f(x_i)$, although limited by the differentiability of the chosen kernel. The derivative of a GP is also a GP. Thus, one can obtain the covariance between the function and/or the derivatives involved by differentiating the covariance function as

$$\mathrm{cov}\left[f(x), f'(x')\right] = \frac{\partial k(x, x')}{\partial x'}, \qquad \mathrm{cov}\left[f'(x), f'(x')\right] = \frac{\partial^2 k(x, x')}{\partial x\, \partial x'}.$$

Then, we can write

$$f'(x_i) \sim \mathcal{GP}\left(\mu'(x_i), \mathrm{cov}[f'(x_i), f'(x_i)]\right),$$

where $f'(x_i)$ represent the derivatives with respect to the corresponding independent variable, which for our purpose is the redshift $z$. This procedure can similarly be extended to higher derivatives in combination with $f(x)$.
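The GP pipeline described above — kernel, hyperparameters learned by maximizing the log marginal likelihood, then posterior mean and variance at test points — can be sketched in a few lines of numpy/scipy. The toy data here (a noisy sine curve with a fixed error level) are purely illustrative stand-ins, not the observational samples used in this work:

```python
import numpy as np
from scipy.optimize import minimize

# Mock data: noisy samples of a smooth function, standing in for e.g. H(z).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0, 20))
sigma = 0.05 * np.ones_like(x)                 # observational errors
y = np.sin(x) + sigma * rng.normal(size=x.size)

def k_se(xa, xb, sig_f, ell):
    """SE kernel: k(x, x') = sigma_f^2 exp(-|x - x'|^2 / (2 l^2))."""
    return sig_f**2 * np.exp(-(xa[:, None] - xb[None, :])**2 / (2.0 * ell**2))

def neg_log_ml(theta):
    sig_f, ell = np.exp(theta)                 # log-space keeps them positive
    C = k_se(x, x, sig_f, ell) + np.diag(sigma**2)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # -ln L = (1/2) y^T C^-1 y + (1/2) ln|C| + (n/2) ln 2pi
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * x.size * np.log(2 * np.pi)

sig_f, ell = np.exp(minimize(neg_log_ml, x0=np.log([1.0, 1.0])).x)

xs = np.linspace(0.0, 2.0, 50)                 # prediction points x_*
C = k_se(x, x, sig_f, ell) + np.diag(sigma**2)
Ks = k_se(xs, x, sig_f, ell)
mean = Ks @ np.linalg.solve(C, y)              # posterior mean
var = np.diag(k_se(xs, xs, sig_f, ell) - Ks @ np.linalg.solve(C, Ks.T))
```

The reconstructed `mean` tracks the underlying function within the posterior uncertainty, which is the behaviour exploited throughout this work.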
The mean of the $i$-th derivative and the covariance between the $i$-th and $j$-th derivatives are given by

$$\overline{f^{(i)}}(x_*) = \frac{\partial^i K(x_*, \mathbf{x})}{\partial x_*^i}\, C^{-1} \mathbf{y}, \qquad (9)$$

$$\mathrm{cov}\left[f^{(i)}(x_*), f^{(j)}(x_*)\right] = \frac{\partial^{i+j} k(x_*, x'_*)}{\partial x_*^i\, \partial {x'_*}^{j}}\bigg|_{x'_* = x_*} - \frac{\partial^i K(x_*, \mathbf{x})}{\partial x_*^i}\, C^{-1}\, \frac{\partial^j K(\mathbf{x}, x_*)}{\partial x_*^j}. \qquad (10)$$

If $i = j$, then we get the variance of the $i$-th derivative in Eq. (10). If data for the derivative functions are available, we can perform a joint analysis, which is the case in our study. Since one data type can be in terms of $f(x)$ while another can be rewritten in terms of $f'(x)$, these different data sets can be combined. In what follows, we describe the data sets that we use in this work.
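For the SE kernel the derivative covariances are analytic, so the GP mean of $f'$ follows by applying $\partial k(x_*, x)/\partial x_*$ to $C^{-1}\mathbf{y}$. A minimal sketch, with hyperparameters fixed by hand for brevity and mock data (a noisy sine, so the derivative should track the cosine):

```python
import numpy as np

# Mock data: noisy f(x) = sin(x); we reconstruct both f and f'.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 25)
sigma = 0.02
y = np.sin(x) + sigma * rng.normal(size=x.size)

sig_f, ell = 1.0, 1.0                          # fixed hyperparameters

def k(xa, xb):
    return sig_f**2 * np.exp(-(xa[:, None] - xb[None, :])**2 / (2.0 * ell**2))

def dk_dxs(xs, xb):
    """Analytic d k(x_*, x)/d x_* for the SE kernel."""
    return -(xs[:, None] - xb[None, :]) / ell**2 * k(xs, xb)

C = k(x, x) + sigma**2 * np.eye(x.size)
alpha = np.linalg.solve(C, y)

xs = np.linspace(0.1, 1.9, 30)
f_mean = k(xs, x) @ alpha        # GP mean of f  (tracks sin)
fp_mean = dk_dxs(xs, x) @ alpha  # GP mean of f' (tracks cos)
```

This is the mechanism that lets distance-type data (in terms of $f$) and expansion-rate-type data (in terms of $f'$) enter one joint GP analysis.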

B. Data Sets
We summarize below the data sets used in our analysis.

Cosmic Chronometers (CC):
The CC approach is a powerful method to trace the history of cosmic expansion through the measurement of H(z).
We consider the compilation of Hubble parameter measurements provided by [73]. This compilation consists of 30 measurements distributed over a redshift range 0 < z < 2.
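As an illustration of how CC-style $H(z)$ points feed the GP, the sketch below draws mock data from a fiducial flat ΛCDM model ($H_0 = 70$, $\Omega_m = 0.3$ — illustrative values, not results of this work), reconstructs $H(z)$ with a Matérn $\nu = 9/2$ kernel via scikit-learn, and evaluates the reconstruction at $z = 0$ to "measure" $H_0$:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

# Mock CC-like H(z) data from a fiducial flat LambdaCDM.
rng = np.random.default_rng(42)
z = np.sort(rng.uniform(0.07, 2.0, 30))
H_true = 70.0 * np.sqrt(0.3 * (1.0 + z)**3 + 0.7)
err = 5.0 * np.ones_like(z)                    # km/s/Mpc uncertainties
H_obs = H_true + err * rng.normal(size=z.size)

# Matern nu = 9/2 kernel, as favoured in the text for derivative-friendly
# smoothness; the noise variances enter through alpha.
kernel = ConstantKernel(100.0**2, (1e2, 1e6)) * \
    Matern(length_scale=2.0, length_scale_bounds=(0.5, 10.0), nu=4.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=err**2,
                              n_restarts_optimizer=2)
gp.fit(z[:, None], H_obs)

H0_mean, H0_std = gp.predict(np.array([[0.0]]), return_std=True)
print(H0_mean[0], H0_std[0])  # close to the input H0 = 70
```

The extrapolation from the lowest data point to $z = 0$ is mild, which is why the CC sample alone already yields an (albeit loose) $H_0$ estimate.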

Baryon Acoustic Oscillations (BAO):
The BAO is another important cosmological probe, which can trace the expanding spherical wave of baryonic perturbations from acoustic oscillations at recombination time through the large-scale structure correlation function, which displays a peak around 150$h^{-1}$ Mpc. We use BAO measurements from the Sloan Digital Sky Survey (SDSS) III DR-12 at three effective binned redshifts z = 0.38, 0.51 and 0.61, reported in [74], the clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample at four effective binned redshifts z = 0.98, 1.23, 1.52 and 1.94, reported in [75], and the high-redshift Lyman-α measurements at z = 2.33 and z = 2.4 reported in [76] and [77], respectively. Note that the observations are presented in terms of $H(z) \times (r_d/r_{d,\rm fid})$ km s$^{-1}$ Mpc$^{-1}$, where $r_d$ is the comoving sound horizon and $r_{d,\rm fid}$ is the fiducial input value provided in the above references. In all the analyses presented in this article, we choose $r_d/r_{d,\rm fid} = 1$. In Appendix A, we show that different $r_d$ input values obtained from different data sets do not affect the GP analysis.
Supernovae Type Ia (SN):
The SN traditionally have been one of the most important astrophysical tools in establishing the so-called standard cosmological model. For the present analysis, we use the Pantheon compilation, which consists of 1048 SNIa distributed over the redshift range 0.01 < z < 2.3 [78]. Under the assumption of a spatially flat Universe, the full Pantheon sample can be summarized into six model-independent $E(z)^{-1}$ data points [79]. We consider the six data points reported by [80] in the form of E(z), including the theoretical and statistical considerations made by the authors there for their implementation.

H0LiCOW:
The H0LiCOW ($H_0$ Lenses in COSMOGRAIL's Wellspring) program has measured six lens systems, making use of the measurements of time-delay distances between multiple images of strong gravitational lens systems formed by elliptical galaxies [14]. In the analyses of this work, we implement these six systems of strongly lensed quasars reported by the H0LiCOW Collaboration. The full information is contained in the so-called time-delay distance $D_{\Delta t}$. However, additional information can be found in the angular diameter distance to the lens $D_l$, which offers the possibility of using four additional data points in our analysis. Thus, our total H0LiCOW sample comprises 10 data points: 6 measurements of time-delay distances and 4 angular diameter distances to the lens for 4 specific objects in the H0LiCOW sample (see [81, 82] for the description).

III. RESULTS AND DISCUSSIONS
First, we verify that analyses carried out with $k_{M_\nu}(\tau)$, for ν = 9/2 and ν = 7/2, and with $k_{\rm SE}$ do not generate significantly different results. Figure 1 shows the reconstructions from the SN, BAO and CC data sets, using the GP formalism on each data set individually. On the bottom-right panel, we show the H(z) reconstruction from all these data together. From the CC reconstruction alone, we obtain $H_0 = 68.54 \pm 5.06$ km s$^{-1}$ Mpc$^{-1}$, which has been used in the rescaling process of the SN data to carry out the joint analysis with SN+BAO+CC (bottom-right). From the SN+BAO+CC analysis, we find $H_0 = 67.85 \pm 1.53$ km s$^{-1}$ Mpc$^{-1}$. Figure 2 (left panel) shows the GP reconstruction of $D(z_l)$ from the H0LiCOW data, where $z_l$ is the redshift to the lens. On the right panel, we show the reconstruction of the H(z) function from SN+BAO+CC+H0LiCOW. In this joint analysis, we obtain $H_0 = 73.78 \pm 0.84$ km s$^{-1}$ Mpc$^{-1}$, which represents a 1.1% precision measurement. The strategy that we followed to obtain these results is as follows:

1. The SN+BAO+CC data set is used as in the previous joint analysis, i.e., in terms of the H(z) data reconstruction. Thus, we just need to re-scale the H0LiCOW data in some convenient way to combine all data for a joint analysis.
2. The time-delay distance in the H0LiCOW sample is quantified as

$$D_{\Delta t} \equiv (1 + z_l)\, \frac{D_l D_s}{D_{ls}}, \qquad (11)$$

which is a combination of three angular diameter distances, namely $D_l$, $D_s$ and $D_{ls}$, where the subscripts stand for the diameter distances to the lens $l$, to the source $s$, and between the lens and the source $ls$.
3. At this point, we can get the dimensionless co-moving distance through the relationship

$$\tilde{D}(z) = \frac{H_0}{c}\, (1 + z)\, D_A(z), \qquad (12)$$

where $D_A$ is the angular diameter distance and $D(z)$ is defined as $D(z) = \int_0^z \frac{dz'}{E(z')}$. In this way, we can have: 6 data points from the time-delay distance $D_{\Delta t}$, which we refer to as $\tilde{D}_{\Delta t}$, and 4 data points obtained from the angular diameter distance $D_l$, named $\tilde{D}_l$. Thus, we can add these 10 data points for the joint analysis, and name them simply the H0LiCOW sample (see left panel of Figure 2). Note that, to get $\tilde{D}_l$, we directly use Eq. (12), where $D_A = D_l$. On the other hand, to obtain $\tilde{D}_{\Delta t}$, we have to take into account that Eq. (11) depends on the expansion rate of the Universe through $D_s(z_s, H_0, \Omega_m)$ and $D_{ls}(z_l, H_0, \Omega_m)$, and in this case, we use the $H_0$ and $\Omega_m$ best fit from our SN+BAO+CC joint analysis.

4. The relation $D(z) = \int_0^z \frac{dz'}{E(z')}$ can be reversed to obtain $E(z) = \frac{1}{D'(z)}$. So, we can make use of the possibility that the reconstruction offers for the first derivative of the dimensionless co-moving distance, $D'(z)$. For this purpose, we introduce the SN+BAO+CC data set in the form of $1/E(z)$ and the H0LiCOW data set in the form of $\tilde{D}(z)$, to obtain the GP reconstruction of the dimensionless co-moving distance.
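The inversion used in step 4, $E(z) = 1/D'(z)$, can be checked numerically; the sketch below uses a fiducial flat ΛCDM expansion history ($\Omega_m = 0.3$, an illustrative value, not a fit from this work):

```python
import numpy as np
from scipy.integrate import quad

# Check that D(z) = int_0^z dz'/E(z') can be inverted as E(z) = 1/D'(z).
Om = 0.3
E = lambda z: np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
D = lambda z: quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

z0, h = 1.0, 1e-4
D_prime = (D(z0 + h) - D(z0 - h)) / (2.0 * h)  # numerical D'(z)
E_rec = 1.0 / D_prime                          # recovered expansion rate
print(E(z0), E_rec)  # the two agree to numerical precision
```

In the actual analysis the derivative $D'(z)$ comes from the GP itself (via the kernel derivatives), not from finite differences; the sketch only verifies the underlying identity.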
From the joint analysis SN+BAO+CC+H0LiCOW, we find $H(z = 0) = 73.78 \pm 0.84$ km s$^{-1}$ Mpc$^{-1}$. Figure 2 (right panel) shows the H(z) reconstruction from SN+BAO+CC+H0LiCOW. Figure 3 shows a comparison of our joint analysis estimate of $H_0$ with others recently obtained in the literature. We note that our constraint on $H_0$ is in accordance with the SH0ES and H0LiCOW+STRIDES estimates. On the other hand, we find ∼6σ tension with the current Planck-CMB measurements and ∼2σ tension with the CCHP best fit. We re-analyze our estimates after removing the BAO data (see Appendix A). In this case, we find $H_0 = 68.57 \pm 1.86$ km s$^{-1}$ Mpc$^{-1}$ and $H_0 = 71.65 \pm 1.09$ km s$^{-1}$ Mpc$^{-1}$ from SN+CC and SN+CC+H0LiCOW, respectively.
In the context of the standard framework, we can also check the $O_m(z)$ diagnostic [83], defined as

$$O_m(z) = \frac{E^2(z) - 1}{(1 + z)^3 - 1}. \qquad (13)$$

If the expansion history E(z) is driven by the standard ΛCDM model, then $O_m(z)$ is practically constant and equal to the matter density $\Omega_m$, and so, any deviation from this constant can be used to infer the dynamical nature of DE. Figure 4 shows the reconstruction of the $O_m(z)$ diagnostic. We find $\Omega_m = 0.292 \pm 0.046$ and $\Omega_m = 0.289 \pm 0.012$ at 1σ from the SN+BAO+CC and SN+BAO+CC+H0LiCOW analyses, respectively. To obtain these results, we normalize H(z) with respect to $H_0$ to obtain E(z) for the entire data set except SN, where $H_0$ is taken from the SN+BAO+CC and SN+BAO+CC+H0LiCOW cases, respectively.

[Figure 3: Comparison of recent $H_0$ estimates — [114], the final data release of the BOSS data (BOSS Full-Shape+BAO+BBN) [113], the Carnegie-Chicago Hubble Program (CCHP) [16], the H0LiCOW collaboration (H0LiCOW+STRIDES) [14], and SH0ES [13] — with the $H_0$ constraint obtained in this work from the GP analysis using SN+BAO+CC+H0LiCOW.]

The prediction from SN+BAO+CC is compatible with $\Omega_m = 0.30$ across the analyzed range, but it is interesting to note that for z > 2, we have $\Omega_m < 0.30$ at ∼2σ from SN+BAO+CC+H0LiCOW. These model-independent $\Omega_m$ estimates will be used as input values in the reconstruction of w.
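The defining property of the $O_m(z)$ diagnostic, $O_m(z) = (E^2(z) - 1)/((1+z)^3 - 1)$, is easy to verify numerically: it is constant for ΛCDM and $z$-dependent otherwise (the $w = -0.9$ model below is purely illustrative):

```python
import numpy as np

# Om(z) diagnostic: constant (= Omega_m) for LambdaCDM, z-dependent for
# evolving dark energy.
def om_diag(z, E2):
    return (E2 - 1.0) / ((1.0 + z)**3 - 1.0)

z = np.linspace(0.1, 2.5, 50)

E2_lcdm = 0.3 * (1.0 + z)**3 + 0.7
E2_wcdm = 0.3 * (1.0 + z)**3 + 0.7 * (1.0 + z)**(3.0 * (1.0 - 0.9))  # w = -0.9

print(np.ptp(om_diag(z, E2_lcdm)))  # 0: constant, equal to Omega_m = 0.3
print(np.ptp(om_diag(z, E2_wcdm)))  # nonzero: signals dynamical dark energy
```

This is why a GP reconstruction of $E^2(z)$ translates directly into a null test of ΛCDM without fitting any DE model.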
The EoS of DE can be written as [84][85][86]

$$w(z) = \frac{2(1 + z)\, E E' - 3E^2 + \Omega_k (1 + z)^2}{3\left[E^2 - \Omega_m (1 + z)^3 - \Omega_k (1 + z)^2\right]}, \qquad (14)$$

where the prime denotes a derivative with respect to z, and $\Omega_m$ and $\Omega_k$ are the density parameters of matter (baryonic matter + dark matter) and spatial curvature, respectively. In what follows, we assume $\Omega_k = 0$, which is a strong, though quite general, assumption about the spatial geometry. Figure 5 shows the w(z) reconstruction from the SN+BAO+CC and SN+BAO+CC+H0LiCOW data combinations on the left and right panels, respectively. From both analyses, we notice that w is well constrained for z ≲ 0.5, with the prediction w = −1; most of the data fall in this range, both in number and in precision. The GP mean does not exclude w = −1 anywhere in the range of z under consideration. We observe that the best fit prediction stays at w = −1 up to z ∼ 0.5 in both cases. The addition of the H0LiCOW data considerably improves the reconstruction of w for z < 1. Beyond this range, the best fit prediction can deviate from w = −1, but remains statistically compatible with a cosmological constant. Evaluated at the present moment, we find w(z = 0) = −0.999 ± 0.093 and w(z = 0) = −0.998 ± 0.064 from SN+BAO+CC and SN+BAO+CC+H0LiCOW, respectively.
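A quick sanity check of the reconstruction formula in the flat case, $w(z) = [2(1+z)EE' - 3E^2]/(3[E^2 - \Omega_m(1+z)^3])$: feeding it the exact ΛCDM $E(z)$ and $E'(z)$ must return $w = -1$ at every redshift ($\Omega_m = 0.3$ is illustrative):

```python
import numpy as np

# Feed the exact LambdaCDM expansion history into the w(z) formula.
Om, Ok = 0.3, 0.0

def w_of_z(z, E, Ep):
    num = 2.0 * (1.0 + z) * E * Ep - 3.0 * E**2 + Ok * (1.0 + z)**2
    den = 3.0 * (E**2 - Om * (1.0 + z)**3 - Ok * (1.0 + z)**2)
    return num / den

z = np.linspace(0.0, 2.0, 21)
E = np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
Ep = 1.5 * Om * (1.0 + z)**2 / E       # analytic dE/dz for LambdaCDM

print(w_of_z(z, E, Ep))  # -1 everywhere
```

In the actual analysis, $E$ and $E'$ are the GP reconstruction and its derivative, and the input $\Omega_m$ is the model-independent estimate from the $O_m(z)$ diagnostic.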
Note that the H0LiCOW sample improves the precision of the w(z = 0) constraint by ∼2.9% (from ∼9.3% to ∼6.4%).
From the statistical reconstruction of w(z) and its derivative w'(z), we can analyze the DE adiabatic sound speed $c_s^2$. Given the relation $p = w\rho$, we can find

$$c_s^2 = \frac{\dot{p}}{\dot{\rho}} = w + \frac{(1 + z)}{3(1 + w)} \frac{dw}{dz}. \qquad (15)$$

Figure 6 shows the $c_s^2$ reconstruction from the SN+BAO+CC and SN+BAO+CC+H0LiCOW data combinations on the left and right panels, respectively. We note that the DE sound speed is negative at ∼1σ from SN+BAO+CC when evaluated up to z ≲ 2.5. It is interesting to note that the SN+BAO+CC+H0LiCOW analysis yields $c_s^2 < 0$ at 2σ for z < 1. At the present moment, we find $c_s^2(z = 0) = -0.218 \pm 0.137$ and $c_s^2(z = 0) = -0.273 \pm 0.068$ at 1σ CL from SN+BAO+CC and SN+BAO+CC+H0LiCOW, respectively. Therefore, this inference on $c_s^2$ significantly disfavors clustering DE models, as well as models with $c_s^2 > 0$ up to high z, at least at 1σ CL. The condition $c_s^2 > 0$ is usually imposed to avoid gradient instability. However, the perturbations can still remain stable under the $c_s^2 < 0$ consideration [89][90][91][92]. Thus, if the effective sound speed is negative, this would be a smoking gun signature for the existence of an anisotropic stress and possible modifications of gravity. Recently, possible evidence for $c_s^2 < 0$ was found in [45], also in a model-independent way from the Hubble data. Now, we look at some models which can potentially explain this result.
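The adiabatic relation $c_s^2 = w + (1+z)\,w'/[3(1+w)]$ can be cross-checked against the definition $c_s^2 = \dot p/\dot\rho$ by integrating the continuity equation numerically; the CPL form $w(z) = w_0 + w_a z/(1+z)$ below, with arbitrary $w_0$, $w_a$, is purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative CPL equation of state (w0, wa are not fits from this work).
w0, wa = -0.9, 0.3
w = lambda z: w0 + wa * z / (1.0 + z)
wp = lambda z: wa / (1.0 + z)**2                     # dw/dz

# Route 1: the closed-form adiabatic sound speed.
cs2 = lambda z: w(z) + (1.0 + z) * wp(z) / (3.0 * (1.0 + w(z)))

# Route 2: c_s^2 = dp/drho with p = w rho and the continuity equation
# rho(z) = rho_0 exp(3 int_0^z (1+w)/(1+z') dz').
rho = lambda z: np.exp(3.0 * quad(lambda x: (1.0 + w(x)) / (1.0 + x), 0.0, z)[0])
p = lambda z: w(z) * rho(z)

z0, h = 0.5, 1e-4
cs2_num = (p(z0 + h) - p(z0 - h)) / (rho(z0 + h) - rho(z0 - h))
print(cs2(z0), cs2_num)  # the two routes agree
```

Note also that this example yields $c_s^2 < 0$ even though $w > -1$ everywhere, illustrating that a negative adiabatic sound speed does not require phantom behavior.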
The Lagrangian $\mathcal{L} = G_2(\phi, X) + \frac{M_{pl}^2}{2} R$ describes general K-essence scenarios. Here the function $G_2$ depends on $\phi$ and $X = -\frac{1}{2}\nabla_\mu \phi \nabla^\mu \phi$, and $R$ is the Ricci scalar curvature. In this case, the sound speed is given by

$$c_s^2 = \frac{G_{2,X}}{G_{2,X} + 2X\, G_{2,XX}},$$

where $G_{2,X} \equiv \partial G_2/\partial X$. Quintessence models correspond to the particular choice $G_2 = X - V(\phi)$, giving $c_s^2 = 1$. Thus, the usual quintessence scenarios are discarded by our results, which predict negative or low values of the sound speed. Considering the so-called dilatonic ghost condensate [93], given by the Lagrangian

$$G_2 = -X + \frac{e^{\lambda \phi / M_{pl}}}{M^4}\, X^2,$$

where λ and M are free parameters of the model, we can write $c_s^2$ as

$$c_s^2 = \frac{2y - 1}{6y - 1},$$

with $y = \frac{\dot{\phi}^2\, e^{\lambda \phi / M_{pl}}}{2 M^4}$. The condition $1/6 < y < 1/2$ ensures negative sound speed values.
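Assuming the standard K-essence expression $c_s^2 = G_{2,X}/(G_{2,X} + 2X G_{2,XX})$, both special cases above can be checked symbolically — quintessence $G_2 = X - V$ gives $c_s^2 = 1$, and the dilatonic ghost condensate reduces to $(2y-1)/(6y-1)$ with $y = X e^{\lambda\phi/M_{pl}}/M^4$ (equivalent to $\dot\phi^2 e^{\lambda\phi/M_{pl}}/(2M^4)$, since $X = \dot\phi^2/2$):

```python
import sympy as sp

# Symbolic check of the K-essence sound speed for two choices of G2.
X, V, lam, phi, Mpl, M = sp.symbols('X V lam phi Mpl M', positive=True)

def cs2(G2):
    G2X = sp.diff(G2, X)
    return sp.simplify(G2X / (G2X + 2 * X * sp.diff(G2X, X)))

# Quintessence: G2 = X - V  ->  c_s^2 = 1.
print(cs2(X - V))  # 1

# Dilatonic ghost condensate: G2 = -X + exp(lam*phi/Mpl) X^2 / M^4.
G2_dgc = -X + sp.exp(lam * phi / Mpl) * X**2 / M**4
y = X * sp.exp(lam * phi / Mpl) / M**4
print(sp.simplify(cs2(G2_dgc) - (2 * y - 1) / (6 * y - 1)))  # 0
```

The second print confirming zero means the ghost-condensate sound speed is exactly $(2y-1)/(6y-1)$, which is negative precisely in the window $1/6 < y < 1/2$.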
Another interesting possibility pertains to a unified dark energy and dark matter scenario described by $G_2 = -b_0 + b_2 (X - X_0)^2$, where $b_0$ and $b_2$ are free parameters of the model [94]. In this case, the sound speed is

$$c_s^2 = \frac{X - X_0}{3X - X_0},$$

where $c_s^2 < 0$ for $X_0/3 < X < X_0$. The above-mentioned cases are theoretical examples under the consideration of a minimally coupled gravity scenario, which can reproduce a possible $c_s^2 < 0$ behavior. More generally, in the Horndeski theories of gravity [95][96][97], the speed of sound can be written in terms of the property functions $\alpha_i$ and their derivatives with respect to $\ln a$ (see [98] for the explicit expression). The $\alpha_i$ are functions expressed in a way that highlights their effects on the theory space [98], namely, the kineticity ($\alpha_K$), the braiding ($\alpha_B$) and the Planck-mass run rate ($\alpha_M$). Further, we define $\alpha = \alpha_K + \frac{3}{2}\alpha_B^2$. Motivated by the tight constraints on the difference between the speed of gravitational waves and the speed of light, at the $10^{-15}$ level from the GW170817 and GRB 170817A observations [99, 100], we assume $\alpha_T = 0$ (tensor speed excess). Without loss of generality, we can consider α > 0 and the relation $\alpha_B = R \times \alpha_M$, with R being a constant. For instance, for R = −1, we reproduce f(R) gravity theories. Different R values can manifest the most diverse possible changes in gravity. For a qualitative example, taking R = −1, the running of the Planck mass must then satisfy a specific relationship with the expansion history in order to realize $c_s^2 < 0$.

Finally, we analyze the function quantifying the ratio of the DE energy density evolution over the cosmic time. Figure 7 shows the X(z) reconstruction from the SN+BAO+CC and SN+BAO+CC+H0LiCOW data combinations on the left and right panels, respectively. We note that the evolution of X is fully compatible with the ΛCDM model, with the best fit model-independent prediction around X = 1 up to z ∼ 1 in both analyses. It is interesting to note that X can cross to negative values for z > 1 and z > 1.5 at 2σ CL from SN+BAO+CC and SN+BAO+CC+H0LiCOW, respectively. This can also have some interesting theoretical consequences.
First, DE with negative energy density values at large z came under consideration when it turned out that, within the standard ΛCDM model, the Ly-α forest BAO measurement by the BOSS collaboration [101] prefers a smaller value of the matter density parameter than the value preferred by the CMB data. Thus, with the possibility of a preference for negative energy density values at high z, it has been argued that the Ly-α data at z ∼ 2.34 can be described by a non-monotonic evolution of the H(z) function, which is difficult to achieve in any model with non-negative DE density [102]. Note that in our analysis, we take into account the high-z Lyman-α measurements reported in [76] and [77]. It is possible to achieve X < 0 at high z when the cosmological gravitational coupling strength gets weaker with increasing z [103, 104]. A range of other examples of effective sources crossing the energy density below zero also exists, including theories in which the cosmological constant relaxes from a large initial value via an adjustment mechanism [105], and also modified gravity theories [106][107][108]. More recently, a graduated DE model, characterized by a minimal dynamical deviation from the null inertial mass density, was introduced in [109] to obtain negative energy density at high z. Also, seeking inspiration from string theory, the possibility of negative energy density was investigated in [110].
The reconstructions of w(z) and X(z) are robust at low z, where the DE effects begin to be considerable, and a slow evolution of the EoS is well captured at 68% CL. However, the error estimates are larger at high z, where the data density is significantly smaller and the dynamical effects of DE are weaker. The introduction of the H0LiCOW data slightly improves the estimated errors in this range, especially for 1.0 < z < 1.5. On the other hand, the uncertainties of smooth functions may have a greater amplitude than those of highly oscillating functions, and in this way the propagation of errors to their derivatives can be overestimated [111]. In our case, the variation of the starting functions is quite smooth with respect to the data and their derivatives as well, leading to the propagation of errors with a greater amplitude, as can be seen in Figures 5 and 7 at high z. Other aspects that may influence this could be the strong dependence on z, as in the case of w(z), and the integrability of the functions with respect to z, as in the case of X(z) (for a brief discussion in this regard, see for example [54]).

IV. FINAL REMARKS
We have applied GP to constrain $H_0$, and to reconstruct some functions that describe the physical properties of DE in a model-independent way, using cosmological information from the SN, CC, BAO and H0LiCOW lenses data. The main results from the joint analysis, i.e., SN+CC+BAO+H0LiCOW, are summarized as follows:

i) A 1.1% precision measurement of $H_0$ is obtained, with the best fit value $H_0 = 73.78 \pm 0.84$ km s$^{-1}$ Mpc$^{-1}$ at 1σ CL.
ii) The EoS of DE is measured at ∼6.4% precision at the present moment, with w(z = 0) = −0.998 ± 0.064 at 1σ CL.
iii) We find possible evidence for $c_s^2 < 0$ at ∼2σ CL from the analysis of the function's behavior at high z. At the present moment, we find $c_s^2(z = 0) = -0.273 \pm 0.068$ at 1σ CL.

iv) We find that the ratio of the DE density evolution, $\rho_{de}/\rho_{de,0}$, can cross to negative values at high z. This behavior has already been observed by other authors. Here, we re-confirm this possibility for z > 1.5 at ∼2σ.
Certainly, the GP method, with its ability to perform joint analyses, has great potential in the search for accurate measurements of cosmological parameters, and for analyzing the physical properties of the dark sector of the Universe in a minimally model-dependent way. It can shed light on the determination of the dynamics of the dark components, or even rule out possible theoretical cosmological scenarios. Beyond the scope of the present work, it will be interesting to analyze/reconstruct a possible interaction in the dark sector, where DE and dark matter interact non-gravitationally, in a model-independent way through a robust joint analysis. Such scenarios have been intensively investigated in the recent literature. We hope to communicate results in this direction in the near future.
Appendix A: H0 without BAO data, and effects of r_d

In this appendix, we derive constraints on $H_0$ and the $O_m(z)$ diagnostic after removing the BAO data set compilation described in Section II. Figure 8 shows $O_m(z)$ vs z reconstructed from SN+CC and SN+CC+H0LiCOW. For comparison, we also show the prediction with BAO. We find $H_0 = 68.57 \pm 1.86$ km s$^{-1}$ Mpc$^{-1}$ and $H_0 = 71.65 \pm 1.09$ km s$^{-1}$ Mpc$^{-1}$ from the SN+CC and SN+CC+H0LiCOW data, respectively. Note that without BAO data these constraints are compatible with each other practically across the whole z range under consideration, and the addition of the H0LiCOW sample significantly improves the reconstruction compared to SN+CC. It is also interesting to observe the behavior for z > 1.5, where we see that $O_m < 0.31$. Combining the BAO data with SN+CC+H0LiCOW, we observe a significant improvement in the reconstruction over the whole z range considered in the analysis. Predictions for z > 2 disagree at ∼1.5σ CL when the GP mean is compared between SN+CC+H0LiCOW and SN+CC+BAO+H0LiCOW.
On the other hand, the BAO measurements require a calibration of the sound horizon, either through BBN or the CMB. In all our analyses, we have used the BAO data with the assumption $r_d/r_{d,\rm fid} = 1$, where $r_{d,\rm fid}$ is the fiducial input value. In order to quantify how much the $r_d$ value can influence the GP reconstruction, we have analyzed $O_m(z)$ with different $r_d$ input values. We have used $r_d$ values obtained from the Planck-CMB data [12] and the eBOSS Collaboration [112]. Figure 9 shows $O_m(z)$ reconstructed using $r_d = 149.30$ Mpc (eBOSS estimate) and $r_d = 147.09$ Mpc (Planck-CMB estimate). In short, we conclude that appropriate, different $r_d$ input values do not change the results significantly in any of the analyses carried out in this work. Any input value of $r_d \in [135, 155]$ Mpc shows no statistical divergence compared to the assumption $r_d/r_{d,\rm fid} = 1$; that is, all analyses are consistent with each other at <1σ. Therefore, the GP analyses here are not sensitive to $r_d$, which is why we have presented the results in the main text assuming $r_d/r_{d,\rm fid} = 1$.