Statistical analysis with cosmic-expansion-rate measurements and two-point diagnostics

Direct measurements of the Hubble parameter $H(z)$ are very useful for cosmological parameter inference. Based on them, Sahni, Shafieloo and Starobinsky introduced the two-point diagnostic $Omh^2(z_i, z_j)$ as an interesting tool for testing the validity of the $\Lambda$CDM model. Applying this test, they found a tension between observations and predictions of the $\Lambda$CDM model. We use the most comprehensive compilation of $H(z)$ data from baryon acoustic oscillations (BAO) and differential ages (DA) of passively evolving galaxies to study cosmological models using the Hubble parameters themselves, and to check whether the $\Lambda$CDM model is consistent with the observational data through a statistical analysis of the corresponding $Omh^2(z_i, z_j)$ two-point diagnostics. Our results show that presently available $H(z)$ data significantly improve the constraints on cosmological parameters. The corresponding statistical $Omh^2(z_i, z_j)$ two-point diagnostics seem to prefer quintessence with $w>-1$ over the $\Lambda$CDM model. Better and more accurate prior knowledge of the Hubble constant will considerably improve the performance of the statistical $Omh^2(z_i, z_j)$ method.


Introduction
The discovery of the accelerating expansion of the Universe [1,2] created a big challenge for modern science and stimulated cosmologists to investigate the essentials of this phenomenon. In order to explain the present acceleration of the Universe, there should exist some mechanism providing a repulsive effect. There are two broad ways of achieving this: considering modified gravity [3] or adding an exotic dark energy component [4] to the matter content of the Universe. The simplest solution along the second line of reasoning is the ΛCDM model, in which the cosmological constant Λ acts as a repulsive component in addition to ordinary cold dark matter and the (now dynamically unimportant) CMB radiation and cosmic neutrinos. However, the cosmological constant, while being the most parsimonious choice, is far from being a satisfactory explanation both theoretically (fine tuning and coincidence problems) and from the observational point of view [5]. Because there is no clear theoretical preference for an alternative to the ΛCDM model, it is reasonable to take a phenomenological approach and parameterize the unknown by a hypothetical fluid with an equation of state p = wρ, where the w coefficient might be constant or allowed to vary with cosmic time, w(z) = w_0 + w_a z/(1+z) [6,7]. Such models are known as wCDM and CPL, respectively. Standard ΛCDM is nested within both classes of models.
The most straightforward technique to constrain the cosmological equation of state is by constructing the Hubble diagram d_{L,A}(z) using either luminosity or angular diameter distances to objects whose redshifts are known [8][9][10][11][12]. This approach demands either standard candles like SN Ia or standard rulers like CMB acoustic peaks or BAO. One should be cautious, however, about the way they are calibrated in order not to fall into circularity problems with respect to the cosmological model assumed during the calibration. From this perspective, another very attractive probe, the Hubble function at different redshifts H(z), is becoming accessible. In particular, H(z) measurements from the so-called cosmic chronometers, i.e. differential ages (DA) of passively evolving galaxies, are free from any prior assumption concerning cosmology, the only uncertainty being of astrophysical origin (the adopted population synthesis model).
Recently, using the DA technique, Moresco et al. [13,14] provided another few H(z) measurements in addition to the already existing data (see Ding et al. [15] for the compilation). They also used the whole compilation of H(z) from DA to constrain cosmology [16]. Expansion rates at different redshifts not only allowed one to use this pure information for cosmographic purposes but also opened a new chapter in using the so-called Om(z) diagnostics, introduced by [17] in order to distinguish between ΛCDM and other dark energy scenarios. The two-point version of this diagnostic is defined as

Omh²(z_i, z_j) = [h²(z_i) − h²(z_j)] / [(1+z_i)³ − (1+z_j)³],

where h(z) ≡ H(z)/100. For the ΛCDM model it is identically equal to Ω_{m,0} h². It was subsequently used in [19] to perform this test on three accurately measured values of H(z) from BAO, demonstrating a tension with the value of Ω_{m,0} h² given by Ade et al. [20]. Later, Ding et al. [15] collected a larger H(z) sample (6 from BAO measurements and 23 from DA measurements) to do this test, confirming that the tension exists. The two-point diagnostics has the advantage that if we know Hubble parameters at n different redshifts, we can get n(n − 1)/2 pairs of data. This enlargement of the statistical sample for inference occurs at the expense of non-trivial statistical properties of the observables [21]. As already mentioned, H(z) can be used as a cosmological probe to constrain cosmological parameters directly [22][23][24][25]. However, it is also tempting to fit cosmological parameters based on the two-point diagnostics. Therefore, in this paper we constrain the cosmological models not only using H(z) directly, but also using the two-point Omh²(z_i, z_j) diagnostic. The rest of the paper is organized as follows. In Section 2, we briefly introduce the observational Hubble parameters, and present our methodology to constrain cosmology with the Omh²(z_i, z_j) probe. We show our results followed by discussion in Section 3. Finally, we conclude in Section 4.
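The construction of the two-point diagnostics from a set of expansion-rate measurements can be sketched as follows; the numerical values below are placeholders for illustration, not the compilation analysed in this paper:

```python
import itertools
import numpy as np

def omh2_pairs(z, H, sigma_H):
    """All n(n-1)/2 two-point diagnostics Omh2(zi, zj) and their
    propagated 1-sigma errors; H, sigma_H in km/s/Mpc, h = H/100."""
    h = np.asarray(H) / 100.0
    sh = np.asarray(sigma_H) / 100.0
    vals, errs = [], []
    for i, j in itertools.combinations(range(len(z)), 2):
        denom = (1.0 + z[i])**3 - (1.0 + z[j])**3
        vals.append((h[i]**2 - h[j]**2) / denom)
        # sigma(h^2) = 2 h sigma_h, added in quadrature for the pair
        errs.append(2.0 * np.hypot(h[i] * sh[i], h[j] * sh[j]) / abs(denom))
    return np.array(vals), np.array(errs)

# Placeholder values for illustration only (not the data set used here)
z_demo = [0.35, 0.57, 2.34]
H_demo = [82.7, 92.9, 222.0]
sig_demo = [8.4, 7.8, 7.0]
omh2, omh2_err = omh2_pairs(z_demo, H_demo, sig_demo)
# In flat LCDM every pair should scatter around the constant Omega_m0 * h0^2
```

Note that the n(n−1)/2 pairs built from n measurements are strongly correlated, since each measurement enters n−1 pairs.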

Empirical H(z) data and constraints based directly on them
We used a collection of 36 measurements of H(z) in total, shown in Fig. (1). Among them, 30 data points come from cosmic chronometers [13,14,[26][27][28][29][30], i.e. the differential ages of passively evolving galaxies as a function of redshift. The other 6 points come from the BAO peak position used as a standard ruler in the radial direction [31][32][33][34]. Because the H(z) data come from two different techniques, and moreover one of the BAO points, the one at the highest redshift [34], was obtained in a different way than the other BAO data (from the Lyα forest), we also divided our data set (the full n = 36 sample) into sub-samples: n = 35 points with the high-z BAO point excluded, n = 30 from cosmic chronometers (DA) only and n = 6 from BAO only. Such a division is dictated by the desire to reveal possible systematics due to an inhomogeneous sample. We will use these data to estimate cosmological parameters denoted in short as p. In particular, p = {Ω_{m,0}, w} for the wCDM model and p = {Ω_{m,0}, w_0, w_a} for the CPL model. It is obvious that the ΛCDM model is nested within the above mentioned models and is equivalent to wCDM with the w parameter fixed at w = −1, or to CPL with w_0 = −1 and w_a = 0 fixed, so in this case p = {Ω_{m,0}}. For completeness and cross-checks we will also report fits of the present matter density parameter in the ΛCDM model. Let us note that we do not consider the Hubble constant H_0 as a free parameter for fitting. Therefore, as described in detail below, we either marginalize over H_0 (in a specific way) or use an informative prior for it. In order to estimate the best-fitted values of these parameters we will maximize the likelihood derived from the χ² function. In the case of constraints based on H(z) data it reads:

χ²_H = Σ_{i=1}^{n} [H(z_i; H_0, p)_{th} − H(z_i)_{obs}]² / σ²_{H,i} .

Because we treat H_0 as a nuisance parameter, one can factor it out, H(z; H_0, p)_{th} = H_0 E(z; p), and rewrite Eq. (3) in the following way:

χ²_H = Σ_{i=1}^{n} [H_0 E(z_i; p) − H(z_i)_{obs}]² / σ²_{H,i} ,

where only the dimensionless expansion rate E(z; p) depends explicitly on the cosmological model parameters.
Let us recall that in the wCDM model with a constant w coefficient in the equation of state the dimensionless expansion rate reads:

E²(z; p) = Ω_{m,0} (1+z)³ + (1 − Ω_{m,0}) (1+z)^{3(1+w)} ,

while for the Chevallier-Polarski-Linder (CPL) parametrization [6,7] one has:

E²(z; p) = Ω_{m,0} (1+z)³ + (1 − Ω_{m,0}) (1+z)^{3(1+w_0+w_a)} exp(−3 w_a z/(1+z)) .

Introducing the auxiliary quantities:

Q_1 = Σ_{i=1}^{n} E²(z_i; p)/σ²_{H,i} , Q_2 = Σ_{i=1}^{n} E(z_i; p) H(z_i)_{obs}/σ²_{H,i} , Q_3 = Σ_{i=1}^{n} H(z_i)²_{obs}/σ²_{H,i} ,

one can rewrite Eq. (4) as

χ²_H = H_0² Q_1 − 2 H_0 Q_2 + Q_3 .

Now, it is easy to see that the minimum of this chi-square with respect to the nuisance parameter H_0,

χ̃²_H = Q_3 − Q_2²/Q_1 ,

is attained at H_0 = Q_2/Q_1, and one can use it further to constrain the parameters p without any prior assumptions about H_0. This approach is an alternative to the standard procedure of marginalizing over H_0.

(Fragment of a figure caption: the left panels correspond to the H_0 prior from Planck [20] and the right two panels correspond to the H_0 from Riess et al. [37].)
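As a sketch of how this analytic profiling works, the following Python fragment implements E(z) for flat wCDM and CPL together with the chi-square minimized over H_0; the function names are ours and purely illustrative:

```python
import numpy as np

def E_wCDM(z, Om, w):
    """Dimensionless expansion rate E(z) for a flat wCDM model."""
    z = np.asarray(z, dtype=float)
    return np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))

def E_CPL(z, Om, w0, wa):
    """E(z) for the flat CPL parametrization w(z) = w0 + wa z/(1+z)."""
    z = np.asarray(z, dtype=float)
    de = (1 - Om) * (1 + z)**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(Om * (1 + z)**3 + de)

def chi2_profiled(z, H_obs, sigma, E_func):
    """Chi-square minimized analytically over the nuisance H0:
    returns (Q3 - Q2^2/Q1, best-fit H0 = Q2/Q1)."""
    Ez = E_func(z)
    Q1 = np.sum(Ez**2 / sigma**2)
    Q2 = np.sum(Ez * H_obs / sigma**2)
    Q3 = np.sum(H_obs**2 / sigma**2)
    return Q3 - Q2**2 / Q1, Q2 / Q1
```

On mock data generated from a known model, the profiled chi-square vanishes and H_0 = Q_2/Q_1 recovers the input value, which provides a quick sanity check of the implementation.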
Another approach is to take an informative prior for H_0. Following Farooq [35], we will assume that the prior distribution of H_0 is Gaussian with mean H̄_0 and standard deviation σ_{H_0}:

π(H_0) = [1/(√(2π) σ_{H_0})] exp[ −(H_0 − H̄_0)² / (2σ²_{H_0}) ] .

Then, we can build the posterior likelihood function L_H(p) by marginalizing over H_0:

L_H(p) = ∫_0^∞ π(H_0) exp( −χ²_H(H_0, p)/2 ) dH_0 ,

where χ²_H = H_0² Q_1 − 2 H_0 Q_2 + Q_3 and the terms Q_1, Q_2, Q_3 are the same as in Eq. (7). Performing the integral analytically, one arrives at the following expression for the posterior likelihood:

L_H(p) ∝ (1/√α) exp( −(γ − β²/α)/2 ) [ 1 + erf( β/√(2α) ) ] ,

with α = Q_1 + 1/σ²_{H_0}, β = Q_2 + H̄_0/σ²_{H_0}, γ = Q_3 + H̄_0²/σ²_{H_0}, and erf(x) = (2/√π) ∫_0^x e^{−t²} dt. Details of the derivation can be found in Farooq [35].
Then, we maximize the likelihood L_H(p) with respect to the parameters p in order to find the best-fitted parameter values p_0.
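The marginalized likelihood can be sketched in log-space as follows, completing the square in H_0 and integrating analytically over (0, ∞); the interface and names are illustrative assumptions, not taken from any released code:

```python
from math import erf, sqrt
import numpy as np

def log_like_H0_prior(z, H_obs, sigma, E_func, H0_mean, H0_sigma):
    """Log-likelihood marginalized over H0 in (0, inf) with a Gaussian
    prior N(H0_mean, H0_sigma^2); constants independent of the model
    parameters are dropped."""
    Ez = E_func(z)
    Q1 = np.sum(Ez**2 / sigma**2)
    Q2 = np.sum(Ez * H_obs / sigma**2)
    Q3 = np.sum(H_obs**2 / sigma**2)
    # complete the square: chi2 + prior term = alpha*H0^2 - 2*beta*H0 + gamma
    alpha = Q1 + 1.0 / H0_sigma**2
    beta = Q2 + H0_mean / H0_sigma**2
    gamma = Q3 + H0_mean**2 / H0_sigma**2
    return (-0.5 * (gamma - beta**2 / alpha) - 0.5 * np.log(alpha)
            + np.log1p(erf(beta / sqrt(2.0 * alpha))))
```

Working in log-space avoids underflow of the exponential when the chi-square is large, which matters when this quantity is fed to an MCMC sampler.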

Constraints based on two-point diagnostics
So far, the two-point diagnostic has mostly been used to test the validity of the ΛCDM model and, to some extent, its generalizations [15,19,21]. Here, we will use the Omh²(z_i, z_j) function for the purpose of constraining cosmological parameters p, following a strategy similar to the one described above for the expansion rates alone.
Introducing the simplifying notation h(z) ≡ H(z)/100 and h_0 ≡ H_0/100, one can express the theoretically expected Omh²(z_i, z_j; H_0, p)_{th} and the observed Omh²(z_i, z_j)_{obs} two-point diagnostics as

Omh²(z_i, z_j)_{th} = h_0² [E²(z_i; p) − E²(z_j; p)] / [(1+z_i)³ − (1+z_j)³] ,

Omh²(z_i, z_j)_{obs} = [h²(z_i) − h²(z_j)] / [(1+z_i)³ − (1+z_j)³] .

The χ² function for the Omh²(z_i, z_j) two-point diagnostics is

χ²_{Omh²} = Σ_{i<j} [ Omh²(z_i, z_j)_{th} − Omh²(z_i, z_j)_{obs} ]² / σ²_{Omh²}(z_i, z_j) ,

where σ²_{Omh²}(z_i, z_j) follows from propagating the uncertainties of h(z_i) and h(z_j). Then, we minimize this χ² function to find the best-fitted cosmological parameters.
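A minimal sketch of this pairwise χ², assuming a fixed h_0 = H_0/100 supplied externally (e.g. from one of the priors discussed above); the names are illustrative:

```python
import itertools
import numpy as np

def chi2_omh2(z, H_obs, sigma_H, E_func, h0):
    """Pairwise chi-square over all n(n-1)/2 Omh2(zi, zj) diagnostics.
    h0 = H0/100 must be supplied (e.g. from a prior) because it is
    degenerate with the other parameters in the theoretical Omh2."""
    h = np.asarray(H_obs) / 100.0
    sh = np.asarray(sigma_H) / 100.0
    Ez = E_func(z)
    chi2 = 0.0
    for i, j in itertools.combinations(range(len(z)), 2):
        denom = (1.0 + z[i])**3 - (1.0 + z[j])**3
        th = h0**2 * (Ez[i]**2 - Ez[j]**2) / denom
        obs = (h[i]**2 - h[j]**2) / denom
        # propagated variance of the observed pair: sigma(h^2) = 2 h sigma_h
        var = 4.0 * ((h[i] * sh[i])**2 + (h[j] * sh[j])**2) / denom**2
        chi2 += (th - obs)**2 / var
    return chi2
```

By construction, this χ² vanishes on mock data drawn exactly from the assumed model with the supplied h_0, which serves as a basic consistency check.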

Results and Discussion
Let us discuss the results starting with the ΛCDM model. Numerical details are displayed in Table (1) and comprise fits of Ω_{m,0} on different sub-samples using three techniques: the reduced chi-square of Eq. (9), the chi-square with Gaussian priors on H_0, and the Omh² two-point diagnostics. Results concerning the wCDM model are reported in Table (2) and shown on Fig. (2) - Fig. (3). When we take the prior H_0 = 67.4 ± 2.4 km/s/Mpc from [20], the dark energy equation of state constraint is almost totally consistent with ΛCDM, where w = −1. However, the prior H_0 = 73.24 ± 1.74 km/s/Mpc from Riess et al. [36] favors phantom behavior (w < −1). The conclusion is that the fits are very sensitive to the value of H_0, which is consistent with the findings of Farooq [35]. When we use H(z) measurements from the BAO and DA techniques separately, the results are different: H(z) data from DA favor quintessence (w > −1) while H(z) data from BAO favor phantom (w < −1) fields. Moreover, the H(z = 2.34) point has a big leverage on the final results. This is consistent with the conclusions of our previous works [15,21]. The reason lies in the different systematic effects between BAO and DA. The better constraining power of Omh²(z_i, z_j) as compared with the H(z) technique can be understood in terms of the sample size: a sample of n H(z) measurements provides us with n(n−1)/2 Omh²(z_i, z_j) data-points. This advantage does not show up for small samples like the n = 6 BAO H(z) data-points. However, the Omh²(z_i, z_j) diagnostic has a certain drawback: because H_0 is strongly degenerate with the other cosmological parameters, it is better to use a prior value of H_0.
Finally, the results concerning the CPL parametrization are shown in Table (3) and on Fig. (4) for the 30 DA H(z) points and the whole 36-point H(z) sample. The ΛCDM model, in which w_0 = −1 and w_a = 0, is identified in Fig. (4).

Table 2. Best-fitted parameters in the wCDM cosmological model using H(z) data alone and the Omh²(z_i, z_j) two-point diagnostics. Fits done on different sub-samples are reported. The first panel corresponds to the reduced χ² method. The second and third panels correspond to priors on H_0 taken after Planck [20] and after Riess et al. [37].

Conclusion
With an increasing number of cosmic chronometers [13,14] covering a bigger redshift range, we are starting to directly probe the expansion of the Universe through measurements of its expansion rate H(z) at different epochs. More importantly, this sort of measurement is not entangled with cosmic distance ladder considerations or any other calibrations pre-assuming a cosmological model. However, there has been some misunderstanding in this respect, since additional measurements of H(z) from BAO peak locations were used in the literature as well. In order to discuss this issue and show the performance of H(z) data in the context of cosmological model testing, we used the most complete data set available to date, mixing differential ages of passively evolving galaxies with BAO data-points. Besides such a full, inhomogeneous data-set, we considered homogeneous sub-samples as well. One of the conclusions was that BAO and DA data should not be mixed together for the purpose of testing cosmological models. This can be understood because the BAO technique pre-assumes a cosmological model in order to disentangle the BAO peak position from the redshift-space distortions due to peculiar velocities of galaxies. Indications of a bias introduced by BAO data have also been noticed in Zheng et al. [21].
In this paper we used both pure expansion rates H(z) and the two-point diagnostics Omh²(z_i, z_j). The latter was originally invoked as a litmus test for ΛCDM. There were ideas for using it in a broader context [17], illustrated with simulated future data. Here, we applied the two-point diagnostics to real data and demonstrated that they are able to give much more stringent constraints on cosmological parameters. This is because of the enhanced size of the data-set: from n original H(z) data-points one can get n(n − 1)/2 two-point diagnostics. The price one pays is that they are strongly correlated. Let us stress that the chi-square function we used was not meant to follow the chi-square distribution; it only served the purpose of defining the likelihood function to be maximized with MCMC simulations. Even though the constraining power of the Omh²(z_i, z_j) two-point diagnostics is considerable, it suffers from being sensitive to the H_0 prior. Hence the performance of this method crucially depends on our knowledge of the correct value of the Hubble constant. When this work was being completed, Leaf & Melia [38] published an important paper in which they introduced a new type of two-point diagnostics, completely independent of the Hubble constant H_0. They also gave a much more rigorous treatment of the statistical properties of these diagnostics. It would be interesting to use their approach in a similar way to what we did in this paper.