Cosmological model-independent test of $\Lambda$CDM with two-point diagnostic by the observational Hubble parameter data

Aiming to explore the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range $0<z \leqslant 2.36$ to perform a cosmological model-independent test of the $\Lambda$CDM model with the two-point $Omh^2(z_{2};z_{1})$ diagnostic. In the $\Lambda$CDM model, with equation of state (EoS) $w=-1$, the two-point diagnostic relation $Omh^2 \equiv \Omega_m h^2$ holds, where $\Omega_m$ is the present matter density parameter and $h$ is the Hubble constant divided by 100 $\rm km\ s^{-1}\ Mpc^{-1}$. We utilize two methods, the weighted mean and median statistics, to bin the OHD and thereby increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of $Omh^2$ fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1$\sigma$ confidence interval. Therefore, we conclude that the $\Lambda$CDM model cannot be ruled out.


Introduction
Over the past few decades, a number of approaches have been proposed to quantitatively investigate the expansion history and structure growth of the universe [see 1, 2, for recent reviews]. The observations of Type Ia supernovae (SNIa) [3,4] have provided ample evidence for an accelerating expansion with increasing precision [5]. Other complementary probes support this phenomenon, including the baryon acoustic oscillation (BAO) measurements, weak gravitational lensing, the abundance of galaxy clusters [6], the cosmic microwave background (CMB) anisotropies, the linear growth of large-scale structure [7], and the Hubble constant $H_0$ [8]. Plenty of cosmological models have been raised to account for the acceleration phenomenon, yet the best-fit one remains uncertain.
The existence of dark energy (DE) with a negative equation of state (EoS) parameter $w \equiv p_{\rm DE}/\rho_{\rm DE}$ is currently considered the prevailing interpretation. In this context, the most popular DE model remains the simple cosmological constant cold dark matter ($\Lambda$CDM) model, with $w = -1$ at all times [9]. However, the popularity of the $\Lambda$CDM model does not hide the fact that it suffers from the fine-tuning and coincidence problems [10,11]. In addition, it is worth noticing the possibility of DE with an evolving EoS [12], i.e., the dynamical DE models ($w = w(z)$), such as Quintessence [$w > -1$, 13,14], Phantom [$w < -1$, 15], K-essence [$w > -1$ or $w < -1$, 16,17], and especially Quintom [$w$ crossing $-1$, 18,19] models. Nevertheless, all these models still call for more profound physical explanations. Moreover, Zhao et al. [12] find that an evolving DE can relieve the tensions presented among existing datasets within the $\Lambda$CDM framework. Meanwhile, it is useful to introduce diagnostics based upon direct observations and capable of revealing dynamical features of DE. One of these diagnostics is $Om(z)$, which is defined as a function of redshift $z$ [20,21], i.e.,
$$Om(z) = \frac{\tilde{h}^2(z) - 1}{(1+z)^3 - 1},$$
with $\tilde{h} = H(z)/H_0$ and $H(z)$ denoting the Hubble expansion rate. $Om(z)$ has the property of reducing to $\Omega_m$ in the $w = -1$ case. Moreover, Shafieloo et al. [22] modified this diagnostic to accommodate two-point situations as follows,
$$Om(z_2; z_1) = \frac{\tilde{h}^2(z_2) - \tilde{h}^2(z_1)}{(1+z_2)^3 - (1+z_1)^3}.$$
In this case, if $Om(z_2; z_1) \equiv \Omega_m$ held for any redshift interval, it would substantiate the validity of $\Lambda$CDM.
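As a concrete illustration, the two-point diagnostic above can be evaluated numerically. The following Python sketch is not part of the original analysis; the function name and sample inputs are purely illustrative.

```python
# Sketch (not from the paper): evaluating the two-point Om(z2; z1)
# diagnostic of Shafieloo et al. from a pair of H(z) measurements.

def om_two_point(z1, z2, H1, H2, H0=67.81):
    """Om(z2; z1) = (h^2(z2) - h^2(z1)) / ((1+z2)^3 - (1+z1)^3),
    with h = H/H0; reduces to Omega_m for flat LambdaCDM (w = -1)."""
    h1, h2 = H1 / H0, H2 / H0
    return (h2**2 - h1**2) / ((1 + z2)**3 - (1 + z1)**3)

# Two illustrative measurements (values are not from Table 1):
print(om_two_point(0.17, 1.75, 83.0, 202.0))
```

For any pair of redshifts drawn from an exact flat $\Lambda$CDM expansion history, this function returns $\Omega_m$ identically, which is precisely the property the diagnostic exploits.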
In other words, a measurement of $Om(z_2; z_1) \neq \Omega_m$ would imply a deviation from $\Lambda$CDM and indicate that other DE models with an evolving EoS should be taken into account.
In this paper, we first introduce the measurements of the observational H(z) data (OHD) and then exhibit the currently available data sets in Section 2. In Section 3, we apply two binning methods, the weighted mean and median statistics techniques, to the OHD and obtain the binned OHD categorically. In Section 4, based on the binned OHD, we test the $\Lambda$CDM model with the two-point $Omh^2(z_2;z_1)$ diagnostic. Finally, we summarize our conclusions in Section 5.

The observational H(z) data sets
The OHD can be used to constrain cosmological parameters because they are obtained from model-independent direct observations. Until now, three methods have been developed to measure OHD: cosmic chronometers, radial BAO size methods [23], and gravitational waves [24]. Jimenez et al. [25] first proposed that relative galaxy ages can be used to obtain H(z) values, and they reported one H(z) measurement at z ∼ 0.1 in their later work [26]. Simon et al. [27] added eight additional H(z) points in the redshift range between 0.17 and 1.75 from the differential ages of passively evolving galaxies, and further constrained the redshift dependence of the DE potential by reconstructing it as a function of redshift. Later, Stern et al. [28] provided two new determinations from red-envelope galaxies and then constrained cosmological parameters, including curvature, through a joint analysis with CMB data. Furthermore, Moresco et al. [29] obtained eight new measurements of H(z) from the differential spectroscopic evolution of early-type, massive, red elliptical galaxies, which can be used as standard cosmic chronometers. By applying the galaxy differential age method to SDSS DR7, Zhang et al. [30] expanded the H(z) data sample by four new points. Taking advantage of near-infrared spectroscopy of high-redshift galaxies, Moresco et al. [31] obtained two measurements of H(z), and later five more H(z) values [32]. Recently, Ratsimbazafy et al. [33] provided one more measurement of H(z) based on the analysis of high-quality spectra of Luminous Red Galaxies (LRGs) obtained with the Southern African Large Telescope (SALT).
On the other side, H(z) can also be extracted from the detection of radial BAO features. Gaztañaga et al. [34] first obtained two H(z) data points using the BAO peak position as a standard ruler in the radial direction.
Blake et al. [35] further combined the measurements of BAO peaks and the Alcock-Paczynski distortion to find three other H(z) results. Samushia et al. [36] provided an H(z) point at z = 0.57 from the BOSS DR9 CMASS sample. Xu et al. [37] used the BAO signals from the SDSS DR7 luminous red galaxy sample to derive another observational H(z) measurement. The H(z) values determined from BAO features in the Lyman-α forest of SDSS-III quasars were presented by Delubac et al. [38] and Font-Ribera et al. [39]; these are the farthest precisely observed H(z) results so far. Alam et al. [40] obtained three H(z) measurements from the cosmological analysis of the DR12 galaxy sample.
Moreover, Liu et al. [24] presented a new method of measuring the Hubble parameter using the anisotropy of the luminosity distance and the analysis of gravitational waves from neutron star binary systems.
After evaluating these data points from [41,42], we combine these 43 OHD, present them in Table 1, and mark them in Figure 1. Note that the cosmic chronometer method is obviously completely model-independent, while one may misjudge the radial BAO method to be model-dependent since it involves fiducial $\Lambda$CDM models. In fact, however, the fiducial models cannot affect the results, as mentioned in the references (e.g., see p. 5 of Alam et al. [40]). Moreover, the three H(z) measurements taken from Blake et al. [35] are correlated with each other, and likewise the three measurements of Alam et al. [40]. This fact will affect the choice of binning ranges afterwards.
We use a $\Lambda$CDM model with no curvature term to compare theoretical values of the Hubble parameter with the OHD results, with the Hubble parameter given by
$$H(z) = H_0 \sqrt{\Omega_m (1+z)^3 + 1 - \Omega_m},$$
where the cosmological parameters take values from the Planck temperature power spectrum measurements [43]: the best-fit value of $H_0$ is 67.81 km s$^{-1}$ Mpc$^{-1}$, and $\Omega_m$ is 0.308. The theoretical H(z) computed from this $\Lambda$CDM model is also shown in Figure 1.
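This theoretical curve can be sketched in a few lines of Python (a minimal illustration, assuming only the Planck best-fit values quoted above):

```python
import numpy as np

# Sketch: the flat LambdaCDM expansion rate used for the theoretical
# curve, with the Planck best-fit values quoted in the text.
H0 = 67.81        # km/s/Mpc
OMEGA_M = 0.308   # present matter density parameter

def hubble_lcdm(z):
    """H(z) = H0 * sqrt(Omega_m (1+z)^3 + 1 - Omega_m)."""
    return H0 * np.sqrt(OMEGA_M * (1 + z)**3 + 1 - OMEGA_M)

# Evaluate over the redshift range covered by the OHD sample:
print(hubble_lcdm(np.array([0.0, 0.5, 1.0, 2.36])))
```

At z = 0 the expression recovers $H_0$, and it grows monotonically with redshift, as expected for a matter-plus-$\Lambda$ universe.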
Being independent observational data, H(z) determinations have been frequently used in cosmological research. One of the leading purposes is to constrain DE. Jimenez & Loeb [25] first proposed that H(z) measurements can be used to constrain the DE EoS at high redshifts. Simon et al. [27] derived constraints on the DE potential using H(z) results and supernova data. Samushia & Ratra [44] began applying these measurements to constraining cosmological parameters in various DE models. Meanwhile, DE evolution came into its own as an active research field in the last twenty years [45][46][47][48][49]. To sum up, the OHD have proved very promising towards understanding the nature of DE. In the next section, we will bin the OHD in Table 1 by using two binning techniques.

Binning OHD
As stated by Farooq et al. [42,50], two techniques, the weighted mean and median statistics, can be used to bin Hubble parameter measurements. In [42], they listed two reasons to compute "average" H(z) values for bins in redshift space. On the one hand, the weighted mean technique can indicate whether the original data have error bars inconsistent with Gaussianity. On the other hand, the binned data can more clearly illustrate, visually, tendencies as a function of redshift, without the assumption of a particular cosmological model.
As for the weighted mean technique, the ideal choice should be a trade-off between bin size and the number of measurements per bin that maximizes both quantities. In order to avoid correlations within one bin, we choose 3-4, 4-5, 5-6, and 5-6-7 measurements per bin, which separates the correlated data; the last four measurements are binned by twos in all cases.
According to Podariu et al. [51], the weighted mean of H(z) is given by
$$\bar{H} = \frac{\sum_{i=1}^{N} H(z_i)/\sigma_i^2}{\sum_{i=1}^{N} 1/\sigma_i^2},$$
where $H(z_i)$ and $\sigma_i$ stand for the Hubble parameter data and the standard deviation of the $i = 1, 2, ..., N$ measurements in the binning redshift range. Similarly, the corresponding weighted bin redshift $\bar{z}$ and weighted error $\bar{\sigma}$ are
$$\bar{z} = \frac{\sum_{i=1}^{N} z_i/\sigma_i^2}{\sum_{i=1}^{N} 1/\sigma_i^2}$$
and
$$\bar{\sigma} = \left(\sum_{i=1}^{N} 1/\sigma_i^2\right)^{-1/2}.$$
The goodness of fit for each bin, the reduced $\chi_\nu$, can be expressed as
$$\chi_\nu = \left[\frac{1}{N-1}\sum_{i=1}^{N} \frac{\left(H(z_i)-\bar{H}\right)^2}{\sigma_i^2}\right]^{1/2},$$
where the expected value and error of $\chi_\nu$ are unity and $1/\sqrt{2(N-1)}$. Thus the number of standard deviations by which $\chi_\nu$ deviates from unity for each bin is
$$N_\sigma = |\chi_\nu - 1|\sqrt{2(N-1)}.$$
Non-Gaussian measurements, the presence of unaccounted-for systematic errors, or correlations between measurements can result in large $N_\sigma$. The weighted mean results for the binned H(z) measurements are listed in Table 2, where the $N_\sigma$ values are considerably small for all bins, just like the results of Farooq et al. [42], hence indicating that the 43 OHD are not inconsistent with Gaussianity.
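The weighted-mean statistics above can be sketched as follows; this is a minimal illustration of the Podariu et al. formulas, with sample inputs that are illustrative rather than actual Table 1 data.

```python
import numpy as np

# Sketch of the weighted-mean bin statistics described in the text.

def weighted_mean_bin(z, H, sigma):
    z, H, sigma = map(np.asarray, (z, H, sigma))
    w = 1.0 / sigma**2                         # inverse-variance weights
    H_bar = np.sum(w * H) / np.sum(w)          # weighted mean H
    z_bar = np.sum(w * z) / np.sum(w)          # weighted bin redshift
    sig_bar = 1.0 / np.sqrt(np.sum(w))         # weighted error
    N = len(H)
    # reduced chi_nu and its deviation from unity in standard deviations
    chi_nu = np.sqrt(np.sum(w * (H - H_bar)**2) / (N - 1))
    N_sig = abs(chi_nu - 1.0) * np.sqrt(2 * (N - 1))
    return z_bar, H_bar, sig_bar, chi_nu, N_sig

# Illustrative three-point bin (not Table 1 values):
print(weighted_mean_bin([0.07, 0.09, 0.12],
                        [69.0, 69.0, 68.6],
                        [19.6, 12.0, 26.2]))
```

A small $N_\sigma$ returned for every bin is what the text interprets as the data being "not inconsistent with Gaussianity".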
Since the median statistics technique originally proposed by Gott et al. [52] presupposes that measurements of a given quantity are independent and free of systematic effects, and since, as previously mentioned, the correlated OHD measurements from Blake et al. [35] and Alam et al. [40] may contaminate the results, we decide to remove these data for the sake of purity. The other measurements are all uncorrelated and independent of each other, so they are more suitable for evaluation and free of negative influence on the binning results. After assuming that there is no overall systematic error in the reduced OHD as a whole and that all the remaining measurements are independent, it is convenient to use median statistics to combine the OHD. As the number of measurements increases and approaches infinity, the median approaches the true value; this technique therefore has the merit of reducing the effect of outliers on the estimate of the true median value. Although the OHD are somewhat short in quantity compared with the large number of measurements needed to reveal the true value, we still employ this technique for comparison purposes. If $N$ measurements $M_i$ (where $i = 1, 2, ..., N$) are considered, the probability that the true median lies between $M_i$ and $M_{i+1}$ is
$$P_i = \frac{2^{-N} N!}{i!\,(N-i)!},$$
where $N!$ represents the factorial of $N$. After applying this technique, with the same binning scheme presented above, to the reduced 37 OHD, we obtain the binned results listed in Table 3. The results seem reasonable, but the precision is lower than that of the weighted mean results, which may be caused by the smaller number of OHD. Even though the OHD obtained from the BAO method are model-independent, one can still be confused by the employed fiducial models.
Therefore, to avoid this confusion and also for comparison purposes, we decide to exclude the OHD from the BAO method and consider only the cosmic chronometer case, which also has the merit of the measurements being independent of each other. Tables 4 and 5 present the results from both binning methods.
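The binomial probability underlying the median statistics technique described above can be sketched as follows (illustrative only; the helper name is hypothetical):

```python
from math import comb

# Sketch of the median-statistics probability of Gott et al.: for N
# independent measurements with no systematic effects, the chance that
# the true median lies between the i-th and (i+1)-th ordered values is
# P_i = 2^{-N} N! / (i! (N-i)!), a binomial distribution.

def median_prob(N, i):
    return comb(N, i) / 2**N

# The probabilities sum to unity over i = 0..N, e.g. for the
# 37 uncorrelated OHD retained in the text:
print(sum(median_prob(37, i) for i in range(38)))
```

Because the distribution is binomial and peaked at $i = N/2$, outliers at the extremes of the ordered sample carry very little weight, which is the robustness property the text appeals to.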
The results are all reasonable as well, and compared to the full binned OHD, the discrepancies are considerably small, which can be seen as evidence of the validity of the OHD derived from the BAO method. Since the different numbers of measurements per bin do not significantly affect the results, we can acknowledge the robustness of the binning methods.

Testing the $\Lambda$CDM model with $Omh^2(z_2;z_1)$ diagnostic
In our analysis, the validity of the $Omh^2(z_2;z_1)$ diagnostic can be tested using H(z) results from cosmology-independent measurements. On the basis of the above section, we apply the binned OHD from both the weighted mean technique and the median statistics technique to the two-point $Omh^2(z_2;z_1)$ diagnostic. If $Om(z_2;z_1)$ is always a constant at any redshifts, then the DE is of the cosmological constant nature. In order to compare directly with the results from the CMB, Sahni et al. [53] introduced a more convenient expression of the two-point diagnostic, i.e.,
$$Omh^2(z_2;z_1) = \frac{h^2(z_2) - h^2(z_1)}{(1+z_2)^3 - (1+z_1)^3},$$
where $h(z) = H(z)/100\ \rm km\ s^{-1}\ Mpc^{-1}$. The binned H(z) points calculated with the aforementioned binning methods in each case therefore yield $N(N-1)/2$ model-independent measurements of the $Omh^2(z_2;z_1)$ diagnostic, as shown in Figs. 2-7, where the uncertainty $\sigma_{Omh^2(z_2;z_1)}$ can be expressed as
$$\sigma_{Omh^2(z_2;z_1)} = \frac{2\sqrt{h^2(z_2)\,\sigma_{h(z_2)}^2 + h^2(z_1)\,\sigma_{h(z_1)}^2}}{(1+z_2)^3 - (1+z_1)^3}.$$
For $\Lambda$CDM, we have $Omh^2 \equiv \Omega_m h^2$. The value of $\Omega_m h^2$ is constrained tightly by the Planck observations to be centered around 0.14 for the base $\Lambda$CDM model fit [43]: the Planck temperature power spectrum data alone give 0.1426 ± 0.0020, the Planck temperature data with lensing reconstruction give 0.1415 ± 0.0019, and the Planck temperature data with lensing and external data give 0.1413 ± 0.0011, all at the 1σ confidence level (CL). As shown in Fig.
2, the results from the weighted mean cases are, within the 1σ confidence interval, mostly consistent with both a constant value (on average) and the Planck value, although some exceptions are present and the best-fit $Omh^2(\Delta z)$ values fluctuate. Also, as shown in Fig. 3, the results from the median statistics cases are all consistent with both a constant value and the Planck value within the 1σ confidence interval. Note that these results are not for continuous redshift intervals; they only show the differences for different Δz. Therefore, it is useful to plot the continuous results alone to extrapolate the outcomes. We illustrate these results with the binned OHD from both binning methods in Fig. 4, which shows that the values of $Omh^2$ for both binning methods fluctuate as the continuous redshift intervals change. The difference between the results from the weighted mean binned data and those from median statistics is that the fluctuations in the former case are more intense, which makes the tendency more distinct in the first four panels of Fig. 4. Thus, it is fair to conclude that the validity of $\Lambda$CDM is preserved.
Also, after binning the OHD from the cosmic chronometer method, the corresponding $Omh^2$ results for both binning techniques are shown in Figs. 5-7. It is evident that the fluctuations of the best-fit $Omh^2$ values are more intense than in the results from the full binned OHD for both binning methods. However, on average, the results are constant within the 1σ region. Hence, according to the two-point $Omh^2(z_2;z_1)$ diagnostic combined with the binned OHD results, the $\Lambda$CDM model is favored. We note, however, that the error bars of these results are much larger than those of the Planck result, so we can only conclude that the flat $\Lambda$CDM model cannot be ruled out.
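The two-point $Omh^2$ diagnostic and its propagated uncertainty used in this section can be sketched in Python, assuming independent errors on the two $h(z)$ values; the input pair below is illustrative, not a binned result from the paper.

```python
import numpy as np

# Sketch: two-point Omh^2(z2; z1) with standard error propagation.

def omh2_two_point(z1, z2, H1, H2, sig1, sig2):
    h1, h2 = H1 / 100.0, H2 / 100.0     # h(z) = H(z)/100 km/s/Mpc
    s1, s2 = sig1 / 100.0, sig2 / 100.0
    D = (1 + z2)**3 - (1 + z1)**3
    value = (h2**2 - h1**2) / D
    # first-order propagation of the (assumed independent) h(z) errors
    sigma = 2.0 * np.sqrt((h2 * s2)**2 + (h1 * s1)**2) / abs(D)
    return value, sigma

# Illustrative redshift pair spanning the sample range:
val, err = omh2_two_point(0.07, 2.34, 69.0, 222.0, 19.6, 7.0)
print(f"Omh2 = {val:.3f} +/- {err:.3f}")
```

For input values generated from an exact flat $\Lambda$CDM history, the returned value equals $\Omega_m h^2$ for every redshift pair, which is the constancy the Planck comparison tests.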

Conclusions and Discussions
In this paper, motivated by investigations into the nature of DE, we test the validity of $\Lambda$CDM with the two-point $Omh^2(z_2;z_1)$ diagnostic by using 43 observational H(z) data (OHD) obtained from the cosmic chronometer and BAO methods.
Firstly, instead of employing the OHD directly in the $Omh^2(z_2;z_1)$ diagnostic, we introduce two binning methods, the weighted mean and median statistics, to reduce the noise in the data. After binning the OHD, we conclude that the original OHD are not inconsistent with Gaussianity, and the binned data listed in Tables 2 and 3 are all reasonable. The OHD obtained from the BAO method are not generally considered to be completely model-independent, even though, as mentioned above, the fiducial models indeed cannot affect the results (e.g., see p. 5 of Alam et al. [40]), which means the data are model-independent after all. Nevertheless, given the concerns raised by some researchers, we also apply the binning methods to the OHD derived from the cosmic chronometer method alone, as listed in Tables 4 and 5, for comparison. The results all seem reasonable and, compared to the full binned OHD, the discrepancies are considerably small, which can be taken as indirect evidence for the validity of the OHD derived from the BAO method. Since the different numbers of measurements per bin do not significantly affect the results, we can acknowledge the robustness of the binning methods.
Secondly, combining the sets of binned OHD, we exploit the $Omh^2(z_2;z_1)$ diagnostic to test whether the $Omh^2$ values are constant. From Figs. 2-7, we find that on average the $Omh^2$ values are mostly constant within the 1σ confidence interval. Therefore, the flat $\Lambda$CDM model is not invalid. However, this does not mean that dynamical DE models are not worth considering.
It is worth noticing that more independent OHD would improve the accuracy of the binning methods, which would result in more reliable $Omh^2(z_2;z_1)$ values. Also, as the number of OHD grows, the binned OHD would become more efficient as a data-refinement instrument that can be employed in cosmological constraints. OHD of higher precision and in larger quantities are therefore needed and valuable.
