1 Introduction

Accumulated data from various probes lead to the safe deduction that the universe has undergone two phases of accelerated expansion, at early and late cosmological times respectively. Such a behavior may require the introduction of extra degrees of freedom capable of triggering it (the simple cosmological constant can sufficiently describe the latter phase, but it is not adequate to describe the former one). A first main direction is to construct modified gravitational theories, which possess general relativity as a particular limit, but which on larger scales can produce the above phenomenology, such as f(R) gravity [1,2,3], f(G) gravity [4], Galileon theory [5], f(T) gravity [6,7,8], Finsler gravity [9], etc. (see [10,11,12,13] for reviews). The second main direction is to maintain general relativity as the underlying gravitational theory and introduce the inflaton field(s) [14, 15] and/or the dark energy concept, attributed to new fields, particles or fluids [16, 17].

One interesting approach for the description of dark energy arises from holographic considerations [18,19,20,21,22]. Specifically, since the largest length of a quantum field theory is connected to its ultraviolet cutoff [23], one obtains a vacuum energy which at cosmological scales takes the form of a holographic dark energy [24, 25]. Holographic dark energy is very efficient in quantitatively describing the late-time acceleration [24,25,26,27,28,29,30,31,32,33,34,35] and it is in agreement with observational data [36,37,38,39,40,41,42,43,44]. Hence, many extensions of the basic scenario have appeared in the literature, based mainly on the use of different horizons as the largest distance (i.e. the universe “radius”) [45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70].

One such extension is Barrow holographic dark energy, which arises by applying the usual holographic principle but using the recently proposed Barrow entropy instead of the Bekenstein–Hawking one. Barrow entropy is a modification of the black-hole entropy caused by quantum-gravitational effects that deform the horizon, leading it to acquire a fractal, intricate structure [71]. Hence, one obtains an extended holographic dark energy, which includes basic holographic dark energy as a sub-case in the limit where Barrow entropy becomes the Bekenstein–Hawking one, but which in general is a novel scenario exhibiting richer and more interesting phenomenology [72].

In the present work we desire to use observational data from the Supernovae (SNIa) Pantheon sample, and from direct Hubble constant measurements with cosmic chronometers (CC), in order to constrain Barrow holographic dark energy, and in particular to impose observational bounds on the new Barrow exponent that quantifies the quantum-gravitational deformation and thus the deviation from usual holographic dark energy. The plan of the work is the following: in Sect. 2 we briefly review Barrow holographic dark energy. In Sect. 3 we present the various datasets, the applied methodology, and the information criteria that we will use. In Sect. 4 we provide the obtained results and we give the corresponding contour plots. Finally, in Sect. 5 we summarize and conclude.

2 Barrow holographic dark energy

In this section we present the cosmological scenario of Barrow holographic dark energy (for other cosmological applications of Barrow entropy see [73, 74]). Barrow entropy is a quantum-gravitationally corrected black-hole entropy, due to the fractal structure induced on the horizon, and it takes the form [71]

$$\begin{aligned} S_B= \left( \frac{A}{A_0} \right) ^{1+\Delta /2}, \end{aligned}$$
(2.1)

where A is the standard horizon area and \(A_0\) the Planck area. The quantum deformation, and hence the deviation from Bekenstein–Hawking entropy, is quantified by the new exponent \(\Delta \), which takes the value \(\Delta =0\) in the standard, non-deformed case, while \(\Delta =1\) corresponds to maximal deformation.

We consider a flat Friedmann-Robertson-Walker (FRW) geometry with metric

$$\begin{aligned} ds^{2}=-dt^{2}+a^{2}(t)\delta _{ij}dx^{i}dx^{j}\,, \end{aligned}$$
(2.2)

where a(t) is the scale factor. As shown in [72], application of the holographic principle using Barrow entropy (2.1) leads to Barrow holographic dark energy, whose energy density reads:

$$\begin{aligned} \rho _{DE}={C} R_h^{\Delta -2}, \end{aligned}$$
(2.3)

where C is a parameter with dimensions \([L]^{-2-\Delta }\), and \(R_h\) the future event horizon

$$\begin{aligned} R_h\equiv a\int _t^\infty \frac{dt}{a}= a\int _a^\infty \frac{da}{Ha^2}, \end{aligned}$$
(2.4)

where \(H\equiv \dot{a}/a\) is the Hubble parameter.

The two Friedmann equations are

$$\begin{aligned} 3M_p^2 H^2 = \rho _m + \rho _{DE}, \end{aligned}$$
(2.5)
$$\begin{aligned} -2 M_p^2\dot{H} = \rho _m +p_m+\rho _{DE}+p_{DE}, \end{aligned}$$
(2.6)

with \(M_p=1/\sqrt{8\pi G}\) the reduced Planck mass. Moreover, \(p_{DE}\) is the pressure of Barrow holographic dark energy, and \(\rho _m\), \(p_m\) are respectively the energy density and pressure of the matter fluid. As usual, we consider the two sectors to be non-interacting, and thus the usual conservation equations hold

$$\begin{aligned}&\dot{\rho }_m+3H(\rho _m+p_m)=0, \end{aligned}$$
(2.7)
$$\begin{aligned}&\dot{\rho }_{DE}+3H\rho _{DE}(1+w_{DE})=0. \end{aligned}$$
(2.8)

In the following we focus on the case of dust matter, namely we assume that \(p_m=0\).

Introducing the density parameters \( \Omega _i\equiv \frac{1}{3M_p^2H^2}\rho _i\), in the case \(0\le \Delta <1\) one can easily extract the evolution equation for \(\Omega _{DE}\) as a function of \(x\equiv \ln a=-\ln (1+z)\), with z the redshift (setting \(a_0=1\)), namely [72]

$$\begin{aligned} \frac{\Omega _{DE}'}{\Omega _{DE}(1-\Omega _{DE})} = \Delta +1+ Q\, (1-\Omega _{DE})^{\frac{\Delta }{ 2(\Delta -2) } }\, (\Omega _{DE})^{\frac{1}{2-\Delta } }\, e^{\frac{3\Delta }{2(\Delta -2)}x}, \end{aligned}$$
(2.9)

with

$$\begin{aligned} Q\equiv (2-\Delta )\left( \frac{{C}}{3M_p^2}\right) ^{\frac{1}{\Delta -2}} \left( H_0\sqrt{\Omega _{m0}}\right) ^{\frac{\Delta }{2-\Delta }} \end{aligned}$$
(2.10)

a dimensionless parameter, and where primes denote derivatives with respect to x. Furthermore, the equation of state of Barrow holographic dark energy, i.e. \(w_{DE}\equiv p_{DE}/\rho _{DE}\), is given by

$$\begin{aligned} w_{DE}=-\frac{1\!+\!\Delta }{3} -\frac{Q}{3} (\Omega _{DE})^{\frac{1}{2-\Delta } } (1\!-\!\Omega _{DE})^{\frac{\Delta }{ 2(\Delta -2) } } e^{\frac{3\Delta }{2(2-\Delta )}x}. \end{aligned}$$
(2.11)

Barrow holographic dark energy is a new dark energy scenario. In the case \(\Delta =0\) it coincides with standard holographic dark energy \(\rho _{DE}=3c^2 M_p^2 R_h^{-2}\), with \({C}=3 c^2 M_p^2\) the model parameter. In this case (2.9) becomes \(\Omega _{DE}'|_{_{\Delta =0}}= \Omega _{DE}(1-\Omega _{DE})\left( 1+2\sqrt{\frac{3M_p^2\Omega _{DE}}{{C}}} \right) \), which can be analytically solved in implicit form [24], while \(w_{DE}|_{_{\Delta =0}}=-\frac{1}{3}-\frac{2}{3}\sqrt{\frac{3M_p^2 \Omega _{DE}}{{C}}}\), which is the standard holographic dark energy result [25]. However, in the case \(\Delta >0\), where the deformation effects switch on, the scenario at hand departs from the standard one, leading to a different cosmological behavior. Lastly, in the upper limit \(\Delta =1\), it coincides with \(\Lambda \)CDM cosmology.
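For general \(\Delta \), Eq. (2.9) must be solved numerically. The following minimal Python sketch (offered only as an illustration; the parameter values are assumptions, not fit results) integrates (2.9) backwards from the present, reconstructs H(z) through the Friedmann equation (2.5) with dust matter, and evaluates \(w_{DE}\) through (2.11). Note that \(Q=2\) corresponds, via Eq. (2.10), to \(\Delta =0\) and \({C}=3M_p^2\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- illustrative parameter values (assumptions, not fit results) ---
Om0, h, Delta, Q = 0.30, 0.69, 0.09, 2.0  # Q from Eq. (2.10); Q=2 for Delta=0, C=3 M_p^2
H0 = 100.0 * h                            # km/s/Mpc

def dOmega_dx(x, y):
    """Right-hand side of Eq. (2.9) for Omega_DE(x), with x = ln a."""
    Om = y[0]
    rhs = Om * (1.0 - Om) * (
        Delta + 1.0
        + Q * (1.0 - Om) ** (Delta / (2.0 * (Delta - 2.0)))
            * Om ** (1.0 / (2.0 - Delta))
            * np.exp(3.0 * Delta * x / (2.0 * (Delta - 2.0)))
    )
    return [rhs]

# integrate backwards from today (x = 0, a0 = 1) up to z ~ 2.5
x_grid = np.linspace(0.0, -np.log(1.0 + 2.5), 200)
sol = solve_ivp(dOmega_dx, (0.0, x_grid[-1]), [1.0 - Om0],
                t_eval=x_grid, rtol=1e-8)
Om_DE = sol.y[0]

# Friedmann eq. (2.5) with dust matter: H^2 (1 - Omega_DE) = H0^2 Om0 e^{-3x}
H = H0 * np.sqrt(Om0) * np.exp(-1.5 * x_grid) / np.sqrt(1.0 - Om_DE)

# dark-energy equation of state, Eq. (2.11)
w_DE = (-(1.0 + Delta) / 3.0
        - (Q / 3.0) * Om_DE ** (1.0 / (2.0 - Delta))
          * (1.0 - Om_DE) ** (Delta / (2.0 * (Delta - 2.0)))
          * np.exp(3.0 * Delta * x_grid / (2.0 * (2.0 - Delta))))
```

The resulting H array, interpolated in redshift, is what the likelihoods of the next section require.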

3 Data and methodology

In this section we provide the various datasets that are going to be used for the observational analysis, and then we present the statistical methods that we employ. We use data from Supernovae type Ia observations together with direct H(z) Hubble data, and we apply the method of maximum likelihood analysis in order to extract constraints on the free model parameters. As a final step, we employ known information criteria in order to assess the quality of the fittings.

3.1 Cosmological probes

3.1.1 Type Ia Supernovae

Perhaps the best known and most frequently used cosmological probes are distant Type Ia Supernovae. A supernova explosion is an extremely luminous event, with its brightness being comparable to that of its host galaxy [75]. The observed light curves possess a peak brightness mostly unaffected by the distance, and thus SNIa can be used as standard candles. Specifically, one can use the observed distance modulus, \(\mu _{obs}\), to constrain cosmological models. We use the most recent dataset available, namely the binned Pantheon dataset described in [75]. Finally, the corresponding likelihood reads

$$\begin{aligned} {\mathcal {L}}_{SNia}(Y;{\mathcal {M}})\sim \exp \left( -\frac{1}{2}\sum _{i=1}^{40} m_{i}C_{cov}^{-1}m_{i}^{\dagger }\right) , \end{aligned}$$
(3.1)

where Y is the vector of the free parameters of the cosmological model, \(m_{i} =\mu _{obs,i}-\mu _{theor}(z_i)-{\mathcal {M}}\) and \(\mu _{theor} = 5\log (\frac{D_{L}}{1\,\mathrm{Mpc}}) + 25\), with \(D_L\) the standard luminosity distance, given as \(D_L = c(1+z)\int _{0}^{z}\frac{dz'}{H(z')}\), which holds for a flat FRWL space-time regardless of the underlying cosmology. Finally, \(C_{cov}\) is the covariance matrix of the binned Pantheon dataset. The parameter \({\mathcal {M}}\) is an intrinsic free parameter of the Pantheon dataset and quantifies a variety of observational uncertainties, e.g. host galaxy properties, etc.
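As an illustration of how the likelihood (3.1) is evaluated in practice, a minimal sketch follows; the function H_of_z (the model prediction in km/s/Mpc, e.g. interpolated from the solver output of Sect. 2) and the binned Pantheon data arrays are assumed to be supplied externally.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def mu_theor(z, H_of_z, n=512):
    """mu = 5 log10(D_L / 1 Mpc) + 25, with D_L = c (1+z) int_0^z dz'/H(z')
    for a flat FRWL background (H in km/s/Mpc)."""
    zz = np.linspace(0.0, z, n)
    D_L = C_KM_S * (1.0 + z) * np.trapz(1.0 / H_of_z(zz), zz)  # in Mpc
    return 5.0 * np.log10(D_L) + 25.0

def chi2_snia(z_obs, mu_obs, cov_inv, H_of_z, M_offset):
    """-2 ln L_SNIa of Eq. (3.1) for the 40 binned Pantheon points."""
    m = np.array([mu_obs[i] - mu_theor(z_obs[i], H_of_z) - M_offset
                  for i in range(len(z_obs))])
    return m @ cov_inv @ m
```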

3.1.2 Cosmic chronometers

Data from the so-called “cosmic chronometers” (CC) are measurements of the Hubble rate, based upon the estimation of the differential age of passively evolving galaxies. The latter are galaxies whose emission spectra are dominated by old stellar populations. The central idea is to use the definition of the Hubble rate, re-parametrized in terms of redshift, i.e.

$$\begin{aligned} H \equiv \frac{\dot{a}}{a} = - \frac{1}{1+z} \frac{dz}{dt}. \end{aligned}$$
(3.2)

From this point, the redshift is relatively easily observed spectroscopically, and the remaining work is to estimate the quantity dz/dt. As first proposed by Jimenez and Loeb in [76], this is possible by measuring the age difference between two sets of passively evolving galaxies lying within a small redshift difference. The observational method, and specific information from an astrophysical point of view, are described in detail in [77, 78].
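As a rough numerical illustration of Eq. (3.2), with purely hypothetical numbers for the measured redshift and age differences:

```python
# purely hypothetical numbers, only to illustrate Eq. (3.2)
z_mid, dz, dt_gyr = 0.45, -0.1, 0.8      # redshift and age difference of two galaxy sets
GYR_IN_S, MPC_IN_KM = 3.156e16, 3.086e19

H_si = -(1.0 / (1.0 + z_mid)) * dz / (dt_gyr * GYR_IN_S)  # Eq. (3.2), in s^-1
H_kmsmpc = H_si * MPC_IN_KM
print(round(H_kmsmpc, 1))  # ~84.3 km/s/Mpc for these made-up inputs
```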

From a cosmological viewpoint, it is important to note that data from cosmic chronometers are essentially model independent, as long as we work within an FRWL space-time. Furthermore, the redshift range of the available cosmic chronometers extends up to \(z\approx 2\), and thus they allow for more stringent constraints on the cosmological models under study. Hence, cosmic chronometers are widely used in the field [42, 79,80,81]. In this work the sub-sample of [82], consisting of only CC data, is employed. The likelihood for the cosmic chronometers, assuming Gaussian errors, reads

$$\begin{aligned} {\mathcal {L}}_{CC}(Y) \sim \exp \left[ -\frac{1}{2}\sum _{i=1}^{31} \frac{\left( H(z_i)_{theor} - H_{obs,i}\right) ^2}{\sigma _{i}^2}\right] , \end{aligned}$$
(3.3)

where \(\sigma _{i}\) are the corresponding errors.
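A minimal sketch of the corresponding chi-square, assuming the CC measurements are stored in plain arrays and H_of_z is the model prediction in the same units:

```python
import numpy as np

def chi2_cc(z_cc, H_obs, sigma, H_of_z):
    """-2 ln L_CC of Eq. (3.3): uncorrelated Gaussian errors."""
    return np.sum((H_of_z(z_cc) - H_obs) ** 2 / sigma ** 2)
```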

3.1.3 Joint analysis

In order to obtain the joint observational constraints on the cosmological scenario by using P cosmological datasets, we first introduce the total likelihood function as

$$\begin{aligned} {\mathcal {L}}_{\text {tot}}(Y) = \prod _{p=1}^{P} {\mathcal {L}}_{p}, \end{aligned}$$
(3.4)

assuming Gaussian errors and no correlations between the various datasets employed. Hence, the total \( \chi _{\text {tot}}^2\) function will be

$$\begin{aligned} \chi _{\text {tot}}^2 = \sum _{p=1}^{P}\chi ^2_{p}\,. \end{aligned}$$
(3.5)

The parameter vector has dimension k, namely the \(\nu \) parameters of the scenario plus the number of hyper-parameters \(\nu _{\text {hyp}}\) of the applied datasets, i.e. \(k = \nu + \nu _{\text {hyp}}\). For the scenario of Barrow holographic dark energy, and since we are using Hubble rate and SNIa data, the free parameters are contained in the vector \(a_m = (\Omega _{m0},C,\Delta , h,{\mathcal {M}})\), with \(h=H_{0}/100\). We apply the Markov Chain Monte Carlo (MCMC) algorithm within the environment of the Python package emcee [83], and we perform the minimization of \(\chi ^2\) with respect to \(a_m\). We use 800 chains (walkers) and 3500 steps (states). Lastly, the convergence of the algorithm is verified using auto-correlation time considerations, and for completeness we additionally employ the Gelman-Rubin criterion [84].
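A condensed sketch of this pipeline follows, assuming the chi-square functions of the previous subsections and a hypothetical helper build_H_of_z that interpolates the solver output of Sect. 2; the flat prior ranges are an assumption of the sketch, chosen only to be broad, not the choice made in the actual analysis.

```python
import numpy as np
import emcee

def log_prob(theta):
    """Log-posterior for a_m = (Om0, C, Delta, h, M), combining Eqs. (3.1) and (3.3)."""
    Om0, C, Delta, h, M = theta
    # broad flat priors -- an assumption of this sketch
    if not (0.05 < Om0 < 0.5 and 0.1 < C < 10.0 and
            0.0 <= Delta < 1.0 and 0.5 < h < 0.9 and 20.0 < M < 28.0):
        return -np.inf
    H_of_z = build_H_of_z(Om0, C, Delta, h)   # hypothetical helper (Sect. 2 solver)
    return -0.5 * (chi2_snia(z_sn, mu_obs, cov_inv, H_of_z, M)
                   + chi2_cc(z_cc, H_obs, sigma_cc, H_of_z))

ndim, nwalkers, nsteps = 5, 800, 3500         # walker/step numbers quoted in the text
p0 = np.array([0.3, 3.0, 0.1, 0.7, 24.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=True)
tau = sampler.get_autocorr_time(tol=0)        # auto-correlation convergence check
chain = sampler.get_chain(discard=500, thin=10, flat=True)
```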

3.2 Information criteria and model selection

As a final step, we apply the well-known Akaike Information Criterion (AIC) [85], the Bayesian Information Criterion (BIC) [86], and the Deviance Information Criterion (DIC) [87], in order to examine the quality of the fittings and hence the relative observational compatibility of the scenarios.

The AIC is based on information theory, and it is an asymptotically unbiased estimator of the Kullback-Leibler information. Under the standard assumption of Gaussian errors, the corresponding estimator reads [88, 89]

$$\begin{aligned} \text {AIC}=-2\ln ({\mathcal {L}}_{\text {max}})+2k+ \frac{2k(k+1)}{N_\mathrm{tot}-k-1}\,, \end{aligned}$$
(3.6)

with \({\mathcal {L}}_{\text {max}}\) the maximum likelihood of the datasets and \(N_\mathrm{tot}\) the total number of data points. For a large number of data points \(N_\mathrm{tot}\) it reduces to \(\text {AIC}\simeq -2\ln ({\mathcal {L}}_{\text {max}})+2k\). On the other hand, the BIC criterion is an estimator of the Bayesian evidence [88,89,90], given by

$$\begin{aligned} \text {BIC} = -2\ln ({\mathcal {L}}_{\text {max}})+k \ln (N_{\text {tot}})\,. \end{aligned}$$
(3.7)

Finally, the DIC criterion is based on concepts from both Bayesian statistics and information theory [87], and it is written as [90]

$$\begin{aligned} \mathrm{DIC} = D(\overline{a_m}) + 2C_{B}. \end{aligned}$$
(3.8)

The variable \(C_{B}\) is the Bayesian complexity, given as \(C_{B} = \overline{D(a_m)} - D(\overline{a_m})\), with overlines denoting the standard mean value. Moreover, \(D(a_m)\) is the Bayesian deviance, a quantity closely related to the effective degrees of freedom [87], which for the general class of exponential distributions reads \(D(a_m) = -2\ln (\mathcal {L}(a_m))\).
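All three criteria can be computed directly from the MCMC output. A sketch follows, under the assumption that the log-likelihood has been stored for every chain sample (the array names are hypothetical):

```python
import numpy as np

def info_criteria(lnL_max, k, N_tot, lnL_chain, lnL_at_mean):
    """AIC (3.6), BIC (3.7) and DIC (3.8) from a Monte Carlo run.
    lnL_chain   : ln-likelihood evaluated at every posterior sample
    lnL_at_mean : ln-likelihood at the posterior-mean parameters"""
    aic = -2.0 * lnL_max + 2.0 * k + 2.0 * k * (k + 1.0) / (N_tot - k - 1.0)
    bic = -2.0 * lnL_max + k * np.log(N_tot)
    D_mean = np.mean(-2.0 * np.asarray(lnL_chain))  # \overline{D(a_m)}
    D_at_mean = -2.0 * lnL_at_mean                  # D(\overline{a_m})
    dic = D_at_mean + 2.0 * (D_mean - D_at_mean)    # Eq. (3.8), with C_B
    return aic, bic, dic
```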

In order to compare a set of n models we utilize the above criteria by extracting the relative difference of the involved IC values \(\Delta \text {IC}_{\text {model}}=\text {IC}_{\text {model}}-\text {IC}_{\text {min}}\), where \(\text {IC}_{\text {min}}\) is the minimum \(\text {IC}\) value in the set of compared models [91]. We then assign a “probability of correctness” to each model using the rule [88, 89]

$$\begin{aligned} P \simeq \frac{e^{-\Delta \text {IC}_{i}}}{\sum _{i=1}^{n}e^{-\Delta \text {IC}_{i}} }, \end{aligned}$$
(3.9)

with i running over the set of n models. The quantity P can be considered as a measure of the relative strength of observational support between the models. In particular, employing the Jeffreys scale [92, 93], the condition \(\Delta \text {IC}\le 2\) implies statistical compatibility of the model at hand with the reference model, the condition \(2<\Delta \text {IC}<6\) corresponds to a middle tension between the two models, while \(\Delta \text {IC}\ge 10\) implies a strong tension.
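As a small worked example of this rule, with hypothetical AIC values for three compared models:

```python
import numpy as np

aic = np.array([100.0, 101.5, 104.8])    # hypothetical AIC values, illustration only
d_ic = aic - aic.min()                   # Delta IC = [0.0, 1.5, 4.8]
P = np.exp(-d_ic) / np.exp(-d_ic).sum()  # Eq. (3.9): ~ [0.81, 0.18, 0.01]
# On the Jeffreys scale: the second model is compatible with the best one
# (Delta IC <= 2), while the third is in middle tension (2 < Delta IC < 6).
```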

4 Observational constraints

In this section we confront the scenario of Barrow holographic dark energy with cosmological data from Supernovae type Ia observations, as well as from direct measurements of the Hubble rate, i.e. H(z) data, following the procedure described above. We are interested in extracting the constraints on the basic model parameter \(\Delta \), which quantifies the deviation from standard entropy, as well as on the secondary parameter C. We start by performing the analysis keeping C fixed to the value \({C}=3\) in \(M_p^2\) units, that is, to the value for which Barrow holographic dark energy restores exactly standard holographic dark energy in the limit \(\Delta =0\). In this case we can investigate purely the effect and the implications of the Barrow exponent \(\Delta \). As a next step, we perform the full fitting procedure, handling both \(\Delta \) and C as free parameters.

Table 1 Observational constraints on the parameters of Barrow holographic dark energy (BHDE), and the corresponding \({\mathcal {L}}_{\text {max}}\), using SN Ia and CC datasets
Fig. 1

The \(1\sigma \), \(2\sigma \) and \(3\sigma \) likelihood contours for Barrow holographic dark energy, in the case where we fix the model parameter \({C}=3\) in \(M_p^2\) units, using SNIa and H(z) data. Additionally, we present the involved 1-dimensional (1D) marginalized posterior distributions and the parameters' mean values corresponding to the \(1\sigma \) area of the MCMC chain. \(\mathcal{{M}}\) is the usual free parameter of SNIa data that quantifies possible astrophysical systematic errors [75]. For these fittings we obtain \(\chi ^2_{min}/dof = 0.8031 \)

Fig. 2

The \(1\sigma \), \(2\sigma \) and \(3\sigma \) likelihood contours for Barrow holographic dark energy, in the case where both \(\Delta \) and C are free parameters, using SNIa and H(z) data. Additionally, we present the involved 1-dimensional (1D) marginalized posterior distributions and the parameters' mean values corresponding to the \(1\sigma \) area of the MCMC chain. \(\mathcal{{M}}\) is the usual free parameter of SNIa data that quantifies possible astrophysical systematic errors [75]. For these fittings we obtain \(\chi ^2_{min}/dof = 0.8179\)

In Table 1 we summarize the results for the parameters. Moreover, in Figs. 1 and 2 we present the corresponding likelihood contours. In the case where C is kept fixed, we observe that \(\Delta =0.095_{-0.100}^{+0.093}\). As we can see, the standard value \(\Delta =0\) lies inside the \(1\sigma \) region; however, the mean value is \(\Delta =0.095\), and thus a deviation from the standard case is preferred. Furthermore, we find \(h=0.6895_{-0.0189}^{+0.0187}\), i.e. we obtain an \(H_0\) value close to the Planck one, \(H_0 = 67.37 \pm 0.54 \ \mathrm {km \, s^{-1} \, Mpc^{-1}}\) [94], rather than the direct value \(H_0 = 74.03 \pm 1.42 \ \mathrm {km \, s^{-1} \, Mpc^{-1}}\) [95]. This was somewhat expected, since the Hubble parameter is constrained only by the CC data, as the distance modulus from supernovae Ia cannot directly constrain \(H_0\).

In the case where both \(\Delta \) and C are free parameters, we observe that \(\Delta =0.094_{-0.101}^{+0.093} \), which is quite similar to the previous C-fixed case. This implies that the mean value of the deformation exponent \(\Delta \) again deviates from its standard value, i.e. a deviation from standard holographic dark energy is slightly favored. Concerning the parameter C, we find \(C = 3.423_{-1.611}^{+1.753}\). Finally, for the Hubble rate we obtain \( h = 0.6892_{-0.0189}^{+0.0187}\), and thus, similarly to the fixed-C case, it is close to the Planck value.

As a final step, we test the statistical significance of the above constraints, implementing the AIC, BIC and DIC criteria described above. In particular, we compare the two versions of Barrow holographic dark energy, namely the one with C fixed and the one with both \(\Delta \) and C left as free parameters, with the concordance \(\Lambda \hbox {CDM}\) paradigm, and in Table 2 we depict the results. As we observe, C-fixed Barrow holographic dark energy is more efficient than the C-free scenario, as the extra free parameter does not contribute to the fit. This becomes evident from Fig. 2, where the \(1\sigma \) area of the parameter C is not closed. Due to the latter fact, the DIC criterion cannot quantify well the adequacy of the C-free model, and thus it is imperative to use AIC to proceed with model selection. However, to compare the other two models one can still use DIC: since \(\Delta \text {DIC}\) is smaller than 2, the C-fixed scenario and \(\Lambda \hbox {CDM}\) are statistically equivalent. Using AIC to compare all the models considered here, we find that the C-free model is in middle tension with \(\Lambda \hbox {CDM}\), while the C-fixed one is statistically equivalent with \(\Lambda \hbox {CDM}\). Finally, the \(\Lambda \hbox {CDM}\) paradigm seems to be slightly statistically preferred.

Table 2 The information criteria AIC, BIC and DIC for the examined cosmological models, along with the corresponding differences \(\Delta \text {IC} \equiv \text {IC} - \text {IC}_{\text {min}}\)

5 Conclusions

In this work we used observational data from the Supernovae (SNIa) Pantheon sample, as well as from direct measurements of the Hubble parameter from the cosmic chronometers (CC) sample, in order to extract constraints on the scenario of Barrow holographic dark energy. The latter is a new holographic dark energy scenario based on the recently proposed Barrow entropy, which arises from the modification of the black-hole surface due to quantum-gravitational effects. In particular, the deformation from the standard Bekenstein–Hawking entropy is quantified by the new exponent \(\Delta \), with \(\Delta =0\) corresponding to the standard case and \(\Delta =1\) to maximal deformation. Hence, for \(\Delta =0\) Barrow holographic dark energy coincides with standard holographic dark energy, while for \(0<\Delta <1\) it corresponds to a new cosmological scenario that proves to lead to interesting and rich behavior [72]. Lastly, in the limiting case \(\Delta =1\) one obtains \(\rho _{DE}=const.=\Lambda \), and hence the \(\Lambda \)CDM paradigm is restored, through a completely different physical framework.

We first considered the case where the new exponent \(\Delta \) is the sole model parameter, in order to investigate its pure effects, i.e. we fixed the model parameter C to the value for which Barrow holographic dark energy restores exactly standard holographic dark energy in the limit \(\Delta =0\). As we showed, the standard value \(\Delta =0\) lies inside the 1\(\sigma \) region; however, the mean value is \(\Delta =0.095\), namely a deviation is favored. Additionally, for the Hubble rate we obtained the value \(h=0.6895_{-0.0189}^{+0.0187}\), close to the Planck value rather than the directly measured one, which was expected since the Hubble parameter is constrained only by the CC data, as the distance modulus from supernovae Ia cannot directly constrain \(H_0\).

In the case where we let both \(\Delta \) and C be free model parameters, we found that \(\Delta = 0.094_{-0.101}^{+0.093} \), and hence a deviation from standard holographic dark energy is again slightly favored. Concerning the Hubble rate, we found that it is close to the Planck value too.

Finally, we performed a comparison of Barrow holographic dark energy with the concordance \(\Lambda \hbox {CDM}\) paradigm, using the AIC, BIC and DIC information criteria. As we showed, the one-parameter scenario is statistically compatible with \(\Lambda \hbox {CDM}\), and preferred compared to the two-parameter one. In summary, Barrow holographic dark energy is in agreement with cosmological data, and it can serve as a good candidate for the description of nature.