Dark Matter Casts Light on the Early Universe

We show how knowledge of the cold dark matter (CDM) density can be used, in conjunction with measurements of the parameters of a scenario for beyond the Standard Model (BSM) physics, to provide information about the evolution of the Universe before Big Bang Nucleosynthesis (BBN). As examples of non-standard evolution, we consider models with a scalar field that may decay into BSM particles, and quintessence models. We illustrate our calculations using various supersymmetric models as representatives of classes of BSM scenarios in which the CDM density is either larger or smaller than the observed density when the early Universe is assumed to be radiation-dominated. In the case of a decaying scalar field, we show how the CDM density can constrain the initial scalar density and the reheating temperature after it decays in BSM scenarios that would yield overdense dark matter in standard radiation-dominated cosmology, and how the decays of the scalar field into BSM particles can be constrained in scenarios that would otherwise yield underdense CDM. We also show how the early evolution of the quintessence field can be constrained in BSM scenarios.


Introduction
The very early Universe before Big-Bang Nucleosynthesis (BBN) is a little-known cosmological era that should hold the answers to several very important questions, such as the origin of the baryon asymmetry in the Universe (possibly due to leptogenesis), the nature of the electroweak and perhaps other phase transitions, the possibility of grand unification, the mechanism for inflation, etc. Unfortunately, as of today we have no direct observations of the period before recombination at ∼ 10 eV, though some constraints can be set using the abundances of the elements generated during BBN, and the cosmic microwave background (CMB) constrains models of inflation. High-energy colliders such as the Large Hadron Collider (LHC) can probe the state of matter at energies ∼ GeV and particle interactions at energies ∼ TeV, but other properties of the early Universe, such as its expansion rate, are still relatively unconstrained.
In this paper we propose to use understanding of the properties of relic dark matter (DM) particles obtained from particle physics to obtain constraints on the properties of the very early Universe at temperatures ∼ 10–100 GeV, orders of magnitude above the scale of BBN.
For this purpose, we consider an observable linking particle physics and cosmology, namely the DM relic density. We assume that DM is cold, and composed of some type of stable weakly-interacting massive particle (WIMP) that was in thermal equilibrium in the early Universe and subsequently froze out. The cold dark matter density has been measured very precisely by the Planck Collaboration using the CMB and observations of the more recent Universe [1]. The Standard Model (SM) of particle physics does not provide any cold dark matter candidate, but many scenarios of physics beyond the SM (BSM) do provide such candidates. The dark matter relic density can be computed in any given BSM scenario, under the assumption that the early Universe was dominated by (SM) radiation, and very strict constraints can be set on the parameters of the BSM scenario using the Planck measurements [2].
In any given BSM scenario, a deviation of the measured cold dark matter density from a calculation based on measurements of the model parameters and standard radiation-dominated expansion would be a signature of novel phenomena in the very early Universe. One might argue that, if the calculated relic density is different from the measured dark matter density, the corresponding BSM scenario is disfavoured. Here, however, we propose to reverse this argument: if the calculated relic density is different from the measured dark matter density, it could be because of novel phenomena in the early Universe. This orthogonal point of view will become particularly important if new particles are discovered at colliders or in dark matter detection experiments: using dark matter observables, it is not possible to constrain BSM scenarios in isolation, but the constraints have to be applied simultaneously to a combination of BSM and cosmological scenarios.
For this analysis, we study two different realistic cosmological scenarios: the case of a decaying scalar field, e.g., a modulus field, which modifies the energy content of the Universe and also injects entropy or BSM particles, and the case of a quintessence field, which could modify the energy content on its way to fulfilling its original purpose of generating dark energy with negative pressure in the recent Universe.
The rest of this paper is organised as follows. In Section 2 we review the standard calculation of relic density. Then, in Section 3 we introduce cosmological scalar field scenarios that can impact the relic density calculation, and discuss their possible effects. Next, in Section 4 we introduce as illustrations of BSM scenarios a selection of supersymmetric scenarios where the measured relic density can differ from that calculated assuming radiation-dominated expansion. Our results are given in Section 5 and our conclusions in Section 6.

Relic Density Calculation
The relic density calculation is generally performed in the standard cosmological model, in which the expansion rate of the Universe is given by the Friedmann equation. In the early Universe, when the radiation density dominates, this reduces to

H² = (ȧ/a)² = (8πG/3) ρ_rad ,

where a is the cosmological scale factor and H the Hubble parameter. The radiation density reads

ρ_rad(T) = g_eff(T) (π²/30) T⁴ ,

where g_eff is the effective number of degrees of freedom of radiation, which is given by the particle content of the Standard Model and the QCD equation of state (see, for example, [36,37]). Assuming that, in a given BSM scenario, only the lightest BSM particle is stable, and constitutes a suitable dark matter candidate that was originally in thermal equilibrium, the number of relic particles is obtained by solving the Boltzmann evolution equation [38,39]:

dn/dt = −3Hn − ⟨σ_eff v⟩ (n² − n_eq²) ,

where n is the number density of BSM particles, n_eq is their equilibrium density, and ⟨σ_eff v⟩ is the thermal average of the annihilation rate of pairs of BSM particles to SM particles.
To define ⟨σ_eff v⟩, it is useful to define first the annihilation rate of BSM particles i and j into SM particles k and l:

W_{ij→kl} = p_kl / (16π² g_i g_j S_kl √s) Σ ∫ |M(ij → kl)|² dΩ ,

where M is the transition amplitude, s is the centre-of-mass energy squared, g_i is the number of degrees of freedom of the particle i, p_kl is the final centre-of-mass momentum, given by

p_kl = [s − (m_k + m_l)²]^{1/2} [s − (m_k − m_l)²]^{1/2} / (2√s) ,

and S_kl is a symmetry factor equal to 2 for identical final particles and to 1 otherwise. The thermal average of the effective cross section is given by:

⟨σ_eff v⟩ = ∫₀^∞ dp_eff p_eff² W_eff(√s) K₁(√s/T) / { m₁⁴ T [ Σ_i (g_i/g₁)(m_i²/m₁²) K₂(m_i/T) ]² } ,

where K₁ and K₂ are the modified Bessel functions of the second kind of orders 1 and 2, respectively, and W_eff is an effective annihilation rate:

p_eff W_eff = Σ_{ij} p_ij (g_i g_j / g₁²) W_ij .

In order to solve the Boltzmann equation, it is necessary to have a link between time and temperature, which is given under the assumption of adiabaticity by

d(s_rad a³)/dt = 0 , i.e. ds_rad/dt = −3H s_rad ,

where the radiation entropy density is given by

s_rad(T) = h_eff(T) (2π²/45) T³ ,

with h_eff the effective number of entropic degrees of freedom of radiation.
To solve this set of equations, one defines the ratio of the number density of BSM particles to the radiation entropy density, Y(T) ≡ n(T)/s_rad(T), and the ratio of the relic particle mass to the temperature, x ≡ m_relic/T, and combines them into [38,39]:

dY/dx = −(m_relic/x²) √(π/(45G)) g*^{1/2} ⟨σ_eff v⟩ (Y² − Y_eq²) ,

where g*^{1/2} = (h_eff/√g_eff) [1 + (T/(3h_eff)) dh_eff/dT]. The freeze-out temperature T_f is the temperature at which the relic particle leaves the initial thermal equilibrium, which is expected to happen at ∼ m_relic/10 ∼ 10–100 GeV in many BSM WIMP scenarios.
Solving the equations down to the present temperature T₀, we find that Y approaches a constant asymptotic value Y₀, and the relic density so obtained is [38,39]:

Ω_relic h² = (m_relic s_rad(T₀) Y₀ / ρ⁰_c) h² ≈ 2.742 × 10⁸ (m_relic/GeV) Y₀ ,

where ρ⁰_c is the critical density of the Universe, given by

ρ⁰_c = 3H₀² / (8πG) ,

and H₀ is the Hubble constant. The relic density can then be compared to the measurements of the dark matter density by the Planck Collaboration [1] to set constraints on the BSM scenarios.
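The machinery above can be illustrated with a toy solver for the yield equation dY/dx = −λ(Y² − Y_eq²), which is stiff near equilibrium and is therefore integrated here with an implicit (backward-Euler) step. All inputs (relic mass, ⟨σ_eff v⟩, constant g_eff = h_eff = 90, g = 2 internal degrees of freedom) are assumed round numbers chosen for illustration, not values from SuperIso Relic or from the paper:

```python
import math

# Toy WIMP freeze-out solver (illustrative only, not SuperIso Relic).
# All inputs below are assumed round numbers, not values from the paper.
M_PL = 1.22e19          # Planck mass [GeV]
m = 100.0               # relic particle mass [GeV] (assumed)
g_eff = h_eff = 90.0    # radiation degrees of freedom, taken constant
sigma_v = 3.0e-9        # <sigma_eff v> [GeV^-2] (assumed constant)

def hubble(T):
    # H = sqrt(8 pi^3 g_eff / 90) T^2 / M_Pl in radiation domination
    return math.sqrt(8.0 * math.pi**3 * g_eff / 90.0) * T**2 / M_PL

def s_rad(T):
    # s_rad = 2 pi^2 h_eff T^3 / 45
    return 2.0 * math.pi**2 * h_eff * T**3 / 45.0

def y_eq(x):
    # non-relativistic equilibrium yield, g = 2 internal dof (assumed)
    T = m / x
    return 2.0 * (m * T / (2.0 * math.pi))**1.5 * math.exp(-x) / s_rad(T)

def relic_yield(x0=1.0, x1=1000.0, steps=20000):
    # backward-Euler steps in x for dY/dx = -lam (Y^2 - Y_eq^2), with
    # lam = s_rad <sigma_eff v> / (H x); the implicit update is stable
    # even when lam*Y >> 1 near equilibrium
    dlx = math.log(x1 / x0) / steps
    x, Y = x0, y_eq(x0)
    for _ in range(steps):
        h = x * (math.exp(dlx) - 1.0)
        x += h
        lam = s_rad(m / x) * sigma_v / (hubble(m / x) * x)
        a = lam * h
        Y = (math.sqrt(1.0 + 4.0 * a * (Y + a * y_eq(x)**2)) - 1.0) / (2.0 * a)
    return Y
```

With these assumed inputs the asymptotic yield comes out at a few 10⁻¹², i.e. Ωh² = 2.742 × 10⁸ (m/GeV) Y₀ of order 0.1, in the right ball-park for a thermal WIMP; the real calculation uses the full W_eff with coannihilations and the temperature-dependent QCD equation of state.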
In the following, we use SuperIso Relic v4.0 [40-42] to compute the relic density. Since it was shown that the theoretical uncertainties due to the cross-section calculation at tree level and to the uncertainties in the QCD equation of state are of the order of 10% [34-37, 43, 44], we add a 10% theoretical error to the Planck measurement and obtain the following 95% C.L. interval: 0.095 < Ωh² < 0.1428 . (2.15)

Cosmological Scenarios
The standard relic density calculation can be modified by the presence of scalar fields in the early Universe, which can affect the expansion rate by adding a new energy density, generate non-thermal relic particles, or inject entropy and affect the relation between time and temperature. In the following, we consider the cases of a decaying pressureless scalar field and of quintessence as realistic examples of cosmological models affecting the early Universe. Since freeze-out occurs at ∼ 10–100 GeV, a large deviation from the standard model of cosmology at this temperature could strongly modify the results, without having other consequences for the observable Universe. The strongest constraints that can be set on such cosmological scenarios are those from BBN. In the following, we compute BBN constraints for the scenarios of interest using AlterBBN v2.0 [45,46] and the conservative limits on the abundances of the elements given in [47].

Decaying primordial scalar field
We consider a pressureless scalar field φ of mass M_φ that decays into radiation with a width Γ_φ, and into BSM particles with a branching ratio b [32,33]. The evolution in time of the scalar field density ρ_φ and the neutralino density n = ρ_χ/m_χ can be determined from the following equations:

dρ_φ/dt = −3Hρ_φ − Γ_φ ρ_φ ,

dn/dt = −3Hn − ⟨σv⟩ (n² − n_eq²) + (b/M_φ) Γ_φ ρ_φ ,

where ⟨σv⟩ is the thermally-averaged WIMP annihilation cross section, n_eq is the WIMP equilibrium density, and H is the Hubble parameter, which depends on the total energy density in the Universe:

H² = (8πG/3) (ρ_rad + ρ_φ + ρ_χ) .

In order to obtain a relation between the time and the temperature, one may use the following equation for the evolution of the radiation entropy density, sourced by the decays into radiation:

ds_rad/dt = −3H s_rad + Γ_φ ρ_φ / T .

The energy and entropy densities of radiation can be determined from the temperature according to:

ρ_rad(T) = g_eff(T) (π²/30) T⁴ , s_rad(T) = h_eff(T) (2π²/45) T³ ,

where g_eff and h_eff are the effective numbers of degrees of freedom of the radiation energy and entropy, respectively. We use the QCD equation of state "B" of Ref. [36] in our analysis. The decay width may conveniently be expressed as a function of the reheating temperature T_RH [32,33], which is the temperature at which the scalar field density starts to be significantly reduced:

Γ_φ = √(4π³ g_eff(T_RH)/45) T_RH² / M_P .

We also define ρ̄_φ ≡ ρ_φ/ρ_rad and its initial condition at the initial temperature T_init. The above equations can be re-written as derivatives of Y_φ = ρ_φ/s_rad and Y = n/s_rad. Eqs. (3.8) and (3.9) are controlled by the parameter Σ* defined in Eq. (3.5). In order to understand its role, we consider the entropy time-derivative equation (3.4) in the case where Σ* is constant. If T ∝ t^α and the scale factor a ∝ t^β, then H = βt⁻¹, and one finds that the neutralino density is diluted very fast as Σ* → 1. In fact, one can derive a maximum value of Σ* from the condition d log(Σ*)/d log(x) = 0 in the limit ρ_φ ≫ ρ_rad. This prevents any singularity in the term Σ*/(1 − Σ*), but limits the strength of the dilution. We have seen that the scalar field density can decrease in two ways: either by decay, or by dilution.
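The qualitative behaviour of this system can be sketched with a minimal toy integrator for the b = 0 case, in which all the decay energy goes into radiation. The decay width, initial densities, and time range below are assumed illustrative numbers, not values from the paper:

```python
import math

# Minimal sketch of a pressureless scalar field decaying into radiation
# (b = 0 case). Units: GeV; all numbers are illustrative assumptions.
M_P = 2.4e18          # reduced Planck mass [GeV]
GAMMA = 1.0e-22       # scalar decay width [GeV] (assumed)

def evolve(rho_phi=1.0e-8, rho_rad=1.0e-9, t0=1.0e18, t_end=1.0e24,
           steps=200000):
    # d(rho_phi)/dt = -3 H rho_phi - Gamma rho_phi   (matter-like field)
    # d(rho_rad)/dt = -4 H rho_rad + Gamma rho_phi   (energy conservation)
    # H^2 = (rho_phi + rho_rad) / (3 M_P^2)
    t = t0
    dlt = math.log(t_end / t0) / steps   # logarithmic time steps
    for _ in range(steps):
        H = math.sqrt((rho_phi + rho_rad) / (3.0 * M_P**2))
        dt = t * dlt
        d_phi = (-3.0 * H - GAMMA) * rho_phi
        d_rad = -4.0 * H * rho_rad + GAMMA * rho_phi
        rho_phi += d_phi * dt
        rho_rad += d_rad * dt
        t *= math.exp(dlt)
    return rho_phi, rho_rad
```

The field redshifts like matter until Γ_φ becomes comparable to H, then decays away and hands its energy over to radiation; the temperature of the radiation bath at that moment plays the role of T_RH.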
Thus, the presence of the scalar field may modify the neutralino relic density from that calculated in the standard model of cosmology in three different ways. First, neutralinos can be diluted in the same way as the scalar field. As this phenomenon only changes the evolution of the temperature with time, it does not affect the neutralino density at a given temperature during thermal equilibrium, since the equilibrium density is determined by the temperature alone. Secondly, if the scalar field decays into SUSY particles, the neutralino density may increase. If the decay happens before freeze-out, however, the decay products will annihilate, unless the neutralinos are produced in large amounts, in which case the freeze-out temperature may be increased. Thirdly, if the scalar field density is large enough, it will significantly change the Hubble parameter, and freeze-out will occur sooner, thus increasing the density at freeze-out compared to the standard calculation. However, as we shall see, this last case also corresponds to the one where dilution is important. Therefore, the only way to increase the relic density is if the scalar field also decays into BSM particles.

Quintessence
As an alternative, we also consider a quintessence field, which satisfies the continuity equation:

dρ_φ/dt = −3H (ρ_φ + P_φ) ,

where the pressure and the energy density of the scalar field are P_φ = φ̇²/2 − V(φ) and ρ_φ = φ̇²/2 + V(φ). We have computed the evolution of the scalar field density with the temperature for three different standard quintessence potentials V(φ) [16]: a double exponential [48], an inverse power law [7], and a pseudo-Nambu-Goldstone boson potential [49]. We find that, for all three potentials, the scalar field density can be well approximated by a power law in temperature, with slope 6 at high temperatures (zone 4 of Figure 1) and slope 0 at low temperatures (zone 1). In the case of the double exponential potential, two additional power-law changes occur: first to a slope 0 (zone 3) and then to a slope ranging from 3 to 6 (zone 2). Hence, we consider a simplified model whose free parameters are the temperatures T_34, T_23, T_12 at which the power-law changes occur, together with the slope in zone 2, n_2.
In this model, there is no way to reduce the relic density compared to the standard cosmological model. The only possible influence of the scalar field on the WIMP density is at freeze-out. If the scalar field density is large enough while the WIMP is in thermal equilibrium, the Hubble parameter can be enhanced compared to the standard cosmological model. This would have the effect of advancing freeze-out and thereby increasing the relic WIMP density.

New Physics Scenarios
In order to illustrate the possible implications of such cosmological scenarios, we consider variants of the minimal supersymmetric extension of the Standard Model (MSSM) with CP and R-parity conservation, which is representative of a large class of WIMP models. The lightest neutralino is a well-motivated candidate for dark matter [2], and we assume in the following that 100% of the cold dark matter is composed of neutralinos. The neutralino can be bino-like, wino-like, higgsino-like or a mixed state. These candidates are weakly-interacting, and in conventional calculations bino-like neutralinos in general have too large a relic density, apart from cases where they are associated with near-degenerate supersymmetric particles with which they can coannihilate, or where annihilations are enhanced by resonances such as heavy Higgs bosons. Winos and higgsinos can reach a relic density close to the observed dark matter abundance via coannihilations with charginos and/or neutralinos that are nearly degenerate with the lightest neutralino. On the other hand, light winos and higgsinos generally have too small a relic density.
In the following we first choose as specific examples one MSSM scenario which would yield overdense DM according to the standard cosmological calculation, and one that would yield underdense DM. We also consider a sample of points in the phenomenological MSSM (pMSSM) with 19 free parameters specified at a low energy scale (the pMSSM19).

Point A
We first consider a point with a relic density that would be too large (Point A) according to the standard cosmological calculation. For this we modify the parameters of the best-fit point of the pMSSM with 11 free parameters specified at a low-energy scale (the pMSSM11), which was found in [50] taking into account the constraints from ∼ 36 fb⁻¹ of LHC data at 13 TeV, including those from direct searches for supersymmetric (SUSY) particles at the LHC, measurements of the Higgs boson mass and signal strengths, LHC searches for the heavier MSSM Higgs bosons, precision electroweak observables, the measurement of (g − 2)_µ [51], and flavour-physics constraints from B- and K-physics observables. In addition, the constraints from the direct dark matter detection experiments PICO60 [52], XENON1T [53] and PandaX-II [54] were taken into account, together with the previous accelerator and astrophysical measurements. The cosmological constraint on the cold dark matter density measured by Planck [1] was also considered. The relic density at this point is therefore close to the measured dark matter density, but it is possible to increase the relic density while respecting the other constraints. This point has a bino-like neutralino of mass 381 GeV. As commented above, binos tend to have a relic density that is too large. However, thanks to the small mass splittings with the sleptons of the first and second generations, the relic density of this point is very close to the measured dark matter density. In order to obtain a larger relic density, we increase the mass parameter Ml₁,₂ of the sleptons of the first and second generations, taking Ml₁,₂ = 400 GeV to get Ωh² = 1.27 according to the standard cosmological calculation, and a freeze-out temperature T_fo ≈ 16 GeV. The parameters of Point A are given in Table 1.

Point B
In this case we modify the best-fit point in the constrained MSSM (CMSSM) found in [50]. This point has a higgsino-like neutralino and a relic density close to the dark matter density measured by Planck. We decrease M_{1/2} to 3872 GeV in order to get a lower value of the relic density, Ωh² = 5.907 × 10⁻³, and use SOFTSUSY [55] to calculate the spectrum. The parameters of Point B are given in Table 2.

Sample of pMSSM19 Points
We consider in addition a sample of points in the pMSSM19, generated using SOFTSUSY [55] with a flat random sampling over the ranges given in Table 3 for the 19 parameters. After checking the theoretical validity of each point, we require it to have a light Higgs boson with mass between 122 and 128 GeV. We also require the lightest neutralino to be the lightest supersymmetric particle, constituting the dark matter, using the set-up presented in [56-58]. As the neutralino can be bino-like, wino-like, higgsino-like or a mixed state, this approach allows considerable flexibility, making our analysis sufficiently general that it can also indicate the possibilities in other dark matter models.

Decaying primordial scalar field
We consider first the cosmological scenario with a scalar field decaying into radiation and SUSY particles. We perform a scan over the reheating temperature T_RH and the initial scalar field density, parametrised as the ratio between the scalar field density and the photon density at T = T_init, κ_φ = ρ_φ/ρ_γ (T = T_init), and calculate the relic density of Points A and B specified in Section 4. We consider different values of the parameter η = b × (1 GeV)/m_φ, in order to study the effect of non-thermal production of SUSY particles on the relic density.
In each case we derive constraints on the scalar field parameters for our sample of pMSSM19 points, so as to investigate the influence of the neutralino properties on the limits derived from the relic DM density. We start integrating the Boltzmann equations at a temperature T_init = 40 GeV for Point A and T_init = 20 GeV for Point B. For our sample of pMSSM19 points, we use T_init = 1.5 × T_fo, where T_fo is the freeze-out temperature in the standard cosmological model. These choices were made in order to reduce the computation time, while starting the calculation sufficiently long before freeze-out and the decay of the scalar field.
We first investigate the case where the neutralino has a relic density that is too large in the standard cosmological model, illustrated by Point A. The results of the scan over the reheating temperature T_RH and the initial scalar field density κ_φ are shown in Figure 2, assuming that the scalar field does not decay into SUSY particles (η = 0). We can distinguish two zones in this figure: a zone at large initial scalar field density and small reheating temperature, where the relic density is strongly reduced, and the complementary zone, where the presence of the scalar field does not modify the relic density. On the one hand, the dependence of the dilution on κ_φ is rather clear: the larger κ_φ is, the larger Σ* is initially, and the stronger the dilution. On the other hand, the value of the reheating temperature affects the duration of the dilution more than its strength. As illustrated in Figure 3, when T_RH is small, Σ* can remain at its maximum over a large range of temperatures before decreasing due to the decay of the scalar field. The neutralino and scalar field densities decrease during this period with a slope −5, as expected when Σ* is at its maximum. For a large value of T_RH, however, the fields are diluted over a smaller range of temperatures, and the total decrease is reduced. Points respecting the Planck constraints, which we will refer to as accepted points, lie along a thin line in the (log₁₀ κ_φ, log₁₀ T_RH) plane. They follow a line of slope ∼ 1 at small T_RH, which changes at T_RH ∼ 150 MeV to a slope of 1.5. This transition is the result of the quark-hadron phase transition, which lowers the number of radiation degrees of freedom. In particular, below T ∼ 150 MeV, pions become non-relativistic and no longer contribute to the radiation density. This feature is independent of the WIMP and scalar field properties, and is present in all the following results.
The line of accepted points becomes vertical at T_RH ∼ T_fo, which is to be expected: when the scalar field decays completely while the neutralino is still in thermal equilibrium, there is no possible modification of the relic density. Thus, we can derive a maximum value of the reheating temperature, T_RH ≲ T_fo. One can also note that if T_RH < T_RH^BBN-lim ∼ 6 MeV, the scalar field density is too large during BBN, and the model is therefore excluded. This constraint is very general, as it is also independent of the WIMP properties, and thus applicable to any WIMP model. This limit gives us a lower bound on the reheating temperature, as well as a minimum value for the initial scalar field density κ_φ, obtained by setting T_RH = T_RH^BBN-lim. For Point A, we deduce κ_φ ≳ 0.1, but this minimum value will depend on the nature of the WIMP.
No enhancement of the relic density is possible when η = 0. At small T_RH and large κ_φ, where the scalar field density could have increased the freeze-out temperature via its relation with the Hubble parameter, and thereby increased the relic density, the densities are in fact already significantly reduced by dilution. Therefore, in order to increase the relic density, it is necessary to consider non-thermal production of the WIMP, i.e., η > 0. In the case of Point A, the region of interest is at small T_RH and large κ_φ, where the relic density is strongly reduced by dilution. The scalar field decay into SUSY particles provides an additional contribution to the relic density, and the DM density measured by Planck may be reached with an appropriate value of η. We test four different values of η in Figure 4, and notice that the larger η is, the more the line of accepted points is shifted towards small T_RH.
We observe in Figure 5 that in the region of interest the relic density increases linearly with η and T_RH, which explains the observed feature. Similarly to what happens with the dilution, the parameter η sets the strength of the non-thermal production of neutralinos, while T_RH sets the time between freeze-out and the scalar field decay, during which the relic density can benefit from this new contribution. We find that the evolution of the relic density with respect to η and T_RH can be approximated by:

Ωh² ≈ (a T_RH + b) η ,

where a and b are numerical factors that depend, a priori, on the WIMP properties. For Point A, we find a ≈ 7.68 × 10¹⁰ GeV⁻¹ and b ≈ 2.62 × 10⁷. This parametrization enables us to find the value of η required to get the correct relic density for a given reheating temperature. On the other hand, a maximum value of η can be calculated by considering the reheating temperature at which the BBN constraints start excluding the model (T_RH^lim ≈ 6 × 10⁻³ GeV):

η_Max = Ωh²_max / (a T_RH^lim + b) .

For our benchmark point, we calculate η_Max ≈ 2.93 × 10⁻¹⁰. Thus, in this scenario the branching ratio into SUSY particles must be very small, which can be traced back to our choice of a scalar field with a very large initial density. We note also that varying η does not modify the constraints on κ_φ and T_RH that we derived in the case η = 0. Strong constraints on the scalar field parameters can therefore be derived, namely 6 MeV ≲ T_RH ≲ T_fo, κ_φ ≳ 0.1 and η ≲ 2.93 × 10⁻¹⁰.
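The quoted bound on η can be checked with a back-of-the-envelope computation, using the linear dependence of the relic density on η and T_RH reported above together with the fitted values of a and b for Point A:

```python
# Back-of-the-envelope check of eta_Max for Point A, using the linear
# parametrization Omega h^2 ≈ (a*T_RH + b)*eta with the quoted fit values.
a = 7.68e10             # [GeV^-1], fitted numerical factor for Point A
b = 2.62e7              # fitted numerical factor for Point A
omega_h2_max = 0.1428   # upper end of the 95% C.L. interval (2.15)
t_rh_lim = 6.0e-3       # BBN lower limit on the reheating temperature [GeV]

def eta_required(omega_h2, t_rh):
    # eta giving a target relic density at a given reheating temperature
    return omega_h2 / (a * t_rh + b)

eta_max = eta_required(omega_h2_max, t_rh_lim)
print(f"eta_max = {eta_max:.2e}")   # close to the quoted 2.93e-10
```

The same helper inverted at other reheating temperatures gives the η needed to land on the Planck interval for any T_RH between the BBN limit and T_fo.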

Point with a small relic density
As discussed previously, no enhancement of the relic density is possible when only entropy injection is considered. Therefore, one needs to allow the scalar field to decay into BSM particles. We show in Figure 6 the results of scans over T_RH and κ_φ for Point B with four different values of η. In each scenario, the region of accepted points forms a U shape in the (κ_φ, T_RH) plane. The vertical right limit corresponds to T_RH ∼ T_fo, and does not move significantly as η increases. The vertical left limit, however, is shifted to the left along the T_RH axis, and the horizontal limit is shifted downwards towards lower values of κ_φ. The constraints on T_RH that we deduced for Point A hold also in this case: T_RH^BBN-lim ≲ T_RH ≲ T_fo. However, it is difficult to find parameter choices satisfying the constraints on κ_φ and η.
The largest effect occurs when the scalar field decays entirely into BSM particles and not into radiation. If each decay produces two SUSY particles, for example, then b = 2 and m_φ > 2m_χ, so η < 1 GeV/m_χ. In such a case, all the SUSY particles produced by the scalar field decays, starting from the neutralino freeze-out, constitute an overall contribution to the relic density that has to be added to the value in the standard model, i.e., Y = Y_stand + Y_φ(T = T_fo)/m_χ. One therefore obtains a constraint on the scalar field density at freeze-out.
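This constraint can be made concrete with a small numerical sketch. The neutralino mass below is an assumed illustrative value (the text does not quote the mass for Point B), and Ωh² ≈ 2.742 × 10⁸ (m/GeV) Y is the standard conversion between yield and relic density:

```python
# Bound on the scalar-field yield at freeze-out from
# Y = Y_stand + Y_phi(T_fo)/m_chi <= Y_obs (full decay into SUSY particles).
# The neutralino mass is an assumed illustrative number.
m_chi = 1100.0                             # neutralino mass [GeV] (assumed)
omega_obs, omega_stand = 0.12, 5.907e-3    # observed vs. standard Omega h^2
Y_obs = omega_obs / (2.742e8 * m_chi)      # yield matching the observed density
Y_stand = omega_stand / (2.742e8 * m_chi)  # underdense standard yield (Point B)
Y_phi_max = (Y_obs - Y_stand) * m_chi      # max scalar yield rho_phi/s_rad [GeV]
print(f"Y_phi_max = {Y_phi_max:.2e} GeV")
```

With these assumed inputs, the scalar-field yield at freeze-out must stay below a few 10⁻¹⁰ GeV, illustrating how tightly an underdense point constrains the decaying field.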

pMSSM19 sample
In the following, we study how the constraints on the scalar field depend on the WIMP properties, disregarding the case of a relic density that is too small, as the constraints deduced in that case already showed an explicit dependence on the freeze-out temperature and the relic density at freeze-out.
We focus on the points in our pMSSM19 sample that have a relic density that is too large in the standard cosmological model, which leaves us almost exclusively with bino-like neutralinos. We calculate the values of κ_φ that give the correct relic density at T_RH = T_RH^BBN-lim, as shown in Figure 7, and find a very good correlation between the relic density calculated in the standard model and κ_φ,min.
The points in Figure 7 follow a line of slope ∼ 1. Thus, the minimum value of the initial scalar field density increases with the value of the relic density in the standard model. This can be understood because the larger the relic density at freeze-out is, the stronger the dilution must be for a given reheating temperature. The small scatter of the points at low relic density is due to numerical uncertainties alone, but we note a departure from this line at large Ωh²_stand, when κ_φ,min ≳ 1. With a scalar field density of this order of magnitude, there is also a modification of the Hubble parameter, which advances freeze-out. This mechanism tends to increase the relic density, while the entropy injection decreases it. Overall, the dilution has the stronger effect, but a larger scalar field density is required to decrease the relic density down to the measured DM density.
Next, we calculate the maximum value of η and find a clear dependence on the WIMP mass, as seen in Figure 8. Indeed, the scalar field produces a fraction b of SUSY particles, which contributes as m_χ × b to the WIMP mass density. Therefore, the larger m_χ is, the more the relic density is increased for a given value of η, and the smaller the maximum value of η. To first approximation, the maximum value of η is inversely proportional to the WIMP mass. However, another mechanism is at play: the larger T_fo is, the longer the interval of time during which the relic density benefits from non-thermal production, and the smaller η should be in order to reach the correct relic density. As T_fo^stand ≈ m_χ/20, we can express the relation between η_lim and m_χ. However, as shown in Figure 8, when T_fo departs from this approximation towards lower values, the second mechanism becomes more important, and we see a departure from the linear relation between m_χ and η_lim. This happens for neutralino masses smaller than ∼ 100 GeV in our sample of points. In any case, η must be very small, of the order of 10⁻¹⁰–10⁻⁹.

Quintessence
We now turn to the study of the quintessence model. This scenario can only increase the relic density, by advancing freeze-out. Therefore, we disregard the case of a standard relic density that is too large.

Point with a small relic density
We have scanned over the three temperature parameters, requiring T_0 < T_12 < T_23 < T_34 with T_0 = 2 × 10⁻¹³ GeV, the temperature of the CMB at the present time. We performed the scans for the two extreme values of the slope in zone 2 of Figure 1, namely n_2 = 3 and n_2 = 6. We have calculated the relic density of our benchmark CMSSM point for each set of quintessence parameters, and show the results in Figure 9.
The relevant parameters are T_34 and the ratio T_23/T_12. The smaller T_34 is, and the greater T_23/T_12 is, the larger the relic density. This can easily be understood: the larger the scalar field density is around freeze-out, the larger the increase of the relic density, and a small value of T_34 together with a large difference between T_12 and T_23 helps in obtaining a large scalar field density at large temperatures. In the case n_2 = 3, the accepted parameter sets follow a line of slope ∼ 0.5, and we find a limit at T_23/T_12 ∼ 6 × 10⁸ and T_34 ∼ 10⁻⁴ GeV, where the line reaches the limiting case T_34 = T_23. A minimum value of T_34 is found when T_12 = T_23, where we obtain T_34 ≳ 2 × 10⁻⁹ GeV. In the case n_2 = 6, the same minimal value is found. However, the accepted parameter sets follow a line of slope 1, parallel to the limit T_23 = T_34. There are, therefore, no maximum values for the temperature parameters.
In both cases, we note also that the accepted parameter sets are very close to the limit imposed by BBN, which mainly depends on the density of the scalar field at a temperature T ∼ 1 MeV.
When T_34 is smaller than 1 MeV, which must be the case for values of n_2 close to 3, it is possible to find simpler constraints on the scalar field properties. In this case, freeze-out and BBN both occur during phase 4 of the scalar field evolution in the model. The scalar field density can thus be specified simply by its value at freeze-out, and determined at other temperatures according to the slope n_4 = 6. We can therefore disregard what happens in phases 1, 2 and 3. We show in Figure 10 the evolution of the relic density for Point B with the ratio of the scalar field density to the radiation density at freeze-out, ρ̄_φ ≡ ρ_φ/ρ_rad (T = T_fo), when we consider only phase 4 of the model. The scalar field starts having an effect on the relic density when its density is comparable to the radiation density at freeze-out. The Hubble parameter is then significantly modified and freeze-out is advanced. The relic density first increases with a slope ∼ 0.48, and then becomes constant. The presence of the scalar field has an effect, therefore, only for a certain range of densities. In addition, we note that points are excluded by BBN if ρ_φ/ρ_rad (T = T_fo) ≳ 10⁸, which corresponds to ρ_φ/ρ_rad (1 MeV) ≳ 1.
Figure 11. The value of the scalar field density at freeze-out that is required to increase the relic density up to the observed DM density for our sample of pMSSM19 points. The neutralino mass is shown in colour, and parameter sets excluded by BBN are shown in grey.

pMSSM19 sample
In addition, we have calculated the value of ρ_φ(T = T_fo) required to obtain the correct relic density in our sample of pMSSM19 points. The result is shown in Figure 11, which displays the dependence of ρ_φ(T = T_fo) on the standard relic density. To a first approximation, ρ_φ(T = T_fo) follows a power law in the standard relic density, with slope ∼ −2 on a log–log scale. The smaller the standard relic density, the larger the scalar field density must be around freeze-out in order to increase the relic density up to the DM density. The slope −2 can be understood from a simple calculation. Freeze-out occurs when the annihilation rate equals the expansion rate; in the standard cosmological model:

$$ n_\chi(T_{\rm fo}^{\rm stand})\,\langle\sigma v\rangle \;=\; H_{\rm stand}(T_{\rm fo}^{\rm stand}) \;=\; H_0\,\sqrt{\rho_{\rm rad}(T_{\rm fo}^{\rm stand})}\,, \qquad (5.3) $$

with $H_0 = \sqrt{8\pi/3M_p^2}$. The comoving neutralino density $Y_{\rm stand} = n_\chi/s$ can then be expressed as:

$$ Y_{\rm stand}(T_{\rm fo}^{\rm stand}) \;=\; \frac{n_\chi(T_{\rm fo}^{\rm stand})}{s(T_{\rm fo}^{\rm stand})}\,, \qquad (5.4) $$

which can be re-expressed using Eq. (5.3) as:

$$ Y_{\rm stand}(T_{\rm fo}^{\rm stand}) \;=\; \frac{H_0\,\sqrt{\rho_{\rm rad}(T_{\rm fo}^{\rm stand})}}{\langle\sigma v\rangle\, s(T_{\rm fo}^{\rm stand})} \;\propto\; \frac{1}{\langle\sigma v\rangle\, M_p\, T_{\rm fo}^{\rm stand}}\,. \qquad (5.5) $$
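The standard freeze-out estimate above, $Y_{\rm stand} \sim H(T_{\rm fo})/(\langle\sigma v\rangle\, s(T_{\rm fo}))$, can be checked numerically. This is a minimal sketch: the neutralino mass, annihilation cross-section, degrees of freedom and the ratio x_fo = m/T_fo are illustrative assumptions, not values from the paper.

```python
import math

# Quick numerical check of the standard freeze-out estimate
# Y_stand ~ H(T_fo) / (s(T_fo) <sigma v>) in a radiation-dominated Universe.
# All parameter values are illustrative (typical WIMP ballpark).
MP, GSTAR = 1.22e19, 90.0           # Planck mass (GeV), relativistic dof
m, sv, x_fo = 100.0, 3e-9, 22.0     # mass (GeV), <sigma v> (GeV^-2), m/T_fo
T_fo = m / x_fo
H = 1.66 * math.sqrt(GSTAR) * T_fo**2 / MP     # radiation-era Hubble rate
s = (2 * math.pi**2 / 45) * GSTAR * T_fo**3    # entropy density
Y_stand = H / (s * sv)                         # comoving density at freeze-out
print(Y_stand)   # O(1e-12), the familiar WIMP ballpark
```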
When the scalar field density in the quintessence model is very large compared to the radiation density, we obtain similar equations:

$$ n_\chi(T_{\rm fo})\,\langle\sigma v\rangle \;=\; H(T_{\rm fo}) \;=\; H_0\,\sqrt{\rho_\phi(T_{\rm fo})} \;=\; H_0\,\sqrt{\rho_\phi(T_{\rm fo}^{\rm stand})}\,\left(\frac{T_{\rm fo}}{T_{\rm fo}^{\rm stand}}\right)^{3}, \qquad (5.6) $$

$$ Y(T_{\rm fo}) \;=\; \frac{n_\chi(T_{\rm fo})}{s(T_{\rm fo})}\,, \qquad (5.7) $$

where we have used in Eq. (5.6) the fact that the scalar field density evolves as $T^{n_4}$ with $n_4 = 6$. The relic comoving density Y in this scenario can then be re-written using Eq. (5.6) as:

$$ Y(T_{\rm fo}) \;=\; \frac{H_0\,\sqrt{\rho_\phi(T_{\rm fo}^{\rm stand})}}{\langle\sigma v\rangle\, s(T_{\rm fo})}\,\left(\frac{T_{\rm fo}}{T_{\rm fo}^{\rm stand}}\right)^{3} \;\propto\; \frac{\sqrt{\rho_\phi(T_{\rm fo}^{\rm stand})}}{\langle\sigma v\rangle\, M_p\, (T_{\rm fo}^{\rm stand})^{3}}\,. \qquad (5.8) $$

Note that, because $n_4 = 6$, the factor $(T_{\rm fo}/T_{\rm fo}^{\rm stand})^3$ cancels against $s(T_{\rm fo}) \propto T_{\rm fo}^3$, so that Y is independent of the shifted freeze-out temperature.
Finally, we can combine Eqs. (5.8) and (5.5) to obtain:

$$ \frac{Y(T_{\rm fo})}{Y_{\rm stand}(T_{\rm fo}^{\rm stand})} \;=\; \sqrt{\frac{\rho_\phi(T_{\rm fo}^{\rm stand})}{\rho_{\rm rad}(T_{\rm fo}^{\rm stand})}}\,. \qquad (5.9) $$

This gives us the ratio between the scalar field density and the radiation density at the standard freeze-out temperature that is required to increase the relic density to the measured dark matter density:

$$ \frac{\rho_\phi}{\rho_{\rm rad}}(T_{\rm fo}^{\rm stand}) \;=\; \left(\frac{\Omega_{\rm obs}}{\Omega_{\rm stand}}\right)^{2} \left[\frac{Y(T_{\rm fo})/Y(T=\mathrm{present})}{Y_{\rm stand}(T_{\rm fo}^{\rm stand})/Y_{\rm stand}(T=\mathrm{present})}\right]^{2}. \qquad (5.10) $$

We retrieve here the slope −2. We note, however, that this particular value appears only because $n_4 = 6$, and thus depends on the quintessence model. Residual annihilations occurring after freeze-out are taken into account by the factor $\left[\,(Y(T_{\rm fo})/Y(T=\mathrm{present}))\,/\,(Y_{\rm stand}(T_{\rm fo}^{\rm stand})/Y_{\rm stand}(T=\mathrm{present}))\,\right]^2$, which takes a value ∼ 10 in our sample of pMSSM19 points. This value is model-dependent, however, and we show in Figure 11 that wino-like neutralinos, for instance, require a larger scalar field density than higgsino-like neutralinos.
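The √(ρ_φ/ρ_rad) enhancement of the relic abundance, and hence the slope −2 (equivalently the slope ∼ 0.5 seen in Figure 10), can be checked with a toy Boltzmann integration. This is a hedged sketch, not the paper's code: the mass, cross-section, degrees of freedom and the reference freeze-out temperature are illustrative assumptions, and entropy degrees of freedom are held constant.

```python
import math

# Toy relic-abundance computation with an extra scalar-field density
# rho_phi = r * rho_rad(T_ref) * (T/T_ref)**6  (phase-4 slope n_4 = 6),
# where r = rho_phi/rho_rad at the reference (standard freeze-out) temperature.
# All quantities in GeV units; parameter values are illustrative only.
MP    = 1.22e19          # Planck mass
GSTAR = 90.0             # relativistic degrees of freedom (assumed constant)
M     = 100.0            # WIMP mass (illustrative)
SV    = 3e-9             # <sigma v> in GeV^-2 (illustrative)
XREF  = 22.0             # assumed standard freeze-out x = m/T

def yeq(x):
    # non-relativistic equilibrium comoving density (g = 2 internal dof)
    return 0.145 * (2.0 / GSTAR) * x**1.5 * math.exp(-x)

def hubble(T, r):
    rho_rad = (math.pi**2 / 30.0) * GSTAR * T**4
    T_ref = M / XREF
    rho_phi = r * (math.pi**2 / 30.0) * GSTAR * T_ref**4 * (T / T_ref)**6
    return math.sqrt(8.0 * math.pi * (rho_rad + rho_phi) / 3.0) / MP

def relic_Y(r, x0=4.0, x1=400.0, h=0.005):
    # backward-Euler integration of dY/dx = -(s<sv>/(H x)) (Y^2 - Yeq^2);
    # the implicit quadratic step is solved in closed form for stability
    x, Y = x0, yeq(x0)
    while x < x1:
        x += h
        T = M / x
        s = (2.0 * math.pi**2 / 45.0) * GSTAR * T**3
        a = h * s * SV / (hubble(T, r) * x)
        Y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (Y + a * yeq(x)**2))) / (2.0 * a)
    return Y

Y0 = relic_Y(0.0)                   # standard radiation-dominated result
for r in (1e4, 1e6):
    print(r, relic_Y(r) / Y0)       # enhancement roughly sqrt(r) for large r
```

Fitting the enhancement between two large values of r gives a log–log slope near 0.5 (the paper's scans find ∼ 0.48), which inverted is exactly the slope −2 relation between the required ρ_φ/ρ_rad and the standard relic density.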
Finally, we note that for neutralinos with a standard relic density ≲ 3 × 10^−4, the required scalar field density at 1 MeV is too large, and our scenario is ruled out by BBN.

Conclusions
The cosmological density of cold dark matter is now known with good accuracy, thanks to measurements by Planck and other cosmological and astrophysical observations. We have studied in this paper how this knowledge could be used to constrain possible non-standard evolution of the early Universe in specific dark matter scenarios. An optimist might assume that laboratory experiments would establish the parameters of some scenario for physics beyond the Standard Model sufficiently well for a discrepancy to be established between the cosmological measurements and model calculations in standard radiation-dominated cosmology. More conservatively, the combination of observations and model calculations could be used to constrain a combination of model parameters and early-Universe scenarios.
As examples of non-standard evolution in the early Universe before Big Bang Nucleosynthesis, we have considered scenarios in which a scalar field decays into some combination of Standard Model and other particles, and quintessence models with various classes of effective potential. Our calculations were illustrated using various supersymmetric models in which a calculation of the cold dark matter density assuming a conventional radiation-dominated early Universe would yield a density that is either larger or smaller than the observed density. The measured cold dark matter density could be used in the case of a decaying scalar field to constrain the initial density of the scalar field, the reheating temperature after it decays, and the branching ratio for its decays into particles beyond the Standard Model. In the case of a quintessence model, the cold dark matter density could be used to constrain the evolution with temperature in the early Universe of the quintessence field.
Our results exemplify the idea that measurements by laboratory experiments could be used, in the context of a specific model for physics beyond the Standard Model, to constrain aspects of the physics controlling the evolution of the early Universe that would otherwise be invisible and inaccessible. In this way, collider and other laboratory experiments could serve as powerful telescopes, using dark matter particles as a novel type of messenger particle able to provide information about the early Universe that photons and neutrinos cannot provide.