1 Introduction

The existence of dark matter (DM) is, for all intents and purposes, an established fact, but its exact nature remains unknown. Thermal freeze-out of Weakly Interacting Massive Particles (WIMPs) has long been the dominant dark matter paradigm. In this mechanism, dark matter particles with masses around the electroweak scale pair annihilate to Standard Model (SM) particles until the corresponding rate drops below the expansion rate of the Universe. Unfortunately, direct detection experiments have imposed stringent constraints on WIMPs, forcing cosmologists to consider alternative scenarios. Amongst these is the possibility that dark matter is part of a larger secluded sector [1]. In this case, dark matter can instead annihilate to new particles and the constraints from direct detection can be circumvented. The dark matter abundance is then mostly controlled by interactions that are internal to the secluded sector.

In the overwhelming majority of cases, dark matter abundance computations are performed assuming that the dark matter phase-space distribution is thermal, be it Maxwell–Boltzmann, Bose–Einstein, or Fermi–Dirac. However, it was recently demonstrated in several papers that this assumption can break down under certain circumstances [2,3,4,5,6,7,8,9,10,11,12,13,14]. What happens in practice is that the processes that lead to dark matter destruction can prefer certain regions of phase space, resulting in those regions being depleted or overpopulated faster than equilibration processes can compensate. This typically leads to a larger amount of dark matter, as annihilation processes become less efficient, though in some cases it can lead to a smaller amount [9].

The breakdown of the thermal distribution approximation is in effect caused by dark matter annihilation processes taking place faster than kinetic equilibration processes. In many secluded sectors, these processes are one and the same, or at least related. This calls into question whether previous computations of dark matter abundance in secluded sectors are justified in using the approximation of thermal distributions.

In this paper, we revisit the computation of dark matter abundances in a benchmark secluded sector by properly evolving the necessary phase-space distributions. More precisely, we study the impact of deviations from thermal phase-space distributions for codecaying dark matter.

We find the following results. During codecaying freeze-out, the phase-space distributions can differ considerably from their thermal values, leading to a substantially larger amount of dark matter at this stage. However, annihilation remains efficient for longer, leaving only a slight excess of dark matter. We therefore confirm that the use of thermal distributions is a relatively good approximation for codecaying dark matter and that the impact on previous work should be small.

The rest of this paper is organized as follows. The benchmark model is presented in Sect. 2. The numerical procedure is described in Sect. 3. The results are shown in Sect. 4. Concluding remarks are presented in Sect. 5. A derivation of the expression for the \(2 \rightarrow 2\) scattering coefficients is presented in Appendix A. The method used to track the integrated densities under the assumption of thermal distributions is discussed in Appendix B.

2 Model

The benchmark secluded sector that we will consider is one in which the dark matter abundance is set by the so-called codecaying dark matter mechanism, which was first proposed in Ref. [15].

Consider a set of dark matter particles and some unstable mediators that can annihilate or decay to SM particles. The mediators and dark matter particles are assumed to have similar masses. In models of codecaying dark matter, the annihilation and decay rates of the mediators to SM particles are small and the mediators decouple very early on. In contrast, the dark matter candidates and the mediators interact strongly. This results in dark matter particles and mediators maintaining chemical equilibrium even after decoupling from the SM plasma. Dark matter particles are then converted to mediators which decay back to the SM particles. The dark matter abundance is determined by when the annihilation of dark matter particles to mediators becomes inefficient.

In practice, we will concentrate on the following simple benchmark model. The dark matter candidate is a complex scalar \(\phi _A\), neutral under all SM gauge groups. It is kept stable by a global U(1) symmetry. The mediator is a real scalar \(\phi _B\), also neutral under all SM gauge groups. We assume the following Lagrangian for the new scalars

$$\begin{aligned} & {\mathcal {L}} \supset -m_A^2 \phi _A^\dagger \phi _A - \frac{m_B^2}{2}\phi _B^2 -\frac{\lambda _{AB}}{2} (\phi _A^\dagger \phi _A) \phi _B^2 \nonumber \\ & \qquad \quad - \frac{\lambda _{Bh}}{4}\phi _B^2 h^2, \end{aligned}$$
(1)

where h is the Higgs boson. In the next sections, we will assume dark matter with a mass around the electroweak scale, which means few Higgs bosons are present in the plasma after the secluded sector has decoupled. This ensures that little energy is exchanged between the two sectors via scatterings between Higgs bosons and \(\phi _B\)’s. A \(\lambda _{Bh}\) term of the exact form of Eq. (1) would generally be accompanied by other interactions in a more realistic model. However, their inclusion would only affect the decoupling between the secluded and SM sectors, which could be mimicked by taking a different value of \(\lambda _{Bh}\), and would not otherwise affect the much later decoupling between \(\phi _A\) and \(\phi _B\) in which we are interested. In addition, we will assume that \(\phi _B\) decays with a decay width \(\Gamma _B\) to some particles that remain in thermal equilibrium with the SM, though the exact nature of these particles is irrelevant to the evolution equations. Because such a decay width would break a \({\mathbb {Z}}_2\) symmetry, it can be small in a technically natural way. Additional terms could be included in the Lagrangian, but we will ignore them for the sake of simplicity.

This benchmark is motivated by confining secluded sectors, which provide a natural setting for codecaying dark matter [16,17,18]. For example, a confining secluded sector could contain two dark quarks that transform under a complex representation of the confining group. Chiral symmetry breaking would then result in three pseudo-Goldstone bosons, the dark pions. The neutral pion can then act as the mediator \(\phi _B\) and the equivalent of the charged pion can act as the dark matter candidate \(\phi _A\). In practice, the chiral Lagrangian of course contains additional terms and the couplings would be related. Furthermore, secluded sectors tend to have very universal properties, as their dynamics is generally controlled by a small subset of their lightest particles and a few parameters [18]. As such, even a minimal model is expected to be representative of many scenarios to a good approximation.

3 Numerical procedure

Consider the phase-space distribution \(f_i(\vec {x}_i, \vec {p}_i, t)\) of particle i. This is related to the number density \(n_i\) via

$$\begin{aligned} n_i = g_i \int \frac{d^3 p_i}{(2\pi )^3}f_i(\vec {x}_i, \vec {p}_i, t), \end{aligned}$$
(2)

where \(g_i\) is the number of internal degrees of freedom of particle i. Under the assumptions of isotropy and uniformity, \(f_i(\vec {x}_i, \vec {p}_i, t)\) is only a function of the magnitude \(p_i\) of its momentum and of time, i.e., \(f_i(p_i, t)\). The evolution of \(f_i\) is governed by the Boltzmann equation

$$\begin{aligned} \left. \frac{\partial f_i}{\partial t}\right| _{p_i} - H p_i\frac{\partial f_i}{\partial p_i} = \sum {\mathcal {C}}[f_i(p_i)], \end{aligned}$$
(3)

where the time derivative is taken at constant momentum \(p_i\), \({\mathcal {C}}[f_i(p_i)]\) are the different collision terms and H is the Hubble parameter. Alternatively, the Boltzmann equation can be expressed as

$$\begin{aligned} \left. \frac{\partial f_i}{\partial t}\right| _{p_i^c} = \sum {\mathcal {C}}[f_i(p_i)], \end{aligned}$$
(4)

where \(p_i^c = a p_i\), with a being the scale factor, is the comoving momentum. Both forms of the Boltzmann equation have their advantages and disadvantages. On one hand, using \(p_i^c\) presents the advantage that the dark matter distribution at a given \(p_i^c\) freezes at late times, which is not the case for \(p_i\). On the other hand, using \(p_i^c\) means that any finite grid will cover an increasingly small range of \(p_i\). This can cause problems when a process produces particles at a fixed momentum, as that momentum might eventually fall outside the grid.

For the dark matter \(\phi _A\), we use a grid of discrete values of comoving momentum \(p_A^c\) referred to as \(p^c_{A_i}\). For the mediator \(\phi _B\), we use a grid of momentum \(p_B\) referred to as \(p_{B_i}\). Both grids are chosen to initially encompass the overwhelming majority of particles. The grid of \(p_{B_i}\) is periodically updated to remove higher momenta at which the phase-space distribution has become negligible. All simulations are started at a time far before decoupling, with distributions initialized as Maxwell–Boltzmann using integrated densities and temperatures obtained via the method of Appendix B. It was verified that the decoupling takes place at a sufficiently low temperature for the Maxwell–Boltzmann distribution to be an excellent approximation.
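As a concrete illustration, the following minimal sketch (in Python, not the code actually used for this work) sets up the two momentum grids, initializes Maxwell–Boltzmann distributions, and evaluates the number density of Eq. (2); the grid sizes, ranges, and starting temperature are placeholder assumptions roughly matching the benchmark of Fig. 1.

```python
# Illustrative sketch of the grid setup: a comoving-momentum grid for
# phi_A, a physical-momentum grid for phi_B, Maxwell-Boltzmann initial
# conditions, and the number-density integral of Eq. (2).
import numpy as np

def mb_distribution(p, m, T):
    """Maxwell-Boltzmann phase-space distribution exp(-E/T)."""
    E = np.sqrt(p**2 + m**2)
    return np.exp(-E / T)

def number_density(p_grid, f, g=1):
    """Eq. (2) for an isotropic f: n = g/(2 pi^2) * int dp p^2 f(p)."""
    return g / (2.0 * np.pi**2) * np.trapz(p_grid**2 * f, p_grid)

# Placeholder benchmark values (GeV); x0 ~ 63.35 as in Fig. 2.
m_A, m_B = 200.0, 198.0
T0 = m_A / 63.35

# A p_max of a few times sqrt(m T) encompasses the overwhelming majority
# of non-relativistic particles; a(t0) = 1 is assumed so that the
# comoving and physical grids initially coincide.
p_max = 10.0 * np.sqrt(m_A * T0)
p_A = np.linspace(1e-3, p_max, 400)   # comoving grid, fixed
p_B = np.linspace(1e-3, p_max, 400)   # physical grid, truncated later

f_A = mb_distribution(p_A, m_A, T0)
f_B = mb_distribution(p_B, m_B, T0)
```

The overall normalizations of the initial distributions (effective chemical potentials) would in practice be fixed by the integrated densities of Appendix B; they are omitted here for brevity.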

It is convenient to introduce a reference mass \(m_{\text {ref}}\) and define \(x=m_{\text {ref}}/T\), where T is the temperature of the SM plasma. In practice, we will take \(m_{\text {ref}}\) to be the mass of \(\phi _A\). Assuming the energy content of the Universe to be dominated by the SM plasma, one can then verify that

$$\begin{aligned} \frac{dt}{dx} = \sqrt{\frac{45}{4\pi ^3}}\frac{g_*^{1/2}}{h_{\text {eff}}}\frac{M_{\text {Pl}}}{m_{\text {ref}}T}, \end{aligned}$$
(5)

where \(h_{\text {eff}}\) is the effective number of entropy degrees of freedom and

$$\begin{aligned} g_*^{1/2} = \frac{h_{\text {eff}}}{g^{1/2}_{\text {eff}}}\left( 1 + \frac{T}{3 h_{\text {eff}}}\frac{d h_{\text {eff}}}{dT}\right) , \end{aligned}$$
(6)

with \(g_{\text {eff}}\) denoting the effective number of energy degrees of freedom. Equation (5) can be used to turn the time derivatives of Eqs. (3) and (4) into the more convenient \(\partial f_i/\partial x\).
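For concreteness, the change of variables can be implemented as in the following minimal sketch, which assumes that \(g_{\text {eff}}\), \(h_{\text {eff}}\), and \(dh_{\text {eff}}/dT\) are supplied externally, e.g., from standard tabulations.

```python
# Minimal sketch of Eqs. (5)-(6): the Jacobian dt/dx for x = m_ref / T.
import numpy as np

M_PL = 1.2209e19  # Planck mass in GeV

def g_star_half(T, g_eff, h_eff, dh_eff_dT):
    """Eq. (6): g_*^{1/2} from tabulated g_eff and h_eff."""
    return h_eff / np.sqrt(g_eff) * (1.0 + T / (3.0 * h_eff) * dh_eff_dT)

def dt_dx(x, m_ref, g_eff, h_eff, dh_eff_dT):
    """Eq. (5): dt/dx, assuming an SM-radiation-dominated Universe."""
    T = m_ref / x
    return (np.sqrt(45.0 / (4.0 * np.pi**3))
            * g_star_half(T, g_eff, h_eff, dh_eff_dT) / h_eff
            * M_PL / (m_ref * T))
```

Multiplying the collision terms of Eqs. (3) and (4) by this Jacobian turns them into equations for \(\partial f_i/\partial x\).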

In this paper, we will be interested in two types of collision terms: \(1 \rightarrow 2\) decays and \(2 \rightarrow 2\) scatterings. First, consider the decay

$$\begin{aligned} P_1 \rightarrow P_2 P_3, \end{aligned}$$
(7)

and its inverse. Assume \(P_2\) and \(P_3\) are in equilibrium. The corresponding collision term is

$$\begin{aligned} C[f_1(p_{1_i})] = - \frac{\Gamma _1}{\gamma _1} \left[ f_1(p_{1_i}) - f_1^{\text {eq}}(p_{1_i}) \right] , \end{aligned}$$
(8)

where \(\Gamma _1\) is the decay width of \(P_1\) and \(\gamma _1 = E_1/m_1\) is the usual Lorentz factor. Second, consider the scattering

$$\begin{aligned} P_1 P_2 \rightarrow P_3 P_4, \end{aligned}$$
(9)

and its inverse process. The collision term for \(P_4\) is then

$$\begin{aligned} C[f_4(p_{4_i})]= & \sum _{j, k} \Delta p_1 \Delta p_2 {\hat{W}}_{ijk}\nonumber \\ & \times \left[ f_1(p_{1_j}) f_2(p_{2_k}) - f_3(p_{3_m}) f_4(p_{4_i}) \right] , \end{aligned}$$
(10)

where \({\hat{W}}_{ijk}\) are the scattering coefficients (see Appendix A for the precise definition and other notations).
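A minimal sketch of this discretized sum, with a precomputed coefficient array standing in for \({\hat{W}}_{ijk}\) and a generic interpolating function for \(f_3\), reads as follows; it is meant to illustrate the structure of Eq. (10), not to reproduce the actual implementation.

```python
# Schematic evaluation of the 2 -> 2 collision term of Eq. (10) for P4
# at the grid momentum p4[i]. W_hat is a precomputed 3D array and
# f3_interp a callable interpolating the f_3 distribution.
import numpy as np

def collision_2to2(i, f1, f2, f3_interp, f4, p1, p2, p4,
                   W_hat, m1, m2, m3, m4):
    dp1 = np.gradient(p1)   # quadrature weights Delta p_1
    dp2 = np.gradient(p2)   # quadrature weights Delta p_2
    E = lambda p, m: np.sqrt(p**2 + m**2)
    total = 0.0
    for j in range(len(p1)):
        for k in range(len(p2)):
            # Energy conservation fixes the P3 momentum for each (i,j,k).
            E3 = E(p1[j], m1) + E(p2[k], m2) - E(p4[i], m4)
            if E3 <= m3:
                continue   # kinematically closed configuration
            p3 = np.sqrt(E3**2 - m3**2)
            gain = f1[j] * f2[k]          # P1 P2 -> P3 P4
            loss = f3_interp(p3) * f4[i]  # inverse process
            total += dp1[j] * dp2[k] * W_hat[i, j, k] * (gain - loss)
    return total
```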

Fig. 1 Evolution of \(Y_A\) and \(Y_A^{MB}\) for different decay widths \(\Gamma _B\). The masses are set to \(m_A = 200\) GeV and \(m_B=198\) GeV. The parameter \(\lambda _{Bh}\) is set to \(10^{-2}\) and the parameter \(\lambda _{AB}\) is adjusted for each case such that the DM abundance obtained under the assumption of thermal distributions reproduces the measured value

Fig. 2 a Example of the evolution of \(Y_A/Y_A^{MB}\). b Example of the evolution of \(f_B\). Each curve corresponds to a value of \(x = x_0 + 50n\), with \(x_0 \simeq 63.35\) being the x at which the simulation is started for this benchmark and n an integer. The topmost curve corresponds to \(n=0\). Parameters are set as in Fig. 1. The vertical line corresponds to Eq. (14)

Several comments are in order. First, each element of \({\hat{W}}_{ijk}\) corresponds to a double integral, and the full set of coefficients forms a three-dimensional array. It is practically impossible to perform every integral at each step. To circumvent this problem, \({\hat{W}}\) is evaluated on a three-dimensional grid before solving the evolution equations, and values of \({\hat{W}}_{ijk}\) are then obtained during the evolution via trilinear interpolation on this grid. Second, the quantity \(f_3(p_{3_m})\) that appears in the \(2 \rightarrow 2\) processes is determined by fixing \(p_{3_m}\) from energy conservation and linearly interpolating the \(f_3\) distribution. Third, evaluating the double sum at every step can be extremely time-consuming when the grid is very fine, but an insufficiently fine grid would lead to a large error in the computation of the number densities. We have found the best compromise between execution time and precision to be taking a very fine grid and evaluating the double sum using only a subset of the points.
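A possible realization of the precompute-and-interpolate strategy is sketched below; the function w_hat_integral is a hypothetical placeholder for the double integral of Appendix A.

```python
# Precompute W_hat on a 3D grid of node momenta, then query it during
# the evolution by trilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_w_hat_interpolator(p4_nodes, p1_nodes, p2_nodes, w_hat_integral):
    table = np.empty((len(p4_nodes), len(p1_nodes), len(p2_nodes)))
    for i, p4 in enumerate(p4_nodes):
        for j, p1 in enumerate(p1_nodes):
            for k, p2 in enumerate(p2_nodes):
                table[i, j, k] = w_hat_integral(p4, p1, p2)
    # method="linear" on a 3D regular grid is trilinear interpolation.
    return RegularGridInterpolator((p4_nodes, p1_nodes, p2_nodes), table,
                                   method="linear", bounds_error=False,
                                   fill_value=0.0)

# Usage: w_hat = build_w_hat_interpolator(...); w_hat((p4, p1, p2))
```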

An adaptive step size is used as follows. An error is defined for \(\phi _A\) as

$$\begin{aligned} \sigma _A = \max _i \left| \frac{f_A^{\text {RK}_2}(p_{A_i}, x + \Delta x) - f_A^{\text {RK}_1}(p_{A_i},x + \Delta x)}{f_A(p_{A_i},x)}\right| , \end{aligned}$$
(11)

where \(f_A^{\text {RK}_1}(p_{A_i}, x + \Delta x)\) and \(f_A^{\text {RK}_2}(p_{A_i}, x + \Delta x)\) are the distributions at \(p_{A_i}\) evolved from x to \(x + \Delta x\) using the first-order and second-order (midpoint) Runge–Kutta methods, respectively. A similar error \(\sigma _B\) is defined for \(\phi _B\). We then define \(\sigma \) as \(\max (\sigma _A, \sigma _B)\) if the ratio of the dark matter density to the entropy density is above twice its present-day value and as \(\sigma _A\) otherwise. If \(\sigma \) is above a predetermined tolerance \(\epsilon \), the evolution of the densities by one step is repeated using a reduced step size of

$$\begin{aligned} \min \left( a \Delta x(\epsilon /\sigma )^b, c \Delta x, \Delta x^{\text {max}} \right) , \end{aligned}$$
(12)

where we set \(\epsilon = 10^{-3}\), \(a = 0.9\), \(b = 0.33\), \(c = 1.1\) and \(\Delta x^{\text {max}} = 0.1\). This is repeated until the required precision goal is met. The \(f_A(p_{A_i},x)\) distribution is then updated to \(f_A^{\text {RK}_2}(p_{A_i}, x + \Delta x)\) and the evolution continues using a new step size determined by Eq. (12). To determine when to end the simulation, the following quantity is introduced

$$\begin{aligned} \Delta Y^{\text {r}}_{A_n} = \frac{|Y_A(x_{n+1}) - Y_A(x_n)|}{Y_A(x_{n+1}) + Y_A(x_n)}, \end{aligned}$$
(13)

where \(Y_A\) is the ratio of \(n_A\) to the entropy density and \(x_n = x_0 + 10n\), with \(x_0\) the value of x at which the simulation is started and n an integer. The simulation is stopped once more than 30 values satisfying \(\Delta Y^{\text {r}}_{A_n} < 10^{-3}\) have been recorded.
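Schematically, the adaptive stepping of Eqs. (11) and (12) can be organized as in the following single-species sketch, with the right-hand side of the Boltzmann equation abstracted into a function rhs; the full two-species implementation and the stopping criterion of Eq. (13) follow the same pattern.

```python
# Adaptive first/second-order Runge-Kutta stepping, following
# Eqs. (11) and (12). The distribution f is assumed strictly positive.
import numpy as np

EPS, A, B, C, DX_MAX = 1e-3, 0.9, 0.33, 1.1, 0.1

def rk12_step(f, x, dx, rhs):
    """Euler (RK1) and midpoint (RK2) estimates of f at x + dx."""
    k1 = rhs(f, x)
    f_rk1 = f + dx * k1
    f_rk2 = f + dx * rhs(f + 0.5 * dx * k1, x + 0.5 * dx)
    return f_rk1, f_rk2

def evolve(f, x, dx, rhs, x_end):
    while x < x_end:
        f_rk1, f_rk2 = rk12_step(f, x, dx, rhs)
        sigma = np.max(np.abs((f_rk2 - f_rk1) / f))              # Eq. (11)
        new_dx = min(A * dx * (EPS / sigma)**B, C * dx, DX_MAX)  # Eq. (12)
        if sigma > EPS:
            dx = new_dx   # redo the step with a reduced step size
            continue
        f, x, dx = f_rk2, x + dx, new_dx   # accept and continue
    return f
```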

Finally, the validity of the numerical method has been verified by making sure that the correct dark matter abundance is reproduced in the limit where the standard integrated density approach is expected to work. This will be shown in the next section.

4 Results

We show in Fig. 1 different cases of the evolution of \(Y_A\) and its equivalent under the approximation of Maxwell–Boltzmann distributions, \(Y_A^{MB}\). The dark matter \(\phi _A\) is slightly heavier than the mediator \(\phi _B\). The method of Appendix B is used to compute \(Y_A^{MB}\). As can be seen, the abundance initially follows its thermal value very closely. As the dark matter starts to decouple, deviations can appear and lead to a temporary excess of \({\mathcal {O}}(20\%)\) in some cases. However, annihilation of dark matter remains efficient for a longer period of time and the temporary excess of \(Y_A\) mostly vanishes with time. Ultimately, only a small excess of dark matter is left, at most \(6\%\) for our benchmarks. Figure 2a shows an example of the ratio \(Y_A/Y_A^{MB}\). Note that both methods coincide when the decay width of \(\phi _B\) becomes sufficiently large, as can be seen in Fig. 1a, which serves as a validation of our method.

The physical reason for the deviation from the integrated density approach is best illustrated by Fig. 2b, which shows \(f_B\) at different times for a benchmark point. As should be clear, the phase-space distribution of \(\phi _B\) starts to deviate massively from the Maxwell–Boltzmann distribution as the system evolves. In detail, when x is large, the collisions \({\bar{\phi }}_A \phi _A \rightarrow \phi _B \phi _B\) take place between two particles that are almost at rest. This leads to the production of two \(\phi _B\) mediators, with a peak in \(f_B\) appearing at the momentum

$$\begin{aligned} p_{\text {peak}}\sim \sqrt{2m_B (m_A - m_B)}. \end{aligned}$$
(14)
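For the benchmark masses of Fig. 1, \(m_A = 200\) GeV and \(m_B = 198\) GeV, Eq. (14) gives \(p_{\text {peak}} \sim \sqrt{2 \times 198 \times 2}\ \text {GeV} \approx 28\) GeV, which is the position of the vertical line in Fig. 2b.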

The process takes place too quickly for this peak to disappear via kinetic equilibration processes. This modifies the relative rate of the \({\bar{\phi }}_A \phi _A \rightarrow \phi _B \phi _B\) process and its inverse, increasing the dark matter abundance. It is easy to verify that deviations start to become noticeable in Fig. 1c around the point at which Fig. 2b deviates visibly from the Maxwell–Boltzmann distribution.

Smaller decay widths of the mediators were considered, but they did not lead to much larger deviations in the dark matter abundance and proved increasingly challenging numerically. Other values of the mass splitting and \(\lambda _{Bh}\) were also considered, but did not lead to any qualitative differences. The small increase in the slope around \(x\sim 900\) in Figs. 1 and 2a corresponds to the QCD phase transition, during which a larger amount of time passes for a given interval in x.

Finally, we mention that codecaying dark matter can be numerically very challenging, even under the assumption of thermal distributions. This explains the small instabilities at large x. Extensive numerical resources are required to obtain sufficiently accurate results, which is why only a few results are presented and the benchmark model is kept so simple.

5 Conclusion

It is a distinct possibility that dark matter is part of a larger secluded sector and that the dark matter abundance is set by interactions that are internal to the secluded sector. In many such scenarios, the interactions responsible for depleting the dark matter abundance will be the same as, or at least related to, those responsible for the kinetic equilibration of the secluded sector. It is then possible that the phase-space distributions will differ from their thermal values during freeze-out, which could affect the dark matter abundance.

In this paper, we have considered the effect of non-thermal distributions on codecaying dark matter. We find that the dark matter abundance can differ substantially from its standard abundance during the decoupling process, but that a longer period of annihilation leads to only a small increase in the final abundance. The use of thermal distributions is therefore confirmed to be a relatively good approximation for codecaying dark matter, at least as far as the final abundance is concerned.

As a concluding remark, we mention that even though we have considered only a simple example, small deviations from thermal distributions in secluded sectors should be ubiquitous. Obviously, tracking the phase-space distributions is non-trivial and might not be practical under many circumstances. At the very least, one should keep in mind that a proper treatment of the phase-space distributions can lead to corrections to the dark matter abundance in many scenarios of secluded sectors.