Centrality dependence of freeze-out temperature fluctuations in Pb-Pb collisions at the LHC

Many data in high energy physics are, in fact, sample means. It is shown that when this exact meaning of the data is taken into account and the most weakly bound states are removed from the hadron resonance gas, the whole spectra of pions, kaons and protons measured at midrapidity in Pb-Pb collisions at √s_NN = 2.76 TeV can be fitted simultaneously. The invariant distributions are predicted with the help of the single-freeze-out model in the chemical equilibrium framework. The method is applied to the measurements in centrality bins of Pb-Pb collisions and gives acceptable fits for all but the peripheral bins. A comparison with the results obtained in the framework of the original single-freeze-out model is also presented. Some more general possible implications of this approach are pointed out.


Introduction
The unprecedented success of physics in modern times is the result of the application of two general principles: the theoretical modeling of a phenomenon and the experimental verification of the predictions of the model. One of the currently most explored parts of the Standard Model is the theory of strong interactions, Quantum Chromodynamics (QCD). QCD predicts a transition from a system of hadrons (strongly interacting particles, which can be observed) to a system of partons (quarks and gluons, which cannot be observed individually). This requires extremely high temperatures or densities of the system. The conditions necessary for the appearance of the deconfined (partonic) phase of QCD can now be established in the laboratory (for a wide review of the subject, from theory to experiment, see ref. [1]).
High-energy heavy-ion collisions are the tool for the creation of the deconfined phase. The matter created during such a collision, extremely dense and hot, is compressed at the initial moment into roughly the volume of a narrow disc of the ion radius. Thereafter the matter rapidly expands due to the tremendous pressure and simultaneously cools. The evolution of the matter can be described in the framework of relativistic hydrodynamics [2]. During the expansion the matter undergoes a transition to a hadron gas phase. The hadron gas continues the hydrodynamical evolution, assuming that the collective behavior does not cease at the transition. The expansion makes the gas more and more dilute, so when the mean free paths of its constituents become comparable to the size of the system, one can no longer treat the gas as a collective system. This moment is called freeze-out. After it the gas disintegrates into freely streaming particles, which can be detected. In principle, one can distinguish two kinds of freeze-out: a chemical freeze-out, when all inelastic interactions cease, and a kinetic freeze-out (at lower temperature), when elastic interactions cease as well. The measured hadron yields are fingerprints of the corresponding hadron abundances present at the chemical freeze-out. The yields can be consistently described within the grand canonical ensemble with only three independent parameters: the chemical freeze-out temperature T_ch, the baryochemical potential μ_B and the volume of the system at the freeze-out, V [3]. This idea is the foundation of the Statistical Model (SM) of particle production in heavy-ion collisions. The measured p_T spectra include information about the transverse expansion (radial flow) of the hadron gas and the temperature T_kin at the kinetic freeze-out [4]. However, an alternative approach to freeze-out was founded in [5,6], where a single freeze-out was postulated, i.e. the kinetic freeze-out coincides with the chemical freeze-out. This is the Single-Freeze-Out Model (SFOM). The suitably chosen freeze-out hypersurface and the complete inclusion of contributions from resonance decays made it possible to correctly describe the Relativistic Heavy Ion Collider (RHIC) p_T spectra.
With the first data on Pb-Pb collisions at √s_NN = 2.76 TeV from the CERN Large Hadron Collider (LHC) [7,8], two new problems appeared when the SM and hydrodynamics were applied to the description of particle production. The predicted proton and antiproton abundances were larger than the measured ones [9] and low-p_T pions were underestimated [7,10]. As a result, the ratio p/π = (p + p̄)/(π⁺ + π⁻) was overestimated in the SM by a factor of ∼1.5 [11]. Various explanations of this "puzzle" have been invented, but all fall outside the SM. These are: i) the incomplete list of resonances: there could still be undiscovered (high-mass) resonances which, after decays, would increase pion yields more than proton ones; ii) the non-equilibrium thermal model, with two additional parameters describing the degree of deviation from equilibrium; iii) hadronic inelastic interactions after hadronization and before the chemical freeze-out, especially baryon annihilation; and iv) flavor hierarchy at freeze-out, which could result in two different freeze-out temperatures, one for non-strange hadrons and another for strange hadrons (for more details and references see [11]). And the latest one: v) the inclusion of resonance spectral functions [12,13].
In this work a generalization of the SFOM in the chemical equilibrium framework is postulated, which proved to be successful in the solution of the above problems [14] and reproduces well the results of [8]. This approach might be considered as the alternative (vi-th) possibility, in addition to the five listed above. However, in contrast to the original version of the SFOM, all parameters of the model (thermal and geometric) are estimated simultaneously from the spectra. This version was successfully applied to the description of the final spectra measured at RHIC for all centrality classes in a broad range of collision energies [15]. The new idea introduced into the SFOM in the present work is to randomize one of the parameters of the model. The model will be called the Randomized Single-Freeze-Out Model (RSFOM) from now on. It has turned out that a successful improvement is achieved only when the freeze-out temperature becomes a random variable; nothing is gained by the randomization of the geometric parameters of the model. This approach was applied successfully to the most central bin of Pb-Pb collisions at √s_NN = 2.76 TeV in [14]; for instance, the ratio p/π was explained. In the present paper, results for all centrality classes of the above-mentioned collisions are reported.

The model
For the convenience of the reader, we repeat here the description of the method, which is the same as that used in [14].
In the SFOM the invariant distribution of the measured particles of species i has the form

  dN_i/(d²p_T dy) = ∫ dσ_μ p^μ f_i(p·u) ,   (1)

where dσ_μ is the normal vector on the freeze-out hypersurface, u^μ = x^μ/τ_f is the four-velocity of a fluid element and f_i is the final momentum distribution of the particle in question. The final distribution means that f_i is the sum of the primordial and decay contributions to the distribution. The freeze-out hypersurface is defined by the equations

  τ_f = √(t² − x² − y² − z²) ,   ρ = √(x² + y²) ≤ ρ_max ,   (2)

where the invariant time, τ_f, and the transverse size, ρ_max, are the two geometric parameters of the model. For the LHC energies all chemical potentials can be set equal to zero, so the freeze-out temperature, T_f, is the only thermal parameter of the model. The contribution from weak decays concerns (anti-)protons mostly [8,16], hence secondary (anti-)protons from primordial and decay Λ(Λ̄)'s are subtracted.

However, the data on p_T spectra [7,8] are not "points" but, in the language of statistics, sample means (the division by N_ev, the number of events in the sample, indicates this). In the large-sample limit (the sample size going to infinity) a sample mean converges to the distribution (theoretical) mean, not to just one value of the theoretical equivalent of the measurand (here eq. (1)). This is guaranteed by the weak law of large numbers [17,18]. Therefore the theoretical prediction should also be a random variable, and the quantity to compare with the data is its average. For simplicity it is assumed that the theoretical prediction, eq. (1), is a statistic (a function of a random variable, which by definition is also a random variable) and that one of the parameters of the model, θ (θ = T_f, τ_f or ρ_max), is a random variable. Then the theoretical prediction becomes the appropriate average,

  ⟨dN_i/(d²p_T dy)⟩ = ∫ dθ f(θ) dN_i/(d²p_T dy)(θ) ,   (3)

where f(θ) is the probability density function (p.d.f.) of θ. This approach is more general but includes the standard one: if fluctuations of θ are negligible, then its p.d.f. is Dirac-delta like, f(θ) ∼ δ(θ − θ_o), and the average becomes the value at the optimal point θ_o.

It has turned out that only the randomization of T_f improves the quality of the fit; the randomization of ρ_max or τ_f does not change anything. In fact, for technical reasons, it is not T_f that is randomized but β_f = 1/T_f. From the statistical point of view the two possibilities are equivalent, because β_f(T_f) has a unique inverse and vice versa [17]. Two p.d.f.'s are considered: log-normal,

  f(β_f) = 1/(β_f σ √(2π)) · exp[ −(ln β_f − μ)² / (2σ²) ] ,   (4)

and triangular,

  f(β_f) = (Γ − |β_f − β̄_f|)/Γ²  for |β_f − β̄_f| ≤ Γ, and 0 otherwise ,   (5)

where μ and σ are the parameters of the log-normal p.d.f., whereas β̄_f and Γ are the parameters of the triangular p.d.f., β̄_f being the average of β_f. The first is differentiable but has an infinite tail; the second is not differentiable but has a finite range. The choice is arbitrary, but two general conditions should be fulfilled: the p.d.f. should be defined for a positive real variable and should have two parameters, so that the average and the variance can be determined independently. However, for both p.d.f.'s, eqs. (4) and (5), fits of expression (3) to the whole data on p_T spectra for the most central class of Pb-Pb collisions at √s_NN = 2.76 TeV [7] resulted in χ²/n_dof = 1.49 with p-value = 2·10⁻⁶ (n_dof = 234), which is still unacceptable.
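The order of magnitude of the quoted p-value can be checked with a short numerical estimate. The sketch below is a minimal Python calculation assuming only the quoted χ²/n_dof and n_dof; it uses the Wilson-Hilferty normal approximation to the upper tail of the χ² distribution, which is adequate at this many degrees of freedom:

```python
import math

def chi2_pvalue(chi2: float, ndof: int) -> float:
    """Upper-tail p-value of a chi-square statistic via the
    Wilson-Hilferty normal approximation (good for large ndof)."""
    # (chi2/ndof)^(1/3) is approximately normal with the mean and
    # standard deviation below.
    mean = 1.0 - 2.0 / (9.0 * ndof)
    sd = math.sqrt(2.0 / (9.0 * ndof))
    z = ((chi2 / ndof) ** (1.0 / 3.0) - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# chi2/ndof = 1.49 with ndof = 234, as quoted in the text
p = chi2_pvalue(1.49 * 234, 234)
print(f"p-value ~ {p:.1e}")
```

The result is of order 10⁻⁶, confirming that a fit with χ²/n_dof ≈ 1.49 at 234 degrees of freedom must be rejected at any customary significance level.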
The second assumption of the model is purely heuristic: it states that the most weakly bound resonances should be removed from the hadron gas. To be more precise, all resonances with full width Γ > 250 MeV (and masses below 1600 MeV) are removed [19]. These are: f0(500), h1(1170), a1(1260), π(1300), f0(1370), π1(1400), a0(1450), ρ(1450), K*0(1430) and N(1440). (In fact, the hint for this assumption was the accidental observation that after the update of the f0(500) mass to the lower value [19], the quality of the fit became worse.) It should be noticed that the note attached to f0(500) says: "The interpretation of this entry as a particle is controversial" [19], and the removal of this resonance has recently found a theoretical justification [20]. The exclusion of only f0(500) moves the fits to the boundary of acceptance, χ²/n_dof ∼ 1.3 (p-value ∼ 0.001); nevertheless, according to the rigorous rules of statistical inference this is still not a "good" fit [17]. The removed resonances are weakly bound already in the vacuum, with average lifetimes τ < 1 fm, so it might happen that they are not formed in the hot and dense medium at all, at least in central Pb-Pb collisions at the extreme energy √s_NN = 2.76 TeV. More precisely, resonances correspond to attractive interactions between hadrons. In the medium these interactions are likely modified, and one cannot exclude the possibility that they are weakened to such an extent that some resonances disappear already before the freeze-out. This remains a heuristic hypothesis, but it works very well. It should be stressed at this point that both assumptions are necessary: if only the removal of weakly bound resonances is applied (no randomization of any parameter), the fit for the most central class is still unacceptable, χ²/n_dof = 1.5 (p-value = 10⁻⁶). It looks as if both assumptions (phenomena) reinforce each other.
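The width cut described above amounts to a simple filter on the resonance table. The sketch below uses approximate PDG masses and full widths (illustrative round values only, not the complete listing of ref. [19]) to show which states fail the Γ > 250 MeV, m < 1600 MeV criterion while ordinary narrow states survive:

```python
# Approximate masses and full widths in MeV (illustrative values;
# the actual analysis uses the full PDG listing of ref. [19]).
RESONANCES = {
    "f0(500)":     (500, 475),
    "rho(770)":    (775, 149),
    "K*(892)":     (892, 47),
    "h1(1170)":    (1170, 375),
    "Delta(1232)": (1232, 117),
    "a1(1260)":    (1230, 420),
    "f2(1270)":    (1276, 187),
    "pi(1300)":    (1300, 400),
    "f0(1370)":    (1370, 350),
    "pi1(1400)":   (1354, 330),
    "K*0(1430)":   (1425, 270),
    "N(1440)":     (1440, 350),
    "a0(1450)":    (1474, 265),
    "rho(1450)":   (1465, 400),
}

def removed(width_cut=250.0, mass_cut=1600.0):
    """Resonances excluded from the hadron gas: full width above the
    cut and mass below the upper limit considered in the text."""
    return sorted(name for name, (m, g) in RESONANCES.items()
                  if g > width_cut and m < mass_cut)

print(removed())
```

Narrow states such as ρ(770), K*(892) and Δ(1232) pass the cut and stay in the gas; exactly the ten broad states listed in the text are removed.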
Centrality

In the RSFOM the production of low-p_T pions is enhanced slightly in comparison with the results of the SFOM in central bins. For higher p_T in central bins, and for all other bins, the fits of pions are the same in both models. Fits of kaons are practically the same, and both models underestimate high-p_T production in peripheral bins. Results for low-p_T protons and antiprotons are practically the same: a slight overestimation, but within errors, for central bins turns gradually, as the collisions become more peripheral, into an underestimation in peripheral bins. In the high-p_T region (p_T > 3 GeV/c) the fits of the RSFOM and the SFOM disagree, and the disagreement deepens with p_T. But both fits agree with the data within errors for the first 5 centrality bins, starting from the most central one. For higher centrality classes the fits underestimate the high-p_T production (the SFOM more so).
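The qualitative effect of a fluctuating freeze-out temperature on a spectrum can be seen already in a one-dimensional toy model. The sketch below is only an illustration of the averaging of eq. (3) with the triangular p.d.f. of eq. (5) applied to a bare Boltzmann factor, not the full Cooper-Frye integral of eq. (1); the numbers T_f = 150 MeV and Γ = 1.5 GeV⁻¹ are chosen arbitrarily for the example:

```python
import math
import random

random.seed(1)

beta_mean = 1.0 / 0.150   # beta_f = 1/T_f in GeV^-1 for T_f = 150 MeV
half_width = 1.5          # Gamma of eq. (5), arbitrary illustrative value

# Sample beta_f from the symmetric triangular p.d.f. of eq. (5)
betas = [random.triangular(beta_mean - half_width,
                           beta_mean + half_width,
                           beta_mean)
         for _ in range(100_000)]

def spectrum_fixed(mt):
    """Boltzmann factor at the fixed (average) inverse temperature."""
    return math.exp(-beta_mean * mt)

def spectrum_averaged(mt):
    """Monte Carlo estimate of the beta_f-averaged Boltzmann factor,
    i.e. the averaging of eq. (3) for this toy spectrum."""
    return sum(math.exp(-b * mt) for b in betas) / len(betas)

for mt in (0.5, 1.0, 2.0, 3.0):
    ratio = spectrum_averaged(mt) / spectrum_fixed(mt)
    print(f"m_T = {mt} GeV: averaged/fixed = {ratio:.3f}")
```

By Jensen's inequality ⟨exp(−β_f m_T)⟩ ≥ exp(−β̄_f m_T) for a convex exponential, so the averaging always lies above the fixed-temperature curve, and the relative enhancement grows with m_T; after normalization this reshapes the spectrum relative to a single-temperature fit.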
In the most central classes, where only the RSFOM works, the determined temperature is of the order of 110-120 MeV, which is much lower than the estimate from yields, T_ch ≈ 156 MeV [9], but agrees qualitatively with the values of the kinetic freeze-out temperature given in [10], based on blast-wave model [21] fits. In the mid-central region both approaches, i.e. the RSFOM and the SFOM, give acceptable fits, see tables 1, 2 and table 3. This means precisely that neither model can be rejected there. Applying the Ockham razor principle, one should choose the simpler model in this case, that is the SFOM. One should also remember that the values of the freeze-out temperature presented in tables 1, 2 are average values (over the sample), whereas the values of T_f given in table 3 (the case with the non-random freeze-out temperature) and in table 4 (the same case but for Au-Au collisions at RHIC) are the temperatures of "an average event", one for each centrality class. One should notice here that such an "average event" might not have a real representative in the sample. Therefore the freeze-out temperatures from tables 1, 2 and table 3 are hardly comparable, and there is no reason they should be similar.

Conclusions
In summary, the chemical equilibrium Randomized Single-Freeze-Out Model has been applied successfully to the description of the production of identified hadrons measured at midrapidity in Pb-Pb collisions at √s_NN = 2.76 TeV [8]. This has been achieved with the help of a more general, direct interpretation of the data and the removal of the most weakly bound resonances from the hadron gas. Additionally, the chemical equilibrium SFOM without the above-mentioned two new assumptions was examined in this context. The correct description of the spectra measured in the mid-central classes of Pb-Pb collisions at √s_NN = 2.76 TeV and the failure of the SFOM in the two most central classes might suggest new phenomena occurring there. These phenomena seem to appear at two levels: in individual events, where the production of identified hadrons in each collision can be described within the chemical equilibrium SFOM but with the reduced content of the hadron gas, and in the whole sample, causing substantial differences among collisions belonging to the same centrality class. As a result, the two most central bins of Pb-Pb collisions at √s_NN = 2.76 TeV seem to be significantly inhomogeneous: in each event a thermal system is indeed created, with approximately the same size at its end, but with a different temperature. The distribution of the freeze-out temperature means here the distribution within a bin. A significant part of the freeze-out temperature fluctuations might be of non-thermal origin, which would represent a possible event-by-event variation of the freeze-out conditions within the bin. The final shape of the spectra is then the consequence of summing emissions from many different sources.
In conclusion, the centrality bins of Pb-Pb collisions at √s_NN = 2.76 TeV can be divided into 3 groups: the first, the 2 most central bins, where the freeze-out temperature fluctuates significantly; the second, the mid-central bins, where the situation looks similar to that at RHIC: the same freeze-out temperature, T_f ∼ 150 MeV (see fig. 7), only ρ_max greater by a factor of ∼1.5 (τ_f approximately the same), which makes the volume greater by a factor of ∼2.5; the third, the peripheral bins, where both approaches failed. And last but not least, a great deal of data in high energy physics are averages, so in any theoretical modeling of these data one should be aware of possible misinterpretations when an average is compared with a prediction for a single event.
This work was not supported by any financial grant. Most of the calculations have been carried out using resources provided by Wroclaw Centre for Networking and Supercomputing (http://wcss.pl), grant No. 268.

Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Author's comment: All data generated during this study are contained in this published article.]

Publisher's Note The EPJ Publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.