1 Introduction

Identified hadron ratios have been measured with all the LHC detectors, and the results have been compared with the available high-energy event generators [1–5]. The comparison is, in general, not very satisfactory.

In the present paper we use data from the ALICE experiment on \(pp\) interactions at \(\sqrt{s} = 7\) TeV [3–8] to test a description of particle creation based on a thermodynamical approach.

The standard statistical picture is known to work well in the soft, low-\({p_\bot }\) sector of the particle creation process, where an exponential fall of the transverse momentum distribution is observed. Hard inelastic scattering leads to quark jet fragmentation with power-law transverse momentum (transverse mass) distributions. Detailed studies of the measured charged-particle transverse momentum (transverse mass) distributions suggested some time ago that very good agreement with the invariant differential cross section over the whole transverse momentum range can be obtained with “an empirical formula inspired by QCD” from [9],

$$\begin{aligned} E\, { \mathrm{d}^3 \sigma \over \mathrm{d} p^3} ~=~ A\, \left( \frac{p_0}{{p_\bot }+p_0} \right) ^{n} \end{aligned}$$
(1)

(see, e.g., [10] for further discussion and references). It has been shown [11] that not only does the simple form of Eq. (1) fit well, but the whole theoretical model of particle creation behind it can be applied successfully to the highest-energy data on charged-particle transverse momenta [12].
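The two limiting behaviors of Eq. (1) can be made explicit in a short numerical sketch. This is not the authors' code; the parameter values below are purely illustrative assumptions, not fitted to any data set.

```python
import numpy as np

def inv_xsec(pt, A, p0, n):
    """QCD-inspired form of Eq. (1): E d^3(sigma)/dp^3 = A * (p0/(pt + p0))^n."""
    return A * (p0 / (pt + p0)) ** n

# Illustrative (assumed) parameter values:
A, p0, n = 1.0, 1.0, 8.0

pt = np.array([0.05, 0.5, 5.0, 50.0])
y = inv_xsec(pt, A, p0, n)

# For pt << p0 the form is approximately exponential, exp(-n*pt/p0),
# reproducing the soft sector; for pt >> p0 it becomes a pure power law,
# pt^(-n), as expected for hard scattering.
```

This single interpolating formula is what allows one fit to cover both the exponential low-\(p_\bot\) region and the power-law tail.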

The model parameters found in [12] define the occupation of phase space for a given charged-particle transverse momentum. If the picture is self-consistent, the same set of parameters should give correct yields of the different kinds of created particles. It is well known that the multiplicities of newly created heavy particles are described to some extent by the Boltzmann statistical model (e.g., [13, 14]). The Tsallis modification undoubtedly enhances the high-\({p_\bot }\) tail and hence the abundances of high-transverse-mass particles. This should lead to an overabundance of heavy particles. We would like to explore whether this effect can be suppressed in a consistent way and whether satisfactory results can be obtained.

2 Thermodynamical model

The thermodynamical picture of the particle creation process in hadronic collisions was the first, and quite successful, attempt at its description. The elaborated and complete theory was presented in a series of papers by Hagedorn (see [15–17] and references therein). The idea of the fireball, together with the proposition that “all fireballs are equal”, leads to definite predictions concerning the spectra of the produced particles.

One of the predictions was that the temperature of the “hadronic soup” (precisely defined) cannot exceed a universal constant \(T_0\) of the order of 160 MeV. This value comes not from adjusting parameters to multiparticle production data (e.g., transverse momenta), but from an examination of the elementary-particle mass spectrum.

The Hagedorn theory had been abandoned for some time when more sophisticated, jet- or QCD-based, ideas appeared [18]. One of the reasons was its failure to describe high transverse momenta. The temperature of the fireball is defined as the parameter in the classical Boltzmann exponential factor giving the probability weights for the average phase-space occupation numbers. This yields the (asymptotic) form of the transverse-momentum distribution of particles created from decaying fireballs. It was found that at high and very high interaction energies the predicted exponential fall does not agree with the observed high-\(p_\bot \) behavior. The success of the QCD-based description of hard processes gave deep insight into the physics involved, and the belief that it is the right theory of the strong interactions, making the thermodynamical approach look like an approximate, simple, and naive tool of limited applicability and thus of limited significance. On the other hand, the simplicity of the theory and the persistent lack of an effective QCD description of soft hadronization give hope that the fireball idea can be enriched and modified, and can become important again.

The Hagedorn idea was used again to describe the identified particle multiplicities in hadronization, both in \(e^+e^-\) annihilation and in hadronic collisions. The grand canonical formalism of Hagedorn was replaced in a series of papers by Becattini et al. [19] by the canonical one, which is very relevant for studies of small systems, like primarily created fireballs, for which the exact conservation of some quantum numbers seems important.

In general, the thermodynamics of the system is determined by the partition function which can be written as

$$\begin{aligned} \mathcal{Z}( Q^0) =\sum \limits _{ {Q}} \delta (Q-Q^0) \, \prod \limits _{j,k} \mathcal{P}_{jk}^{\nu _{jk}} \ , \end{aligned}$$
(2)

where \(\mathcal{P}_{jk}\) is the classical Boltzmann factor, \(j\) and \(k\) enumerate the particle types and momentum cells, \(Q^0\) is the initial fireball quantum number vector, \(Q\) is the respective vector of the particular state, and \(\nu _{jk}\) is the occupation number. Introducing the Fourier transform of \(\delta \) (and reducing the vector \(Q\) to three dimensions: charge, baryon number, and strangeness), Eq. (2) becomes

$$\begin{aligned} \mathcal{Z}(Q^0)&= {1 \over (2 \pi )^3}\, \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \mathrm{d}^3 \phi \mathrm{e}^{iQ^0\phi } \times \nonumber \\&\times \exp \left\{ \sum _{j=1}^{n_B} \,w_j \, \sum \limits _{k} \log ( 1- \mathcal{P}_{jk} \mathrm{e}^{-iq_j\phi })^{-1} \, \ + \ \right. \nonumber \\&+ \left. \ \sum \limits _{j=1}^{n_F} \,w_j \, \sum \limits _{k} \log ( 1+ \mathcal{P}_{jk} \mathrm{e}^{-iq_j\phi }) \,\right\} , \end{aligned}$$
(3)

where \(q_{j}\) is the quantum number vector of the particle \(j\) and \(w_j\) is the weight factor associated with particles of type \(j\). The first guess is that it should equal \((2 J_j + 1)\), counting spin states. However, this does not seem to be so simple (see, e.g., [20–22]), and other solutions, introducing factors responsible for some wave-function normalization which disfavor heavier states, were found to be preferred by measurements. We will discuss this point later on.

With Eq. (3) we are ready for detailed numerical calculations.
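Before the full calculation, the structure of the canonical projection in Eqs. (2)–(3) can be illustrated numerically in a deliberately reduced toy case, an assumption-laden sketch rather than the actual computation: one conserved charge, one particle/antiparticle pair with charge \(\pm 1\), and the Boltzmann limit, in which the sums over momentum cells collapse into a single-particle partition function \(z\).

```python
import numpy as np

# Toy canonical projection: Z(Q0) = (1/2pi) * Int_0^{2pi} dphi
#   exp(i*Q0*phi) * exp(z*exp(-i*phi) + z*exp(+i*phi)),
# i.e. the single-charge, Boltzmann-limit analogue of Eq. (3).
# The species content and the value of z are illustrative assumptions.
def Z_canonical(Q0, z, nphi=4096):
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    integrand = np.exp(1j * Q0 * phi + 2.0 * z * np.cos(phi))
    # mean over a uniform periodic grid approximates (1/2pi) * integral
    return integrand.mean().real

# Analytically this projection equals the modified Bessel function I_{Q0}(2z),
# so, e.g., Z(0, z) -> 1 and Z(1, z) -> z as z -> 0, and Z falls with |Q0|:
# enforcing a larger net charge costs probability, the essence of the
# canonical suppression discussed later for strange particles.
```

The same Fourier-projection trick, extended to three charges and quantum statistics, is what Eq. (3) performs.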

2.1 Average multiplicities

With the known partition function \(\mathcal{Z}\) the average characteristics of the system can be obtained in the usual way. For the average multiplicity we have

$$\begin{aligned} \langle n_j \rangle&= w_j\ { V \over (2\pi )^3 }\, \,{1 \over (2\pi )^3} \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \mathrm{d}^3 \phi \, \times \nonumber \\&\times \int \mathrm{d}^3 p \,[ \mathrm{e}^{E/T}\,\mathrm{e}^{i \,\mathbf{q}_j \phi } \pm 1]^{-1}, \end{aligned}$$
(4)

where the upper sign is for fermions and the lower is for bosons. Because the \(\mathrm{e}^{-E/T}\) factor is expected to be small (for all particles except pions), the \(\pm 1\) term can be neglected, and

$$\begin{aligned} \langle n_j \rangle ~ \approx ~ {\mathcal{Z}(\mathbf{Q}^0 -\mathbf{q}_j) \over \mathcal{Z}(\mathbf{Q}^0)} \,w_j { V \over (2\pi )^3 }\, \int \mathrm{d}^3 p \,\mathrm{e}^{-E/T}~. \end{aligned}$$
(5)
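The momentum integral in Eq. (5) has a standard closed form for a relativistic particle, \(\int \mathrm{d}^3p\, \mathrm{e}^{-E/T} = 4\pi m^2 T K_2(m/T)\) with \(E=\sqrt{p^2+m^2}\), which a short sketch can verify numerically. The masses and temperature below are illustrative values (in GeV), not the paper's fitted parameters.

```python
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

def boltzmann_integral(m, T):
    """Closed form of the momentum integral in Eq. (5):
    Int d^3p exp(-E/T) = 4*pi * m^2 * T * K_2(m/T)."""
    return 4.0 * np.pi * m**2 * T * kn(2, m / T)

# Assumed illustrative values: pion and proton masses, T = 150 MeV.
m_pi, m_p, T = 0.140, 0.938, 0.150

# Up to the weights w_j and the canonical Z-ratio in Eq. (5), a yield ratio
# reduces to a ratio of such integrals, e.g. proton over pion:
ratio = boltzmann_integral(m_p, T) / boltzmann_integral(m_pi, T)
```

The strong Boltzmann suppression of the heavier species in this ratio is the effect that the Tsallis tail enhancement, discussed next, partially undoes.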

The conventional Boltzmann–Gibbs description shown above can, in principle, be modified to describe systems of not-completely-free particles: the correlation “strength”, however defined, is introduced with the help of a new non-extensivity parameter and a new statistics, which in such a case also has to be non-extensive. In the limit of no correlations the new description approaches the Boltzmann form.

There are infinitely many “generalized” statistics which fulfill such requirements. We choose one which is simple and has a well-defined theoretical background. In the present paper we test the possibility, proposed by Tsallis [23], based on replacing the classical entropy definition,

$$\begin{aligned} S_\mathrm{BG}=-k \sum \limits _{i}^{W} \,\mathcal{P}_i\ln \mathcal{P}_i, \end{aligned}$$
(6)

by the new one,

$$\begin{aligned} S_{q}=k\, {1 \over q-1}\left( {1-\sum \limits _{i}^{W} \,\mathcal{P}_i^q }\right) , \end{aligned}$$
(7)

with a new parameter \(q\), called the non-extensivity parameter. This modification has been adopted in other physical applications (see, e.g., [24–26]).
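The relation between Eqs. (6) and (7), in particular the recovery of the Boltzmann–Gibbs form as \(q \to 1\), can be checked directly. The probability vector below is an arbitrary illustrative example.

```python
import numpy as np

def S_BG(p, k=1.0):
    """Boltzmann-Gibbs entropy of Eq. (6)."""
    p = np.asarray(p, dtype=float)
    return -k * np.sum(p * np.log(p))

def S_q(p, q, k=1.0):
    """Tsallis entropy of Eq. (7); reduces to S_BG in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

# Arbitrary normalized example distribution (an assumption for illustration):
probs = np.array([0.5, 0.3, 0.2])
```

For an equiprobable distribution over \(W\) states, Eq. (7) gives \(S_q = k\,(1 - W^{1-q})/(q-1)\), a convenient closed-form cross-check.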

Maximization of the entropy under the total-energy constraint leads to the probability of realization of the state \(i\) (with energy \(E_i\)) given by

$$\begin{aligned} \mathcal{P}_i={1\over \mathcal{Z}_q}\, \left[ 1-(1-q)\,\frac{E_i-E_0}{T_q}\right] ^{1/(1-q)}, \end{aligned}$$
(8)

where \(\mathcal{Z}_q\) is the normalization constant related to \(\mathcal{Z}(\mathbf{Q}^0)\) of Eq. (2), with the Boltzmann factors replaced by probabilities of the form of Eq. (8).

Equation (8) can be rewritten by introducing a new symbol, \(\mathrm{e}_q\), defined by \( \mathrm{e}_q^{x} = [ 1+(1-q)x]^{1/(1-q)} \):

$$\begin{aligned} \mathcal{P}_i~\sim ~ \mathrm{e}_q^{ -E_i/T_q} \end{aligned}$$
(9)

and the modified partition function can then be written in the form

$$\begin{aligned} \mathcal{Z}_q(\mathbf{Q})=\sum \limits _\mathrm{states} {w_j \over (2\pi )^3} \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \int \limits _{0}^{2\pi } \mathrm{d}^3 \phi \, \mathrm{e}_q^{-E/T} \, \mathrm{e}^{i (\mathbf{Q}^0-\mathbf{Q})\,\phi }. \end{aligned}$$
(10)

Equation (5) with this modification of the partition function gives the abundances of the initially created particles in the hadronization process described by the modified, non-extensive, statistics.
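The effect of the deformed exponential \(\mathrm{e}_q\) entering Eqs. (9) and (10) is easy to visualize in a few lines; for \(q>1\) it develops a power-law tail \(\sim E^{-1/(q-1)}\) that always lies above the Boltzmann weight. The temperature and \(q\) values below are taken from the text, the energy grid is an illustrative assumption.

```python
import numpy as np

def e_q(x, q):
    """Deformed exponential e_q^x = [1 + (1-q)*x]^{1/(1-q)}; -> exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

# T = 150 MeV and q = 1.12 as quoted in the text (values in GeV):
T, q = 0.150, 1.12
E = np.linspace(0.1, 5.0, 50)       # illustrative energy grid

w_tsallis = e_q(-E / T, q)          # Tsallis weight of Eq. (9), power-law tail
w_boltz = np.exp(-E / T)            # classical Boltzmann weight, exponential tail
```

Since \(\ln(1+z) \le z\), one has \(\mathrm{e}_q^{-E/T} \ge \mathrm{e}^{-E/T}\) for \(q>1\) at every \(E\), which is precisely the high-transverse-mass enhancement responsible for the heavy-particle overabundance discussed in the Introduction.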

3 Results

We have evaluated the \(\mathcal{Z}_q\) functions (and \(\mathcal{Z}=\mathcal{Z}_1\) for the standard Boltzmann thermodynamics) for a variety of values of the thermodynamical model parameters \(T\) and \(V\), and for a number of \(\mathbf{Q}\) values covering the production of over 100 hadrons with masses below 2 GeV/c\(^2\). All decays of short-lived particles were then taken into account.

Measurements of identified particle ratios, performed mostly by the ALICE Collaboration, give the opportunity to test the modified statistical model of particle flavor creation in the new, higher-energy region. The lower-energy jet hadronization results have been analyzed in a series of papers by Becattini et al. (see, e.g., [13, 14]). It has been shown that the micro-canonical Boltzmann description works well for \(e^+e^-\) from \(\sqrt{s}\approx \)10 GeV [27] to 91 GeV [13], and for \(pp\) and \(p \bar{p}\) interactions up to SPS energies, \(\sqrt{s}\approx \) 900 GeV [14].

The comparison of the results obtained with the Boltzmann statistics (non-extensivity parameter \(q=1\)) and the Tsallis statistics with \(q = 1.12\), the value obtained in Refs. [11, 12], is shown in Fig. 1. The enhancement of the exponential ’tail’ for the modified statistics is clearly visible. The non-extensivity parameter values adjusted recently to high-energy data [28, 29] are 1.1–1.15 (\({\sim }1.17\) in Ref. [10]). The effect of changing the non-extensivity parameter is shown in Fig. 2, where the difference between the Boltzmann and Tsallis statistics results is given for three values of \(q\): 1.10, 1.12, and 1.15. The biggest difference is seen for \(\Omega \). For lighter particles, even for \(\Xi \), the effect of a small change of \(q\) is not significant. We conclude that the relative particle multiplicities are not the appropriate observable for adjusting the non-extensivity parameter.

Fig. 1

Relative particle multiplicities, \(f = ( n_i/n_{ch})\), obtained for the Tsallis statistics (with \(T=150\) MeV, \(V=100\) fm\(^3\) and \(q=1.12\)) compared with the same ratios for the Boltzmann statistics (with \(T=150\) MeV, \(V=100\) fm\(^3\), and \(q=1\))

Fig. 2

Enhancement of the relative particle multiplicity obtained for Tsallis statistics (with \(T=150\) MeV, \(V=80\) fm\(^3\)) with respect to the standard Boltzmann model for the non-extensivity parameter values equal to 1.10, 1.12, and 1.15 (squares, circles, and triangles, respectively)

We have also tested the dependence of the particle multiplicities on the thermodynamical hadronization parameters \(T\) and \(V\). The effect seems almost negligible within the limits of the changes allowed by the data on transverse-momentum distributions and total multiplicities. However, the effects of the volume \(V\) have to be treated with care, because the spatial and temporal history of the hadronization process is not known exactly. The canonical picture is expected to incorporate the multichain idea (e.g., [19, 30]) of decomposing the ’hadronic soup’ into chains of independently hadronizing objects/fireballs. For the Boltzmann statistics, by the very definition of extensivity, the sum of many hadron sources is equivalent to one big source [27]. This is, in general, not the case for the Tsallis non-extensive statistics. However, the strength of the non-extensivity is still not large, and subdividing the hadronization volume does not change much the conclusions regarding the identified particle ratios. Another important point is the canonical treatment of small fireballs, which leads to the suppression of strange quark (and diquark, or strange diquark) production mentioned already in [15–17]. Additionally, the importance of the reaction volume in hadronic collisions in the canonical picture, especially for multistrange particles like \(\Omega \), is discussed extensively in [31]. Changes of the hadronization volume do not act strongly on the total multiplicities, and the possible small changes which we can study do not affect the particle ratios in a significant way. Detailed studies, however, are needed to answer all these questions.

We can say that the thermodynamical model predictions are, in a sense, very robust. They cannot be tuned to the measured ratios, at least not with reasonable changes of the hadronization parameters \(T\), \(V\), and \(q\). This situation is, on the other hand, very fortunate: the comparison with the experimental results can serve as an experimentum crucis for the model in general.

We show the results for particle ratios in comparison with the ALICE Collaboration data in Table 1. The thermodynamical parameter values (\(T\) and \(q\)) were taken from the literature, and \(V\) was adjusted to reproduce roughly the charged-particle multiplicities. We applied the simple counting of spin states for the weight factor, \(w_i=(2J_i+1)\), and a strangeness suppression factor \(\gamma _s = 0.5\) acting on the strange-quark content of each particle.

Table 1 Ratios of particle multiplicities calculated for Boltzmann and Tsallis statistics in comparison with results from the ALICE Collaboration

Problems are evident when comparing the ALICE results with the predictions of the listed hadronization models. Some ratios, especially those involving strange and multistrange hadrons, look unexplainable. As discussed above (Fig. 2), formal fits or readjustments of the model parameters (\(T\), \(V\), and even \(q\)) would not help. It should be stressed again that the model parameters are tied to other interaction properties measured extensively at the LHC, e.g., the transverse-momentum distributions and total multiplicities, so their values are rather fixed. Any significant change of \(V\) or \(T\) (or of \(T\) together with \(q\)) would spoil the fits to the charged-particle inclusive spectra, which are measured with very high accuracy over a large range of transverse momentum.

There is, however, in Eqs. (4) and (5) the weight factor \(w_j\), which gives some freedom, and some hope of getting closer to the data. The simple and obvious form \((2J+1)\) is, in general, modified, since an experimental suppression of \(K\) mesons with respect to non-strange mesons was found long ago. The general statement is that the strange phase space is not fully available for particle production, which can be realized by multiplying the partition function by a special factor for each strange valence quark in the particle in question.

The strangeness suppression factor is also one of the basic parameters of the jet fragmentation model introduced by Feynman and developed finally by the Lund group [20–22]. In the Lund picture, new hadrons appear when the color-field string stretched between quarks moving apart breaks through the production of a new quark pair (sometimes a diquark pair). If there is enough energy left, further breaks may occur, and eventually only on-mass-shell hadrons remain. The creation of a new quark–antiquark pair in the Lund model is a kind of quantum tunneling process, so heavy-quark creation is expected to be suppressed; it is usually assumed that \(u : d : s \sim 1 : 1 : 0.3\) [32]. Additionally, the \(w_j\) weight factor is related to the spin states of the newly created hadrons, which for mesons are the pseudoscalar and vector states. The suppression here is not defined within the Lund fragmentation model. Counting the spin states gives a \(1:3\) ratio, but in the JETSET model this ratio eventually comes close to \(1:1\), according to the ’tunnelling normalisation’.

The situation with baryon creation in the Lund model is much more complicated. The tunneling mechanism is adopted here as well: a string can break via the diquark mode, and a baryon is then formed by the combination of a quark and a diquark. Taking into account the pop-corn mechanism of diquark breakups and the lack of general rules, the number of parameters to be adjusted becomes comparable with the number of measured ratios available for the adjustment. At the moment, the number of parameters describing baryon production is higher than the number of baryons measured with good accuracy in the LHC experiments [20–22].

The Lund model, and the JETSET hadronization generator in particular, are also used by the PHOJET program package [33–35], employed recently for theoretical examination of and comparison with the LHC data. Some parameters in PHOJET differ from the default Lund model values.

We first discuss the possibility of introducing the strangeness suppression factor \(\gamma _s\) via

$$\begin{aligned} w_j = (2 J\,+\,1) \times ( \gamma _s)^{N_j}. \end{aligned}$$
(11)

Here \(N_j\) is the ‘degree of strangeness’, which is, in fact, not uniquely defined. It should be related to the quark contents of the particle \(j\). Three possibilities are rather natural:

$$\begin{aligned} N_j=\left\{ { \begin{array}{ll} S &{}\mathrm{strangeness~of~the~particle~of~type~}j,\\ n_s &{}\mathrm{number~of~strange~(or~antistrange)~valence~quarks~of~}j, \\ n_{s \bar{s}} &{}\mathrm{number~of~}s \bar{s}~\mathrm{pairs~involved~in~creating~the~particle~}j. \\ \end{array} } \right. \nonumber \\ \end{aligned}$$
(12)

The difference can be seen by comparing the \(K\) and \(\phi \) weights. For the direct \(K\) the weight is \(\gamma _s\) for all three possibilities in Eq. (12). For the direct \(\phi \) it is 1, \(\gamma _s^2\), and \(\gamma _s\) for the first, second, and third possibility, respectively. The actual situation is more complicated because of the decays of heavy resonances; to see the eventual results, complete calculations have to be performed.
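The \(K\) versus \(\phi\) comparison above can be sketched directly from Eqs. (11)–(12). The helper below is hypothetical (not from the paper), and the quantum-number assignments are standard: \(K\) has \(J=0\), strangeness \(\pm 1\), one strange valence quark; \(\phi = s\bar{s}\) has \(J=1\), strangeness 0, two strange valence quarks, and requires one \(s\bar{s}\) pair.

```python
# Sketch of Eq. (11): w_j = (2J+1) * gamma_s^{N_j}, with N_j chosen according
# to one of the three definitions in Eq. (12).
def weight(J, S, n_s, n_ss, gamma_s, mode):
    """mode: 'S' = |strangeness|, 'n_s' = strange valence quarks,
    'n_ss' = number of s-sbar pairs needed to create the particle."""
    N = {"S": abs(S), "n_s": n_s, "n_ss": n_ss}[mode]
    return (2 * J + 1) * gamma_s ** N

gs = 0.5  # the gamma_s value used for Table 1

# K (J=0): strangeness +1, one strange valence quark, one s-sbar pair
wK = {m: weight(0, +1, 1, 1, gs, m) for m in ("S", "n_s", "n_ss")}
# phi (J=1): strangeness 0, two strange valence quarks, one s-sbar pair
wphi = {m: weight(1, 0, 2, 1, gs, m) for m in ("S", "n_s", "n_ss")}
```

The three definitions agree for the kaon but differ sharply for the \(\phi\) (spin factor 3 multiplied by 1, \(\gamma_s^2\), or \(\gamma_s\)), which is why the \(\phi\) yield discriminates between them.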

Some examples are given in Table 2. We show there only the ratios which are sensitive to the choice of \(N_j\) in Eq. (12).

Table 2 Some ratios of particle multiplicities calculated with \(T=150\ \mathrm{MeV}\), \(V=80\ \mathrm{fm}^3\), \(q=1.12\), and the different strangeness-suppression definitions of Eq. (12) (strangeness suppression factor \(\gamma _s = 0.5\); description as in Table 1)

The calculated ratios in Table 2 show that the choice \(N_j=n_{s}\) works well for strange mesons, and the simple \((2J+1)\) factor gives values not far from the measurements. The multiplicity of \(\phi \) is crucial here, as expected.

Further model ‘fine tuning’ can involve adjusting the value of the strangeness suppression factor \(\gamma _s\). We did not, however, go very far: we checked three values used in the literature: 0.5, originally proposed by Feynman in the jet fragmentation model [18] and still in use [19]; 2/3, used successfully by Becattini [36, 37] for the \(e^+e^-\) data and the \(p \bar{p}\) results from the SPS; and \(\gamma _s=1\), the limit of no strangeness suppression. Some results for the \(n_{s}\) choice in Eq. (12) and spin-state counting, \(w_i=(2J_i+1)\), are shown in Table 3 (\(T=150\ \mathrm{MeV}\), \(V=80\ \mathrm{fm}^3\), \(q=1.12\)).

Table 3 Some ratios of particle multiplicities calculated for different values of the strangeness suppression factor, compared with the measurement results (description as in Table 1)

The agreement for \(\gamma _s=2/3\) seems slightly better than for 1/2; the no-suppression case (\(\gamma _s=1\)) is in general the worst.

Discrepancies still exist in the ratios involving the baryons \(\Omega \) and \(\Xi \). As has been said, there is a great degree of freedom to modify the created baryon multiplicities. One possibility, which we used, is the diquark suppression factor \(\gamma _{qq}\); another factor, \(\gamma _{ss}\), related to the creation of a doubly strange diquark, was introduced specially for the \(\Omega \) baryons.

Our final results on the ratios of particle multiplicities, calculated with a strangeness suppression factor \(\gamma _s=2/3\) and extra diquark suppressions \(\gamma _{qq}=\gamma _{ss}=1/2\), are shown in Table 4 (\(T=150\ \mathrm{MeV}\), \(V=80\ \mathrm{fm}^3\), and \(q=1.12\)).

Table 4 Ratios of particle multiplicities calculated with the strangeness suppression factor \(\gamma _s=0.66\) and extra diquark suppressions \(\gamma _{qq}=\gamma _{ss}=0.5\), compared with the measurement results (description as in Table 1)

Because of the relatively limited amount of data, we do not wish, at the moment, to go further in ’tuning’ the suppression parameters (\(\gamma _s\), \(\gamma _{qq}\), and \(\gamma _{ss}\)). We wish only to show the general possibility of improving the data description within the thermodynamical model by introducing diquark suppression factors. The additional suppression of heavy strange baryons required by the modified thermodynamical model can be naturally realized this way.

With all the modifications described above, the \(\chi ^2\) for the values listed in Table 4 drops from the enormous values (thousands) obtained for the predictions of Table 1 to about 30. This value has a chance probability of \(p=0.0004\), equivalent to a ‘3.5 \(\sigma \)’ deviation. It is, in fact, still a disagreement, but it gives hope of being reduced further with more sophisticated calculations and model improvements.
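The quoted equivalence between the chance probability and the significance can be reproduced in one line, assuming the common two-sided Gaussian convention; since the number of degrees of freedom behind \(\chi^2 \approx 30\) is not stated in the text, only the \(p \to \sigma\) conversion is sketched.

```python
from scipy.stats import norm

# Two-sided convention (an assumption): a deviation of s sigma corresponds to
# p = 2 * P(Z > s) for a standard normal Z, so the equivalent significance is
# the inverse survival function evaluated at p/2.
p = 0.0004
sigma_two_sided = norm.isf(p / 2.0)   # comes out near 3.5
```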

The ALICE Collaboration has also published particle ratios as a function of the particle transverse momentum. Since the modification of the statistics of the multiparticle production process was developed primarily for the transverse-momentum description, this kind of data is valuable for verifying the model. The comparison of our final model predictions with the data is shown in Fig. 3. The solid lines represent the predictions of the discussed modified statistical model with the final suppressions presented in Table 4, \(\gamma _s = 0.66\) and both diquark suppression factors \(\gamma _{qq}=\gamma _{ss}=0.5\). The Boltzmann statistics results (dotted lines) are also given for comparison. As seen in Fig. 3, the standard statistics does not work very well for the ALICE data shown, nor does the modified Tsallis statistics without the additional diquark suppression (dashed lines). Only the introduction of the diquark suppression effect, with our chosen first-guess values of the suppression factors, reproduces the data better. This is of course the effect of adding two new parameters and adjusting the model to match the points, but whether a similar modification of the standard Boltzmann picture would give a similar result remains an open question.

Fig. 3

Identified particle ratios as a function of the transverse momentum (transverse mass for \(\Omega / \Xi \)) for our final strangeness and diquark suppressions (solid lines), compared with the ALICE data from [3–5, 8]. The ratios without diquark (and strange diquark) suppression are shown by the dashed lines. The Boltzmann statistics results are also given as dotted lines

4 Conclusions

The parameters of the modified thermodynamical model, found by analyzing the transverse-momentum distributions measured at 7 TeV, were used without any readjustment, together with the standard strangeness suppression factor \(\gamma _s\) of about 2/3 and additional suppressions of diquark and strange-diquark production, to calculate the identified particle multiplicities. We have shown that the introduction of non-extensive statistics into the thermodynamical theory of multiparticle production in hadronic collisions opens an interesting possibility for the description of the hadronization process.