Nuclear and quark matter at high temperature

We review important ideas on the description of nuclear and quark matter on the basis of high-temperature field theory concepts, like resummation, dimensional reduction, interaction scale separation and spectral function modification in media. Statistical and thermodynamical concepts are viewed in the light of these methods, concentrating on the (partially still open) problems of the hadronization process.


Introduction
At the roots of thermal field theory, back in the late 1960s, field theory calculations and ideas applied to nuclear physics were considered as "exotic" as the idea of using heavy atomic nuclei as projectile and target in high-energy accelerator experiments. In those heroic times the most prominent idea was to experimentally produce and study very hot nuclear matter, whatever that might be [1,2].
Parallel to the achievements of QCD and the Standard Model of particle physics, the idea of a phase transition from "normal" nuclear and hadronic matter to a quark-gluon plasma (QGP) emerged [3][4][5]. Transgressing the ideas of nuclear democracy [6,7] and of an infinite tower of hadronic resonances not allowing the Hagedorn temperature to be exceeded [8], speculations based on the MIT bag model of hadrons about a phase transition to a plasma of free color charges, a QGP, became popular [9,10]. This, and the steadily progressing nuclear fluid treatment [11,12] at high bombarding energies in the range of 1 GeV/nucleon and upwards in fixed-target experiments, let hydrodynamical models flourish. Since hydrodynamics relies only on the local conservation of energy, momentum and possibly a few more Noether currents, the only input needed to carry out such calculations is an equation of state: a connection between local pressure and energy density. In this way it provides a flexible framework for testing underlying theories predicting various equations of state [13].

Contribution to the Topical Issue "Exotic phenomena at the interface of nuclear and particle physics" edited by Balasubramanian Ananthanarayan, Bastian Kubis, Rishi Sharma.
Supplementary material in the form of a pdf file available from the Journal web page at http://dx.doi.org/10.1140/epja/i2017-12235-4
a e-mail: Biro.Tamas@wigner.mta.hu
b e-mail: jakovac@caesar.elte.hu
c e-mail: schram@phys.unideb.hu
In the following decades it was gradually revealed that neither the QGP nor the transition process is as simple as originally proposed. A remnant of the confining forces, a long-range correlation between colored particles, in some respects reminiscent of (pre-)hadrons, in other respects not particle-like at all, pollutes the naive picture of a free QGP [14,15]. More devastatingly, the non-perturbative infrared effects occur not only at low temperature, but at low relative momentum between any pair of particles at all temperatures [16,17]. Also the color deconfinement phase transition, at the beginning surmised to be of first order with a huge latent heat density, proved to be a rather continuous transition with no uniquely fixed transition temperature according to more recent lattice QCD calculations with dynamical light quarks [18,19]. The "exact" transition temperature does not exist; only the position of a maximum in one or another susceptibility can be obtained. While a general lowering trend in the deconfinement temperature, T_c, can be observed, from 175 MeV through 165 MeV for a long time and recently down to 158 MeV, the width of the transition zone is itself about 15-35 MeV. Since the transition is not of first or second order at vanishing baryochemical potential, the "correct and only" order parameter cannot be identified [20][21][22][23][24].
There are furthermore doubts about the applicability of hydrodynamics [25] at the very early stage of heavy-ion collisions and at the final hadronization process, when the quarks and gluons suddenly form hadrons. The details of the latter process are still unresolved; phenomenology-based fragmentation functions and modeling-level string- or rope-decay scenarios are in use [26][27][28][29][30]. For the early phase, when nevertheless most of the final-state entropy is supposed to be produced already, pictures utilizing the concept of coherent, nearly classical color fields dominate, describing color rope formation and, more recently, a color glass condensate (CGC) [31][32][33][34].
In this concise review we shall concentrate on selected issues related to applications of models and achievements of high-temperature field theory to nuclear physics, in particular to relativistic heavy-ion reactions. After a short review of the properties of quark matter we deal with basic concepts of the hierarchy of scales and dimensional reduction. Then, considering the structure of the QGP, we review the spectral function approach and its main consequences for the medium properties, including the shear viscosity. This is followed by a review of special, nonlinear coherent states, showing a possibility to produce negative binomial distributions of particle number in quantum states. Finally, a short conclusion section rounds off this brief review with indications of some open problems in the field.

Properties of quark matter
Our picture of the properties of quark matter, and the very definition of quark matter and quark-gluon plasma (QGP), has undergone some changes in the past decades. Starting with the picture of the plasma state as a "fourth phase" beyond solid, liquid and gas, and responding to the idea of local liberation of color charges, by now almost all quark- or QCD-level descriptions, including those of a hadronic resonance gas or string theory fitting numerical lattice QCD equation-of-state results, are considered as dealing with "quark matter". We have learned step by step (and by doing more and more precise ab initio numerical experiments on bigger and bigger computer farms) that the QGP has a very rich interaction structure. Around the transition to color deconfinement in terms of temperature and chemical potentials, in a grand canonical approach expanded in terms of the chemical potential to temperature ratio, μ/T, the real state of matter is far from having free color charges, quarks and gluons, in a classical ideal-gas-based plasma. Not only is the color freedom merely "asymptotic", being expressed only between pairs having relative momenta sufficiently larger than a characteristic scale, whose estimates range from 3T_c to 6-10T_c, but hadron-like correlations also survive well into the temperature zone of T_c-4T_c according to modern lattice data.
Beyond heavy mesons, like the c̄c or b̄b systems, also new complexes, exotic on the hadron level, like glueballs, dibaryons, pentaquarks, etc., have been considered as playing a crucial role in forming the rich structure of the realistic QGP near and above T_c. In particular the 1/T^2 fat tail of the interaction measure, (ε − 3p)/T^4, at high temperature (T ∈ [T_c, 4T_c]), which is so luring to interpret as a mass term, m^2 T^2/T^4 ∼ 1/T^2, has been given special thought by several authors [17,[35][36][37][38]]. Also the question of the critical endpoint in the T-μ_B plane, signaling the border between a first-order color deconfinement phase transition and a continuous crossover between the hadronic resonance gas and the QGP, has been studied in great detail, relating different susceptibilities to the quality of the underlying "freed" color degrees of freedom [39][40][41][42][43][44][45][46][47]. Finally, the problem of a quarkyonic phase, the expected structure of quark matter at low temperature but high baryon density, and the coincidence or non-coincidence of the color deconfinement transition with the chiral symmetry restoring phase transition have long been debated.
Beyond the plethora of more or less arbitrary (but often analytically tractable) models of the QGP, the lattice-regularized approach of solving QCD non-perturbatively by numerical strategies proved to be the one which has received the most credit and trust in the community. Although it also has its limitations, e.g. it cannot deal with dynamical processes on the quantum level in real time, for the statistical-thermodynamical approach it delivers very useful insights into a strongly coupled, complex structure of matter, nowadays also called an sQGP. It has also helped a lot to identify key field configurations, like magnetic monopoles and instantons, which may characterize the main physical difference between the confined (hadronic) and deconfined (QGP) states of matter.
However, in particular the perturbative-QCD-dominated regime is hard to reach by numerical simulation. Although by some tricky methods quite a few authors [48] have squeezed out results at temperatures as high as 10T_c-100T_c, the real perturbative behavior, also approached by traditional perturbative QCD (pQCD), sets in only at unrealistically high temperatures. Certainly one of the problems is that, thinking in terms of temperature, T ≈ T_c represents an average energy per degree of freedom, while in an accelerator experiment bringing heavy ions to collide the spread of the relative pair momenta is of the order of several dozen or even hundreds of T_c. Therefore any single approach can capture only a part of the true behavior of the physical QGP, and our complete picture has to be constructed from the mosaic pieces we have puzzled out so far.
High-temperature field theory, based on resummation and renormalization techniques starting from the analytic, perturbative approach, is a very special theoretical tool for obtaining a more intuitive picture of the sQGP than analyzing lattice QCD results alone. Eventually, a comparison of correlation functions and density matrix elements obtained in both ways may tell us new, hitherto unheard stories about the "real nature" of quark matter.
Finally, it can be enlightening to review briefly the thermodynamics of ideal gases polluted with objects having fewer than 3-dimensional kinetic degrees of freedom, but carrying strong and possibly long-ranged correlations. The most famous such objects are strings and ropes; they feature as quasi-1-dimensional objects inside the plasma. The free energy density of an ideal gas is then additively modified, besides the trivial f_id contribution, by an energy contribution reflecting the average string length, ∼ n^{-1/3}, times the string tension, κ. Here we present the simplest, most straightforward implementation of this idea; more details can be taken from [49][50][51][52]. The non-relativistic ideal, non-equilibrium chemical potential in the Boltzmann approximation is given by

μ_id(n, T) = T ln( n / n_eq,id(T) ).

The ideal gas free energy density contribution is its integral over the density n, resulting in

f_id(n, T) = T n [ ln( n / n_eq,id(T) ) − 1 ].

The total free energy density is given as

f(n, T) = f_id(n, T) + κ n^{2/3},

leading to a non-equilibrium chemical potential

μ = ∂f/∂n = T ln( n / n_eq,id(T) ) + (2/3) κ n^{-1/3}.

This sum of a rising and a falling function of the density, n, can be equal to a foreseen constant (in this simple example zero) only above a critical temperature. Below that temperature the plasma with strings would not reach any finite equilibrium density; the system must disintegrate into disconnected objects, e.g. into hadrons. In terms of the scaled quantity x = ( n / n_eq,id(T) )^{1/3} the non-equilibrium chemical potential is described by a function, μ = T g(x). The key function corresponding to our above model is given by (see fig. 1)

g(x) = 3 ( ln x + λ/x ),   with   λ = (2/9) κ / ( T n_eq,id^{1/3}(T) ).
This function has its minimum at x_m = λ, where g′(x_m) = 0. The condition for having a stable equilibrium density for the QGP then follows from g(x_m) = 3(ln λ + 1) ≤ 0 and reads as

λ = (2/9) κ / ( T n_eq,id^{1/3}(T) ) ≤ 1/e.

Interestingly enough, in the Boltzmann approximation, where n_eq,id(T) = T^3/π^2 for each degree of freedom, and counting with the traditional 37 effective light degrees of freedom for a QGP, we obtain the result that

T_c ≥ √(2e/9) (π^2/37)^{1/6} √κ ≈ 0.62 √κ.

This result is near to the one obtained from early studies of the static quark-antiquark potential for the relation between the string tension and the color deconfinement temperature in pure lattice gauge theories [53][54][55][56]. This very simple-minded model can also be extended to finite baryochemical potentials, and performs appreciably well [51].
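The stability condition above can be checked numerically. The following sketch is illustrative only: the value √κ = 420 MeV is a typical lattice-inspired choice, not a number taken from the text.

```python
import math

def lambda_parameter(kappa, T, dof=37.0):
    """Scaled string-tension parameter lambda = (2/9) kappa / (T n_eq^(1/3)),
    with the Boltzmann ideal density n_eq = dof * T^3 / pi^2 (units: MeV)."""
    n_eq = dof * T**3 / math.pi**2
    return (2.0 / 9.0) * kappa / (T * n_eq**(1.0 / 3.0))

def critical_temperature(kappa, dof=37.0):
    """Lowest temperature with lambda <= 1/e, i.e. a stable QGP density."""
    # lambda = (2/9) kappa / [(dof/pi^2)^(1/3) T^2]; solve lambda = 1/e for T
    coeff = (2.0 / 9.0) / (dof / math.pi**2)**(1.0 / 3.0)
    return math.sqrt(coeff * math.e * kappa)

# Illustrative string tension sqrt(kappa) = 420 MeV:
kappa = 420.0**2  # MeV^2
Tc = critical_temperature(kappa)
print(f"T_c = {Tc:.0f} MeV, T_c/sqrt(kappa) = {Tc / math.sqrt(kappa):.2f}")
# T_c = 262 MeV, T_c/sqrt(kappa) = 0.62
```

The resulting ratio T_c/√κ ≈ 0.62 is indeed close to the pure-gauge lattice relation mentioned in the text.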

High-T effective field theory
It has long been known, already from the perturbative QCD treatment of the quark-gluon plasma (QGP), that interactions play a decisive role, and that the description of the equation of state at high temperature cannot be based solely on the model of an ideal gas of bare quarks and gluons. It is all the more interesting that it can be, and for a long time was, based on an ideal gas picture of quasiparticles, featuring the same number of degrees of freedom as the colored elementary quarks and gluons do. The most prominent effect of the interaction is concentrated in effective masses and their recursive effects on the pressure, energy density and entropy density at a given temperature. First experience with nontrivial problems in the non-interacting quasiparticle treatment arose from the study of the propagation of oscillatory excitations, so-called plasmons, in the hot QGP: original calculations of the gluon damping coefficient, which determines the speed of thermalization of a QGP, seemed to depend on the gauge-fixing choice. Even its sign was disputed in the beginning [57][58][59][60].
The solution was found by Pisarski and Braaten with a resummation procedure of the so-called hard thermal loops (HTLs) [61][62][63][64][65][66]. The basis of this approach is a division of elementary quanta according to their momenta: "hard" are the hot thermal ones (k ∼ T) and "soft" are those at momentum scales characteristic of the interaction (k ∼ gT to leading order). Infinitely many Feynman graphs are grouped together so that the damping rate and the effective mass (the self-energy in the infrared limit) can be calculated with methods familiar from perturbative QCD. At high temperature the expansion in the coupling strength, g, and the expansion in the number of loops in Feynman diagrams (powers of ℏ) are no longer equivalent.
This, albeit a big step forward, does not alone solve all the problems. Most prominently the static magnetic gluon mass, of order g^2 T, occurs at a "supersoft" scale and cannot be generated by HTL resummation techniques alone. One considers, e.g., a dilute magnetic monopole gas, whose density is proportional to n ∼ m^3 ∼ (g^2 T)^3, making a contribution to pressure and energy density at the level of p ∼ nT ∼ g^6 T^4. In the perturbative QCD approach this term is related to an infrared divergence [67,68], and as such it is independent of UV renormalization schemes. The magnetic gluon mass of order g^2 T seems to be of genuinely non-perturbative origin [69,70]. It plays a role also in the calculation of other physically relevant quantities, like the shear viscosity. Lattice QCD calculations, on the other hand, obtained this static magnetic gluon mass via observing the string tension of the reduced-dimensional theory for space-space-like Wilson loops, as well as by hunting for magnetic-monopole-looking configurations during the Monte Carlo integration [71][72][73][74][75][76].
Basic formulas of high-temperature field theory make it possible to obtain order-of-magnitude estimates by assuming different dominant gluon field configurations, which contribute to the Euclidean path integral weighting the factors exp(−S/ℏ) with the action

S = ∫_0^{βℏ} dτ ∫ d^3x (1/2) ( E_ia^2 + B_ia^2 ).

Here the chromoelectric field is related to the vector potential via the Euclidean time derivative, E_ia = −∂A_ia/∂τ. The quantum theoretical path integral,

Z = ∫ DA exp(−S/ℏ),

is carried out over (with their gauge equivalents) τ-periodic A_ia fields with period βℏ, and can therefore be reduced to a 3-dimensional partition sum at high temperature (small βℏ):

Z ≈ ∫ DA exp(−βH[A]),   with   H[A] = ∫ d^3x (1/2) ( E_ia^2 + B_ia^2 ).

In obtaining this result one assumes a constant H(τ) in the narrow interval (0, βℏ). This is relevant in the study of the infrared behavior of the full, interacting theory.
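The hierarchy of momentum scales behind this reduction (hard ∼ T, soft ∼ gT, supersoft ∼ g^2 T) can be tabulated numerically. In the sketch below g_3^2 = g^2 T is the standard leading-order dimensionful coupling of the reduced 3d theory; the α_s value is an illustrative choice, not taken from the text.

```python
import math

def thermal_scales(T_MeV, alpha_s):
    """Characteristic momentum scales of a hot gauge plasma and the
    dimensionful coupling g3^2 = g^2 * T of the reduced 3d theory."""
    g = math.sqrt(4.0 * math.pi * alpha_s)
    return {
        "hard (k ~ T)": T_MeV,
        "soft (k ~ gT)": g * T_MeV,
        "supersoft (k ~ g^2 T)": g**2 * T_MeV,
        "3d coupling g3^2 = g^2 T": g**2 * T_MeV,
    }

for name, value in thermal_scales(400.0, 0.3).items():
    print(f"{name:26s} {value:8.0f} MeV")
```

At a realistic coupling (α_s ≈ 0.3, i.e. g ≈ 1.9) the "supersoft" scale is numerically not below the hard one at all, illustrating why the perturbative hierarchy becomes reliable only at very high temperatures, as remarked earlier.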
For the sake of simplicity let us consider pure Yang-Mills theory (i.e. QCD without quarks) for a while. The path integral trace is over vector potential configurations; these can be rescaled by the interaction strength: the gA → A transformation leads to an effective, reduced 3-dimensional action with an effective coupling

g_3^2 = g^2 T,

of the dimension of the static magnetic mass, revealing the sought finite-temperature partition sum as

Z ≈ ∫ DA exp( −(1/g_3^2) ∫ d^3x (1/2) B_ia^2 ).

Since this formula does not contain the Planck constant any more, we may confirm that the chromo-magnetostatic features of the QGP can be estimated by purely classical field theory means. At the same time they are genuinely non-perturbative. In the following we review a few assumed gluon field configurations and investigate the corresponding mass and density scales of gluons, in the original setting, before rescaling the vector potential with g. As a starting point we have to relate the magnitudes of the vector potential and of the chromoelectric field. We do this remembering that they are represented by canonically conjugate operators, satisfying

[ A_ia(x), E_jb(y) ] = iℏ δ_ij δ_ab δ^3(x − y).

Looking for quantum states possibly near to classical fields, one singles out coherent states, where the Heisenberg uncertainty between the canonical operators is minimal. Henceforth we use the intuitive estimate

E A ∼ ℏ/L^3,

assuming a quantization box of length L. We classify the gluon field configurations according to the magnitude of the vector potential and distinguish the following three fiducial classes: 1) The vector potential is large, of classical order (independent of ℏ): A ∼ 1/(gL). In this case E ∼ gℏ/L^2 and the magnetic field strength becomes B ∼ A/L + gA^2 ∼ 1/(gL^2), also classical. It receives Abelian and non-Abelian contributions of equal magnitude. The field energy,

H ∼ ( E^2 + B^2 ) L^3 ∼ 1/(g^2 L),

is also classical and dominated by the magnetic contribution for g^2 ≪ 1. Equating this value with the thermal gluon energy, H ∼ T, we obtain the relation 1/L ∼ g^2 T, i.e. the supersoft magnetic scale determines these configurations.
The gluon density is estimated as n ∼ 1/L^3 ∼ (g^2 T)^3, and the magnetic screening mass, the gluon self-energy in the infrared limit, is estimated from m^2 A^2 ∼ g^2 A^4:

m^2 ∼ g^2 A^2 ∼ 1/L^2 ∼ (g^2 T)^2.

This tour de force of estimates ends up with the mass m ∼ g^2 T.
2) The vector potential and the electric field strength share the quantum order, but they are independent of the coupling, g. In this case one typically deals with

A ∼ √ℏ/L,   E ∼ √ℏ/L^2.

The dominant chromomagnetic field, B ∼ A/L ∼ √ℏ/L^2, is of the same magnitude as the chromoelectric one. This describes a thermal state with equipartition and the energy density

( E^2 + B^2 ) ∼ ℏ/L^4.

The characteristic scale from this is obtained as the thermal wavelength, L ∼ ℏ/T, and the effective screening mass becomes

m^2 ∼ g^2 A^2 ∼ g^2 ℏ/L^2 ∼ g^2 T^2/ℏ.

This Debye screening mass is of the order m ∼ gT/√ℏ. 3) In principle a third class of configurations exists, dominated by the classical chromoelectric field at the expense of a vector potential of highly quantum order: E ∼ 1/(gL^2) and A ∼ gℏ/L. Physically this corresponds to the string picture and implies E ≫ A/L for g^2 ≪ 1. The Abelian part of the chromomagnetic field, B_Abel ∼ A/L ∼ gℏ/L^2 ∼ g^2 ℏ E, is then smaller than the chromoelectric field, while the non-Abelian contribution, B_non-Abel ∼ gA^2 ∼ g^3 ℏ^2/L^2, is even smaller, negligible in the semiclassical weak-coupling approach. It is interesting that the thermal energy, dominated by the classical chromoelectric field, H ∼ E^2 L^3 ∼ 1/(g^2 L), again delivers a characteristic length scale of L ∼ 1/(g^2 T). The screening mass effect, however, is in this case very small and of highly quantum nature:

m^2 ∼ g^2 A^2 ∼ g^4 ℏ^2/L^2,

delivering at the end a mass scale of m ∼ g^2 ℏ · g^2 T.
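Setting ℏ = 1, the three classes give the following screening-mass estimates; the function below merely tabulates the scalings derived above (the numerical T and g are illustrative choices).

```python
def screening_masses(T, g):
    """Order-of-magnitude screening masses for the three fiducial classes
    of gluon field configurations (hbar = 1 units):
      class 1: classical vector potential  -> magnetic scale, m ~ g^2 T
      class 2: thermal equipartition       -> Debye scale,    m ~ g T
      class 3: classical chromoelectric field (string-like) -> m ~ g^4 T
    """
    return {
        "class 1 (classical A, magnetic)": g**2 * T,
        "class 2 (thermal, Debye)": g * T,
        "class 3 (classical E, string-like)": g**4 * T,
    }

for name, m in screening_masses(300.0, 0.5).items():
    print(f"{name:36s} m ~ {m:7.2f} MeV")
```

For any weak coupling g < 1 the ordering gT > g^2 T > g^4 T of the three classes is manifest.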
Considering quasiparticles, their mass is defined by the dispersion relation reflecting the Schwinger-Dyson equation with a general, complex self-energy,

ω^2 = k^2 + Σ(ω, k).

The interaction with the medium during propagation is included in the general self-energy term, Σ. The resolution of this equation for ω can also deliver complex values; the imaginary part of the frequency signals the so-called plasmon damping. In the infrared limit, |k| → 0, the general frequency is ω + iγ, satisfying

(ω + iγ)^2 = Σ(ω + iγ, 0).

In the weak damping limit, γ ≪ ω, the imaginary part of this equation defines

γ = Im Σ(ω, 0) / (2ω),

and the real part constitutes a mass gap equation,

ω^2 = Re Σ(ω, 0).

To leading order in the perturbative expansion Re Σ(ω, 0) = (m/ℏ)^2, with m being the mass scale derived above. With ω ∼ T/ℏ taken as hard thermal, the weak damping constant becomes γ ∼ m^2/(ℏT). This is perturbatively the largest in the second class, γ_2 ∼ g^2 T/ℏ^2.
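The weak-damping equations above can be played with numerically. The sketch below iterates the mass gap equation for a toy self-energy (an assumed constant form, not the HTL result) and extracts the damping from the imaginary part.

```python
def plasmon_frequency_and_damping(re_sigma, im_sigma, tol=1e-10):
    """Solve (omega + i*gamma)^2 = Sigma(omega) in the infrared limit
    k -> 0, assuming weak damping gamma << omega.
    re_sigma, im_sigma: callables of omega (toy-model self-energy)."""
    omega = 1.0  # initial guess
    for _ in range(1000):
        # real part: mass gap equation omega^2 = Re Sigma(omega, 0)
        new_omega = re_sigma(omega) ** 0.5
        if abs(new_omega - omega) < tol:
            break
        omega = new_omega
    # imaginary part: 2 * omega * gamma = Im Sigma(omega, 0)
    gamma = im_sigma(omega) / (2.0 * omega)
    return omega, gamma

# Toy example: constant thermal mass squared 0.25, small constant Im Sigma:
omega, gamma = plasmon_frequency_and_damping(lambda w: 0.25, lambda w: 0.02)
print(omega, gamma)  # 0.5 0.02
```

With an ω-dependent Re Σ the same fixed-point iteration still applies, which is the point of writing it as a loop rather than a closed formula.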

Internal structure of QGP
The fact that QCD is a strongly interacting theory changes several concepts originally stemming from the free particle world.

Particles in strongly interacting system
In an interacting field theory the notion of a "particle" needs careful definition. The problem is that the concept of a "particle" is associated with free field theory; but, in fact, there are various definitions that refer to the same physical phenomenon in non-interacting theories, any of them being appropriate to describe a free particle:
- In free theory there exists a conserved particle number operator N̂ that commutes with the momentum operator, too. The common eigenvectors of energy, momentum and particle number in the N = 1 sector are the free particles. The N > 1 sector consists of direct products of one-particle states; the direct sum of all N-particle sectors provides the Fock-space construction.
- The energy E and momentum k of a free one-particle state are connected by the dispersion relation E = E(k). Therefore the spectrum of the one-particle sector consists of a single energy level; let us denote it |E, k⟩. The spectral density of this sector is therefore a single Dirac delta. To measure the spectral density we can use any operator Φ̂ that possesses only a one-particle form factor, i.e. ⟨E, k|Φ̂|0⟩ ≠ 0, but ⟨E, k; E′, k′|Φ̂|0⟩ = 0. Then we define

ρ(t, x) = ⟨[ Φ̂(t, x), Φ̂†(0) ]_±⟩,

where ± refers to the commutator/anticommutator, depending on the bosonic/fermionic nature of the particles. In Fourier space this definition is equivalent to a weighted spectral density. In a relativistic field theory, using the fundamental field as measurement operator, we have the spectral function

ρ(p) = 2π ε(p_0) δ(p^2 − m^2).        (31)

Observable spectral functions are always positive for positive frequencies: this is required by unitarity. Non-observable spectral functions (like that of the gluon) may have non-positive parts, and there is a discussion about their interpretation [77]. The spectral function in eq. (31) satisfies the sum rule

∫ dp_0/(2π) p_0 ρ(p_0, p) = 1.        (32)

- The spectral function remains unchanged at finite temperature, so a particle at finite temperature is the same object as a zero-temperature particle.
- As a consequence the wave function of the free particle is Ψ(t, x) ∼ e^{−iEt+ikx}, an infinite-extension plane wave, with |Ψ|^2 = 1 uniform probability density.
- The linear response to a disturbance leads to the linear response function, or retarded Green's function. In relativistic systems the retarded Green's function reads as

G_ret(p) = 1 / ( p^2 − m^2 + iε p_0 ).

This form is preserved at finite temperature.
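The sum rule (32) can be verified numerically by smearing the Dirac deltas of the free spectral function into narrow Lorentzians; the sketch below is an illustrative check (the values of E and γ are arbitrary choices), not a statement about any particular channel.

```python
import numpy as np

def smeared_free_spectral(omega, E, gamma):
    """Breit-Wigner smearing of the free spectral function
    rho = 2*pi*sgn(omega)*delta(omega^2 - E^2)
        = (pi/E) * [delta(omega - E) - delta(omega + E)];
    for gamma -> 0 the Lorentzians reduce to the Dirac deltas."""
    lor = lambda w0: (gamma / np.pi) / ((omega - w0) ** 2 + gamma**2)
    return (np.pi / E) * (lor(E) - lor(-E))

# Check of the sum rule (32): int dp0/(2pi) * p0 * rho(p0) = 1
E, gamma = 1.0, 0.05
omega = np.linspace(-200.0, 200.0, 400001)
dw = omega[1] - omega[0]
sum_rule = np.sum(omega * smeared_free_spectral(omega, E, gamma)) * dw / (2 * np.pi)
print(f"sum rule integral = {sum_rule:.3f}")  # close to 1
```

The small deviation from unity comes from the finite frequency cutoff; it shrinks as the integration range grows.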
In free theory these are all consequences of each other; therefore we unintentionally identify these concepts, and when we say "particle", it means all of these at the same time.
However, in an interacting model all of these concepts yield different results, and so we have to relax the identification of the above concepts.
- In a general field theory the number of conserved quantities is much smaller than the number of state labels (types of quantizable physical degrees of freedom); the only exceptions are integrable systems. In particular, the particle number operator does not exist any more.
- We can measure the spectral function in the same way as we did in the free case. But, because of the interactions, the spectrum of the free one-particle states will be mixed with the spectrum of the higher-particle-number states. These provide a continuum contribution besides the free-particle state. Since the spectrum is subject to a sum rule, cf. (32), the height of the Dirac-delta peak cannot be the free one; it receives a multiplicative correction Z (wave function renormalization).
- The spectrum is more complicated at finite temperature or at finite chemical potential: there the spectral function is nonzero for all frequencies (with the sole exception ω = 0), as a consequence of the scattering on particles of the environment. This broadens the Dirac-delta particle peak, resulting in a Lorentzian curve. Such an excitation is called a quasiparticle. As opposed to the free case, it does not represent a single energy level, but a collection of energy eigenstates. Other excited/ground states can appear here too, and the continuum is always present. A typical finite-temperature spectrum can be seen in fig. 2.
- The linear response function at zero temperature still contains a contribution from the Dirac-delta peak. For long times the continuum part dies out, leaving a free-particle-like propagation: these are the asymptotic states.

Fig. 2.
Typical spectrum in an interacting theory. The original particle peak is broadened (finite lifetime or finite coherence length), other peaks can appear (excited and bound states), and moreover we always have a multiparticle continuum contribution. The continuum threshold can also broaden [78].
- In numerous cases, however, there are no asymptotic states: if the particle mixes with zero-mass particles (all charged particles do that), or the particle is not stable, or we are at nonzero temperature; practically in all realistic cases. The Lorentzian quasiparticle peak and the continuum part of the spectral function yield a retarded propagator characterized by γ_k, the half-width of the Lorentzian peak, and Γ_k, a parameter determining the smoothing of the spectrum near the threshold. The quasiparticles decay exponentially for long times. If γ_k ≪ Γ_k, for long times we can observe a fading quasiparticle response; in the reverse case, γ_k ≫ Γ_k, the long-time behavior of the system is not particle-like at all.
Having said all this, we see that the particle concept becomes dangerous ground: we must be very precise about what we are talking about. For example, stating that particles have a temperature-dependent mass is sensible only in the quasiparticle sense: free particles are eigenstates of the Hamiltonian, so they cannot have a mass changing with the temperature. The quasiparticles, on the other hand, are collections of energy eigenstates, and the coefficients of the combination can change with T. Therefore the position and the width of the quasiparticle peak can also change with the temperature.

Particles, spectral function and thermodynamics
There is still one possible definition of a particle through thermodynamics: in free theory each particle species represents a thermodynamic degree of freedom. Does this remain true in the interacting case, i.e. are the non-perturbative particle species also thermodynamic degrees of freedom? Let us seek an answer to this question by utilizing spectral properties.
The pressure of a free gas of different species is the sum of partial pressures, P = Σ_n P_n, with

P_n = ∓T ∫ d^3k/(2π)^3 ln( 1 ∓ e^{−E_n(k)/T} ),        (35)

where E_n(k) = √(k^2 + m_n^2), m_n is the mass of the n-th particle, and the upper/lower signs refer to the bosonic/fermionic case, respectively. In particular the pressure at large temperature, in the Stefan-Boltzmann limit T ≫ m, reads as

P = ( N_b + (7/8) N_f ) (π^2/90) T^4,        (36)

where N_b is the number of bosonic and N_f the number of fermionic species. All fundamental free particles therefore contribute independently to the pressure. We do not need, however, fundamental particles to obtain thermodynamical degrees of freedom. In the case of well-separated quasiparticle excitations, which can be characterized by a phase shift δ(ω) of a scattered particle centered at ω = E(k), the Beth-Uhlenbeck formula [79] states that such a resonance, entering through the density of states (1/π) ∂δ/∂ω, contributes to the partition function exactly like a stable particle, irrespective of whether it is a fundamental particle or a bound state with internal structure and motion. This picture leads to the Hadron Resonance Gas (HRG) description of the QCD plasma [80]. Here all the possible hadrons, measured and identified at zero temperature, contribute to the thermal ensemble in the same way, irrespective of their width. The resulting pressure is in fact in very good agreement with the pressure measured in MC simulations in the hadronic phase [81][82][83][84][85].
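The partial pressure (35) and its Stefan-Boltzmann limit (36) can be reproduced with a few lines of numerics; the quadrature below is a minimal sketch (simple Riemann sum, cutoff chosen by hand), not a production integrator.

```python
import math

def partial_pressure(m, T, bosonic=True, steps=200000):
    """Free-gas partial pressure of a single degree of freedom, eq. (35):
    P = -(+) T/(2 pi^2) * int_0^inf dk k^2 ln(1 -(+) exp(-E/T)),
    upper signs for bosons, lower signs for fermions."""
    sign = -1.0 if bosonic else 1.0
    kmax = 40.0 * T + 10.0 * m  # the exponential tail makes this cutoff safe
    dk = kmax / steps
    acc = 0.0
    for i in range(1, steps):
        k = i * dk
        E = math.hypot(k, m)
        acc += k * k * math.log(1.0 + sign * math.exp(-E / T))
    return sign * T * acc * dk / (2.0 * math.pi**2)

# Stefan-Boltzmann check, eq. (36): a massless boson gives pi^2 T^4 / 90,
# a massless fermion 7/8 of that.
T = 1.0
p_sb = math.pi**2 * T**4 / 90.0
print(round(partial_pressure(0.0, T) / p_sb, 3))                  # ~ 1.0
print(round(partial_pressure(0.0, T, bosonic=False) / p_sb, 3))   # ~ 0.875
```

Raising m/T suppresses the partial pressure exponentially, which is why heavy resonances matter thermodynamically only near and above their mass scale.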
It fails, however, badly in the quark-gluon phase, above T ≈ 150 MeV [85,47]. In fact, as first pointed out by Hagedorn, the HRG pressure would be divergent if all hadrons were taken into account. This is due to the fact that the density of hadron states grows exponentially with the mass: ρ(m) ∼ m^α e^{β_H m}, with T_H = 1/β_H the Hagedorn temperature and an appropriate power α (e.g., α = 5/2). The pressure of all hadrons, written as an integral over the mass density, then diverges for T > T_H. If we do not take into account all hadrons, just those listed in the Particle Data Book [86], or the hadrons below, say, 3 GeV, then the pressure does not diverge, but still overshoots the pressure of the quark-gluon plasma. This is a conceptual problem: at all temperatures the system in equilibrium realizes the phase where the grand canonical thermodynamical potential, in this case PV, is the largest. The quark-gluon plasma with 8 gluons and N_f quark flavors represents a system with 16 + 10.5 N_f bosonic degrees of freedom (all fermions have 4 Lorentz components and 3 colors, and carry a factor 7/8 compared to the bosonic contribution). If we counted only the stable hadrons, i.e. pions and nucleons, as hadronic degrees of freedom, we would obtain 10 bosonic degrees of freedom, and so the QGP would have the larger number of degrees of freedom, which explains why there is a phase transition to the QGP phase. But if all the Particle Data Book hadrons are taken into account, their number greatly exceeds the QGP number of degrees of freedom, and we do not understand why a phase transition occurs at all. We must emphasize that the argument that hadrons are not valid degrees of freedom in the QGP phase is not applicable here: the hadronic phase would represent the higher-entropy state of matter, and so it would forbid the change to the QGP phase.
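Both the degrees-of-freedom counting and the Hagedorn divergence can be illustrated in a few lines; T_H = 160 MeV and the probe masses are illustrative choices, not values fixed by the text.

```python
import math

def qgp_bosonic_dof(n_flavors):
    """Effective bosonic degrees of freedom of an ideal QGP:
    8 gluons x 2 polarizations, plus per quark flavor
    4 Lorentz components x 3 colors x 7/8 = 10.5."""
    return 16.0 + 10.5 * n_flavors

def hagedorn_pressure_integrand(m, T, T_H=0.160, alpha=2.5):
    """Boltzmann-weighted Hagedorn spectrum, rho(m) ~ m^alpha exp(m/T_H):
    the mass integrand behaves like m^alpha * exp(m/T_H - m/T), which
    decays for T < T_H but grows without bound for T > T_H (units: GeV)."""
    return m**alpha * math.exp(m / T_H - m / T)

print(qgp_bosonic_dof(3))  # 16 + 31.5 = 47.5
for T in (0.140, 0.180):   # GeV; one below and one above T_H
    grows = hagedorn_pressure_integrand(10.0, T) > hagedorn_pressure_integrand(2.0, T)
    print(T, "integrand grows with m:", grows)
```

The sign flip of the exponent at T = T_H is the entire content of the divergence argument: above T_H no mass cutoff short of removing hadrons altogether keeps the integral finite.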
So we are faced with the situation that we can describe the pressure of the strongly interacting plasma below ∼ 150 MeV (HRG) and above ∼ 300 MeV (QGP), but we understand neither why there is a phase transition, nor the pressure in the intermediate temperature range. What happens with the hadrons between 150 MeV < T < 300 MeV? The bound states must somehow disappear from the system as we raise the temperature; physically, the hadrons must melt away.
How is this melting related to the Beth-Uhlenbeck formula, given that every quasiparticle resonance corresponds to a single thermodynamical degree of freedom? We should note that in the derivation of that result one must assume that the quasiparticles are independent, in the sense that all of them can be treated as separate Breit-Wigner peaks. This assumption, however, fails when we consider a system where the quasiparticle peaks occur densely, or when a multiparticle background is present. In such cases the quantum mechanical treatment of the quasiparticle peak contributions to the S-matrix yields complex coefficients [87][88][89]. The unitarity of the S-matrix poses constraints among these coefficients: in this way the pole contributions are no longer independent.
In field theory we can describe this process by observing the hadronic spectral functions [90,91,17]. A mathematically similar description can be obtained using the Mott transition analogy [92][93][94][95][96][97][98]. Hadronic spectral functions, like all spectral functions, consist of a quasiparticle peak and a continuum part. The weight of these parts, however, changes with the temperature. At small temperatures the quasiparticle peak is pronounced, it dominates the thermodynamics, and the gas of hadrons behaves as a gas of almost free particles. At high temperatures, however, the quasiparticle peak merges with the continuum, and the "particle" nature of the hadronic channel ceases to be true. This is accompanied by a drastic reduction of the partial pressure in this channel.
To set up a field theoretical model we construct a quadratic theory with the same statistics (boson/fermion) and the same spectral function as the studied channel. For a scalar field this means that we write the Lagrangian as

L = (1/2) φ K(i∂) φ,        (40)

with some kernel K. The kernel and the retarded Green's function are related as G_ret(p) = K^{−1}(p_0 + iε, p), while the retarded Green's function and the two-point spectral function can be expressed from each other through the spectral representation

G_ret(p) = ∫ dω/(2π) ρ(ω, p) / ( p_0 − ω + iε ).        (41)

Since the theory described by the effective Lagrangian (40) is quadratic, it is solvable; but its spectrum does not consist of free particles. To determine the thermodynamics one has to start from a microscopically measurable quantity, which most conveniently can be chosen as the energy density, i.e. the expectation value of the 00 component of the energy-momentum tensor, T_00. Although we have a quadratic model, the energy-momentum tensor is not simple, due to the nonlocal nature of the kernel. The divergence of the energy-momentum tensor can be determined from the variation of the action with respect to a spacetime translation a_μ [95]; this leads to an expression for T_μν in terms of the kernel and a symmetrized derivative [90]. Once we know T_μν, we can take its expectation value in equilibrium, using the Kubo-Martin-Schwinger (KMS) relation. Finally, we renormalize the expressions and express the pressure through thermodynamical relations [90,91,17]. We should note that the resulting pressure does not depend on the normalization of the spectral function, since the kernel is inversely proportional to this normalization factor (cf. (41)). In the free case, i.e. using the free spectral function (31) and K = p^2 − m^2 in this formula, we get back the free pressure (36). If there are several Dirac-delta peaks in the spectrum, we also get back the sum of the free-gas partial pressures, in accordance with the Beth-Uhlenbeck formula (38).
But if the peaks are not independent, the exact pressure starts to deviate from the Beth-Uhlenbeck prediction. In fig. 3 we can see that when two peaks start to merge, the exact pressure decreases. This is not unexpected: when there is just one peak, the spectrum looks like a one-quasiparticle spectrum, and so the pressure must come from a single degree of freedom. Therefore, starting from a spectrum with two separate peaks and continuously approaching a one-peak spectrum, the pressure also changes smoothly from the two-particle pressure to the one-particle one.
This observation leads to the explanation of the Gibbs paradox in interacting systems [90]. The original paradox, formulated for free systems, is that if the molecules of two gases differ only in a tiny, continuously tunable property (e.g. a small mass difference, or a tiny "flag"), then the two gases are different as long as the difference is present, but are identical if the difference is exactly zero. This leads to a nonanalytic contribution to the entropy (the mixing entropy). In interacting gases, however, the energy spectrum does not consist of infinitely thin Dirac deltas; there is a line broadening coming from different sources (e.g. thermal motion, or finite density). Then, with vanishing mass difference, the spectral functions overlap more and more, as is shown in fig. 3. As a consequence the pressure continuously decreases from the pressure of two independent particles to a one-particle pressure (with a mass somewhere between the masses of the two peaks), as is also shown in fig. 3. In an interacting system, therefore, the Gibbs paradox is resolved: the mixing entropy vanishes continuously.
In fig. 3 the contribution of the multiparticle continuum (cut) part to the pressure is also shown. Its spectrum is not quasiparticle-like, as can be seen in the left panel of fig. 3. The corresponding pressure term is much lower than the pressure of the quasiparticle systems: in the figure it had to be enlarged by a factor of 10 to be visible at all. This explains why a huge pressure reduction appears when a peak merges with the continuum, i.e. when it melts. The original narrow peak structure corresponds to an almost free quasiparticle, with pressure close to the free pressure. When the peak merges with the continuum, the spectrum no longer contains a particle; it becomes more and more like the continuum part of fig. 3, and the corresponding pressure also becomes smaller and smaller. In the course of a continuous merging the pressure smoothly changes from the one free particle pressure to zero: the particle has melted, it disappears from the thermal ensemble. We can define the number of thermodynamical degrees of freedom as the ratio of the exact pressure to the free pressure. With this definition, during melting the number of thermodynamical degrees of freedom changes continuously to zero.
(Figure caption: Hadronic pressure reduction at high T, quark pressure reduction at low T, based on one and the same Lagrangian, eq. (40), due to merging peaks. For comparison, lattice data from [84] are indicated by triangles. For more details see [17].)
This mechanism makes it possible to explain why there is no divergent pressure beyond the Hagedorn temperature in the QCD plasma, or why the hadronic pressure does not overshoot the QGP pressure. The hadron spectrum changes with the temperature from quasiparticle peaks to peaks merged with the continuum. As discussed above, this results in the reduction of the thermodynamical degrees of freedom effective in the total pressure. The situation is just the opposite for the QGP degrees of freedom: at low temperatures the spectrum in the quark channel is a pure continuum, with no particle-like excitations, and the partial pressure is zero. At high temperatures the spectra in the QGP channels become more and more particle-like, although even at about T ≈ 300 MeV the number of thermodynamical degrees of freedom is only about 80% of the free case (cf. fig. 4).
In this way, within the picture of melting quasiparticle peaks, the QCD pressure computed in MC simulations can be reproduced and interpreted in correct physical terms. The main prediction of this model is that the hadronic thermodynamical degrees of freedom do not vanish suddenly above the critical temperature; there is a sizable temperature regime, up to about T = 330 MeV, where they still dominate the pressure. In this melting-hadron-peak regime, however, we do not have quasiparticles as excitations, just a mixture of hadron-like and dissociated quark-gluon-like behavior. This is indeed a new type of nuclear matter. There are plenty of other approaches to the QGP pressure which obtain agreement with numerical lattice QCD data. For the QGP phase the NNLO HTL-pt method [99] deserves a closer inspection. This method, however, cannot be continued smoothly into the hadronic phase. Another method, the Polyakov-Nambu-Jona-Lasinio (PNJL) model [100][101][102], uses effective four-point quark interactions or explicit mesonic degrees of freedom, and non-perturbative glueball degrees of freedom in the Polyakov loops. Above we intended to demonstrate only what should happen with the spectral function (and, in general, with any other operator correlation) when passing from the hadronic to the QGP world -- without discussing the underlying effective field theory in detail.

Continuous mass fits to lattice EoS
Quark matter, searched for in relativistic heavy-ion collisions, reveals itself through signatures in observed hadron spectra, which are interpreted in terms of quark-level properties. In particular, the scaling of the elliptic flow component v_2 with the constituent quark content of the finally observed mesons and baryons [103][104][105][106][107][108][109] and the successful description of p_T-spectra of pions and antiprotons using quark coalescence rules for hadron building [110] both utilize the fast-hadronization concept of quark redistribution 2. Although this simple idea also brings problems with it, e.g. in dealing with energy conservation and entropy increase, these issues can be resolved by using a distributed-mass quasiparticle model for quark matter [113], in accord with the quark matter equation of state obtained in lattice QCD calculations [114]. The surmised mass distribution gives rise to a specific equation of state (pressure as a function of temperature, p(T)), and conversely, a mass distribution may be inferred from knowledge of the p(T) curve.
While traditional, fixed-mass quasiparticle models already succeed in describing the equation of state obtained in lattice QCD [115], those mass values are themselves temperature dependent. Furthermore, a temperature-dependent width is associated with the quasiparticle mass, too [116][117][118]. The factor between the massive and massless relativistic ideal pressure in the Boltzmann approximation, Φ(m/T) = (m/T)² K_2(m/T)/2, relates the observed pressure in a nontrivially interacting system to the mass distribution of a conjectured continuous mixture of different-mass particles. Here we assume that the complete temperature dependence stems from the ideal pressure factor and w(m) is independent of T. This is a simplifying assumption, without any deeper theory behind it. It means that the observed equation of state in terms of the pressure ratio to the Stefan-Boltzmann limit, representing the effective fraction of thermodynamical degrees of freedom, is a so-called Meijer transform of the conjectured continuous mass distribution: σ(T) = p(T)/p_id(0,T) = ∫_0^∞ dm w(m) Φ(m/T). (50) Using the scaled variables t = m/T_c and z = T_c/T, and the correspondingly redefined functions, σ(z) becomes an integral transform of w(t) with kernel Φ(tz). This integral transformation, the so-called Meijer transform, can be inverted analytically. The respective high-temperature expansions of the pressure ratios, based on the expansion of the K_2(z) Bessel function in the mass distribution formula, and the one applied in perturbative QCD, are worth comparing, with γ being the Euler-Mascheroni constant and Λ the renormalization subtraction scale. The contribution of Matsubara zero modes is non-analytic in terms of m², therefore the a_i coefficients in eq. (53) may depend on the coupling g via infrared cut-offs at the scale m. Indeed, among others, this is the origin of the plasmon contribution of order g³ [119]. We note that σ(z) can also be obtained for Bose or Fermi distributions instead of the Boltzmann one; the numerical difference is minor overall, less than six per cent at vanishing chemical potential.
The basic result on the Debye screening length in a QGP supports the assumption that m² ∼ g²T² sets the scale for a simplified treatment of the quark matter pressure at high temperature. The comparison of pQCD and mass distribution results above reveals that ⟨m⁴⟩ ≠ ⟨m²⟩², whence the necessity of a width in the mass distribution emerges. This fact alone indicates that the spectral function cannot be a simple sum of quasiparticle peaks; it must contain appreciable widths, possibly even a continuum part. In the HTL expansion of the pressure, which more sophisticatedly also includes g⁴T⁴ terms with IR cut-off at the gT scale, resulting in a g³T⁴ contribution to the free energy density, first obtained in [120], it is also obvious that one must go beyond the simple summation of quasiparticle peak contributions [119]. In the traditional interpretation the temperature dependence of the pressure ratio to the massless ideal gas value is carried entirely by the temperature dependence of the coupling constant, g = g(T). We have recently pursued an alternative approach to the quasiparticle mass distribution in quark matter [110,113], where a temperature-independent w(m) distribution is reconstructed from the pressure ratio curve σ(T) = p(T)/p_id(0,T). It is interesting to play around with some analytic formulas with respect to the Meijer transformation. The simple exponential ansatz, σ(T) = e^{−λ/T}, leads to a certain power-law-tailed form of w(m) with a threshold mass gap at m = λ. Since such a mass distribution would have a diverging ⟨m²⟩, and also diverging expectation values for higher powers of m, we conclude that it can serve only as an upper bound, σ(T) ≤ e^{−λ/T}. Indeed, lattice results on σ(T) all satisfy such a constraint with a corresponding value of λ. The smallest such λ value, found numerically, is then the Boltzmannian estimate for the mass gap. It is a remarkable property of this approach that it indicates a temperature-independent threshold (smallest mass) in the w(m) spectrum for lattice QCD pressure data [121,122].
The pressure is, however, not known analytically; the numerical results are smeared by error bars. This problem is more severe in the light of the fact that eq. (50) constitutes an integral transformation (the Meijer K transformation, a generalization of the Laplace transformation). There is no mathematical guarantee that the inverting transformation eq. (52) leads to close results for w(m) from close functions σ(T). In fact this is known as the "inverse imaging problem" [123][124][125].
However, based on the above assumptions one can obtain some supportive knowledge about a T-independent w(m) mass distribution when the pressure p(T) satisfies certain inequalities. In particular, we prove that if the pressure p(T) is below the corresponding ideal gas pressure with a given mass M_0 at all temperatures, then the mass distribution is exactly zero for all masses below M_0. Similar statements follow for inequalities with majorants other than ideal gas pressure curves. In the following we relate the mass gap to the behavior of σ(T) using the generalized Markov inequality to estimate upper bounds on the integrated probability density function for the mass being lower than a given value. The general form of the Markov inequality [128][129][130][131] is
μ({x : f(x) ≥ t}) ≤ (1/g(t)) ∫ g(f(x)) dμ(x), (57)
with measure μ, a real-valued μ-measurable function f, and a monotonically growing non-negative measurable real function g. The proof, based on the monotonicity of integration, can be presented in a few lines. For a non-negative and monotonically growing g we have g(t) ≤ g(f(x)) for t ≤ f(x). We obtain
g(t) μ({x : f(x) ≥ t}) = ∫_{f≥t} g(t) dμ(x) ≤ ∫_{f≥t} g(f(x)) dμ(x).
This quantity can be bounded by the integral over the whole domain, ∫ g(f(x)) dμ(x). A division by g(t) ≥ 0 delivers the original statement in eq. (57).
In order to apply this inequality to the mass spectrum we choose f(m) = tM/m. In this case {m : f(m) ≥ t} = {m : m ≤ M}, and the Markov inequality reads as
μ(m ≤ M) ≤ (1/g(t)) ∫ g(tM/m) dμ(m).
For a continuum mass spectrum dμ(m) = w(m)dm can be chosen, with w(m) being the probability density function. The generalized Markov inequality stated above is valid for general probability measures μ (see footnote 3), possibly including bound-state contributions. Now we discuss a few examples of monotonically rising functions g, which allow us to draw conclusions about the integrated probability P(M) for masses below M. Applying the special form g(t) = tⁿ we arrive at
P(M) ≤ Mⁿ ∫ dμ(m) m^{−n}.
It is easy to see that the negative integral moments of the mass on the right-hand side of this inequality are connected to the negative integral moments of the scaled pressure σ(T) = p(T)/p_id(0,T); this yields the final inequality (64) for the probability of having masses smaller than M. Let us apply this result to the simplest majorant, that of a fixed-mass relativistic ideal gas. In this case σ(T) ≤ Φ(M_0/T) with some M_0 (cf. dashed line in fig. 5), and eq. (64) leads to eq. (65). Should the bound hold for arbitrarily high n, the right-hand side of inequality (65) is zero for all M < M_0 and divergent for M > M_0. In the second case it is not restrictive, since P(M) < 1 anyway; in the first case it means a mass gap up to M_0. We note that this conclusion holds for a general non-negative Φ(x) for which the integrals in eq. (65) are finite for all n > 0. Thus the Bose-Einstein or the Fermi-Dirac distribution could just as well be applied instead of the Boltzmann one. Another possible majorant is the exponential function, σ(T) ≤ exp(−λ/T) (cf. the dotted line in fig. 5 for λ = T_c). In this case eq. (64) delivers eq. (66), whose large-n limit, to leading order in 1/n, again shows a right-hand side that approaches zero for M ≤ λ and diverges for M > λ. This points to a mass gap stretching from zero up to (and including) λ.
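The power-law form of the bound can be checked by brute force; the following sketch (an illustration with a hypothetical, gamma-distributed mass sample, not lattice data) verifies P(M) ≤ Mⁿ⟨m⁻ⁿ⟩ numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical quasiparticle mass sample in GeV, gapped at 0.2 (illustration only)
m = 0.2 + rng.gamma(shape=4.0, scale=0.3, size=200_000)

def P(M):
    """Empirical probability of finding a mass below M."""
    return np.mean(m < M)

def markov_bound(M, n):
    """Generalized Markov bound with g(t) = t^n: P(M) <= M^n * <m^{-n}>."""
    return M**n * np.mean(m**(-n))

for M in (0.4, 0.8, 1.2):
    for n in (1, 2, 4):
        print(M, n, P(M), markov_bound(M, n))  # the bound always holds
```

Larger n tightens the bound below the bulk of the distribution, which is exactly the mechanism used in the text to exhibit a mass gap.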
The most striking inequality is obtained by using g(t) = Φ(1/t). This function is also admissible: its rise from zero to one is strictly monotonic. Equation (64) then bounds P(M) by averages of Φ(m/(tM)). For t = 1, using the numerical value Φ(1) ≈ 0.81, one arrives at P(M) ≤ σ(M)/Φ(1) ≈ 1.23 σ(M), which can be directly read off from numerical simulations or theoretical predictions of σ(T). The most restrictive are the lowest-temperature data for σ(T); they are, however, also the most contaminated by errors. It is probably safe to conclude that as much as 90-95% of the masses are above 1.5 T_c ≈ 440 MeV according to these data. Our mathematical treatment of the mass gap leaves the point m = 0 in the possible mass distribution as a special case. Assuming that there were such a contribution of finite measure, i.e. that P(0) = a were a finite value between zero and one, one would conclude from the definition eq. (54) that σ(T) ≥ a in this case. There is no sign of such a behavior in lattice QCD data.
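This Φ-based bound can likewise be verified on a toy mass distribution; the sketch below (our illustration with a hypothetical sample) checks P(M) ≤ σ(M)/Φ(1) ≈ 1.23 σ(M), where σ(T) is the Boltzmann pressure ratio of the sample:

```python
import numpy as np
from scipy.special import kn

rng = np.random.default_rng(2)
m = 0.3 + rng.gamma(3.0, 0.25, 100_000)   # hypothetical masses in GeV

def Phi(x):
    """Boltzmann pressure ratio factor, Phi(x) = x^2 K_2(x) / 2."""
    return 0.5 * x**2 * kn(2, x)

def sigma(T):
    """Pressure ratio of the sample at temperature T (Boltzmann approximation)."""
    return np.mean(Phi(m / T))

Phi1 = Phi(1.0)  # ~0.81, so 1/Phi(1) ~ 1.23
for M in (0.5, 0.8, 1.2):
    print(M, np.mean(m < M), sigma(M) / Phi1)  # P(M) never exceeds the bound
```

The bound holds because Φ is monotonically decreasing, so every mass below M contributes at least Φ(1) to σ(M).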
Finally, we note that the method presented here has potential uses in a context wider than quark matter: the quasiparticle test based on the generalized Markov inequality can in principle be performed for any system with a sufficiently well-known thermal equation of state. The estimate for the lowest mass can then be checked against knowledge of the mass spectrum obtained from the study of correlation functions.

⟨α(Q²)⟩_T vs. α_eff(T)
Talking about non-perturbative effects in high-temperature QCD seems, at first glance, a paradoxical issue. However, there have always been warnings from a few experts [132][133][134][135]. Such warnings have long been ignored by the majority: after the famous proof by Linde [67] that the problem of non-perturbativeness is an infrared effect, it was generally believed that one does not have to consider it above T_c.
The 1/log-like pole behavior of the running coupling constant has been handled by phenomenologically shifting the renormalization-point energy scale a bit away from T_c [136][137][138] in a formula for the effective thermal coupling, conjectured to be a good approximation. Even without digging deep into the delicate issues of QCD, one can easily convince oneself that this approximation could only hold if the thermal distribution of relative Q² values in a QGP were sharp. This is, however, not the case, as we shall demonstrate below. First we summarize the results we arrive at by considering the thermal distribution of Q² in a QGP:
1) The thermal distribution of Q² values is not peaked around T²; rather it is maximal at Q² = 0 between two massless particles, the Boltzmann distribution being just a particular example. The width of the distribution is proportional to T².
2) The expectation value of a non-perturbative (NP) order parameter, being one up to Q² = Λ² and zero otherwise, is non-vanishing at arbitrary temperatures. For high T it goes like Λ²/T², owing to the constant probability density near Q² = 0.
3) As a consequence, at arbitrarily high temperatures there remains a finite relative measure of NP effects. In the pressure this occurs already in the subleading term.
4) In lattice EoS results such subleading terms are seen in the scaling (e − 3p)/T² = const at high temperatures; pQCD would instead predict an inverse-logarithmic fall of this quantity.
In the following we outline the support for these statements. The relativistic kinematics of a pair of massless particles delivers
Q² = 2 E_1 E_2 (1 − cos θ). (72)
The distribution of Q² values is then given by folding the thermal one-particle distributions with this kinematical constraint,
P(Q²) = ∫ dE_1 dE_2 (dcos θ/2) p(E_1) p(E_2) δ(Q² − 2E_1E_2(1 − cos θ)),
with p(E) ∝ E² f(E) the normalized thermal energy distribution. Using the Dirac delta for the integral over cos θ this can be written as
P(Q²) = [∫∫ dE_1 dE_2 E_1 E_2 f(E_1) f(E_2) Θ(4E_1E_2 − Q²)] / [4 (∫ dE E² f(E))²]. (74)
Its integral over Q² is one by construction, cf. eq. (72). Since the thermal parton distribution f(E) is positive, the numerator is maximal at Q² = 0; the maximal value of the distribution is
P(0) = c/(4T²), with c = (T ∫ dE E f(E) / ∫ dE E² f(E))² (75)
some constant depending on the distribution. In particular, for the Boltzmann distribution, f(E) ∝ e^{−E/T}, the Q²-distribution can be given in analytic form (cf. fig. 8) as
P(Q²) = (1/(16T²)) [ (Q²/(2T²)) K_2(Q/T) + (Q³/(4T³)) K_1(Q/T) ], Q = √(Q²). (76)
Let us now consider a non-perturbative quantity, like the string tension, which is zero above Q² = Λ² and roughly constant below this momentum scale. The ideal order parameter is the step function o = Θ(Λ² − Q²). The expectation value of such an order parameter at temperature T,
⟨o⟩ = ∫_0^{Λ²} dQ² P(Q²) = ∫_0^{Λ²/T²} dx T² P(xT²),
using the integration variable x = Q²/T², is the integrated distribution function of the thermal Q² distribution. By definition this approaches the value one from below, so as a function of T (or T²) it starts at the value one and continuously decreases. For high enough temperature, T ≫ Λ, the distribution is nearly constant over the integration range, so we can use the maximal value (75), giving ⟨o⟩ ≈ c Λ²/(4T²). For the Boltzmann distribution c = 1/4. This means that at any temperature there are non-perturbative (NP) effects at subleading order in 1/T² (cf. the 1/16x² curve in fig. 9).
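The Boltzmann Q² distribution can be cross-checked by direct sampling; the sketch below (our own Monte Carlo, with an illustrative T) compares sampled invariant Q² values of massless Boltzmann parton pairs with the analytic Bessel-function form quoted for the Boltzmann case, and with ⟨Q²⟩ = 18T², which follows from ⟨E⟩ = 3T:

```python
import numpy as np
from scipy.special import kn

rng = np.random.default_rng(0)
T, N = 0.25, 400_000          # temperature in GeV (illustrative) and sample size

# massless Boltzmann partons: energies follow E^2 exp(-E/T), i.e. Gamma(3, T)
E1 = rng.gamma(3.0, T, N)
E2 = rng.gamma(3.0, T, N)
cos_t = rng.uniform(-1.0, 1.0, N)
Q2 = 2.0 * E1 * E2 * (1.0 - cos_t)

def P_analytic(q2, T):
    """Analytic Boltzmann Q^2 distribution (our evaluation of eq. (76)):
    maximal at Q^2 = 0 with P(0) = 1/(16 T^2), width of order T^2."""
    q = np.sqrt(q2)
    return (q2 / (2 * T**2) * kn(2, q / T)
            + q**3 / (4 * T**3) * kn(1, q / T)) / (16 * T**2)

hist, edges = np.histogram(Q2, bins=100, range=(0.0, 100 * T**2), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(hist - P_analytic(mid, T))) * T**2)  # small deviation
print(Q2.mean() / T**2)                                  # close to 18
```

The maximum of the sampled distribution sits at Q² = 0, illustrating why a cut-off order parameter keeps a finite Λ²/T² expectation value at any temperature.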
In particular, for the equation of state (EoS) of high-temperature matter, among others for the quark-gluon plasma, NP effects are present already at this level. Owing to the fact that the pressure is zero in the confining phase, one may consider it to be proportional to 1 − ⟨o⟩. The interaction measure, e − 3p, is then also non-perturbative at leading order. Lattice gauge field theory calculations, especially for SU(3) Yang-Mills theory, in fact show a rather constant value for the parameter (e − 3p)/T² (cf. fig. 10). Perturbative QCD suggests leading corrections going with α(T)T², which would have to be smaller than the non-perturbative constant in the above scaling. (The problem is that α(T) = α(Q²) must also contain NP effects; the one-loop inverse logarithm is not integrable with the P(Q²) distribution.) Elementary thermodynamics of ideal gases including string-like objects accounts for this subleading behavior. Here we briefly describe this mechanism, based on the more detailed presentation in ref. [51]. We denote the number density of colored sources by c = Σ_i c_i n_i and assume that each source carries a string-like contribution to the free energy proportional to the typical string length, which scales with the density as c^{−γ}. For straight color tubes with constant cross-sectional area in three dimensions one naturally assumes γ = 1/3. In this way the total free energy density contains a term depending on the total color density as a fractional power: f = f_id + κ c^{1−γ}. The chemical potentials, the pressure and the energy density can be derived from this via the standard relations μ_i = ∂f/∂n_i, s = −∂f/∂T, p = Σ_i μ_i n_i − f and e = f + Ts. Utilizing these results the interaction measure becomes e − 3p = (e_id − 3p_id) + (1 + 3γ) κ c^{1−γ}. The contribution of the ideal gas is zero for massless objects; for each massive degree of freedom it is proportional to m². For a QGP made of (nearly) massless quarks and gluons only the stringy contribution remains, in which all densities are proportional to T³, as n_i = ν_i T³.
In this case one obtains a stringy interaction measure proportional to c^{1−γ} ∝ T^{3(1−γ)}. For straight objects γ = 1/3, this term is proportional to T², and the same follows for the non-ideal contribution to the total pressure too. A further remarkable property of this picture is that at the edge of mechanical stability, defined by vanishing total pressure, p = 0, the energy density is e = (3 + 1/γ) p_id. For massless constituents in the QGP e_id = 3p_id and p_id = T Σ_i n_i, and with γ = 1/3 we obtain the energy per particle E/N = 6T. For a hadronization temperature of 167 MeV, conjectured in earlier lattice calculations, this would be E/N = 1 GeV, a value remarkably close to the result of phenomenological fits of hadronic ideal gas mixtures (the so-called "Statistical Model") to heavy-ion experiments at various bombarding energies [139][140][141][142][143]. Since hadronic matter has almost zero pressure compared to a QGP, the deconfinement phase crossover temperature is indeed close to the mechanical instability point defined by p = 0. Taking a glimpse of figs. 11 and 12 one realizes that the non-perturbative leading correction to the ideal pressure term is negative and scales with the Casimir of the charge in the SU(N) gauge group, namely with N² − 1. Since strings pull, it is natural that they give a negative correction to the ideal pressure. Since they are mainly made of chromoelectric flux, it is natural that the effective string tension scales like N² − 1. However, in (2+1)-dimensional lattices, in order to also result in O(T²) corrections, the elementary correction per color source, f/c, must behave logarithmically rather than as a fractional power of the density. This hints towards a very different mechanism for the origin of such corrections in lower-dimensional Yang-Mills systems.
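The string-thermodynamics argument can be checked symbolically; the sketch below (our reconstruction, assuming a stringy free-energy term f_str = κ c^{1−γ} as described in the text, with κ, ν illustrative symbols) verifies the T² scaling of the interaction measure and the E/N = 6T value at p = 0:

```python
import sympy as sp

T, kappa, nu, c, gamma = sp.symbols('T kappa nu c gamma', positive=True)

# assumed stringy free-energy density: one string of length ~ c^(-gamma) per source
f_str = kappa * c**(1 - gamma)

mu = sp.diff(f_str, c)        # chemical potential conjugate to the color density
p_str = mu * c - f_str        # stringy pressure: -gamma*kappa*c^(1-gamma), negative
e_str = f_str                 # no explicit T-dependence at fixed c => no entropy term

trace = sp.expand(e_str - 3 * p_str)          # stringy part of e - 3p
print(trace)                                   # (1 + 3*gamma)*kappa*c^(1-gamma)

# with gamma = 1/3 and c = nu*T^3 the interaction measure scales exactly as T^2
scaling = trace.subs([(gamma, sp.Rational(1, 3)), (c, nu * T**3)])
print(sp.simplify(scaling / T**2))             # a T-independent constant

# edge of mechanical stability: p_id + p_str = 0 with p_id = T*c (Boltzmann, massless)
g = sp.Rational(1, 3)
kappa_at_p0 = sp.solve(sp.Eq(T * c + p_str.subs(gamma, g), 0), kappa)[0]
e_total = (3 * T * c + e_str.subs(gamma, g)).subs(kappa, kappa_at_p0)
print(sp.simplify(e_total / c))                # 6*T, i.e. E/N = 6T
```

The negative sign of p_str reflects the statement in the text that strings pull, reducing the ideal pressure.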
Summarizing this subsection, we have shown on the basis of general arguments that non-perturbative effects, even those which vanish above a sharp momentum cut-off, contribute to thermal expectation values at arbitrarily high temperatures. Based on the thermal distribution of Q² values it was demonstrated that this contribution is of relative order Λ²/T² in any thermally averaged quantity. A physical picture of such non-perturbative corrections to the ideal gas equation of state is offered by an elementary study of the thermodynamics of straight strings with a naturally density-dependent average length.

Shear viscosity bounds
Accelerator experiments suggest that the matter formed in heavy-ion collisions is a very good fluid, close to a perfect one [144,145], which means that the characteristic dimensionless ratio η/s (η being the shear viscosity, s the entropy density) is very small. The quantity η/s, apart from the fact that it appears directly in hydrodynamical formulas like the sound attenuation length [79], can be interpreted as a fluidity measure of an ultra-relativistic gas, characterizing the viscosity on the system's own scale [146].
The experimental tool to access this quantity is the measurement of the flow anisotropy, in particular its second angular moment, v_2. The desired parameters of the corresponding fluid model can be obtained by fitting the model predictions to the experimental curves [147,148]. The result of these studies is that the observed η/s ∼ 1/(4π), with a coefficient of order one. The value 1/(4π) has a specific significance since, as is usually said, it is the "theoretical lower bound".
But, contrary to the folklore, the status of 1/(4π) as a theoretical lower bound for the η/s ratio is far from proven. In this section we review the assumptions and approximations behind this conjecture.
The idea that η/s can have a lower bound was first suggested in [149]. The authors realized that in the kinetic approach η/s ∼ Eτ, where E is the quasiparticle energy and τ is its lifetime. For a quasiparticle E > ΔE, where ΔE is the width of the quasiparticle peak; therefore η/s ∼ Eτ > ΔEτ, which the uncertainty principle bounds from below by a number of order one, meaning that η/s has a lower bound. This elegant argument, however, cannot be considered a proof of the lower bound, since it uses the kinetic, quasiparticle approach, which is not really suitable for describing the small-viscosity regime. The point is that kinetic theory estimates the shear viscosity as η ∼ 1/σ, where σ is a cross section. In weakly coupled theories σ ∼ g⁴, where g is the coupling constant. Since g must be small in order that the kinetic, Boltzmann-equation approach be applicable, only the large-viscosity regime is accessible in this way. In this range of applicability several studies in the literature computed the shear viscosity using the Boltzmann-equation or quasiparticle approach [118,150,151,152,153,154]. But when the theory is more and more strongly coupled, higher-order processes become dominant too [155,156], and the simplest kinetic argumentation loses its validity.
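The order-of-magnitude content of the kinetic estimate can be made explicit in a few lines; the sketch below is our own illustration (the prefactors and the σ ∼ α²/T² parton cross section are rough assumptions, not the cited calculations), showing that weak coupling only ever yields η/s far above 1/(4π):

```python
import numpy as np

def eta_over_s_kinetic(alpha, nu=1.0):
    """Rough kinetic-theory scaling (illustrative assumptions only):
    eta ~ <p>/(3 sigma) with <p> = 3T, sigma ~ alpha^2/T^2,
    s ~ 4 n with n = nu*T^3  =>  eta/s ~ 1/(4 nu alpha^2)."""
    return 1.0 / (4.0 * nu * alpha**2)

kss = 1.0 / (4.0 * np.pi)
for alpha in (0.1, 0.3, 0.5):
    print(alpha, eta_over_s_kinetic(alpha), kss)  # kinetic estimate >> 1/(4*pi)
```

Only by extrapolating to strong coupling, where the kinetic picture itself breaks down, would the estimate approach the conjectured bound.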
One might surmise that perturbation theory can also be used to calculate the η/s ratio. The entropy density is defined thermodynamically from the free energy, the shear viscosity by the Kubo formula, i.e. the zero-frequency limit of the spectral function of the energy-momentum tensor correlator. But the perturbative approach has several problems. The first one is that in realistic applications, for example for QCD near the critical regime of the crossover, perturbation theory is not really applicable. At somewhat higher temperatures perturbation theory still needs heavy machinery, including resummations, to reliably predict thermodynamical quantities like the pressure or the entropy density, but after some effort one can give a relatively good description [157]. The shear viscosity, however, is much harder to access. The fundamental problem is that perturbation theory computes corrections to a quantity calculated in the free theory, but the shear viscosity is infinite in a free gas. Therefore we should compute corrections to infinity, which is a hard task. In a strict diagrammatic approach one has to resum ladder diagrams [158,159]. One can use 2PI resummation to perform this task [160] or, concentrating only on the most important pinch-singular contributions, quantum Boltzmann equations [161][162][163][164]. One can also apply renormalization group techniques to the shear viscosity [165]; yet, even after the most thorough job, one can expect only a "small" correction to infinity, that is, to large numerical values: one typically gets η ∼ 1/(g⁴ log g) as the leading-order estimate. Small viscosities, just as in the kinetic approach, would need strong coupling, and so perturbation theory is not applicable there. An alternative approach to the shear viscosity could be the lattice Monte Carlo technique. There have in fact been attempts to extract this information from the lattice by calculating the energy-momentum tensor correlation function [166,167]. The obtained results are in the η/s ∼ 0.1-0.2 regime.
Unfortunately such measurements cannot be performed without strong assumptions. The reason is that hydrodynamics is an effective description of the matter only at timescales so large that they are hard to access from a Euclidean lattice; therefore present MC simulations have very small sensitivity to the desired transport regime [168]. Alternatively, the shear viscosity can also be computed in classical field theories [169]. There one is not restricted by the Euclidean formalism; on the other hand, the quantum interpretation is much more involved.
Since small viscosity involves large couplings, methods that use the inverse coupling as expansion parameter are of great importance. Unfortunately such dual partners are rarely known. Therefore the conjectured AdS/CFT duality [170] has a big relevance, even though here the weakly coupled theory is a conformal field theory, and so its symmetries are not the same as the symmetries of QCD. Nevertheless one can calculate the η/s ratio in the infinitely strongly coupled (the 't Hooft coupling λ = g²N_c is infinite) N = 4 supersymmetric Yang-Mills theory with this method [171], resulting in η/s = 1/(4π). The significance of this result is enhanced by the fact that the infinitely coupled theory is expected to have the smallest shear viscosity; in fact, in this model the 1/λ corrections are all positive [172]. The KSS result, together with the conjecture of the lower bound based on the kinetic approach, was then advertised as "the lower bound for the η/s ratio is 1/(4π)".
But, as we have seen, the two pivots of this argumentation come from the quasiparticle and the conformal field theory limits, and so they are not as general as usually thought. In fact, soon after the announcement of the "lower bound", constructions appeared that violate, or at least challenge, the 1/(4π) value.
In the framework of non-relativistic theories one can carry out such calculations [173][174][175]; there are, for example, theories where the shear viscosity remains constant with a growing number of field components, while the entropy density grows with the number of components. It also appears that when we start to deviate from the quasiparticle approximation, for example by including the continuum besides the quasiparticle peak, the shear viscosity starts to decrease [176][177][178]. In fact, on very general grounds one can argue for a lower bound not for η/s, but for ηT³/s² [177,178].
From the AdS/CFT side there are also doubts about the universality of the 1/(4π) bound. Studies of gravity models in which, besides the leading-order AdS action, there are corrections (higher-order terms in the curvature, other fields like a dilaton) lead to the conclusion that in these models the η/s ratio can go below 1/(4π) [179][180][181][182]. It is not clear whether in a general, consistent gravity model a lower bound exists at all [183,184].
From the experimental side it seems that the η/s ratio of the strongly interacting plasma is O(1-2)/(4π) [147,185,186]. Compared with the η/s values of other materials, like water or even superfluid 4He, this value really seems very low [187]; but if we use a fluidity measure better suited for non-ultra-relativistic matter, then QCD does not seem to be extraordinary [146].
So, summarizing this section: although in quasiparticle systems and in some conformal theories we really expect a lower bound for the shear viscosity, in a strongly interacting matter like QCD there is no well-established proof of it. It is also true that the numerical value 1/(4π) is so small that it is not easy to devise an experimental setup in which this bound could be violated. But since the bound is not a constant of nature, it may happen that some future collider experiment will still violate it.

Field or particle?
The particle-wave duality appears in an interesting aspect in heavy-ion collisions. The classical picture of a particle is a point-like object traveling on a world-line in spacetime; if the particle is free of interaction, the world-line is a straight line (or a geodesic). The free quantum particle, on the other hand, has infinite extension in space as a plane wave.
Interacting particles or waves penetrating and traversing a medium get distorted. The distortion effect depends on the nature of the interaction, its localization and strength. Typically particle-like interactions are extremely localized, not only in space but also in time: the straight world-line receives kicks once in a while. An extended medium, on the other hand, acts for a long time and makes the particle world-line smoothly curved. The same classification for extended waves means changes in the dispersion relation through phase shifts in the former case and an overall change in the latter case. Static and large media, in particular, modify the free-particle dispersion relations (propagators) by inducing a self-energy part, which rescales the effective mass and adds a quasiparticle width. A continuum part in a spectral function, however, is a sign of particle creation and annihilation during the interaction between the quantum objects and the medium.
Traditional high-temperature field theory considers the environment as given, in most cases keeping a sharp value of the temperature. This so-called heat bath is assumed to be God-given, and very few thoughts are dedicated to the problem: where does this temperature come from? What mechanism keeps its value so constant? And how should we describe the QGP if no such mechanism operates?
In principle all thermal effects are results of the same interaction. It is therefore legitimate to seek approaches which do not assume a temperature but calculate it, or at least investigate the effects of its un-sharp values, as a first step by testing some simple distributions. This so-called superstatistical approach [188-194] is based on a particular distribution of the β values in the thermal weight, exp(−βH). The simplest such distribution, having a finite width of β values and allowing only non-negative ones, is the Euler-Gamma distribution. It converts the Boltzmann-Gibbs weight into a Tsallis-Pareto form,

⟨e^{−βH}⟩ = ∫_0^∞ dβ (nT)^n/Γ(n) β^{n−1} e^{−nTβ} e^{−βH} = (1 + H/(nT))^{−n},   (91)

often experienced in particle spectra measured in high-energy collisions. Here ⟨β⟩ = 1/T and Δβ/⟨β⟩ = 1/√n. In the n → ∞ limit the distribution of β values narrows to a Dirac delta, and the above statistical weight reduces to the well-known Boltzmann factor. Candidates for physical mechanisms which would explain the occurrence of the temperature T in a dynamical system should therefore also explain whether or not the width Δβ is small enough under the given circumstances. Such fluctuations are indeed observable in experimental data and were studied in the non-extensive framework [195-197].
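This superstatistical average can be checked numerically: sampling β from an Euler-Gamma distribution with mean 1/T and relative width 1/√n, and averaging the Boltzmann factor, reproduces the Tsallis-Pareto weight. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

def tsallis_weight(H, T, n):
    """Tsallis-Pareto weight (1 + H/(n*T))**(-n); -> exp(-H/T) as n -> inf."""
    return (1.0 + H / (n * T)) ** (-n)

# Euler-Gamma distributed inverse temperature: mean <beta> = 1/T,
# relative width Delta_beta/<beta> = 1/sqrt(n)
T, n = 1.0, 4.0
rng = np.random.default_rng(42)
beta = rng.gamma(shape=n, scale=1.0 / (n * T), size=1_000_000)

H = np.array([0.5, 1.0, 3.0])   # illustrative subsystem energies
superstat = np.exp(-np.outer(beta, H)).mean(axis=0)
```

The Monte Carlo average agrees with the closed Tsallis-Pareto form at the per-mille level, and the n → ∞ limit of the weight approaches the Boltzmann factor.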
A thermodynamical interpretation of the parameters T and n can be given starting from Einstein's idea of relating the statistically evenly occupied phase-space volume to the notion of entropy [198,199],

S(E) = ln Ω(E).

Picking up a subsystem with energy H out of the total energy E then has a probability weighted by the environmental factor, assuming no correlation other than that induced by fixing the total energy to E. In this microcanonical view the environmental factor is

w(H) = Ω(E−H)/Ω(E) = e^{S(E−H)−S(E)}.

In the expansion for H ≪ E up to quadratic terms one obtains

w(H) ≈ exp(−H S′(E) + (H²/2) S″(E)).

Comparing this result with eq. (91) one interprets the parameters as

1/T = ⟨β⟩ = S′(E),   1/n = T²Δβ² − 1/C,

with C = dE/dT being the total heat capacity of the system. In the infinite reservoir (thermodynamical) limit, C → ∞, one gets back the width of the Euler-Gamma distribution, 1/n = T²Δβ². On the other hand, for n → ∞, i.e. considering a sharp β value, one obtains the textbook result T²Δβ² = 1/C for the variance.

Indeed, near its maximum the Euler-Gamma distribution, like many others, is well approximated by a Gaussian. The Gaussian assumption, however, cannot be extended to negative β values; therefore the Euler-Gamma assumption is superior. In interacting, "real" systems the natural dynamics itself must determine the actual distribution of β values; for any finite-sized system a finite width of this distribution is compulsory.

The task of understanding the emergence of temperature and other thermal effects, and of calculating the actual values of T and C, and perhaps n, in existing physical systems sharpens even more in quantum (field) theory. If β, or for that matter any function of it, is associated to an operator in the quantum description, then its width cannot be narrowed down to zero in practice (only theoretically, at the cost of an infinite width for other, non-commuting operators). Here the question to be answered is how to describe a statistical operator part during the quantum evolution which, under certain approximations to the physical reality, behaves as if it factorized into a Boltzmann-Gibbs-like and a unitary factor.
For this purpose, considering quantum states with a finite width can be of help. In contrast to point particles (zero width in location, infinite width in momentum) and to plane waves (infinite width in location, zero width in momentum), more general states, in particular coherent states, look more realistic. In the rest of this section we give a short overview of the properties of coherent states and show a possible, unconventional interpretation of being thermal.

State labels
We consider generalized coherent states defined by

|z⟩ = Σ_{n=0}^∞ √(p_n(t)) e^{inΘ} |n⟩,   (98)

with z = √t e^{iΘ}. Such constructions are called "nonlinear coherent states" in quantum optics. This state overlaps with the n-quantum state, the overlap probability being

|⟨n|z⟩|² = p_n(t).   (99)

From the normalization of the coherent state |z⟩ a normalization of the factors p_n(t) follows:

Σ_{n=0}^∞ p_n(t) = 1.   (100)

The expectation value of any function of the number operator, n̂ = a†a, is given by

⟨z|f(n̂)|z⟩ = Σ_{n=0}^∞ f(n) p_n(t).   (101)

This construction ensures that p_n(t) is a probability distribution in n.
We also construct a complete set based on coherent states as follows:

∫_0^∞ dt ∫_0^{2π} dΘ/(2π) |z⟩⟨z| = Σ_{n,m=0}^∞ ∫_0^∞ dt √(p_n(t) p_m(t)) |n⟩⟨m| ∫_0^{2π} dΘ/(2π) e^{i(n−m)Θ}.   (102)

Here, after integrating over Θ, one obtains a Kronecker δ_nm under the double sum and ends up with a single sum:

∫_0^∞ dt ∫_0^{2π} dΘ/(2π) |z⟩⟨z| = Σ_{n=0}^∞ [∫_0^∞ dt p_n(t)] |n⟩⟨n|.   (103)

A sufficient condition for completeness is ∫_0^∞ dt p_n(t) = 1; wishing a complete set for all possible Fock spaces based on |n⟩, it is also necessary. This makes p_n(t) a probability distribution in t as well. One may consider the distribution in the number of quanta, n, as the primary statistics, and the distribution in the coherent-state parameter t = |z|² as superstatistics.
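The double normalization, Σ_n p_n(t) = 1 at fixed t and ∫_0^∞ dt p_n(t) = 1 at fixed n, can be checked numerically for two standard examples, the Poisson and the negative binomial distributions (both discussed below); parameter values are illustrative:

```python
import math
import numpy as np
from scipy.integrate import quad

def p_poisson(n, t):
    """Glauber case: Poisson in n, Euler-Gamma in t."""
    return t**n / math.factorial(n) * math.exp(-t)

def p_nbd(n, t, k):
    """Negative binomial case: NBD in n, Euler-Beta in t (f = t/k)."""
    f = t / k
    return math.comb(n + k, n) * f**n / (1.0 + f) ** (n + k + 1)

k, t0, n0 = 4, 2.7, 3
sum_n_poisson = sum(p_poisson(n, t0) for n in range(100))
sum_n_nbd = sum(p_nbd(n, t0, k) for n in range(400))
int_t_poisson, _ = quad(lambda t: p_poisson(n0, t), 0.0, np.inf)
int_t_nbd, _ = quad(lambda t: p_nbd(n0, t, k), 0.0, np.inf)
```

All four quantities come out equal to one, so both families qualify as p_n(t) in the construction above.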

Operator eigenstate
While the best-known coherent states show a Poisson statistics in n and are eigenstates of the annihilation operator a, the generalized versions are eigenstates of a more complex operator. The construction of this operator is related to the problem of regularizing the phase operator in quantum optics [200-203].
We require that |z⟩, defined as in eq. (98), be an eigenstate with eigenvalue z of the following operator:

F = a g(n̂).   (104)

Here a is the annihilation (a† the creation) operator, n̂ = a†a is the number operator, and g(n̂) is a yet unspecified function of it. The action of the operator F on the general coherent state gives

F|z⟩ = Σ_{n=1}^∞ g(n) √(n p_n(t)) e^{inΘ} |n−1⟩,   (105)

which can be re-indexed to

F|z⟩ = Σ_{n=0}^∞ g(n+1) √((n+1) p_{n+1}(t)) e^{i(n+1)Θ} |n⟩.   (106)

One can derive a recursion law by comparing this result with

z|z⟩ = Σ_{n=0}^∞ √(t p_n(t)) e^{i(n+1)Θ} |n⟩.   (107)

In conclusion the following relation has to be satisfied:

g(n+1) √((n+1) p_{n+1}(t)) = √(t p_n(t)).   (108)

For a known quantum-number distribution p_n(t) this specifies the necessary function g(n); alternatively, for a given g(n), which fixes the operator F, it serves as a recursion rule to obtain the distribution p_n(t). The recursion is solved by

p_n(t) = p_0(t) t^n / ∏_{j=1}^n [j g(j)²].   (109)

Finally p_0(t) can be obtained from the normalization condition. At the end of this reconstruction the completeness constraint, ∫_0^∞ dt p_n(t) = 1, also has to be checked.
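The recursion above can be illustrated for the simplest choice g(n) = 1, where its solution, p_n ∝ t^n/n!, normalized in n, reproduces the Poisson distribution; a minimal numerical sketch:

```python
import math

def p_from_recursion(t, g, nmax):
    """Solve p_n = p_0 * t^n / prod_{j=1..n} (j * g(j)^2), then fix p_0
    by the normalization sum_n p_n = 1 (truncated at nmax)."""
    unnorm = [1.0]
    for n in range(1, nmax + 1):
        unnorm.append(unnorm[-1] * t / (n * g(n) ** 2))
    p0 = 1.0 / sum(unnorm)
    return [p0 * u for u in unnorm]

t = 2.3
p = p_from_recursion(t, lambda n: 1.0, nmax=60)
poisson = [t**n / math.factorial(n) * math.exp(-t) for n in range(61)]
```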

Glauber and negative binomial states
The best-known, traditional coherent state is defined by g(n) = 1. This results in a Poisson distribution in n and an Euler-Gamma distribution in t:

p_n(t) = (t^n/n!) e^{−t}.   (110)

In this case |z⟩ is an eigenstate of the annihilation operator, F = a. The negative binomial coherent state is based on the negative binomial distribution (NBD),

p_n(t) = C(n+k, n) (t/k)^n / (1 + t/k)^{n+k+1},   (111)

with C(a, b) denoting the binomial coefficient. It is a normalized NBD in n and, at the same time, an Euler-Beta distribution in t. From the recursion eq. (108) one obtains the necessary g(n) function modifying the operator a to F = a g(n̂), so that this NB coherent state satisfies F|z⟩ = z|z⟩.

One recognizes a one-dimensional boost property behind such NB states when introducing the rapidity-like notation t/k = sinh²ζ. Using this notation, the distribution and the Fock representation of such a state can be rewritten accordingly. With the velocity variable v = tanh ζ, the corresponding Lorentz factor is given by γ = cosh ζ, and the overlap probability between two NB coherent states reminds us of a Tsallis-Pareto distribution with the energy variable replaced by a relative velocity squared in a (2+1)-dimensional vector notation. The possibility of using such a notation is related to the su(1,1) algebra structure of the operators forming an NB coherent state, an interesting connection to be discussed below. For this purpose it is enlightening to express the overlap between NB coherent states in terms of the complex numbers z₁ and z₂, introducing an appropriate vector of them. The NB state overlap written this way converges, for k → ∞, to the known overlap between Glauber coherent states. It is an interesting question whether we can connect some physical property of particle-like objects to this quantity.
Interpreting the v_i's as magnitudes of velocities, and obtaining the corresponding Lorentz factors from the energy-to-mass ratio of point-like massive objects, the overlap between two NB states decays asymptotically as a power law of the relative velocity of relativistic massive particles moving in a plane.

It is useful to extend the notation of NB states with the index k, referring to the k parameter of the underlying NBD. From now on we use the notation |z_k, k⟩, with z_k = √(kf) e^{iΘ}, for the NB states and

p_n^{(k)} = C(n+k, n) f^n/(1+f)^{n+k+1}   (120)

for the NBD. This distribution provides an average number of quanta ⟨n⟩ = (k+1)f.

We are primarily interested in the effect of annihilating a quantum from an NB state. The annihilating ladder operator acts as

a|z_k, k⟩ = Σ_{n=1}^∞ √(n p_n^{(k)}) e^{inΘ} |n−1⟩,   (122)

which can be re-indexed into

a|z_k, k⟩ = Σ_{n=0}^∞ √((n+1) p_{n+1}^{(k)}) e^{i(n+1)Θ} |n⟩.   (123)

Consider now that

(n+1) p_{n+1}^{(k)} = (k+1) f p_n^{(k+1)}.   (124)

Here we recognize z_{k+1} = √((k+1)f) e^{iΘ} as a factor and arrive at the elegant result

a|z_k, k⟩ = z_{k+1} |z_{k+1}, k+1⟩.   (125)

In the k → ∞ limit the familiar Glauber coherent state emerges again, as an eigenstate of the annihilation operator a. For an NB state, on the other hand, the annihilation of a primary quantum is represented by an upgrade of the parameter k by one, times the corresponding complex eigenvalue z_{k+1}.

This helps us to answer the question of which operator an NB state is an eigenstate of. Comparing the above result with the action of the operator √n̂ one recognizes the NB eigenvalue equation (128). Based on this it is now easy to work out the corresponding NB algebra: the operator combination occurring in the eigenvalue equation (128) defines K₀ = n̂ + (k+1)/2 as a linear expression in n̂.
The commutators among the K-operators form an SU(1,1) algebra:

[K₀, K_±] = ±K_±,   [K₊, K₋] = −2K₀.

The Casimir operator is given as

C = K₀² − (K₊K₋ + K₋K₊)/2.
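The k-shifting action of the annihilation operator described above, a|z_k, k⟩ = z_{k+1}|z_{k+1}, k+1⟩ with z_{k+1} = √((k+1)f) e^{iΘ}, can be verified numerically on truncated Fock-amplitude vectors. A minimal sketch (truncation level and parameter values are illustrative):

```python
import math
import numpy as np

def nb_amplitudes(k, f, theta, nmax):
    """Fock amplitudes c_n = sqrt(p_n^(k)) * e^{i n theta} of |z_k, k>,
    truncated at n = nmax."""
    n = np.arange(nmax + 1)
    p = np.array([math.comb(m + k, m) * f**m / (1.0 + f) ** (m + k + 1)
                  for m in n])
    return np.sqrt(p) * np.exp(1j * n * theta)

k, f, theta, nmax = 3, 0.8, 0.7, 300
c = nb_amplitudes(k, f, theta, nmax)

# action of the annihilation operator: (a c)_n = sqrt(n+1) * c_{n+1}
ac = np.sqrt(np.arange(1, nmax + 1)) * c[1:]

# the k-shifting ladder result: a|z_k, k> = z_{k+1} |z_{k+1}, k+1>
z_next = math.sqrt((k + 1) * f) * np.exp(1j * theta)
rhs = z_next * nb_amplitudes(k + 1, f, theta, nmax - 1)
```

The two amplitude vectors agree to machine precision, confirming the identity (n+1) p_{n+1}^{(k)} = (k+1) f p_n^{(k+1)} behind it.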

Creation of an NB state from Fock vacuum
Another important question is how to create an NB state from the Fock vacuum. For the ordinary coherent state a unitary operator, also called the Weyl operator, does the job. For a more general analysis we again consider general functions of the number operator n̂, mixed with annihilation and creation operators. This describes a coupling to a field with field-energy-dependent coefficients.
The known operator identity

e^{A+B} = e^A e^B e^{−λ/2},   valid for [A, B] = λ with [A, λ] = 0 and [B, λ] = 0,

helps us to obtain the necessary form for |z_k, k⟩ = U|0⟩. We choose A = α z f(n̂) a† and B = −β z* a/f(n̂); note that B is not simply −A†. The commutator (134) leads to λ = |z|² αβ. We seek our evolution operator from the Fock vacuum to an NB state in this product form. Here e^B|0⟩ = |0⟩ due to B|0⟩ = 0, since B in the exponent is proportional to the annihilation operator.

Expanding the exponential e^A on the vacuum one obtains the Fock decomposition of U|0⟩. Since we look for the form with z = √t e^{iΘ}, the NB state is achieved when using f(n̂) = ξ√(n̂ + k). The normalization of the resulting NB state then yields the negative binomial distribution. For the superstatistics normalization we utilize the Euler-Beta integral, which after changing the integration variable from t to w should become unity; expressing w(t), this requirement identifies the remaining factor Φ.

After these manipulations only α, β and ξ remain undetermined. With a purposeful choice of them, the natural logarithm of the evolution operator is neither Hermitean nor anti-Hermitean, and therefore U can only be interpreted as a combination of a unitary evolution and a statistical operator. This underlines the necessity of thinking in complex time paths, or equivalently of assuming an environmental factor which is not unitary. The anti-Hermitean part of ln U gives a guess for the evolution Hamiltonian, while the Hermitean part gives the statistical factor. Please note that in this case the statistical environmental operator contains a factor of the Tsallis-Pareto form, with t = |z|² interpretable as the average number of quanta in an NB state.

Physical sources of NB states
Various physical mechanisms may lead to an NBD in the number of particles produced in high-energy collisions. The NB coherent state reviewed above is just one of them; we did not specify more closely the Hamiltonian which produces such a state. Typically, as described in eqs. (151) and (152), a linear field coupling of the particle modes, described by the phase factors e^{±iΘ} and the annihilation and creation operators, is necessary for the desired result. However, unlike for the classical Glauber coherent state, the coupling (or equivalently the amplitude of the external field creating the particles) does depend on the number of particles to be produced. In particular, for the NB coherent state this dependence is algebraic, involving the square-root function, as specified in eq. (153). This is typically a multiparticle effect; the mathematical expression involves infinitely many quanta for the same mode.

There are further hints and speculations about the origin of negative binomial particle distributions. A simple phase-space cell statistics, "throwing" quanta into phase-space cells repeatedly in unrestricted number, as bosons behave, predicts a combinatoric factor in the probability of finding exactly n particles in k cells. The Pólya distribution extends this picture to the same probability when the investigated system of n and k is part of a bigger system consisting of N particles altogether in K total phase-space cells. In the large-environment limit, K → ∞ and N → ∞ while f = N/K is kept finite, one arrives at the NBD distribution (120). This approach predicts that k + 1 = ⟨n⟩/f ∝ N_part, so high-multiplicity events are closer to Poissonian, if f is universal.
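The Pólya-to-NBD limit can be illustrated numerically. The particular sub-cell counting below (n bosons in k+1 marked cells out of K, a negative hypergeometric form) is our assumption for the illustration; in the K, N → ∞ limit at fixed f = N/K it approaches the NBD:

```python
import math

def log_choose(a, b):
    """log of the binomial coefficient, via lgamma to avoid overflow."""
    return (math.lgamma(a + 1) - math.lgamma(b + 1)
            - math.lgamma(a - b + 1))

def polya(n, k, N, K):
    """Probability of n quanta in k+1 marked cells out of K, with N
    bosons altogether (negative hypergeometric counting, our assumption)."""
    return math.exp(log_choose(n + k, n)
                    + log_choose(N - n + K - k - 2, N - n)
                    - log_choose(N + K - 1, N))

def p_nbd(n, k, f):
    """Limiting NBD, eq. (120)-style with mean (k+1)f."""
    return math.comb(n + k, n) * f**n / (1.0 + f) ** (n + k + 1)

k, f = 3, 2.0
K = 50_000
N = int(f * K)
```

For K of order 10^4-10^5 the finite-size probabilities already agree with the NBD at the per-cent level.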
In the derivation of an NB state in quantum optics, the f parameter occurs as a squeeze parameter [203,204]. Analyzing bosonic wave-packet statistics, singly filled bosonic states give a correlation factor of 2 at zero relative momentum; with M-fold occupation of the same state it reduces to 1 + 1/M. The logarithmic cumulants for an NB state, defined by G(z) = Σ_n p_n z^n and ln G(z) = Σ_{n=1}^∞ C_n (z^n − 1), are

C_n = ((k+1)/n) (f/(1+f))^n.

This looks like a (k+1)-fold overload of the simple Bose case, which is obtained for k = 0 [205,206].
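For the NBD one finds the generating function G(z) = (1 + f(1−z))^{−(k+1)}, whose logarithmic cumulants come out as C_n = (k+1)(f/(1+f))^n/n, indeed (k+1) times the k = 0 Bose case. A short numerical cross-check (parameter values illustrative):

```python
import math

def ln_G(z, k, f):
    """log of the NBD generating function G(z) = (1 + f(1-z))^{-(k+1)}."""
    return -(k + 1) * math.log(1.0 + f * (1.0 - z))

def cumulant(n, k, f):
    """Logarithmic cumulants C_n = (k+1)/n * (f/(1+f))^n."""
    return (k + 1) / n * (f / (1.0 + f)) ** n

def cumulant_series(z, k, f, nmax=2000):
    """Partial sum of sum_n C_n (z^n - 1), which should equal ln G(z)."""
    return sum(cumulant(n, k, f) * (z**n - 1.0) for n in range(1, nmax + 1))

k, f = 3, 1.5
checks = {z: (cumulant_series(z, k, f), ln_G(z, k, f))
          for z in (-0.5, 0.2, 0.9)}
```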
In perturbative QCD calculations [207-209] KNO scaling combined with DGLAP evolution gives a nearly NBD multiplicity distribution with constant k, related to Λ_QCD and expressed through the variance of n. In experimental findings the NBD is in fact slightly violated, so more sophisticated distributions are also discussed.
In a minimal-information statistical approach, the Tsallis-Pareto distribution, instead of the Boltzmann-Gibbs one, is viewed as the maximal entropy state. Utilizing Tsallis' or Rényi's entropy formula [189,210], the usual canonical constraint on the average energy indeed leads to a Tsallis-Pareto energy distribution. Using further assumptions about reservoir fluctuations, further entropy formulas can be constructed as expectation values of formal logarithms, behaving additively [198].
Finally we have to mention the color glass condensate picture [31-34], where particles are produced directly from the decay of a nearly classical non-Abelian field. A glittering glasma with k-fold ropes also delivers an NBD in the final state. In this model the main NBD parameter is calculated as k = κ(N_c² − 1)Q_s²R²/2, with κ the string constant, N_c the number of colors, Q_s the saturation energy scale and R the radius in the transverse plane. Its value is of the order of the number of flux tubes; the actual estimates in ref. [211] reach a few hundred for heavy-ion collisions at RHIC.

Lattice field theory with canonical Tsallis distribution
Lattice field theory is still the only systematic nonperturbative computational tool for solving physical problems related to the strong interaction. While at its dawn it produced only qualitative insight into the characteristic features of strongly interacting matter, in the last decades it has accomplished a lot and produced reliable quantitative predictions for equilibrium quantities such as the transition temperature or the equation of state. For example, a precise study of the transition temperature has been carried out with staggered fermions, and the results from different groups agree quite remarkably [212,213]. It has been shown that the deconfining transition is a crossover for physical quark masses [214], and that the deconfining and chiral transitions essentially coincide.
Despite the remarkable success of lattice calculations regarding equilibrium quantities, a direct investigation of out-of-equilibrium properties is not possible using conventional lattice gauge theory methods. There are attempts to describe near-equilibrium properties, especially via spectral functions. For a more detailed overview we direct the reader to a recent review [215] on this topic.
Here we present a completely different approach within the lattice gauge theory framework, based on generalized, non-extensive thermodynamics. Non-extensive thermodynamics is regarded as an effective theory for non-equilibrium effects and long-range correlations [216]. One possible manifestation of it in the language of probability distributions is the Tsallis distribution,

w_c(E) = (1 + βE/c)^{−c},

where E denotes the energy of a state and β is the inverse temperature. This power-like function restores the Gibbs factor in the c → ∞ limit:

lim_{c→∞} (1 + βE/c)^{−c} = e^{−βE}.

Observed particle spectra in high-energy pp and heavy-ion reactions can be well described with the above distribution [217,218]. Via the Gamma distribution the Tsallis weight can be rewritten as

(1 + βE/c)^{−c} = ∫_0^∞ dβ′ (c/β)^c/Γ(c) β′^{c−1} e^{−cβ′/β} e^{−β′E},

which is a specific case of the superstatistical approach [219,220]. That means that averages with the Tsallis distribution can be calculated from the Gibbs expectation values at different β's: one simply has to average them over the inverse temperature β′, which obeys a Gamma distribution. The corresponding partition functions are connected as

Z_c(β) = ∫_0^∞ dβ′ γ_c(β′) Z(β′),

with γ_c(β′) the above Gamma distribution of mean β. As a consequence, in order to take non-equilibrium effects into account, it is possible to apply the Tsallis distribution in lattice gauge theory simulations instead of the Gibbs one. The calculation of expectation values can be realized using the superstatistical method. For an exploratory study the simplest, pure SU(2) theory has been chosen and investigated so far [221-223].
In a lattice simulation the physical temperature is determined by the period length in the Euclidean time direction: β = N_t a_t. It is clear that one cannot use a resampling technique on the traditional, Gibbs-distributed configurations in order to evaluate the Tsallis averages: those configurations are available for a few integer N_t's only, therefore the necessary coverage of the Gamma distribution is not possible. However, if the time-like lattice spacing a_t follows a Gamma distribution, so do the physical inverse temperature values β = N_t a_t = 1/T at a given, fixed N_t. We assume that the mean value of a_t is equal to the space-like lattice spacing a_s. Then the ratio θ = a_t/a_s follows a normalized Gamma distribution with mean value 1 and width 1/√c. Inspecting ZEUS e⁺e⁻ data one obtains c ≈ 5.8 ± 0.5; in the numerical calculations outlined next the value c = 5.5 has been used.
The form of eq. (165) for gauge fields follows accordingly. In general one needs to calculate the Tsallis expectation value of an observable Â[U] over the lattice field configurations U; if Â has the form Â = θ^v A, one obtains the θ-weighted average of eq. (167). Here the lattice action can generally be written as

S(θ) = θ a + b/θ,

where a = S_ss[U] contains the space-space oriented plaquettes and b = S_ts[U] the time-space oriented ones. In the c → ∞ limit the Gamma distribution approximates δ(θ − 1), and one gets back the traditional lattice action S = a + b and the corresponding averages. For a given finite c one can exchange the θ integration and the path integral and obtain exactly the power-law-weighted expression.
One can indeed explicitly perform the θ integration and derive an effective action, which turns out to be the logarithm of a Bessel function. As it is not particularly easy to carry out simulations with that effective action, we consider another solution here.
According to eq. (167), it is possible to modify the direct numerical update algorithms used in the usual canonical Gibbs simulations to our superstatistical ensemble. We use the Metropolis algorithm here, but the heat-bath algorithm should also work for pure gauge fields. Now there is an additional variable, the anisotropy, which has to be generated according to a Gamma distribution with the Tsallis parameter c. Simulations for different c values have been performed in the range 5.5-1024.0, the lower value being relevant in high-energy collision experiments, while c = 1024 is meant to approximate the c → ∞ (Gibbs) limit. At a given c the numerically generated random θ anisotropy values show that for a really smooth reconstruction of the Euler-Gamma distribution one needs random values in the order of 10^4-10^5. Of course, this criterion considerably elongates the simulations and makes them computationally more expensive.
Once an anisotropy parameter is given, the standard Metropolis sweep is applied to update the gauge fields. After a sweep through the whole lattice a new θ is thrown, and so on. The ensemble averages are calculated by averaging the gauge configurations over the generated θ values in the standard way. The numerical calculations show that the generalized Monte Carlo method outlined here is stable and functioning. Simulations have been performed on various lattices up to 10³ × 2 and 10⁴ sizes, and different thermodynamical quantities, including the equation of state, have been estimated [221,222] via the calculation used in the pioneering work of ref. [224]. The c = 1024 results can also be compared directly to those obtained in ref. [224].
The deconfining transition can also be studied within the superstatistical framework by investigating the behavior of the Polyakov loop. Results are illustrated in fig. 13 for c = 5.5 (a realistic value from p_T spectra) and for c = 1024 (which effectively represents the traditional Gibbs limit). The Polyakov loop expectation values and the fourth-order cumulants have been calculated, and the critical couplings have been determined using a functional fit to the data. At c = 1024 the result is x_c = 4/g_c² = 1.85 (for both quantities), in good agreement with the known result for SU(2) LGT on N_t = 2 lattices [225,226]. At c = 5.5 one obtains x_c = 2.12 with the same type of fits (again consistently for the Polyakov loops and the cumulants), which means that the transition temperature moves towards higher values in this case. We are interested in the size of this change. Based on ref. [226] the critical couplings can be translated into temperatures: denoting the critical temperature of the canonical Gibbs ensemble by T_c, for c = 1024 the transition occurs (trivially) at T_c, while for c = 5.5 one gets T ≈ 1.3 T_c.
This result, taken at face value, suggests a considerable increase of the transition temperature, supposedly due to non-equilibrium effects. However, one could argue that during the simulation ⟨θ⟩ = 1, i.e. it is the inverse temperature which has the mean value 1/T. Due to the properties of the Gamma distribution, the mean of the temperature itself is then determined by ⟨1/θ⟩ = c/(c−1) ≈ 1.22. Nevertheless, the dynamical effect is definitely larger than the trivial statistical factor of 1.22 obtained by this argument.
One should make a further remark here. In the above superstatistical Monte Carlo method the anisotropy θ is actually the bare anisotropy. This parameter should be regarded as an additional coupling (or, one can imagine having different coupling parameters in the space-like and time-like directions on the lattice), and it should be renormalized accordingly in order to obtain physical results from the simulation. In principle this renormalization can be incorporated in the above algorithm if the functional relation between the bare and renormalized anisotropies is known. Then an additional step is required in the update sweep: after generating the actual anisotropy parameter θ according to the Gamma distribution (which should be called the physical anisotropy), one should calculate the bare anisotropy θ₀ from this functional relation and perform the update with θ₀ (which in that case is not Gamma distributed any more).
For this modification one would actually need to know θ 0 as a function of θ in the whole (0, ∞) interval to cover the whole range of the Gamma distribution. As anisotropic lattices are also used in the usual simulations, this function has been determined [227], although only for θ ≤ 1. The reason is practical: anisotropic lattices are used mainly for simulating systems at finite (high) temperature and for that purpose θ ≤ 1 suffices. As a consequence, for renormalizing the superstatistical lattice gauge theory, first the above function should be determined for the region θ > 1. That could be nontrivial, although some relevant physical quantities do not depend strongly on the temperature below the deconfining transition.
Because of these difficulties, renormalization is not considered here. Clearly, the procedure would influence the value of the transition temperature. Nevertheless, we think that the qualitative results obtained from the superstatistical Monte Carlo simulation are still reliable. Therefore it is safe to conclude that experiments aiming to produce quark matter under circumstances characteristic of high-energy collisions should consider the possibility of an up to about 30 per cent higher T_c than predicted by traditional Monte Carlo lattice calculations. (Let us mention that similar effects, although in the opposite direction, are reported on the basis of completely different studies: including more and more quark fields in the QCD Lagrangian [228], or applying strong external magnetic fields [229], the critical temperature seems to decrease.)

Conclusion and open problems
In conclusion, we gave a review of several "exotic" methods dealing with the QCD deconfinement transition from hadron- to quark- and gluon-dominated matter and back, picking out both phenomenological model approaches and developments in computational technique. Our main line of thought, dwelling on questions about the complexity of the structure of the quark-gluon plasma (QGP), was accompanied by playful experimentation with a possible extension of the Boltzmann-Gibbs canonical view of thermodynamics to non-equilibrium phenomena and finite-reservoir effects.
Keeping as our main subject the exploration of the well-established high-temperature field theory approach, we contrasted a number of phenomenological approaches, including speculative physical models, with the most important findings. There is no sudden switch from the hadronic world to an ideal quark-gluon plasma. The complex universe of a QGP in the range T_c-4T_c, also called sQGP by some, is full of beautiful physical effects. These effects go far beyond an effective quasiparticle mass more or less proportional to the temperature. The increasing thermal width and the occurrence of continuum parts in the physical spectral function have competing effects on the total pressure; their balance results in an equation of state describing a matter softer than the radiative Stefan-Boltzmann gas of massless constituents.
This phenomenon can be interpreted within a string-like interaction picture, considering the (color-)density-dependent contribution to the free energy density. Likewise, an assumption of emerging and melting quasiparticle peaks in the spectral function, as well as a higher correlation effect on the generated mass terms, agrees both with the leading-order pressure corrections in perturbative QCD and with numerically obtained lattice Monte Carlo data.
Beyond the equilibrium view, very little can be done in exact quantum field theory. The linear response approximation allows us to wring out some information on transport properties near thermal equilibrium, the most famous being the shear viscosity coefficient. Here, too, some traps of thinking occur; we have tried to point out a few of them in this review. On the other hand, we have introduced arguments in favor of considering non-Gibbsean weighting of configurations, as if we were dealing with nonlinear (non-Glauber) quantum coherent states of radiation or, with a very similar effect, with an unsteady environment resembling an Euler-Gamma distributed inverse temperature, i.e. superstatistics.
Certainly we have left a number of problems undisclosed. Just to list a few, we formulate questions for future consideration emerging from high-energy experimental results: Is most of the entropy produced initially, or during a fast hadronization at the end of these reactions? Is there any real thermalization (even if no sharp temperature is possible in an event-averaged ensemble of measured data), or are all temperature effects just an illusion, merely reflecting an Unruh-like temperature (acceleration based, with no heat container)? How can one produce dynamical states closely, but not exactly, resembling thermal equilibrium states without ever reaching equilibrium?
We close this short review with the hope that the Reader could find as much enjoyment in passing these ideas through his/her mind, as the Authors definitely did.
This work has been supported by the Hungarian National Research, Development and Innovation Office (NKFIH) under the contract numbers OTKA 104260 and 104282. This work has also been supported by the Helmholtz International Center for FAIR within the LOEWE program launched by the State of Hesse. TSB is supported by NKFIH/OTKA project Nos. 104238 and 104260.
Open Refereeing Upon invitation by the communicating editor, referees and authors of this article have agreed to release the original, unedited material of the refereeing procedure for the benefit of the reader. This material is contained in the electronic supplementary material of this article.

Open Access
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.