Heavy-flavor production and medium properties in high-energy nuclear collisions - What next?

Open and hidden heavy-flavor physics in high-energy nuclear collisions is entering a new and exciting stage, in which the new experimental results can be understood more clearly and linked directly to advances in lattice Quantum Chromodynamics (QCD). Recent results from experiments and theoretical developments regarding open and hidden heavy-flavor dynamics were debated at the Lorentz Workshop "Tomography of the quark-gluon plasma with heavy quarks", held in October 2016 in Leiden, the Netherlands. In this contribution, we summarize the common understandings identified there and the strategies developed for the upcoming five years, which aim at achieving a profound knowledge of the dynamical properties of the quark-gluon plasma.


Introduction
Over the last decade, different experimental observables have been used to characterize the quark-gluon plasma (QGP) [1]. Heavy quarks play a crucial role as a probe thanks to their large mass with respect to the temperature of the plasma of gluons and light quarks. They are ideal probes of the QGP properties [2,3] because they are produced in the very early stage of the collision and thus test the entire space-time evolution of the system. The availability of a heavier (bottom) and a lighter (charm) flavor offers the unique possibility to probe different stages of this space-time evolution. For bottom, the thermalization time is likely to be larger than the lifetime of the plasma, so that such a non-fully thermalized probe carries information starting from the earliest moments after its initial creation. For charm, on the other hand, there is an increasing number of experimental indications of a high degree of equilibration. This in turn means that most information on the evolution history is lost and the late stages around freeze-out dominate the observed behavior. In addition, from the theoretical point of view, the large mass of heavy quarks makes the evaluation of quarkonium correlators and transport coefficients feasible directly from first-principles QCD calculations.
The experimental results from the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have surprisingly shown a large suppression of the transverse-momentum-dependent nuclear modification factor R_AA of heavy-quark hadrons, which is defined as the ratio of the yields in AA and pp collisions, scaled by the average number of binary nucleon-nucleon collisions. In addition, a large heavy-flavor elliptic flow v_2 has been observed in heavy-ion collisions. This puts additional pressure on theoretical models to reproduce both observables at the same time. Similar to the developments in the area of photon yields and flow, this fruitful challenge already helps theorists to gain a better understanding of the relevant physical processes required in their models of heavy-flavor production in heavy-ion collisions.
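The definition of R_AA can be made concrete with a short numerical sketch; all spectra and the value of the average number of binary collisions below are toy placeholders, not measured data.

```python
import numpy as np

# Toy illustration of the R_AA definition; the spectra and <N_coll> are
# made-up placeholders, not experimental values.
pt = np.linspace(1.0, 20.0, 50)        # transverse momentum grid (GeV/c)
dN_pp = pt ** -6.0                     # toy pp yield per event (power-law spectrum)
n_coll = 400.0                         # toy average number of binary NN collisions
dN_AA = 0.3 * n_coll * dN_pp           # toy AA yield: scaled pp yield, suppressed

R_AA = dN_AA / (n_coll * dN_pp)        # ratio of yields, scaled by <N_coll>
print(R_AA.min(), R_AA.max())          # flat suppression of about 0.3 in this toy
```

In real measurements the suppression is strongly p_T dependent, which is why R_AA is reported differentially in transverse momentum.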
While the measurements of the dynamics of heavy quarks in the medium became feasible in the last decade, the physics of quarkonium production and dissociation is historically one of the main probes of the existence of the QGP and has been studied for nearly thirty years.
The new experiments at the LHC and their relation to the results from RHIC and the SPS allow clarifying the expected quarkonium melting along with the recombination and regeneration dynamics in the plasma. Moreover, new insights were obtained from recent lattice QCD evaluations of the spectral functions and from the possibility at the LHC to reconstruct experimentally individual excited bottomonium states in the QGP. This opens up the possibility of stringent constraints from both the theoretical and experimental sides for the understanding of quarkonium production in the plasma.
The Lorentz workshop Tomography of the quark-gluon plasma with heavy quarks [4], which was held on 10-14 October 2016 in Leiden, the Netherlands, provided a platform to discuss recent results from experiments and theoretical developments regarding open and hidden heavy-flavor dynamics in high-energy nucleus-nucleus collisions. Three Discussion Groups were set up to debate in detail among the experts the implications and open issues concerning the theoretical and experimental results. They were centered around the following broader questions:
- Which of the proposed energy-loss mechanisms are compatible with the present lattice results?
- What are the next steps for the comparison of the different models for the heavy-quark energy loss in the QGP?
- What are the current crucial experimental issues and limitations? Can we identify key observables?
The paper gives a summary of the main conclusions and recommendations of the Discussion Groups.
2 Challenges in QCD theory related to heavy-flavor probes (Discussion Group 1)

Theoretical efforts (cf. e.g. refs. [1,2,3,5,6,7] for reviews) discussed in the Discussion Group may be broadly grouped in two classes:
- First-principles calculations of the equilibrium (static and real-time) properties of the QGP from QCD.
- Connection of such properties to phenomenological models and experimental results, which requires relating equilibrium processes to non-equilibrium ones (through linear response theory or beyond).
The challenges in the first area concern e.g. efficient ways to evaluate observables using different approaches, such as resummed perturbation theory, effective field theories or lattice QCD calculations. The second area requires efforts to link phenomenological ideas, such as a momentum-dependent diffusion coefficient for heavy quarks or an effective transport coefficient q̂, to theoretically well-defined observables. This second step requires close interactions among theorists, model builders and experimentalists. The joint sessions with other discussion groups focused mostly on these aspects.

Calculations of the equilibrium properties of the quark-gluon plasma from QCD
Among the observables that characterize the physics of the QGP, we considered the following areas:

Bulk thermodynamics
Lattice QCD computations have converged on an equation of state [8] in the temperature range 150-300 MeV for a realistic QCD medium with light u, d and s quarks. They can thus now reliably be used as input to hydrodynamic descriptions of the bulk evolution in heavy-ion collisions [9], which may still be based on older parameterizations [10]. Recently, studies incorporating dynamical charm quarks have also made progress and showed that the effects of charm already set in at temperatures as low as 400 MeV [11], provided that the plasma lives long enough to reach chemical equilibrium, which may be the case in future generations of heavy-ion collision experiments [12].
The lattice can also be used to study thermodynamic fluctuations and correlations with an emphasis on charm-quark physics [13,14]. An interesting outcome is that charm fluctuations and correlations, which are sensitive to the open-charm meson and charmed-baryon sector, are well described by the hadron resonance gas below T_c. Above the transition temperature the hadron resonance gas description is no longer adequate. At the same time, lattice calculations of charm fluctuations and correlations indicate the possible existence of charm-meson and charm-baryon-like resonances in an extended temperature region (T ≲ 200 MeV) [15]. These hadronic-like excitations are different from the vacuum charm hadrons, and their presence above T_c should be taken into account in phenomenological models aiming at the description of heavy-flavor production. For temperatures T > 300 MeV the description of fluctuations and correlations in terms of weak-coupling calculations appears to be sufficiently accurate [16,17,18]. Therefore, models based solely on quark quasi-particles are appropriate in this temperature range.
To make further progress, more accurate lattice results on fluctuations and correlations of charm are needed. Furthermore, quasi-particle models that include both quark and hadron-like degrees of freedom above T_c and fit the lattice results need to be developed. This will ensure an adequate understanding of the relevant degrees of freedom in the charm sector.

Quarkonium and baryon spectroscopy
Progress on QGP spectroscopy takes place on several fronts. The main distinction concerns whether the quarks are light or heavy, with charm being intermediate. Spectral information is contained in temporal correlation functions; since the temporal extent of the Euclidean time direction is inversely proportional to the temperature, higher temperatures are more difficult to analyze.
A complementary way to study in-medium properties of various excitations comes from the analysis of spatial correlation functions. They are less sensitive to e.g. transport but provide information on dissolution of bound states and chiral symmetry restoration (for light quarks). The advantage is that they are not constrained by the finite temporal extent, but the disadvantage is the less direct relation with phenomenological questions [19].

Spectral reconstructions using Bayesian inference
The reconstruction of spectral functions from Euclidean correlation functions represents an ill-posed inverse problem [20]. As the number of available simulation points is usually much smaller than the number of frequency bins in which the spectrum is discretized, naive χ² fits lead to degenerate sets of spectra that all reproduce the correlators within statistical errors. In order to give meaning to the inversion, additional information needs to be provided, and Bayes' theorem may be used to systematically include such "priors". This is achieved by introducing a regulator functional, which competes with the usual χ² fitting functional in order to select the "most probable" spectrum.
Bayesian approaches to spectral function computation, such as the Maximum Entropy Method (MEM) [21] or the more recent Bayesian Reconstruction (BR) [22], differ both in the regulator term used and in their numerical implementation. Importantly, two different implementations may give different results as long as the "Bayesian continuum limit" (infinite number of data points, vanishing statistical errors) has not been taken. In this limit, the problem is well-posed [23] and all methods should agree, but in practice the limit is far from being reached.
Over the past two years the systematic artifacts of the two methods have been much better understood, in particular in cases where only a small number of data points is available, e.g. on the lattice at high temperatures. The standard implementation of the MEM introduces a limitation of the space of functions among which the spectrum may be chosen [24]. This may lead to an overly smooth reconstruction [25]. One way of testing this limitation is to change the size of the search space, e.g. via the number of data points included. This is the approach followed by the FASTSUM collaboration [26,27]. The BR method, on the other hand, uses a different regulator and does not restrict the search space a priori. In turn, it may suffer from the appearance of numerical "ringing" that can mimic the presence of spectral peaks. Here, reconstructions have been compared on test cases where peaked structures are absent, e.g. non-interacting spectra [28].
Differences in the outcomes of the two methods can only be resolved as we proceed towards the Bayesian continuum limit. Several groups are actively working on increasing the statistics of the underlying data sets and/or generating lattices with more finely spaced Euclidean time axes.
It may be noted that non-Bayesian approaches, such as Tikhonov-Morozov [29] and Backus-Gilbert [30], as well as constraints following from the analyticity properties of the underlying correlation functions [31], have recently gained attention. They do not contain an explicit use of prior information, even if certain "regulators" are needed in practice.
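The ill-posedness and the role of a regulator can be illustrated with a toy inversion; the exponential kernel and the plain Tikhonov regulator below are illustrative stand-ins for the actual finite-temperature lattice kernel and the MEM/BR functionals.

```python
import numpy as np

# Toy version of the ill-posed inversion G(tau) = sum_omega K(tau,omega)*rho(omega):
# few Euclidean points, many frequency bins. The exponential kernel and the
# quadratic Tikhonov regulator are illustrative stand-ins only.
rng = np.random.default_rng(0)
omega = np.linspace(0.1, 10.0, 200)              # 200 frequency bins
tau = np.linspace(0.05, 1.0, 16)                 # only 16 "simulation points"
K = np.exp(-np.outer(tau, omega)) * (omega[1] - omega[0])

rho_true = np.exp(-0.5 * (omega - 3.0) ** 2)     # a single smooth peak
G = K @ rho_true + 1e-6 * rng.standard_normal(len(tau))

# Naive chi^2 is degenerate (16 equations, 200 unknowns); adding a regulator
# alpha * ||rho||^2 selects one "most probable" solution out of the degenerate set.
alpha = 1e-6
A = np.vstack([K, np.sqrt(alpha) * np.eye(len(omega))])
b = np.concatenate([G, np.zeros(len(omega))])
rho_fit = np.linalg.lstsq(A, b, rcond=None)[0]

# The regulated solution reproduces the correlator data closely, even though
# the underlying spectrum itself is far from uniquely determined.
print(np.linalg.norm(K @ rho_fit - G))
```

Varying alpha (the analogue of changing the regulator functional) changes which of the degenerate spectra is selected, which is the essence of the method dependence discussed above.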
Quarkonium spectral functions from NRQCD with fully relativistic light quarks

Heavy quarks may be treated within a sequence of effective field theories. There are several lattice groups using NRQCD (non-relativistic QCD) to treat bottom quarks propagating through a quark-gluon plasma with N_f = 2+1 dynamical (i.e. fully relativistic) light flavors [26,27,28].
In the effective thermal field theory setup, NRQCD is the first theory obtained when integrating out ultraviolet degrees of freedom. Since NRQCD relies on the scale separation between the temperature and the heavy-quark mass, and temperatures up to 2T_c ≈ 400 MeV are studied, its application is fully justified. While one group uses lattices with a very fine temporal spacing (a_τ ≈ 0.035 fm, a_s = 3.5 a_τ), their light medium degrees of freedom such as pions are heavier than in nature [26,27]. The other group utilizes lattices designed to provide a realistic medium with almost physical pion mass, but on which the lattice spacing may become very fine (0.07 fm ≲ a_s = a_τ ≲ 0.12 fm), which limits the validity of the NRQCD approximation [28].
At low temperatures and for bound states surviving in the QGP, in particular the Υ(1S), the groups are in approximate agreement, although uncertainties on the width remain. For other channels, notably the P-wave states in the QGP, this is not yet the case.
Heavy quarkonium spectral functions from pNRQCD

The effective field theory (EFT) for quarkonium at the scale of the relative momentum transfer mv, namely pNRQCD (potential non-relativistic QCD, a lower-energy version of NRQCD), has also been generalized to a thermal environment [32]. For tightly bound quarkonia like the Υ(1S) it can be used to calculate in-medium meson properties and the quarkonium thermal width [33], induced by an imaginary part in the in-medium potential [32,34,35]. Such an imaginary part can be related to gluo-dissociation and inelastic parton scattering in the medium [33,36,37]. For the Υ(1S), a dissociation temperature of about 450 MeV is obtained within pQCD [38,39], consistent with the lattice results mentioned above. One can provide heuristic arguments on how the pNRQCD approach in weak coupling can be generalized to strong coupling [40]. The essential observation in this argument is that when the binding energy is small, the potential is the same as the energy of a static quark-antiquark pair and can thus be calculated on the lattice using Wilson loops.
Another line of argument for the existence and definition of the static potential beyond weak coupling has been given in ref. [41]. It was shown that if a well-defined Lorentzian peak exists in the spectrum of the Wilson loop, its late-time behavior can be described by a Schrödinger-like equation with a time-independent potential, whose real and imaginary parts are related to the position and width of the peak. Recent works in quenched and full QCD ranging up to ∼ 2T_c have all reported the observation of such well-defined peak structures [42,43,44,45]. More theoretical work is needed to understand the relation between the energy of a static quark-antiquark pair at T > 0 obtained from Wilson loops and the pNRQCD definition of the potential.
Using such a static potential and its in-medium modification obtained from the lattice, the bottomonium and charmonium spectra in the S- and P-wave channels were computed recently [44,46] (see [47] for a weak-coupling analysis). It was found that the narrow vacuum states are hierarchically modified; this is caused by their transitions into open heavy-flavor states or excited bound states (strong [48] or electromagnetic decays are not included). The most weakly bound states are affected most quickly. The modification consists both of a broadening of the states and of a shift to lower masses, which however, due to the concurrent lowering of the open-charm threshold, still leads to a decrease in the in-medium binding energy. A comparison of the width and the binding energy can also be used to define a melting temperature [34,39], which however does not mean that all features have completely disappeared from the spectral function.
Since on the lattice one cannot decipher the microscopic origin of the observed broadening, the width may be related both to the phenomena of gluo-dissociation and inelastic parton scattering mentioned above, and to transitions to a repulsive color-octet state, often discussed within perturbative approaches, which in turn contributes to the presence of unbound heavy quarks in the medium.
Apart from reducing systematic errors on the lattice side and going to higher orders on the perturbative side, an open issue is to address finite-velocity corrections on the lattice. These are expected to be strongly suppressed in the non-relativistic regime, but are nevertheless worth consideration [49].
Spectral functions from the Bethe-Salpeter equation

There exists a multi-year effort to also compute the spectra of heavy quarkonium and open heavy-flavor hadrons using the T-matrix approach [50,51,52,53]. In this approach a 3-D reduced version (T-matrix) of the Bethe-Salpeter equation is used to capture the physics of the in-medium bound states and/or resonances. In ref. [53], a step was undertaken toward a self-consistent solution of the coupled one-body (self-energy) and two-body (T-matrix) equations in the thermal environment, thereby highlighting the role of off-shell self-energy feedback to the two-body scattering. The calculated Euclidean temporal correlator from this work was compared to the lattice results [54]. The apparent deviation of the calculated correlator ratio from the lattice results at small Euclidean times was due to the fact that different reference correlators were used in the T-matrix approach (vacuum reference correlator) and on the lattice (reference slightly above T_c).
The concept of a two-body in-medium potential is inherent in this approach. While the heavy-quark internal energy and free energy from lattice calculations were used as the input potential in the T-matrix in previous works, to bracket theoretical uncertainties, recently an in-medium potential has been defined and extracted within this thermodynamic many-body approach [55], stipulating in particular how finite-width effects (in both the potential and the HQ propagators) affect the extraction of the underlying interaction kernel. Yet, it has not been resolved whether or how this potential is related to the in-medium heavy-quark potential discussed above. The connection of the T-matrix approach to functional methods in QCD, such as Dyson-Schwinger computations, might allow linking it to the effective field theory approaches in the future.
The use of Bethe-Salpeter equations has also recently been advocated to extract a potential for charmonium at finite charm mass [56,57]. The work applies the HAL-QCD method of extracting a potential for a finite-mass two-body system [58], which is rigorously defined at zero temperature. This has raised questions about the applicability of the HAL-QCD method in the presence of a thermal medium, which are not yet finally settled.

Spectral functions in fully relativistic approaches
The methods discussed above can in principle be applied to any type of Euclidean correlators. A particular highlight discussed during the workshop was related to the fate of baryons at finite temperature. In particular, first results on parity doubling [59] are consistent with expectations: in-medium effects in the hadronic plasma have been observed, with a stronger effect in the negative-parity channels. It remains to be seen whether and how this will impact models based on the hadron resonance gas and statistical hadronisation.
One limitation of the lattice NRQCD approach is the lack of a continuum limit and possible limitations of the approximation for charm quarks. For the bottom quark, NRQCD on a fully relativistic sea should be accurate, while charm might require a fully relativistic lattice QCD approach. Lattices large and fine enough to allow for a continuum extrapolation and for a relativistic treatment of bottom quarks will become available in the near future, but will still be limited to the quenched approximation.
While significant progress has been made, the problem of quarkonium properties at high temperatures is far from solved. Further improvements in the precision of the lattice calculations and refinements of the Bayesian methods will be needed, but these alone will not be sufficient. Clearly, a synergy of different approaches will be needed to reliably establish quarkonium properties in the high-temperature medium. One example of such synergy is the combined use of NRQCD and pNRQCD: one can calculate the spectral functions in pNRQCD and from these obtain the Euclidean time correlation functions, which can be compared to the ones obtained in lattice NRQCD calculations, thus providing a sanity check. Similarly, spatial and temporal correlation functions calculated in the relativistic approach can be compared to the correlators obtained in the T-matrix approach. Such comparisons could also be performed for open heavy-flavor hadrons. Both pNRQCD and the T-matrix approach would benefit from improved lattice calculations of the heavy quark-antiquark potential. These calculations should be pursued in the future.

Heavy and light flavor diffusion
If a light or heavy quark has a momentum at most of the order of the temperature with respect to the medium rest frame, its motion can be characterized by a "transport coefficient", known as the diffusion coefficient D [60,61]. The value of D may depend modestly on the quark mass but, within the validity of the diffusive description, is independent of the momentum (cf. e.g. ref. [62] for a review).
In general, a lattice determination of D is challenging, because it requires resolving fine features of a spectral function (see above). Perturbatively, for heavy quarks, the width of the "transport peak" is ∼ α_s² T²/m [63] and therefore very small. However, in broad analogy with the concept of a static potential for quarkonium physics, an EFT approach permits reducing also the open heavy-flavor problem to a purely gluonic correlator [64,65], which has no transport peak [66]. Apart from permitting the first NLO computation of a transport coefficient [67], this formulation has led to a well manageable lattice challenge [68]. By now even the continuum limit has been reached [69,70], with a result that can be directly used in phenomenology. Ultimately, the importance of 1/m corrections in the case of charm also needs to be addressed; these can be reduced to a "higher-order" gluonic correlator.
It is important to stress the role played by systematic errors in lattice determinations of transport coefficients. The analysis of ref. [69] contains a realistic estimate of systematic uncertainties and therefore a large but most likely reliable error bar. In contrast, some previous analyses appear to have much smaller error bars [71]; however, these errors are statistical only and therefore underestimated.
For light quarks, the transport peak is broader than for heavy quarks and therefore the numerical challenge is not quite as hard. Nevertheless, the perturbative width [72] is much smaller than those indicated by current Bayesian spectral reconstructions, which motivates further investigation of the issue. Indeed, the area under the transport peak can be well constrained. This implies that if a spectral reconstruction overestimates the width of the peak, it underestimates its height (and hence D). Values of D accounting for this uncertainty in the quenched continuum limit can be found in refs. [73,74], whereas recent results at a finite lattice spacing of the unquenched theory can be found in refs. [75,76].
An improved ab-initio calculation of the heavy-quark diffusion constant is needed. A lattice calculation of this quantity with dynamical quarks is prohibitively expensive, if not impossible. One way forward would be to extend the quenched calculations to higher temperatures and match the weak-coupling and lattice results with the help of a K factor. Assuming that this K factor is independent of the number of quark flavors, one could obtain an estimate of the heavy-quark diffusion constant for T > 300 MeV.
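The diffusive picture underlying D can be illustrated with a minimal one-dimensional Langevin sketch; the mass, temperature and momentum-diffusion coefficient below are toy values, not lattice results.

```python
import numpy as np

# Minimal 1-D Langevin sketch of heavy-quark momentum diffusion (toy units,
# hbar = c = 1): dp = -eta_D * p * dt + xi, with noise variance <xi^2> = kappa*dt.
# The fluctuation-dissipation relation eta_D = kappa/(2*M*T) drives <p^2>
# toward its 1-D equilibrium value M*T; in this description the spatial
# diffusion coefficient is related to kappa by D = 2*T^2/kappa.
rng = np.random.default_rng(1)
M, T = 1.5, 0.3                    # toy charm-quark mass and temperature (GeV)
kappa = 0.5                        # toy momentum-diffusion coefficient
eta_D = kappa / (2.0 * M * T)      # drag fixed by fluctuation-dissipation
dt, n_steps, n_quarks = 0.01, 4000, 2000

p = np.zeros(n_quarks)             # all quarks start at rest
for _ in range(n_steps):
    p += -eta_D * p * dt + np.sqrt(kappa * dt) * rng.standard_normal(n_quarks)

print(np.mean(p ** 2), M * T)      # <p^2> relaxes toward M*T
```

The same setup, with momentum-dependent coefficients, underlies the transport models discussed in the phenomenology sections.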

High-p T jets
Heavy-flavor jets generated in the experimental setting typically have large transverse momenta with respect to the medium. Treating such jets on a non-perturbative level poses a conceptual challenge. In phenomenological studies, a Fokker-Planck equation parametrized by two coefficients, q̂ and q̂_L, is frequently used. The basic premise behind the Fokker-Planck equation is that the number density of the hard particles (the phase-space distribution integrated over momenta p_T) is conserved. However, in QCD there are processes already at leading order in α_s which violate the Fokker-Planck description [77], e.g. a collinear splitting Q → Qg where both the quark and the gluon carry a large p_T ("p_T/2"), or a scattering of the Q off a hard medium gluon whereby its momentum is changed by a large amount.
On the lattice, where every process is included in the measurement, it is not possible to separate those reactions which do allow for a Fokker-Planck treatment from those that do not. In contrast, in a perturbative approach this can be done in principle. Consequently, the perturbative side can be "boosted" into an EFT approach, in which the contributions of certain "soft" momentum scales are resummed to all orders. This has recently led to an approach in which "soft contributions" to the so-called transverse collision kernel C(k_⊥) can be defined beyond leading order [78] and subsequently measured through EFT lattice simulations [79,80,81,82,83]. If the contributions of "hard scatterings" [84] are properly added, a value can in principle be obtained for q̂, as a certain moment of C(k_⊥). The value of q̂_L remains to be determined. The unintegrated C(k_⊥) may also parametrize a subset of non-Fokker-Planck processes [77].
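How a value of q̂ arises as a moment of C(k_⊥) can be sketched numerically; the screened kernel used below is a leading-order-style stand-in chosen purely for illustration, and all inputs are toy values.

```python
import numpy as np

# Illustrative evaluation of qhat as a moment of a collision kernel,
#   qhat(soft) = int d^2k/(2*pi)^2 k^2 C(k),  up to a UV cutoff.
# The screened kernel C(k) = g^2 * T * mD^2 / (k^2 * (k^2 + mD^2)) is an
# assumption for illustration; the moment grows logarithmically with the
# cutoff, reflecting the need to add the "hard" contribution separately.
g, T = 2.0, 0.3                       # toy coupling and temperature (GeV)
mD = g * T                            # toy Debye mass
k_max = 3.0                           # UV cutoff on k_perp (GeV)
k = np.linspace(1e-4, k_max, 20000)
C = g**2 * T * mD**2 / (k**2 * (k**2 + mD**2))

# Angular integral done analytically: qhat = (1/(2*pi)) * int dk k^3 C(k).
f = k**3 * C
qhat_soft = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)) / (2.0 * np.pi)
print(qhat_soft)                      # cutoff-dependent soft contribution (GeV^3)
```

Doubling k_max shifts the result logarithmically, which is exactly the cutoff sensitivity that the matching to hard scatterings is meant to remove.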
Further progress on high-p_T probes can be made through the use of the EFT approach. In particular, the use of soft-collinear effective theory could be beneficial. The non-perturbative information for the relevant processes would then be included in the coefficients of this effective theory.

Heavy-flavor diffusion and jets
The phenomenological description of diffusion and high-energy jets makes use of quantities such as the diffusion coefficient D and the jet quenching parameters q̂ and q̂_L mentioned above. The status of their determination from QCD was discussed in the paragraphs above and will not be repeated here.

Effective processes
The goal of a model is to simplify the technical description of a particular process and at the same time to shed light on the relevant underlying physics. Along this way it may borrow concepts from theory even if these are extended beyond their formal range of applicability. In the end however, it has to be made sure that the model faithfully describes the process it was designed for.
One example is the concept of a screening mass in the context of the in-medium modification of the heavy-quark potential. The values of the potential determined on the lattice have indeed been reproduced with an Ansatz that combines the vacuum physics of a confined quark-antiquark pair, in the form of a Cornell potential, with that of a weakly coupled quark-gluon plasma via the concept of a generalized Gauss law [85]. This leads to analytic expressions for the real and imaginary parts of the potential, which depend only on a single temperature-dependent parameter m_D(T) appearing in the form of an in-medium mass. While this in-medium mass can now be used to easily describe the in-medium modification of the heavy-quark potential, it may not be immediately generalized for use in other physical situations, such as those including light quarks.
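A heavily simplified screened Cornell form can illustrate how a single parameter m_D(T) deforms the vacuum potential; this sketch is not the generalized Gauss-law construction of ref. [85], and the parameter values are toy choices.

```python
import numpy as np

# Simplified sketch of Debye-mass screening of a Cornell-type potential.
# NOT the generalized Gauss-law result of ref. [85]; it only illustrates
# how one parameter mD(T) deforms the vacuum form.
alpha = 0.3        # toy Coulombic strength
sigma = 0.2        # toy string tension (GeV^2)

def v_vacuum(r):
    """Vacuum Cornell potential (r in GeV^-1, V in GeV)."""
    return -alpha / r + sigma * r

def v_screened(r, mD):
    """Screened Coulomb part plus a string part saturating at sigma/mD."""
    return -alpha * np.exp(-mD * r) / r + (sigma / mD) * (1.0 - np.exp(-mD * r))

r = 2.0
print(v_vacuum(r), v_screened(r, 0.5))
# As mD -> 0 the screened form smoothly recovers the vacuum potential.
```

The key qualitative feature survives even in this crude form: at large distances the screened potential flattens instead of rising linearly, weakening the binding.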
The concept of an effective running coupling α(r, T) [45,86,87,88] may be seen in a similar fashion: it represents a quantity that allows a concise description of processes related to heavy-quark interactions, but its value is definition-dependent and may not be used in every context. It has been applied with success in transport approaches [89,90,91]. Another definition of an effective coupling, useful for the EFT description of soft observables, can be found in ref. [92].

Open quantum systems
In contrast to the classic picture [93], in which quarkonium melts when Debye screening affects the bound-state radius, i.e. r m_D ∼ 1, it is nowadays believed that the bound state dissolves through dynamical scattering processes already in the regime r m_D < 1, i.e. when Debye screening is not yet efficient [39,47,94]. In order to describe this dynamics the concept of open quantum systems [95,96,97,98,99,100,101,102,103,104] has recently received strong interest. (Related approaches can be found in, e.g., refs. [105,106].) The separation of scales between the constituent quarks and the thermal medium invites a natural distinction between environment and subsystem that underlies this approach. The description can either be based on the time evolution of a reduced density matrix, i.e. a so-called master equation, or on the evolution of a particular realization of the subsystem, usually involving a wave function.
In the open quantum system approach, the real and imaginary parts of the in-medium heavy-quark potential can be related to the stochastic evolution of the in-medium heavy-quarkonium wave function [98]. In particular, the imaginary part is related to the strength of the in-medium noise. This approach, however, is only applicable at early times and does not yet include the dissipative effects required for consistent thermalization. Its extension towards thermalization has been applied to Υ physics in a static medium in ref. [100]. Currently, increasingly advanced theoretical work is underway [102], with the goal of systematically connecting the language of open quantum systems with that of effective field theory [104]. In particular, in ref. [104] the nuclear modification factor R_AA of the Υ(1S) and Υ(2S) states in a strongly coupled plasma was calculated. This is the first calculation that takes into account both the singlet and octet contributions. The corresponding evolution equations account for both dissociation and recombination and have promising phenomenological applications.
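The role of the imaginary part of the potential as a thermal width can be illustrated in the simplest possible setting, a constant complex potential with toy parameters (no spatial dynamics or noise, purely for orientation).

```python
import numpy as np

# Toy illustration: for a constant complex potential V = V_R - i*Gamma/2
# the survival probability of a state decays as exp(-Gamma*t) (hbar = 1).
# The parameter values are placeholders, not fitted in-medium quantities.
V_R, Gamma = 0.1, 0.05                   # toy real part and width (GeV)
t = np.linspace(0.0, 50.0, 200)          # time in GeV^-1
amplitude = np.exp(-1j * (V_R - 0.5j * Gamma) * t)
survival = np.abs(amplitude) ** 2

print(survival[-1], np.exp(-Gamma * t[-1]))   # the two expressions coincide
```

In the full stochastic-potential treatment the same exponential loss of the singlet overlap emerges from averaging over noise realizations rather than from an explicit non-Hermitian term.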
At the same time, a new Schrödinger-Langevin approach has been put forward [107], which, with the inclusion of a non-linear contribution in the stochastic evolution of the quarkonium wave function, allows the heavy quarkonium state to thermalize at late times. It has been thoroughly explored in 1-dimensional settings and its formulation in terms of a Schrödinger equation bodes well for relating its parameters to quantities on the EFT side in the future.
To improve the phenomenological models by using inputs from lattice QCD, a more detailed understanding of color screening is needed. This can be achieved by comparing weak-coupling and lattice results on the static-quark free energy. Further theoretical work is needed to relate phenomenological models of quarkonium production to the lattice results on the complex potential. The implications of the octet degrees of freedom in pNRQCD for the dynamical models of quarkonium production also need to be better understood.

Phenomenology and experiment of open heavy-flavor probes (Discussion Group 2)
The Discussion Group envisaged three concrete questions:
- Is there a tension between the R_AA measurements of non-prompt J/ψ and B mesons?
- Is the pp baseline of open heavy-flavor production under theoretical control? What are the uncertainties in energy-loss predictions due to the theoretical uncertainties of the pp baseline?
- How can we rule out energy-loss models?
In a subsequent dedicated open heavy-flavor discussion, a focus was put on the measurement capabilities and community needs related to the future sPHENIX program [108]. Very valuable bilateral discussions took place with the quarkonia (see Chapter 4) and lattice discussion groups (see Chapter 2). The general charges for these mutual discussions were observables of mutual interest and how to connect relevant quantities in energy-loss calculations to quantities that can be computed on the lattice, respectively. A recurring experimental observation is that the measured D-meson suppression is compatible with the charged-hadron R AA [114,115,116]. This consistency of suppression across parton flavors was a surprise on both theoretical and experimental grounds. From the theoretical side, generically all energy-loss models predict less suppression for B mesons than for D mesons and charged hadrons. At the partonic level, energy loss decreases as a function of the parton mass due to the suppression of small-angle gluon radiation [117]. Predictions for the suppression of hadronic observables convolve the different production spectra of the various parton flavors with their energy loss and then with the relevant non-perturbative fragmentation functions. As a result, the predicted hadronic flavor ordering can have a non-trivial momentum dependence, with D mesons generally as suppressed as charged hadrons at low p T [118,119,120].
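The mass ordering of partonic radiative energy loss mentioned here can be illustrated with the Dokshitzer-Kharzeev dead-cone factor, which suppresses gluon radiation at angles below θ0 = m/E. A minimal sketch; the quark masses, parton energy, and radiation angle below are illustrative choices, not values from the text:

```python
def dead_cone_factor(theta, m, E):
    """Dokshitzer-Kharzeev suppression of gluon radiation at angle theta off a
    quark of mass m and energy E: (1 + theta0^2/theta^2)^-2, with
    theta0 = m/E the dead-cone angle. Heavier quarks radiate less."""
    theta0 = m / E
    return 1.0 / (1.0 + (theta0 / theta) ** 2) ** 2

E = 10.0      # parton energy in GeV (illustrative)
theta = 0.1   # radiation angle in rad (illustrative)
for name, m in [("u/d/s", 0.0), ("charm", 1.5), ("bottom", 4.8)]:
    print(name, dead_cone_factor(theta, m, E))
```

The printed factors decrease from light to charm to bottom, reproducing the expected ordering of radiative energy loss before production spectra and fragmentation are folded in.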

Open heavy-flavor production and model descriptions
Experimentally, a well-known [112] combination of R AA (N part ) of non-prompt J/ψ, the decay products of B mesons, measured by CMS [113] and D mesons by ALICE [114,115] in a single plot showed a clear separation of heavy-flavor hadrons by mass, as can be seen in the lower panel of Fig. 1.
The rapidity selection of the data was identified as playing an important role. CMS measured charged hadrons and D mesons in |y| < 1 as compared to |y| < 2.4 for the B mesons. Previous CMS results [113] show that the nonprompt J/ψ R AA (y) decreases surprisingly quickly with increasing |y|, as depicted in Fig. 2. It is worth noting that 1.) there is a CMS non-prompt J/ψ R AA (p T ) measurement with the same |y| < 2.4 rapidity cut as for the B mesons, and these two measurements show a consistent suppression and 2.) there is no immediately obvious theoretical explanation for the decrease in J/ψ R AA (y).
The members of the CMS collaboration present committed to measuring the spectra with the same rapidity selections, and ultimately differentially in η and p T , with future larger data samples.

pp baseline
Through discussions at this workshop, there was a general appreciation of the significant theoretical uncertainties associated with single open heavy-flavor production in the fixed-order plus next-to-leading-logarithm (FONLL) scheme [121] or the general-mass variable-flavor-number scheme (GM-VFNS) [122,123], and agreement that future energy-loss calculations should explore the propagation of the uncertainties of the production calculations through to their final suppression predictions. Fragmentation functions are an additional concern: it seems that both light- and heavy-flavor fragmentation functions fail to describe top-energy LHC measurements [124,125]; see the upper panel of Fig. 3, which shows the differential production rate of D * mesons (per jet) as a function of z. There is also now evidence for double parton scattering in heavy-flavor production, which is not included in any generator [126,127]. The community will need to seek advice from the heavy-flavor production practitioners to understand in detail the extent to which the c and b spectra can be varied to accurately reflect the uncertainties in the shape of the production spectra.
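Why the production-spectrum shape matters for suppression predictions can be seen in a toy fractional-energy-loss model: for a power-law spectrum dN/dp_T ∝ p_T^(-n), a fixed fractional momentum loss ε gives R_AA = (1 - ε)^(n-1), so an uncertainty in the slope n translates directly into a spread in the predicted R_AA. A sketch with illustrative values of n and ε (not fitted to any data):

```python
def raa_shift_model(n, eps):
    """R_AA for a power-law spectrum dN/dpT ~ pT^-n when every parton loses a
    fixed fraction eps of its momentum: the yield observed at pT comes from
    partons produced at pT/(1-eps), giving R_AA = (1-eps)^(n-1)."""
    return (1.0 - eps) ** (n - 1)

eps = 0.2  # assumed 20% fractional energy loss (illustrative)
for n in (4.0, 5.0, 6.0):  # slopes spanning a production-spectrum uncertainty
    print(n, raa_shift_model(n, eps))
```

With the same energy loss, varying the slope from n = 4 to n = 6 changes this toy R_AA from about 0.51 to about 0.33, illustrating why the pp baseline uncertainty must be propagated.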
There is a general consensus amongst heavy-ion experimentalists that state-of-the-art heavy-flavor generators do not reproduce correlation measurements in pp collisions. Through discussions at the workshop, it was realized that there are extremely few measurements of heavy-flavor correlations, in either angle or momentum. It is also clear that those measurements that do exist are not necessarily well described by current state-of-the-art NLO heavy-flavor generators, particularly for the collinear production of heavy-quark pairs [128,129]; see the lower panel of Fig. 3. Nevertheless, there was consensus amongst the participants that 1.) the future of the heavy-flavor community lies in measurements of correlated observables that can distinguish between models and 2.) the self-consistent merging of sophisticated NLO production codes (and their matching to parton showers and hadronization) with energy-loss calculations is an open problem.

Model tests
In recent years, several groups have advanced approaches which study the interaction of heavy quarks with an expanding QGP [61,89,90,91,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144]. These approaches are either based on a Fokker-Planck equation or use the full Boltzmann collision kernel to describe the collisions of heavy quarks with the partons of the QGP. Different descriptions of the plasma expansion are also used. For bottom quarks the Fokker-Planck and the Boltzmann approaches yield almost identical results. For charm quarks differences between the two approaches have been reported [135], which should be investigated in detail in the near future.
All these approaches come to the common conclusion that the standard perturbative QCD cross sections [145] are not sufficient to describe the observed energy loss of heavy quarks in the plasma [61,146]. The models differ in the way in which non-perturbative elements are included. Some of them have modified the Debye mass to obtain a smooth transition between collisions with low momentum transfer, which require hard thermal loop calculations, and collisions with large momentum transfer, which are described by pQCD [130,131,132,133,134]. Others use temperature-dependent coupling constants [61,89,90,91,135,136,137], which increase towards the critical temperature T c . A third approach is based on the existence of heavy quark - light quark bound states close to T c [138,139,140,141].
The observables studied in these approaches are R AA and v 2 . For heavy quarks with low transverse momentum the predicted v 2 values are sensitive to the late stage of the expansion, because the heavy quarks acquire their elliptic flow through collisions with the light plasma partons. This is only possible after the spatial eccentricity of the plasma constituents has been converted into v 2 in momentum space, which takes time. The v 2 at higher transverse momenta is caused by the different path lengths of the heavy quarks in the QGP [147]. Whereas the temperature-dependent coupling constant as well as the existence of resonances provoke a high collision rate close to T c , where the elliptic flow of the light partons is already developed, the modified Debye mass increases the collision rate during the whole expansion process. One may therefore hope that more precise experimental data can decide which is the more realistic scenario.
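The Langevin (Fokker-Planck) dynamics underlying these transport approaches can be sketched in one dimension, with drag and momentum diffusion tied by the non-relativistic fluctuation-dissipation relation. All parameter values below are illustrative and not taken from any of the cited models:

```python
import math, random

def langevin_drift(p0=5.0, m=1.5, T=0.3, eta_D=0.3, t_max=8.0, dt=0.01, seed=7):
    """Toy 1-D Langevin update for a heavy-quark momentum (GeV; time in fm/c,
    with eta_D in 1/(fm/c)): dp = -eta_D*p*dt + sqrt(2*kappa'*dt)*xi, where the
    momentum-diffusion strength kappa = 2*eta_D*m*T is fixed by the
    (non-relativistic) fluctuation-dissipation relation, so the stationary
    variance of p is m*T while the mean decays like p0*exp(-eta_D*t)."""
    kappa = 2.0 * eta_D * m * T
    rng = random.Random(seed)
    p, t = p0, 0.0
    while t < t_max:
        p += -eta_D * p * dt + math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0)
        t += dt
    return p

# ensemble average drifts towards equilibrium while fluctuating around it
samples = [langevin_drift(seed=s) for s in range(300)]
print(sum(samples) / len(samples))
```

Averaged over many samples, the mean momentum approaches p0·exp(-η_D t_max) ≈ 0.45 GeV, while individual trajectories fluctuate with the thermal spread √(mT); raising the drag only near T_c versus throughout the evolution is what distinguishes the scenarios discussed above.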
It was suggested that moderate p T ∼ 10 GeV/c is the ideal momentum region to experimentally test the predicted flavor hierarchy of b-decay products compared to c and u/d/s/g decay products, because m B /p T is sizable and the region has a good overlap of experimental observables with sufficient statistics at RHIC and LHC. Thus, precise measurements of R AA (p T ) for B mesons are considered the highest priority in the field. At the same time, both heavy-flavor production and many energy-loss calculations assume p T ≫ m Q , and fragmentation functions appear to be under the best control for p T ≫ m Q (as inferred from the anomalous baryon-to-meson ratio), and so the most phenomenologically relevant momentum region is currently under the least theoretical control.
In general, there was consensus that correlation measurements, especially in momentum [148,149,150], hold great promise for providing a distinguishing future measurement. However, there is currently a lack of distinguishing predictions from a wide range of theoretical energy-loss models.
While not mentioned at the workshop, comparing measurements across the √ s NN lever arm, with its attendant change in temperature profiles as a function of time, should provide experimental constraints on energy-loss models. Additionally, although their specific purpose is not "model killing", there are two ongoing collective actions begun in 2016 - the Heavy Quark Working Group [151] and the EMMI Rapid Reaction Task Force [152] - devoted to the extraction of transport coefficients by confronting theoretical models with experimental data and to the investigation of the different physics behind the various approaches. Such collaborative work will necessarily lead to some standardization, as was the case for the JET Collaboration [153,154], and will point towards improvements needed for some of the models.

Discussion on differential and precision measurements
There was consensus that theoretical calculations must describe data across flavor, √ s, p T and centrality dependencies. Thus, future sPHENIX bottom-related measurements will provide a critical cross check of our physical understanding of quark-gluon plasma formation in colliders. With the current projection for the detector setup, which does not include the possibility of particle identification, measurements at sPHENIX will be limited to b-jets, charged hadrons, and, possibly, non-prompt J/ψ, as shown in Fig. 4 [108]; there will be no c-quark related measurements. It was pointed out, however, that it may be possible to design the time projection chamber to provide some particle identification (PID), or even to add a dedicated time-of-flight detector. These changes would allow measurements of prompt and non-prompt J/ψ and D mesons, DD correlations, and possibly even (exclusive) B mesons. There was consensus amongst the participants that measurements in what was termed the "low-p T " region, p T ≲ 15 GeV/c, are the most interesting from a theoretical viewpoint. At these low momentum scales, the bottom-quark mass is similar to its momentum. One expects from perturbative physics a transition from radiation-dominated to collisional energy loss for bottom quarks [131,143,155]; at higher momenta the pQCD-based calculations are expected to be under better theoretical control due to asymptotic freedom. From the AdS/CFT approach, the energy-loss calculations are under the most control at the lowest momenta: momentum fluctuations calculations [156,157,158,159] agree exactly at p = 0 but then differ by various powers of the Lorentz boost γ. From an experimental standpoint, this momentum region is of significant interest as there will be overlap of statistically significant measurable quantities at both RHIC and LHC.

Energy loss and quarkonia joint discussion
A discussion of potential common ground between the two discussion groups began with the topic of boosted quarkonia, where the question was posed: "In what kinematic range (if any) does energy loss become the dominant mechanism of nuclear modification for quarkonia?" This question arises naturally based on the flatness of R AA at large values of p T measured for prompt J/ψ and Υ (1S), with a value similar to that of other hadron species. It was pointed out that the relevance of color singlet versus octet configurations in quarkonium production at large p T , which is still not definitively understood, is crucial to address this question. (Since the strength of momentum fluctuations in some of the AdS/CFT calculations [156,157] does not satisfy the fluctuation-dissipation theorem, there is some debate as to whether the origin of the momentum fluctuations in those calculations is in fact thermal or an artefact of the calculational setup [159].) On the other hand, a comparison with heavy-quark pairs in the antenna configuration (i.e. propagating with small opening angle), e.g., as measured with "fat" jets with two b-tags, could be informative.
The need to further understand quarkonium production led to a discussion of measurements of associated hadro-production. The use of quarkonia with two particle correlations, e.g., J/ψ-hadron, and/or jet reconstruction, e.g., to measure the J/ψ jet fragmentation function, were pointed out as promising directions. With larger data samples, photon+quarkonium and double quarkonium production are also of interest, as these quarkonia should then initially be produced in the octet configuration, although care has to be taken to understand the contribution of double parton scattering.
The diffusion of open heavy flavor prior to its combination into quarkonia, as well as quarkonium break-up due to momentum fluctuations from the medium, can be related to the diffusion coefficient and must affect the low-p T production of quarkonia. Thus, measurements of quarkonia down to p T = 0 might provide a complementary way to constrain the heavy-quark diffusion coefficient experimentally.
Finally, the issue of heavy-flavor correlations, such as DD, was raised. Such correlations are considered a promising avenue to understand heavy-quark energy loss [160,161,162]. From the point of view of dilepton resonances and the dilepton continuum, such heavy-flavor correlations can be considered a background to other processes, e.g., Drell-Yan production. The question was posed to what extent the interest in such measurements is convergent between the two groups, in particular whether there is interest in low-p T and/or low invariant mass measurements from the energy-loss point of view. The response was found to be model dependent, with different models that incorporate energy loss being interested in different kinematic regimes. This was pointed out as an area that needs further elucidation from the theory side. In particular, it would be desirable to compare predictions for heavy-flavor correlations from different models using the same kinematic selections.

Energy loss and lattice joint discussion
There was significant interest from both the energy-loss and lattice communities to make contact between the two areas of research. In particular, there is a strong desire from the energy-loss community for guidance from the lattice community on the temperature and momentum dependence of several quantities of direct interest in energy-loss models, such as the heavy-quark diffusion coefficient, the Debye screening scale, whether heavy-light resonances persist for T > T c , the running of the coupling with temperature (although this desire is observable dependent), and the jet transport coefficient q̂. The lattice community emphasized that the quantities they can compute must be gauge-invariant objects related to imaginary-time correlators.

Heavy-flavor diffusion coefficient
Much of the discussion focused on the calculation of the heavy-flavor momentum diffusion coefficient D. In principle, D is a function of the temperature T and the heavy-flavor mass m Q . One may also consider extending possible definitions of D to larger momentum values, outside the hydrodynamic regime. There are currently two calculations of D from quenched lattice QCD: one at finite heavy-quark mass (but without a continuum extrapolation) [71] and one at infinite heavy-quark mass (but with a continuum extrapolation) [69]. The results of the two approaches are currently consistent within the large systematic uncertainties. Both of these lattice calculations are limited to the quenched approximation and require p = 0. Going beyond the quenched approximation will be difficult numerically but not conceptually; going beyond the small-momentum limit is conceptually difficult. A number of suggestions were made:
- List the variety of relevant transport coefficients, e.g. the drag coefficient and the longitudinal and transverse diffusion coefficients, and determine gauge-invariant means of making contact with these quantities through lattice calculations.
- Examine the T and p dependence of the pole structure of the spectral function of the heavy-quark current at asymptotically high temperatures via pQCD techniques. When weak-coupling calculations of the spectral functions for arbitrary spatial momenta are available, one can attempt to make contact with lattice QCD studies of the current correlators at different spatial momenta.
- Perform a Taylor expansion in p of the relevant four-point function computed on the lattice. Such a calculation can be done on the lattice, although it remains technically very difficult to compute spectral functions, even at p = 0. Note that this would include the contributions from both above and below the light cone, and include the correct smearing effects.
- Take the phenomenologically extracted D(T, m Q , p T ) from energy-loss model comparisons to data, derive a related spectral function, and compare to pQCD and lattice calculations.
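The transport coefficients discussed here are connected by Einstein relations: the momentum diffusion coefficient κ, the spatial diffusion coefficient D_s = 2T²/κ, and the drag coefficient η_D = κ/(2 m_Q T), giving a kinetic relaxation time τ = 1/η_D = m_Q D_s/T. A sketch converting a dimensionless lattice-style value 2πT D_s into a relaxation time; the input values (2πT D_s = 4, T = 0.3 GeV, quark masses) are illustrative, not lattice results:

```python
import math

HBARC = 0.1973  # GeV*fm, converts GeV^-1 to fm

def relaxation_time(two_pi_T_Ds, m_Q, T):
    """Convert a dimensionless diffusion coefficient 2*pi*T*D_s into the
    heavy-quark kinetic relaxation time tau = m_Q * D_s / T (in fm/c),
    using the Einstein relations D_s = 2*T^2/kappa and eta_D = kappa/(2*m_Q*T)."""
    Ds = two_pi_T_Ds / (2.0 * math.pi * T)  # spatial diffusion coeff., GeV^-1
    tau = m_Q * Ds / T                       # 1/eta_D, GeV^-1
    return tau * HBARC

print(relaxation_time(4.0, 1.5, 0.3))  # charm-like mass: ~2 fm/c
print(relaxation_time(4.0, 4.8, 0.3))  # bottom-like mass: ~7 fm/c
```

The longer bottom relaxation time compared to the fireball lifetime is consistent with the expectation, stated in the Introduction, that bottom quarks do not fully thermalize.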

Debye screening
Additional discussion focused on lattice calculations of several of the dynamical masses commonly used in energy-loss calculations [163,164]: the Debye electric screening mass µ E and the magnetic screening mass µ M , defined in terms of the gluon propagator. Lattice calculations of these masses depend on the choice of gauge [165,166,167], though in some classes of gauges the gauge-parameter dependence is mild [168]. At leading order the electric mass is gauge independent. A gauge-invariant definition of the electric and magnetic screening masses is possible on the lattice (see e.g. Ref. [169]), but it is not clear how these definitions are related to the finite-temperature gluon propagator used in the energy-loss calculations. A potentially useful approach is to use lattice calculations to measure the transverse momentum squared per unit distance imparted to a high-momentum parton, q̂ [81].

Other points of contact
Additional points of contact between energy loss and lattice calculations include computing the polarization loop or quark number fluctuations from the energy-loss side and comparing to lattice calculations; the susceptibility is another potential observable mutually calculable from the energy-loss perspective and from the lattice.
Phenomenology and experiment of quarkonium production (Discussion Group 3)

Open experimental and theoretical issues
LHC measurements are now reaching a high level of accuracy, thanks in particular to the Run 2 data taking, where a large luminosity has been collected and a considerable effort has been made to reduce the systematic uncertainties. The comparison of the experimental results with theory calculations is, however, still limited by the large uncertainties on the model inputs. The nuclear modification of the parton distribution functions (the so-called "shadowing") has very large uncertainties, since few experimental constraints are available, in particular for the gluon distributions. At present, the charm cross section is the other main source of uncertainty, and also in this case limited experimental guidance is available. The usage of very different input values complicates the comparison between data and theory models even further [2], and an agreement on the values to be adopted, possibly driven by the available experimental results, would clearly ease such a comparison. The experimental measurement of the charm cross section, both at forward and mid-rapidity, i.e. in the kinematic range covered by the experiments, is therefore considered fundamental to gain further insight in this field. The cross sections should be directly measured in pA and AA collisions, to simultaneously take into account the charm production cross section and its medium modifications. The precision of such measurements, required to provide a meaningful comparison between data and theory, should be better than 10%. Given the role played by the bottomonium (Υ ) states, in particular at the LHC and in the forthcoming sPHENIX experiment at RHIC, a similar cross section measurement for open bottom is also mandatory. Quarkonium production in pA and AA collisions is affected by the so-called cold nuclear matter (CNM) effects.
While these mechanisms are dominant in pA interactions, an extrapolation is needed to estimate their role in AA collisions, where CNM effects underlie the QGP-related ones. Advantages and limitations of an approximation for CNM effects in Pb-Pb at forward rapidities, based on the factorization of CNM effects measured at forward and backward rapidities in pA collisions, have been debated. While it is clear that this approach provides, at first order, the size of some of the currently expected CNM mechanisms in AA, it is still a model-dependent definition and relies on several assumptions, which may limit its validity. If shadowing is the dominant CNM mechanism, the pA data should cover the same x range as in AA, while if coherent energy loss dominates, pA and AA data should be compared at the same center-of-mass energy. Factorization also holds if the absorption of the cc pair in the nucleus is the main CNM effect, provided that a unique absorption cross section describes the rapidity dependence of the quarkonium production. However, this situation is realized only at energies much lower than those available at the LHC. The validity of the factorization approach has to be assessed, for both p T -integrated and p T -differential results, in all the theory models describing the quarkonium results in pA interactions.
Finally, the current interpretation of the medium modification of the quarkonium states is still limited by the absence of precise feed-down measurements. Recent LHCb results point to a maximum feed-down contribution to the Υ (1S) state, from the χ b , Υ (2S) and Υ (3S) states, of the order of 30% [170]. However, the present measurements do not cover the kinematic range of the AA bottomonium results, and precise measurements at the LHC, in particular for the χ b , extended down to zero p T , would help to precisely assess the feed-down contribution. For the interpretation of the J/ψ results, the feed-down from the χ c is an experimentally challenging key measurement to be performed in the near future. Here too, the extension of the p T coverage down to zero is crucial.

New observables
To address these open experimental and theoretical issues, possible new observables have been discussed at the workshop and are briefly reviewed in the following.

Quarkonia
The nuclear modification factor R AA is the most widely used observable for quarkonium studies at both RHIC and LHC experiments. However, the information conveyed by the R AA is strongly bound to the evaluation of the pp reference. From the experimental point of view, the often limited statistics of the pp reference, or even the absence of pp data collected at the same center-of-mass energy as the AA collisions, may limit the accuracy achievable in the R AA . From the theory side, the use of pp collisions as a reference holds only under the assumption that quarkonia are formed in inelastic scatterings before the QGP is formed. If this is not the case, the relevant reference should be the total charm or bottom production cross section per unit of rapidity. Clearly the two approaches are equivalent as long as the open heavy-flavor cross section scales with the number of binary collisions.
The importance of new quarkonium observables to complement the information provided by the R AA measurement, in pA and AA collisions, has been discussed. It was agreed that experiments should also provide quarkonium yields in the colliding systems under study, to be compared with similar quantities extracted from theory calculations. The quarkonium elliptic flow (v 2 ) and the ratio of the average squared quarkonium transverse momentum in AA and pp collisions (r AA ) can also provide additional information, and theory models should address all these aspects in a consistent way. The quarkonium polarization is considered an interesting observable, even if the measurement is experimentally challenging and, at present, no theory guidance on the expected degree of polarization in pA and AA collisions is available. When studying excited and ground quarkonium states, the ratio of the two yields, possibly normalized to the corresponding ratio in pp collisions, might provide additional insight, in particular if a partial cancellation of the experimental and theoretical uncertainties is expected. However, to provide a full picture of the fate of quarkonium resonances in pA or AA collisions, it has been stressed that results provided in terms of ratios should always be accompanied by quarkonium yields and R AA measurements.
The importance of normalizing the quarkonium yields to the open heavy-flavor production is well established, and the feasibility of such a study has been extensively addressed. To investigate the centrality dependence of quarkonium production, the open heavy-flavor production has to be measured experimentally down to zero transverse momentum (p T ), with an accuracy better than 10%, in order not to limit the precision of this observable. For p T -differential studies, theory guidance is crucial to link the quarkonium and the charm/bottom transverse momenta. The proposed normalization should be investigated both for charmonium and bottomonium resonances, comparing the yields to open-charm and open-bottom mesons, respectively.
RHIC and LHC quarkonium R AA results are usually presented as a function of the number of participant nucleons (N part ), evaluated within a Glauber model. While this variable represents an easy way to compare to theory calculations, the use of additional variables, such as the number of charged particles produced in a given rapidity range (dN charged /dη), possibly normalized to the transverse area, is proposed. These quantities, whose precise definition should be agreed upon among the experiments, are more closely correlated with the energy density reached in the collisions, which is thought to drive the underlying physics.

Open heavy-flavor correlations via dileptons
The dilepton continuum between the φ and J/ψ masses is dominated by simultaneous semileptonic decays of correlated DD meson pairs. The shape of the invariant mass spectrum is sensitive to the initial angular correlation of the cc quark pair [171,172]. At leading order these are produced back-to-back and lead to relatively high-mass dileptons with low pair p T . Gluon splitting, on the other hand, creates dileptons with small invariant mass and high pair p T . A double-differential study in pp collisions should be able to constrain the relative importance of the various production mechanisms. In heavy-ion collisions, heavy-quark energy loss will lead to a modification of the dilepton invariant mass spectrum. Besides a softening due to high-p T suppression, the invariant mass spectrum is also sensitive to an angular decorrelation. In contrast to D-meson decays, decays of the heavier B mesons wash out any initial correlation. The overall sensitivity to the various energy-loss mechanisms, discussed in Chapter 3, still needs to be studied.
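The mapping between pair topology and the (invariant mass, pair p_T) plane can be sketched with massless lepton four-vectors at midrapidity. This is a toy kinematic illustration only, not a decay simulation; the lepton momenta and angles are arbitrary choices:

```python
import math

def pair_mass_and_pt(l1, l2):
    """Invariant mass and transverse momentum of a lepton pair.
    Each lepton is a four-vector (E, px, py, pz); leptons treated as massless."""
    E, px, py, pz = (a + b for a, b in zip(l1, l2))
    m2 = E * E - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0)), math.hypot(px, py)

def lepton(p, phi):
    """Massless lepton of momentum p at azimuth phi, at midrapidity (pz = 0)."""
    return (p, p * math.cos(phi), p * math.sin(phi), 0.0)

# back-to-back leptons (leading-order-like topology): high mass, low pair pT
print(pair_mass_and_pt(lepton(2.0, 0.0), lepton(2.0, math.pi)))
# near-collinear leptons (gluon-splitting-like topology): low mass, high pair pT
print(pair_mass_and_pt(lepton(2.0, 0.0), lepton(2.0, 0.3)))
```

The back-to-back configuration gives (M, pair p_T) = (4, 0) GeV, the near-collinear one a sub-GeV mass with pair p_T close to 4 GeV, which is the anti-correlation exploited by the double-differential study proposed above.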

Open issues on theory and phenomenology aspects
During the meeting, joint discussions were convened with the other Discussion Groups. The main points are summarized in the following.

Spectral functions from lattice QCD versus experiment
As discussed in Chapter 2 (section A.2.), lattice QCD calculations show strong in-medium modifications of quarkonium spectral functions. In particular, a strong broadening around the pole mass is observed [37]. With a width of about 200 MeV, the quarkonium lifetime becomes short enough for it to decay within the QGP and, hence, the broadening should be observable in the dilepton decay channel, akin to the broadening of the ρ meson. At present, there is no experimental evidence for a deviation from the vacuum line shape. However, there has been no real effort to look for such modifications, which might hide below the radiative tail. Given that no modification of the φ meson, whose vacuum width lies between those of the ρ and the J/ψ or Υ (1S), has been observed, a quantitative estimate of the yield of in-medium quarkonium decays would be important before an experimental search. The situation for the Υ (1S) might be better than for the φ, as it can exist (and decay) also in the QGP phase, while the φ feels only the relatively short hadronic-gas phase. Lattice QCD, formulated in Euclidean time, cannot provide these estimates, so phenomenological models are required.
If the quarkonia were in equilibrium with the medium, the measurement of the broadening would be equivalent to the measurement of the suppression, as the probability to decay into a dimuon pair equals Γ_{µ+µ−}/Γ_Tot(T), with Γ_{µ+µ−} the partial width and Γ_Tot(T) the total width. The probability to decay as a dimuon pair would then be P_{µ+µ−} = ∫_0^∞ Γ_{µ+µ−} e^{−Γ_Tot(T(t)) t} dt, while the average width for the dimuon pair would be the corresponding decay-time-weighted average Γ̄_{µ+µ−}. This encompasses both the case of small broadening (Γ_Tot ≈ Γ_{µ+µ−}), in which case Γ̄_{µ+µ−} ≈ Γ_{µ+µ−}, and the case of finite broadening. One should make a detailed study of the integral in the numerator to see whether a small time/temperature window contributes most of the broadening. The problem, however, is that the partial width of quarkonia into dimuons is several orders of magnitude smaller than the total width. Even though it would be extremely interesting to experimentally detect a broadening despite such a small branching ratio, a quantitative understanding would require knowing the vacuum branching ratio of quarkonia into dileptons at much higher precision than available today.
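The smallness of the resulting dimuon yield can be seen by evaluating the decay-probability integral above numerically. The inputs here are purely illustrative: a J/ψ-like dimuon partial width of a few keV and an assumed constant in-medium total width of 100 MeV (no realistic temperature profile is attempted):

```python
import math

HBARC = 0.1973  # GeV*fm: converts widths in GeV to rates in 1/(fm/c)

def decay_probability(gamma_mumu, gamma_tot_of_t, t_max=50.0, dt=0.001):
    """Midpoint-rule evaluation of
    P = int_0^inf Gamma_mumu * exp(-Gamma_tot(T(t)) * t) dt,
    with widths in GeV and times in fm/c; t_max truncates the
    (exponentially suppressed) tail of the integral."""
    P, t = 0.0, 0.5 * dt
    while t < t_max:
        P += (gamma_mumu / HBARC) * math.exp(-(gamma_tot_of_t(t) / HBARC) * t) * dt
        t += dt
    return P

gamma_mumu = 5.6e-6  # ~5.6 keV dimuon partial width (J/psi-like, illustrative)
print(decay_probability(gamma_mumu, lambda t: 0.100))  # constant 100 MeV width
```

For a constant total width the integral reduces to Γ_{µ+µ−}/Γ_Tot ≈ 5.6e-5, making concrete the statement that the in-medium dimuon yield is several orders of magnitude below the total decay probability.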

Phenomenological model descriptions of quarkonium formation
In this section, the phenomenology of the model descriptions of quarkonium formation is discussed.

Quarkonium formation time
An important debate was triggered on the time it takes to form a particular quarkonium state, the so-called formation time t F . Although it cannot be defined rigorously outside a specific model and a specific environment, this concept is commonly used to guide the phenomenology, both for pA and AA collisions. A rough estimate of the formation time can be obtained from semi-classical considerations: assuming a state can only be well defined after the two heavy quarks have rotated at least once around each other leads to t F ≈ 1/(m Q v 2 ) for Coulombic states, where v is the velocity of the heavy quark in the bound state. Previous work relying on dispersion relations [173] gave the estimate t F ∼ 1/(m 2 − m 1 ), where m 1 and m 2 are the masses of the ground and first excited states, close to the Heisenberg time, of the order of 0.44 fm/c for the J/ψ and 0.32 fm/c for the Υ.
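The Heisenberg-time estimate t_F ∼ 1/(m_2 − m_1) can be evaluated directly. A sketch assuming the splitting is taken between the 1S and 2S vector states with PDG masses; the numbers quoted above may rely on a slightly different choice of states, so order-of-magnitude agreement is all that should be expected:

```python
HBARC = 0.1973  # GeV*fm

def heisenberg_formation_time(m_2S, m_1S):
    """Formation-time estimate t_F ~ 1/(m_2 - m_1) from the 2S-1S mass
    splitting (the 'Heisenberg time' of the text), returned in fm/c."""
    return HBARC / (m_2S - m_1S)

# PDG masses in GeV (psi(2S)/J/psi and Upsilon(2S)/Upsilon(1S))
print(heisenberg_formation_time(3.686, 3.097))   # charmonium, ~0.33 fm/c
print(heisenberg_formation_time(10.023, 9.460))  # bottomonium, ~0.35 fm/c
```

Both estimates come out at a few tenths of a fm/c, comparable to or shorter than typical QGP equilibration times, which is the tension discussed in the following paragraph.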
Historically, the equilibration times τ 0 in heavy-ion collisions were believed to be much larger than the quarkonium formation times, since they rely on dynamics involving softer scales. However, estimates of these equilibration times have shrunk over the past decade [174,175,176], challenging this assumption.
During the workshop, no general consensus was reached on whether quarkonia enter the QGP as fully formed bound objects. However, this is not really important for the understanding of the nuclear modification factor, as will be discussed below. The basic point is that the dissociation of quarkonia does not necessarily imply that the heavy Q and Q̄ pairs resulting from the dissociated quarkonia become totally uncorrelated. As long as the correlation persists, there is a chance that at least some of the Q and Q̄ pairs will form a quarkonium state again. One therefore has to deal with the formation of quarkonium states inside the QGP, irrespective of whether the formation time of quarkonium is smaller or larger than the QGP formation time.

Models of quarkonium formation in heavy-ion collisions
There have been many attempts to explain the nuclear modification factor of the J/ψ in terms of the sequential suppression picture (see e.g. [177,178,179]). However, it was realized already in the early 2000s that (re)generation of charmonia inside the QGP is possible and affects the J/ψ nuclear modification factor significantly ([180,181,182,183,184] and references therein). Models based on these ideas were able to explain the R_AA of the J/ψ at RHIC and successfully predicted the J/ψ R_AA at the LHC (see e.g. ref. [185]). Models based on the sequential suppression picture (see e.g. refs. [186,106]), as well as models that include in-medium bottomonium formation, describe fairly well the observed pattern of bottomonium production in heavy-ion collisions at RHIC and the LHC. However, a quantitative understanding of the hot-medium effects on the Υ states would require better knowledge of the feed-down contributions at low p_T and of the rapidity dependence of the R_AA. In fact, recent measurements of the feed-down fraction to the Υ(1S) with p_T > 6 GeV/c by LHCb [170] challenge the frequently used fraction of 50% (based on higher-p_T measurements by CDF [187]), suggesting values closer to (or even below) 30%. Such low feed-down fractions would imply suppression of directly produced Υ(1S) at the LHC. Furthermore, sequential suppression models predict a minimum of the Υ R_AA at mid-rapidity, while current data from ALICE [188] and CMS [189] suggest a rather flat trend. A careful measurement of the rapidity dependence of the Υ R_AA is, therefore, of great importance for the future.
In section II, we discussed dynamical models of quarkonium production that make contact with QCD. The common feature of all these models is that quarkonium states are formed inside the QGP. Furthermore, the basic ingredients of all these models are the force between the heavy Q and Q̄, which is related to the real part of the potential at T > 0, and the stochastic force of the medium acting on the quark or antiquark, which can be related to the imaginary part of the potential at T > 0 or to the heavy-quark diffusion constant. In its simplest realization, such a model amounts to Langevin dynamics of correlated QQ̄ pairs [190,191,192,193]. The stochastic forces of the medium will eventually destroy the correlations between Q and Q̄, but for a QGP with finite lifetime some of the QQ̄ pairs will remain correlated and will form quarkonium states again. Which states are formed from the correlated QQ̄ pairs depends on their distribution in relative distance at the time of freeze-out or bound-state formation. Calculations show that this distribution is peaked at small relative distances between the Q and Q̄ [190,193], so ground-state quarkonia are more likely to be formed than excited states. One could therefore speak of sequential quarkonium formation in the QGP rather than sequential suppression.
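The essential ingredients of such a Langevin description (a screened attractive force from the real part of the potential, plus drag and thermal noise related by fluctuation-dissipation) can be illustrated with a deliberately minimal one-dimensional sketch for the relative coordinate of a QQ̄ pair. All parameter values and the "still correlated" criterion below are placeholders for illustration, not taken from any of the models cited above:

```python
import numpy as np

# Minimal 1D Langevin sketch for the relative coordinate of a correlated
# Q-Qbar pair in a static thermal medium (natural units, GeV, hbar = c = 1).
# Parameters and the correlation criterion are illustrative placeholders.

def evolve_pairs(n_pairs, t_total, dt=0.02, m_red=0.75, temp=0.30,
                 eta=2.0, alpha=0.4, m_debye=0.6, r0=1.0, seed=1):
    """Euler-Maruyama integration of
        dr = (p / m_red) dt,
        dp = (F(r) - eta * p) dt + sqrt(2 * m_red * eta * temp * dt) * xi,
    with the attractive screened force F = -dV/dr derived from
    V(r) = -(alpha / r) * exp(-m_debye * r).
    The noise strength obeys fluctuation-dissipation, so for eta * t >> 1
    the relative momenta thermalize at temperature `temp`.
    Returns the final separations |r|."""
    rng = np.random.default_rng(seed)
    r = np.full(n_pairs, float(r0))   # separations, GeV^-1
    p = np.zeros(n_pairs)             # relative momenta, GeV
    for _ in range(int(t_total / dt)):
        ra = np.maximum(np.abs(r), 0.2)   # regulate the short-distance divergence
        force = -np.sign(r) * alpha * np.exp(-m_debye * ra) * (1.0 / ra**2 + m_debye / ra)
        noise = rng.standard_normal(n_pairs)
        p += (force - eta * p) * dt + np.sqrt(2.0 * m_red * eta * temp * dt) * noise
        r += (p / m_red) * dt
    return np.abs(r)

def correlated_fraction(separations, r_max=2.5):
    """Crude proxy for pairs still close enough to form a bound state at freeze-out."""
    return float(np.mean(separations < r_max))
```

Running `correlated_fraction(evolve_pairs(...))` for increasing `t_total` exhibits the competition described above: a short-lived medium leaves most pairs correlated, while a long-lived one gradually drives the Q and Q̄ apart.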
The above discussion was mostly focused on QQ̄ pairs that were correlated at the time of QGP formation or earlier. This is the diagonal (re)generation of quarkonia. The off-diagonal (re)generation, where the quarkonium state is formed from an initially uncorrelated Q and Q̄, can also be studied in Langevin dynamics [191]. This mechanism depends on the total number of heavy quark-antiquark pairs. The Langevin dynamics of correlated QQ̄ pairs was embedded in a realistic hydrodynamic background and the J/ψ R_AA was calculated at RHIC [190,191] and the LHC [192]. The model was able to explain the experimental results quite well both at RHIC and at the LHC. It was found that at RHIC the off-diagonal recombination is small [191], while it is very important at the LHC [192]. This approach can be considered as a more microscopic realization of the statistical recombination of ref. [185]; in fact, it provides a microscopic calculation of the correlation volume that enters recombination models. It should be noted that current lattice QCD calculations, e.g. ref. [13], indicate a rapid charm-quark equilibration on time scales similar to light-quark equilibration times (≈ 1 fm/c), as well as charmonium dissociation temperatures close to T_c, and hence support off-diagonal regeneration.
The Langevin dynamics of correlated QQ̄ pairs is a valid approach in the limit of loosely bound Q and Q̄. It is therefore clearly not applicable to ground-state bottomonium, for which a full quantum treatment is required [107]. A possible way out is to treat the tightly bound Υ(1S) as a distinct particle whose number is described by a rate equation, while the other bottomonium states are treated using Langevin dynamics [193]. An effort to construct viable models based on the idea of bottomonium formation in the QGP is underway [193,194].
The above debate is strongly linked to the interpretation of quarkonium suppression as a dissociation process. In the original sequential dissociation proposal [93], it is assumed that bound (color-singlet) states cannot be fully formed before the QGP is created. It is important to stress that the shrinking of the plasma equilibration time scales in fact supports the hypothesis of [93]. Even if t_F were found to be much smaller than τ_0, some participants doubted that the correct physical picture is that of fully formed quarkonia entering the QGP: the large fields present in the pre-equilibrium phase would presumably prevent any binding of the QQ̄ pair, giving rise to a correlated QQ̄ state at the time of thermalization. This could appear as an effective increase of the vacuum formation time, an effect that is also observed once an equilibrated QGP has been reached [195].

Role played by comovers
The role of comovers [196,197,198] in the interpretation of R_AA results was also discussed. One important point is that comovers are a rather abstract concept that scales with the particle multiplicity but lacks a clear connection to partons or hadrons. While comovers seem to play a role in explaining the suppression of excited states in pA collisions, their role in AA collisions (and also in pp) was extensively debated without reaching a firm conclusion. Since comover effects are proportional to the particle multiplicity, it would be important to present nuclear modification results as a function of dN_ch/dη and to search for universality features. In the comover model, the dissociation cross sections are chosen as free parameters. Suggestions were made to calibrate these using the widths of the spectral functions deduced from lattice QCD. The comover models could help to constrain the hadronic breakup cross sections of the different quarkonium states. This would have important implications for the modelling of quarkonium suppression by the hadronic medium in the late stages of heavy-ion collisions.
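The multiplicity scaling can be made explicit with the survival factor commonly used in comover models, S = exp(−σ_co ∫ dτ n_co(τ)). Assuming a Bjorken-like 1/τ dilution of the comover density, the integral closes analytically; the parameter values in this sketch are purely illustrative:

```python
import math

def comover_survival(sigma_co, n0, tau0, tau_f):
    """Survival probability of a quarkonium state against comover breakup,
    S = exp(-sigma_co * integral_{tau0}^{tau_f} n_co(tau) dtau),
    for a comover density diluting as n_co(tau) = n0 * tau0 / tau.
    sigma_co: effective breakup cross section (relative velocity absorbed), fm^2
    n0: initial comover density (proportional to dN_ch/deta), fm^-3
    tau0, tau_f: initial and freeze-out proper times, fm/c."""
    return math.exp(-sigma_co * n0 * tau0 * math.log(tau_f / tau0))

# A larger cross section (as expected for a loosely bound excited state)
# gives stronger suppression at the same multiplicity n0:
s_ground = comover_survival(sigma_co=0.1, n0=2.0, tau0=1.0, tau_f=8.0)
s_excited = comover_survival(sigma_co=0.6, n0=2.0, tau0=1.0, tau_f=8.0)
```

Since n0 tracks dN_ch/dη, plotting such survival factors against multiplicity is one way to look for the universality features mentioned above, with σ_co for each state possibly calibrated on lattice spectral-function widths.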

Acknowledgments
We are very thankful for the generous support by the Netherlands Organisation for Scientific Research (NWO), the National Institute for Subatomic Physics (Nikhef) in Amsterdam and the Lorentz fonds. This work has been supported in part by the U.