## Abstract

Measurements of event-by-event fluctuations of charged-particle multiplicities in Pb–Pb collisions at \(\sqrt{s_{\mathrm {NN}}}\) \(=\) 2.76 TeV using the ALICE detector at the CERN Large Hadron Collider (LHC) are presented in the pseudorapidity range \(|\eta |<0.8\) and transverse momentum \(0.2< p_{\mathrm{T}} < 2.0\) GeV/*c*. The amplitude of the fluctuations is expressed in terms of the variance normalized by the mean of the multiplicity distribution. The \(\eta \) and \(p_{\mathrm{T}}\) dependences of the fluctuations and their evolution with respect to collision centrality are investigated. The multiplicity fluctuations tend to decrease from peripheral to central collisions. The results are compared to those obtained from HIJING and AMPT Monte Carlo event generators as well as to experimental data at lower collision energies. Additionally, the measured multiplicity fluctuations are discussed in the context of the isothermal compressibility of the high-density strongly-interacting system formed in central Pb–Pb collisions.

## Introduction

According to quantum chromodynamics (QCD), at high temperatures and high energy densities, nuclear matter undergoes a phase transition to a deconfined state of quarks and gluons, the quark–gluon plasma (QGP) [1,2,3,4,5]. Heavy-ion collisions at ultra-relativistic energies make it possible to create and study such strongly-interacting matter under extreme conditions. The QGP formed in high-energy heavy-ion collisions has been characterised as a strongly-coupled system with very low shear viscosity. The primary goal of the heavy-ion program at the CERN Large Hadron Collider (LHC) is to study the QCD phase structure by measuring the properties of QGP matter. One of the important methods for this study is the measurement of event-by-event fluctuations of experimental observables. These fluctuations are sensitive to the proximity of the phase transition and thus provide information on the nature and dynamics of the system formed in the collisions [6,7,8,9,10,11,12]. Fluctuation measurements provide a powerful tool to investigate the response of a system to external perturbations. Theoretical developments suggest that it is possible to extract quantities related to the thermodynamic properties of the system, such as entropy, chemical potential, viscosity, specific heat, and isothermal compressibility [6, 13,14,15,16,17,18,19,20,21]. In particular, isothermal compressibility expresses how a system’s volume responds to a change in the applied pressure. In the case of heavy-ion collisions, it has been shown that the isothermal compressibility can be calculated from the event-by-event fluctuation of charged-particle multiplicity distributions [17].

The measured multiplicity scales with the collision centrality in heavy-ion collisions. The distribution of particle multiplicities in a given class of centrality and its fluctuations on an event-by-event basis provide information on particle production mechanisms [22,23,24]. In this work, the magnitude of the fluctuations is quantified in terms of the scaled variance,
\[
\omega _{\mathrm{ch}} = \frac{\sigma _{\mathrm{ch}}^{2}}{\langle N_{\mathrm{ch}} \rangle }, \qquad (1)
\]

where \(\langle N_{\mathrm{ch}} \rangle \) and \(\sigma _{\mathrm{ch}}^2\) denote the mean and variance of the charged-particle multiplicity distribution, respectively. Event-by-event multiplicity fluctuations in heavy-ion collisions have been studied earlier at the BNL-AGS by the E802 experiment [25], at the CERN-SPS by the WA98 [26], NA49 [27, 28], and CERES [29] experiments, and at the Relativistic Heavy Ion Collider (RHIC) by the PHOBOS [30] and PHENIX [31] experiments. A compilation of the available experimental data and a comparison to predictions of event generators are presented elsewhere [19]. In this work, measurements of the scaled variance of multiplicity fluctuations are presented as a function of collision centrality in Pb–Pb collisions at \(\sqrt{s_{\mathrm {NN}}}\) \(=\) 2.76 TeV using the ALICE detector at the LHC.
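As a concrete illustration, the scaled variance introduced above can be computed directly from a sample of per-event multiplicities. The sketch below uses hypothetical numbers, not ALICE data:

```python
# Illustrative sketch: scaled variance of an event-by-event
# charged-particle multiplicity sample (hypothetical numbers).

def scaled_variance(multiplicities):
    """Return (mean, variance, scaled variance) of a multiplicity sample."""
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    # Population variance: sigma^2 = <N^2> - <N>^2
    var = sum(m * m for m in multiplicities) / n - mean ** 2
    return mean, var, var / mean

# A Poisson distribution has scaled variance 1 by construction;
# wider-than-Poisson distributions give omega > 1.
events = [98, 102, 100, 95, 105, 101, 99, 100]
mean, var, omega = scaled_variance(events)
```

For this narrow toy sample the scaled variance is well below unity; real minimum-bias multiplicity distributions in wide centrality classes are much broader.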

In thermodynamics, the isothermal compressibility (\(k_{T}\)) is defined as the fractional change in the volume of a system with change of pressure at a constant temperature,
\[
k_{T} = -\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_{T}, \qquad (2)
\]

where *V*, *T*, *P* are the volume, temperature, and pressure of the system, respectively. In general, an increase in the applied pressure leads to a decrease in volume, so the negative sign makes the value of \(k_{T}\) positive. In the context of a description in terms of the grand canonical ensemble, which is approximately applicable for the description of particle production in heavy-ion collisions [5], the scaled variance of the multiplicity distribution can be expressed as [17],
\[
\omega _{\mathrm{ch}} = \frac{\sigma _{\mathrm{ch}}^{2}}{\langle N_{\mathrm{ch}} \rangle } = \frac{k_{\mathrm{B}}\, T\, \langle N_{\mathrm{ch}} \rangle }{V}\, k_{T}, \qquad (3)
\]

where \(k_{\mathrm{B}}\) is the Boltzmann constant and \(\langle N_{\mathrm{ch}}\rangle \) is the average number of charged particles. Measurements of fluctuations in terms of \(\omega _{\mathrm{ch}}\) can be exploited to determine \(k_{T}\) and associated thermodynamic quantities such as the speed of sound within the system [17, 32].
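Solving the grand-canonical relation above for \(k_{T}\) makes the extraction explicit; with the temperature and volume at chemical freeze-out supplied by complementary analyses of hadron yields, one obtains

\[
k_{T} = \frac{V}{k_{\mathrm{B}}\, T\, \langle N_{\mathrm{ch}} \rangle }\, \omega _{\mathrm{ch}} .
\]

Dimensionally, \(V/(k_{\mathrm{B}} T)\) carries units of inverse pressure, as required for a compressibility.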

Measurements of the multiplicity of produced particles in relativistic heavy-ion collisions are basic to most studies, as the majority of experimentally observed quantities are directly related to the multiplicity. The multiplicity varies with fluctuations in the collision impact parameter, or equivalently in the number of participant nucleons. Thus, the measured multiplicity fluctuations contain contributions from event-by-event fluctuations in the number of participant nucleons, which constitute the main background in the evaluation of any thermodynamic quantity [33, 34]. This is partly addressed by selecting narrow intervals in centrality and accounting for the multiplicity variation within the centrality interval of the measurement. The remaining participant fluctuations are estimated in the context of a Monte Carlo Glauber model, in which nucleus–nucleus collisions are treated as a superposition of nucleon–nucleon interactions.
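A commonly used way to model this background, quoted here as an illustration of the independent-source picture rather than as the exact procedure of this analysis, assumes that each participant emits particles independently with mean \(\langle n \rangle \) and scaled variance \(\omega _{n}\); the total scaled variance then decomposes as

\[
\omega _{\mathrm{ch}} = \omega _{n} + \langle n \rangle \, \omega _{\mathrm{part}},
\qquad \omega _{\mathrm{part}} = \frac{\sigma ^{2}_{N_{\mathrm{part}}}}{\langle N_{\mathrm{part}} \rangle },
\]

where the second term is the participant-fluctuation background that the narrow centrality binning and the Glauber-model estimate aim to remove.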

Thus, the background fluctuations contain contributions from independent particle production and correlations corresponding to different physical origins. The background-subtracted fluctuations can be used in Eq. (3) to estimate \(k_{T}\) with the knowledge of the temperature and volume from complementary analyses of hadron yields, calculated at the chemical freeze-out [35, 36].

In addition to fluctuations in the number of participant nucleons, several other processes contribute to event-by-event fluctuations of the charged-particle multiplicity [17, 37]. These include long-range particle correlations, charge conservation, resonance production, radial flow, as well as Bose–Einstein correlations. Since these contributions cannot be evaluated directly, the value of \(k_{T}\) extracted and reported in this work amounts to an upper limit.

The article is organized as follows. In Sect. 2, the experimental setup and details of the data analysis method, including event selection, centrality selection, corrections for the finite width of the centrality intervals, and particle losses are presented. In Sect. 3, the measurements of the variances of multiplicity distributions are presented as a function of collision centrality. Additionally, the dependence of the fluctuations on the \(\eta \) and \(p_{\mathrm{T}}\) ranges of the measured charged hadrons is studied. The results are compared with calculations from selected event generators. In Sect. 4, methods used to estimate multiplicity fluctuations resulting from the fluctuations of the number of participants are discussed. An estimation of the isothermal compressibility for central collisions is made in Sect. 5.

## Experimental setup and analysis details

The ALICE experiment [38] is a multi-purpose detector designed to measure and identify particles produced in heavy-ion collisions at the LHC. The experiment consists of several central barrel detectors positioned inside a solenoidal magnet providing a 0.5 T field parallel to the beam direction, and a set of detectors placed at forward rapidities. The central barrel of the ALICE detector provides full azimuthal coverage for track reconstruction within a pseudorapidity (\(\eta \)) range of \(|\eta |<0.8\). The Time Projection Chamber (TPC) is the main tracking detector of the central barrel, consisting of 159 pad rows grouped into 18 sectors that cover the full azimuth. The Inner Tracking System (ITS) consists of six layers of silicon detectors employing three different technologies. The two innermost layers are Silicon Pixel Detectors (SPD), followed by two layers of Silicon Drift Detectors (SDD), and finally, the two outermost layers are double-sided Silicon Strip Detectors (SSD). The V0 detector consists of two arrays of scintillators located on opposite sides of the interaction point (IP). It features full azimuthal coverage in the forward and backward rapidity ranges, \(2.8< \eta < 5.1\) (V0A) and \(-3.7< \eta <-1.7\) (V0C). The V0 detectors are used for event triggering as well as to evaluate the collision centrality on an event-by-event basis [39]. The impact of the detector response on the measurement of the charged-particle multiplicity is studied with Monte Carlo simulations based on the GEANT3 framework [40].

This analysis is based on Pb–Pb collision data recorded in 2010 at \(\sqrt{s_{\mathrm {NN}}}\) \(=\) 2.76 TeV with a minimum-bias trigger consisting of a combination of hits in the V0 detector and the two innermost (pixel) layers of the ITS. In total, 13.8 million minimum-bias events satisfy the event selection criteria. The primary interaction vertex of a collision is obtained by extrapolating correlated hits in the two SPD layers to the beam axis. The longitudinal position of the interaction vertex in the beam (*z*) direction (\(V_{\mathrm{z}}\)) is restricted to \(|V_{\mathrm{z}}| < 10\) cm to ensure a uniform acceptance in the central \(\eta \) region. The interaction vertex is also obtained from TPC tracks. The event selection includes an additional criterion requiring that the difference between the vertex obtained from TPC tracks and that obtained from the SPD be less than 5 mm in the *z*-direction. This selection greatly suppresses the contamination of the primary tracks by secondary tracks resulting from weak decays and spurious interactions of particles within the apparatus.

Charged particles are reconstructed using the combined information of the TPC and ITS [38]. In the TPC, tracks are reconstructed from a collection of space points (clusters). The selected tracks are required to have at least 80 reconstructed space points. Different combinations of TPC tracks and SPD hits are utilized to correct for detector acceptances and efficiency losses. To suppress contributions from secondary tracks (i.e., charged particles produced by weak decays and interactions of particles with the detector material), the analysis is restricted to charged-particle tracks with a distance of closest approach (DCA) to the interaction vertex of DCA\(_{\mathrm{xy}} < 2.4\) cm in the transverse plane and DCA\(_{\mathrm{z}} < 3.2\) cm along the beam direction. The tracks are additionally restricted to the kinematic range \(|\eta |<0.8\) and \(0.2< p_{\mathrm{T}} < 2.0\) GeV/*c*.

### Centrality selection and the effect of finite width of the centrality intervals

The collision centrality is estimated from the sum of the amplitudes of the V0A and V0C signals (known as the V0M centrality estimator) [39]. Events are classified in percentiles of the hadronic cross section using this estimator. The average number of participants in a centrality class, denoted by \(\langle N_{\mathrm{part}} \rangle \), is obtained by comparing the V0M multiplicity to a geometrical Glauber model [41]. Thus, the centrality of the collision is measured with the V0M estimator, whereas the measurement of multiplicity fluctuations is based on charged particles measured within the acceptance of the TPC.

A given centrality class comprises events whose measured V0M amplitudes lie within a given range, corresponding to a mean number of participants, \(\langle N_{\mathrm{part}} \rangle \). This results in additional fluctuations in the number of particles within each centrality class. To account for these fluctuations, a centrality interval width correction is employed. The procedure involves dividing a broad centrality class into several narrow intervals and correcting for the finite interval width using weighted moments according to [42, 43],
\[
X = \frac{\sum _{\mathrm{i}} n_{\mathrm{i}} X_{\mathrm{i}}}{\sum _{\mathrm{i}} n_{\mathrm{i}}} = \frac{\sum _{\mathrm{i}} n_{\mathrm{i}} X_{\mathrm{i}}}{N}. \qquad (4)
\]

Here, the index *i* runs over the narrow centrality intervals, and \(X_{\mathrm{i}}\) and \(n_{\mathrm{i}}\) are the corresponding moment of the distribution and the number of events in the *i*th interval, respectively. \(N = \sum _{\mathrm{i}}n_{\mathrm{i}}\) is the total number of events in the broad centrality interval.
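The weighted-moment prescription above can be sketched in a few lines; the numbers below are hypothetical and serve only to illustrate the bookkeeping:

```python
# Sketch of the centrality-width correction: a moment X_i measured in
# each narrow centrality interval is combined into the broad interval
# as an event-number-weighted average (illustrative numbers).

def width_corrected_moment(moments, n_events):
    """Weighted moment X = sum_i n_i X_i / sum_i n_i over narrow bins."""
    total = sum(n_events)
    return sum(n * x for n, x in zip(n_events, moments)) / total

# Hypothetical mean multiplicities in four 0.5%-wide sub-intervals:
means = [420.0, 410.0, 400.0, 390.0]
counts = [1000, 1000, 1000, 1000]
X = width_corrected_moment(means, counts)
```

With equal event counts the weighted average reduces to the arithmetic mean; unequal counts shift the result toward the better-populated sub-intervals.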

The centrality resolution of the combined V0A and V0C signals ranges from 0.5% in central to 2% in the most peripheral collisions [39]. A correction for the finite width of centrality intervals has been made with Eq. (4) using 0.5% centrality intervals for the 0–40% centrality range and 1% intervals for the remaining centrality classes.

### Efficiency correction

The detector efficiency factors (\(\varepsilon \)) were evaluated in bins of pseudorapidity \(\eta \), azimuthal angle \(\varphi \), and \(p_{\mathrm{T}}\). By defining \(N_{\mathrm{ch}}(x)\) as the number of produced particles in a phase-space bin at *x*, *n*(*x*) as the number of observed particles at *x*, and \(\varepsilon (x)\) as the detection efficiency, the first and second factorial moments of the multiplicity distributions can be corrected for particle losses according to the procedure outlined in Refs. [44, 45]:
\[
\langle N_{\mathrm{ch}} \rangle = \sum _{i=1}^{m} \frac{\langle n(x_{i}) \rangle }{\varepsilon (x_{i})} \qquad (5)
\]

and
\[
\langle N_{\mathrm{ch}} (N_{\mathrm{ch}}-1) \rangle = \sum _{i,j=1}^{m} \frac{\left\langle n(x_{i})\left( n(x_{j}) - \delta _{x_{i} x_{j}}\right) \right\rangle }{\varepsilon (x_{i})\, \varepsilon (x_{j})}, \qquad (6)
\]

respectively. Here, *m* denotes the number of phase-space bins and *i*, *j* are bin indices; \(\delta _{x_{\mathrm{i}}x_{\mathrm{j}}}=1\) if \(x_{\mathrm{i}}=x_{\mathrm{j}}\) and zero otherwise. The variance of the charged-particle multiplicity is then calculated as:
\[
\sigma _{\mathrm{ch}}^{2} = \langle N_{\mathrm{ch}} (N_{\mathrm{ch}}-1) \rangle - \langle N_{\mathrm{ch}} \rangle ^{2} + \langle N_{\mathrm{ch}} \rangle . \qquad (7)
\]

The correction procedure is validated by a Monte Carlo study employing two million Pb–Pb events at \(\sqrt{s_{\mathrm{NN}}}\) \(=\) 2.76 TeV generated with the HIJING event generator [46] and passed through GEANT3 simulations of the experimental setup, accounting for the acceptances of the detectors. The efficiency dependences on \(\eta \), \(\varphi \), and \(p_{\mathrm{T}}\) are calculated from the ratio of the number of reconstructed charged particles to the number of produced particles. In order to account for the \(p_{\mathrm{T}}\) dependence of the efficiency, the full \(p_{\mathrm{T}}\) range (\(0.2< p_{\mathrm{T}} < 2.0\) GeV/*c*) was divided into nine bins (0.2–0.3, 0.3–0.4, 0.4–0.5, 0.5–0.6, 0.6–0.8, 0.8–1.0, 1.0–1.2, 1.2–1.6, 1.6–2.0 GeV/*c*), with a finer binning at low \(p_{\mathrm{T}}\). In the Monte Carlo closure test, the values of \(\langle N_{\mathrm{ch}} \rangle \), \(\sigma _{\mathrm{ch}}\), and \(\omega _{\mathrm{ch}}\) of the efficiency-corrected results from the simulated events are compared to those of HIJING at the generator level. By construction, the efficiency-corrected values of \(\langle N_{\mathrm{ch}} \rangle \) match those from the generator, whereas the \(\sigma _{\mathrm{ch}}\) and \(\omega _{\mathrm{ch}}\) values differ by \(\sim 0.7\)% and \(\sim 1.4\)%, respectively. These differences are included in the systematic uncertainties.
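The moment-based efficiency correction can be illustrated for the simplest case of a single phase-space bin with a uniform efficiency \(\varepsilon \). This toy sketch (hypothetical counts, not the full multi-bin ALICE procedure) applies the first-moment and second-factorial-moment corrections and then forms the variance:

```python
# Toy efficiency correction for a single phase-space bin with
# efficiency eps (hypothetical observed counts, illustrative only).

def corrected_moments(observed, eps):
    """Efficiency-corrected <N>, <N(N-1)>, and variance for one bin."""
    n_ev = len(observed)
    mean_n = sum(observed) / n_ev
    fact2 = sum(n * (n - 1) for n in observed) / n_ev
    mean_N = mean_n / eps                    # first-moment correction
    fact2_N = fact2 / eps ** 2               # second factorial moment (i = j)
    var_N = fact2_N - mean_N ** 2 + mean_N   # variance from factorial moments
    return mean_N, fact2_N, var_N

observed = [40, 50, 60]
mean_N, fact2_N, var_N = corrected_moments(observed, eps=0.5)
```

The factorial-moment form is what makes the correction tractable: dividing raw moments by powers of \(\varepsilon \) works for \(\langle n \rangle \) and \(\langle n(n-1) \rangle \), but not directly for the variance, which is why the variance is reassembled in the last step.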

### Statistical and systematic uncertainties

The statistical uncertainties of the moments of multiplicity distributions are calculated based on the method of error propagation derived from the delta theorem [47]. The systematic uncertainties have been evaluated by considering the effects of various criteria in track selection, vertex determination, and efficiency corrections.
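For the scaled variance \(\omega _{\mathrm{ch}} = \sigma _{\mathrm{ch}}^{2}/\langle N_{\mathrm{ch}} \rangle \), the delta-method propagation can be sketched as follows; this is an illustrative toy implementation using the standard sampling (co)variances of the sample mean and sample variance, not the exact treatment of Ref. [47]:

```python
import math

# Toy delta-method statistical uncertainty for omega = var / mean,
# propagating the sampling covariances of the sample mean and the
# sample variance (hypothetical inputs, illustrative only).

def omega_stat_error(sample):
    n = len(sample)
    mean = sum(sample) / n
    # Central moments mu_k up to fourth order:
    mu = [sum((x - mean) ** k for x in sample) / n for k in range(5)]
    var = mu[2]
    omega = var / mean
    # Sampling (co)variances of the sample mean and sample variance:
    var_mean = mu[2] / n
    var_var = (mu[4] - mu[2] ** 2) / n
    cov = mu[3] / n
    # Delta method: d(omega)/d(mean) = -var/mean^2, d(omega)/d(var) = 1/mean
    d_mean = -var / mean ** 2
    d_var = 1.0 / mean
    v = d_mean ** 2 * var_mean + d_var ** 2 * var_var \
        + 2.0 * d_mean * d_var * cov
    return omega, math.sqrt(v)

omega, err = omega_stat_error([98, 102, 100, 95, 105, 101, 99, 100])
```

As expected for a delta-method estimate, the uncertainty scales as \(1/\sqrt{n}\): doubling the sample size at fixed distribution shrinks the error by \(\sqrt{2}\).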

The systematic uncertainties related to the track selection criteria were obtained by varying the track reconstruction method and the track quality cuts. The nominal analysis was carried out with charged particles reconstructed using the TPC and ITS. For systematic checks, the full analysis was repeated for tracks reconstructed using only the TPC information. The differences in the values of \(\langle N_{\mathrm{ch}} \rangle \), \(\sigma _{\mathrm{ch}}\), and \(\omega _{\mathrm{ch}}\) resulting from the two track selection methods are listed in Table 1 as part of the systematic uncertainties. The DCA\(_{\mathrm{xy}}\) and DCA\(_{\mathrm{z}}\) selections are varied by \(\pm 25\)% to obtain the systematic uncertainties caused by variations in the track quality selections. The effect of the event selection based on the vertex position is studied by restricting the *z*-position of the vertex to \(\pm 5\) cm from the nominal \(\pm 10\) cm, and additionally by removing restrictions on \(V_{\mathrm{x}}\) and \(V_{\mathrm{y}}\). The efficiency correction introduces an additional systematic uncertainty, as discussed earlier. The experimental data were recorded for two magnetic field polarities; the two data sets are analyzed separately and the differences are taken as a source of systematic uncertainty.

The individual sources of systematic uncertainty discussed above are considered uncorrelated and summed in quadrature to obtain the total systematic uncertainties reported in this work. Table 1 lists the systematic uncertainties associated with the values of \(\langle N_{\mathrm{ch}} \rangle \), \(\sigma _{\mathrm{ch}}\), and \(\omega _{\mathrm{ch}}\).

## Results and discussions

Figure 1 shows the corrected mean (\(\langle N_{\mathrm{ch}} \rangle \)), standard deviation (\(\sigma _{\mathrm{ch}}\)), and scaled variance (\(\omega _{\mathrm{ch}}\)) as a function of \(\langle N_{\mathrm{part}} \rangle \) for the centrality range considered (0–60%), corresponding to \(N_{\mathrm{part}}>45\). Uncertainties on the estimated number of participants, \(\langle N_{\mathrm{part}} \rangle \), obtained from Ref. [38], are smaller than the marker size (solid red circles) over this centrality range. The values of \(\langle N_{\mathrm{ch}} \rangle \) and \(\sigma _{\mathrm{ch}}\) increase with increasing \(\langle N_{\mathrm{part}} \rangle \), while the value of \(\omega _{\mathrm{ch}}\) decreases monotonically by \(\sim 29\)% from peripheral to central collisions.

### Comparison with models

The measured \(\omega _{\mathrm{ch}}\) values are compared with the results of simulations with the HIJING model and the string melting option of the AMPT model. HIJING [46] is a Monte Carlo event generator for parton and particle production in high-energy hadronic and nuclear collisions. It is based on QCD-inspired models which incorporate mechanisms such as multiple minijet production, soft excitation, nuclear shadowing of parton distribution functions, and jet interactions in dense hadronic matter. The HIJING model treats a nucleus–nucleus collision as a superposition of many binary nucleon–nucleon collisions. In the AMPT model [48], the initial parton momentum distribution is generated from the HIJING model. In the default mode of AMPT, energetic partons recombine and hadrons are produced via string fragmentation. The string melting mode of the model includes a fully partonic phase that hadronises through quark coalescence.

In order to enable a proper comparison with data obtained in this work, Monte Carlo events produced with HIJING and AMPT are grouped in collision centrality classes based on generator level charged-particle multiplicities computed in the ranges \(2.8< \eta < 5.1\) and \(-3.7< \eta <-1.7\), corresponding to the V0A and V0C pseudorapidity coverages. The results of the scaled variances from the two event generators are presented in Fig. 1 as a function of the estimated number of participants, \(N_{\mathrm{part}}\). As a function of increasing centrality, the \(\omega _{\mathrm{ch}}\) values obtained from the event generators show upward trends, which are opposite to those of the experimental data. It is to be noted that the Monte Carlo event generators are successful in reproducing the mean of multiplicity distributions. This follows from the fact that the particle multiplicities are proportional to the cross sections. On the other hand, the widths of the distributions originate from fluctuations and correlations associated with effects of different origins, such as long-range correlations, Bose–Einstein correlations, resonance decays, and charge conservation. Because of this, the event generators fall short of reproducing the observed scaled variances.

### Scaled variance dependence on pseudorapidity acceptance and \(p_{\mathrm{T}}\) range

Charged-particle multiplicity distributions depend on the acceptance of the detection region. Starting with the measured multiplicity fluctuations within \(|\eta |<0.8\) and \(0.2<p_{\mathrm{T}}<2.0\) GeV/*c* with a mean \(\langle N_{\mathrm{ch}} \rangle \) and scaled variance of \(\omega _{\mathrm{ch}}\), the scaled variance (\(\omega _{\mathrm{ch}}^{\mathrm{acc}}\)) for a fractional acceptance in \(\eta \) or for a limited \(p_{\mathrm{T}}\) range with mean of \(\langle N_{\mathrm{ch}}^{\mathrm{acc}} \rangle \) can be expressed as [31],