## Abstract

The pseudorapidity density of charged particles, \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \), in p–Pb collisions has been measured at a centre-of-mass energy per nucleon–nucleon pair of \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 8.16 TeV at mid-pseudorapidity for non-single-diffractive events. The results cover 3.6 units of pseudorapidity, \(|\eta |<1.8\). The \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \) value is \(19.1\pm 0.7\) at \(|\eta |<0.5\). This quantity divided by \(\langle N_{\mathrm{part}} \rangle /2\), where \(\langle N_{\mathrm{part}} \rangle \) is the average number of participating nucleons, is \(4.73\pm 0.20\), which is 9.5% higher than the corresponding value for p–Pb collisions at \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 5.02 TeV. Measurements are compared with models based on different mechanisms for particle production. All models agree with the data within uncertainties on the Pb-going side, while on the p-going side of the \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \) distribution HIJING overestimates the data, showing a symmetric behaviour, and EPOS underestimates them. Saturation-based models reproduce the distributions well for \(\eta >-1.3\). The \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \) is also measured for different centrality estimators, based both on the charged-particle multiplicity and on the energy deposited in the Zero-Degree Calorimeters. The implications of the large multiplicity fluctuations, arising from the small number of participants in systems like p–Pb, for the centrality calculation with multiplicity-based estimators are discussed, demonstrating the advantages of determining the centrality with the energy deposited near beam rapidity.

## Introduction

Particle production in proton–nucleus (pA) collisions is influenced by nuclear effects in the initial state. In particular, p–Pb collisions are a valuable tool to study initial-state effects, which are present as a consequence of the nucleons being bound into nuclei. Additionally, the particle multiplicity is an important tool to study the various theoretical models of gluon saturation, which contain different treatments of the upper limit in the growth of the parton density. Therefore, pseudorapidity density measurements can provide constraints to the modelling of the initial state at small Bjorken-*x*. Moreover, evidence for collective phenomena has been observed in p–Pb collisions, with the magnitude of the effects increasing with event multiplicity [1,2,3,4,5,6,7,8,9]. Proton–nucleus collisions also serve as a tool to study final-state effects that are sensitive to the formation of a Quark–Gluon Plasma in heavy-ion collisions, a topic under active scrutiny by the community [10]. For these reasons, it is important to understand the collision geometry and the global properties of the system produced in p–Pb collisions.

This paper presents a measurement of the primary charged-particle density in p–Pb collisions, \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\), at a nucleon–nucleon centre-of-mass energy of \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 8.16 TeV for pseudorapidities \(|\eta _\mathrm {lab}|<1.8\) in the laboratory system. A primary charged particle is defined as a charged particle with a mean proper lifetime \(\tau \) larger than \(1~\hbox {cm}/c\), which is either produced directly in the interaction or from decays of particles with \(\tau \) smaller than \(1~\hbox {cm}/c\), excluding particles produced in interactions with the beam pipe, material of the subdetectors, cables and support structures [11]. The dominant processes in p–Pb collisions are the non-diffractive ones. Diffractive events can be single-, double- or central-diffractive, and results are presented for non-single-diffractive (NSD) events. Data are compared to other experimental measurements available in pp, p–Pb, d–Au and AA collisions. Results are also compared with simulations (performed with HIJING 2.1 [12, 13], EPOS 3 [14,15,16] and EPOS LHC [17]) and with calculations incorporating the saturation of the gluon density in the colliding hadrons (MC-rcBK [18, 19] and KLN [20, 21]).

The rest of this article is organised in the following way: Sect. 2 describes the experimental conditions and the detectors used to measure the centrality of the event and the pseudorapidity density of charged particles. In Sect. 3, the centrality determination methodologies are described, both the ones using the multiplicity distributions of charged particles and the alternative one that relies on the energy collected in the neutron Zero-Degree Calorimeters (ZDCs). Section 4 explains, in detail, the analysis procedure to measure the \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \). The systematic uncertainties are described in Sect. 5, and the results along with comparisons to models are presented in Sect. 6. A brief summary and conclusions are given in Sect. 7.

## Experimental setup

The p–Pb data were provided by the Large Hadron Collider (LHC) in December 2016. Two beam configurations were exploited: in one, denoted by p–Pb below, the proton beam circulated towards the negative *z* direction in the ALICE laboratory system, while \(^{208} {\mathrm{Pb}}\) ions circulated in the opposite direction; in the second configuration, denoted by Pb–p, the directions of both beams were reversed. The total luminosity was \(0.06~\hbox {nb}^{-1}\), corresponding to around 120 million minimum-bias (MB) events in the p–Pb and Pb–p configurations. The beams in both rings have the same magnetic rigidity. The nucleon–nucleon centre-of-mass energy was \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 8.16 TeV, with both p and Pb beams at 6.5 TeV per proton charge. Due to the asymmetric collision system, there is a shift in the centre-of-mass rapidity of \(\Delta y = 0.465\) in the direction of the proton beam.

Full details of the ALICE detector are given elsewhere [22, 23]. The main element used for the analysis was the Silicon Pixel Detector (SPD): the two innermost cylindrical layers of the ALICE Inner Tracking System [22], made of hybrid silicon pixel chips. The SPD is located inside a solenoidal magnet that provides a magnetic field of 0.5 T. The first layer covers \(\vert \eta _{\mathrm{lab}}\vert <2.0\) for collisions at the nominal Interaction Point (IP), while the second covers \(\vert \eta _{\mathrm{lab}}\vert <1.4\). The layers have full azimuthal coverage and radii of \(3.9~\hbox {cm}\) and \(7.6~\hbox {cm}\), respectively. In total, the SPD has \(9.8\times 10^6\) silicon pixels, each of size \(50\times 425~\mathrm{{\upmu }m}^2\).

The MB trigger signal is given by a hit in both V0 hodoscopes [24]. The V0 detector is composed of two arrays of 32 scintillators positioned at 3.3 m (V0A) and \(-0.90\) m (V0C) from the nominal IP along the beam axis. Each array has a ring structure segmented into 4 radial and 8 azimuthal sectors. The detector has full azimuthal coverage in the pseudorapidity ranges \(2.8<\eta _{\mathrm{lab}}<5.1\) and \(-3.7<\eta _{\mathrm{lab}}<-1.7\). The signal amplitudes and particle arrival times are recorded for each of the 64 scintillators. The V0 is well suited for triggering thanks to its good timing resolution (below 1 ns) and its large angular acceptance. The timing is used to discriminate beam–beam collisions from background events, like beam–gas and beam–halo events, produced outside the interaction region. The neutron ZDCs [25] are likewise utilised for background rejection. The neutron calorimeters, ZNs, are quartz-fibre spaghetti calorimeters placed at zero degrees with respect to the LHC beam axis, positioned at 112.5 m (ZNA) and \(-112.5\) m (ZNC) from the nominal IP. The ZNs detect neutral particles emitted at pseudorapidities \(\vert \eta _{\mathrm{lab}}\vert >8.7\) and have an energy resolution of around 18% for neutron energies of 2.56 TeV. ALICE is also equipped with proton calorimeters, ZPs, which are not used in this analysis.

A subsample of 6.8 million events is analysed for p–Pb collisions, with an average number of interactions per bunch crossing, \(\langle \mu \rangle \), of 0.004. A subsample of 2.7 million events is analysed for Pb–p collisions, with \(\langle \mu \rangle = 0.007\). The comparison of p–Pb and Pb–p results is used to assess the systematic uncertainties. The hardware MB trigger, requiring a signal in both V0A and V0C, is configured to have high efficiency for hadronic events. Beam–gas and beam–halo interactions are suppressed in the analysis by requiring offline that the arrival times of particles in the V0 and ZN detectors be compatible with collisions from the nominal IP. The contamination from background is estimated to be negligible using control triggers on non-colliding bunches.

The event sample after trigger and timing selection consisted of NSD, single-diffractive (SD), and electromagnetic (EM) interactions. The MB trigger efficiency for NSD events is estimated to be 99.2% using the DPMJet Monte Carlo event generator [26], and 99.5% using HIJING 1.36 [27]. HIJING 1.36 combines perturbative-QCD processes with soft interactions, and includes a strong impact parameter dependence of parton shadowing. DPMJet is based on the Gribov-Glauber approach and treats soft and hard scattering processes in a unified way. It includes incoherent SD collisions of the projectile proton with target nucleons; these interactions are concentrated mainly on the surface of the nucleus. The generated particles are transported through the experimental setup using the GEANT3 [28] software package. SD collisions are removed in DPMJet by requiring that at least one of the binary nucleon–nucleon interactions is NSD. The SD and EM contaminations are estimated from Monte Carlo simulation studies to be around 0.03% and below 0.3%, respectively.

Among the selected events in data, 99% had a primary interaction vertex. In DPMJet this fraction was 99.6% (99.8% for HIJING 1.36), with a trigger and selection efficiency for events without a primary vertex of 28% (23.1%). Taking into account the difference of the fraction of events without a vertex in the data and the simulation, the overall selection efficiency for NSD events in the analysis is estimated to be 97.0% (96.2%) according to DPMJet (HIJING 1.36).

## Centrality determination

The Glauber model [29, 30] is used to calculate the number of participating nucleons (participants), \(N_{\mathrm{part}}\), and the corresponding number of nucleon–nucleon collisions, \(N_{\mathrm{coll}}\), which depend on the collision impact parameter, *b*. Indeed, the number of produced particles changes with the amount of matter overlapping in the collision region; \(N_{\mathrm{part}}\) and \(N_{\mathrm{coll}}\) quantitatively describe this variation. In pA collisions, \(N_{\mathrm{coll}}\) \(=\) \(N_{\mathrm{part}}\) \(-1\). Using the Glauber model, it is possible to calculate the probability distributions of the relevant parameters, \(N_{\mathrm{part}}\) and \(N_{\mathrm{coll}}\), which for pA collisions are only loosely correlated to *b*. Centrality classes are defined as percentile intervals of the visible cross section, i.e. of the event sample remaining after the selections described in Sect. 2. The numbers of participating nucleons and of nucleon–nucleon collisions are calculated, accordingly, for the visible cross section.

The centrality is determined with three different estimators, two of which are based on observables well separated in pseudorapidity to limit the effect of short-range correlations in the collision region. The method based on multiplicity estimators relies on fitting the measured charged-particle multiplicity distributions with an \(N_{\mathrm{coll}}\) distribution obtained from the Glauber model, convoluted with a Negative Binomial Distribution (NBD) that models the multiplicity produced in a single collision. Multiplicity fluctuations play an important role in pA collisions: the range of multiplicities used to define a centrality class is of the same order of magnitude as the width of the multiplicity fluctuations [31]. Therefore, classifying events by multiplicity selects a biased sample of nucleon–nucleon collisions. A high-multiplicity sample selects not only a class with a larger-than-average \(\langle N_{\mathrm{part}} \rangle \), but also one that is widely spread in \(N_{\mathrm{coll}}\), which leads to deviations from the binary scaling of hard processes. These high-multiplicity nucleon–nucleon collisions have a higher mean particle transverse momentum \(p_\mathrm{T}\), and are collisions in which Multiple Parton Interactions (MPI) are more likely [4]. The opposite happens for low-multiplicity events.

In contrast, the centrality determined with the hybrid method, described in Sect. 3.2 and based on the energy deposited in the ZDCs, minimises biases on the binary scaling of hard processes. Indeed, the ZDCs detect, at large \(\eta \) separation from the central region, the nucleons produced in the interaction through the nuclear de-excitation process or knocked out by participants (so-called slow nucleons). A heuristic approach based on an extrapolation from low-energy data is discussed in a previous publication [31].

### Centrality from charged-particle distributions

In the method based on multiplicity estimators [31], the events are classified into centrality classes using either the number of clusters in the outer layer of the SPD (CL1 estimator), with acceptance \(|\eta _{\mathrm{lab}}| < 1.4\), or the amplitude measured by the V0 on the Pb-remnant side: the A-side for p–Pb (V0A estimator) or the C-side for Pb–p (V0C estimator) collisions. The amplitudes are fitted with a Monte Carlo implementation of the Glauber model, assuming that the number of independent particle-emitting sources is given by \(N_{\mathrm{part}}\) / 2, convoluted with an NBD describing the particle production per source, parametrised with \(\mu \) and *k*, where \(\mu \) is the mean multiplicity per source and *k* controls the contribution at high multiplicity. The nuclear density for Pb is modelled by a Woods–Saxon distribution for a spherical nucleus with a radius of \(6.62 \pm 0.06~\hbox {fm}\) and a skin thickness of \(0.55 \pm 0.01~\hbox {fm}\) [32]. The hard-sphere exclusion distance between nucleons is \(0.40 \pm 0.40~\hbox {fm}\). For \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 8.16 TeV collisions, an inelastic nucleon–nucleon cross section of \(72.5 \pm 0.5~\hbox {mb}\) is used, obtained by interpolation of experimental cross section values [32].

The measured V0A distribution with the NBD-Glauber fit is shown in Fig. 1. A similar fit has been performed for the CL1 estimator. The deviation of the fit from the data for amplitudes smaller than about 10 is due to trigger inefficiencies in peripheral collisions. The average number of participants, of collisions and the nuclear overlap function, \(\langle T_{{\mathrm{pPb}}}\rangle \), are calculated from the NBD-Glauber simulation for every defined centrality class. The values for the different estimators are given in Table 1. The systematic uncertainties are obtained by repeating the fit, varying the Glauber parameters (radius, skin thickness and hard-sphere exclusion distance) within their uncertainties. The average number of participants over all selected events is \(\langle N_{\mathrm{part}}\rangle = 8.09 \pm 0.17\). The increase in the average \(N_{\mathrm{part}}\), when calculated for NSD collisions only, is around 2% and within systematic uncertainties. The geometrical properties determined with the NBD-Glauber model are robust and approximately independent of the centrality estimator used, within the model assumptions of this approach.
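The structure of the NBD-Glauber ansatz can be illustrated with a minimal Monte Carlo sketch. The \(N_{\mathrm{part}}\) probabilities and the NBD parameters \(\mu \) and *k* below are hypothetical placeholders, not the fitted ALICE values; the sketch only shows how a multiplicity distribution is built from \(N_{\mathrm{part}}/2\) sources, each emitting according to an NBD, and how centrality classes follow as percentiles of that distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def nbd_glauber_multiplicity(n_part, mu, k, rng):
    """Multiplicity of one event: N_part/2 particle-emitting sources,
    each following a Negative Binomial Distribution with mean mu and
    shape k. The sum of m i.i.d. NBD(mu, k) variables is again an NBD
    with shape m*k, so fractional source counts are handled by scaling k."""
    n_sources = n_part / 2.0
    n = k * n_sources          # numpy's shape parameter (may be non-integer)
    p = k / (k + mu)           # numpy's success-probability parameter
    return rng.negative_binomial(n, p)

# Hypothetical inputs for illustration only: neither the N_part
# probabilities nor (mu, k) are the values obtained from the ALICE fit.
n_part_sample = rng.choice([2, 4, 8, 12, 16], size=100_000,
                           p=[0.35, 0.30, 0.20, 0.10, 0.05])
mult = np.array([nbd_glauber_multiplicity(n, mu=25.0, k=1.5, rng=rng)
                 for n in n_part_sample])

# Centrality class boundaries as percentiles of the simulated multiplicity
edges = np.percentile(mult, [95, 90, 80, 60, 40, 20])
```

In the actual analysis, \(\mu \) and *k* are free parameters of the fit to the measured V0A (or CL1) distribution, and the \(N_{\mathrm{part}}\) sample comes from the full Glauber simulation with the Woods–Saxon geometry quoted above.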

### Centrality from Zero degree Calorimeter and the hybrid method

The ZNs detect the slow neutrons produced in the interaction. The multiplicity of slow nucleons is monotonically related to \(N_{\mathrm{coll}}\) and can, therefore, be used to determine the centrality of the collision [31]. The ZPs are not used, since the resulting uncertainty on \(N_{\mathrm{coll}}\) would be much larger. The distribution of the neutron energy measured on the Pb-going side, \(E_{\mathrm{ZNA}}\), is shown in Fig. 2 and is used for the hybrid method, which aims to provide an unbiased centrality estimator. The method is based on two assumptions. The first is that an event selection based on the energy deposited in the ZDCs is free from the multiplicity-fluctuation biases affecting particle production at mid-rapidity. The second is that the wounded nucleon model holds [33] and that the observables defined below scale linearly with \(N_{\mathrm{coll}}\) and \(N_{\mathrm{part}}\), allowing a relationship to the collision geometry to be established. Two sets of \(\langle N_{\mathrm{coll}} \rangle \) values, \(\langle N_{\mathrm{coll}} \rangle ^{\mathrm{mult}}\) and \(\langle N_{\mathrm{coll}} \rangle ^{\mathrm{Pb\text{-}side}}\), are calculated for each centrality bin *i* estimated using the ZN. The first set is computed assuming that the charged-particle multiplicity at mid-rapidity is proportional to \(N_{\mathrm{part}}\): \(\langle N_{\mathrm{part}} \rangle ^{\mathrm{mult}}_{i}=\langle N_{\mathrm{part}} \rangle _{\mathrm{MB}}\cdot (\langle \mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\rangle _{i}/\langle \mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\rangle _{{\mathrm{MB}}})\), where \(\langle N_{\mathrm{part}} \rangle _{\mathrm{MB}}\) is the average number of participating nucleons in MB collisions reported in Table 1, and, consequently, \(\langle N_{\mathrm{coll}} \rangle _{i}^{\mathrm{mult}}=\langle N_{\mathrm{part}} \rangle ^{\mathrm{mult}}_{i}-1\). 
The second set is calculated using the Pb-side multiplicity: \(\langle N_{\mathrm{coll}} \rangle _{i}^{\mathrm{Pb\text{-}side}}=\langle N_{\mathrm{coll}} \rangle _{\mathrm{MB}}\cdot (\langle S\rangle _{i}/\langle S\rangle _{\mathrm{MB}})\), where *S* is the raw signal of the innermost ring of the V0A for p–Pb (\(4.5<\eta _{\mathrm{lab}}<5.1\)) and of the V0C for Pb–p collisions (\(-3.7<\eta _{\mathrm{lab}}<-3.2\)). A comparison of the \(N_{\mathrm{coll}}\) values obtained with the various estimators is reported in Table 2 for p–Pb collisions. The two sets are consistent with each other and with the values calculated for Pb–p. The systematic uncertainties are obtained from the uncertainty on \(N_{\mathrm{coll}}\) for 0–100% in Table 1, combined with the maximum difference between \(N_{\mathrm{coll}}^{\mathrm{mult}}\) and \(N_{\mathrm{coll}}^{\mathrm{Pb\text{-}side}}\).
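The two scaling relations of the hybrid method can be sketched as follows. The MB value \(\langle N_{\mathrm{part}} \rangle _{\mathrm{MB}} = 8.09\) is the one quoted in Sect. 3.1, while the ratios passed as arguments (per-class over MB averages of \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) and of the Pb-side signal *S*) stand in for measured inputs.

```python
# Sketch of the hybrid-method scaling relations for pA collisions.
N_PART_MB = 8.09              # <N_part> for MB collisions (Table 1)
N_COLL_MB = N_PART_MB - 1.0   # N_coll = N_part - 1 in pA collisions

def n_coll_mult(dndeta_ratio):
    """<N_coll>_i^mult, assuming mid-rapidity multiplicity is
    proportional to N_part.
    dndeta_ratio = <dN_ch/deta_lab>_i / <dN_ch/deta_lab>_MB."""
    n_part_i = N_PART_MB * dndeta_ratio
    return n_part_i - 1.0

def n_coll_pb_side(signal_ratio):
    """<N_coll>_i^Pb-side, assuming the Pb-side V0 signal is
    proportional to N_coll.
    signal_ratio = <S>_i / <S>_MB."""
    return N_COLL_MB * signal_ratio
```

For a MB-like bin (both ratios equal to 1), both relations reduce to \(\langle N_{\mathrm{coll}} \rangle _{\mathrm{MB}} = 7.09\), as they must.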

## Analysis procedure

The technique for the \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) measurement is the same as the one employed at \(\sqrt{s_{\scriptscriptstyle {\mathrm{NN}}}}\) = 5.02 TeV [31, 34]. The pseudorapidity acceptance in the laboratory system depends on the position of the primary interaction vertex along the beamline, \(z_\mathrm{vtx}\). The position of the primary vertex is obtained by correlating hits in the two silicon-pixel layers (SPD vertex). The selection of a reconstructed vertex within \(|z_\mathrm{vtx}|<15~\hbox {cm}\) allows a range of \(|\eta _{\mathrm{lab}}|<1.8\) to be covered. In order to maximise the pseudorapidity coverage, tracklets (short track segments), formed from two hits in the SPD, one in the first and one in the second layer, are used instead of tracks. In order to select combinations corresponding to charged particles, the angular differences in the azimuthal direction, \(\Delta \varphi \), and in the polar direction, \(\Delta \theta \), of the inner- and outer-layer hits with respect to the reconstructed primary vertex are determined for each pair of hits. The sum of the squares of the weighted differences in azimuthal and polar angles, \(\delta ^{2}=(\Delta \varphi /\sigma _{\varphi })^{2}+(\Delta \theta /\sigma _{\theta })^{2}\), is then required to be less than 1.5, with \(\sigma _{\varphi }=60\) mrad and \(\sigma _{\theta }=25\sin ^{2}\theta \) mrad, the \(\sin ^{2}\theta \) factor accounting for the dependence of the pointing resolution on \(\theta \). With such a requirement, tracklets corresponding to charged particles with \(p_\mathrm{T}\) \(>50\) MeV / *c* are effectively selected. Particles with lower \(p_\mathrm{T}\) are mostly absorbed by the detector material or lost due to the bending in the magnetic field. 
A cross check utilising pp collisions [35] has shown full compatibility of analyses using tracklets and tracks, where the tracks have been reconstructed in the Time Projection Chamber matched with clusters in the Inner Tracking System.
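The \(\delta ^{2}\) requirement amounts to an elliptical cut in the \((\Delta \varphi , \Delta \theta )\) plane whose polar-angle axis shrinks towards the edges of the acceptance. A minimal sketch of this selection (the function name is ours; angles are in radians):

```python
import math

def tracklet_selected(dphi, dtheta, theta, delta_sq_max=1.5):
    """Tracklet quality selection:
    delta^2 = (dphi/sigma_phi)^2 + (dtheta/sigma_theta)^2 < 1.5,
    with sigma_phi = 60 mrad and sigma_theta = 25*sin^2(theta) mrad.
    dphi, dtheta: angular differences of the hit pair w.r.t. the vertex;
    theta: polar angle of the tracklet."""
    sigma_phi = 60e-3                          # 60 mrad
    sigma_theta = 25e-3 * math.sin(theta) ** 2  # 25*sin^2(theta) mrad
    delta_sq = (dphi / sigma_phi) ** 2 + (dtheta / sigma_theta) ** 2
    return delta_sq < delta_sq_max
```

For example, at \(\theta = \pi /2\) an azimuthal difference of 100 mrad alone already gives \(\delta ^{2}\approx 2.8\) and is rejected, while perfectly aligned hits trivially pass.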

The raw multiplicity measured by tracklets needs to be corrected for (i) the acceptance and the efficiency of a primary track to be reconstructed as a tracklet, (ii) the contribution from combinatorial tracklets, i.e. those whose two hits do not originate from the same primary particle, (iii) the difference between the fraction of events without a vertex in the data and in the simulation, and (iv) the secondary-particle contamination. The first three corrections are computed using simulated data from the HIJING 1.36 or DPMJet event generators. The centrality definition in the simulated data is adjusted such that the particle density is similar to that in real data for the same centrality classes. The correction factors (i) and (ii), determined as a function of \(z_{\mathrm{vtx}}\) and \(\eta _{\mathrm{lab}}\), are on average around 1.5 for the acceptance and reconstruction efficiency, and around 0.02 for the combinatorial background removal in MB and centrality-dependent measurements at mid-rapidity, independently of the estimator and of the centrality class selected. At \(|\eta _{\mathrm{lab}}|=1.8\) the combinatorial background contribution reaches a maximum value of 0.07. The measurement is further corrected for the difference in the fraction of events without a vertex observed in data and simulation. The correction for the MB \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) amounts to 2.2% (3.4%) when using DPMJet (HIJING 1.36). Since the centrality classes are defined as percentiles of the visible cross section, the centrality-dependent measurements are not corrected for trigger inefficiencies. Differences in the strange-particle content observed at lower beam energies [6, 36] have been used for a data-driven correction applied to the generator output, giving rise to a correction of \(-0.6\%\), independent of centrality.

## Systematic uncertainties

Several sources of systematic uncertainty were investigated. The uncertainty coming from the selection on the tracklet quality value \(\delta ^{2}\) is negligible at mid-rapidity and amounts to 0.5% at \(|\eta _{\mathrm{lab}}|=1.8\). The other uncertainties associated with the MB \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) are independent of pseudorapidity. The uncertainty resulting from the subtraction of the contamination from weak decays of strange hadrons is estimated to be about 1.3%, by varying the yield of strange particles other than kaons by \(\pm 50\%\). The uncertainty in detector acceptance and reconstruction efficiency is estimated to be 2.2% by carrying out the analysis for different slices of the \(z_{\mathrm{vtx}}\) position distribution and with subsamples in azimuth. The measurement for Pb–p collisions gives rise to an additional contribution of 1.8%, when reflected in \(\eta _{\mathrm{lab}}\), for the most peripheral centrality bin (80–100%), and of 1.1% for 60–80% at \(|\eta _{\mathrm{lab}}|=1.8\), which is added to the systematic uncertainty for the acceptance. For the other centrality bins and the MB result, the difference between p–Pb and Pb–p is negligible and already accounted for in the acceptance and reconstruction efficiency uncertainty. The uncertainty related to the trigger and event selection efficiency for NSD collisions is estimated to be 0.8% by taking into account the differences in the efficiency obtained with HIJING 1.36 and DPMJet. An additional 1.2% uncertainty comes from the difference in the scaling factors due to the events without a vertex obtained with the two event generators, as discussed in Sect. 4. A Monte Carlo test was also carried out with DPMJet to check the difference between the results obtained from NSD generated events and from selected events, resulting in a difference of 0.2% for the MB result, absorbed in the trigger efficiency uncertainty, and of 1.7% (0.2%) for the 80–100% (60–80%) centrality bin. 
The contribution due to the subtraction of the background is studied using an alternative method where fake hits are injected into real events and it gives rise to a 0.3% uncertainty. The uncertainty from the material budget is 0.1%, while the uncertainty due to the particle composition amounts to 0.3%. The contributions from the extrapolation down to zero \(p_\mathrm{T}\) and from the pileup are found to be negligible.

The final systematic uncertainties assigned to the measurements are the quadratic sums of the individual contributions. An overview of the systematic uncertainties is presented in Table 3. For the MB \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\), they amount to 3.0%. For centrality-dependent measurements the total uncertainty for central events is 2.6%. For the most peripheral events it is 3.1% at mid-rapidity and 3.6% at \(|\eta _{\mathrm{lab}}|=1.8\). The difference in uncertainty between the MB and the centrality-dependent measurements is mostly due to the contributions from the selection efficiency for NSD events, which are not included in the centrality-dependent measurements, and to the difference between p–Pb and Pb–p collisions, which is most relevant for the most peripheral events at \(|\eta _{\mathrm{lab}}|=1.8\).
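The quadratic combination can be checked against the quoted MB total. Summing in quadrature the pseudorapidity-independent MB contributions listed above (1.3% strangeness subtraction, 2.2% acceptance and efficiency, 0.8% trigger and event selection, 1.2% events without a vertex, 0.3% background subtraction, 0.1% material budget, 0.3% particle composition), and assuming these are the only non-negligible sources entering the MB total, reproduces the 3.0% figure:

```python
import math

def total_systematic(contributions_percent):
    """Quadratic sum of independent systematic uncertainties (in %)."""
    return math.sqrt(sum(c * c for c in contributions_percent))

# Pseudorapidity-independent MB contributions quoted in this section, in %.
# Assumption: these are the complete set of non-negligible MB sources.
mb_sources = [1.3, 2.2, 0.8, 1.2, 0.3, 0.1, 0.3]
total = total_systematic(mb_sources)   # ~3.0, matching the quoted MB total
```

The same combination rule, with the centrality-dependent contributions of Table 3, yields the 2.6–3.6% range quoted for the centrality-dependent measurements.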

## Results

The pseudorapidity density as a function of \(\eta _{\mathrm{lab}}\) is presented in Fig. 3 for \(|\eta _{\mathrm{lab}}|<1.8\). An asymmetry between the proton and the lead hemispheres is observed, with the number of charged particles being higher on the Pb-going side (positive \(\eta _{\mathrm{lab}}\)). The ALICE measurement is compared with the pseudorapidity density measured by CMS [37], showing very good agreement within systematic uncertainties, although the CMS results exclude prompt leptons. The result is also compared with several models implementing different descriptions of particle production, all shifted by \(\Delta \eta = 0.465\) to take into account the shift to the laboratory system. The improved version of HIJING 2.1 [12, 13] includes the Cronin effect, as well as a strong nuclear shadowing effect (sg \(=0.28\)), in order to explain the global properties of the final hadron system in p–Pb collisions [34]. The model describes well both the normalisation and the shape of the distribution on the Pb-going side, while it overestimates the p-going side, showing a symmetric behaviour, as for the p–Pb collisions at 5.02 TeV. The \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) versus \(\eta _{\mathrm{lab}}\) is compared with two different versions of EPOS. EPOS LHC [17] is a tune of EPOS 1.99 based on LHC data. It is designed to describe all bulk properties of hadronic interactions and is based on Gribov–Regge theory for partons. It incorporates collective effects with a separation of the initial state into a core and a corona. EPOS LHC reproduces the Pb-going side, although it underestimates the p-going side of the distribution, showing a stronger asymmetry than the data. EPOS 1.99 contains collective flow parametrised at freeze-out, while EPOS 3 [14,15,16] includes a full viscous hydrodynamical simulation. It starts from flux-tube initial conditions, which are generated in the Gribov–Regge multiple scattering framework. 
It reproduces the most forward part of the distribution on the Pb-going side, but underestimates the overall normalisation, the mid-rapidity region and the p-going side of the \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta _{\mathrm{lab}}\) distribution. Finally, the distribution is compared with two saturation-based models, MC-rcBK [18, 19] and KLN [20, 21], which contain a mechanism to limit the number of partons and particles produced. The MC-rcBK results are obtained using the McLerran–Venugopalan model (\(\gamma =1\)) [59] for the Albacete–Armesto–Milhano–Quiroga–Salgado initial conditions [60]. The saturation-based models perform best, underlining the necessity of a mechanism limiting the number of partons produced. Indeed, both MC-rcBK and KLN reproduce the distribution well, within the uncertainties of the data, and only start to deviate in the region \(\eta _{\mathrm{lab}}<-1.3\). The MC-rcBK model describes the p–Pb distribution at 8.16 TeV better than the one at 5.02 TeV. The shadowing mechanism used by HIJING is not sufficient to limit the number of partons produced on the p-going side. Both EPOS and HIJING contain final-state effects, yet their performance is worse than that of models based on initial-state effects only, like MC-rcBK and KLN. This suggests that, for the models considered, final-state effects do not play a significant role for the \(\mathrm {d}N_{\mathrm{ch}}/\mathrm {d}\eta \) observable. Nevertheless, all models lie within about 10% of the data and reproduce the Pb-going side within systematic uncertainties.