Bose-Einstein correlations of charged hadrons in proton-proton collisions at $\sqrt{s} =$ 13 TeV

Bose-Einstein correlations of charged hadrons are measured over a broad multiplicity range, from a few particles up to about 250 reconstructed charged hadrons in proton-proton collisions at $\sqrt{s} =$ 13 TeV. The results are based on data collected using the CMS detector at the LHC during runs with a special low-pileup configuration. Three analysis techniques with different degrees of dependence on simulations are used to remove the non-Bose-Einstein background from the correlation functions. All three methods give consistent results. The measured lengths of homogeneity are studied as functions of particle multiplicity as well as average pair transverse momentum and mass. The results are compared with data from both CMS and ATLAS at $\sqrt{s} =$ 7 TeV, as well as with theoretical predictions.


Introduction
Bose-Einstein correlations (BECs) are quantum statistical in nature and have been used for several decades to probe the size and shape of the particle-emitting region in high energy collisions [1,2]. These techniques characterize the size of the emission region at the freeze-out stage of the evolving system. Such studies have been performed by the CMS Collaboration at the CERN LHC in proton-proton (pp) collisions at $\sqrt{s} =$ 0.9 TeV [3,4], 2.36 TeV [3], and 7 TeV [4]. In these analyses, the one-dimensional (1D) correlations were measured in terms of the invariant relative momentum, $q_{\mathrm{inv}}^2 = -q^\mu q_\mu = -(k_1 - k_2)^2 = m_{\mathrm{inv}}^2 - 4m_\pi^2$, where $k_i$ refers to the four-momentum of each particle of the pair. The pion mass ($m_\pi$) is assumed for all of the charged particles, since pions constitute the majority of the produced hadrons. Multi-dimensional analyses of the correlation functions in pp, proton-lead (pPb), and lead-lead (PbPb) collisions were performed by CMS [5] to explore the size of the source in different directions. Similar studies were also performed by other experiments [6-16]. This paper uses CMS data at $\sqrt{s} =$ 13 TeV to extend the investigation of one-dimensional BECs with charged hadrons produced in pp collisions to both a higher center-of-mass energy and higher charged particle multiplicities (up to 250 particles).
Studies using pp (and later pPb) events with very high charged particle multiplicities led to the observation of "ridge-like" correlations (i.e., near-side (∆φ ∼ 0) long-range (|∆η| > 2) anisotropic azimuthal correlations) [17][18][19][20][21][22] associated with collective flow. In nucleus-nucleus collisions, such structures can be parameterized by a Fourier expansion, providing information about the initial collision geometry and its fluctuations. In hydrodynamic models, initial state anisotropies are propagated to the final state via ultrarelativistic inviscid fluid evolution up to the freeze-out stage of the system. Additional measurements employing high multiplicity (HM) events in pp and in pPb collisions at the LHC resulted in evidence of collective behavior [23,24] even in such small colliding systems. Altogether, these results indicate that events with high multiplicity produced in pp collisions exhibit some properties similar to those in relativistic heavy ion collisions. The origin of such phenomena in small systems is still under debate [25], and BEC studies supply complementary information to shed light onto the origin of the observed similarities.
In pp collisions, dynamical correlations in the kinematic region of interest for BEC studies can also arise from processes such as resonance decays and jets. In particular, for events with a small number of particles, the relative non-BEC contribution is enhanced. On the other hand, events in the high multiplicity range in pp collisions are more likely than events with similar multiplicities in heavy ion collisions to be affected by multi-jets. Therefore, the importance of accurate removal of these background effects is enhanced for the correlations studied in the current investigation. To address this requirement, the analysis is performed with three techniques that differ from each other in their dependence on simulations.
Correlation functions are used to find the 1D radius fit parameter ($R_\mathrm{inv}$, also called the length of homogeneity [26]) and the intercept parameter ($\lambda$), corresponding to the intensity of the correlation function at $q_\mathrm{inv} = 0$. Fitted values of $R_\mathrm{inv}$ and $\lambda$ are presented as functions of event multiplicity as well as of average pair transverse momentum ($k_\mathrm{T} = \frac{1}{2}|\vec{p}_{\mathrm{T},1} + \vec{p}_{\mathrm{T},2}|$) and transverse mass ($m_\mathrm{T} = \sqrt{m_\pi^2 + k_\mathrm{T}^2}$). The results are also compared with both previous experimental data and theoretical predictions. This paper is structured as follows: Sections 2-4 describe the experimental setup, the datasets and Monte Carlo (MC) simulations employed in the analysis, and the event and track selections, respectively. The generation, correction, and fitting procedures for the correlation functions, and the systematic uncertainties in those procedures, are detailed in Sections 5 and 6, respectively. Results are presented in Section 7, including comparisons with results from pp collisions at $\sqrt{s} =$ 7 TeV and theoretical predictions. Finally, Appendix A gives additional details on the analysis techniques and Appendix B describes the study of an anticorrelation that was previously seen at lower energies [4,5]. This anticorrelation is also investigated in pp collisions at 13 TeV over the broad multiplicity range covered by this analysis; in particular, the depth of the dip is nonzero for events with high multiplicity. A more detailed discussion is given in an appendix because this topic is outside the main physics thrust of this paper.
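The pair observables defined above ($q_\mathrm{inv}$, $k_\mathrm{T}$, $m_\mathrm{T}$) can be evaluated directly from the track momenta under the pion-mass assumption. A minimal sketch in Python (function names and the plain-tuple representation are illustrative, not part of the analysis code):

```python
import math

M_PI = 0.13957  # charged-pion mass in GeV, assumed for all tracks

def four_momentum(px, py, pz, m=M_PI):
    """Return (E, px, py, pz) for a track, assuming the pion mass."""
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e, px, py, pz)

def q_inv(k1, k2):
    """Invariant relative momentum: q_inv^2 = -(k1 - k2)^2 = m_inv^2 - 4 m_pi^2."""
    de = k1[0] - k2[0]
    dx, dy, dz = (k1[i] - k2[i] for i in (1, 2, 3))
    return math.sqrt(max(dx * dx + dy * dy + dz * dz - de * de, 0.0))

def k_t(k1, k2):
    """Average pair transverse momentum, kT = |pT1 + pT2| / 2."""
    return 0.5 * math.hypot(k1[1] + k2[1], k1[2] + k2[2])

def m_t(k1, k2):
    """Pair transverse mass, mT = sqrt(m_pi^2 + kT^2)."""
    kt = k_t(k1, k2)
    return math.sqrt(M_PI * M_PI + kt * kt)
```

For equal-mass pairs, $q_\mathrm{inv}^2 = m_\mathrm{inv}^2 - 4m_\pi^2$ holds identically, which provides a simple cross-check of the implementation.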

The CMS detector
The central feature of the CMS detector is a superconducting solenoid of 6 m internal diameter. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL, |η| < 3), and a brass and scintillator hadron calorimeter (HCAL, |η| < 3), each composed of a barrel and two endcap sections, where η is the pseudorapidity. Quartz-fiber Cherenkov hadron forward (HF) calorimeters (3 < |η| < 5) complement the coverage provided by the barrel and endcap detectors on both sides of the interaction point. These HF calorimeters are azimuthally subdivided into 20° modular wedges and further segmented to form 0.175 × 0.175 (∆η × ∆φ) "towers". A muon system located outside the solenoid and embedded in the steel flux-return yoke is used for the reconstruction and identification of muons up to |η| = 2.4. The silicon tracker measures charged particles within the pseudorapidity range |η| < 2.5. It consists of 1440 silicon pixel and 15 148 silicon strip detector modules. For nonisolated particles with 1 < $p_\mathrm{T}$ < 10 GeV and |η| < 1.4, the track resolution is typically 1.5% in $p_\mathrm{T}$, with a transverse impact parameter resolution of 25-90 µm [27]. The BPTX (Beam Pickup for Timing for the eXperiments) devices, used to trigger the detector readout, are located around the beam pipe at a distance of 175 m from the interaction point on either side, and provide precise information on the LHC bunch structure and the timing of the incoming beams. Events of interest are selected using a two-tiered trigger system [28]. The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a time interval of less than 4 µs.
The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage. A detailed description of the CMS detector, together with a definition of the coordinate system and kinematic variables, can be found in Ref. [29].

Data and simulated samples
This analysis uses pp data at $\sqrt{s} =$ 13 TeV collected at the LHC in 2015. The data were taken using a special LHC configuration providing an average of 0.1 interactions per bunch crossing, resulting in a very low probability of simultaneous pp collisions (pileup). The events were selected using minimum-bias (MB) and HM triggers, with samples corresponding to total integrated luminosities of 0.35 and 459 nb$^{-1}$, respectively. The different luminosities of the MB and HM samples are due to the different prescale factors applied to the respective triggers.
The MB events are selected using a trigger that requires at least one tower on either side of the HF to have a deposited energy above 1 GeV. This trigger mainly reduces effects from detector noise, beam backgrounds, and cosmic rays, while maintaining a high efficiency (greater than 99% for reconstructed track multiplicities above 10, as estimated with simulated samples) for events coming from inelastic proton-proton collisions.
To increase the number of HM events, three triggers with different multiplicity thresholds are used. At L1, these triggers require the scalar sum of the transverse energy in the ECAL, HCAL, and HF towers to be larger than 40, 45, or 55 GeV. At the HLT stage, events are selected by requiring track multiplicities larger than 60, 85, or 110, pre-selected by L1 at 40, 45, or 55 GeV, respectively. In the HLT, tracks are reconstructed using pixel detector information. The low pileup configuration is critical in ensuring a high purity of single pp collisions in the HM dataset.

Event and track selections
Events are selected offline by requiring all of the following conditions:
• At least one reconstructed vertex with a distance with respect to the center of the nominal interaction region of less than 15 cm in the longitudinal (along the beam) direction and less than 0.15 cm transverse to the beam.
• Beam-related background suppression by rejecting events for which less than 25% of all reconstructed tracks pass the high-purity selection as defined in Ref. [27].
• Coincidence of at least one tower with total energy above 3 GeV in both of the HF calorimeters, a criterion that selects primarily nondiffractive events.
The tracks used in the analysis are required to satisfy the following conditions:
• $|\sigma(p_\mathrm{T})/p_\mathrm{T}| < 0.1$, where $\sigma(p_\mathrm{T})$ is the uncertainty in the $p_\mathrm{T}$ measurement.
• $|d_{xy}/\sigma(d_{xy})| < 3.0$ and $|d_z/\sigma(d_z)| < 3.0$, where the transverse ($d_{xy}$) and longitudinal ($d_z$) distances are measured with respect to the primary vertex (defined as the one with the highest track multiplicity in the event), and $\sigma(d_{xy})$ and $\sigma(d_z)$ are the uncertainties in the $d_{xy}$ and $d_z$ measurements, respectively.
In addition, each track is required to have at least one valid hit in one of the pixel detector layers in order to reduce the contamination from processes such as electron pairs from photon conversion and tracks from decays of long-lived resonances.
When determining the reconstructed charged particle multiplicity of an event, slightly different track requirements than those listed above are imposed, to be consistent with the criteria used by the HLT to determine the event multiplicity. The quantity $N_\mathrm{trk}^\mathrm{offline}$ includes tracks within |η| < 2.4 with $p_\mathrm{T}$ > 0.4 GeV, selected without the requirement on the number of valid pixel detector hits. Variable bin widths are used, from 3 to 10 units of multiplicity, depending on the value of $N_\mathrm{trk}^\mathrm{offline}$. The corresponding particle-level multiplicity, $N_\mathrm{tracks}$, is corrected for acceptance and efficiency, as described below, and is used for comparisons with other experiments.
The probability of reconstructing multiple tracks from a single primary particle is of the order of $10^{-3}$ and is negligible compared to the other quantities. Using these estimates, correction factors for each track in a given $(\eta, p_\mathrm{T})$ bin are determined [37].


Bose-Einstein correlation analysis

Definitions of signal and background
For each event, the sample containing the BEC signal is formed by pairing all same-sign tracks (i.e., ++ or −−, denoted "SS") with |η| < 2.4 and $p_\mathrm{T}$ > 0.2 GeV. Opposite-sign pairs (i.e., +−, denoted "OS") within the same |η| and $p_\mathrm{T}$ ranges are used by two of the analysis methods to remove non-BEC contributions to the correlation functions. The distributions in terms of the relative momentum of the pair, $q_\mathrm{inv}$, are divided into bins of the reconstructed charged particle multiplicity, $N_\mathrm{trk}^\mathrm{offline}$, and of $k_\mathrm{T}$.
Although no particle identification is used, the contamination by particles other than charged pions is expected to be small, since pions are the dominant hadron species in the sample. For instance, the ratio of kaons to pions is about 12%, and that of protons to pions roughly 6%, in 7 TeV pp collisions [38]; for pp collisions at 13 TeV [39], the ratios are 11-12% and 5-6%, respectively, depending on the track multiplicity range. The unidentified kaons and protons contaminate the correlation function mainly in the low-$q_\mathrm{inv}$ region, where the BEC effect is stronger (the contribution of nonidentical particle pairs depletes the signal). The impact of this contamination on the results is covered by the systematic uncertainties.
Ideally, the background distribution (reference sample) should contain all the physics effects that are present in the signal distribution, except for the BECs. This reference sample can be constructed in several ways, most commonly by mixing tracks from different events, as in this analysis. The default reference sample (called η-mixing) is constructed by pairing SS tracks from different events using the same procedure as Refs. [3,4]. Two events are mixed only if they have similar reconstructed charged particle multiplicity in each of three pseudorapidity ranges: −2.4 < η < −0.8, −0.8 < η < 0.8, and 0.8 < η < 2.4. For determining this matching criterion, a weight is assigned to each track of the event, depending on the η range in which it occurs, and these weights are summed for each event. Then, the events are ordered according to this sum and the mixing is done by selecting two adjacent events in the ordered list and pairing each track in one event with all of the tracks in the other one. After being combined in this way, both events are discarded and not included in other pairings.
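The η-mixing procedure described above can be sketched as follows. The specific per-track weights are not given in the text, so the powers-of-ten choice below is an assumption made only so that the ordering sum keeps the multiplicities of the three η ranges separated:

```python
from itertools import product

# Pseudorapidity ranges used for the event-matching criterion
ETA_EDGES = [(-2.4, -0.8), (-0.8, 0.8), (0.8, 2.4)]

def eta_weight(eta):
    """Per-track weight by eta range (illustrative powers of 10, so the
    summed weight encodes the multiplicity in each range separately)."""
    for w, (lo, hi) in zip((1, 100, 10000), ETA_EDGES):
        if lo <= eta < hi:
            return w
    return 0

def mix_events(events):
    """events: list of events, each a list of tracks with an 'eta' key.
    Orders events by their summed track weights, pairs adjacent events in
    the ordered list, and combines every track of one event with every
    track of the other. Both events are then discarded, so each event
    enters only one mixed pairing."""
    ordered = sorted(events, key=lambda ev: sum(eta_weight(t['eta']) for t in ev))
    mixed_pairs = []
    for ev_a, ev_b in zip(ordered[0::2], ordered[1::2]):
        mixed_pairs.extend(product(ev_a, ev_b))
    return mixed_pairs
```

Sorting by the weighted sum is what makes "adjacent" events have similar multiplicities in each η range, as required by the matching criterion.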
After choosing the reference sample, a correlation function is defined as a single ratio (SR), with the signal distribution, i.e., the $q_\mathrm{inv}$ distribution of pairs from the same event, as the numerator and the reference distribution as the denominator:

$$C_2(q_\mathrm{inv}) = \frac{N_\mathrm{ref}}{N_\mathrm{sig}}\,\frac{\mathrm{d}N_\mathrm{sig}/\mathrm{d}q_\mathrm{inv}}{\mathrm{d}N_\mathrm{ref}/\mathrm{d}q_\mathrm{inv}}, \quad (1)$$

where $C_2(q_\mathrm{inv})$ refers to the two-particle correlation defined in Eq. (1) by an SR, and $N_\mathrm{sig}$ and $N_\mathrm{ref}$ correspond to the numbers of pairs estimated from the integrals of the signal and reference distributions, respectively. Refinements of this definition are presented in Section 5.4 and in Appendix A.
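In binned form, the single ratio amounts to a bin-by-bin division normalized by the total pair counts, so that the ratio approaches unity where signal and reference have the same shape. A sketch (representing histograms as plain lists is illustrative):

```python
def single_ratio(sig_hist, ref_hist):
    """Single-ratio correlation function from binned q_inv distributions:
    C2[i] = (N_ref / N_sig) * sig_hist[i] / ref_hist[i],
    where N_sig and N_ref are the total pair counts (histogram integrals).
    Empty reference bins are set to zero to avoid division errors."""
    n_sig = sum(sig_hist)
    n_ref = sum(ref_hist)
    return [
        (n_ref / n_sig) * (s / r) if r > 0 else 0.0
        for s, r in zip(sig_hist, ref_hist)
    ]
```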

Coulomb interactions and correction
The correlation functions include the effect of the quantum statistics obeyed by the pair of identical particles, but are also sensitive to final-state interactions between the charged hadrons. The Coulomb final-state interaction [40] affects the two-particle relative momentum distribution in different ways for SS or OS pairs, creating a depletion (enhancement) in the low-$q_\mathrm{inv}$ range of the correlation function caused by repulsion (attraction) for SS (OS) pairs. The effect of the mutual Coulomb interaction is incorporated in the factor K, the squared average of the relative wave function $\Psi$, as $K(q_\mathrm{inv}) = \int \mathrm{d}^3r\, f(\vec{r})\, |\Psi(\vec{k}, \vec{r})|^2$, where $f(\vec{r})$ is the source intensity of emitted particles, with $\vec{r}$ and $\vec{k}$ representing the pair relative separation and relative momentum, respectively [5]. For point-like sources, $f(\vec{r}) = \delta(\vec{r})$ and the integral gives the Gamow factor, which for SS and OS pairs is given by

$$K_\mathrm{SS}(q_\mathrm{inv}) = \frac{2\pi\zeta}{e^{2\pi\zeta} - 1}, \qquad K_\mathrm{OS}(q_\mathrm{inv}) = \frac{2\pi\zeta}{1 - e^{-2\pi\zeta}}, \quad (2)$$

where $\zeta = \alpha m / q_\mathrm{inv}$ is the Landau parameter, $\alpha$ is the fine-structure constant, and $m$ is the particle mass [41].
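The Gamow factors follow directly from the Landau parameter $\zeta = \alpha m / q_\mathrm{inv}$; a minimal sketch, with the correction applied by dividing the measured correlation value by K:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant
M_PI = 0.13957            # charged-pion mass in GeV

def gamow_ss(q_inv, m=M_PI):
    """Gamow factor for same-sign pairs: repulsion depletes low q_inv (K < 1)."""
    x = 2.0 * math.pi * ALPHA * m / q_inv  # 2*pi*zeta
    return x / (math.exp(x) - 1.0)

def gamow_os(q_inv, m=M_PI):
    """Gamow factor for opposite-sign pairs: attraction enhances low q_inv (K > 1)."""
    x = 2.0 * math.pi * ALPHA * m / q_inv
    return x / (1.0 - math.exp(-x))

def coulomb_corrected(c2_measured, q_inv, same_sign=True):
    """Divide out the point-source Coulomb effect from a measured C2 value."""
    k = gamow_ss(q_inv) if same_sign else gamow_os(q_inv)
    return c2_measured / k
```

Note that $K_\mathrm{OS} = K_\mathrm{SS}\, e^{2\pi\zeta}$, and both factors tend to unity at large $q_\mathrm{inv}$, where the Coulomb interaction is negligible.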
In a previous CMS analysis [5], no significant differences in the final results were observed in the case of pions when correcting with the Gamow factor or with the full estimate derived for extended (as opposed to point-like) sources [40, [42][43][44]. Therefore, in this analysis the corrections for the final state Coulomb interaction are performed using the Gamow factor.

Fitting the correlation function
Ideally, as in the case of static systems, the two-particle correlation function can be related to the Fourier transform of the emitting source distribution at decoupling. Because of their simplicity, parameterizations of the Gaussian type have often been used to relate the source distribution and the measured correlation function. In Refs. [3-5], however, Gaussian fits were found to be much worse than those with an exponential function or the Lévy class of parameterizations.
In this analysis, the fits to the data correlation functions employ symmetric Lévy stable distributions:

$$C_{2,\mathrm{BE}}(q_\mathrm{inv}) = C \left[ 1 + \lambda\, e^{-(q_\mathrm{inv} R_\mathrm{inv})^a} \right] (1 + \epsilon\, q_\mathrm{inv}), \quad (3)$$

where $C_{2,\mathrm{BE}}(q_\mathrm{inv})$ refers to the two-particle BEC, C is a constant, and $R_\mathrm{inv}$ and $\lambda$ are the (BEC) radius and intercept parameters, respectively. The exponent a is the Lévy index of stability, satisfying the inequality 0 < a ≤ 2. If treated as a free parameter when fitting the correlation functions, this exponent usually returns a number between the value characterizing an exponential function (a = 1) and that for a Gaussian distribution (a = 2). More details can be found in Ref. [45]. The additional term, linear in $q_\mathrm{inv}$ (proportional to the constant $\epsilon$), is introduced to account for long-range correlations that may be absent in the reference sample. The fitted values of $\epsilon$ depend on the multiplicity range: negligible for high multiplicity bins (above 100 tracks) and reaching $\sim$0.2 GeV$^{-1}$ for low multiplicity bins (below 20 tracks).
The Lévy distribution with a as a free parameter returns the best quality fits, but it is not adopted for extracting the results because it does not allow a direct interpretation of the shape of the source distribution by means of a Fourier transform. Fitting with a pure Gaussian distribution (a = 2) returns very poor quality fits for all of the multiplicity and $k_\mathrm{T}$ bins. Therefore, an exponential form (with a fixed at 1.0) is used as the default fit function. For this parameterization, the functional form of the correlation function, $e^{-q_\mathrm{inv} R_\mathrm{inv}}$, is the Fourier transform of a source function $\rho(t, x)$ corresponding to a Cauchy-Lorentz distribution. A Laguerre-extended exponential fit function [46] returns a $\chi^2/\mathrm{dof}$ of the order of unity (where dof is the number of degrees of freedom) and yields the same $R_\mathrm{inv}$ values as the simple exponential case, with the caveat that the resulting BEC fits depend on additional parameters from the Laguerre polynomial expansion.
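The Lévy fit form, including its exponential (a = 1) and Gaussian (a = 2) special cases, can be written compactly. The sketch below uses illustrative parameter values (not fitted results) and an explicit ħc factor to make the exponent dimensionless when $q_\mathrm{inv}$ is in GeV and $R_\mathrm{inv}$ in fm:

```python
import math

HBARC = 0.1973  # GeV*fm, converts q_inv * R_inv to a dimensionless argument

def c2_levy(q_inv, C=1.0, lam=0.6, r_inv=2.0, a=1.0, eps=0.0):
    """Levy-type BEC fit function: C * [1 + lam * exp(-(q*R)^a)] * (1 + eps*q).
    a=1 gives the default exponential form, a=2 a Gaussian. The linear term
    (eps) absorbs long-range correlations absent from the reference sample.
    All parameter defaults here are illustrative."""
    bec = lam * math.exp(-((q_inv * r_inv / HBARC) ** a))
    return C * (1.0 + bec) * (1.0 + eps * q_inv)
```

At $q_\mathrm{inv} = 0$ (and $\epsilon = 0$) the function reaches its intercept $C(1+\lambda)$, and it falls toward the baseline C as $q_\mathrm{inv}$ grows, faster for larger $R_\mathrm{inv}$.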
A χ² test is used in the fitting procedure. A negative log-likelihood ratio, assuming that the bin contents of the signal and reference sample histograms follow Poisson distributions (as implemented in Refs. [47,48]), returned results consistent with the χ² approach.

Analysis techniques
As discussed in the Introduction, both low and high multiplicity correlation functions in pp collisions are particularly sensitive to the influence of non-BEC effects such as resonance decays and jets. To ensure an accurate accounting of these background contributions, and especially to investigate any variability of the final results, the background removal is performed with three techniques that have different degrees of dependence on simulations. Since two of these methods were used in previous CMS BEC studies [3-5] and are described in detail in Ref. [5], they are only briefly summarized here. Additional details can be found in Appendix A.
For the double-ratio (DR) technique [3-5], the numerator is an SR as in Eq. (1) applied to the data, and the denominator is an SR computed similarly with MC events simulated without BEC effects. In each case, the reference samples for data and simulation are constructed in the same way. The DR correlation function is fitted using Eq. (3) to obtain $R_\mathrm{inv}$ and $\lambda$. This procedure reduces any bias due to the construction of the reference sample, with the advantage of directly removing non-BEC effects remaining in the data SR. However, it requires well-tuned MC simulations that describe the overall behavior of the data.
The cluster subtraction (CS) technique [5] uses only SRs from data. Correlation functions for OS pairs are used to parameterize the contamination from resonances and jet fragmentation (called the "cluster contribution"), which is still present in the correlation function [17,19,49]. The amount of this contribution present in the SS pairs is evaluated by using the same shape found for OS pairs and varying the overall scale to fit the data (details are given in Ref. [5]). To find $R_\mathrm{inv}$ and $\lambda$, the correlation function is fitted with a function combining the cluster and Bose-Einstein contributions, with the latter parameterized using Eq. (3).
The hybrid CS (HCS) method is similar to a method used for pPb data by the ATLAS experiment [48]; it has less dependence on MC simulations than the DR method and fewer fit parameters to be adjusted to data than the CS method. In contrast to the CS procedure, which is fully based on control samples in data, the HCS technique uses MC simulations to convert the fit parameters obtained from OS single ratios into those for SS pairs.
The first step is to fit the SRs constructed from MC simulations, separately for SS and OS pairs in each $N_\mathrm{trk}^\mathrm{offline}$ and $k_\mathrm{T}$ bin, with the Gaussian form of Eq. (4), where $B$ and $\sigma_B$ are fit parameters describing the amplitude and the width of the peak near $q_\mathrm{inv} = 0$, and $\alpha_B$ defines the overall shape of the function. Coulomb interactions are not included in the simulation. For OS pairs, the following regions in $q_\mathrm{inv}$ are excluded from the fitting process because of contamination from resonance decays: 0.2-0.3, 0.4-0.9, and 0.95-1.2 GeV. In addition, the range $q_\mathrm{inv} < 0.1$ GeV is excluded because this region has a large contribution from three-body decays. A Gaussian shape ($\alpha_B = 2$) provides a reasonable overall description of the distributions in the $N_\mathrm{trk}^\mathrm{offline}$ and $k_\mathrm{T}$ bins. The $\chi^2/\mathrm{dof}$ values are, in general, not compatible with unity. This is expected because of the distorted shape of the distributions, which depend on the MC simulation. Examples of SRs using PYTHIA 6 (Z2* tune) are shown in Fig. 1. Two conversion functions, Eqs. (5) and (6), relate the amplitudes and the widths of the fits to SS (++ and −−) MC correlations to those found for OS (+−) ones. The parameters found when fitting the conversion function for the widths, ρ = 0.82 ± 0.04 and β = 0.077 ± 0.013, are independent of $k_\mathrm{T}$ for PYTHIA 6 (Z2* tune), whereas the parameters relating the amplitudes, µ and ν, are functions of $k_\mathrm{T}$. In Fig. 2, the relations between the fit parameters in different $k_\mathrm{T}$ and $N_\mathrm{trk}^\mathrm{offline}$ bins are shown for MB and HM events (for a given $k_\mathrm{T}$ range, each point in Fig. 2 represents an $N_\mathrm{trk}^\mathrm{offline}$ bin).
Similarly, SRs for OS pairs in data are constructed and fitted with the function given by Eq. (4), yielding the parameters $B$ and $\sigma_B$ for OS data. The conversion functions based on simulation are then used to calculate $B$ and $\sigma_B$ for SS pairs in data. The resulting estimated background, given by $\Omega(q_\mathrm{inv})$ in Eq. (4), is included in Eq. (7) to fit the SS data, where $C_{2,\mathrm{BE}}(q_\mathrm{inv})$ describes the BEC component as in Eq. (3) (with a = 1). The final fit function thus has two components: one with fixed parameters describing the non-BEC effects and another with the fitted BEC parameters. Examples of fits using this combined function are shown in Fig. 3. The shape of the correlation function is not trivial and cannot be described by a simple function, since it is distorted by many contributions, such as resonance decays and jets, whose individual magnitudes are not known. Furthermore, only statistical uncertainties, which are in general smaller than 0.5% depending on the bin, are considered when fitting. It is therefore not expected that fitting with a simple exponential function results in a $\chi^2/\mathrm{dof}$ near unity.
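The structure of the HCS fit can be sketched as follows. The exact functional forms of Eqs. (4)-(7) are not reproduced in the text above, so the Gaussian-like background $\Omega$ and the linear OS-to-SS conversion below are assumptions for illustration only; the coefficients are placeholders, not measured values:

```python
import math

def omega(q_inv, B, sigma_B, alpha_B=2.0):
    """Assumed cluster-background shape of Eq. (4): a unit baseline plus a
    peak near q_inv = 0 with amplitude B, width sigma_B, and shape alpha_B
    (alpha_B = 2 gives a Gaussian)."""
    return 1.0 + B * math.exp(-((q_inv / sigma_B) ** alpha_B))

def convert_os_to_ss(B_os, sigma_os, mu, nu, rho, beta):
    """Assumed linear conversion functions mapping OS fit parameters to SS
    ones; the coefficients (mu, nu for amplitudes; rho, beta for widths)
    are taken from fits to MC simulations."""
    return mu + nu * B_os, rho * sigma_os + beta

def c2_hcs(q_inv, B_ss, sigma_ss, C=1.0, lam=0.6, r_inv=2.0):
    """Combined HCS fit: fixed background Omega times the exponential
    (a = 1) BEC component; BEC parameters are illustrative."""
    HBARC = 0.1973  # GeV*fm
    bec = C * (1.0 + lam * math.exp(-q_inv * r_inv / HBARC))
    return omega(q_inv, B_ss, sigma_ss) * bec
```

The key design point is that the background parameters entering `c2_hcs` are fixed from the OS data plus the MC-derived conversion, so only the BEC parameters ($\lambda$, $R_\mathrm{inv}$, C) are free in the final fit.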

Systematic uncertainties
The following sources of systematic uncertainty are investigated: the bias from the particular choice of reference sample with respect to all other constructed possibilities, the bias from the MC simulation tune adopted in the analysis, the influence of the track selection and relevant corrections, the interference of tracks from pileup collisions, the bias from the z- and xy-vertex positions, the efficiency of the HM triggers, and the effect of the Coulomb correction. The most important biases are those due to the reference samples and the MC simulations. The track selection leads to a nonnegligible effect, mainly in higher-$k_\mathrm{T}$ bins, where larger contamination from jets is expected. In addition, the Coulomb correction makes a significant contribution. To compute the systematic uncertainties associated with these nonnegligible effects, samples of measurements of $R_\mathrm{inv}$ and $\lambda$ are generated by varying the corresponding source of bias in bins of $N_\mathrm{trk}^\mathrm{offline}$ and $k_\mathrm{T}$, and the root-mean-square (RMS) spread of each sample is taken as the systematic uncertainty in that bin. The other potential sources of bias listed above give maximal deviations of the order of the statistical uncertainties (∼1-5%) and are not included in the estimate of the total systematic uncertainty.
To estimate the systematic uncertainties associated with the construction of the reference sample, additional samples are constructed with alternative techniques. Within the category of mixed events, tracks are randomly combined from samples of 25, 40, or 100 events, all of which are in the same range of $N_\mathrm{trk}^\mathrm{offline}$. Reference samples are also constructed with tracks from the same event used to form the signal sample, but making pair combinations such that only one of the two tracks has its three-momentum reflected with respect to the origin, i.e., $(p_x, p_y, p_z) \to (-p_x, -p_y, -p_z)$. Another case corresponds to rotating one of the two tracks by 180° in the plane transverse to the beam, i.e., $(p_x, p_y, p_z) \to (-p_x, -p_y, p_z)$.
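The two same-event track transformations used for these alternative reference samples can be expressed as:

```python
def reflect(p):
    """Three-momentum reflection through the origin:
    (px, py, pz) -> (-px, -py, -pz)."""
    px, py, pz = p
    return (-px, -py, -pz)

def rotate_pi_transverse(p):
    """180-degree rotation in the plane transverse to the beam:
    (px, py, pz) -> (-px, -py, pz)."""
    px, py, pz = p
    return (-px, -py, pz)
```

Both transformations preserve the single-track $p_\mathrm{T}$ (and the reflection also flips η), so the reference pairs populate the same kinematic region as the signal while destroying the quantum-statistical correlation.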
The uncertainties related to the reference samples and to the MC samples are estimated by associating these two sources and repeating the measurements eighteen times: (6 reference samples) × (3 MC samples). For the reference samples, the default η-mixing method and the five alternatives described above are used. For the MC simulation, samples are generated using PYTHIA 6 (Z2* tune), PYTHIA 8 (CUETP8M1 tune for MB and 4C tune for HM), and EPOS LHC.
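The RMS-spread prescription over the 6 × 3 variations can be sketched as follows; the `measure` callable is a placeholder standing in for the full analysis chain that returns $R_\mathrm{inv}$ (or $\lambda$) for one combination:

```python
import math
from itertools import product

def rms_spread(values):
    """Root-mean-square spread of a sample of measurements about its mean,
    used as the systematic uncertainty for the corresponding source."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def reference_mc_uncertainty(measure, ref_samples, mc_samples):
    """Repeat the measurement for every (reference sample, MC sample)
    combination -- 6 x 3 = 18 variations in the analysis -- and take the
    RMS spread of the results as the uncertainty."""
    values = [measure(ref, mc) for ref, mc in product(ref_samples, mc_samples)]
    return rms_spread(values)
```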
For the track selection, five alternative configurations are considered in addition to the default definition in Section 4, changing combinations of looser and tighter criteria on the track variables. These six configurations are used to build a sample of measurements for different track selections.

[Figure 3 caption: correlation functions in bins of $N_\mathrm{trk}^\mathrm{offline}$ and $k_\mathrm{T}$, with their respective fits. The label "(Exp. × Gauss.) fit" refers to the same-sign data and is given by Eq. (7); "Gaussian fit" corresponds to Eq. (4) applied to opposite-sign data; and "Background" is the component of Eq. (7) found from the Gaussian fit using Eqs. (5) and (6) to convert the fit parameters. Coulomb corrections are accounted for using the Gamow factor. The error bars represent statistical uncertainties and in most cases are smaller than the marker size.]
For the Coulomb correction, a procedure similar to that of Ref. [3] is adopted, in which the Gamow factor is multiplied by a strength parameter κ. Fitting the correlation function with κ free yields values consistent with unity within a statistical uncertainty of ±15%. A conservative uncertainty of 15% in the Gamow factor is then propagated by repeating the measurements with the magnitude of the factor varied up and down by this amount. The uncertainties from the three sources listed above are computed independently and then added in quadrature to obtain the total systematic uncertainty. The largest contribution originates from the reference samples and the MC tune, reaching values of ∼20% in the final measurement. The other sources are always smaller than ∼6%. Table 1 shows the ranges of systematic uncertainties (variation with $N_\mathrm{trk}^\mathrm{offline}$) for each $k_\mathrm{T}$ bin and integrated over $k_\mathrm{T}$. Lower multiplicities and higher $k_\mathrm{T}$ ranges have larger systematic uncertainties because the contamination from the jet fragmentation background is higher in those regions.
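Combining the independent uncertainty components in quadrature is then straightforward:

```python
import math

def total_systematic(*components):
    """Combine independent systematic uncertainty components in quadrature:
    sigma_total = sqrt(sum of squared components)."""
    return math.sqrt(sum(c * c for c in components))
```

For example, combining relative uncertainties of 20%, 6%, and 6% yields a total of about 22%, showing how the reference-sample/MC-tune term dominates.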

Results
The values of $R_\mathrm{inv}$ and $\lambda$ obtained with each of the three methods as functions of $N_\mathrm{tracks}$ and $k_\mathrm{T}$ are shown in Fig. 4, where $N_\mathrm{tracks}$ is the average multiplicity at particle level, corrected for acceptance and efficiency. The top panel presents the results for $R_\mathrm{inv}$ and $\lambda$ versus $N_\mathrm{tracks}$ for pair transverse momenta integrated over the range 0 < $k_\mathrm{T}$ < 1 GeV. The radius fit parameter $R_\mathrm{inv}$ increases as a function of multiplicity, showing a change in slope around $N_\mathrm{tracks}$ ∼ 20-30 and a tendency to saturate at higher multiplicities. For the DR and HCS methods, the intercept parameter $\lambda$ rapidly decreases with increasing multiplicity in the very small $N_\mathrm{tracks}$ region, whereas for multiplicities above ∼10 it is almost constant with increasing $N_\mathrm{tracks}$. The systematic uncertainties are larger for $\lambda$ using the CS method, with the fitted values of $\lambda$ fluctuating and decreasing at higher multiplicities. This happens because $\lambda$ is very sensitive to the modeling of the background (non-BEC effects), which leads to larger uncertainties in its determination.
In the bottom panel of Fig. 4, the fit parameters $R_\mathrm{inv}$ and $\lambda$ are shown as functions of $k_\mathrm{T}$ in two multiplicity bins, MB ($N_\mathrm{trk}^\mathrm{offline}$ < 80) and HM ($N_\mathrm{trk}^\mathrm{offline}$ ≥ 80), where the average is performed in each variable-width $k_\mathrm{T}$ bin. The length of homogeneity $R_\mathrm{inv}$ tends to decrease with increasing $k_\mathrm{T}$, more so at lower multiplicity. This behavior is compatible with an emitting source that was expanding prior to decoupling [26,50-52]. The correlation intensity $\lambda$ also decreases with increasing $k_\mathrm{T}$, with a more pronounced slope than that of $R_\mathrm{inv}$.
The increase of $R_\mathrm{inv}$ with the event multiplicity and its decrease with the average pair momentum in pp collisions were predicted in Ref. [50]. These predictions were based on the assumption that a quark-gluon plasma (QGP) [53-56] could be formed in high energy collisions of small systems in events with multiplicities similar to those investigated here. In the model, high multiplicities are related to large fireball masses formed in the collision, corresponding to a one-dimensional expansion based on Khalatnikov's solution [57] of the Landau hydrodynamical model [58]. These model predictions were also compared with BEC data for pions [6] and kaons [7] measured in pp, $\mathrm{p\bar{p}}$, and $\alpha\alpha$ collisions at the CERN ISR, and described the overall behavior of the correlation functions more closely than the Gaussian fits adopted in the analyses of those data.
As shown in Fig. 4, the three methods produce results for R_inv that are compatible within the experimental uncertainties. Compatibility tests among the three methods, based on the variations of the measured values, assume the experimental uncertainties to be either fully uncorrelated or fully correlated. In the fully correlated, most conservative case, the results agree within two standard deviations for most of the bins investigated. For the CS method, larger deviations are observed for the λ parameter, and the associated systematic uncertainties are also larger. This parameter is particularly sensitive to the procedure adopted to remove the non-Bose-Einstein effects present in the signal, as observed in Ref. [5]. Therefore, when comparing with other energies and with theoretical models, only the values found using the HCS method are shown. This technique is less sensitive to the MC tune than the DR method, mainly in the HM region, and has smaller systematic uncertainties than the CS method. The ratio of the RMS to the mean of the differences among the values of the radius fit parameters obtained with the three methods is adopted as the relative uncertainty due to the variation between techniques (here called the "intramethod variation").
In Fig. 5, the results are compared with the CMS and ATLAS measurements at 7 TeV. In the ATLAS measurement, tracks with p_T > 0.1 GeV are included in the multiplicity, and the correction to particle-level multiplicity is done using an unfolding procedure, as described in Ref. [15]. To obtain consistent multiplicities, the N_tracks values for CMS data in this figure are corrected by extrapolating down to p_T = 0.1 GeV. In addition, the OS reference sample adopted by ATLAS causes distortions in the SS correlation functions due to resonance contamination. To circumvent this problem, the ATLAS correlation functions are fitted excluding ranges around the resonance peaks. Therefore, to establish a more consistent comparison, the analysis of the CMS data was repeated excluding those regions in the fits to both the OS and SS correlation functions. The R_inv values for pp collisions at 13 TeV are compatible with those obtained by both CMS and ATLAS at 7 TeV over the entire multiplicity range investigated.
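The "intramethod variation" described above can be sketched numerically. The snippet below is an illustration of one plausible reading of the prescription (RMS of the pairwise differences among the three methods' radii, divided by their mean); the function name and the input values are hypothetical, not CMS results.

```python
import numpy as np
from itertools import combinations

def intramethod_rel_unc(radii):
    """Relative uncertainty from the spread among the three analysis
    methods: RMS of the pairwise differences of the fitted radii,
    divided by their mean (one plausible reading of the text)."""
    diffs = [a - b for a, b in combinations(radii, 2)]
    rms = np.sqrt(np.mean(np.square(diffs)))
    return rms / np.mean(radii)

# Hypothetical R_inv values (fm) from the DR, CS, and HCS methods
# in one (N_trk, k_T) bin -- illustrative numbers only.
print(intramethod_rel_unc([2.10, 2.25, 2.18]))
```

Identical inputs give zero spread, as expected, and the result scales with the dispersion of the three values.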
In Fig. 6, the linear dependence of R_inv on N_tracks^{1/3} reflects the growth of the number of produced particles with the system volume (or, equivalently, N_tracks ∝ R_inv^3). This dependence has been investigated for a range of energies and colliding systems, with the result that R_inv is independent of collision energy when compared at the same multiplicity. Radii can also be studied as a function of (dN/dη)^{1/3} to investigate the dependence of the final-state geometry on the multiplicity density at freeze-out. Values of R_inv are plotted as a function of N_tracks in Fig. 5. Data for R_inv and the average particle transverse momentum p_T versus multiplicity were investigated in Ref. [59] to deduce approximate equations of state from experimental measurements in pp and p̄p collisions and to search for possible signatures of the phase transition from hadrons to the QGP. A phase transition would cause a change in slope for both observables in the same region of multiplicity per unit pseudorapidity (dN_tracks/dη). In Ref. [59], the authors claim that their compilation of R_inv values as a function of dN_tracks/dη for data from several experiments at different center-of-mass energies shows a common behaviour that is independent of the energy. In particular, for the R_inv values obtained by CMS [3] and ALICE [14], they claim that a linear function in (dN_tracks/dη)^{1/3} for dN_tracks/dη > 7.5, matched to a fifth-degree polynomial at smaller dN_tracks/dη values, fits the data better than a single function of (dN_tracks/dη)^{1/3} over the entire range. The dN_tracks/dη values in Ref. [59] correspond to spectra extrapolated to p_T = 0 and therefore correspond more closely to the right panel of Fig. 5. For the CMS acceptance, a value of dN_tracks/dη ∼ 7.5 corresponds to N_tracks^{(p_T>0.1)} ∼ 35 and N_tracks ∼ 23. For comparison with Fig. 6, N_tracks ∼ 23 is equivalent to N_tracks^{1/3} ∼ 2.8 and (dN_tracks/dη)^{1/3} ∼ 1.7. This overall qualitative behavior of R_inv vs. N_tracks or dN_tracks/dη seems compatible with the present results shown in Figs. 5 and 6, but the value of R_inv around which the data could change slope is not evident, since it also depends on the lowest p_T value considered in the data.
Although theoretical predictions based on hydrodynamics are not yet available for pp collisions at 13 TeV, expectations for qualitative trends can be found in the literature. A framework based on event-by-event (3+1)-dimensional viscous hydrodynamics found that the three components of the radius fit parameters grow continuously with N_tracks^{1/3} for pp collisions [51]. Calculations using a hydrokinetic model also show a linear growth of the lengths of homogeneity with N_tracks^{1/3} [52]. Such a continuous increase is consistent with the results shown in Fig. 6 and was also observed for different collision systems (CuCu, AuAu, PbPb, and pp) and energies (ranging from 62.4 GeV to 7 TeV) in Fig. 1 of Ref. [51]. To illustrate this expectation from hydrodynamics, a fit with a single linear function over the entire N_tracks^{1/3} range is shown in the left panel of Fig. 6.
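A single linear fit of R_inv versus N_tracks^{1/3}, as used to illustrate the hydrodynamic expectation, can be sketched as follows. The data points and uncertainties below are invented for illustration, not the measured CMS values.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, p0, p1):
    # R_inv = p0 + p1 * N_tracks^(1/3): the single linear form fitted
    # over the entire multiplicity range
    return p0 + p1 * x

# Hypothetical (N_tracks^(1/3), R_inv [fm]) points with uncertainties,
# chosen only to exercise the fit.
x = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
r = np.array([1.0, 1.4, 1.8, 2.2, 2.6, 3.0])
sigma = np.full_like(r, 0.1)

# Weighted least-squares fit; popt holds (intercept, slope)
popt, pcov = curve_fit(linear, x, r, sigma=sigma, absolute_sigma=True)
print(popt)
```

With `absolute_sigma=True` the covariance matrix `pcov` reflects the quoted point uncertainties rather than being rescaled by the fit χ².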
The color glass condensate (CGC) effective theory predicts an increase of the interaction radius (resulting from the initial overlap of the two protons) as a function of the rapidity density dN/dy [60]. This dependence is parametrized by a third-order polynomial in terms of x = (dN/dy)^{1/3} for x < 3.4; beyond this point, the radius tends to a constant value, the so-called "saturation" radius. In the case of pp collisions at 7 TeV, this can be expressed by [60]

R(x) = a + b x + c x^2 + d x^3 for x < 3.4,   R(x) = e for x ≥ 3.4,   (8)

with parameter values a = 0.387, b = 0.0335, c = 0.274, d = −0.0542, and e = 1.538 (in fm). According to Ref. [60], the minimum multiplicity at which the computation in Eq. (8) is reliable is around dN/dη ∼ 5 in cases where no p_T cut is applied to the data. This condition is better illustrated by the right plot in Fig. 5; considering that the η coverage of CMS and ATLAS is about 4.8, this minimum value corresponds to N_tracks ∼ 24. This prediction for the radius behavior is based on a calculation relating particle multiplicity to a saturation scale, using computations of the interaction radius determined from the CGC [61]. The parameterization given in Eq. (8) is compared with the results from the HCS method in the right panel of Fig. 6. The predictions from the CGC are well below the data, and the predicted saturation radius [60,61] is almost half the size of that seen in the data. In Ref. [61], no explicit calculation of the BEC radii (corresponding to emission after the last interaction) is performed; instead, only the initial size of the pp interaction region and the initial energy density are used, without any fluid-dynamic evolution. Since the CGC calculation does not include the evolution of the system, it is not unexpected that it underestimates the measured radii.
However, the CGC dependence of the radii on particle density, consisting of a rise followed by saturation, is similar to the shape seen in the data. To illustrate this behavior, a linear plus constant function is fitted to the data using statistical uncertainties only (see the right panel of Fig. 6).
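The CGC parameterization of Eq. (8) can be written out directly with the quoted parameter values; the piecewise form below is this sketch's reading of the text (polynomial below x = 3.4, constant saturation radius above it).

```python
def cgc_radius(x):
    """CGC-inspired interaction radius (fm) as a function of
    x = (dN/dy)^(1/3): third-order polynomial for x < 3.4, constant
    saturation radius e beyond (parameter values from Ref. [60])."""
    a, b, c, d, e = 0.387, 0.0335, 0.274, -0.0542, 1.538
    if x < 3.4:
        return a + b * x + c * x**2 + d * x**3
    return e

# The polynomial joins the saturation value almost exactly at x = 3.4,
# a useful consistency check on the quoted parameters.
print(cgc_radius(3.4), cgc_radius(5.0))
```

Evaluating the polynomial at x = 3.4 gives 0.387 + 0.114 + 3.167 − 2.130 ≈ 1.538 fm, i.e., the quoted saturation radius, so the parametrization is continuous at the matching point.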
The tendency toward a constant value of R_inv at higher N_tracks was suggested by ATLAS in Ref. [15], based on their data shown in the right panel of Fig. 5, although their uncertainties were large. In their case, this saturating behavior is considered to set in at N_tracks^{(p_T>0.1)} ∼ 55, a much smaller value (less than 1/3) than that suggested by the CGC calculations, where it is claimed that the saturation radius would be reached for (dN/dη)^{1/3} ∼ 3.4, i.e., N_tracks ∼ 190 charged particles. The new CMS data, with their smaller uncertainties, appear to be more consistent with a saturation in this higher multiplicity region.
Finally, in hydrodynamic models, valuable information about the collective transverse expansion of the system (transverse flow) can be obtained from the slope of a linear fit to 1/R_inv^2 versus the transverse mass m_T. In addition, the value of 1/R_inv^2 at m_T = 0 reflects the final-state geometrical size of the source. Figure 7 shows 1/R_inv^2 versus m_T for a variety of multiplicity ranges. The left plot shows that the expansion in the low-multiplicity region is faster than in the HM region. The corresponding geometrical sizes are R_G^MB = 5.1 ± 0.4 (stat) fm and R_G^HM = 4.2 ± 1.1 (stat) fm for the low- and high-multiplicity regions, respectively. The right panel of Fig. 7 shows the variation of 1/R_inv^2 with m_T in finer multiplicity ranges, showing in more detail the evolution of the slope: the collective flow decreases with increasing multiplicity, but this trend seems to saturate around a reconstructed multiplicity of ∼80.
This dependence, R_inv^{-2} ∝ a + b m_T (observed universally in nucleus-nucleus collisions, for different colliding system sizes, collision energies, and centralities), implies that the radii are driven by the initial geometry as well as by the transverse flow (in a 3D analysis, also by the longitudinal flow). The present data suggest that a similar phenomenological modeling is appropriate for pp collisions at LHC energies. Theoretical models indicate that the intercept of the linear fit to R_inv^{-2} versus m_T reflects the geometrical size at freeze-out, whereas the slope gives the square of a Hubble-type flow parameter [62] divided by the freeze-out temperature, i.e., H^2/T_f [26,63]. Assuming T_f ∼ 150 MeV for the freeze-out temperature, the values of the Hubble-type parameter are estimated as H_HM = 0.17 ± 0.04 (stat) c/fm and H_MB = 0.298 ± 0.004 (stat) c/fm in the HM and MB regions, respectively.
These values are comparable to those estimated for RHIC AuAu collisions, i.e., ∼0.1-0.2 c/fm [12,64-66], and can also be converted into a flow velocity by multiplying by the measured size. Finally, it should be noted that the extracted Hubble-type parameter is larger in the MB case than in the HM case. These findings are again qualitatively consistent with the measurements in nucleus-nucleus collisions, where the slope parameter of R_inv^{-2} vs. m_T usually shows larger Hubble-type parameters for peripheral than for central collisions. These observations suggest yet another similarity between HM events in high energy pp collisions and relativistic nucleus-nucleus collisions.
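The extraction of the Hubble-type parameter from the fitted slope amounts to simple arithmetic: with the slope b of R_inv^{-2} vs. m_T in fm^-2/GeV and T_f in GeV, the relation b = H^2/T_f gives H = sqrt(b T_f) in units of c/fm (the GeV units cancel). The slope value used below is back-derived from the quoted H_MB for a consistency check, not a measured number.

```python
import math

def hubble_parameter(slope_fm2_per_gev, T_f_gev):
    """Hubble-type flow parameter H = sqrt(slope * T_f), in c/fm,
    from the slope of 1/R_inv^2 [fm^-2] vs m_T [GeV] and the
    freeze-out temperature T_f [GeV], following slope = H^2/T_f."""
    return math.sqrt(slope_fm2_per_gev * T_f_gev)

# A slope of H_MB^2 / T_f = 0.298^2 / 0.150 ~ 0.592 fm^-2/GeV should
# reproduce the quoted H_MB = 0.298 c/fm for T_f = 150 MeV.
print(hubble_parameter(0.592, 0.150))
```

The same relation applied to the HM value H_HM = 0.17 c/fm implies a roughly three times smaller slope, consistent with the flatter HM trend seen in Fig. 7.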

Summary
A Bose-Einstein correlation measurement is reported using data collected with the CMS detector at the LHC in proton-proton collisions at √s = 13 TeV, covering a broad range of charged-particle multiplicity, from a few particles up to about 250 reconstructed charged hadrons. Three analysis methods, each with a different degree of dependence on Monte Carlo simulations, are used to generate correlation functions, and they are found to give consistent results. One-dimensional studies of the radius fit parameter, R_inv, and the intercept parameter, λ, have been carried out for both inclusive events and high-multiplicity events selected using a dedicated online trigger. For multiplicities in the range 0 < N_trk^offline < 250 and average pair transverse momentum 0 < k_T < 1 GeV, the values of the radius fit parameter and the intercept are in the ranges 0.8 < R_inv < 3.0 fm and 0.5 < λ ≲ 1.0, respectively.
Over most of the multiplicity range studied, the value of R_inv increases with increasing event multiplicity and is proportional to N_tracks^{1/3}, a trend predicted by hydrodynamical calculations. For events with more than ∼100 charged particles, the observed dependence of R_inv suggests a possible saturation, with the lengths of homogeneity also being consistent with a constant value. Comparisons of the multiplicity dependence are made with predictions of the color glass condensate effective theory, by means of a parameterization of the radius of the system formed in pp collisions. The values of the radius parameters in the model are much lower than those in the data, although the general shape of the dependence on multiplicity is similar in both cases. The radius fit parameter R_inv is also observed to decrease with increasing k_T, a behavior that is consistent with emission from a system that is expanding prior to its decoupling.
Inspired by hydrodynamic models, the dependence of R_inv^{-2} on the average pair transverse mass was investigated, and the two are observed to be proportional, a behavior similar to that seen in nucleus-nucleus collisions. The proportionality constant between R_inv^{-2} and the transverse mass can be related to the flow parameter of a Hubble-type expansion of the system. For pp collisions at 13 TeV, this expansion is slower for larger event multiplicity, a dependence that was also found in nucleus-nucleus collisions. Therefore, the present analysis reveals additional similarities between the systems produced in high-multiplicity pp collisions and those found using data from larger initial systems. These results may provide additional constraints on future attempts using hydrodynamical models and/or the color glass condensate framework to explain the entire range of similarities between pp and heavy ion interactions.
We acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies:

A Double ratios and cluster subtraction techniques
The analysis procedure using the DR technique follows Refs. [3-5]. To illustrate the procedure, Fig. 8 shows the SR defined in data, the one defined in simulation, as well as the DR. The reference sample used is the η-mixing sample defined previously, and the MC tune adopted is PYTHIA 6 Z2*; both are the default choices employed for obtaining the results with the DR method. To obtain the parameters of the BEC effect in this method, a DR is defined as

DR(q_inv) = SR(q_inv) / SR(q_inv)_MC = C_2,BE(q_inv),   (9)

where C_2,BE refers to the two-particle BEC and SR(q_inv)_MC is the SR in Eq. (1), but computed with simulated events without BEC effects. In each case, the reference samples for data and simulation are obtained in the same way.
The DR method was used in Refs. [3-5] to reduce the bias due to the construction of the reference sample. The DR technique also has the advantage of reducing the non-BEC background that could remain in the SR (i.e., correlations coming from resonance decays and from energy-momentum conservation are not included in the reference sample, which is constructed with the event-mixing technique adopted throughout this analysis). Therefore, by selecting an MC simulation that describes the general properties of the data well, those residual correlations can, in principle, be removed by taking DRs with an SR defined similarly in MC events without BEC contributions [67,68].
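The DR construction described above is, operationally, a bin-by-bin ratio of histograms. The sketch below assumes q_inv histograms of pair counts (signal and mixed-event reference, for data and for a no-BEC MC) that are normalized to unit integral before dividing; all names and numbers are illustrative.

```python
import numpy as np

def single_ratio(sig, mix):
    """Signal over mixed-event reference, bin by bin, with each
    histogram first normalized to unit integral."""
    s = sig / sig.sum()
    m = mix / mix.sum()
    return np.divide(s, m, out=np.zeros_like(s), where=m > 0)

def double_ratio(sig_data, mix_data, sig_mc, mix_mc):
    """DR(q_inv) = SR_data / SR_MC; residual non-BEC correlations
    present in both data and the no-BEC MC cancel in the ratio."""
    sr_data = single_ratio(sig_data, mix_data)
    sr_mc = single_ratio(sig_mc, mix_mc)
    return np.divide(sr_data, sr_mc,
                     out=np.zeros_like(sr_data), where=sr_mc > 0)

# Toy q_inv histograms (pair counts per bin) -- illustrative only.
sig_d = np.array([120., 90., 80., 75.])
mix_d = np.array([100., 90., 85., 80.])
sig_m = np.array([100., 88., 82., 78.])
mix_m = np.array([100., 90., 85., 80.])
print(double_ratio(sig_d, mix_d, sig_m, mix_m))
```

If the MC single ratio is flat, the double ratio reduces to the data single ratio, which is the limiting case one expects when the MC contains no residual correlations at all.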
The CS method, as described in Ref. [5], employs a different approach, dealing only with SRs, in which contaminations (called the "cluster contribution") from resonances and jet fragmentation are still present [17,19,49]. To evaluate and remove such cluster effects, the 1D OS SR correlation functions are fitted with the expression in Eq. (10), which describes the cluster contribution (illustrated in Fig. 9), where b and σ_b are N_trk^offline- and k_T-dependent parameters, c is an overall normalization constant, and C_2^{(+−)}(q_inv) refers to the opposite-sign correlation function. To avoid regions with substantial resonance contamination in the OS correlation function, the ranges 0.60 < q_inv < 0.80 GeV and q_inv < 0.04 GeV are excluded from the fits, due to the ρ(770) and photon-conversion contributions, respectively; b and σ_b are parametrized as in Eqs. (11) and (12), respectively. Results of the fits using these parameterizations are shown in Table 2. Once the values of b and σ_b are determined from the OS SR, the cluster contamination in the SS SR correlation function can be estimated. However, an important element, the conservation of electric charge in both minijet and multibody resonance decays, results in a stronger correlation (reflected in the parameter b) for the OS pairs than for the corresponding SS ones. Therefore, the form of the cluster-related contribution obtained from OS pairs is used to fit the SS correlations, but multiplied by an amplitude z(N_trk^offline, k_T).
The expression in Eq. (13) is first used to fit the SS SRs to find z(N_trk^offline, k_T). The values obtained for the cluster amplitude are fitted for each k_T bin by a theoretically motivated parametrization, based on the ratio of the combinatorics of SS and OS particle pairs. Finally, this parametrization is employed in Eq. (13) to express z(N_trk^offline, k_T) and refit the SRs. For this fit, all the other variables (i.e., b, σ_b, z), given by the parameters obtained in earlier steps of the procedure, are kept fixed, and only the overall normalization c and the parameters of the BEC function C_2,BE(q_inv) are fitted. Some examples of correlation functions with the corresponding full fits, as in Eq. (13), are shown in Fig. 9; in Eq. (13), C_2^{(++,−−)}(q_inv) and C_2,BE(q_inv) refer to the same-sign correlation function and to the BEC, respectively.
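The staged logic of the CS procedure (fit the cluster shape on OS pairs, freeze it, then fit the SS sample with a floating cluster amplitude z and the BEC parameters) can be sketched on toy data. Since Eqs. (10)-(13) are not reproduced here, the Gaussian cluster shape, the exponential BEC form, and every numerical value below are assumptions of this illustration, not the forms used in the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def cluster(q, b, sigma_b):
    # Assumed Gaussian-like cluster shape (stand-in for Eq. (10))
    return b * np.exp(-(q / sigma_b) ** 2)

def os_model(q, c, b, sigma_b):
    # Stage 1: OS correlation = normalization * (1 + cluster term)
    return c * (1.0 + cluster(q, b, sigma_b))

def make_ss_model(b, sigma_b):
    # Stage 2: b and sigma_b are frozen from the OS fit; only the
    # amplitude z, normalization c, and BEC parameters float.
    def ss_model(q, c, z, lam, r):
        bec = 1.0 + lam * np.exp(-q * r / 0.1973)  # assumed exp. BEC, r in fm
        return c * (1.0 + z * cluster(q, b, sigma_b)) * bec
    return ss_model

q = np.linspace(0.05, 2.0, 60)  # q_inv grid in GeV
rng = np.random.default_rng(1)
os_data = os_model(q, 1.0, 0.4, 0.7) + rng.normal(0, 0.005, q.size)
(c_os, b_os, s_os), _ = curve_fit(os_model, q, os_data, p0=[1, 0.3, 0.5])

ss_data = make_ss_model(b_os, s_os)(q, 1.0, 0.5, 0.6, 1.5)
popt, _ = curve_fit(make_ss_model(b_os, s_os), q, ss_data, p0=[1, 0.4, 0.5, 1.0])
```

The key design point mirrored here is that the SS fit never re-determines the cluster shape; it only rescales it by z, so the BEC parameters absorb as little of the cluster contamination as possible.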

B Investigation of an observed anticorrelation
In previous CMS analyses [4,5], the presence of an anticorrelation (dip) in the BEC functions was reported in pp collisions with characteristics that did not show a clear dependence on the center-of-mass energy. The DR technique is used for studying this behavior for pp collisions at 13 TeV since the two methods based on control samples in data include both the BEC and non-BEC contributions together in the fits, making it harder to disentangle these two components.
In Fig. 10, the DRs (zoomed along the correlation function axis) are shown in six multiplicity ranges, for increasing values of N_trk^offline, ranging from MB to HM events. An anticorrelation is also observed in this case, being more pronounced in the lower N_trk^offline bins. Prior to its observation in pp collisions, this anticorrelation had been seen in e+e− collisions [68], with features compatible with a description provided by the τ-model [69], in which particle production has a broad distribution in proper time and the phase-space distribution of the emitted particles is dominated by strong correlations between the space-time coordinates and the momentum components. Thus, this observation in MB pp collisions suggests that such a structure could be associated with small systems. More details and related discussions can be found in Ref. [5].
The plots in Fig. 10 show the data points together with the exponential fit (continuous red curve) and a fit based on the τ-model (continuous green curve), which better describes the shape of the dip. Nevertheless, it should be noted that the χ² values from both the τ-model and the exponential fits are large for some multiplicity bins, favoring neither description. On the other hand, this could also reflect the fact that the uncertainties coming from the choice of MC models are not included in the fits. The depth of the anticorrelation, Δ, can be quantified [4,5] with respect to the baseline represented by the polynomial form C(1 + δq_inv), as in Eq. (3), and the value of the τ-model fit curve at its minimum (details can be found in Ref. [5]). The corresponding results are shown in Fig. 11. The plot on the left shows the variation of Δ as a function of N_tracks, for integrated values of k_T. The depth of the dip decreases with multiplicity and suggests an approach to a constant value above N_tracks ∼ 120. The behavior of the depth as a function of k_T is shown in the right plot, for several N_trk^offline bins. In the lowest multiplicity bin, a clear decrease with k_T is seen, but the slope decreases as N_trk^offline increases. The results for 60 < N_trk^offline < 80 show a weak k_T dependence, and the values of Δ are almost constant for 80 < N_trk^offline < 140.
The depth of the dip, although small, seems to tend to a nonzero constant value at the highest measured multiplicities. This raises the question of whether the effect is a consequence of the DR method or an intrinsic characteristic of the collision system, which could keep memory of its initially small size even at the highest track multiplicities produced so far in pp collisions.