Abstract
The \(B^{0}_{s}\) and B ^{0} mixing frequencies, Δm _{ s } and Δm _{ d }, are measured using a data sample corresponding to an integrated luminosity of 1.0 fb^{−1} collected by the LHCb experiment in pp collisions at a centre-of-mass energy of 7 TeV during 2011. Around 1.8×10^{6} candidate events are selected of the type \(B^{0}_{(s)} \to D^{-}_{(s)} \mu^{+}\) (+ anything), where about half are from peaking and combinatorial backgrounds. To determine the B decay times, a correction is required for the momentum carried by missing particles, which is performed using a simulation-based statistical method. Associated production of muons or mesons allows us to tag the initial-state flavour and so to resolve oscillations due to mixing. We obtain
The hypothesis of no oscillations is rejected by the equivalent of 5.8 standard deviations for \(B^{0}_{s}\) and 13.0 standard deviations for B ^{0}. This is the first observation of \(B^{0}_{s}\) mixing to be made using only semileptonic decays.
1 Introduction
\(B_{s}^{0}\) and B ^{0} mesons propagate as superpositions of particle and antiparticle flavour states. For a flavour-specific decay process^{Footnote 1} such as B ^{0}→D ^{−} μ ^{+} ν, particle–antiparticle mixing lends a sinusoidal component to the decay rates [1, 2]. To measure mixing, the flavour state of the B meson must be observed to change, which requires knowledge of the state from at least two points in time. The experimentally accessible times at which to determine the flavour are production and decay. Neglecting CP violation in mixing, the decay rate N at a proper decay time t simplifies to
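in standard notation (a reconstruction consistent with the symbol definitions that follow, assuming the usual flavour-specific form):

```latex
% Eq. (1): flavour-specific decay rate, reconstructed from the
% surrounding definitions of Gamma, DeltaGamma and Delta m.
N(t) \propto e^{-\Gamma t}
  \left[ \cosh\frac{\Delta\Gamma\, t}{2} \pm \cos(\Delta m\, t) \right]
```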
where ΔΓ and Δm are the width and mass differences^{Footnote 2} of the two mass eigenstates, and Γ is the average decay width [2]. The positive sign applies when the B meson decays with the same flavour as its production and the negative sign when the particle decays with opposite flavour to its production, later referred to as “even” and “odd”. In this study, a sample of semileptonic decays obtained with the LHCb detector is used to measure the mixing frequencies Δm _{ s } and Δm _{ d } for the \(B^{0}_{s}\) and B ^{0} systems. These quantities have previously been measured to high precision, usually in the combination of several channels, relying heavily on hadronic decay modes (see for example Refs. [3, 4] and our recent results, Refs. [5–7]). To date no observation of \(B_{s}^{0}\) mixing has been made using only semileptonic decay channels.
2 Experimental setup
The LHCb detector [8] is a single-arm forward spectrometer covering the pseudorapidity range 2<η<5, designed for the study of particles containing b or c quarks. The detector consists of several dedicated subsystems, organized successively further from the interaction region. A silicon-strip vertex detector surrounds the pp interaction region and approaches to within 8 mm of the proton beams. The first of two ring-imaging Cherenkov (RICH) detectors comes next, followed by the remainder of the tracking system, which comprises, in order: a large-area silicon-strip detector; a dipole magnet with a bending power of about 4 Tm; and three multilayer tracking stations, each with central silicon-strip detectors and peripheral straw drift tubes. After this come the second RICH detector, the calorimeter and the muon stations.
The combined high-precision tracking system provides a momentum measurement with a relative uncertainty that varies from 0.4 % at 5 GeV c ^{−1} to 0.6 % at 100 GeV c ^{−1}, and an impact parameter^{Footnote 3} resolution of 20 μm for tracks with high transverse momentum. By combining information from the two RICH detectors [9], charged hadrons can be identified across a wide range in momentum, around 2 to 150 GeV c ^{−1}. The calorimeter system consists of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter, allowing identification of photon, electron and hadron candidates. Muons that pass through the calorimeters are detected using a system of alternating layers of iron and multiwire proportional chambers [10]. Triggering of events is performed in two stages [11]: a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which performs full event reconstruction.
3 Data selection and reconstruction
The LHCb dataset used in this analysis corresponds to an integrated luminosity of 1.0 fb^{−1} collected in pp collisions at a centre-of-mass energy of 7 TeV during the 2011 physics run at the LHC. Where simulation is required, Pythia 6.4 [12] is used, with a specific LHCb configuration [13]. Decays of hadronic particles are described by EvtGen [14], in which final-state radiation is generated using Photos [15]. The interaction of the generated particles with the detector and the detector response are implemented using the Geant4 toolkit [16, 17] as described in Ref. [18]. Input to EvtGen is taken from the best knowledge of branching fractions (\(\mathcal{B}\)) and form factors at the time of the simulation [1]. The same reconstruction and selection is applied to simulated and detector data.
A sample of events is selected in which a \(D_{(s)}^{+} \to K^{+}K^{-}\pi ^{+}\) candidate forms a vertex with a muon candidate. A cut-based selection is applied to enhance the fraction of real \(D^{+}_{(s)}\) mesons in this sample that arise from \(B^{0}_{(s)}\) semileptonic decays. Vertex and track reconstruction qualities, momenta, invariant masses, flight distances and particle identification (PID) variables are used. The selection was initially optimized on simulated data to maximize the signal significance, \(S/\sqrt{(S+B)}\), where S (B) denotes the number of selected signal (background) candidates. The most important cuts for this analysis are those on the PID and invariant masses. Combined information from the RICH detectors, muon stations, calorimeters and tracking allows us to place stringent requirements on a log-likelihood based PID parameter for each final-state particle separately, ensuring at least 99 % purity in the muon sample, and suppressing peaking backgrounds such as D ^{+}→K ^{−} π ^{+} π ^{+} decays, where a pion has been misidentified as a kaon. To allow a simultaneous measurement of Δm _{ s } and Δm _{ d }, a broad mass window for the K ^{+} K ^{−} π ^{+} system is used to cover both the D ^{+} and \(D_{s}^{+}\) masses, \(-0.2 < M(K^{+}K^{-}\pi^{+}) - M_{0}(D^{+}_{s}) < 0.1~\mbox{GeV}\,c^{-2}\), where \(M_{0}(D^{+}_{s})\) is the known mass of the \(D^{+}_{s}\) meson [1]. Decays of the type D ^{∗}(2010)^{+}→D ^{0} π ^{+} are additionally suppressed by requiring that the invariant mass of the two kaons satisfies M(K ^{+} K ^{−})<1.84 GeV c ^{−2}, and combinatorial background with slow collinear pions is similarly removed with the mass requirement M(K ^{+} K ^{−} π ^{+})−M(K ^{+} K ^{−})−M _{0}(π ^{+})>15 MeV c ^{−2}.
Simulation studies indicate that the selected sample is dominated by \(B_{s}^{0} \to D_{s}^{-} \mu^{+} (\nu,\pi^{0}, \gamma)\), B ^{0}→D ^{−} μ ^{+}(ν,π ^{0},γ) and B ^{+}→D ^{−} μ ^{+}(ν,π ^{+},γ) decays, where no specific intermediate states are required other than those mentioned, and where at least one neutrino occurs together with any number of the other particles in the parentheses. These additional particles are ignored, and so a clear B mass peak cannot be reconstructed. For simplicity, to quantify the measured mass, M(Dμ), within its possible range, we define a “normalized mass”, n, relative to the known masses (M _{0}) of the B, D, and μ:
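a natural definition consistent with this description (a reconstruction, assuming n is scaled to run from 0 at the Dμ kinematic threshold to 1 at the full B mass):

```latex
% Eq. (2): normalized D-mu mass, built from the known masses M_0.
n = \frac{M(D\mu) - M_{0}(D) - M_{0}(\mu)}
         {M_{0}(B) - M_{0}(D) - M_{0}(\mu)}
```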
We require 0.24<n<1.0, where the lower cut mainly removes lowmass combinatorial background candidates. The K ^{+} K ^{−} π ^{+} invariant mass distribution and the normalized mass distribution (n) of the selected candidates are shown in Fig. 1, in which the \(D^{+}_{s}\) and D ^{+} peaks can clearly be seen over the combinatorial background.
Determination of the initial-state flavour is performed using the standard LHCb flavour-tagging algorithms, which are described in detail elsewhere [5, 6, 19]. These algorithms rely on the reconstruction of particles that were produced in association with, and are flavour-correlated with, the signal B meson. The correlations arise either from fragmentation, which often produces a kaon or pion whose charge is correlated with the signal, or from “opposite-side” decays, where the decay products of the partner b quark are reconstructed (e.g. a muon). A neural network combines the tagging decisions for the best tagging power [6].
A hypothesis is required for the nature of the reconstructed candidate, either \(B^{0}_{s}\) or B ^{0}, in order to choose the tagging algorithms to be applied and to select the appropriate mass with which to calculate n. A split around the midpoint between the \(D^{+}_{s}\) and D ^{+} peaks is used. For the \(B^{0}_{s}\) hypothesis all available tags are used. For the B ^{0} hypothesis only opposite-side tags are used, to reduce the difference between B ^{+} and B ^{0} tagging performance and thus better constrain the B ^{+} background (see Sects. 5 and 6). The flavour-tagged dataset comprises 594,845 selected candidates.
Two techniques are employed to measure the mixing frequencies: (a) a multidimensional log-likelihood maximization, simultaneously fitting Δm _{ s } and Δm _{ d }; (b) a model-independent Fourier analysis, used as a cross-check, which determines Δm _{ s } with good precision but Δm _{ d } only with poor precision. Both methods use a common determination of the proper decay time and so share a portion of the corresponding systematic effects.
4 Proper decay-time distributions
To obtain the B-meson decay times, a correction is applied for the momentum lost due to missing particles, using a k-factor method as employed in many previous measurements (see, for example, Refs. [20] and [21]). The k-factor [22] is a simulation-based statistical correction, in which the average missing momentum in a simulated sample is used to correct the reconstructed momentum as a function of the reconstructed Dμ mass (as shown in Fig. 2). In this study we use a fourth-order polynomial to parameterize k as a function of the normalized Dμ mass (n from Eq. (2)), which allows us to use the same correction for \(B^{0}_{s}\) and B ^{0}. With this approach, both Δm _{ s } and Δm _{ d } exhibit residual biases of around 1 %; these biases are known to good precision from the full simulation and are corrected in the final results.
The experimental resolution of the proper decay time reduces the visibility of the oscillations, smearing Eq. (1) with a resolution function R(t,t′−t), where t is the true decay time and t′ is the measured value. The limited performance of the tagging also reduces the visibility of the oscillations. Our selection requirements include variables that are correlated with the decay time, leading to a time-dependent efficiency function, ε(t′). Thus Eq. (1) becomes
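in the standard form (a sketch assuming the usual convolution structure, consistent with the quantities defined below):

```latex
% Eq. (3): tagged, acceptance-corrected rate, smeared by the
% resolution function R; eta and omega are defined in the text.
N(t') \propto \varepsilon(t') \int_{0}^{\infty}
  \eta\, e^{-\Gamma t}
  \left[ \cosh\frac{\Delta\Gamma\, t}{2}
         \pm (1-2\omega)\cos(\Delta m\, t) \right]
  R(t, t'-t)\, \mathrm{d}t
```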
where η is the tagging efficiency and ω is the mistag probability (the fraction of tags that assign the wrong flavour). We parameterize the time-dependent efficiency with an empirical “acceptance” function. Specifically, Gaussian functions are used, as motivated by data and full simulation studies [22]: ε(t′)=1−fG(t′;μ _{0},σ _{1})−(1−f)G(t′;μ _{0},σ _{2}), where G is the Gaussian function and the parameters are determined from fits to the data (typical values are σ _{1,2}<1 ps and μ _{0}≈0.01 ps).
The k-factor is a relative correction for the average missing momentum at a given value of n; as shown in Fig. 2, the range of missing momenta is broad, varying from about 70 % at n=0.2 to zero at n=1. This large relative uncertainty on the corrected momentum (p′) dominates the decay-time resolution, i.e. σ(t′)/t′≈σ(p′)/p′. Hence σ(t′) is approximately proportional to t′ (as seen in Fig. 3), and the decay-time resolution worsens as the decay time increases. This dependence is determined and parameterized from the full simulation. We may choose between a parameterization in terms of either the generated (“true”) decay time, using a numerical convolution, or the measured decay time, using analytical methods; the latter is the default approach. The resolution dependence is well fitted with second- or third-order polynomials.
5 Multivariate fits to the data
A binned, multidimensional, log-likelihood fit to the data is made, using the Root and embedded RooFit fitting frameworks [23, 24]. In order to improve the resolution on the fitted value of Δm _{ s }, the sample is divided into two subsamples about the normalized mass n=0.56 (this value was determined using fast-simulation “pseudoexperiment” studies), and the two subsamples are fitted simultaneously as described below. There are 101,000 bins over the K ^{+} K ^{−} π ^{+} mass, the measured decay time (t′), the normalized mass (n<0.56 and n>0.56), and the tagging result (even and odd). Seven categories of signal and background are assigned component probability density functions (PDFs) whose fractions and shape parameters are left free in the fits to the data. The backgrounds are categorized in terms of their shapes in the mass and decay-time observables. Using the M(K ^{+} K ^{−} π ^{+}) distribution we separate peaking \(D_{(s)}^{+}\) components from combinatorial background components. Each of these categories can be further divided into two based on its decay-time shape. We use the term “prompt” to describe fake candidates containing particles exclusively produced in the primary pp interaction, and the term “detached” for candidates that contain at least one daughter of a secondary decay and which therefore tend to exhibit a significantly larger lifetime. Candidates for the signal B decays of interest must be both detached and peaking. The signal-like decays are usually grouped together in the fit; however, we separate the specific background contribution of B ^{+} within the D ^{+} peak and fit that directly. These components are shown together in Fig. 4 and separately in different M(K ^{+} K ^{−} π ^{+}) regions in Figs. 5 and 6. Each mass PDF is a Gaussian function or a Chebychev polynomial (Fig. 4), and each background decay-time PDF is a simple exponential with an appropriate acceptance function as previously described (Fig. 6).
For the signal decay-time shape we use the model described in Eq. (3), with one instance for each peak. The majority of our sensitivity arises from the mixing asymmetry, whose time-dependent fit in the signal regions is shown in Fig. 7. Any odd/even asymmetry is assumed to be constant as a function of time for prompt backgrounds and for backgrounds that are known not to mix (B ^{+},Λ _{ b }, etc.). Generic detached backgrounds are allowed to have a time-dependent asymmetry varying as an arbitrary quadratic polynomial.
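For intuition, the expected mixing asymmetry and its dilutions can be sketched numerically. This is an illustrative simplification assuming a single Gaussian resolution of fixed width sigma_t (in this analysis σ(t′) itself grows with t′); the (1−2ω) tagging dilution and the exp(−(Δm σ_t)²/2) resolution dilution are the standard factors.

```python
import numpy as np

def mixing_asymmetry(t, dm, omega, sigma_t):
    """Expected even/odd asymmetry vs decay time t (ps) for mixing
    frequency dm (1/ps), mistag probability omega, and a fixed Gaussian
    decay-time resolution sigma_t (ps).  Illustrative sketch only."""
    d_tag = 1.0 - 2.0 * omega                   # tagging dilution
    d_res = np.exp(-0.5 * (dm * sigma_t) ** 2)  # resolution dilution
    return d_tag * d_res * np.cos(dm * t)
```

With ω = 0.35 and perfect resolution the amplitude is only 0.3; adding even a 0.1 ps resolution at Δm _{ s } ≈ 17.7 ps^{−1} shrinks it by a further factor of about five, illustrating why fast \(B^{0}_{s}\) oscillations are so much harder to resolve than B ^{0} ones.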
The proportion of B ^{+}→D ^{−} μ ^{+}(ν,π ^{+},γ) with respect to B ^{0}→D ^{−} μ ^{+}(ν,π ^{0},γ) is fixed to 11 % with a ±2 % uncertainty, using the ratio of known fragmentation functions and branching fractions [1]. Based on the full LHCb simulation, this ratio is corrected by 25 % to account for differences in the reconstruction and tagging efficiencies, with the full value of this correction taken as a systematic uncertainty. We fix ΔΓ_{ s } using the result of a recent LHCb analysis [25], and ΔΓ_{ d } is fixed to zero.
Only the signal mass shapes and the parameters of interest, Δm _{ s } and Δm _{ d }, are shared between the two subsamples in n, which are fitted simultaneously. The goodness of the fit is verified with a local density method [26], which finds a pvalue of 19.6 %.
6 Fit results and systematic uncertainties
Table 1 gives the fitted values for some important quantities. In principle the signal lifetimes are also measured, but these have very large systematic uncertainties and so no results are quoted. The systematic uncertainties on Δm _{ s } and Δm _{ d } are discussed before the final results are given.
Several sources of systematic uncertainty on the main measured quantities, Δm _{ s } and Δm _{ d }, are considered, as summarized in Table 2. The majority of the systematic uncertainties are obtained from the data.

The k-factor: the k-factor correction is a simulation-based method, and so differences between simulation and reality that modify the visible and invisible momenta potentially invalidate the correction. Such differences could, for example, lie in D ^{∗∗} branching fractions or form factors. Large-scale pseudoexperiment studies are combined with full simulations to vary these underlying distributions within their uncertainties and examine the biases produced on the fitted Δm values. Small relative uncertainties are found, 0.3 % for Δm _{ s } and 1.0 % for Δm _{ d }, representing the ultimate limit of this technique without further knowledge of the various subdecays.

Detector alignment: momentum-scale, decay-length-scale, and track-position uncertainties arise from known alignment uncertainties and result in variations in reconstructed masses and lifetimes as functions of the decay opening angle. These uncertainties have been studied using detector survey data and various control modes; they are well determined and small in comparison to the statistical uncertainties [27].

Values of ΔΓ: the quantities ΔΓ_{ d } and ΔΓ_{ s } are held constant in the nominal fits. When they are varied, within ±5 % for ΔΓ_{ d } (chosen to comfortably cover the experimental range, given the lack of information on its sign [1]) and within the known uncertainty on ΔΓ_{ s } [25], our result is only marginally affected.

Model bias: a correction has been made for the 1 % residual frequency bias seen in full simulation studies, as discussed in Sect. 4. This is taken directly from simulation, and half of the correction is assigned as a systematic uncertainty.

Signal proper-time model: the fit is repeated with two different time-resolution models. (a) When the resolution is parameterized as a function of the true rather than the measured decay time, using full numerical convolution, variations of (0.09, 0.002) ps^{−1} are seen in (Δm _{ s }, Δm _{ d }). (b) When a time-independent (average) resolution is used, a 0.001 ps^{−1} variation is seen in Δm _{ d } (this method is not applicable to the measurement of Δm _{ s }; crucially, within the time frame of a single \(B^{0}_{s}\) oscillation the decay-time resolution worsens by an appreciable fraction of the oscillation period, as seen in Figs. 3 and 7). With other modifications to the signal model (resolutions and acceptances) a larger variation in Δm _{ d }, of 0.007 ps^{−1}, is found.

Other models and binning: the order of the Chebychev polynomial is varied, Crystal Ball functions are used for the mass peak shapes, and the background parameterizations and the binning schemes are varied. Of these modifications, the binning scheme has the largest effect. Resulting uncertainties of 0.05 ps^{−1} and 0.001 ps^{−1} are assigned to Δm _{ s } and Δm _{ d }, respectively.

Assumptions on B ^{+} decays: the Δm _{ d } measurement is sensitive to χ _{ d }, the integrated mixing probability, which in turn is sensitive to the non-mixing B ^{+} background. We hold constant several B ^{+}-background parameters in the baseline fit, determined from the full simulation. Many features of the B ^{+}-background fit are varied to evaluate systematic variations, including the fraction, the lifetime, and the corrections for relative tagging performance. The largest uncertainty arises from the tagging-performance corrections, for which a 0.008 ps^{−1} uncertainty is assigned to Δm _{ d }. It is possible to leave one or more of these parameters free during the fit, but the loss in statistical precision is prohibitive.
For cross-checks the data are split by LHCb magnet polarity and LHCb trigger strategies; no variations beyond the expected statistical fluctuations are observed.
We obtain
To obtain a measure of the significance of the observed oscillations, the global likelihood minimum for the full fit is compared with the likelihoods of the hypotheses corresponding to the edges of our search window (Δm=0 or Δm≥50 ps^{−1}). Both would result in almost flat asymmetry curves (cf. Fig. 7), corresponding to no observed oscillations. We reject the null hypothesis of no oscillations by the equivalent of 5.8 standard deviations for \(B^{0}_{s}\) oscillations and 13.0 standard deviations for B ^{0} oscillations.
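The conversion from a likelihood difference to an equivalent number of standard deviations can be sketched with the usual Wilks-type relation (an illustrative sketch for one free parameter, not the exact procedure used for the bounded search window; the input value below is chosen only to reproduce the quoted 5.8σ):

```python
from math import sqrt

def n_sigma(delta_log_likelihood):
    """Equivalent number of standard deviations for an observed
    log-likelihood difference, assuming 2*DeltaLnL ~ chi^2 with one
    degree of freedom (Wilks' theorem)."""
    return sqrt(2.0 * delta_log_likelihood)
```

A log-likelihood difference of about 16.8 between the best fit and the no-oscillation hypothesis would correspond to 5.8 standard deviations.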
7 Fourier analysis
The full fit as described above was performed in the time domain, but measurement of the mixing frequency can also be made directly in the frequency domain as a cross-check, using well-established Fourier transform techniques [28–30]. The cosine term in Eq. (3) has a different sign for the odd and even samples, whereas the lifetime, acceptance, and other features are shared; this simplifies the analysis in the frequency domain. Any Fourier components not arising from mixing are suppressed by subtracting the odd Fourier spectrum from the even spectrum, and no parameterizations of the background shapes, signal shapes, or decay-time resolution are required, allowing a model-independent measurement of the mixing frequencies. We search for the Δm _{ s } peak in the subtracted Fourier spectrum, shown in Fig. 8. Extensive fast-simulation pseudoexperiments have shown that the value of Δm _{ s } is obtained reliably and with a reasonable precision using this method; however Δm _{ d } is heavily biased and has a large uncertainty, and so a result is not quoted. Since residual components of the Fourier spectrum are of much lower frequency than the Δm _{ s } component, and several complete oscillation periods of Δm _{ s } are observable, the search for a spectral peak is relatively free from complications. For Δm _{ d }, however, the relatively low frequency is similar to that of many other features of the data, and only a single oscillation period is observed; therefore the determination of Δm _{ d } is difficult with this simple model-independent approach.
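The even-minus-odd subtraction can be illustrated on analytic toy spectra. This is a minimal numpy sketch, with assumed values Γ = 1/1.5 ps^{−1} and Δm _{ s } = 17.95 ps^{−1}; the real analysis transforms the measured candidate distributions, not analytic curves.

```python
import numpy as np

GAMMA = 1.0 / 1.5   # ps^-1, assumed average decay width for this toy
DM_S = 17.95        # ps^-1, the oscillation frequency to recover

# Toy even/odd decay-time spectra following Eq. (1), neglecting DeltaGamma
t = np.arange(0.0, 50.0, 0.01)                        # ps
even = np.exp(-GAMMA * t) * (1.0 + np.cos(DM_S * t))
odd = np.exp(-GAMMA * t) * (1.0 - np.cos(DM_S * t))

# Subtraction cancels everything common to both samples, leaving only
# the oscillating component: 2 exp(-Gamma t) cos(Dm t)
diff = even - odd

spectrum = np.abs(np.fft.rfft(diff))
omega = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=0.01)  # angular freq, ps^-1

# Skip the low-frequency residuals, then locate the oscillation peak
mask = omega > 5.0
peak = omega[mask][np.argmax(spectrum[mask])]
```

The spectral peak lands at the input Δm _{ s } to within the frequency-grid resolution, mirroring how the subtracted spectrum in Fig. 8 isolates the mixing component without any model for the background or resolution.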
Taking the spectrum for events in a 25 MeV c ^{−2} bin around the \(D^{+}_{s}\) mass, we find a clear and separated peak (Fig. 8, right). The rms width of the peak is 0.4 ps^{−1}, around a peak value of 17.95 ps^{−1}; the rms can be used as a model-independent proxy for the statistical uncertainty. To further evaluate the expected statistical fluctuation in the peak value, we perform a large set of fast-simulation pseudoexperiments taking the result of the multivariate fit as a model for signal and background. The uncertainty found from the simulation studies is 0.32 ps^{−1}, slightly smaller than that given by the rms. To remain model-independent, we report Δm _{ s }=(17.95±0.40 (rms)±0.11 (syst)) ps^{−1}. The systematic uncertainties arise from the detector alignment and the k-factor correction method, common to both measurement techniques, as quantified in Sect. 6.
8 Conclusion
The mixing frequencies for neutral B mesons have been measured using flavour-specific semileptonic decays. To correct for the momentum lost to missing particles, a simulation-based kinematic correction, known as the k-factor, was adopted. Two techniques were used to measure the mixing frequencies: a multidimensional simultaneous fit to the K ^{+} K ^{−} π ^{+} mass distribution, the decay-time distribution, and the tagging information; and a simple Fourier analysis. The results of the two methods were consistent, with the first method being more precise. We obtain
We reject the hypothesis of no oscillations by 5.8 standard deviations for \(B^{0}_{s}\) and 13.0 standard deviations for B ^{0}. This is the first observation of \(B^{0}_{s}\)–\(\overline{B}{}^{0}_{s} \) mixing to be made using only semileptonic decays.
Notes
In this paper, charge conjugate modes are always implied.
The mass difference is measured here as an angular frequency, in units of inverse time.
The impact parameter is the distance of closest approach of a track to a primary interaction vertex.
References
J. Beringer et al. (Particle Data Group), Review of particle physics. Phys. Rev. D 86, 010001 (2012)
O. Schneider (Particle Data Group), B ^{0}–\(\bar{B}^{0}\) mixing. Phys. Rev. D 86, 010001 (2012)
H. Albrecht et al. (ARGUS Collaboration), Observation of B ^{0}–\(\bar{B}^{0}\) mixing. Phys. Lett. B 192, 245 (1987)
A. Abulencia et al. (CDF Collaboration), Observation of \(B^{0}_{s}\)–\(\bar{B}^{0}_{s}\) oscillations. Phys. Rev. Lett. 97, 242003 (2006). arXiv:hepex/0609040
R. Aaij et al. (LHCb Collaboration), Precision measurement of the \(B^{0}_{s}\)–\(\overline{B}{}^{0}_{s}\) oscillation frequency Δm _{ s } in the decay \(B^{0}_{s} \to D^{+}_{s} \pi^{}\). New J. Phys. 15, 053021 (2013). arXiv:1304.4741
R. Aaij et al. (LHCb Collaboration), Oppositeside flavour tagging of B mesons at the LHCb experiment. Eur. Phys. J. C 72, 2022 (2012). arXiv:1202.4979
R. Aaij et al. (LHCb Collaboration), Measurement of the B ^{0}–\(\overline{B}{}^{0}\) oscillation frequency Δm _{ d } with the decays B ^{0}→D ^{−} π ^{+} and B ^{0}→J/ψK ^{∗0}. Phys. Lett. B 719, 318 (2013). arXiv:1210.6750
A.A. Alves Jr. et al. (LHCb Collaboration), The LHCb detector at the LHC. J. Instrum. 3, S08005 (2008)
M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC. Eur. Phys. J. C 73, 2431 (2013). arXiv:1211.6759
A.A. Alves Jr. et al., Performance of the LHCb muon system. J. Instrum. 8, P02022 (2013). arXiv:1211.1346
R. Aaij et al., The LHCb trigger and its performance in 2011. J. Instrum. 8, P04022 (2013). arXiv:1211.3055
T. Sjöstrand, S. Mrenna, P. Skands, PYTHIA 6.4 physics and manual. J. High Energy Phys. 05, 026 (2006). arXiv:hepph/0603175
I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, in Nuclear Science Symposium Conference Record (NSS/MIC) (IEEE, New York, 2010), p. 1155
D.J. Lange, The EvtGen particle decay simulation package. Nucl. Instrum. Methods Phys. Res., Sect. A 462, 152 (2001)
P. Golonka, Z. Was, PHOTOS Monte Carlo: a precision tool for QED corrections in Z and W decays. Eur. Phys. J. C 45, 97 (2006). arXiv:hepph/0506026
J. Allison et al. (Geant4 Collaboration), Geant4 developments and applications. IEEE Trans. Nucl. Sci. 53, 270 (2006)
S. Agostinelli et al. (Geant4 Collaboration), Geant4: a simulation toolkit. Nucl. Instrum. Methods Phys. Res., Sect. A 506, 250 (2003)
M. Clemencic et al., The LHCb simulation application, Gauss: design, evolution and experience. J. Phys. Conf. Ser. 331, 032023 (2011)
M. Grabalosa, Flavour tagging developments within the LHCb experiment. CERNTHESIS2012075
N.T. Leonardo, Analysis of B _{ s } flavor oscillations at CDF. FERMILABTHESIS200618, 2006
M.S. Anzelc, Study of mixing at the DØ detector at Fermilab using the semileptonic decay B _{ s }→D _{ s } μνX. FERMILABTHESIS200807, 2008
T. Bird, Towards measuring B mixing in semileptonic decays at LHCb. CERNTHESIS2011184, 2011
R. Brun, F. Rademakers, Root—an object oriented data analysis framework, in AIHENP’96 Workshop, vol. 389, Lausanne, September (1996), pp. 81–86. doi:10.1016/S01689002(97)00048X
W. Verkerke, D. Kirkby, The RooFit toolkit for data modeling, in 2003 Conference for Computing in HighEnergy and Nuclear Physics (CHEP 03), La Jolla, California, USA, March (2003). arXiv:physics/0306116
R. Aaij et al. (LHCb Collaboration), Measurement of CP violation and the \(B^{0}_{s}\)-meson decay width difference with \(B_{s}^{0}\to J/\psi K^{+}K^{-}\) and \(B_{s}^{0} \to J/\psi\pi^{+}\pi^{-}\) decays. Phys. Rev. D 87, 112010 (2013). arXiv:1304.2600
M. Williams, How good are your fits? Unbinned multivariate goodnessoffit tests in high energy physics. J. Instrum. 5, P09004 (2010). arXiv:1006.3019
J. Amoraal et al., Application of vertex and mass constraints in trackbased alignment. Nucl. Instrum. Methods Phys. Res., Sect. A 712, 48 (2013). arXiv:1207.4756
J.B.J. Fourier, Théorie analytique de la chaleur, Chez Firmin Didot, père et fils (1822)
S.D. Conte, C. de Boor, Elementary Numerical Analysis (McGraw Hill, New York, 1980)
H. Moser, A. Roussarie, Mathematical methods for B ^{0} antiB ^{0} oscillation analyses. Nucl. Instrum. Methods Phys. Res., Sect. A 384, 491 (1997)
Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC (China); CNRS/IN2P3 and Region Auvergne (France); BMBF, DFG, HGF and MPG (Germany); SFI (Ireland); INFN (Italy); FOM and NWO (The Netherlands); SCSR (Poland); MEN/IFA (Romania); MinES, Rosatom, RFBR and NRC “Kurchatov Institute” (Russia); MinECo, XuntaGal and GENCAT (Spain); SNSF and SER (Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We also acknowledge the support received from the ERC under FP7. The Tier1 computing centres are supported by IN2P3 (France), KIT and BMBF (Germany), INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom). We are thankful for the computing resources put at our disposal by Yandex LLC (Russia), as well as to the communities behind the multiple open source software packages that we depend on.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.