A Study of b bbar Production in e+e- Collisions at sqrt(s) = 130-207 GeV

Measurements are presented of R_b, the ratio of the b bbar cross-section to the q qbar cross-section in e+e- collisions, and the forward-backward asymmetry A^b_FB at twelve energy points in the range sqrt(s) = 130-207 GeV. These results are found to be consistent with the Standard Model expectations. The measurements are used to set limits on new physics scenarios involving contact interactions.


Introduction
The ratio R_b ≡ σ(e+e− → bb)/σ(e+e− → qq) and A^b_FB, the forward-backward production asymmetry of bottom quarks in e+e− collisions, are important parameters in precision studies of electroweak theory, and are sensitive probes of new physics. This paper presents measurements of R_b and A^b_FB made at centre-of-mass energies (√s) between 130 GeV and 207 GeV. Events containing a bb pair have several characteristic features, most notably the presence of secondary vertices, which may be used to select a sample enriched in b-decays. A 'b-tag' variable has been constructed for this purpose, which exploits the high-resolution tracking provided by the DELPHI Silicon Tracker.
In the asymmetry measurement the hemisphere containing the b-quark has been determined using a hemisphere-charge technique. In order to enhance sensitivity to possible new physics contributions from high energy scales, all measurements have been made for events in which s′/s ≥ 0.85, where √s′ is the effective centre-of-mass energy after initial state radiation. In the Standard Model e+e− → bb events are produced by an s-channel process propagated by either photon or Z-boson exchange. Over the interval of collision energies under investigation the relative strengths of the two contributions evolve so that the value of R_b is expected to fall, and that of A^b_FB to rise, slowly with √s.
Studies of bb production at collision energies above the Z-pole have been presented by other LEP collaborations [1][2][3][4][5]. The results presented here for the energies 130 ≤ √s ≤ 172 GeV supersede those of an earlier DELPHI publication [6]. Sect. 2 describes the datasets and the aspects of the DELPHI detector relevant for the analysis. The event selection is discussed in Sect. 3. The R_b determination is presented in Sect. 4 and that of A^b_FB in Sect. 5. An interpretation of the results within the context of both the Standard Model and possible new physics models including contact interactions is given in Sect. 6.


Datasets, the DELPHI Detector and Simulation

LEP 2 operation began in 1995, when around 6 pb−1 of data were delivered at centre-of-mass energies of √s = 130 GeV and 136 GeV. In 1996 the collision energy of the beams was raised to, and then beyond, the W+W− production threshold of 161 GeV.
Each subsequent year saw increasing amounts of integrated luminosity produced at ever higher energies, reaching 209 GeV in the year 2000. In total around 680 pb−1 were collected by the DELPHI experiment at 12 separate energy points. Note that during the 2000 run, operation occurred at a near-continuum of energies between 202 GeV and 209 GeV. In the present study the data collected during 2000 are divided into two bins, above and below 205.5 GeV. Throughout LEP 2 operation collisions were performed with unpolarised beams. The mean collision energies for each period of operation and the integrated luminosities used in the analysis are summarised in Table 1. More details on the LEP collision energy calibration and the DELPHI luminosity determination are given in [7] and [8], respectively. In addition to the high energy operation, in each year from 1996 onwards LEP also delivered 1-4 pb−1 at the Z-pole, in order to provide well understood calibration data for the experiments. In this paper the events collected during the calibration running are referred to as the 'Z-data', and provide control samples for the high-energy studies. In 1995 the control sample is taken from the Z-peak data immediately preceding the switch to 130 GeV operation. In 2000 a second set of Z-data was collected in order to provide a dedicated calibration sample for the period in which the DELPHI TPC had impaired efficiency (see below).
A description of the DELPHI detector and its performance can be found in [9,10]. For the analyses presented in this paper, the most important sub-detector in DELPHI was the Silicon Tracker [11]. The Silicon Tracker was a three-layer vertex detector providing measurements in both the transverse and longitudinal views relative to the beam line, and providing effective b-tagging over the polar angle interval 25° < θ < 155°, where θ is the angle with respect to the e− beam direction. End-caps of mini-strip and pixel detectors gave tracking coverage down to θ = 10° (170°). The Silicon Tracker was fully installed in 1996 and remained operational until the end of the LEP 2 programme. During the 1995 run b-tagging information was provided by the microvertex detector described in [12].
During the 2000 run, one of the 12 azimuthal sectors of the central tracking chamber, the TPC, failed. After the beginning of September 2000 it was not possible to detect the tracks left by charged particles in that sector. The data affected correspond to approximately one quarter of the total dataset of that year (the 'BTPC' period). Nevertheless, the redundancy of the tracking system of DELPHI meant that tracks passing through the sector could still be reconstructed from signals in the other tracking detectors. A modified tracking reconstruction algorithm was used in this sector, which included space points reconstructed in the Barrel RICH detector. As a result, the track reconstruction efficiency was only slightly reduced in the region covered by the broken sector, but the track parameter resolutions were degraded compared with the data taken prior to the failure of this sector (the 'GTPC' period).
To determine selection efficiencies and backgrounds in the analysis, events were simulated using a variety of generators and the DELPHI Monte Carlo [10]. These events were passed through the full data analysis chain. Different software versions were used for each year, in order to follow time variations in the detector performance. For the year 2000, separate GTPC and BTPC sets of simulation were produced. The typical size of the simulated samples used in the analysis is two orders of magnitude larger than those of the data.
The e+e− → ff process was simulated with KK 4.14 [13], interfaced with PYTHIA 6.156 [14,15] for the description of the hadronisation. For systematic studies, the alternative hadronisation description implemented in ARIADNE 4.08 [17] was used. Four-fermion background events were simulated with the generator WPHACT 2.0 [18,19], with PYTHIA again used for the hadronisation.

Event Selection
The analysis used charged particles with momentum between 0.1 GeV and 1.5·(√s/2) and a relative momentum uncertainty of less than 100%, and with a distance of closest approach to the beam-spot of less than 4 cm in the plane perpendicular to the beam axis and less than 4/sin θ cm along the beam axis. Neutral showers were used above a minimum energy cut, which was 300 MeV for the barrel electromagnetic calorimeter (HPC) and the very forward calorimeter (STIC), and 400 MeV for the forward electromagnetic calorimeter (FEMC).
The following requirements were applied to select a pure sample of hadronic events, and to ensure that each event lay within the acceptance of the Silicon Tracker:
• Number of charged particle tracks ≥ 7;
• Quadrature sum over each end-cap of the energy reconstructed in the forward electromagnetic calorimeter system (STIC + FEMC) ≤ 0.85(√s/2);
• Total transverse energy > 0.2√s;
• Energy of charged particles > 0.1√s;
• Restriction on the polar angle of the thrust axis, θ_T, such that |cos θ_T| ≤ 0.9.
Data-taking runs were excluded in which the tracking detectors and Silicon Tracker were not fully operational.

In addition to this selection a 'W-veto' was applied to suppress the contamination from four-fermion events. The veto procedure consisted of forcing the event into a four-jet topology using the LUCLUS [14,15] algorithm and imposing the requirement that (E_min/√s)·α_min < 4.25°, where E_min is the energy of the softest jet, and α_min the smallest opening angle found among all jet pairs. This condition is designed to distinguish between two-fermion events containing gluon jets and genuine four-fermion background. Less than 40% of four-fermion events survive the hadronic selection and the W-veto.

The analysis is concerned with events produced with an effective centre-of-mass energy of the qq system, √s′, at or around the collision energy, √s. The effective centre-of-mass energy is reconstructed as in the hadronic analysis reported in [8]. A constrained fit is performed, taking as input the observed jet directions found by the DURHAM clustering algorithm [16], imposing energy and momentum conservation, and assuming any ISR photon was emitted along the beam line. Radiative returns to the Z are then rejected by requiring that the reconstructed value of s′/s ≥ 0.85. Contamination from events with true values of s′/s below this threshold is around 16% at 130.3 GeV, reducing to about 6% at 206.6 GeV.

As a final condition, events with |Q^+_FB| ≥ 1.5 are rejected, where Q^+_FB is one of the event charge variables defined in Sect. 5.1. This selection is applied to exclude badly measured events from the asymmetry measurement, and removes around 0.5% of the sample.
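As a rough illustration, the W-veto condition described above can be sketched as follows. This is a hypothetical helper, not the DELPHI code: the function name, the argument layout and the assumption that the four-jet forcing has already been done are all illustrative.

```python
import math

def passes_w_veto(jet_energies, jet_directions, sqrt_s):
    """Sketch of the 'W-veto' quoted in the text (illustrative, not DELPHI code).

    The event is assumed to have already been forced into a four-jet topology
    (e.g. with LUCLUS). jet_energies are in GeV, jet_directions are unit
    3-vectors. The event is kept (returns True) if
    (E_min / sqrt(s)) * alpha_min < 4.25 degrees,
    where E_min is the softest jet energy and alpha_min the smallest opening
    angle between any pair of jets.
    """
    e_min = min(jet_energies)
    alpha_min = min(
        math.degrees(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))))
        for i, u in enumerate(jet_directions)
        for v in jet_directions[i + 1:]
    )
    return (e_min / sqrt_s) * alpha_min < 4.25
```

A two-fermion event with a soft gluon jet close to a hard jet gives a small E_min·α_min product and is kept, while a WW-like event with four well-separated, comparable-energy jets fails the condition.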
The numbers of events passing the high s′/s two-fermion hadronic selection at each energy point are listed in Table 1, together with the Monte Carlo expectations. The two sets of numbers agree well. The background from four-fermion events is estimated to be around 9% in the 172.1 GeV dataset, rising to 21% in the 206.6 GeV sample. The contamination from τ+τ− events is around 0.3%. All other backgrounds are negligible.
A 'b-tag' variable is used to extract a sub-sample of events enriched in b-quarks from the non-radiative qq sample. This variable makes use of three observables known to distinguish b-quark events from events with non-b content. The three categories of observable considered in this analysis are:
• A lifetime variable, constructed from the impact parameters of charged particle tracks in each jet;
• The invariant mass of charged particles forming any secondary vertices that are found;
• The rapidities of charged particles in any secondary vertex, defined with respect to the jet direction.
These properties are used to construct a single event 'b-tag' variable, B_tag, taking typical values between −5 and 10. Events with higher values of this variable are enriched in b-events. More information on the b-tagging procedure may be found in [20]. In this analysis a cut value of B_tag = 1 is used for all high energy datasets to select the b-enriched sample.

Procedure and Calibration with Z Data
For each energy point R_b is determined through the following relation:

(N^D_tag − N^4f_tag) / (N^D_total − N^4f_total) = c_b R_b ε_b + c_c R_c ε_c + (1 − c_b R_b − c_c R_c) ε_uds ,   (1)

which is solved for R_b. Here N^D_total (N^D_tag) and N^4f_total (N^4f_tag) are the numbers of events in the data and in the estimated four-fermion background respectively, before (after) the application of the b-tag cut; R_c is directly analogous to R_b, but defined for cc events; and ε_b, ε_c and ε_uds are the efficiencies of the b-tag cut applied to b, c and light quark events respectively. c_b and c_c are correction factors, which account for the fact that the effective values of R_b and R_c are modified by the hadronic selection, and that there is some contamination from initial state radiative production in the sample, the fraction of which can in principle be different for each quark type, and therefore changes with the application of the b-tag. Simulation indicated that these correction factors lie within 1-2% of unity.
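Writing the four-fermion-subtracted tagged fraction as f_tag = c_b R_b ε_b + c_c R_c ε_c + (1 − c_b R_b − c_c R_c) ε_uds, the relation can be inverted for R_b in closed form. The sketch below is illustrative (the function name and input layout are assumptions, not the DELPHI implementation):

```python
def extract_rb(n_tag, n_tot, n4f_tag, n4f_tot,
               eps_b, eps_c, eps_uds, r_c, c_b=1.0, c_c=1.0):
    """Solve the tagged-fraction relation for R_b (illustrative sketch).

    f_tag = c_b*R_b*eps_b + c_c*R_c*eps_c + (1 - c_b*R_b - c_c*R_c)*eps_uds
    =>  R_b = (f_tag - eps_uds - c_c*R_c*(eps_c - eps_uds))
              / (c_b * (eps_b - eps_uds))
    """
    f_tag = (n_tag - n4f_tag) / (n_tot - n4f_tot)
    return (f_tag - eps_uds - c_c * r_c * (eps_c - eps_uds)) / (c_b * (eps_b - eps_uds))
```

A closure test with hypothetical efficiencies (ε_b = 0.7, ε_c = 0.2, ε_uds = 0.03, R_c = 0.25, no four-fermion background) reproduces the input value of R_b exactly.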
The efficiency and expected background were determined primarily from Monte Carlo, and cross-checked, where possible, from the data themselves. Figure 1 shows the distribution of the b-tag variable, B_tag, in data and simulation for each dataset. In these plots the 2000 data have been divided between GTPC and BTPC operation, and the 1995 and 1996 data have been combined. In general, reasonable agreement can be seen for all years in the region around and above the cut position of B_tag = 1.0, with worse agreement for the background-dominated region below the cut. (The implications of this imperfect background description are assessed below.) The running at the Z-pole in each year provides a control sample which may be used to calibrate the simulation. The value of R_b at the Z-pole is well known from LEP 1 [21]. This value has been compared with the results obtained from applying expression (1) to each sample of Z-calibration data. Figure 2 shows the distribution of B_tag for Z-calibration data of the 2000 GTPC period, together with that of the corresponding simulation. The b-tag variable has a mild dependence on the collision energy. In order to make the Z-data study as relevant as possible to the high energy measurements, the cut value was placed at B_tag = 0.6 for these data, which gives a similar efficiency to the value used at high energy. The analysis returned a value of R_b which was similar for all datasets apart from 1998, with a mean that was (4.1 ± 1.2)% higher in relative terms than the world average result. The value found for 1998 was (4.2 ± 1.4)% lower than the world average.
The offset in the measurement of R_b with the Z-data can be caused by imperfections in simulating the response of the detector to the b events, to the background, or to both. (Effects arising from uncertainties in the knowledge of the B and D decay modelling have been accounted for and found to be small.) In order to distinguish between these possibilities, a fit was performed to the B_tag distribution of the Z-data in the background-enriched region around the cut value (0 < B_tag < 2.5), taking the shapes of the signal and background from the simulation and fitting their relative contributions. The fit returned background scaling factors with respect to the simulation which varied between around 0.9 and 1.2, depending on the year, with a relative precision of better than 5%. After allowing for these corrections, the remaining, and most significant, cause of the offset was attributed to an incorrect estimate of the b-tagging efficiency.
A fit was performed to the background level in the high energy data, identical to that made with the Z-running samples. Compatible results were obtained within ±10%. For the high energy R b extraction, therefore, these Z-pole determined scaling factors were applied to the cc and uds background, with this 10% uncertainty assigned as a systematic error, uncorrelated between years. The same factors were applied to the four-fermion background, but with twice the systematic uncertainty, as this background component is not present in the Z-data. Finally, the b-tagging efficiency was corrected by the amount indicated from the low energy study, with half of this correction taken as an uncertainty, to account for any variation with energy. The correction factor varied between 0.959 in 1998 and 1.045 for the highest energy point of 2000. Given the very similar nature of the offset seen in the Z-pole study for all years apart from 1998, the uncertainty was taken as correlated for these datasets.
The calibration procedure was repeated under different conditions and assumptions, for example using the same B_tag cut value for the Z-pole and high energy data, and using an absolute offset rather than a multiplicative factor to correct the efficiency. In all cases compatible results were obtained. Table 2 shows the post b-tag sample composition at each energy point, after applying the various correction factors and assuming the Standard Model production fractions.

Systematic Uncertainties in Modelling of Physics Processes
The stability of the results was studied with respect to uncertainties in the knowledge of important properties of B and D production and decay, and other event characteristics relevant to the b-tag. The variation in the parameter values was implemented by reweighting Monte Carlo events to the modified distribution.
• b and c fragmentation: Simulated bb and cc events at high energy had their Peterson fragmentation parameters [22] varied in the range corresponding to the uncertainties in the mean scaled energy of weakly decaying b and c hadrons in Z decays [21].
• b and c decay multiplicity: The charged b decay multiplicity was allowed to vary in the range 4.955 ± 0.062 [21], and that of D mesons was varied according to [21,23], with a ±0.5 uncertainty assigned to the charged multiplicity of c baryon decays.
• b and c hadron composition: The proportions of weakly decaying b and c hadrons were varied according to the results reported in [24] and [25] respectively.
• b and c hadron lifetime: The b and c hadron lifetimes were varied within their measured ranges [24]. In the b hadron case this was 1.576 ± 0.016 ps.
• gluon splitting to heavy quarks: The rates of gluon splitting to bb and to cc per hadronic event were varied in the ranges (0.254 ± 0.051)% and (2.96 ± 0.38)% respectively [21].
• K0_S and Λ production: The rates of K0_S and Λ hadrons were varied by ±5%, consistent with [26,27].
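The reweighting technique used for these variations can be illustrated for the fragmentation case. The sketch below is a toy: the Peterson form f(z) ∝ 1/(z(1 − 1/z − ε/(1−z))²) is from [22], but the function names, the grid normalisation and the treatment of z as directly observable are assumptions; DELPHI reweighted fully simulated events.

```python
def peterson(z, eps):
    """Unnormalised Peterson fragmentation function for scaled energy z in (0, 1)."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

def reweight(z_values, eps_old, eps_new, n_grid=10000):
    """Per-event weights mapping a sample generated with eps_old onto eps_new.

    Each function is normalised on a grid so that the weights fluctuate
    around 1 rather than carrying an arbitrary overall scale.
    """
    grid = [(i + 0.5) / n_grid for i in range(n_grid)]
    norm_old = sum(peterson(z, eps_old) for z in grid) / n_grid
    norm_new = sum(peterson(z, eps_new) for z in grid) / n_grid
    return [(peterson(z, eps_new) / norm_new) / (peterson(z, eps_old) / norm_old)
            for z in z_values]
```

Moving to a smaller ε (a harder fragmentation function) weights high-z events up relative to low-z events, which is how the sensitivity of the b-tag efficiency to the fragmentation model is probed.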
For each property in turn, the value of R b was recalculated using the re-weighted simulation as input and the observed change taken as the systematic uncertainty. The results for the 188.6 GeV and 206.6 GeV energy points are shown in Table 3, with the total uncertainty corresponding to the sum in quadrature of the individual components. Similar behaviour was observed for the other energy points.

Summary of Systematics and Results
The relative systematic uncertainties on R_b are summarised in Table 4. In addition to those components already discussed, contributions are included which arise from the finite size of the Monte Carlo simulation sample, and from the effect of the uncertainty in the residual radiative contamination in the analysis. Studies on the resolution of the s′/s reconstruction indicated that this background was understood to the level of 10%. It can be seen that the dominant source of systematic uncertainty is that coming from the comparison with the Z-data.
The results for R_b are given in Table 5, together with the statistical and systematic uncertainties. The correlation matrix for these results can be found in Appendix A. For each of the two energy points of the year 2000 the results for the GTPC and BTPC periods are found to be compatible and are thus combined into a single value. No variation of R_c is considered in the systematic uncertainty, but the dependence of R_b on this quantity is tabulated explicitly. The internal consistency of the measured R_b results may be studied under the assumption that any dependence of the true value on collision energy can be neglected. The pull distribution of (R_b − <R_b>)/σ is found to have a spread of 1.2.

Table 5: The results for R_b at each energy point. Also given are the dependences of R_b on R_c, and the values for the latter fraction assumed in the analysis [28]. For convenience, the corresponding Standard Model expectations for R_b are included.

The stability of the results has been examined when changing the value of the b-tag cut. The cut position was tightened to a value of B_tag = 2.5 in the high energy data, and B_tag = 2.1 in the Z-data, and R_b re-evaluated at each energy point. Under this selection the event samples halve in size, but the non-bb background is reduced by almost a factor of three. No statistically significant change in result was observed with respect to the standard selection for any energy point in isolation, nor for all energy points averaged together, indicating that the background levels and efficiency are well understood for both selections.
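The internal-consistency check quoted above amounts to computing the RMS of the pulls about the mean. A minimal sketch, assuming uncorrelated uncertainties and a weighted mean (the paper's treatment includes the full correlations):

```python
def pull_spread(values, sigmas):
    """RMS of the pulls (x_i - <x>)/sigma_i about the weighted mean.

    Uncorrelated approximation; for perfectly consistent measurements
    the expected spread is close to 1.
    """
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
    pulls = [(x - mean) / s for x, s in zip(values, sigmas)]
    return (sum(p * p for p in pulls) / len(pulls)) ** 0.5
```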
The results for R b are compared with the Standard Model expectations and interpreted in the context of possible new physics contributions in Sect. 6.

Procedure
For the non-radiative bb events selected in this study, the expected form of the differential cross-section is given by:

(1/σ) dσ/d cos θ_b = (3/8)(1 + cos²θ_b) + A^b_FB cos θ_b ,   (2)

where θ_b is the polar angle the b-quark makes with the initial e− direction. The analysis presented in this paper is based on an unbinned likelihood fit to expression (2), and hence requires knowledge of θ^rec_b, the event-by-event value of θ_b as reconstructed in DELPHI. This reconstruction is performed using the thrust axis and a hemisphere-charge technique. Each event is divided into two hemispheres by the plane perpendicular to the thrust axis that contains the nominal interaction point. Simulation shows that for non-radiative events the thrust axis is a good approximation to the direction of emission of the initial bb pair. The 'hemisphere charges' Q_F and Q_B are then calculated for the forward and backward hemispheres. Q_F is defined as:

Q_F = Σ_i q_i |p_i · T|^κ / Σ_i |p_i · T|^κ ,   (3)

where p_i and q_i are the momentum and charge of particle i, T is the thrust axis, κ is an empirical parameter, and the sums run over all charged particle tracks for which p_i · T > 0. Q_B is defined in an analogous manner with the requirement that p_i · T < 0.
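The hemisphere-charge definition of equation (3) can be transcribed directly. This is a sketch: the track representation and function signature are illustrative, not the DELPHI code.

```python
def hemisphere_charges(tracks, thrust_axis, kappa=0.5):
    """Hemisphere charges Q_F, Q_B (sketch of equation (3) in the text).

    Q = sum_i q_i |p_i . T|^kappa / sum_i |p_i . T|^kappa, summed over tracks
    with p_i . T > 0 (forward) or p_i . T < 0 (backward).
    tracks: list of (charge, (px, py, pz)); thrust_axis: unit 3-vector.
    """
    def charge(selector):
        num = den = 0.0
        for q, p in tracks:
            proj = sum(a * b for a, b in zip(p, thrust_axis))
            if selector(proj):
                w = abs(proj) ** kappa
                num += q * w
                den += w
        return num / den if den else 0.0

    return charge(lambda x: x > 0), charge(lambda x: x < 0)
```

The momentum weighting with κ = 0.5 gives the highest-momentum tracks, which best remember the parent quark charge, the largest influence on the hemisphere charge.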
The information from both hemispheres may be combined into two event variables:

Q^−_FB = Q_F − Q_B  and  Q^+_FB = Q_F + Q_B .

The sign of Q^−_FB is sensitive to whether the b-quark was emitted in the forward or backward hemisphere. The value of κ in equation (3) is tuned to maximise this discrimination, and is set to 0.5. Figure 3 (a) shows Q^−_FB, plotted for all data. There is a small, but significant, negative offset, indicating that the b-quark is preferentially emitted in the forward hemisphere. Q^+_FB has no sensitivity to the initial b-quark direction, but provides a quantity which can be compared between data and simulation, with a width that reflects the resolution of the method. Q^+_FB is plotted in Fig. 3 (b), together with the corresponding quantity from the simulation. As expected, it is centred on zero. The distribution is marginally wider in data than in the Monte Carlo. The cosine of the reconstructed b-quark direction is then given by:

cos θ^rec_b = −sign(Q^−_FB) · |cos θ_T| ,

where θ_T is the polar angle of the thrust axis. The distribution of cos θ^rec_b is shown in Fig. 4 (a), for the full LEP 2 dataset, plotted for events where |Q^−_FB| > 0.1. The asymmetry which is observed is an underestimate of the real asymmetry, both because of 'mistags' and because of background contamination. Detector inefficiencies also distort the distributions, particularly in the forward and backward regions. Mistags are events in which the sign of Q^−_FB does not give the correct b-quark direction. Mistags dilute the true asymmetry by a factor D = (1 − 2ω), where ω is the probability of mistag. Note that ω has a dependence on the absolute value of Q^−_FB. For example, simulation indicates that for the ensemble of high energy data the mistag rate has a value of ω = 0.45 for events where |Q^−_FB| < 0.1, and ω = 0.27 in the case when |Q^−_FB| > 0.1, falling to ω = 0.17 when |Q^−_FB| > 0.36.
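The dilution relation can be illustrated numerically, using the quoted mistag rate ω = 0.27 for |Q^−_FB| > 0.1. This is a toy calculation of the diluted-asymmetry correction, not the fit actually performed in the analysis.

```python
def corrected_asymmetry(a_observed, mistag):
    """Undo the mistag dilution A_obs = (1 - 2*omega) * A_true.

    mistag is the probability omega that the sign of Q_FB^- points
    to the wrong hemisphere.
    """
    return a_observed / (1.0 - 2.0 * mistag)
```

With ω = 0.27 the dilution factor is D = 0.46, so an observed raw asymmetry of 0.23 in this class of events would correspond to a corrected asymmetry of 0.50.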
Figure 4 (b) shows the same data after correction for background contamination, detector inefficiency and mistags, and the corresponding distribution for the Z-data. It is apparent that the high energy data exhibit an asymmetry significantly higher than that of the Z-data, which have a value consistent with that measured at LEP 1 [21].

Figure 4: (a) shows the raw distribution of events with respect to cos θ^rec_b together with the expectations from simulation, generated with the Standard Model values for the asymmetries of each component. (b) shows the differential cross-section (normalised to the total cross-section within the acceptance) with respect to cos θ^cor_b, where θ^cor_b is the b-quark direction after correction for wrong flavour tags, non-uniform acceptance efficiency and background. Also shown is the corresponding distribution for the LEP 2 Z-data. The superimposed curves are fits to the form of the expected differential cross-section.
Optimal sensitivity to A^b_FB is achieved by performing a maximum likelihood fit, taking as the probability density function the expected differential cross-section of equation (2). At each energy point, the measured asymmetry A^meas_FB is determined by maximising the following expression:

ln L = Σ_i ln [ (3/8)(1 + cos²θ^rec_b,i) + A^meas_FB cos θ^rec_b,i ] ,

where the sum runs over all events. Mistags and contamination are accounted for by writing:

A^meas_FB = Σ_j f_j D_j A_j .

Here the sum runs over the five categories of event type in the sample: signal, radiative bb contamination, cc, light quark and four-fermion. Each category enters with a proportion f_j, as given by the values in Table 2, with a true asymmetry A_j and dilution factor D_j, where A_j for the signal category is equivalent to A^b_FB. For the purposes of accounting for the background in the fit, equation (2) is an adequate description of the distribution of radiative and four-fermion events. The dilution factors are determined from simulation, and the asymmetries of the background processes are set to their Standard Model expectations. In order to exploit the dependence of the mistag probability on the absolute value of the charge asymmetry, all events are used, but the dilutions and event fractions are evaluated in four bins of |Q^−_FB| and included in the fit accordingly. The fit procedure has been tested on a large ensemble of simulated experiments, and found to give unbiased results with correctly estimated uncertainties. It has also been applied to the Z-data. Averaged over all datasets, the measured asymmetry minus the value determined at LEP 1 [21] is found to be −0.01 ± 0.01.
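The fit can be sketched with a toy version: an accept-reject generator for the differential cross-section of equation (2) and a simple grid scan of the log-likelihood. This is illustrative only; the real analysis uses a proper maximisation, the full category sum, and dilutions binned in |Q^−_FB|.

```python
import math
import random

def fit_afb(cos_theta, f_sig=1.0, d_sig=1.0, bkg_term=0.0):
    """Grid-scan sketch of the unbinned likelihood fit of equation (2).

    pdf(c) = 3/8*(1 + c^2) + A_meas*c, with
    A_meas = f_sig*d_sig*A_b + bkg_term, where bkg_term collects the
    sum of f_j*D_j*A_j over the background categories.
    Returns the A_b value maximising the log-likelihood on a grid.
    """
    def loglik(a_b):
        a_meas = f_sig * d_sig * a_b + bkg_term
        if abs(a_meas) >= 0.75:          # pdf must stay positive on [-1, 1]
            return float("-inf")
        return sum(math.log(0.375 * (1 + c * c) + a_meas * c) for c in cos_theta)

    grid = [-1.0 + 0.005 * i for i in range(401)]
    return max(grid, key=loglik)

def toy_sample(a_true, n, seed=1):
    """Accept-reject sampling from pdf(c) = 3/8*(1 + c^2) + a_true*c."""
    rng = random.Random(seed)
    pdf_max = 0.75 + abs(a_true)
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, pdf_max) < 0.375 * (1 + c * c) + a_true * c:
            out.append(c)
    return out
```

On a toy sample generated with a known asymmetry, the scan recovers the input value within the expected statistical precision (roughly 0.015 for 5000 events).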

Results and Systematic Uncertainties
The most important source of systematic uncertainty in the asymmetry measurement is associated with the knowledge of the performance of the charge asymmetry variable. There are three significant contributions to this uncertainty:
• Detector Response: The distribution of track multiplicity as a function of momentum has small differences between data and Monte Carlo both at high and low momentum, which may be attributed to an imperfect modelling of the track reconstruction in the simulation. Tracks were re-weighted in the simulation in order to establish the effect on the mistag rate. Similar studies were conducted to understand the consequences of differences in the momentum resolution between data and Monte Carlo. Finally, the width of the Q^+_FB distribution was artificially increased in the simulation, to match that of the data, by adjusting the value of the κ parameter in the analysis of the simulation alone, and the effect on Q^−_FB was determined.
• Hadronisation: An alternative Monte Carlo data set of events based on ARIADNE [17] was used to assess the robustness of the estimation of the mistag rate with respect to the description of the hadronisation process used in the simulation.
• Monte Carlo Statistics: The limited amount of simulation data available introduces a non-negligible statistical uncertainty in the knowledge of the mistag rate.
Additional possible sources of measurement bias related to the mistag have been considered, for example whether any significant angular dependence exists in the value of the dilution. These effects were found to have negligible impact on the results. In addition to these studies, systematic uncertainties were evaluated arising from the same three sources that were considered in the R_b measurement, namely the uncertainty associated with the sample composition as assessed from the Z-data; the uncertainty in the level of the four-fermion background; and the uncertainty in the modelling of the physics processes (apart from hadronisation). The modelling systematic here includes a component arising from the uncertainty in the knowledge of the b-mixing parameter χ. This was varied within the range 0.128 ± 0.008, following the evaluation reported in [24].

A further uncertainty is assigned to account for the fact that QCD corrections to the final state, in particular gluon radiation, modify the asymmetry. The size of this effect has been estimated using ZFITTER [28] to be 0.018. In practice the selection cuts disfavour events with hard gluon radiation and thus suppress this correction. In this study, however, the full effect is taken as an uncertainty, fully correlated between energy points. Finally, a systematic error is added to account for the uncertainty in the knowledge of the residual radiative bb contamination in the sample.

Table 6 lists the systematic uncertainties for the 188.6 GeV and 206.6 GeV energy points. The total is the sum in quadrature of the uncorrelated component uncertainties. The results for A^b_FB, including statistical and systematic uncertainties, are shown in Table 7. The correlation matrix for these results can be found in Appendix A. Both the statistical uncertainty and certain components of the systematic uncertainty have a dependence on the absolute value of the asymmetry.
The uncertainties shown have been evaluated assuming the Standard Model value.
The self-consistency of the results may be assessed assuming that any dependence of the true value of A^b_FB on the collision energy can be neglected. The pull distribution of (A^b_FB − <A^b_FB>)/σ is found to have a spread of 1.5. The outliers contributing to this larger than expected width are the dataset at 161.3 GeV, which has an asymmetry 2.3 σ higher than the mean, and the samples at 182.7 GeV and 206.6 GeV, which have asymmetries that are low by 2.7 and 2.4 σ respectively. The 206.6 GeV dataset is made up of events accumulated during both the GTPC and BTPC running; the values of the asymmetry and associated statistical uncertainties are found to be 0.087 ± 0.218 and 0.152 ± 0.318 for the two periods, and are hence consistent. All asymmetries have been re-evaluated with a more severe b-tag cut of 2.5, as was done for the R_b analysis. Averaged over all data points the asymmetry is found to shift by −0.008 ± 0.052 with respect to the central values reported in Table 7. The shifts for the 161.3 GeV, 182.7 GeV and 206.6 GeV datasets are 0.019 ± 0.209, −0.278 ± 0.191 and −0.043 ± 0.162 respectively. The magnitudes and signs of these changes do not suggest any significant problem with the understanding of the background level and behaviour. Further cross-checks were performed in which the fit was restricted to high values of |Q^−_FB|, and in which alternative methods, such as a binned least-squares fit, were used to determine the asymmetry. Again, no significant changes were observed in the results, in particular for the three outlying points.

Interpretation
The results for R b from Sect. 4.3 and those for A b FB from Sect. 5.2 have been compared against the Standard Model expectations, as calculated by ZFITTER [28] with final state radiation effects included. The measurements and the expectations are shown in Figs. 5 and 6, for R b and A b FB respectively. The mean values of the differences between the measurements and the Standard Model expectations have been evaluated using both the statistical and systematic uncertainties, and taking full account of all correlations. The results of this computation are presented in Table 8. In both cases it can be seen that the measurements agree reasonably well with the Standard Model. When all data points are combined, the relative precision of the R b measurements is 3.3% and the overall uncertainty on the A b FB measurements is 0.083. These results are the most precise yet obtained for the two parameters at LEP 2 energies.
Contact interactions between the initial and final state fermionic currents provide a rather general description of the low energy behaviour of any new physics process with a characteristic energy scale. The results of the R_b and A^b_FB analyses have been compared with a variety of contact interaction models. Following reference [29], the contact interactions are parameterised in the same manner as explained in [8], in which an effective Lagrangian of the form:

L_CI = (g²/Λ²) Σ_{i,j=L,R} η_ij (e̅_i γ^μ e_i)(b̅_j γ_μ b_j)

is added to the Standard Model Lagrangian. Here g²/4π is taken to be 1 by convention, η_ij = ±1 or 0, Λ is the energy scale of the contact interactions, and e_i (b_j) are left- or right-handed electron (b-quark) spinors. By assuming different helicity couplings between the initial-state and final-state currents and either constructive or destructive interference with the Standard Model (according to the choice of each η_ij) a set of different models can be defined from this Lagrangian [30]. The values of η_ij for the models investigated in this study are given in Table 9.

In fitting for the presence of contact interactions a new parameter ε ≡ 1/Λ² is defined, with ε = 0 corresponding to the absence of new physics contributions. The region ε > 0 represents physical values of 1/Λ² in models with constructive interference with the Standard Model, while the region ε < 0 represents physical values for the equivalent model with destructive interference. Least-squares fits have been made for the value of ε assuming contact interactions from each model listed in Table 9. All R_b and A^b_FB data have been used, taking account of the correlations between the measurements. In this fit, the R_b results have been re-expressed as absolute cross-sections, making use of the qq cross-section results found in [8].
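A linearised version of such a fit can be sketched as follows. This is a toy: the real fit keeps the quadratic 1/Λ⁴ term in the cross-section, uses the full correlation matrix, and works with absolute cross-sections; the function names and inputs here are illustrative.

```python
def fit_epsilon(observed, errors, sm_pred, ci_coeff):
    """Analytic chi^2 minimum for a model linear in eps = 1/Lambda^2.

    prediction_i = SM_i + ci_coeff_i * eps, with uncorrelated errors.
    Minimising sum_i ((obs_i - pred_i)/err_i)^2 gives eps in closed form.
    """
    num = sum(c * (o - p) / e ** 2
              for c, o, p, e in zip(ci_coeff, observed, sm_pred, errors))
    den = sum((c / e) ** 2 for c, e in zip(ci_coeff, errors))
    return num / den

def lambda_limit(eps_bound):
    """Convert a bound on |eps| (in TeV^-2) into a scale Lambda in TeV."""
    return 1.0 / abs(eps_bound) ** 0.5
```

For example, a fitted allowed range reaching |ε| = 0.04 TeV⁻² would correspond to a lower limit Λ = 5 TeV.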
The results of the contact interaction fits are shown in Table 10. The data show no evidence for a non-zero value of ε in any model, and the table lists the 68% confidence level allowed range for this parameter in each fit. Also shown are the corresponding 95% confidence level lower limits on the contact interaction scale, allowing for positive (Λ+) and negative (Λ−) interference with the Standard Model. These limits lie in the range 2-13 TeV, with the most stringent obtained for the VV, AA and V0 models.

Table 9: Choices of η_ij for the different contact interaction models.

Conclusions
Analyses of the ratio of the bb cross-section to the hadronic cross-section, R_b, and of the bb forward-backward asymmetry, A^b_FB, have been presented for non-radiative production, defined as s′/s ≥ 0.85, at 12 energy points ranging from √s = 130.3 GeV to √s = 206.6 GeV. The relative uncertainty of the combined R_b measurements is 3.3%, and the uncertainty on the mean value of A^b_FB for all measurements is 0.083, making these results the most precise yet obtained for the two parameters at LEP 2 energies. The results are found to be consistent with the Standard Model expectations, and have been used to set 95% confidence level lower limits in the range 2-13 TeV on the scale of possible contact interactions.

A Correlation Matrices
The correlation matrices for the R b and A b FB results are given in Tables 11 and 12 respectively. The correlations between R b and A b FB are negligible.