Calculations for deep inelastic scattering using fast interpolation grid techniques at NNLO in QCD and the extraction of α_s from HERA data

The extension of interpolation-grid frameworks for perturbative QCD calculations at next-to-next-to-leading order (NNLO) is presented for deep inelastic scattering (DIS) processes. A fast and flexible evaluation of higher-order predictions for any a posteriori choice of parton distribution functions (PDFs) or value of the strong coupling constant is essential in iterative fitting procedures to extract PDFs and Standard Model parameters, as well as for a detailed study of the scale dependence. The APPLfast project, described here, provides a generic interface between the parton-level Monte Carlo program NNLOJET and both the APPLgrid and fastNLO libraries for the production of interpolation grids at NNLO accuracy. Details of the interface for DIS processes are presented together with the required interpolation grids at NNLO, which are made available. They cover numerous inclusive jet measurements by the H1 and ZEUS experiments at HERA. An extraction of the strong coupling constant is performed as an application of the use of such grids, and a best-fit value of α_s(M_Z) = 0.1170 (15)_exp (25)_th is obtained using the HERA inclusive jet cross section data.


Introduction
Modern calculations of higher-order corrections in perturbative QCD for predictions of cross sections from collider experiments are computationally very demanding. In particular, complicated measurement functions and fiducial phase-space definitions associated with differential cross sections prevent an analytic integration over the final-state kinematics, thus calling for numerical approaches. Next-to-next-to-leading order computations for differential cross-section predictions, for example, often require O(10^5) CPU hours due to the complicated singularity structure of the real-emission amplitudes and the delicate numerical cancellations they entail. Further challenges arise from the requirement of high precision for important benchmark processes. Common examples are jet production cross sections in both electron-proton and pp collisions, the Drell-Yan production of Z and W bosons, and gauge-boson production in association with jets.
The NNLOJET program [1] is a recent and continuously developing framework for the calculation of fully differential cross sections for collider experiments. It includes a large number of processes calculated at NNLO in perturbative QCD, implemented in a unified and holistic manner.
For a detailed study of NNLO predictions and the estimation of theoretical uncertainties, these calculations must be repeated with different input conditions. This includes, for example, using different values for the strong coupling α_s(M_Z), different parametrisations for the PDFs, or different choices for the factorisation or renormalisation scales. Computationally even more demanding are fits for the determination of the strong coupling constant and the parton densities in the proton.
In such fits, comparisons must be performed between the data and the NNLO predictions for the many thousands of points that are drawn from the multidimensional parameter space used in the minimisation. As such, it is computationally prohibitive to run the full calculation at NNLO for each required input condition encountered in such a fit. Applications of this nature therefore critically require an efficient approach to perform the convolution of the partonic hard scattering with PDFs, change the value of the strong coupling constant, and vary the scales.
The technique of using a grid to store the perturbative coefficients, stripped of the parton luminosity and of factors of the strong coupling constant α_s, during the full Monte Carlo integration allows the convolution with arbitrary PDFs to be performed later at essentially no additional computational cost. Variation of α_s(M_Z) and of the renormalisation and factorisation scales is also possible. The grid technique, used in Ref. [2], is implemented independently in the APPLgrid [3,4] and fastNLO [5,6] packages. It works by using interpolation functions to distribute each single weight from the x and μ² phase space of the integration over a number of discrete, a priori determined nodes in that phase space, together with the relevant interpolation-function coefficients. Subsequently summing over those discrete nodes reproduces the original value of the weight, or any product of the weight with some function of the phase-space parameters for that specific phase-space point. One grid dimension is required for each parameter on which the subsequently varied quantities depend: for DIS processes, dimensions for x and μ² are required, while for pp collisions a third dimension must be added to account for the momentum fraction x₂ of the second proton. This paper describes developments in the APPLfast project, which provides a common interface for the APPLgrid and fastNLO grid libraries to link to the NNLOJET program for the calculation of the perturbative coefficients. The generation and application of interpolation grids for DIS jet production at NNLO [7,8] is discussed. The grids are made publicly available on the ploughshare website [9]. A subset of these grids has previously been employed for a determination of the strong coupling constant, α_s(M_Z) [10].
Here, additional details of the grid methodology for DIS are discussed, together with the NNLO extraction of α s (M Z ) using data on inclusive jet production from both H1 and ZEUS.

DIS at NNLO and the NNLOJET framework
Jet production in the neutral-current DIS process proceeds through the scattering of a parton from the proton with a virtual photon or Z boson that mediates the interaction. The cross section for this process is given by the convolution of the parton distribution functions f_a with the partonic hard-scattering cross section,

$$\sigma = \sum_a \int \mathrm{d}x \, f_a(x, \mu_F) \, \mathrm{d}\hat{\sigma}_a(x, \mu_R, \mu_F), \qquad (1)$$

which includes a summation over the index a denoting the incoming parton flavour. In perturbative QCD, the hard-scattering cross section can be expanded in the coupling constant,

$$\mathrm{d}\hat{\sigma}_a = \sum_{p \ge 0} \alpha_s^{k+p}(\mu_R) \, \mathrm{d}\hat{\sigma}_a^{(p)},$$

where k corresponds to the power in α_s at leading order (LO). Jet cross-section measurements in DIS commonly employ a reconstruction in the Breit frame of reference, in which the proton and the gauge boson of virtuality Q² collide head-on; this is assumed in the remainder of this work. As a consequence, jet production proceeds through the basic scattering processes γ*g → qq̄ and γ*q → qg, thus requiring at least two partons in the final state. This choice not only gives direct sensitivity to α_s (k = 1) but also a rare handle on the gluon density already at LO. Calculations at higher orders in perturbation theory comprise distinct parton-level ingredients that may involve additional loop integrations and real emissions. For jet production in DIS at NNLO (p = 2), three types of contributions enter the calculation: the double-real (RR) contribution comprising tree-level amplitudes with two additional partons in the final state [11-13], the real-virtual (RV) contribution that requires one-loop amplitudes with one additional emission [14-17], and the double-virtual (VV) contribution involving two-loop amplitudes [18-20]. Each of these ingredients is separately infrared divergent, and only their sum is finite, as dictated by the Kinoshita-Lee-Nauenberg theorem.
The different manifestations of the singularities among the three contributions, related to the distinct parton multiplicities, make the cancellation of infrared singularities a highly non-trivial task. Fully differential predictions, in particular, require a procedure to redistribute and cancel the singularities while retaining the information on the final-state kinematics. The antenna subtraction formalism [21-23] accomplishes this by introducing local counter-terms with the aim of rendering each contribution manifestly finite and thus amenable to numerical Monte Carlo integration methods. The partonic hard-scattering cross section at NNLO can be schematically written as

$$\mathrm{d}\hat{\sigma}_a^{\mathrm{NNLO}} = \int_{\Phi_{n+2}} \left( \mathrm{d}\hat{\sigma}_a^{RR} - \mathrm{d}\hat{\sigma}_a^{S} \right) + \int_{\Phi_{n+1}} \left( \mathrm{d}\hat{\sigma}_a^{RV} - \mathrm{d}\hat{\sigma}_a^{T} \right) + \int_{\Phi_{n}} \left( \mathrm{d}\hat{\sigma}_a^{VV} - \mathrm{d}\hat{\sigma}_a^{U} \right), \qquad (2)$$

where the subtraction terms dσ̂_a^{S,T,U} absorb in their definition the NNLO mass-factorisation terms from the PDFs and are explicitly given in Ref. [8]. Note that differential distributions can be accommodated in Eq. (2) via event selection cuts in the measurement functions that are implicitly contained in the dσ̂_a^X. The NNLOJET framework [1] provides the necessary infrastructure to perform calculations at NNLO using the antenna subtraction method following the master formula (2) and incorporates all available processes under a common code base. The parton-level Monte Carlo generator evaluates the integral for each perturbative order (p = 0, 1, 2), using the short-hand notation

$$\sigma^{(p)} = \sum_a \int \mathrm{d}x \, f_a(x, \mu_F) \, \alpha_s^{k+p}(\mu_R) \, \mathrm{d}\hat{\sigma}_a^{(p)}(x, \mu_R, \mu_F). \qquad (3)$$

For the interface of the NNLOJET code to the grid-filling tools described in Sect. 3, additional hook functions are provided that, e.g., allow for a full decomposition of the differential cross section dσ̂_a^{(p)} into the coefficients of the logarithms in the renormalisation and factorisation scales,

$$\mathrm{d}\hat{\sigma}_a^{(p)}(x, \mu_R, \mu_F) = \sum_{\alpha, \beta} \ln^{\alpha}\!\left(\mu_R^2/\mu^2\right) \ln^{\beta}\!\left(\mu_F^2/\mu^2\right) \mathrm{d}\hat{\sigma}_a^{(p|\alpha,\beta)}(x, \mu), \qquad (4)$$

where μ is the reference scale of the decomposition. This ensures maximal flexibility for the interface to accommodate different prescriptions, such as the different strategies pursued by APPLgrid and fastNLO for the reconstruction of the scale dependence.

The APPLgrid and fastNLO packages
The grid technique allows an accurate approximation of a continuous function f(x) to be obtained from the knowledge of its values at discrete nodes a ≡ x^[0] < x^[1] < ... < x^[N] ≡ b that partition the interval [a, b] into N disjoint sub-intervals. To this end, interpolation kernels E_i(x) are introduced for each node i, which are constructed from polynomials of degree n and satisfy E_i(x^[j]) = δ_i^j. The set of interpolation kernels further forms a partition of unity,

$$\sum_{i=0}^{N} E_i(x) = 1.$$

As a result, the continuous function f(x) can be approximated as

$$f(x) \simeq \sum_{i=0}^{N} f(x^{[i]}) \, E_i(x).$$

In practice, the interpolation is often set up using equidistant nodes (x^[k] = x^[0] + k δx) for simplicity. This can, however, result in a sub-optimal placement of grid nodes and a poor interpolation quality, which in turn would require an increase in the number of nodes to achieve the required target accuracy. Alternatively, the accuracy can be greatly improved by performing a variable transformation x → y(x) that increases the density of nodes in regions where f(x) varies more rapidly. In this case, nodes are chosen equidistantly in y(x) and the corresponding interpolation kernels are denoted by E_i^y(x). Finally, when the function f(x) appears under an integral together with another function g(x), the integration can be approximated by a sum over the nodes,

$$\int \mathrm{d}x \, f(x) \, g(x) \simeq \sum_{i=0}^{N} f(x^{[i]}) \, g^{[i]}, \qquad g^{[i]} \equiv \int \mathrm{d}x \, E_i(x) \, g(x). \qquad (7)$$

The time-consuming computation of the integral can then be performed once and for all to produce a grid g^[i] (i = 0, ..., N), and the integral in Eq. (7) can be approximated for different functions f(x) using the sum on the right-hand side, which can be evaluated very quickly.
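The idea of Eq. (7) can be illustrated with a minimal sketch. Note this toy uses a single global Lagrange basis over all nodes, whereas APPLgrid and fastNLO use low-degree kernels spanning only a few neighbouring nodes; the function names and the midpoint quadrature are illustrative choices, not the libraries' API.

```python
import math

def kernels(nodes, x):
    """Lagrange interpolation kernels E_i(x) with E_i(x_j) = delta_ij.
    For any x they sum to 1 (partition of unity)."""
    E = []
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        E.append(w)
    return E

def make_grid(nodes, g, n_samples=4000):
    """Precompute g[i] = integral of E_i(x) * g(x) once (midpoint rule)."""
    a, b = nodes[0], nodes[-1]
    h = (b - a) / n_samples
    grid = [0.0] * len(nodes)
    for s in range(n_samples):
        x = a + (s + 0.5) * h
        E = kernels(nodes, x)
        for i in range(len(nodes)):
            grid[i] += E[i] * g(x) * h
    return grid

def convolve(nodes, grid, f):
    """Fast a posteriori evaluation: integral f*g ~ sum_i f(x_i) * g[i]."""
    return sum(f(xi) * gi for xi, gi in zip(nodes, grid))
```

The expensive step (`make_grid`) is performed once; `convolve` can then be re-evaluated for arbitrary functions f at negligible cost, which is exactly the property exploited for a posteriori PDF changes.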

Application to the DIS cross section
For DIS processes, the different parton densities f_a(x, μ_F) can be included using the grid technique. In this case, a two-dimensional grid in the two independent variables x and μ_F is constructed. The respective interpolation kernels E_i^y(x) and E_j^τ(μ_F) can be chosen independently for the two variables, introducing an additional transformation in the scale variable, μ_F → τ(μ_F). Typical transformations for DIS are, for instance,

$$y(x) = \ln\frac{1}{x} + \alpha \, (1 - x)$$

for the momentum fraction, and

$$\tau(\mu) = \ln \ln \frac{\mu}{\Lambda}$$

for the hard scale, where the parameter α can be used to increase the density of nodes at high or low values of x, and Λ can be chosen of the order of Λ_QCD, but need not be identical to it. Additional transforms are available in both APPLgrid and fastNLO. For any value of x and μ, both the PDFs and the running strong coupling can then be represented by a sum over the interpolation nodes,

$$f_a(x, \mu) \, \alpha_s^{k+p}(\mu) \simeq \sum_{i,j} f_a(x^{[i]}, \mu^{[j]}) \, \alpha_s^{k+p}(\mu^{[j]}) \, E_i^y(x) \, E_j^{\tau}(\mu),$$

where μ_R = μ_F ≡ μ has been set for simplicity. The computationally expensive convolution with the PDFs from Eq. (1), which further includes an implicit phase-space dependence through the scale μ, can thus be approximated by a two-fold summation,

$$\sigma^{(p)} \simeq \sum_a \sum_{i,j} f_a(x^{[i]}, \mu^{[j]}) \, \alpha_s^{k+p}(\mu^{[j]}) \, \hat{\sigma}_a^{(p)}[i,j].$$

Here, the grid of the hard coefficient function at the perturbative order p has been defined as

$$\hat{\sigma}_a^{(p)}[i,j] = \int \mathrm{d}x \, \mathrm{d}\hat{\sigma}_a^{(p)}(x, \mu) \, E_i^y(x) \, E_j^{\tau}(\mu), \qquad (14)$$

which can be readily obtained during the Monte Carlo integration as described in Eq. (3) by accumulating the interpolated weights during the computation.
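A toy sketch of the two-dimensional fill-and-convolve cycle follows. The transform parameters, the degree-1 kernels, and all function names are illustrative assumptions (production grids use tuned node placements and higher-degree kernels).

```python
import math

ALPHA, LAMB = 3.0, 0.25  # illustrative transform parameters

def y_of_x(x):
    """y(x) = ln(1/x) + alpha*(1-x): equidistant y nodes are dense at low x."""
    return math.log(1.0 / x) + ALPHA * (1.0 - x)

def x_of_y(y):
    """Numerical inverse of y(x) by bisection (y decreases monotonically in x)."""
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if y_of_x(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tau_of_mu(mu):
    """tau(mu) = ln ln(mu/Lambda): equidistant tau nodes are dense at low scales."""
    return math.log(math.log(mu / LAMB))

def mu_of_tau(t):
    return LAMB * math.exp(math.exp(t))

def fill(grid, ynodes, tnodes, x, mu, w):
    """Distribute one Monte Carlo weight w over the bracketing nodes using
    degree-1 interpolation kernels in the transformed variables."""
    def split(nodes, v):
        step = nodes[1] - nodes[0]
        i = min(max(int((v - nodes[0]) / step), 0), len(nodes) - 2)
        u = (v - nodes[i]) / step
        return [(i, 1.0 - u), (i + 1, u)]
    for i, ey in split(ynodes, y_of_x(x)):
        for j, et in split(tnodes, tau_of_mu(mu)):
            grid[i][j] += w * ey * et

def convolve(grid, ynodes, tnodes, pdf, alpha_s, k):
    """Two-fold sum: sigma ~ sum_{i,j} f(x_i, mu_j) alpha_s(mu_j)^k grid[i][j]."""
    return sum(pdf(x_of_y(ynodes[i]), mu_of_tau(tnodes[j]))
               * alpha_s(mu_of_tau(tnodes[j])) ** k * grid[i][j]
               for i in range(len(ynodes)) for j in range(len(tnodes)))
```

Because the kernels form a partition of unity, convolving a filled grid with a constant PDF and α_s returns the original weight exactly; this is the basic closure property the libraries rely on.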

Renormalisation and factorisation scale dependence
With the hard coefficients σ̂_a[i,j] determined separately order by order in α_s, it is straightforward to restore the dependence on the renormalisation scale, μ_R, and the factorisation scale, μ_F, using the RGE running of α_s and the DGLAP evolution of the PDFs. To this end, any functional form μ_{R,F} = μ_{R,F}(μ) can be chosen that depends on the scale μ used during the grid generation in Eq. (14). Generating larger grids that include additional alternative central scale choices, each with an additional dimension in the grid, allows the scale used in the convolution to be an arbitrary function of these independent central scales, μ_{R,F} = μ_{R,F}(μ_1, μ_2, ...). The functionality for storing an additional central scale is implemented in fastNLO but entails an increase in the grid size and therefore also in the memory footprint during the computation. Using the shorthand notation L_R = ln(μ_R²/μ²) and L_F = ln(μ_F²/μ²), the full scale dependence up to NNLO can be written schematically as

$$\sigma(\mu_R, \mu_F) = \sum_{p=0}^{2} \alpha_s^{k+p}(\mu_R) \sum_{\alpha+\beta \le p} L_R^{\alpha} \, L_F^{\beta} \; \sigma^{(p|\alpha,\beta)}, \qquad (16)$$

where the coefficients σ^{(p|α,β)} with α + β > 0 are fixed in terms of the grids of lower orders through the coefficients β_0 and β_1 of the QCD beta function and convolutions with the splitting functions P^(0) and P^(1). In APPLgrid, this summation is performed on the fly only if and when required, with the convolutions with the splitting functions P^(n) performed using Hoppet [24].
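The analytic restoration of the renormalisation-scale dependence can be sketched at NLO, where only the one-loop running of α_s enters (factorisation scale held fixed at the grid scale). This is a minimal illustration with one-loop running and toy grid values, not the libraries' implementation:

```python
import math

BETA0 = (33.0 - 2.0 * 5) / (12.0 * math.pi)  # one-loop QCD beta coefficient, nf = 5

def alpha_run(a_mu, mu_r, mu):
    """One-loop running of alpha_s from the grid reference scale mu to mu_r."""
    return a_mu / (1.0 + a_mu * BETA0 * math.log(mu_r ** 2 / mu ** 2))

def sigma_nlo(a_mu, mu_r, mu, sig0, sig1, k=1):
    """Restore the mu_r dependence of an NLO cross section from grids sig0 (LO)
    and sig1 (NLO coefficient) produced at reference scale mu. The log term
    compensates the shift of alpha_s, leaving a residual mu_r dependence of
    O(alpha_s^{k+2})."""
    a = alpha_run(a_mu, mu_r, mu)
    l_r = math.log(mu_r ** 2 / mu ** 2)
    return a ** k * sig0 + a ** (k + 1) * (sig1 + k * BETA0 * l_r * sig0)
```

Doubling μ_R with the compensating logarithm included shifts the toy cross section far less than naively re-evaluating α_s alone, which is the content of the RGE restoration.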
As an alternative to the analytical reconstruction of the scales in Eq. (16), individual grids for the additional independent coefficients of the scale logarithms can be generated. This corresponds to the default strategy in the fastNLO library, where the full scale dependence is reconstructed through

$$\sigma(\mu_R, \mu_F) = \sum_{p=0}^{2} \sum_{\alpha+\beta \le p} \sum_a \sum_{i,j} \alpha_s^{k+p}\!\left(\mu_R^{[j]}\right) L_R^{\alpha}[j] \, L_F^{\beta}[j] \, f_a\!\left(x^{[i]}, \mu_F^{[j]}\right) \hat{\sigma}_a^{(p|\alpha,\beta)}[i,j],$$

with μ_{R,F}^[j] = μ_{R,F}(μ^[j]) and L_{R,F}[j] = ln(μ_{R,F}²^[j]/μ²^[j]), where the grids σ̂_a^{(p|α,β)}[i,j] are produced in analogy with Eq. (14) but using the decomposition of Eq. (4). Using additional coefficient grids reduces the numerical complexity of the a posteriori convolutions involving the splitting functions and is faster for these terms, but increases the number of summations over the grids for the full NNLO calculation from three to ten. The coefficients of these additional terms can be evaluated using the full analytic expressions, or they can be obtained numerically by evaluating the Monte Carlo weights for six independent scale pairs (μ_R, μ_F) and solving a linear system of equations for the coefficients.
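The numerical determination of the log coefficients from six scale pairs can be sketched as follows. The specific pair choices, the basis ordering, and the function names are illustrative assumptions; only the counting (six unknowns up to NNLO, hence six evaluations) comes from the text.

```python
import math

def basis(mu_r, mu_f, mu0):
    """Monomials [1, L_R, L_F, L_R^2, L_R*L_F, L_F^2] of the scale logs."""
    l_r = math.log(mu_r ** 2 / mu0 ** 2)
    l_f = math.log(mu_f ** 2 / mu0 ** 2)
    return [1.0, l_r, l_f, l_r * l_r, l_r * l_f, l_f * l_f]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[r][n] / M[r][r] for r in range(n)]

def scale_coefficients(weight, mu0):
    """Recover the six log coefficients of an NNLO weight by evaluating it at
    six independent (mu_R, mu_F) pairs and solving the linear system."""
    pairs = [(1.0, 1.0), (2.0, 2.0), (0.5, 0.5),
             (2.0, 1.0), (1.0, 2.0), (2.0, 0.5)]
    A = [basis(r * mu0, f * mu0, mu0) for r, f in pairs]
    b = [weight(r * mu0, f * mu0) for r, f in pairs]
    return solve(A, b)
```

The chosen pairs must yield a non-singular 6×6 system; any set of six independent variations works, and the recovered coefficients are exact up to floating-point precision for a weight that is truly polynomial in the two logarithms.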

The APPLfast project
The APPLfast project provides a library of code written in C++ with Fortran callable components. It is a lightweight interface used to bridge between the NNLOJET code and the specific code for booking and filling the grids themselves using either APPLgrid or fastNLO. The basic structure for the filling of either grid technology is essentially the same, and as such, much of the functionality for the interface exists as common code that is used for filling both, with only the code that actually fills the weights needing to be specific to either technology. Efforts are under way to implement a common filling API for both fastNLO and APPLgrid, which will allow significantly more of the specific filling code to be shared.
A design principle, applied from the outset, was that the interface should be as unobtrusive as possible in the NNLOJET code, and should provide no additional performance overhead in terms of execution time when not filling a grid. When filling a grid, any additional overhead should be kept as low as possible. This is achieved by the use of a minimal set of hook functions that can be called from within the NNLOJET code itself and which can be left within the code with no impact on performance if the grid filling functionality is not required. The original proof-of-concept implementation accessed the required variables for the weights, scales and momentum fractions via the NNLOJET data structures directly, but following this it was decided to instead implement custom access functions that allow, e.g., for a full decomposition of the event weights as described by Eq. (4), thus enabling a more straightforward design for the filling code.
Each process in NNLOJET consists of a large number of subprocesses. In order to fill the grids, during the configuration stage the internal list of NNLOJET processes is mapped to a minimal set of the unique parton luminosities that are used for the grid. When filling, these internal NNLOJET process identifiers are used to determine which parton luminosity terms in the grid should be filled on the interface side.
Generating a cross section grid using NNLOJET typically involves four stages:

1. Vegas adaptation: This is the first stage in the standard NNLOJET workflow and is used to generate an optimised Vegas phase-space grid for the subsequent production runs. At this stage the grid filling is not enabled and NNLOJET can run in multi-threaded mode.

2. Grid warm-up: This is required in order to optimise the limits of the phase space in x and μ_F for the grids. During this stage, the NNLOJET code runs in a custom mode intended solely to sample the phase-space volume, thus skipping the costly evaluation of the matrix elements.

3. Grid production: Here, the grids from stage 2 are filled with the weights generated from a full NNLOJET run, using the optimised phase-space sampling determined in stage 1. The calculation can be run in parallel using many independent jobs to achieve the desired statistical precision.

4. Grid combination: In this stage, the grids from the individual jobs are combined, first merging the results for each of the LO, NLO (R and V), and NNLO (RR, RV, VV) terms separately, and subsequently assembling the respective grids into a final master grid.
The procedure to combine the interpolation grids closely follows the one developed for NNLOJET [25]: each cross-section bin in the observable of each calculated grid is weighted with the same weight as determined by the NNLOJET merging script for the combination of the final cross sections. The stabilisation of higher-order cross sections with respect to statistical fluctuations demands that a substantial number of events be generated. This is particularly true for the double-real contribution, since the large number of final-state partons leads to a complex pattern of infrared divergences that need to be compensated. Typically, computing times of the order of hundreds of thousands of CPU hours are required. In stage 3 it is therefore mandatory to run hundreds to thousands of separate jobs in parallel, in particular for the NNLO sub-contributions. The resulting interpolation grids for each cross section and job are typically about 10 to 100 MB in size. The final master grid obtained by summing the output from all jobs is then somewhat larger than the largest single grid, because it contains at least one weight grid for each order in α_s.
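The per-bin combination of many independent job outputs can be sketched with a simple inverse-variance weighting. This is a simplified stand-in: the actual NNLOJET merging script applies a more refined, outlier-aware per-bin procedure.

```python
def combine(results):
    """Per-bin combination of independent job outputs, each a list of
    (value, variance) tuples, using inverse-variance weights."""
    out = []
    for bin_results in zip(*results):
        wsum = sum(1.0 / var for _, var in bin_results)
        val = sum(v / var for v, var in bin_results) / wsum
        out.append((val, 1.0 / wsum))
    return out
```

Each bin of every job contributes with a weight reflecting its statistical precision, so high-fluctuation jobs are automatically de-emphasised in the merged grid.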
The interpolation accuracy must be evaluated to ensure that the results of the full calculation can be reproduced with the desired precision. For sufficiently well-behaved functions, as is usually the case for PDFs, it is always possible to reach such precision by increasing the number of nodes in the momentum fraction x and the scale μ, at the cost of larger grid sizes. For proton-proton scattering, because of the additional momentum fraction associated with the second proton, the grid size grows quadratically with the number of x nodes.
To optimise the number of nodes necessary to achieve a sufficient approximation accuracy, several parameters and techniques can be adapted: notably, the order or method of interpolation, the transforms used for x and μ, and the accessed ranges in x and μ, as determined in the grid warm-up (stage 2), can be chosen such that the number of nodes is reduced significantly while retaining the same approximation accuracy. Figure 1 shows the root mean square (RMS) of the fractional difference of the fast grid convolution with respect to the corresponding reference for HERA inclusive jet production data. Using a third-order interpolation in the transformed variable y(x), the precision is better than one per mille for grids with 20 x nodes, and better than 0.1 per mille for grids with more than 30 x nodes.
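The closure figure of merit used in Fig. 1 can be written compactly; the function name is an illustrative choice.

```python
import math

def rms_fractional_difference(interpolated, reference):
    """RMS of the per-bin fractional closure (grid / reference - 1), used to
    quantify the interpolation accuracy of a grid against the exact result."""
    resid = [g / r - 1.0 for g, r in zip(interpolated, reference)]
    return math.sqrt(sum(d * d for d in resid) / len(resid))
```

A value of 1e-3 then corresponds to the "one per mille" accuracy quoted for grids with 20 x nodes.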
For a specific process, observable, and phase space selection, an initial indication of the level of precision can be gained already using a single job by comparing the interpolated result with the reference calculation for the chosen PDF set for each bin in the observable.
Since identical events are filled both into the grid and into the reference cross section, any statistical fluctuations should be reproduced, and thus a limited number of events is usually sufficient for this validation. Subsequently, a similar level of precision should be possible for each of the contributions for the full calculation. In future, this could be exploited to avoid the time-consuming access to the reference PDF during the full NNLOJET calculation itself during the mass production of interpolation grids at a previously validated level of precision.

Fig. 1: The RMS difference between the fast grid convolution and the reference histogram as a function of the number of grid nodes in momentum fraction x, for the HERA inclusive jet measurements in DIS [26-31].
For the grids presented here, all events have been produced with reference weights, and the sufficiently accurate reproduction of the reference has been verified for each of the individual output grids from the many separate runs for each contribution, for the combined grids from each contribution, and for the final overall grid combination. Figure 2 compares the fast convolution with the reference from NNLOJET for di-jet data at low Q² from H1 [28] and demonstrates an agreement at better than the per mille level for all bins.
Additional cross checks can be performed, for example, comparing the interpolated result of the final grid using an alternative PDF from the reference cross section, with an independent reference calculation for this same alternative PDF set. Here, of course, agreement can only be confirmed within the statistical precision of the two independent calculations. Moreover, it can be verified that the fast convolution with a change in scale, μ, is consistent with the full calculation performed at that scale.
In addition, the independent and completely different scale variation techniques implemented in APPLgrid and fastNLO are cross-checked against each other and are found to agree. The resulting scale dependence, with a choice for the nominal scale of μ₀² = Q² + p²_{T,jet}, is illustrated in Fig. 3 for two bins in inclusive jet p_T: one from the H1 low-Q² data and one from the ZEUS high-Q² data.

Fig. 2: Validation of the grid accuracy in di-jet production at low Q² (22 < Q² < 30 GeV², top row) and high Q² (150 < Q² < 200 GeV², bottom row). The shaded area indicates an agreement of 0.1%.

Fig. 3: The scale dependence for a single bin in jet p_T with 25 < p_{T,jet} < 35 GeV for the range 30 < Q² < 42 GeV² from H1 (left), and in jet p_T with 18 < p_{T,jet} < 25 GeV for the range 500 < Q² < 1000 GeV² from ZEUS (right). The bands show the result of varying the factorisation scale μ_F by factors between 0.5 and 2.0 with respect to the nominal scale. At each order, three points indicate the result of symmetric variations of μ_R and μ_F.

A significant benefit of using such interpolation grids is that detailed uncertainties can be calculated without the need to rerun the calculation. This is illustrated in Fig. 4, which shows the full seven-point scale variation and the PDF uncertainties derived for the p_{T,jet}-dependent cross sections of the same H1 and ZEUS measurements as before. The seven-point scale uncertainty is a conventional means of estimating the possible effect of uncalculated higher orders. It is defined by the maximal upward and downward changes in the cross section when varying the renormalisation and factorisation scales by factors of two around the nominal scale in the following six combinations of (μ_R/μ_0, μ_F/μ_0): (1/2, 1/2), (2, 2), (1/2, 1), (1, 1/2), (2, 1), and (1, 2). The PDF uncertainties at the 1σ level are evaluated as prescribed for the respective PDF sets: NNPDF31 [33], CT14 [34], MMHT2014 [35], and ABMP16 [36]. In all plots, PDFs at NNLO have been used with α_s(M_Z) = 0.118.

Fig. 4: Inclusive jet cross section as a function of the jet p_T for two ranges in Q²: 30 < Q² < 42 GeV² for H1 data (upper row), and 500 < Q² < 1000 GeV² for ZEUS data (lower row). On the left, the LO, NLO, and NNLO predictions are shown using the NNPDF31 PDF set, including their ratio to LO in the respective lower panels. On the right, the NNLO predictions are shown for the four PDF sets NNPDF31, CT14, MMHT2014, and ABMP16, including their ratio to the NNPDF31 prediction in the respective lower panels. The bands indicate the uncertainty derived from six variations of the μ_R and μ_F scale factors as described in the text (left), respectively the PDF uncertainty as prescribed in the respective publications (right). For better visibility, the points in all upper panels are slightly shifted in p_{T,jet}.
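The seven-point envelope described above can be sketched directly; `cross_section` is a hypothetical callable returning the prediction for given scales, and the function name is an illustrative choice.

```python
import math

# Nominal point first, followed by the six (mu_R/mu0, mu_F/mu0) variations.
SEVEN_POINT = [(1.0, 1.0), (0.5, 0.5), (2.0, 2.0),
               (0.5, 1.0), (1.0, 0.5), (2.0, 1.0), (1.0, 2.0)]

def scale_uncertainty(cross_section, mu0):
    """Seven-point scale uncertainty: the maximal upward and downward shift of
    the cross section over the six variations about the nominal (mu0, mu0)."""
    nominal = cross_section(mu0, mu0)
    varied = [cross_section(r * mu0, f * mu0) for r, f in SEVEN_POINT[1:]]
    return nominal, max(varied) - nominal, min(varied) - nominal
```

With a grid in hand, the six extra evaluations are as cheap as the nominal one, which is exactly why the full envelope can be produced without rerunning NNLOJET.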

Application: determination of the strong coupling constant
As an application of the DIS jet grids at NNLO, an extraction of the strong coupling constant, α_s(M_Z), is performed by fitting the NNLO QCD predictions from NNLOJET to the HERA inclusive jet cross-section data.
Seven sets of cross-section measurements by the HERA experiments are considered for the α_s(M_Z) determination: five from H1 and two from ZEUS, each comprising an inclusive jet cross section measured as a function of p_{T,jet} and Q². The H1 results include measurements at √s = 300 GeV [2] and √s = 320 GeV [26-29], in the ranges Q² below 120 GeV² [26,28] and above 120 GeV² [2,27,29], where jets are measured within the kinematic range 4.5 < p_{T,jet} < 80 GeV. For ZEUS, the data similarly comprise measurements at √s = 300 GeV [30] and √s = 320 GeV [31], but in the range Q² > 125 GeV² and with jets having p_{T,jet} > 8 GeV. For all data sets, jets are defined in the Breit frame of reference using the k_T jet algorithm with a jet-resolution parameter R = 1.
The methodology for the α_s(M_Z) determination employs the same technique as Refs. [10] and [37]. In brief, a goodness-of-fit quantifier between data and prediction that depends on α_s(M_Z) is defined in terms of a χ² function, which is based on normally distributed relative uncertainties and accounts for all experimental, hadronisation, and PDF uncertainties. The experimental uncertainties, as well as the hadronisation corrections and their uncertainties, are provided together with the data by the H1 and ZEUS collaborations. The PDF uncertainties are calculated using the prescriptions provided by the respective PDF fitting groups. The χ² function is then minimised using Minuit [38]. The α_s(M_Z) dependence of the predictions takes into account the contributions from both the hard coefficients and the PDFs. The latter is evaluated using the DGLAP evolution as implemented in the Apfel++ package [39,40], with the PDFs evaluated at a scale of μ_0 = 20 GeV. A different choice for the value of μ_0 is found to have negligible impact on the results. The uncertainties on the fit quantity are obtained with the HESSE algorithm and validated by comparison with results from the MINOS algorithm [38]. The uncertainties are separated into experimental (exp), hadronisation (had), and PDF components by repeating the fit with individual uncertainty components excluded.
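The structure of such a one-parameter χ² fit can be sketched as follows. This is a stand-in for the Minuit-based minimisation: a brute-force scan with uncorrelated uncertainties and a hypothetical prediction callable, where the Δχ² = 1 criterion mimics the HESSE/MINOS uncertainty.

```python
def chi2(a, data, errors, prediction):
    """Uncorrelated chi^2 between data and the alpha_s-dependent prediction."""
    return sum(((d - prediction(a, i)) / e) ** 2
               for i, (d, e) in enumerate(zip(data, errors)))

def fit_alpha_s(data, errors, prediction, lo=0.10, hi=0.13, steps=3000):
    """Minimise chi^2 by scanning alpha_s(M_Z); the 1-sigma uncertainty is
    taken as the half-width of the region where chi^2 <= chi^2_min + 1."""
    grid = [lo + (hi - lo) * s / steps for s in range(steps + 1)]
    chis = [chi2(a, data, errors, prediction) for a in grid]
    cmin = min(chis)
    best = grid[chis.index(cmin)]
    inside = [a for a, c in zip(grid, chis) if c <= cmin + 1.0]
    return best, (inside[-1] - inside[0]) / 2.0
```

In the real analysis the χ² additionally carries the full experimental correlation model, hadronisation corrections, and PDF uncertainties, and the prediction side is the fast grid convolution.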
Following Ref. [10], a representative value for the renormalisation scale, denoted μ̃, is assigned to each single cross-section measurement. It is determined from the lower and upper bin boundaries in Q² and p_{T,jet} (denoted with subscripts dn and up). The calculation is performed using five massless flavours, and as such, for the α_s fit, the data are restricted to scales above twice the mass of the b-quark [41], i.e. μ̃ > 2 m_b. The nominal predictions are obtained using the NNPDF3.1 PDF set [33], which is further used to define the PDF and PDF(α_s) uncertainties. The PDFset uncertainties, on the other hand, are determined by separately repeating the α_s fit using NNLO predictions evaluated with the ABMP [36], CT14 [34], HERAPDF2.0 [42], MMHT [35], and NNPDF3.1 PDF sets. The exact definitions of the PDF(α_s) and PDFset uncertainties can be found in Ref. [37].
Results for the values of α_s(M_Z) obtained from the individual fits to the inclusive jet cross-section data are collected in Table 1. The entries for the H1 data sets correspond to values previously reported in Ref. [10], but some have been updated using NNLO predictions with higher statistical precision. New results are presented for the fits to the ZEUS inclusive jet cross-section data [30,31] and for fits to all the H1 and ZEUS inclusive jet cross-section data together, which constitute the principal results of this study. The α_s(M_Z) values from the individual data sets are found to be mutually compatible within their respective errors. Figure 5 summarises the values for a visual comparison and includes the world average [41,43], which is seen to be consistent with the value extracted here. All the H1 and ZEUS inclusive jet cross-section data are found to be in good agreement with the NNLO predictions, as indicated by the individual χ²/n_dof values in Table 1. From the fit to all HERA inclusive jet data a value of α_s(M_Z) = 0.1149 (9)_exp (38)_th is obtained, where exp and th denote the experimental and theoretical uncertainties, respectively, the latter obtained by combining the individual theory uncertainties in quadrature. A detailed description of the uncertainty evaluation procedure can be found in Ref. [10]. The fit yields χ²/n_dof = 182.9/193, indicating an excellent description of the data by the NNLO predictions. Furthermore, an overall high degree of consistency among all of the HERA inclusive jet cross-section data is found.

The dominant uncertainty in the extraction of α_s arises from the renormalisation scale dependence of the NNLO predictions. The fits are therefore repeated with a restricted data selection requiring μ̃ > 28 GeV, chosen in order to obtain a balance between the experimental uncertainty from the measurements and the scale dependence of the theory predictions, and so reduce the total uncertainty of the final extraction. It was verified that the extracted α_s value and the associated uncertainty are stable with respect to variations of this cut around 28 GeV. This fit represents the primary result, and the value of α_s(M_Z) is determined to be

$$\alpha_s(M_Z) = 0.1170 \, (15)_\text{exp} \, (25)_\text{th}, \qquad (19)$$

with the uncertainty decomposition given in Table 1. The value is found to be consistent with the world average within uncertainties, and the obtained uncertainties are competitive with other determinations from a single observable.

The running of α_s(μ_R) can be inferred from separate fits to groups of data points that share a similar value of the renormalisation scale, as estimated by the representative scale μ̃ of Eq. (18). To this end, the α_s(M_Z) values are determined for each μ̃ group individually; they are summarised in Table 2 and shown in the bottom panel of Fig. 6. All values are mutually compatible and in good agreement with the world average, and no significant dependence on μ_R is observed. The corresponding values for α_s(μ_R), as determined using the QCD renormalisation group equation, are also shown in Fig. 6, illustrating the running of the strong coupling. The dashed line corresponds to the prediction for the μ_R dependence using the α_s value of Eq. (19). The predicted running is in excellent agreement with the individual α_s(μ_R) determinations, further reflecting the internal consistency of the study.
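The translation of a fitted α_s(M_Z) into α_s(μ_R) at the representative scales proceeds via the renormalisation group equation. A minimal one-loop sketch is shown below for illustration; the analysis itself uses the full higher-order running.

```python
import math

def alpha_s(mu, alpha_mz=0.118, mz=91.1876, nf=5):
    """One-loop RGE evolution of the strong coupling from alpha_s(M_Z) to the
    scale mu, with nf massless flavours (a sketch of the running used to
    compare fits performed at different mu_R)."""
    beta0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + alpha_mz * beta0 * math.log(mu ** 2 / mz ** 2))
```

Evolving downwards from M_Z increases the coupling and evolving upwards decreases it, which is the asymptotic-freedom behaviour traced out by the dashed curve in Fig. 6.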
To conclude this study, it is worth commenting on the robustness of the procedure. On the theory side, the inclusive jet cross section is an observable that is well defined in perturbative QCD and only moderately affected by non-perturbative effects. On the experimental side, the study rests on a solid basis, making use of measurements from two different experiments and three separate data-taking periods, covering two different centre-of-mass energies and two kinematic regions in Q². As a result, although only a single observable is used in the determination of αs, a highly competitive experimental and theoretical precision is achieved.
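The extraction described above rests on a χ² comparison between measured cross sections and αs-dependent theory predictions. A minimal sketch of such a fit, with entirely hypothetical data, uncertainties, and theory coefficients (the real analysis uses correlated uncertainties and the full NNLO grid machinery), could look as follows:

```python
import numpy as np

# Toy chi^2 scan to extract alpha_s(M_Z): the best-fit value minimises
# chi^2 = sum ((data - theory)/sigma)^2 over a grid of alpha_s values.
# All numbers below are hypothetical placeholders.

def theory(alpha_s, coeff):
    # Simplified prediction: each cross section scales with alpha_s,
    # with 'coeff' playing the role of the perturbative coefficients.
    return coeff * alpha_s

data = np.array([11.8, 9.4, 7.1])      # hypothetical measured cross sections
sigma = np.array([0.4, 0.3, 0.3])      # hypothetical experimental uncertainties
coeff = np.array([100.0, 80.0, 60.0])  # hypothetical theory coefficients

grid = np.linspace(0.10, 0.13, 301)    # scanned values of alpha_s(M_Z)
chi2 = [np.sum(((data - theory(a, coeff)) / sigma) ** 2) for a in grid]
best = grid[int(np.argmin(chi2))]
print(f"best-fit alpha_s(M_Z) = {best:.4f}")
```

In practice the minimisation is performed with correlated experimental uncertainties and repeated under scale variations to obtain the theory uncertainty.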

Conclusions and outlook
NNLO calculations in perturbative QCD are rapidly becoming the new standard for many important scattering processes. These calculations are critical in reducing theory uncertainties and often improve the description of the increasingly precise data, sometimes even resolving prior tensions. However, the computational resources required for such calculations prohibit their use in applications that require a frequent re-evaluation using different input conditions, e.g. fitting procedures for PDFs and Standard Model parameters.
Fast interpolation-grid techniques circumvent these limitations by allowing for the a posteriori interchange of PDFs, values of the strong coupling αs, and scales in the prediction at essentially no cost. In this article the APPLfast project is discussed, which provides a generic interface for the APPLgrid and fastNLO grid libraries to produce interpolation tables in which the hard coefficient functions are computed by the NNLOJET program. Details on the extension of the techniques to NNLO accuracy and their implementation for DIS are discussed, together with the public release of NNLO grid tables for jet cross-section measurements at HERA [9].
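The core idea of such interpolation grids is that the hard coefficients are stored once as weights on interpolation nodes in x and μ, separately for each power of αs; any later choice of PDF or coupling is then folded in by a cheap sum. A minimal sketch, with hypothetical nodes, weights, and stand-in PDF and αs functions (not the actual APPLgrid or fastNLO data formats), is:

```python
import numpy as np

# Sketch of the a posteriori convolution behind interpolation grids:
# sigma = sum over (power p, x-node i, mu-node j) of
#         weight[p,i,j] * alpha_s(mu_j)^p * pdf(x_i, mu_j).
# All nodes, weights, and functions below are hypothetical stand-ins.

x_nodes = np.array([0.01, 0.1, 0.5])     # hypothetical x interpolation nodes
mu_nodes = np.array([10.0, 30.0, 90.0])  # hypothetical scale nodes [GeV]
powers = [1, 2, 3]                       # alpha_s powers at LO, NLO, NNLO

rng = np.random.default_rng(0)
weights = rng.random((len(powers), len(x_nodes), len(mu_nodes)))  # stored once

def pdf(x, mu):
    # hypothetical stand-in for a real PDF evaluation (e.g. via LHAPDF)
    return (1 - x) ** 3 / x

def alpha_s(mu):
    # hypothetical stand-in for the running coupling
    return 0.1170 / (1 + 0.61 * 0.1170 * np.log(mu**2 / 91.1876**2))

def cross_section():
    sigma = 0.0
    for p_idx, p in enumerate(powers):
        for i, x in enumerate(x_nodes):
            for j, mu in enumerate(mu_nodes):
                sigma += weights[p_idx, i, j] * alpha_s(mu) ** p * pdf(x, mu)
    return sigma

print(f"grid prediction: {cross_section():.3f}")
```

Because the expensive weights are computed only once, swapping the PDF or the value of αs amounts to re-evaluating this sum, which is what makes iterative fits feasible.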
As an application of the grids, an extraction of the strong coupling constant α s has been performed, based on jet data at HERA, closely following the methodology in Refs. [10,37]. In contrast to Ref.
[10], where the αs determination considered both inclusive and di-jet cross section data from H1 alone, the present analysis includes data from both the H1 and ZEUS experiments, but αs is fitted solely using the single-jet inclusive data. The use of a single observable facilitates the simultaneous determination of αs(MZ) from two experiments, as the observable is defined identically in both experiments, which reduces ambiguities in the treatment of theory uncertainties. This work represents one of the first determinations of the strong coupling constant to include both H1 and ZEUS DIS jet data at NNLO accuracy, a determination made possible by the foundational work presented in this paper. The determination of αs(MZ) from the H1 and ZEUS data taken together provides a best-fit value of αs(MZ) = 0.1170 (15)exp (25)th.
Although the discussion in the present work was limited to the DIS process, the implementation in both APPLfast and NNLOJET is fully generic and thus generalisable to hadron-hadron collider processes. This means that all NNLO calculations available within NNLOJET, such as di-jet production and V + jet production in proton-proton scattering, can be interfaced to the grid-filling tools in a rather straightforward manner. This generalisation will be presented in a future publication.

Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: The data generated in the context of this publication comprise the fast interpolation grids at NNLO accuracy. They are publicly available on the designated platform at ploughshare.web.cern.ch and can be freely downloaded and used to reproduce all results from the manuscript.]