Mueller-Navelet jets in next-to-leading order BFKL: theory versus experiment

We study, within QCD collinear factorization and including BFKL resummation at the next-to-leading order, the production of Mueller-Navelet jets at the LHC with center-of-mass energy of 7 TeV. The adopted jet vertices are calculated in the approximation of a small aperture of the jet cone in the pseudorapidity-azimuthal angle plane. We consider several representations of the dijet cross section, differing only beyond the next-to-leading order, to calculate a few observables related to this process. We use various methods of optimization to fix the energy scales entering the perturbative calculation and thereafter compare our results with the experimental data from the CMS collaboration.


Introduction
The investigation of jet production in perturbative QCD is an important element of phenomenological studies at the LHC. Many interesting physical topics can be studied in such experiments.
In recent years, the inclusive hadroproduction of two jets with large and similar transverse momenta and a large relative separation in rapidity Y, the so-called Mueller-Navelet jets [1], has become very popular. It allows one to discriminate between the BFKL [2] dynamics of parton-parton interaction and the standard collinear fixed-order QCD factorization, which should work only when Y is not too large, Y ∼ 1. Compared with the fixed-order DGLAP [3] calculation, the BFKL dynamics predicts a larger cross section and a reduced azimuthal correlation between the two detected forward jets. If Y is large, the leading terms in the perturbative expansion of the cross section (related to the forward amplitude) in the coupling α_s are those proportional to powers of α_s Y, and they are resummed in the BFKL series. A first, naive analysis would suggest that Mueller-Navelet jet production should manifest an exponential growth with Y, but the hard matrix elements are convoluted via collinear factorization with the parton distribution functions (PDFs), which damp this behavior.
To tame the effects of the PDFs, it is useful to look at ratios of distributions. Examples of such ratios are the azimuthal angle correlations between the two measured jets, i.e. the average values ⟨cos(nφ)⟩, which depend on Y (here n is an integer and φ is the angle in the azimuthal plane between the direction of one jet and the direction opposite to the other jet). Other useful observables are the ratios of two such cosines, introduced for the first time in Refs. [4]. We expect these observables to decrease as Y increases, due to the larger amount of undetected parton radiation between the two tagged jets.
It is well known that the next-to-leading order (NLO) BFKL corrections for the n = 0 conformal spin have a sign opposite to the leading order (LO) result and are large in absolute value. This is true both for the NLO BFKL kernel [5], which enters the integral equation giving the process-independent BFKL Green's function, and for process-dependent NLO impact factors (see, e.g., Ref. [6] for the case of vector meson photoproduction). The impact factor needed for the BFKL description of Mueller-Navelet jet production, the so-called forward jet vertex [7,8], is no exception. For this reason it is strictly necessary to optimize the amplitude by (i) including some pieces of the (unknown) next-to-NLO corrections and/or (ii) suitably choosing the values of the energy and renormalization scales, which, though arbitrary within the NLO, can have a sizeable numerical impact through subleading terms. A remarkable example of the former approach is the so-called collinear improvement [9], based on the inclusion of terms generated by renormalization group (RG), or collinear, analysis, leading to more convergent kernels. As for the latter approach, the most common ways to optimize the choice of the energy and renormalization scales are those inspired by the principle of minimal sensitivity (PMS) [10], the fast apparent convergence (FAC) [11] and the Brodsky-Lepage-Mackenzie method (BLM) [12].
In an ideal situation, the use of one or the other optimization procedure should not change much the prediction for any observable related to a given process. In practice, this may well not be the case. Then, it becomes fundamental to identify those observables, if any, which show little or no sensitivity to the change of optimization procedure. Otherwise, the preference for one optimization procedure should be assigned by evaluating the agreement with the experimental data in a certain setup and, thereafter, assumed to apply also in other setups.
The study of the Mueller-Navelet jet production process at the LHC is, in this respect, a paradigmatic case. The first, pioneering paper devoted to the study of this process within full NLO BFKL [13] used kinematical (i.e. non-optimized) energy scales and also considered, as an option, the case of an RG-improved kernel. There, predictions for the differential cross section and several azimuthal correlations at the design LHC center-of-mass energy of 14 TeV were presented. Later, a similar analysis was redone [14], using the standard (i.e. non-RG-improved) kernel, but with energy scales optimized according to the PMS method. Besides, in [14] the analytic expressions for the jet vertices derived in the small-cone approximation [8] were used. The small-cone approximation simplifies the numerical analysis and is an adequate tool, since typically the difference between it and the exact jet definition is much smaller than other theoretical uncertainties inherent to the BFKL approach. A third paper [15] followed the same approach as Ref. [14], but adopted an RG-improved kernel and observed a tendency of the optimal values of the energy scales towards "naturalness".
The appearance of the first CMS data at a center-of-mass energy of 7 TeV [16] triggered theoretical analyses in the same kinematical setup. These showed that the use of an RG-improved kernel with non-optimized energy scales does not lead to agreement with experiment [17], while a nice agreement is found at the larger values of Y when BLM-optimal energy scales are used instead [18], both in pure BFKL and in RG-improved calculations. Recently, some effects subleading to the BFKL approach, dubbed "violation of the energy-momentum conservation", were studied in the context of the Mueller-Navelet jet production process [19].
The aim of the present paper is to supplement the nice results achieved in Refs. [17,18] with some further information. In particular, we will try to answer, at least partially, the following questions:
• are there observables weakly sensitive (or insensitive altogether) to the optimization procedure?
• do other optimization schemes, such as PMS and FAC, reproduce the CMS experimental data as well as BLM does, if necessary by modifying the amplitude with the inclusion of some of the unknown next-to-NLO corrections?
• does the BLM method reproduce the experimental data also for the total Mueller-Navelet cross section, as it does for the azimuthal correlations?
The paper is organized as follows: in the next Section we give the kinematics and the basic formulae for the Mueller-Navelet jet process cross section, present the different, NLO-equivalent representations of the amplitude adopted in this work and briefly recall the PMS, FAC and BLM optimization methods; in Section 3 we present our results; finally, in Section 4 we draw our conclusions and discuss some issues which we believe to be important in confronting theoretical predictions with experimental data.

The Mueller-Navelet jet process
We consider the production of Mueller-Navelet jets [1] in proton-proton collisions, where the two jets are characterized by high transverse momenta, k_{J_1}^2 ∼ k_{J_2}^2 ≫ Λ_QCD^2, and a large separation in rapidity; p_1 and p_2 are taken as Sudakov vectors satisfying p_1^2 = p_2^2 = 0 and 2(p_1 p_2) = s.
In QCD collinear factorization the cross section of the process (1) reads, where the i, j indices specify the parton types (quarks q = u, d, s, c, b; antiquarks q̄ = ū, d̄, s̄, c̄, b̄; or gluon g), f_i(x, µ_F) denotes the initial proton PDFs; x_{1,2} are the longitudinal fractions of the partons involved in the hard subprocess, while x_{J_{1,2}} are the jet longitudinal fractions; µ_F is the factorization scale; dσ_{i,j}(x_1 x_2 s, µ_F) is the partonic cross section for the production of jets, and x_1 x_2 s ≡ ŝ is the squared center-of-mass energy of the parton-parton collision subprocess (see Fig. 1).
In the BFKL approach [2], the cross section of the hard subprocess can be written as (see Ref. [14] for the details of the derivation), where φ = φ_{J_1} − φ_{J_2} − π and the cross section C_0 and the other coefficients C_n are the azimuthal Fourier coefficients. Here ᾱ_s(µ_R) ≡ α_s(µ_R) N_c/π, with N_c the number of colors, β_0 = 11N_c/3 − 2n_f/3 is the first coefficient of the QCD β-function, χ(n, ν) = 2ψ(1) − ψ(n/2 + 1/2 + iν) − ψ(n/2 + 1/2 − iν) is the LO BFKL characteristic function, and c_{1,2}(n, ν) are the LO jet vertices in the ν-representation. The remaining objects are related to the NLO corrections of the BFKL kernel (χ^(1)(n, ν)) and of the jet vertices in the small-cone approximation (c^(1)_{1,2}) in the ν-representation. Their expressions are given in Eqs. (23), (36) and (37) of Ref. [14].
The representation (4) is valid both in the leading logarithm approximation (LLA), which means resummation of the leading energy logarithms, all terms (α_s ln s)^n, and in the next-to-leading approximation (NLA), which means resummation of all terms α_s (α_s ln s)^n. The scale s_0 is artificial: it is introduced in the BFKL approach when performing the Mellin transform from the s-space to the complex angular momentum plane, and it cancels in the full expression, up to terms beyond the NLA.
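Schematically, the two accuracies resum the following towers of terms (a sketch in an obvious notation, with a_n and b_n denoting coefficients not written out here):

```latex
\sigma^{\rm LLA} \;\sim\; \sum_{n=0}^{\infty} a_n \,(\alpha_s \ln s)^n\,,
\qquad
\sigma^{\rm NLA} \;\sim\; \sum_{n=0}^{\infty} \Big[\, a_n \,(\alpha_s \ln s)^n
\;+\; b_n \,\alpha_s\,(\alpha_s \ln s)^n \Big]\,.
```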
Eq. (4) represents just one of infinitely many representations of the coefficients C_n. One can consider alternative representations, aiming at catching some of the unknown next-to-NLA corrections. Introducing, for the sake of brevity, the definitions, the representations we will use in this work are the following:
• the so-called exponentiated representation, where the dependence on |k_{J_i}| and x_{J_i} in c^(1)_{1,2} has been omitted for simplicity and with χ^(1)(n, ν) given by Eq. (23) in Ref. [14];
• the exponentiated representation with an extra term, irrelevant within the NLA, given by the product of the NLO corrections of the two jet vertices;
• the exponentiated representation with an RG-improved kernel, where χ(n, ν) is replaced by its RG-improved counterpart.

Numerical results
In this Section we present our results for the dependence on Y = y_{J_1} − y_{J_2} of the coefficients C_n and of their ratios R_{nm} ≡ C_n/C_m. Among them, the ratios of the form R_{n0} have a simple physical interpretation, being the azimuthal correlations ⟨cos(nφ)⟩.
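The relation between the coefficients C_n and the azimuthal correlations can be illustrated with a small numerical sketch (a toy Gaussian azimuthal distribution stands in for the true dσ/dφ; the function names and the width 0.8 are ours, not from the analysis):

```python
import numpy as np

def fourier_coefficients(dsigma_dphi, n_max, n_grid=20000):
    """Toy estimate of C_n = integral over [-pi, pi] of cos(n*phi) dsigma/dphi."""
    phi = np.linspace(-np.pi, np.pi, n_grid)
    dphi = phi[1] - phi[0]
    w = dsigma_dphi(phi)
    return [float(np.sum(np.cos(n * phi) * w) * dphi) for n in range(n_max + 1)]

# Toy azimuthal distribution: decorrelation modeled by a Gaussian in phi
toy = lambda phi: np.exp(-phi**2 / (2 * 0.8**2))

C = fourier_coefficients(toy, 3)
cos_n = [C[n] / C[0] for n in range(1, 4)]   # <cos(n*phi)> = C_n / C_0
R21 = C[2] / C[1]                            # a ratio of two cosines
```

The broader the toy distribution in φ (i.e. the stronger the decorrelation), the faster the higher coefficients die out, which is the qualitative behavior expected as Y grows.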
In order to match the kinematical cuts used by the CMS collaboration, we will consider the integrated coefficients, with y_{1,min} = y_{2,min} = 0, y_{1,max} = y_{2,max} = 4.7, k_{J_1,min} = k_{J_2,min} = 35 GeV, and their ratios R_{nm} ≡ C_n/C_m. We fix the jet cone size at the value R = 0.5 and the center-of-mass energy at √s = 7 TeV. We use the PDF set MSTW2008nlo [20] and the two-loop running coupling with α_s(M_Z) = 0.11707.
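As an illustration of how these cuts act, the following sketch integrates a purely illustrative weight (not the BFKL integrand) over the allowed phase space at fixed Y; the function names and the fall-off of the toy weight are our assumptions:

```python
import numpy as np
from scipy import integrate

Y_MAX, KT_MIN = 4.7, 35.0   # CMS-like cuts from the text (kT in GeV)

def integrated_coefficient(f, Y, kt_max=150.0):
    """Toy version of an integrated C_n at fixed Y = y1 - y2: triple integral
    over y1 (y2 = y1 - Y is fixed by the delta function) and both jet kT's.
    `f(y1, y2, k1, k2)` is a placeholder for the true partonic weight."""
    y1_lo = max(0.0, Y - Y_MAX)   # jet 1 in [0, 4.7]
    y1_hi = min(Y_MAX, Y)         # jet 2 = y1 - Y in [-4.7, 0]
    if y1_lo >= y1_hi:
        return 0.0
    val, _ = integrate.tplquad(
        lambda k2, k1, y1: f(y1, y1 - Y, k1, k2),
        y1_lo, y1_hi,             # y1 range
        KT_MIN, kt_max,           # k1 range
        KT_MIN, kt_max)           # k2 range
    return val

# Toy weight falling with kT and mildly with Y, just to exercise the cuts
toy = lambda y1, y2, k1, k2: np.exp(-0.3 * (y1 - y2)) / (k1 * k2) ** 3
```

With these cuts the accessible rapidity interval is Y ≤ 9.4, and the y_1 integration range shrinks as Y approaches that endpoint, which the sketch reproduces.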
As discussed in the Introduction, to improve the stability of the perturbative series, which is particularly relevant in the BFKL framework, several methods have been devised for the optimal choice of the several energy scales entering the above expressions. We will use the following:
• the principle of minimal sensitivity (PMS) [10],
• fast apparent convergence (FAC) [11],
• the Brodsky-Lepage-Mackenzie (BLM) method [12].

PMS
We used an adaptation of the standard PMS method, as usual in our works, valid when more than one energy scale is present. The optimal choices for µ_R and s_0 are those values for which the physical observable under examination exhibits the minimal sensitivity under variation of both these scales.
We applied the method to the four representations given in Eqs. (9)-(12). As for the optimal choice of the third scale, the factorization scale µ_F, we considered the following two options: (i) let µ_F follow the same fate as the renormalization scale µ_R; (ii) fix µ_F at |k_{J_1}| in the vertex of jet 1 and at |k_{J_2}| in the vertex of jet 2.
This leads us to consider the following eight possibilities. Following Refs. [6,14,15], in our search for the optimal values of Y_0 and µ_R, we considered integer values for Y_0 in the range 0 ÷ 6 and values for µ_R given by integer multiples n_R of √(|k_{J_1}||k_{J_2}|), with n_R in the range 1 ÷ 9.
We looked for stationary points of the coefficients C_n in the Y_0 − n_R plane; the ratios C_n/C_m were then obtained indirectly by using the optimal results for the coefficients C_n and C_m. In particular, following Ref. [16], we studied the ratios R_{10}, R_{20}, R_{30}, R_{21} and R_{32}. We carried out this analysis for all the representations NLA_i, i = 1,...,8, listed above. Results are reported in Tables 1-5 and in Fig. 2. For the sake of brevity, we do not show in these Tables the optimal values of Y_0 and n_R, but simply note that they are quite sparse in the given intervals, with the more recurrent values for Y_0 in the range 2 ÷ 5 and for n_R in the range 2 ÷ 6.
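The stationary-point search can be sketched as a brute-force scan over the grid of scale choices (a crude stand-in for our actual procedure; the toy observable, with a flat point at (Y_0, n_R) = (3, 4), is invented for illustration):

```python
import numpy as np

def pms_stationary_point(observable, y0_values, nr_values):
    """Scan a grid in (Y0, nR) and return the interior point where the
    observable is least sensitive to the scales: smallest summed absolute
    finite difference with respect to its grid neighbours."""
    grid = np.array([[observable(y0, nr) for nr in nr_values]
                     for y0 in y0_values])
    best, best_sens = None, np.inf
    n_y, n_r = grid.shape
    for i in range(1, n_y - 1):
        for j in range(1, n_r - 1):
            sens = (abs(grid[i + 1, j] - grid[i - 1, j])
                    + abs(grid[i, j + 1] - grid[i, j - 1]))
            if sens < best_sens:
                best, best_sens = (y0_values[i], nr_values[j]), sens
    return best, grid

# Toy observable, stationary around (Y0, nR) = (3, 4)
toy = lambda y0, nr: 1.0 + 0.02 * (y0 - 3.0) ** 2 - 0.01 * (nr - 4.0) ** 2
(opt_y0, opt_nr), grid = pms_stationary_point(toy, range(0, 7), range(1, 10))
```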
We can see that the theoretical predictions overshoot the data at all values of Y in the cases of C_1/C_0 and C_2/C_0, and at the smaller Y's for C_3/C_0, while there is agreement, at least for some of the eight options, for the ratios C_2/C_1 and C_3/C_2.

FAC
This method consists in fixing the renormalization scale µ_R at the value for which the highest-order correction term is exactly zero. In our case, the application of this method requires an adaptation, since there is a second energy parameter to take care of, Y_0.
We applied it to the representation labeled NLA_1 and, for each Y_0 in a finite set of integer and half-integer values in the range 0 ÷ 6, we found the value of µ_R such that the highest-order correction term of a given coefficient C_n is exactly zero. Then, a stationary point was searched for by varying Y_0 in the given set. This method in general did not allow us to find clear regions of stability. Nevertheless, we report some of our results in Table 6, for the sake of comparison with the other methods.
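In code, the FAC condition at fixed Y_0 reduces to a one-dimensional root search for µ_R; the logarithmic shape of the toy NLO term below is purely hypothetical:

```python
import math
from scipy.optimize import brentq

def fac_scale(nlo_correction, mu_lo, mu_hi):
    """FAC: find mu_R at which the highest-order correction term vanishes.
    Assumes the term changes sign within [mu_lo, mu_hi]."""
    return brentq(nlo_correction, mu_lo, mu_hi)

# Hypothetical NLO term: negative at the natural scale, vanishing above it
toy_nlo = lambda mu: math.log(mu / 35.0) - 1.2
mu_fac = fac_scale(toy_nlo, 35.0, 500.0)   # root at 35 * e^1.2, roughly 116 GeV
```

The repetition of this search over the grid of Y_0 values, followed by a stationarity check in Y_0, mirrors the adaptation described above.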

BLM
This method consists in choosing the scale µ_R such that the β_0-dependence of a given observable vanishes completely.
Also in this case we considered only the representation labeled NLA_1, i.e. the exponentiated representation with µ_F = µ_R. We implemented the BLM procedure in a slightly different way from Ref. [18]. As a matter of fact, we realized that a clear-cut way to implement this procedure in the present case is not obvious. We therefore implemented two variants of the BLM method, dubbed (a) and (b), and give here all the relevant formulae, but refer to a separate publication for details [21].

The variant (a) is given by Eq. (15), with µ_R fixed at the corresponding value; the variant (b) is given by Eq. (17). In Eqs. (15) and (17) we have I = −2∫_0^1 dx ln(x)/(x² − x + 1) ≃ 2.3439, and ξ is a gauge parameter, fixed at zero, while c̃^(1)_i/c_i is the NLO impact factor defined as in Eqs. (9)-(12) with the terms proportional to β_0 removed. Results are reported in Tables 7 and 8 and in Fig. 3. We can see that, except for the ratio C_1/C_0, the agreement with the experimental data is very good, for both variants, at the larger values of Y.
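The BLM condition can be sketched in the same spirit as the FAC one: one isolates the coefficient multiplying β_0 in the NLO correction and solves for the scale at which it vanishes (the decomposition below is schematic, and the logarithmic shape of the toy coefficient is our assumption):

```python
import math
from scipy.optimize import brentq

N_C, N_F = 3, 5
BETA0 = 11 * N_C / 3 - 2 * N_F / 3   # first coefficient of the QCD beta-function

def blm_scale(beta0_coefficient, mu_lo, mu_hi):
    """BLM: choose mu_R so that the coefficient of beta0 in the NLO
    correction vanishes; the beta0 piece is then absorbed into the
    running of alpha_s."""
    return brentq(beta0_coefficient, mu_lo, mu_hi)

# Schematic beta0 coefficient: 2 ln(mu/Q) + const (shape assumed for illustration)
Q = 35.0
toy_beta0_coeff = lambda mu: 2.0 * math.log(mu / Q) - 5.0 / 3.0
mu_blm = blm_scale(toy_beta0_coeff, Q, 20.0 * Q)   # root at Q * e^(5/6)
```

The optimal BLM scale typically comes out larger than the "natural" kinematical scale, which is consistent with the large scales found in Ref. [18].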

Discussion
In this paper we have studied several, equivalent within the NLA, representations of the coefficients entering the definition of cross section, azimuthal decorrelations and ratios of azimuthal decorrelations, and have compared them with the corresponding CMS experimental data at the center-of-mass energy of 7 TeV.
We have considered three different procedures to optimize the perturbative series (PMS, FAC and BLM, the latter in two variants) and found that:
• the FAC method does not lead to any sensible result for most observables;
• the ratios C_2/C_1 and C_3/C_2 are quite well reproduced by basically all representations treated with the PMS method;
• the BLM method, implemented in the exponentiated representation, reproduces quite well all the ratios studied in this work in the region Y ≳ 6; we see, however, a sizeable difference in the theoretical prediction of the value of C_0 between the two variants (a) and (b); also in Ref. [18] an important effect on the cross section is reported when the BLM method is implemented together with an RG-improved kernel rather than with the standard non-RG-improved kernel.
We believe that the information gathered in this work can be of help in preparing new predictions for the same observables at the increased collision energy of the LHC after the LS1. In particular, it could be useful for estimates of theoretical uncertainties. Our numerical analysis shows that these uncertainties are rather large, in general due to the very large NLA BFKL corrections in the considered kinematical range. In particular, the plots in Fig. 2 demonstrate that, within the PMS method, results obtained using different representations of the NLA BFKL amplitude differ considerably from one another. We stress that this type of uncertainty is often not considered: in NLA BFKL analyses one typically uses just some prescribed representation of the NLA BFKL amplitude. We believe that one should be aware of this "representation" uncertainty, until some deeper insight into the physics of effects beyond NLA BFKL allows one to choose a definite representation of the NLA BFKL amplitude. Perhaps the BLM optimization procedure gives a hint in the right direction, because theoretical predictions derived with BLM [18] turned out to be in rather good agreement with the CMS data. Our own BLM calculations presented in Fig. 3 support this statement, though, comparing our results with the plots of Ref. [18], we see that our predictions lie somewhat beyond the range of the theoretical uncertainty accepted there. Most probably this difference is related to the above-mentioned "representation" uncertainty: indeed, our BLM amplitudes in Eqs. (15) and (17), in contrast with those of [18], do not include the product of the two NLO impact-factor corrections.
Meanwhile, it would also be useful to address, on the experimental side, some possible sources of mismatch with the way Mueller-Navelet jets are defined in theory, which are not easy to reveal in the comparison with theoretical predictions, since the latter are affected in their turn by systematic effects of the same size. We list a few of them below.
• In a data analysis defining the Y value for a given final state with two jets, the rapidity of one of the two jets could be so small, say |y_{J_i}| ≲ 2, that this jet is actually produced in the central region, rather than in one of the two forward regions. The longitudinal momentum fractions of the parent partons that generate a central jet are very small, and one can naturally expect sizable corrections to the vertex of this jet, since the collinear factorization approach used in the derivation of the jet vertex is not designed for the region of small x. We believe that a combined theoretical approach using collinear factorization for the forward jet and k_t-factorization for the central jet would be more appropriate in such kinematics.
• The other issue is related to the experimental event selection for the Mueller-Navelet jet analysis when more than two jets are detected in a single event. In particular, let us consider events with three jets in the final state, two of them forward in one direction (with large positive rapidities, say, y_1 and y_2 with y_1 > y_2), and the third forward in the other direction (with large negative rapidity, say, y_3). Traditionally, as in the current CMS analysis, such an event is selected as a single Mueller-Navelet jet event, where the two selected Mueller-Navelet jets are those having the largest interval in rapidity; in our example, these are the jets with rapidities y_1 and y_3, so that Y = y_1 − y_3. This selection method is convenient for the experimental analysis, but it does not match the definition of Mueller-Navelet jets in the theoretical NLA BFKL calculations. Examining the derivation of the NLA jet vertex [7], one can see that what is calculated in the theory is inclusive jet production in the forward region, with some prescribed values of rapidity and transverse momentum k, where possible additional parton radiation is attributed to the inclusive hadron system X. Returning to our example of an event with three detected jets, we see that in order to match the theory it should lead to the selection of two separate Mueller-Navelet jet events (i.e. it should be counted twice): a pair of Mueller-Navelet jets with rapidities y_1 and y_3 (then Y = y_1 − y_3) and another pair with rapidities y_2 and y_3 (then Y = y_2 − y_3). This mismatch between experimental event selection and theory appears in NLA BFKL and could be important due to the very large NLA BFKL corrections. The issue may be clarified either on the experimental side, by changing the Mueller-Navelet jet selection criterion, or on the theoretical side, which could require the generation of separate jet events with Monte Carlo methods.
• The use of symmetric cuts in the values of k_{J_i,min} maximizes the contribution of the Born term in C_0, which is present for back-to-back jets only and is expected to be large, therefore making the effect of the BFKL resummation less visible in all observables involving C_0. The use of asymmetric cuts can reduce the contribution of the Born term and enhance the effects of additional undetected hard gluon radiation, thus making the BFKL effects more clearly visible in comparison with descriptions based on the fixed-order DGLAP approach.
• The experimental determination of the Mueller-Navelet total cross section, C_0, would provide a yardstick which could help in choosing a definite NLA representation.

Table 6: C_0, C_1 and C_1/C_0 in the representation NLA_1 with the FAC method; columns three, four, six and seven give the optimal values for Y_0 and µ_R.

Table 7: C_0, C_1, C_2 and C_3 in the representation NLA_1 with the BLM method, in both variants (a) and (b).

Table 8: C_1/C_0, C_2/C_0, C_3/C_0, C_2/C_1, C_3/C_2 in the representation NLA_1 with the BLM method, in both variants (a) and (b).