Reconsideration of the QCD corrections to the $\eta_c$ decays into light hadrons using the principle of maximum conformality

In this paper, we analyze the $\eta_c$ decays into light hadrons at the next-to-leading order (NLO) in QCD by applying the principle of maximum conformality (PMC). The relativistic correction at the ${\cal{O}}(\alpha_s v^2)$-order level has been included in the discussion, which gives about a $10\%$ contribution to the ratio $R$. The PMC, which satisfies renormalization group invariance, is designed to provide a scale-fixed and scheme-independent prediction at any fixed order. To avoid confusion in treating the $n_f$-terms, we transform the usual $\overline{\rm MS}$ pQCD series into the one under the minimal momentum-space subtraction (mMOM) scheme. In comparison with the prediction under conventional scale setting, $R_{\rm{Conv,mMOM}-r}= \left(4.12^{+0.30}_{-0.28}\right)\times10^3$, after applying the PMC we obtain $R_{\rm PMC,mMOM-r}=\left(6.09^{+0.62}_{-0.55}\right) \times10^3$, where the errors are squared averages of those caused by $m_c$ and $\Lambda_{\rm mMOM}$. The PMC prediction agrees with the recent PDG value, $R^{\rm exp}=\left(6.3\pm0.5\right)\times10^3$, within errors. We thus conclude that the mismatch between the prediction under conventional scale setting and the data is due to an improper choice of renormalization scale, which can be cured by applying the PMC.

The heavy quark mass provides a natural hard scale for heavy quarkonium decays into light hadrons or photons. Calculations of their decay rates are considered among the earliest applications of pQCD. Charmonium has been a popular field since the discovery of the J/ψ resonance at SLAC and Brookhaven in 1974. There have been many successful experimental studies of charmonium, including precise measurements of its spectrum, lifetimes, and branching ratios; cf. the comprehensive review in the PDG [1]. At the same time, many theoretical efforts have been devoted to an appropriate description of charmonium. As an important breakthrough, a systematic pQCD analysis of heavy quarkonium inclusive annihilation and production was given within nonrelativistic QCD (NRQCD) in 1995 [2].
According to the NRQCD framework, the quarkonium decay rate can be factorized into a sum of products of short-distance coefficients and long-distance matrix elements (LDMEs). The short-distance coefficients are perturbatively calculable as a power series in $\alpha_s$. The LDMEs can be estimated by means of the velocity power counting rule, i.e. they can be classified in terms of the relative velocity between the constituent quarks of the heavy quarkonium. In particular, the color-singlet LDMEs can be directly related to the wavefunction (or the derivative of the wavefunction) at the origin, which can then be calculated via proper potential models.
The decay rates of the pseudoscalar quarkonium into light hadrons (LH) and into photons have been calculated at the next-to-leading order (NLO) level [3,4]. The relativistic corrections at the ${\cal O}(\alpha_s v^2)$-order have been given in Refs. [5,6]. Within the NRQCD factorization framework, the decay rates of the $\eta_c$ into light hadrons or photons can be expressed as
\begin{eqnarray}
\Gamma(\eta_c\to {\rm LH}) &=& \frac{F_1(^1S_0)}{m_c^2}\langle \eta_c|{\cal O}_1(^1S_0)|\eta_c\rangle + \frac{G_1(^1S_0)}{m_c^4}\langle \eta_c|{\cal P}_1(^1S_0)|\eta_c\rangle + \cdots, \\
\Gamma(\eta_c\to \gamma\gamma) &=& \frac{F_{\gamma\gamma}(^1S_0)}{m_c^2}\langle \eta_c|{\cal O}_1(^1S_0)|\eta_c\rangle + \frac{G_{\gamma\gamma}(^1S_0)}{m_c^4}\langle \eta_c|{\cal P}_1(^1S_0)|\eta_c\rangle + \cdots,
\end{eqnarray}
where $F_1$, $G_1$, $F_{\gamma\gamma}$ and $G_{\gamma\gamma}$ are short-distance coefficients. The symbol $\cdots$ stands for contributions from higher-dimensional LDMEs, which are at least of ${\cal O}(v^4 \Gamma)$. $m_c$ is the $c$-quark pole mass, and $v^2$ is the squared heavy quark or antiquark velocity in the meson rest frame. For the case of the $\eta_c$, it can be calculated via
\begin{equation}
\langle v^2\rangle = \frac{\langle \eta_c|{\cal P}_1(^1S_0)|\eta_c\rangle}{m_c^2\,\langle \eta_c|{\cal O}_1(^1S_0)|\eta_c\rangle}.
\end{equation}
To suppress the uncertainty from the LDMEs, one usually calculates the ratio
\begin{equation}
R = \frac{\Gamma(\eta_c\to {\rm LH})}{\Gamma(\eta_c\to \gamma\gamma)},
\end{equation}
whose perturbative series is expressed in terms of $a(\mu) = \alpha_s(\mu)/(4\pi)$ and its LO value $R_0(\mu) = \frac{81\pi^2 C_F}{2\alpha^2 N_C}\, a^2(\mu)$; here $\mu$ is an arbitrary renormalization scale, and $\beta_0 = 11 - \frac{2}{3} n_f$ ($n_f$ being the number of active flavors) is the leading coefficient of the renormalization group $\beta$-function. It is noted that the factorization scale dependence is absent at this level, which remains the case even at the NNLO level [8]; we are thus free of the factorization scale-setting problem.
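As a quick numerical sketch of the quantities just defined, the following Python snippet evaluates $\beta_0$ and the LO ratio $R_0(\mu)$; the values of $\alpha_s$ and $\alpha$ used here are illustrative assumptions, not the fitted inputs of this paper.

```python
import math

# Numerical sketch of the LO quantities defined above. The alpha_s and alpha
# values below are illustrative assumptions, not this paper's inputs.
CF, NC = 4.0 / 3.0, 3.0      # SU(3) color factors

def beta0(nf):
    """Leading beta-function coefficient, beta_0 = 11 - (2/3) nf."""
    return 11.0 - 2.0 / 3.0 * nf

def R0(alpha_s, alpha=1.0 / 132.6):
    """LO ratio R_0 = [81 pi^2 CF / (2 alpha^2 NC)] a^2, with a = alpha_s/(4 pi)."""
    a = alpha_s / (4.0 * math.pi)
    return 81.0 * math.pi**2 * CF / (2.0 * alpha**2 * NC) * a**2

print(beta0(4))   # 25/3 ≈ 8.33 for nf = 4 active flavors
print(R0(0.3))    # a few x 10^3 for alpha_s ~ 0.3
```

Even at this crude LO level, $R$ comes out at the $10^3$ level, the same ballpark as the measured $R^{\rm exp}$ quoted below.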
It is conventional to take the renormalization scale as the typical momentum flow of the process, or as the scale that eliminates the large logarithms of the pQCD series; we call this the conventional scale-setting approach. As will be shown later, such a simple treatment of the scale introduces a large scale uncertainty and makes low-order predictions unreliable. At present, the $\eta_c$ decays into light hadrons or photons have been calculated up to the NNLO level, which still shows poor pQCD convergence [8][9][10]. Thus simply pursuing higher and higher order terms may not be the solution for such processes. In fact, even if one obtains a small scale uncertainty for a global quantity such as the total cross section or the decay rate at a certain fixed order, it is due to cancelations among different orders; the scale uncertainty of each individual order could still be very large. Two such examples, for Higgs boson decay and for the hadronic production of the Higgs boson, can be found in Refs. [11,12]. When one applies conventional scale setting, the renormalization scheme- and initial renormalization scale-dependence is introduced at any fixed order. Thus a proper scale-setting approach is important for fixed-order predictions.
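The strong $\mu$-dependence described above can be illustrated with a one-loop running coupling; the $\Lambda$ and $m_c$ values below are assumptions chosen for illustration, not the mMOM inputs actually used in this paper.

```python
import math

# Illustration of how strongly a low-order prediction can depend on the guessed
# renormalization scale. Lambda and m_c below are assumed illustrative values,
# not the mMOM inputs used in this paper.
def alpha_s(mu, Lambda=0.33, nf=4):
    """One-loop running: alpha_s(mu) = 4 pi / (beta_0 ln(mu^2 / Lambda^2))."""
    beta0 = 11.0 - 2.0 / 3.0 * nf
    return 4.0 * math.pi / (beta0 * math.log(mu**2 / Lambda**2))

mc = 1.5  # GeV (assumed charm mass)
for mu in (mc, 2 * mc, 4 * mc):
    # R is proportional to alpha_s^2 at LO, so the scale spread is amplified
    ratio = alpha_s(mu)**2 / alpha_s(mc)**2
    print(f"mu = {mu:3.1f} GeV: alpha_s = {alpha_s(mu):.3f}, "
          f"alpha_s^2 relative to mu = m_c: {ratio:.2f}")
```

Because $R \propto \alpha_s^2$ at leading order, varying $\mu$ between $m_c$ and $4m_c$ changes the LO estimate by roughly a factor of three, in line with the scale spread reported for $R$ later in the paper.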
We should point out that those predictions differ from the value derived from the recent experimental measurements, $R^{\rm exp} = (6.3 \pm 0.5)\times10^3$ [1]. The BLM prediction given in Ref. [13] is also questionable. It is thus interesting to investigate whether an improved pQCD analysis could explain the new $R^{\rm exp}$; that is the purpose of this paper. In particular, it is important to determine whether the mismatch between the data and the pQCD prediction is caused by an improper choice of scale or by some other reason.
A novel scale-setting approach, the Principle of Maximum Conformality (PMC) [18-21], has been developed in recent years. The PMC satisfies renormalization group invariance [22], and it reduces in the $N_C \to 0$ Abelian limit [23] to the standard Gell-Mann-Low method [24]. A more convergent pQCD series without factorial renormalon divergence can thereby be obtained. The PMC scales are physical in the sense that they reflect the virtuality of the gluon propagators at a given order and set the effective number of active flavors $n_f$. The resulting resummed pQCD expression thus determines the relevant "physical" scales of any physical observable, thereby providing a solution to the renormalization scale-setting problem. Because all the scheme-dependent $\{\beta_i\}$-terms of the pQCD series are resummed into the running coupling with the help of the renormalization group equation, the PMC predictions are renormalization scheme independent at every order. Such scheme independence can be demonstrated by using the commensurate scale relations [25] among different observables. A number of PMC applications have been summarized in the reviews [26-28]. The PMC provides the underlying principle for the BLM scale-setting, and in the following we adopt the PMC to set the renormalization scale.
Up to the NLO level, the expression for $R$ can be rewritten as
\begin{equation}
R = r_{1,0}\, a^2(\mu) + \left[ r_{2,0} + 2\beta_0\, r_{2,1} \right] a^3(\mu) + \cdots,
\end{equation}
where the $\overline{\rm MS}$-coefficients $r_{i,j}$ can be read from Eq.(4), the $r_{i,0}$ being the conformal ones. Following the standard PMC procedures, we get
\begin{equation}
R = r_{1,0}\, a^2(Q_1) + r_{2,0}\, a^3(Q_2) + \cdots,
\end{equation}
where $\ln Q_1^2/\mu^2 = -r_{2,1}/r_{1,0}$. We have set the second PMC scale $Q_2 = Q_1$; its exact value can only be determined from the NNLO term, which is not available at present. If one directly uses the $\overline{\rm MS}$-scheme expression (4), a small PMC scale $Q_1 = 0.86$ GeV or $0.78$ GeV is obtained for the prediction with or without the relativistic correction, respectively. Such a scale is already close to the low-energy region, which explains why a large $R_{\rm BLM}$ was obtained in Ref. [13]. [At the NLO level, the BLM prediction is the same as the PMC one if all $n_f$-terms pertain to the $\alpha_s$-running.] In this case, a reliable prediction can only be obtained by adopting a certain low-energy $\alpha_s$ model, which however introduces extra model dependence into the prediction.
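The scale-fixing mechanism can be demonstrated with a toy NLO-like series. The coefficients below are hypothetical, and for compactness the toy series starts at order $a$ rather than $a^2$; absorbing the $\beta_0$-term into the coupling then fixes $\ln Q_1^2/\mu^2 = -r_{2,1}/r_{1,0}$ and removes the $\mu$-dependence exactly.

```python
import math

beta0 = 11.0 - 2.0 / 3.0 * 4   # nf = 4

def a(mu, Lambda=0.33):
    """One-loop a(mu) = alpha_s/(4 pi) = 1 / (beta0 ln(mu^2/Lambda^2))."""
    return 1.0 / (beta0 * math.log(mu**2 / Lambda**2))

# Hypothetical coefficients of a toy series rho = r1 a + (r2 + beta0 r21) a^2,
# with r21 known at a reference scale mu0 and run to mu via the RG equation.
r1, r2 = 1.0, 2.0
mu0, r21_at_mu0 = 3.0, 0.5

def r21(mu):
    return r21_at_mu0 + r1 * math.log(mu**2 / mu0**2)

def rho_conv(mu):
    """Truncated series evaluated at a guessed scale mu (conventional setting)."""
    return r1 * a(mu) + (r2 + beta0 * r21(mu)) * a(mu)**2

def rho_pmc(mu):
    """PMC: absorb the beta0 r21 term into the coupling, a(mu) -> a(Q1)."""
    Q1 = mu * math.exp(-r21(mu) / (2.0 * r1))  # ln Q1^2/mu^2 = -r21/r1
    return r1 * a(Q1) + r2 * a(Q1)**2

for mu in (1.5, 3.0, 6.0):
    print(mu, rho_conv(mu), rho_pmc(mu))
# rho_conv drifts with mu, while rho_pmc is the same for every choice of mu
```

Here the $n_f$-dependent coefficient changes with $\mu$ exactly as dictated by the renormalization group, so the conventionally truncated value drifts with the guessed scale, while the PMC value, with the $\beta_0$-term absorbed into $a(Q_1)$, is strictly $\mu$-independent.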
Following the idea of the PMC, only those $\{\beta_i\}$-terms that pertain to the renormalization of the running coupling should be absorbed into the running coupling. For processes involving the three-gluon or four-gluon vertex, the scale-setting problem is more involved [29]. Thus, to avoid such ambiguity in applying the PMC to $R$, similar to the case of the QCD BFKL Pomeron [30-32], we first transform the results from the $\overline{\rm MS}$ scheme to the momentum-space subtraction scheme (MOM scheme) [33,34] and then apply the PMC.
As a cross-check, by using the same input parameters, we obtain the same $\overline{\rm MS}$-scheme prediction for $R$ under conventional scale setting as that of Ref. [13]. For the reasons listed above, we adopt the mMOM scheme in the following discussions.
We present the PMC prediction for $R$ at the NLO level versus the initial choice of $\mu$ in Fig. 1, under the mMOM scheme; both the results before and after applying the PMC are shown. Under conventional scale setting, $R$ shows a strong scale dependence, decreasing with increasing $\mu$. More explicitly, by varying $\mu$ from $m_c$ to $4m_c$, the ratio $R$ changes from $\sim 9\times10^3$ to $\sim 3\times10^3$. After applying the PMC, the PMC scale $Q'_1$ is the same for any choice of $\mu$, leading to a scale-independent prediction. The relativistic correction brings an extra $\sim 2\%$ contribution to the conventional prediction and $\sim 14\%$ to the PMC prediction. Thus the relativistic correction is important, especially for the PMC prediction. Fig. 1 shows that if one chooses $\mu = Q'_1$, the values of $R$ under conventional scale setting equal the PMC ones. After applying the PMC, due to the elimination of divergent renormalon terms of the form $n!\,\beta_0^n \alpha_s^n$, the pQCD series becomes more convergent. We present the LO and NLO terms of $R$ before and after applying the PMC in Table II. We define the parameter $\kappa = R_{\rm NLO}/R_{\rm LO}$ to show the relative importance of the NLO and LO terms. Table II confirms that a better pQCD convergence is achieved by applying the PMC. A larger $\kappa$ together with a larger scale uncertainty of each term indicates that one cannot obtain the exact value of each term by using a guessed scale, as suggested by conventional scale setting. Analyzing the pQCD series in detail, we observe that under conventional scale setting the scale errors of each term are rather large, and a possibly small net scale error of the pQCD approximant is due to correlations/cancelations among different orders.
On the other hand, since the running of $\alpha_s$ at each order has its own $\{\beta_i\}$-series governed by the renormalization group equation, the $\beta$-pattern of the pQCD series at each order is a superposition of all the $\{\beta_i\}$-terms that govern the evolution of the lower-order $\alpha_s$ contributions at that particular order. Thus, inversely, the PMC scale at each order is determined by the known $\beta$-pattern, and the individual terms of $R$ at each order are well determined. We present the theoretical uncertainties for the conventional and the PMC scale settings in Fig. 2, in which the errors are squared averages of those from the choices of the $c$-quark pole mass $m_c$ and the asymptotic scale $\Lambda_{\rm mMOM}$. As a comparison, the experimental value of Ref. [1] is also presented. Fig. 2 shows that under conventional scale setting the errors caused by $m_c$ and $\Lambda_{\rm mMOM}$ are smaller than in the PMC case, but they are diluted by the quite large scale uncertainty. Our results are
\begin{eqnarray}
R_{\rm Conv,mMOM-r} &=& 4.12^{+0.12+0.28}_{-0.11-0.26} \times 10^3, \\
R_{\rm PMC,mMOM} &=& 7.09^{+0.32+0.75}_{-0.29-0.66} \times 10^3, \\
R_{\rm PMC,mMOM-r} &=& 6.09^{+0.21+0.58}_{-0.19-0.52} \times 10^3,
\end{eqnarray}
where the first error is for $m_c \in [1.46, 1.52]$ GeV and the second is caused by taking $\Lambda_{\rm mMOM}$ to the values listed in Table I. Fig. 2 shows that the conventional prediction of $R$, with or without the relativistic correction, deviates from the data by about $3.6\sigma$. This discrepancy becomes even larger when the NNLO term is included [8]; the authors there even doubted the validity of NRQCD for this particular observable. However, Fig. 2 shows that after applying the PMC, the pQCD prediction and the data are consistent with each other within reasonable errors even at the NLO level.
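The symmetric errors quoted in the abstract follow from combining the two error sources above in quadrature (the "squared average"), which can be checked directly:

```python
import math

# Combine the m_c and Lambda_mMOM errors listed above in quadrature,
# reproducing the symmetric errors quoted in the abstract.
def quad(e_mc, e_lambda):
    return math.sqrt(e_mc**2 + e_lambda**2)

# R_PMC,mMOM-r = 6.09 (+0.21 +0.58) (-0.19 -0.52) x 10^3
print(f"+{quad(0.21, 0.58):.2f}")  # +0.62, as quoted in the abstract
print(f"-{quad(0.19, 0.52):.2f}")  # -0.55
# R_Conv,mMOM-r = 4.12 (+0.12 +0.28) (-0.11 -0.26) x 10^3
print(f"+{quad(0.12, 0.28):.2f}")  # +0.30
print(f"-{quad(0.11, 0.26):.2f}")  # -0.28
```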
This indicates that the large discrepancy between the data and the pQCD prediction is caused by an improper choice of renormalization scale; a simply guessed scale may lead to a false prediction or a false conclusion. Thus a proper setting of the renormalization scale is important for low-order predictions.
In summary, in this paper we have studied the ratio of the $\eta_c(1S)$ decay rate into light hadrons over its decay rate into photons by applying the PMC. The PMC provides a systematic way to set the optimal renormalization scale for high-energy processes, and its prediction is free of the initial renormalization scale dependence at any fixed order. A more convergent pQCD series is achieved, and the residual scale dependence due to unknown higher-order terms is highly suppressed. Fig. 2 shows that the large discrepancy between the data and the pQCD prediction obtained with a guessed scale under conventional scale setting can be cured by applying the PMC. The PMC, with its solid physical and theoretical background, greatly improves the precision of standard model tests, and it can be applied to a wide variety of perturbatively calculable processes.