1 Preface by the Editors

The current developments in beyond-the-Standard-Model (BSM) phenomenology point to an ever greater use of Effective Field Theories (EFTs). With no concrete hints of a forthcoming discovery of on-shell new physics (NP), we see no reason that this trend should change any time soon. In fact, even a discovery of new high-energy resonances would call for the use of EFTs to study many of their observable effects. The role of computer tools is central to the successful use of EFTs in BSM physics: simply put, the sheer volume of repetitive computations is all but impossible to handle on a case-by-case basis without them. For these reasons, we gathered creators and developers of EFT tools for another workshop three years after the first SMEFT-Tools workshop [1].

The SMEFT-Tools 2022 workshop received contributions from a large part of the EFT theory community, especially as it pertains to the computer tools of the field. As a result, this review is a comprehensive, if not quite complete, report on the current status of the tools available in the field. Additionally, several speakers at the workshop presented new results on more formal theory aspects. These results frame the current and future developments of EFT tools and are crucial to the ever-growing capabilities of the tools. However, the theory developments included in this report are merely a sample of what is being undertaken in the field as a whole; the field is simply too active to include all, or even most, of the developments here.

The introductory section provides some context and motivation for the use of EFTs and discusses some of the trends in EFTs used for BSM physics. We have grouped the other contributions into two main sections: On the one hand, Sect. 2 details computer tools for the study of ultraviolet (UV) models using EFTs. This section describes tools for matching UV models to EFTs, automating the renormalization-group (RG) evolution of EFT coefficients, generating EFT operator bases, and a proposal for a unified format for the storage of EFT matching results. On the other hand, Sect. 3 describes computer tools necessary for the phenomenological study of EFTs. These tools are equally invaluable for bottom-up and top-down analyses. This section contains four different tools for the key task of performing global fits to experimental data, along with a code for automatically deriving the Feynman rules of the Standard Model effective field theory (SMEFT). In both sections, contributions covering recent theory developments relevant for future implementations or practical applications of EFT tools are also included.

2 Introduction and motivation

José Santiago and Peter Stoffer

EFTs have been a basic tool in particle physics for many years. In most cases EFTs were used in the context of well-defined, usually renormalizable, models, either because they were the only way to compute certain observables (for example, due to the strong coupling of QCD at low energies) or because their use greatly simplified the calculation of interest (gluon-fusion Higgs production at a high perturbative order in the infinite-mass limit is a clear example). Their application to the study of physics beyond the Standard Model (SM), while already present in the past, has experienced an exponential increase in the last decade. The reasons for this are two-fold: first, the LHC and other experiments are producing increasingly better limits on the mass of new particles, searching in a multitude of different channels, which seems to clearly indicate the presence of a mass gap between the scale of new physics and the energies at which most experimental observables are measured; second, and this is especially relevant for this workshop, the last few years have seen the appearance of a plethora of new computer tools that simplify, and in many cases fully automate, the tedious calculations needed to apply EFTs to new-physics searches.

2.1 Connecting theory and experiment via EFTs

The problem of deriving the implications of experimental data for models of new physics is highly non-trivial. Each of the vast number of experimentally measured observables has to be computed, via complicated, sometimes multi-loop, calculations, for each particular model of new physics. These difficult calculations have to be repeated for every experimental observable and every model, with the added complication that, despite the very large number of new-physics models developed by theorists, we are not guaranteed that the true description of Nature falls into one of these models.

EFTs simplify the problem of obtaining the phenomenological implications of experimental data for new models by splitting the calculation into two (mostly independent) steps. In the first one, the bottom-up approach, the experimental observables are computed, to the required order in perturbation theory, in terms of the Wilson coefficients (WCs) of the corresponding effective Lagrangian. This process can be performed with no mention of any new-physics model and therefore represents a mostly model-independent parametrization of experimental data in the form of global fits (or rather a global likelihood), see Sect. 3. In the second step, the top-down approach, the WCs of the effective Lagrangian are computed in terms of the couplings and masses of specific UV models that complete the EFT at high energies. This calculation, called matching, has to be done for every model of new physics (but no longer for every observable); thanks to recently developed tools, it can be fully automated (see Sect. 2). When the bottom-up and top-down approaches are combined, one can obtain the phenomenological implications of any experimental observable in any UV model and, thanks to existing computer tools, in a mostly automated way.
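Schematically, and with operator and flavor indices suppressed, the combination of the two steps can be summarized as (a purely illustrative notation, not tied to any specific tool)

$$\begin{aligned} {\mathcal {O}}_{\textrm{exp}} = {\mathcal {O}}\big (C_k(\mu _{\textrm{low}})\big ), \qquad C_k(\mu _{\textrm{low}}) = \sum _j U_{kj}(\mu _{\textrm{low}},\Lambda )\, C_j(\Lambda ), \qquad C_j(\Lambda ) = F_j\big (g_{\textrm{UV}}, M_{\textrm{UV}}\big ), \end{aligned}$$

where the first relation is the bottom-up computation of observables in terms of WCs, the evolution matrix U encodes the RG running between the two scales, and the functions \(F_j\) are the matching conditions of the top-down step.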

2.2 The SMEFT and the LEFT

The absence of evidence for physics beyond the SM in direct LHC searches suggests that new particles are either very weakly coupled [2] or much heavier than the electroweak scale. In the latter scenario, their effects at energies below the scale of new physics can be described by an EFT. Depending on the assumption about the nature of the Higgs particle, this is either the Standard Model effective field theory (SMEFT) [3, 4] or the Higgs effective field theory (HEFT) [5, 6]. In particular, the SMEFT is the most general EFT invariant under the SM gauge symmetry, \(SU(3)_c\times SU(2)_L\times U(1)_Y\), involving only SM particles with the Higgs field taken as an \(SU(2)_L\) doublet.

The SMEFT Lagrangian up to dimension-six operators is given by

$$\begin{aligned} {{\mathcal {L}}}_{\textrm{SMEFT}}^{d\le 6}&= {{\mathcal {L}}}_{\textrm{SM}} + \sum _{k} C_k^{(5)} Q_k^{(5)}+ \sum _{k} C_k^{(6)}Q_k^{(6)}, \end{aligned}$$
(1.1)

with \({{\mathcal {L}}}_{\textrm{SM}}\) being the SM Lagrangian. There is only one term at dimension five, corresponding to the Weinberg operator [7]. This operator violates lepton number by two units and yields Majorana masses for the neutrinos after electroweak symmetry breaking. At dimension six, there are 59 terms that preserve baryon number and another 5 that violate baryon and lepton numbers by one unit. These are commonly presented in the so-called Warsaw basis [4]. The complete set of RG equations for the dimension-six SMEFT in the Warsaw basis has been calculated in [8,9,10,11]. As we describe in the sections below, these advances, together with simultaneous theoretical and computational developments towards the automation of one-loop matching calculations, pave the way to the systematic use of EFT methods in the analysis of NP models.

For processes below the electroweak scale, another EFT should be used, wherein the heavy SM particles, i.e., the top quark, the Higgs scalar, and the heavy gauge bosons, are integrated out. This low-energy effective field theory (LEFT) is a gauge theory invariant only under the unbroken SM groups \(SU(3)_c \times U(1)_{\textrm{em}}\), i.e., QCD and QED augmented by a complete set of effective operators. If matched to the SM at the electroweak scale, it corresponds to the Fermi theory of weak interaction [12], but when all operators invariant under the unbroken gauge groups are included, it also describes the low-energy effects of arbitrary heavy physics beyond the SM.

The LEFT is defined by the Lagrangian

$$\begin{aligned} {{\mathcal {L}}}_{\textrm{LEFT}}&= {{\mathcal {L}}}_{\textrm{QCD+QED}} + \sum _{d\ge 3} \sum _{k} L_k^{(d)} O_k^{(d)} \, , \end{aligned}$$
(1.2)

where the QCD and QED Lagrangian is given by

$$\begin{aligned} {{\mathcal {L}}}_{\textrm{QCD+QED}}&= -\frac{1}{4} G_{\mu \nu }^{A} G^{A\,\mu \nu } - \frac{1}{4} F_{\mu \nu } F^{\mu \nu } + \sum _{\psi =u,d,e,\nu _L} {{\overline{\psi }}}\, i \gamma ^\mu D_\mu \, \psi \nonumber \\&\quad - \sum _{\psi =u,d,e} \left( {{\overline{\psi }}}_R\, M_\psi \, \psi _L + \text {h.c.} \right) . \end{aligned}$$
(1.3)

The additional operators are the Majorana-neutrino mass terms at dimension three, as well as operators at dimension five and above. At dimension five, there are photonic dipole operators for all the fermions (including a lepton–number-violating neutrino dipole operator) as well as gluonic dipole operators for the up- and down-type quarks. At dimension six, there are the CP-even and CP-odd three-gluon operators and a large number of four-fermion operators. The entire list of operators up to dimension six can be found in [13], including operators that violate baryon and lepton number.

This theory has been extensively studied in the context of B physics. The operator basis relevant for B-meson decay and mixing has been constructed in [14]. The complete LEFT operator basis up to dimension six in the power counting has been derived in [13], where the tree-level matching to the dimension-six SMEFT above the weak scale was also provided. By now, the LEFT operator basis is known up to dimension nine [15,16,17]. Recently, the tree-level matching to the SMEFT has been extended to dimension eight in the SMEFT power counting [18]. Partial results for lepton-flavor-violating operators were given already in [19].

The complete one-loop LEFT RG equations were derived in [20]. Partial results for the RG equations were known before and have been studied to higher loop orders [14, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. Within the SMEFT/LEFT framework, the one-loop RG equations at the high scale [8,9,10], the tree-level matching [13], and the RG equations below the weak scale [20] allow one to resum the leading logarithms and to describe the indirect low-energy effects of heavy physics beyond the SM within one unified framework. The RG and matching equations have been implemented in several software tools, many of which were presented at the SMEFT-Tools workshops. Consistent EFT analyses at leading-log accuracy that combine constraints from experiments at very different energy scales are becoming standard.
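For orientation, at leading-log accuracy the RG evolution amounts to the following, written schematically for a generic set of coefficients \(C_i\) and a constant anomalous dimension matrix \(\gamma \):

$$\begin{aligned} \mu \frac{d C_i}{d\mu } = \frac{\gamma _{ij}}{16\pi ^2}\, C_j \quad \Longrightarrow \quad C_i(\mu ) \simeq \left[ \delta _{ij} + \frac{\gamma _{ij}}{16\pi ^2}\, \ln \frac{\mu }{\Lambda }\right] C_j(\Lambda ), \end{aligned}$$

while the tools mentioned above resum these logarithms by solving the coupled equations numerically.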

For certain high-precision observables at low energies it is desirable to extend the analysis beyond leading logarithms. Steps in this direction have been taken, e.g., in [33, 38, 45]. Partial results for the matching at the weak scale at one loop were derived in the context of B physics in [35, 46]. The complete one-loop matching between the SMEFT and the LEFT at dimension six was calculated in [47]. It can be used for fixed-order calculations at one-loop accuracy in cases where the logs are not large, and it is an ingredient in next-to-leading-log analyses within a resummed framework. Several tools are being developed that automate the one-loop matching between the EFT framework and UV models for new physics [48, 49].

At energies as low as the hadronic scale, additional complications appear due to the non-perturbative nature of QCD. In these low-energy processes, one should not work with perturbative quark and gluon degrees of freedom but rather perform either direct non-perturbative calculations of hadronic matrix elements of effective operators or switch to another effective theory in terms of hadronic degrees of freedom, i.e., chiral perturbation theory (\(\chi \)PT) [50,51,52]. In [53], the matching of semileptonic LEFT operators to \(\chi \)PT has been discussed, which can be obtained within standard \(\chi \)PT augmented by tensor sources [54]. The chiral realization of four-quark operators was studied in [55], while [56] analyzed C- and CP-odd LEFT operators up to dimension 8. If lattice QCD is employed to deal with the non-perturbative effects at low energies, one faces the problem that the EFT framework requires matrix elements of dimensionally renormalized operators. This necessitates another matching calculation to a scheme amenable to lattice computations. This matching has to be performed at a scale of a few GeV, which is already accessible to lattice computations but at the same time sufficiently high that perturbation theory can be assumed to work reasonably well. Traditionally, these matching calculations are based on regularization-independent momentum-subtraction (RI-MOM) schemes [34, 57,58,59,60], whereas in recent years the gradient flow [61, 62] has received attention [63,64,65,66,67].

2.3 Going beyond

The great sensitivity achieved in searches for CP violation, rare meson decays, magnetic and electric dipole moments, and lepton-flavor-violating processes requires improved theoretical precision within EFTs. The need to include higher-order corrections is twofold. On the one hand, the inclusion of higher-order corrections, especially in QCD, allows one to better assess the uncertainties of the theoretical calculation. On the other hand, some new-physics effects are only generated once higher-loop effects have been accounted for in certain UV completions, so including them naturally leads to better constraints on the underlying theory.

In fact, it is often the case that the leading effects of new physics are due to loop-level processes. The last decade has seen results for one-loop running in the SMEFT [8,9,10,11, 68] and the LEFT [20] and for the one-loop SMEFT-to-LEFT matching [47]. Likewise, as we highlight in this manuscript, there has been substantial recent progress in connecting NP models to their EFTs. Going beyond the leading-logarithm effects requires a systematic treatment of RG effects in the EFTs because of the scheme dependence of the anomalous dimension matrix and of the matching coefficients, arising, for instance, from the chosen prescription for \(\gamma _5\) in d dimensions [69,70,71,72,73,74,75,76,77,78] and from the definition of evanescent operators [22, 47, 79,80,81,82]. In this context, consistent calculations across different EFTs and bases have prompted a new look at the role of evanescent contributions [49, 81]. With a view towards systematic multi-loop computations within EFTs, there has also been renewed interest in the proper treatment of \( \gamma _5 \) [78, 83, 84], a notorious stumbling block of dimensional regularization.

Higher-order anomalous dimensions have been calculated for subsets of dimension-six operators [14, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45], but due to the technically demanding nature of these calculations the complete matrix is not known. Given the large number of operators of the SMEFT and LEFT, it may be more convenient to consider first a generic EFT with an arbitrary number of real scalars and left-handed fermions and to compute the anomalous dimensions and RG equations in such a theory, invariant under a generic gauge group. Results for the NLO running of the SMEFT or LEFT WCs can then be extracted in a second step by specifying the field content and the gauge group.

There has also been recent progress in expanding the EFT formulation beyond dimension-six operators, sparking the formulation of geometric EFTs [85, 86] and the determination of higher-dimension bases [15, 87, 88], as well as the counting of the EFT operators through the use of Hilbert series [89]. Recent work has also started constraining the effects from higher-dimensional operators through the use of unitarity bounds, e.g., [90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106].

3 Effective field theory matching and running

Despite the usefulness of the EFT approach, the interpretation of data in terms of NP models requires a direct connection between those models and their EFT description. This typically involves the calculation of sequential matching steps at the relevant mass thresholds, and of the RG evolution between these thresholds and the scale of the observables. In recent years, many tools that (at least partially) automate these calculations have been developed.

In the absence of light particles beyond those in the SM, the necessary calculations for RG running and matching below the NP mass threshold are known up to dimension-six operators [8,9,10,11, 13, 20, 35, 47]. These results have been implemented into several computer tools including DsixTools [42, 107] (which we describe in Sect. 2.1), wilson [108], and RGESolver [109]. The SMEFT RG evolution has also been incorporated [110] into the MadGraph Monte Carlo generator [111]. As far as tree-level matching is concerned, the Python package MatchingTools [112] allows one to perform a fully automated matching computation for arbitrary heavy particles and gauge groups. Furthermore, the matching code CoDEx (see Sect. 2.2) implements formulae based on path-integral methods [113,114,115,116] to automate the matching of some NP models onto the dimension-six SMEFT. Further matching tools that rely on functional methods are SuperTracer [117] and STrEAM [118].

Although it might be tempting to think of the target EFT as the SMEFT, many realistic BSM constructions contain several energy scales, calling for intermediate EFTs, or feature additional light states, such as axion-like or dark-matter particles, thus demanding extensions of the SMEFT (see, e.g., [2, 119,120,121,122]). Furthermore, some phenomenological studies require extending EFT calculations beyond dimension-six operators (see, e.g., [104, 123,124,125,126] for recent literature examples). Reflecting this, a new generation of tools aims to solve the more general problem of completely automating the one-loop matching and RG evolution of arbitrary weakly coupled models. The most notable examples in this direction are matchmakereft and matchete, described in Sects. 2.4 and 2.3, respectively.

Additional developments that assist matching calculations are also described in this section: the computer tool Sym2Int (see Sect. 2.5),Footnote 1 which automates the construction of EFT operator bases, and the MatchingDB format (see Sect. 2.7), which aims at standardizing the storage of matching results.

3.1 DsixTools: the effective field theory toolkit

figure a

DsixTools [42, 107] is a Mathematica package for the matching and renormalization-group evolution from the NP scale down to the scale of low-energy observables. The current version of DsixTools fully integrates the SMEFT and the LEFT, treating both theories on an equal footing. It allows the user to perform the full one-loop renormalization-group evolution of the WCs in the SMEFT and in the LEFT (with SM \(\beta \) functions up to 5-loop order in QCD), and the full one-loop SMEFT-LEFT matching at the electroweak scale. Therefore, the user can start with some numerical values for the SMEFT WCs at the high-energy scale \(\Lambda _{\textrm{UV}}\), in principle obtained after matching to a specific NP model, and translate them into numerical values for the LEFT WCs at the low-energy scale \(\Lambda _{\textrm{IR}}\), where some observables of interest can be computed. This is achieved by adopting the following conventions and implementing the following results from the recent literature:

  • The Warsaw basis [4] for the SMEFT, for which full one-loop RG equations [8,9,10,11, 130] are known.

  • Full one-loop SMEFT-LEFT matching [13, 47].

  • The San Diego basis [13] for the LEFT, for which full one-loop RG equations [20] are known.

All these results can be used in a visually accessible and operationally convenient way thanks to DsixTools. In addition to the numerical running and matching routines, it also includes several functions for analytical applications, as well as user-friendly SMEFT/LEFT dictionary tools. Since version 2.1, DsixTools also admits input obtained with matchmakereft, thus extending its capabilities.

The simplest way to download and install DsixTools is to run the following command in a Mathematica sessionFootnote 2

figure b

This will download and install DsixTools, activate the documentation and load the package. Alternatively, DsixTools can also be installed manually. Finally, DsixTools can also be loaded (once installed) by running the usual

figure c
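As a minimal sketch of the last step (assuming the standard Wolfram Language package idiom; the exact installer command is the one shown in the figure above and on the package website [131]):

    (* load DsixTools once it is installed; the context name DsixTools` is assumed *)
    << DsixTools`    (* equivalent to Needs["DsixTools`"] *)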

3.1.1 What DsixTools can do for you

For a full and up-to-date list of all the tools provided by DsixTools we refer to the manual on the package website [131]. We will now concentrate on some useful features that illustrate what DsixTools can do for you in practice. A demo notebook with these and other examples of use is also provided at [132].

User-friendly SMEFT & LEFT information. DsixTools contains several routines and functions that allow one to use the tool as a SMEFT/LEFT dictionary. For instance, one can load DsixTools and execute the command

figure d

to print many details about the \(C_{\varphi \ell }^{(1)}\) SMEFT WC. One may learn the definition of the associated \(Q_{\varphi \ell }^{(1)}\) operator, its dimensionality and type (2-fermion in this case). In the case of WCs carrying flavor indices, such as this one, the command also prints information about the possible symmetries under exchange of indices, the number of independent coefficients, and the relations (due to Hermiticity, for example) among them. This information is displayed in a user-friendly way. Similarly, with

figure e

one would get the same information about the LEFT WC \(L_G\). Finally, with the functions

figure f

and the analogous ones in the LEFT, DsixTools shows a visual grid or a dropdown menu with all the WCs of the theory. The user can now click on any of them to run the ObjectInfo function on the selected WC and obtain all its properties.
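A hedged sketch of these dictionary functions (the WC symbol names used here are assumptions for illustration; the grid and dropdown functions list the actual names):

    (* print information on a SMEFT and a LEFT coefficient; symbol names are assumed *)
    ObjectInfo[Cphil1]   (* the SMEFT WC C_{φl}^(1) discussed above *)
    ObjectInfo[LG]       (* the LEFT WC L_G *)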

Introducing and changing input values. There are two methods to introduce input values in DsixTools: with the NewInput routine or from a file. In the latter case one can choose between a native DsixTools format or the WCxf format [133]. Let us focus on the former case. With the NewInput routine the user loads the input values directly in the Mathematica notebook. Only the non-zero WCs must be given. The rest will be assumed to vanish. For instance, the command

figure g

sets \([C_{\ell q}^{(1)}]_{1112} = [C_{\ell q}^{(1)}]_{1121} = 1\) GeV\(^{-2}\) and \(C_{\varphi {{\widetilde{B}}}} = - 0.5\) GeV\(^{-2}\). We note that dimensionful quantities in DsixTools are always given in GeV to the proper power. In DsixTools, the input values for the parameters of the effective theory at work (SMEFT or LEFT) are stored as replacement rules in a dispatch variable called InputValues. Then, after defining an input, the user can easily read it as

figure h

Once the input values have been set, the user can change them individually at any moment in the notebook. This is done with the ChangeInput routine. For example, the line

figure i

changes the value of \(C_{\varphi {{\widetilde{B}}}}\) to 0.6 GeV\(^{-2}\). Finally, DsixTools produces a warning message when the WCs provided by the user lead to an invalid set of input values. There are two possible reasons for this:

  1. Non-Hermiticity errors: Some WCs are related due to the Hermiticity of the Lagrangian. For instance, \([C_{\ell q}^{(1)}]_{1112} = [C_{\ell q}^{(1)}]^*_{1121}\) must necessarily hold.

  2. Antisymmetry errors: Some LEFT WCs are antisymmetric under the exchange of two flavor indices. For instance, \([L_{\nu \gamma }]_{11} = 0\) must necessarily hold.

When the user’s input is not consistent with these restrictions, a warning is issued and DsixTools corrects the input, replacing it by a new one that ensures full consistency of the Lagrangian. The list of invalid input values can be seen by clicking on the button Input errors. We note, however, that in some cases other WCs, related to the invalid ones through the two conditions given above, may be modified too.
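Putting the input-handling routines above together, a plausible sketch reads as follows (the argument structures and WC symbol names are assumptions made for illustration; the exact syntax is given in the figures above and in the manual [131]):

    (* declare the non-zero SMEFT WCs in GeV^-2; all others are set to zero.
       Symbol names and argument forms are assumptions, not the verified syntax. *)
    NewInput[{Clq1[1, 1, 1, 2] -> 1, CphiBtilde -> -0.5}]
    CphiBtilde /. InputValues          (* read back a value from the dispatch *)
    ChangeInput[CphiBtilde -> 0.6]     (* update a single WC afterwards *)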

A simple DsixTools program. Let us illustrate how easily one can use DsixTools with a simple but complete program, given by the following three lines after opening Mathematica and loading DsixTools:

figure j

Here we consider an example SMEFT input with \([C_{\ell q}^{(1)}]_{2233}= 1/\Lambda _{\textrm{UV}}^2\), given at \(\Lambda _{\textrm{UV}} = 10\) TeV. The rest of the SMEFT WCs are assumed to vanish at \(\Lambda _{\textrm{UV}}\). Notice that input for the energy scales must be given too. However, \(\Lambda _{\textrm{EW}}\) and \(\Lambda _{\textrm{IR}}\) are taken to be equal to \(m_W\) and 5 GeV by default, so only \(\Lambda _{\textrm{UV}}\) must be provided. In the first line of this program, the NewInput routine is used to introduce the SMEFT WCs as well as the NP energy scale \(\Lambda _{\textrm{UV}}\). In the second line we make use of RunDsixTools, one of the most important routines in DsixTools. It runs the SMEFT RG equations, matches the resulting SMEFT Lagrangian onto the LEFT at the electroweak scale, and runs down to \(\Lambda _{\textrm{IR}}\) with the LEFT RG equations. The results of this process can be obtained by means of the D6run function, which returns interpolating functions that can be evaluated at any value of the energy scale \(\mu \). For instance, in this program we choose to print \([C_{\ell q}^{(1)}]_{2233}\) at the electroweak scale. Last but not least, we emphasize that DsixTools does not only provide numerical routines. In fact, all the analytical information in the code can be printed and used in Mathematica sessions. There are plenty of examples of this. For instance, with the command

figure k

the user can display the analytical expression for the WC \([L_{eu}^{V,LL}]_{2211}\) of the LEFT after matching at one loop to the SMEFT WCs. Similarly, the SMEFT and LEFT \(\beta \) functions can be readily accessed as

figure l

We refer to the demo notebook [132] for examples of use of other DsixTools routines and functions.
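For orientation, a plausible sketch of the three-line program described above (the precise argument forms and scale symbols are assumptions; the demo notebook [132] contains the exact syntax):

    (* example SMEFT input at ΛUV = 10 TeV; argument forms are illustrative assumptions *)
    NewInput[{Clq1[2, 2, 3, 3] -> 1/(10^4)^2, LambdaUV -> 10^4}]   (* scales in GeV *)
    RunDsixTools;                        (* SMEFT running, EW-scale matching, LEFT running *)
    D6run[Clq1[2, 2, 3, 3]][LambdaEW]    (* evaluate the interpolating function at μ = ΛEW *)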

Using Matchmakereft results. The first step in the study of specific NP models with the tools described here is the matching of the model to an EFT. If the NP degrees of freedom lie at high energies, this EFT is generally the SMEFT. Even though this theory is very well known nowadays, the calculation might be hard, especially if done at one loop. Since version 2.1, DsixTools admits input obtained with matchmakereft [48], a fully automated Python code to compute the tree-level and one-loop matching of arbitrary models onto arbitrary EFTs. Its use is very simple. Let us illustrate it with an example NP model that extends the SM field inventory with a right-handed neutrino \(N \sim ({\textbf{1}},{\textbf{1}})_0\) and a scalar leptoquark \(S \sim ({\textbf{3}},{\textbf{2}})_{\frac{1}{6}}\), where we denote their representations under \((\mathrm SU(3)_c, SU(2)_L)_{\mathrm{U(1)_Y}}\). The NP Lagrangian contains the pieces

$$\begin{aligned} {\mathcal {L}}_N&= i {{\overline{N}}} \, \gamma _\mu D^\mu N - \frac{1}{2} M_N \, \overline{N^c} \, N \, , \end{aligned}$$
(2.1)
$$\begin{aligned} {\mathcal {L}}_S&= D_\mu S^\dagger \, D^\mu S - M_S^2 \, S^\dagger \, S \, , \end{aligned}$$
(2.2)
$$\begin{aligned} {\mathcal {L}}_{SH}&= - \lambda _2 \, H^\dagger H \, S^\dagger S - \lambda _3 \, H^\dagger S \, S^\dagger H \, , \end{aligned}$$
(2.3)
$$\begin{aligned} {\mathcal {L}}_{\textrm{Y}}&= - Y_N^\alpha \, {{\overline{N}}} \, \ell _L^\alpha H - Y_S^\alpha \, {{\overline{q}}}_L^\alpha \, N S + \text {h.c.}, \end{aligned}$$
(2.4)

where \(\alpha = 1,2,3\) is a flavor index. This model can be easily implemented and matched onto the SMEFT at the one-loop level with matchmakereft. The results are saved in a text file called MatchingResult.dat, which can be loaded into DsixTools with the command

figure m

With this line, the user not only loads the analytical information in MatchingResult.dat, but also gives numerical values for the NP parameters. After executing this command, the user can check some input values for the SMEFT WCs at \(\Lambda _{\textrm{UV}} = 10\) TeV. Their analytical expressions in terms of the parameters of the UV model can also be printed thanks to the dispatch MatchAnalyticalUV. For instance, the analytical expression and numerical value of \(C_\varphi \) can be printed with

figure n

With the DsixTools SMEFT input fully generated, one can now proceed and use the RunDsixTools routine. Therefore, thanks to this novel functionality, the user can easily combine DsixTools and matchmakereft to study NP models using the full power of EFTs.

3.1.2 Summary

Some of the most common tasks in the SMEFT and in the LEFT require the handling of a large number of WCs and/or the resolution of a huge set of coupled RG equations. These can be automated with the help of DsixTools, a Mathematica package designed to provide a simple and user-friendly experience. DsixTools contains many routines and functions to deal with the SMEFT or the LEFT, both at the algebraic and numerical levels. Some examples of use that illustrate the capabilities of DsixTools have been given here. We refer to the manual on the package website [131], as well as to the comprehensive reference and documentation environment provided with DsixTools, for further information on the tool.

3.2 CoDEx: matching BSMs to SMEFT

figure o

CoDEx [134] is a Mathematica package that computes the WCs of SMEFT effective operators up to one-loop level and mass dimension six in terms of UV model parameters. The computation of the WCs is based on the evaluation of effective-action formulae derived using functional methods [113,114,115,116]. The package is applicable to BSM scenarios containing single or multiple mass-degenerate heavy fields of spin 0, \(\frac{1}{2}\), and 1.Footnote 3 It computes the effective operators in both the strongly interacting light Higgs (SILH) [135, 136] and Warsaw [3, 4] bases. The code also provides an option to perform the RG evolution of these operators in the Warsaw basis, using the anomalous dimension matrix computed in [8,9,10]. Thus, one can obtain all effective operators at the EW scale generated from any such BSM theory. The program requires only minimal input, provided in a user-friendly format: the user needs to supply only the relevant part of the BSM Lagrangian that involves the heavy field(s) to be integrated out. CoDEx, with its installation instructions, web documentation, and model examples, is available on GitHub (https://effexteam.github.io/CoDEx/).Footnote 4

Fig. 1 CoDEx flowchart

3.2.1 User inputs & CoDEx outputs

The input information for any BSM to implement in CoDEx is minimal. Here, we depict a step-by-step procedure to compute the effective operators and the internal computation that is carried out at each step in CoDEx (see also the flowchart in Fig. 1):

  • The users need to provide the following information (quantum numbers) about the heavy field(s): color, isospin, hypercharge, mass, and spin, based on which the representation(s) of the heavy field(s) are evaluated internally by the package. As mentioned, the SM gauge group quantum numbers of the BSM field are needed as input. On top of that, the relevant part of the BSM Lagrangian that contains the heavy field(s) must be supplied by the user. The code automatically builds the heavy-field kinetic (derivative and mass) terms, which are not required from the user. The SM Lagrangian is also appended by default. Let us consider an example here: we have only one heavy field, a real singlet scalar (color \(\rightarrow \) 1, isospin \(\rightarrow \) 1, hypercharge \(\rightarrow \) 0, spin \(\rightarrow \) 0). Let us denote the field \(\rightarrow \) ‘hf’ and its mass \(\rightarrow \) ‘m’. This represents the field content of our model in the correct way:

figure w

From this input, we construct the field representation that is needed to write the BSM Lagrangian and the CoDEx internal functions recognise this representation for further analysis. First, load the package:

 

figure x
figure y
figure z

To write the Lagrangian in a compact form one can define the heavy field \({{\mathcal {S}}}\) as:

figure aa
figure ab

Then we need to build the relevant part of the Lagrangian (involving the heavy field only). Note that we do not need to construct the heavy field kinetic term (the covariant derivative and the mass terms) in the CoDEx Lagrangian. Thus, the only part of the Lagrangian we need here isFootnote 5:

figure ac
  • Next, we need to load the symmetry generators for computing loop-level WCs:

figure ad
  • Based on these inputs, one can generate the tree- and one-loop-level WCs. The CoDEx functions for generating WCs are listed in Table 1.

Table 1 CoDEx functions for computing WCs
Table 2 Effective operators and WCs for the real singlet scalar model. These results are calculated in the \(\overline{\text {MS}}\) renormalization scheme. The one-loop result depends on the choice of renormalization scheme; e.g., in this particular case, we have noted differences with the results given in Ref. [113], where a different renormalization scheme was used. Here, we have highlighted in bold the extra terms obtained in our calculation in the \(\overline{\text {MS}}\) scheme. We have further cross-checked these results in the other scheme, adopted in Ref. [113]. The Warsaw-basis WCs are consistent with those reported in Refs. [137,138,139]
figure af
figure ag
figure ah

(See the documentation for details.)

  • The last step is the computation of effective operators and associated WCs:

figure aj
  • The output is obtained in the chosen basis and is formatted as a detailed table.

There is a provision to export the result in LaTeX format. Table 2c is actually obtained from the output of the code above. We can compute the same in the other basis as well, and for that we have to use:

figure an

The output of this can be found in Table 2a. Similarly, one-loop results can be obtained by changing the corresponding option value; its default setting combines both the tree-level and one-loop results. The resulting WCs can then be run down to the electroweak scale using a dedicated function, which computes the RG evolution of the Warsaw-basis effective operators (in the leading-log approximation) using the anomalous dimension matrices available in Refs. [8,9,10].

  • A detailed model-building guide is available in the package web documentation (https://effexteam.github.io/CoDEx/). Moreover, SMEFT matching results for multiple scalar extensions are available in Refs. [140, 141].

3.2.2 Developers’ version: yet to be released

  • Heavy-light mixed WCs and dimension-8: A module incorporating one-loop effects from mixed processes involving both heavy and light fields is included. We generate these contributions by expanding the UV action around the classical solution for the light fields, obtained from their on-shell relations. We implement the universal effective-action formulae for the mixed heavy-light contributions and find agreement with Ref. [116] (see Tables 1-5 therein). We have evaluated these formulae in CoDEx for 16 BSM models to generate the mixed heavy-light WCs [140,141,142]. Modules for the evaluation of one-loop processes involving fields with non-identical spins and for incorporating SMEFT operators up to dimension-8 will be released shortly [123].

  • WCxF [133]: Multiple packages are available for EFT matching, for the running of the WCs, and for mapping these WCs to observables [143]. It is therefore desirable to have a common data-exchange format among these packages. WCxF is such a format, widely used among EFT packages [133]. CoDEx has two functions for exporting and importing data in WCxF, whose utilities we briefly discuss below.

figure aw
figure ax
figure ay
figure az

The output (which is suppressed here) follows the WCxF template and can be interfaced with other available programs. In Ref. [140], we have validated this by successfully interfacing a .json file generated by this function with the DsixTools [107] package.

The import function takes WCxF files as input and provides output in the CoDEx data format.

figure bc
figure bd
figure be
figure bf
figure bg
figure bh

This output is in CoDEx-readable format, and the ellipses above represent the other WCs in the list.

  • Identities: The effective-action evaluation for a BSM theory generates gauge-invariant structures that do not directly map onto the desired effective operator basis. We apply operator identities and equations of motion to the derived effective Lagrangian to cast these gauge-invariant terms into the desired structures. The identities depend on the choice of the effective operator basis. Transformations such as Fierz identities, SM field equations of motion, and SMEFT dimension-six operator identities are implemented in the developer version of CoDEx. In future developments, new modules will become available to capture evanescent-operator effects [81, 144,145,146], and these identities will be extended to incorporate effects from SMEFT dimension-eight operators [123].
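As a concrete illustration (a textbook identity, not specific to CoDEx), the four-dimensional Fierz relation for left-handed currents reads

$$\begin{aligned} \left( {\overline{\psi }}_{1L}\, \gamma ^\mu \, \psi _{2L}\right) \left( {\overline{\psi }}_{3L}\, \gamma _\mu \, \psi _{4L}\right) = \left( {\overline{\psi }}_{1L}\, \gamma ^\mu \, \psi _{4L}\right) \left( {\overline{\psi }}_{3L}\, \gamma _\mu \, \psi _{2L}\right) , \end{aligned}$$

which holds strictly in four dimensions; away from \(d=4\) it is violated by evanescent contributions, which is precisely why the planned evanescent-operator modules are needed.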

3.3 Matchete: Matching Effective Theories Efficiently

figure bi

Matchete (MATCHing Effective Theories Efficiently) is a Mathematica package fully automating matching computations up to one-loop order, utilizing functional methods. The user supplies the UV theory by first defining the symmetry groups (both local and global), the fields, and the coupling parameters with simple commands. With these definitions in place, the Lagrangian can be written in Mathematica language in a simple way and then passed to the matching function to integrate out the fields that the user has defined as “heavy”. As a consequence of the functional matching procedure, no prior knowledge of the operator basis of the resulting effective theory is required. Matchete automatically generates the full set of effective operators and reduces it to a basis without any user input.

The Matchete package is free software under the terms of the GNU General Public License v3.0 and is publicly available in the following GitLab repository:

https://gitlab.com/matchete/matchete

This note only serves as a brief overview of the package; we refer the reader to Ref. [49] for details.

3.3.1 Matching strategy

Functional methods and expansion by regions

Matchete achieves the matching by directly computing the Wilsonian effective action, i.e. the contribution to the generating functional encoding only short-distance physics, corresponding to energy scales \(E>\Lambda \), where \(\Lambda \) is the matching scale (or in BSM contexts often called “new-physics scale”) [115, 117, 118, 147, 148]. To this end, one first splits the field content of the theory into Fourier modes with frequencies above (hard) and below (soft) the scale \(\Lambda \):

$$\begin{aligned} \phi = \phi _H+\phi _S\,. \end{aligned}$$
(2.5)

Low-energy matrix elements are computed from the generating functional:

$$\begin{aligned} Z[J_S]&= \int {\mathcal {D}} \phi _S\,{\mathcal {D}}\phi _H\, \exp \left\{ iS(\phi _S,\phi _H)\right. \nonumber \\&\quad \left. +i\int d^4z \, J_s(z)\phi _S(z) \right\} \,, \end{aligned}$$
(2.6)

from which the Wilsonian effective action \(S_\Lambda \) is defined:

$$\begin{aligned} \int {\mathcal {D}} \phi _H \,\exp \left\{ iS(\phi _S,\phi _H) \right\} \equiv \exp \left\{ iS_\Lambda (\phi _S)\right\} \,. \end{aligned}$$
(2.7)

This object can be calculated directly by means of a background field expansion, meaning each field is further split into classical fields \({\hat{\phi }}_i\) and quantum fluctuations \(\eta _i\),

$$\begin{aligned} \phi _i = {{\hat{\phi }}}_i + \eta _i\,, \end{aligned}$$
(2.8)

and an expansion of \(S({{\hat{\phi }}}_S+\eta _S,{{\hat{\phi }}}_H+\eta _H)\) in the quantum fields is performed. Collecting hard and soft modes into a single multiplet, i.e. \({{\hat{\phi }}}\) and \(\eta \) for the classical field and quantum fluctuation, this expansion reads:

$$\begin{aligned} S({{\hat{\phi }}}+\eta )&= S({{\hat{\phi }}}) + \eta _i \left[ \frac{\delta S}{\delta \eta _i} \right] \!({{\hat{\phi }}}) + \frac{1}{2}\,{\bar{\eta }}_i \left[ \frac{\delta ^2 S}{\delta \eta _j\delta {\bar{\eta }}_i} \right] \!({{\hat{\phi }}})\;\eta _j \nonumber \\&\quad + {\mathcal {O}}(\eta ^3)\,. \end{aligned}$$
(2.9)

The first term corresponds to the tree-level contributions, the second vanishes by virtue of the equations of motion, and the third term encodes all the one-loop contributions. At tree level, the effective action is obtained by solving the equations of motion for the heavy fields, inserting them back into the full Lagrangian, and expanding in the heavy mass. At one-loop level, the effective action is found by

$$\begin{aligned} S_\Lambda ^{(1)} = \pm \,\frac{i}{2}\int _h \frac{d^dk}{(2\pi )^d} \big \langle k \vert \textrm{tr}\log Q \vert k \big \rangle \,, \end{aligned}$$
(2.10)

where \(Q_{ij} = \frac{\delta ^2 S}{\delta {\bar{\eta }}_i\delta \eta _j}\) is the fluctuation operator. The subscript \(_h\) on the integral denotes the fact that the integral is taken in the hard region, meaning the integration momentum k is assumed to be of the order of the hard scale \(\Lambda \). In practice, this is implemented by assigning a power counting to the soft scales (masses and momenta of soft modes) and expanding the integrand systematically to the desired order in the EFT counting. This expansion by regions [149, 150] simplifies the matching procedure as it directly computes the one-loop contributions to the matching coefficients without the need to evaluate matrix elements of the effective theory [115].
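As a simple illustration of the hard-region expansion (our example, not taken from the Matchete manual), consider a one-loop integral with one heavy and one light internal line:

$$\begin{aligned} \int \frac{d^dk}{(2\pi )^d}\, \frac{1}{(k^2-M^2)\left[ (k+p)^2-m^2\right] }\bigg |_{\textrm{hard}} = \int \frac{d^dk}{(2\pi )^d}\, \frac{1}{(k^2-M^2)\, k^2} \left[ 1 - \frac{2\, k\cdot p + p^2 - m^2}{k^2} + \cdots \right] , \end{aligned}$$

where the soft quantities m and p are counted as small compared to \(k\sim M\), so that each term in the expansion is a single-scale integral contributing directly to the matching coefficients.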

The method outlined above yields an effective Lagrangian that is not manifestly gauge invariant, as it contains open covariant derivatives, i.e., expressions of the form

$$\begin{aligned} {\mathcal {L}}_\textrm{eff} \supset X^{\mu \nu } D_\mu D_\nu \,, \end{aligned}$$
(2.11)

which cannot simply be dropped since the covariant derivatives commute non-trivially. This is resolved by the so-called covariant derivative expansion, for details of which we refer the reader to the literature [151,152,153].

Basis Reduction After the matching procedure is performed, the obtained effective operators are not linearly independent. Matchete is able to handle the most common Lie algebras and performs simplifications of the Dirac algebra using d-dimensional identities where available. To find a basis, however, redundant operators still need to be eliminated. The first reduction technique relates operators with covariant derivatives to each other by means of integration-by-parts (IBP) identities, which can be derived by requiring total-derivative operators to vanish, \(D_\mu J^\mu = 0\). These IBP identities allow one to eliminate certain derivative operators in favor of others. As an example, in a theory with charged Dirac fermions and a scalar, the following kind of reduction is achieved:

(2.12)

The choice of which operators are preferred is not unique, but it is advantageous to favor operators proportional to the equations of motion of the fields. In the above expression, the derivatives have been either traded for a field-strength tensor or act on the field operators in such a way that the Dirac equation can be used to simplify them further.

Operators with derivatives acting on fields in the way they appear in their equations of motionFootnote 6 can be further reduced by means of appropriate field redefinitions. For scalars \(\phi \), fermions \(\psi \), and vector fields \(A^\mu \) these operators are of the form \(J\, D_\mu D^\mu \phi \), \(J\, i\gamma ^\mu D_\mu \psi \), and \(J_\nu \, D_\mu F^{\mu \nu }\), respectively. Redefining the fields by shifts proportional to the coefficient operator J then eliminates these operators while introducing operators with fewer derivatives as well as operators at higher mass dimension. Applying the procedure iteratively, order by order in the power counting, then fully eliminates all operators proportional to the field equations of motion.
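To see how this works in the simplest case (an illustration of the general statement, not Matchete output), consider a real scalar with an EOM-type operator \(J\,\partial ^2\phi \), where J is a local operator built from light fields. Up to total derivatives and terms quadratic in J, which are of higher order in the power counting,

$$\begin{aligned} \frac{1}{2}(\partial _\mu \phi )^2 - \frac{1}{2} m^2 \phi ^2 + J\, \partial ^2\phi \;\; \overset{\phi \rightarrow \phi + J}{\longrightarrow } \;\; \frac{1}{2}(\partial _\mu \phi )^2 - \frac{1}{2} m^2 \phi ^2 - m^2\, \phi \, J + \cdots , \end{aligned}$$

so the derivative operator is traded for operators with fewer derivatives (and, through the \({\mathcal {O}}(J^2)\) terms, for operators of higher mass dimension).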

Matchete applies all of the reductions described above fully automatically, without the user having to derive or specify any operator-reduction identities. At the time of writing, not all possible reductions are implemented yet. In particular, the current version of Matchete does not yet implement Fierz reductions, as these require the proper treatment of evanescent operators [82]. This is left for a future release (see also Sect. 2.3.3).

3.3.2 Usage example

In this section, we briefly outline a simple usage example. Once again, the reader is referred to Ref. [49] for a more detailed user manual of the package including an installation guide. To demonstrate the features of Matchete, we match a simple model in which we supplement the SM with a singlet real scalar field \(\phi \), as has been discussed in Refs. [138, 139]. The Lagrangian of this model readsFootnote 7:

$$\begin{aligned} {\mathcal {L}}_\textrm{UV}&= {\mathcal {L}}_\textrm{SM} + \frac{1}{2}(\partial _\mu \phi )^2 - \frac{1}{2}M^2\phi ^2 - \frac{\mu }{3!}\phi ^3 - \frac{\lambda _\phi }{4!}\phi ^4 \nonumber \\&\quad - A\phi |H|^2- \frac{\kappa }{2}\phi ^2 |H|^2\,. \end{aligned}$$
(2.13)

Matchete provides the full definitions of the SM and its Lagrangian as a simple macro. After installing the package, it can be loaded via:

figure bj

Next, we load the SM Lagrangian from the predefined model file included with Matchete:

figure bk

where we rename the Higgs mass parameter to and the quartic Higgs coupling to . We then define the heavy scalar using the command:

figure bn

The arguments supplied to the function indicate the spin of the field, the fact that it is real, the definition of the mass parameter and that it should be considered heavy. The remaining couplings in the Lagrangian (2.13) are defined with:

figure bo

Matchete defines simple shortcuts for the fields and couplings when these commands are evaluated. Even though the full objects are more complicated, the user can input them in a simple form:

figure bp

Note that the command automatically generates the kinetic and mass terms for the new field and only interaction terms have to be written out. We are now ready to integrate out the scalar at one-loop order. This is achieved by running the command with appropriate arguments:

figure bs

Here the first option defines the order at which the matching procedure is carried out while the second option denotes the fact that we want to obtain the effective Lagrangian up to dimension six. After this line is successfully evaluated, the object contains redundant operators, as described earlier. Reductions using IBP identities and field redefinitions are then performed by running:

figure bu

The user can choose to apply only the IBP identities, without field redefinitions, to obtain the Green’s basis by using the corresponding command. The object then contains the operators together with their matching coefficients. The full output is cumbersome, but individual contributions can be isolated using the dedicated command. As an example, to show only the leptonic four-fermion operator, one uses:

figure by
figure bz

where the second argument specifies the field content of the operator(s) to be extracted, and the last argument gives the number of derivatives. Here \(\hbar \) should be understood as a loop counting factor that is equal to \(1/(16\pi ^2)\). The matching example with more details is included with the Matchete package in the example notebook Examples/Singlet_Scalar_Extension.nb.

3.3.3 Outlook

We conclude this note with a roadmap for features intended for future releases:

  • The matching is currently done in strictly \(d=4-2\epsilon \) dimensions. This prevents reductions of Dirac structures including Fierz rearrangements, because these hold only in four dimensions. When applied to d-dimensional operators, one has to account for evanescent contributions.

  • After evanescent contributions can be handled automatically, Matchete will be able to produce output that, in the case of SMEFT computations, can be directly compared to the Warsaw basis. In the future, Matchete will be able to automatically perform this identification as well as output the result in the WCxf [133] format. An interface with other phenomenology codes and/or commonly used formats, such as UFO [154], would be desirable as well.

  • At this time, Matchete does not allow integrating out heavy vector fields at the loop level. The reason for this is that these cannot be generally written down in a renormalizable fashion. In weakly-coupled theories, heavy vectors must arise from spontaneous symmetry breaking. This results in a complicated interplay between vectors, ghosts, and Goldstone bosons, especially in the background field gauge. So as to avoid having to derive and input all interactions manually, we wish to provide (semi-)automated methods to determine the broken phase Lagrangian.

  • With small changes to the matching procedure, it is possible to determine the EFT counterterms and, thereby, the RG functions. Implementing this functionality in Matchete will allow for finding the RG functions for intermediate-scale EFTs and vastly simplify sequential matching scenarios.

After the above list of features is included in the package, Matchete can become an integral part of a fully automated pipeline from a UV model down to phenomenology, and as such a powerful tool for BSM phenomenology, taking away the laborious task of one-loop matching.

3.4 Matchmakereft: a tool for tree-level and one-loop matching

figure ca

Matchmakereft is a computer tool that automates the tree-level and one-loop matching of arbitrary weakly-coupled UV models onto their EFT. Due to lack of space, we refer the reader to the original publication [48] and to the manual that comes with the installation; here we summarize its main features and newest developments and provide a simple but illustrative example.

Matchmakereft is written in Python, making it very easy to install on different platforms, and it uses well-tested tools, including FeynRules [155], QGRAF [156], FORM [157], and Mathematica, to perform an off-shell matching using diagrammatic methods in the background field gauge.

Matchmakereft takes advantage of the large degree of gauge and kinematic redundancy in off-shell matching to perform a significant number of non-trivial cross-checks, ensuring the validity of the resulting computation. It is also equipped to compute the RG equations of arbitrary EFTs and the off-shell (in)dependence of a set of local operators. Matchmakereft treats the kinematic and gauge dependence independently, leaving the latter arbitrary until the very end of the calculation. This increases its efficiency, but it also makes matchmakereft an ideal tool to compute IR/UV dictionaries or to perform calculations in theories with arbitrary gauge structures.

Among the latest developments of matchmakereft, the calculation of amplitudes in chunks of a fixed number of diagrams and the ability to compute amplitudes in parallel have significantly increased its efficiency (see the manual for details).

Let us now demonstrate many of the features of matchmakereft with a concrete example involving two scalar fields, a light, but not massless, field \(\phi \) and a heavy field \(\Phi \). Our model is described by the Lagrangian:

$$\begin{aligned} {\mathcal {L}}= & {} \frac{1}{2}(\partial _\mu \phi )^2 -\frac{1}{2}m_L^2 \phi ^2 + \frac{1}{2} (\partial _\mu \Phi )^2 - \frac{1}{2} M_H^2 \Phi ^2 \nonumber \\{} & {} - \frac{\lambda _0}{4!}\phi ^4 - \frac{\lambda _2}{4}\phi ^2 \Phi ^2 - \frac{\kappa }{2}\phi ^2 \Phi , \end{aligned}$$
(2.14)

which we want to match to the EFT Lagrangian without the heavy scalar,

$$\begin{aligned} {\mathcal {L}}_{\textrm{EFT}}= & {} \frac{\alpha _{4k}}{2}(\partial _\mu \phi )^2 -\frac{\alpha _2}{2} \phi ^2 - \frac{\alpha _4}{4!}\phi ^4 - \frac{\alpha _6}{6!}\phi ^6 \nonumber \\{} & {} - \frac{{\tilde{\alpha }}_6}{4!}\phi ^3 \partial ^2\phi -\frac{{\hat{\alpha }}_6}{2}\left( \partial ^2 \phi \right) ^2. \end{aligned}$$
(2.15)

We will use this Lagrangian during off-shell matching. Subsequently, the kinetic term can be canonically normalized, and the redundant operators can be eliminated. Two of the three operators of dimension six are redundant. We choose \(\phi ^6\) as the independent operator. Using equations of motion we can readily find that:

$$\begin{aligned} \phi ^3 \partial ^2\phi&\rightarrow -\alpha _2\phi ^4 - \frac{1}{3!}\alpha _4\phi ^6, \end{aligned}$$
(2.16)
$$\begin{aligned} \left( \partial ^2 \phi \right) ^2&\rightarrow \alpha _2^2 \phi ^2 + \frac{\alpha _2\alpha _4}{3}\phi ^4 + \frac{\alpha _4^2}{36}\phi ^6. \end{aligned}$$
(2.17)

Eliminating these operators from the Lagrangian would induce the shifts

$$\begin{aligned} \alpha _2\rightarrow & {} \alpha _2 + \alpha _2^2 {\hat{\alpha }}_6 \end{aligned}$$
(2.18)
$$\begin{aligned} \alpha _4\rightarrow & {} \alpha _4 - {\tilde{\alpha }}_6 \alpha _2 +4 \alpha _2\alpha _4{\hat{\alpha }}_6 \end{aligned}$$
(2.19)
$$\begin{aligned} \alpha _6\rightarrow & {} \alpha _6 - 5{\tilde{\alpha }}_6\alpha _4 +10 {\hat{\alpha }}_6\alpha _4^2 \end{aligned}$$
(2.20)
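As a small self-contained cross-check (ours, not part of matchmakereft), the shifts (2.18)–(2.20) follow from substituting Eqs. (2.16) and (2.17) into the EFT Lagrangian (2.15) and reading off the coefficients of \(\phi ^2\), \(\phi ^4\) and \(\phi ^6\); in Mathematica:

    (* substitute the EOM relations (2.16)-(2.17) into the EFT Lagrangian (2.15);
       opD2 and opDD2 stand for the redundant structures phi^3 d^2 phi and (d^2 phi)^2,
       and the kinetic term (alpha4k/2)(d phi)^2 is unaffected and omitted *)
    Leff = -(a2/2) phi^2 - (a4/4!) phi^4 - (a6/6!) phi^6 - (at6/4!) opD2 - (ah6/2) opDD2;
    reduced = Expand[Leff /. {opD2 -> -a2 phi^4 - (a4/3!) phi^6,
        opDD2 -> a2^2 phi^2 + (a2 a4/3) phi^4 + (a4^2/36) phi^6}];
    (* shifted couplings, read off from the coefficients of phi^2, phi^4, phi^6 *)
    {-2 Coefficient[reduced, phi^2], -(4!) Coefficient[reduced, phi^4],
     -(6!) Coefficient[reduced, phi^6]} // Expand
    (* -> {a2 + a2^2 ah6, a4 - a2 at6 + 4 a2 a4 ah6, a6 - 5 a4 at6 + 10 a4^2 ah6} *)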

The coupling \(\kappa \) of this model is a dimensionful coupling and is expected to be parametrically of the order of the heavy mass scale \(M_H\). Thus, \(\frac{\kappa }{M_H}\) is of \({\mathcal {O}}(1)\) and is consistently kept throughout the matching procedure.

The FeynRules file for the UV model, saved as two_scalars.fr, is shown below.

figure cb

Note that we use the keyword FullName to characterize each field as “heavy” or “light”. This is mandatory: matchmakereft uses this keyword to distinguish between fields that are integrated out and those that are light and also present in the EFT. Also note that all the parameters that are used in the Lagrangian, masses as well as couplings, must be declared. In this example all parameters are real.

The FeynRules file for the EFT model, saved as one_scalar.fr, is:

figure cc

Note that we have included WCs (denoted by alpha) also for the kinetic and mass terms (squared), as well as for all operators that are redundant solely due to the equations of motion.

In order for matchmakereft to perform the reduction to the physical basis, we need to provide a set of relations that express the redundant WCs in terms of the irreducible ones, see Eqs. (2.18)–(2.20). This is done in the file one_scalar.red:

figure cd

Note that only the WCs corresponding to physical operators, among those appearing in the EFT Lagrangian and defined in the file one_scalar.fr, must be present on the left-hand side of the replacement rules in this file. The WCs corresponding to redundant operators appear only on the right-hand side. When these rules are used, both redundant and non-redundant WCs have been matched and are known as functions of the parameters of the UV theory. The rules are therefore instructions on how to update the non-redundant WCs to include the effect of the redundant ones.

With these files prepared we are ready to proceed with the matching. In the matching directory, where two_scalars.fr, one_scalar.fr, and one_scalar.red are present, we can run matchmakereft:

figure ce

upon which we enter the Python interface

figure cf

We first need to create the matchmaker models, i.e. the directories with all the necessary information for the UV and the EFT models. We do this by

figure cg

which has the response

figure ch

We can now observe that the directory two_scalars_MM is created. We proceed with creating the EFT model

figure ci

The one_scalar_MM directory is now created as well, and we are ready for the matching calculation. This is performed by the match_model_to_eft command:

figure cj

Upon completion, the results of the matching are stored in the UV model directory, in this case two_scalars_MM. The file two_scalars_MM/MatchingProblems.dat contains troubleshooting information in case the matching procedure failed. In our case it is an empty list, indicating no problems:

figure ck

The result of the matching procedure is stored in two_scalars_MM/MatchingResults.dat, a Mathematica file with a list of lists of replacement rules. These matching results can also be seen in printed form in Appendix C of the original matchmakereft publication [48]. In this example, the kinetic operator receives a one-loop matching correction and, therefore, \(\phi \) is no longer canonically normalized. A field redefinition is needed to obtain a canonically normalized theory, on which we can apply the corresponding redundancies to go to the physical basis. Matchmakereft performs these two steps (canonical normalization and reduction to the physical basis) automatically. The resulting WCs in the physical basis, up to one-loop order and up to \(O\left( \frac{\kappa ^{2n}}{M_H^{2n}}\frac{m_L^2}{M_H^2}\right) \), can also be seen in printed form in Appendix C of Ref. [48].
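Since MatchingResults.dat is a plain Mathematica expression file, its content can be inspected directly in a Mathematica session. A minimal sketch (the file path assumes the directory layout described above; the detailed structure of the rules is documented in Ref. [48]):

    (* read the matching results produced by matchmakereft *)
    results = Get["two_scalars_MM/MatchingResults.dat"];
    Length[results]         (* number of lists of replacement rules *)
    results[[1]] // Short   (* a quick look at the first set of rules *)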

As mentioned above, matchmakereft can do many more things than just finite matching. One can for instance compute the RG equations for both models and check the consistency of the logarithmic terms from the RG equations and the finite matching. We refer to the manual for a detailed explanation.

We would like to finish this overview of matchmakereft by mentioning two projects we are currently working on in the Granada group, which will either end up being part of matchmakereft or rely strongly on it for their development. Current matching programs perform the matching off shell, producing the effective Lagrangian in a so-called Green's basis, which contains redundant operators that are not required to describe physical, on-shell amplitudes. The usual procedure is to reduce this Green's basis to a physical basis, either manually (as is currently done in matchmakereft) or in an automated way, as currently done by matchete [49]. We are looking into side-stepping this reduction step by performing an on-shell matching. This has a number of technical complications that have been essentially solved for the tree-level matching. We are working on extending the on-shell matching to the one-loop level. More details can be found in Sect. 3.6.

Another project we are currently working on was mentioned by R. Fonseca in his talk but is not covered elsewhere in this document (it has a significant overlap with the content of Sect. 2.8 by other authors). Even with current codes that automate the process of one-loop matching, repeating the calculation for every different model is tedious and, in the case of many fields, can be computationally very expensive. Our idea is to define a generic model, in which the gauge structure is not fixed a priori and the field multiplicity simply appears as a dummy flavor index. This general EFT can be built with a single multiplet of real scalars, Weyl fermions and gauge bosons. We have defined the most general EFT with this generic field content up to mass dimension six and we are computing its RG equations. All the loop integrals, tensor reduction and kinematic projections are performed in this generic EFT in such a way that the RG equations for any specific EFT can be obtained by means of a straightforward group-theoretic calculation. The next step will be to define a generic theory with light and heavy fields and perform the finite one-loop matching for it.

3.5 Sym2Int: Automatic generation of EFT operators

Renato M. Fonseca

EFTs are a powerful tool for probing potential new physics in a model-independent way. At a time when there is a lack of clarity on how to extend the SM, its related EFTs have been receiving an increasing amount of attention. For example, the number of SMEFT operators has been counted with several techniques in the last few years, up to high mass dimensions. Building an explicit basis of operators is more complicated, but here too there has been notable progress. I will go through my recent work on using the software packages GroupMath and Sym2Int to automatically build explicit bases of operators for EFTs, given their fields and symmetries.

3.5.1 Using computers to generate Lagrangians

With EFTs one can study the low energy consequences of a model without having to handle or even know all of its intricacies; by integrating out the heavy fields one can obtain a Lagrangian which describes well the large distance behavior of the original theory. However, the reduction in the number of fields comes at the cost of introducing a potentially large set of local operators of high dimension. As such, the very first step in the study of an effective field theory is to establish a basis of operators which encodes all possible interactions between the light fields. Put simply, one needs a Lagrangian. As the reader is certainly aware, most theories are invariant under some group of transformations — for example the Lorentz group and/or a gauge group — therefore the task of finding all interactions is inseparable from the problem of finding invariant combinations of products of representations of some group.

There is a long history of using computers in particle physics, given the complexity of the calculations one needs to perform. Indeed, there are many codes specialized in various tasks, from calculating Feynman rules all the way to generating events at colliders. However, at the very beginning of such a stack of programs, it would be useful to have one more code which, to some degree, alleviates the heavy burden on the user of having to provide the full Lagrangian of the model. To the best of my knowledge, SARAH [158] and Susyno [159] were the first codes to build symmetry-invariant Lagrangians (superpotentials, to be more specific) with the user having only to specify the representation of the (super)fields under some gauge group.

In the case of Susyno, it builds the most general renormalizable supersymmetric (SUSY) Lagrangian allowed by the gauge symmetry, as well as the soft SUSY breaking terms, and then applies known formulas in order to derive the two-loop renormalization group equations for all the free parameters (such as the gauge and Yukawa couplings). The group theory code used to perform the first step grew over time and was eventually released as the standalone GroupMath program [160]. It is also used in other packages, such as SARAH 4 [161], Pyr@te 2+ [162], DRalgo [163] and Sym2Int [164]. The aim of this last program, which will be the main topic from now on, is to go from symmetries to interactions: given some input fields (that is, representations of the Lorentz and gauge groups) it computes all operators up to some mass dimension. Sym2Int is currently being extended in order to be able to provide explicit expressions for the operators.Footnote 8

3.5.2 The current Sym2Int program

Details on how to use the program can be found in [164], as well as on the program’s webpage. For illustrative purposes, with the following input one can obtain the list of SMEFT interactions up to dimension 8:

gaugeGroup[SM] ^= {SU3, SU2, U1};

fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};

fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};

GenerateListOfCouplings[SM, MaxOrder -> 8];

  The number of interactions in this particular EFT, up to dimension 15 and for an arbitrary number of flavors (set to 3 in the code above), can be found in a couple of hours. As far as I know, this is the only cross-check of the numbers provided for the first time in [89] using the Hilbert series.

It is worth noting that Sym2Int also computes some important information on the symmetry of flavor indices. For example, \(L_{i}L_{j}HH\) is found to be symmetric under the exchange \(i\leftrightarrow j\), while \(Q_{i}Q_{j}Q_{l}L_{k}\) has a more complicated symmetry. Let us distinguish operators, for which the flavor indices are expanded (for n flavors there are \(n\left( n+1\right) /2\) operators of the form LLHH), from Lagrangian terms, which are tensors in flavor space (there is just one term of the form LLHH). The information mentioned earlier can then be used to infer the minimum number of terms needed to write a model's Lagrangian. In the case of three Q's and one L, even though there are four ways of contracting the various spinor and \(SU(2)_{L}\) indices, it is possible to write all of them as a single Lagrangian term \(\omega _{ijkl}Q_{i}Q_{j}Q_{l}L_{k}\) (see [166] for a more thorough discussion of this topic).
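As a quick numerical illustration of the counting just quoted (a trivial check in Python, independent of Sym2Int itself): for \(n\) flavors a flavor-symmetric structure such as LLHH corresponds to \(n(n+1)/2\) operators but to a single Lagrangian term.

# Count the independent flavor assignments of a symmetric pair of indices
# and compare with the closed formula n(n+1)/2 quoted in the text.
n = 3
independent_pairs = [(i, j) for i in range(n) for j in range(i, n)]
print(len(independent_pairs), n * (n + 1) // 2)   # prints: 6 6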

3.5.3 An upgrade: building operators and terms explicitly

Counting operators and terms is not the same as building a Lagrangian. For the latter one needs to know the explicit form of each interaction, which implies knowing how the various field indices are contracted. It is also worth highlighting that neither the current version of Sym2Int nor the Hilbert series method can be used to determine how the derivatives, if there are any, are applied to the fields.

We desire a model's Lagrangian, but I would like to point out that the Lagrangian often consists of a complicated function of the field components with a very low information density. Consider for example the interactions between a fermion transforming under the fundamental representation of SU(n) and the gauge bosons of this group. The relevant Clebsch-Gordan factors coincide with the entries \(T_{ij}^{A}\) of the matrices of the fundamental representation of SU(n), containing a total of \(n^{2}\left( n^{2}-1\right) \) entries. Given that fields can be redefined, one has to wonder what information is really contained in all these numbers: in principle, observable quantities should depend on these Clebsch-Gordan factors only through combinations of the tensor T with no open indices, such as \(T_{ij}^{A}T_{ji}^{A}\) (see for instance [167]).

It is therefore conceivable that in the future we might find it unnecessary to know a model's Lagrangian in full detail. This cautionary remark notwithstanding, the fact remains that at present we do need these complicated expressions for the study of models in general, and EFTs in particular. For this reason, the Sym2Int code is in the process of being upgraded such that it not only counts but also explicitly builds operators and terms. In the following I will make a few remarks concerning this upgrade.

Output for humans vs output for other codes While designing a program which automatically generates lists of operators, one must take into account whether the results are to be used directly by a person, or fed into some other software package, such as FeynRules [155] or Matchmakereft [48]. In the second case, the readability of the result is not so important. For example, the two possible ways of contracting three color octets can be described by a \(2\times 8\times 8\times 8\)-dimensional tensor \(c_{aijk}\) containing the relevant Clebsch-Gordan factors. But even in this rather modest example, a human might find it hard to read such a data format.

A related problem is that these Clebsch-Gordan factors are not unique. In the previous case, we are free to take any invertible combination of the two contractions, \(c_{aijk}\rightarrow X_{aa^{\prime }}c_{a^{\prime }ijk}\) with \(\text {det}\left( X\right) \ne 0\), and one can also make a rotation in the eight-dimensional space of the octet, leading to the change \(c_{aijk}\rightarrow U_{ii^{\prime }}U_{jj^{\prime }}U_{kk^{\prime }}c_{ai^{\prime }j^{\prime }k^{\prime }}\) for some unitary matrix U.

Currently, the package GroupMath contains a function Invariants which, subject to time and memory constraints, can compute the above group theory data for any product of representations of a semi-simple Lie algebra. However, due to the two issues discussed above, there is room for improvement. Conveniently, the gauge symmetry of many models is completely described by SU(n) groups (and perhaps U(1)'s, which are easy to handle). Motivated by this, a future version of GroupMath will include SU(n)-specific code capable of providing the same information as Invariants but using instead the tensor method which is familiar to physicists (see chapter 4 in [168]). The numerical output will still consist of large tensors with Clebsch-Gordan factors, but now it is also possible to include a human-readable string which identifies each of them. For the product of three octets, writing them as \(3\times 3\) traceless matrices \(\Omega _{\;j}^{i}\), \(\Omega _{\;j}^{\prime i}\) and \(\Omega _{\;j}^{\prime \prime i}\), the two invariant combinations alluded to above would be \(\Omega _{\;j}^{i}\Omega _{\;k}^{\prime j}\Omega _{\;i}^{\prime \prime k}\) and \(\Omega _{\;j}^{i}\Omega _{\;k}^{\prime \prime j}\Omega _{\;i}^{\prime k}\); the GroupMath program will be able to provide the values of these two expressions as well as the corresponding formulas/strings.

The approach to Lorentz indices is similar: for each invariant expression the program keeps track of a tensor (with the SO(1, 3) Clebsch-Gordan factors) as well as a human-readable string involving the familiar scalars, spinors, gamma matrices, derivatives and field-strength tensors. If there are fermions in the interaction, one can use the \(\gamma ^{\mu }\) and C matrices to replace open spinor indices with vector indices. Then, from an expression containing only a set of open vector indices, one can construct all Lorentz invariants by contracting it in all possible ways with the metric and Levi-Civita tensors, \(\eta _{ab}\) and \(\epsilon _{abcd}\). As for derivatives, they should be distributed in all possible ways among the different fields in the operator. This approach of contracting indices and applying derivatives in every conceivable way leads to a highly redundant list of Lorentz-invariant expressions; such a list can easily be pruned by looking for linear relations among the various polynomials.
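The pruning step just mentioned can be illustrated with a minimal, generic sketch (in Python, purely for illustration; Sym2Int itself is a Mathematica package): candidate expressions are written as polynomials in the field components, their monomial coefficient vectors are collected, and any candidate that does not increase the rank of the resulting coefficient matrix is discarded.

# A generic sketch of pruning a redundant list of candidate invariants by
# looking for linear relations among them (illustration only).
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")

# Hypothetical candidate "invariants"; the third one equals the sum of the
# first two and should therefore be removed.
candidates = [x1*x4 - x2*x3, x1*x4 + x2*x3, 2*x1*x4]

def independent_subset(polys, variables):
    monomials = sorted({m for p in polys for m in sp.Poly(p, *variables).monoms()})
    rows, kept = [], []
    for p in polys:
        coeffs = sp.Poly(p, *variables).as_dict()
        row = [coeffs.get(m, 0) for m in monomials]
        if sp.Matrix(rows + [row]).rank() > len(rows):   # p adds a new direction
            rows.append(row)
            kept.append(p)
    return kept

print(independent_subset(candidates, (x1, x2, x3, x4)))
# [x1*x4 - x2*x3, x1*x4 + x2*x3]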

Dealing with gauge indices and Lorentz indices separately Building operators explicitly implies handling potentially very large polynomials of the numerous field components, so calculations are time-consuming even for low-dimensional interactions. At each step of the design of a code that handles explicit operators, one must therefore be aware of this problem and try to mitigate its impact.

In my opinion, an important step in reducing the computational and memory requirements of handling operators is to segregate gauge indices from the space-time indices of spinor and vector fields. If \(c_{g_{1}g_{2}\cdots }\) and \(\kappa _{l_{1}l_{2}\cdots }\) are the Clebsch-Gordan factors for the two types of indices, instead of working with the full operator \({\mathcal {O}}\equiv c_{g_{1}g_{2}\cdots }\kappa _{l_{1}l_{2}\cdots }\Phi _{g_{1},l_{1}}\Phi _{g_{2},l_{2}}\cdots \), it is preferable to manipulate, somehow, the simpler polynomials \({\mathcal {O}}_{G}\equiv c_{g_{1}g_{2}\cdots }\Phi _{g_{1}}\Phi _{g_{2}}\cdots \) and \({\mathcal {O}}_{L}\equiv \kappa _{l_{1}l_{2}\cdots }\Phi _{l_{1}}\Phi _{l_{2}}\cdots \), each involving only one type of index.

Some may consider this to be an elementary observation but, as the following example will show, it is not trivial to implement. Consider that both the gauge and the Lorentz groups are described by the SU(2) group, and ignore for simplicity that some fields (fermions) anti-commute. Now take some field \(\Phi \) which is a doublet under both groups: both \({\mathcal {O}}_{G}=\epsilon _{g_{1}g_{2}}\Phi _{g_{1}}\Phi _{g_{2}}\) and \({\mathcal {O}}_{L}=\epsilon _{l_{1}l_{2}}\Phi _{l_{1}}\Phi _{l_{2}}\) are identically zero, so whatever our algorithm for handling these two polynomials, we would conclude that there is no \(\Phi ^{2}\) operator. And yet, by considering the two indices together, we readily find that \({\mathcal {O}}\equiv \epsilon _{g_{1}g_{2}}\epsilon _{l_{1}l_{2}}\Phi _{g_{1},l_{1}}\Phi _{g_{2},l_{2}}\) is not null. What is happening here is that both the Lorentz indices \(l_{i}\) and the gauge indices \(g_{i}\) are contracted anti-symmetrically, so by considering each set of indices separately we get vanishing polynomials, while the operator \({\mathcal {O}}\) is symmetric (not anti-symmetric) under the exchange \(\left( g_{1},l_{1}\right) \leftrightarrow \left( g_{2},l_{2}\right) \).

A possible solution is to distinguish equal fields in the simpler polynomials \({\mathcal {O}}_{G}\) and \({\mathcal {O}}_{L}\); in the previous example, \({\mathcal {O}}_{G}=\epsilon _{g_{1}g_{2}}\Phi _{g_{1}}\Phi _{g_{2}}^{\prime }\) and \({\mathcal {O}}_{L}=\epsilon _{l_{1}l_{2}}\Phi _{l_{1}}\Phi _{l_{2}}^{\prime }\) are no longer null, as long as we keep \(\Phi ^{\prime }\ne \Phi \). A solution along these lines is viable, and indeed it has been successfully tested in Sym2Int. Without going into details, I will simply note that one must still account, in the end, for the fact that \(\Phi ^{\prime }\) is the same as \(\Phi \).Footnote 9 Things become even more complicated when fields have flavor.
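The example above is easy to verify explicitly; the following small check (a Python illustration only, not part of Sym2Int) confirms that the single-index contraction vanishes for identical commuting fields, that the doubly antisymmetric contraction does not, and that distinguishing \(\Phi \) from \(\Phi ^{\prime }\) restores a non-vanishing single-index polynomial.

# Symbolic check of the SU(2)xSU(2) example discussed in the text.
import sympy as sp

eps = sp.Matrix([[0, 1], [-1, 0]])          # two-dimensional Levi-Civita symbol

# A field carrying a single SU(2) index: the antisymmetric contraction of
# two identical (commuting) copies vanishes identically.
PhiG = [sp.Symbol(f"PhiG{g}") for g in range(2)]
O_G = sp.expand(sum(eps[g1, g2]*PhiG[g1]*PhiG[g2]
                    for g1 in range(2) for g2 in range(2)))
print(O_G)                                  # 0

# Keeping both indices together, the doubly antisymmetric contraction of the
# same field with itself is non-zero.
Phi = sp.Matrix(2, 2, lambda g, l: sp.Symbol(f"Phi{g}{l}"))
O = sp.expand(sum(eps[g1, g2]*eps[l1, l2]*Phi[g1, l1]*Phi[g2, l2]
                  for g1 in range(2) for g2 in range(2)
                  for l1 in range(2) for l2 in range(2)))
print(O)                                    # 2*Phi00*Phi11 - 2*Phi01*Phi10

# Distinguishing the two copies (Phi vs Phi'), as described in the text,
# keeps the single-index polynomial non-zero.
PhiGp = [sp.Symbol(f"PhiGp{g}") for g in range(2)]
O_Gp = sp.expand(sum(eps[g1, g2]*PhiG[g1]*PhiGp[g2]
                     for g1 in range(2) for g2 in range(2)))
print(O_Gp)                                 # PhiG0*PhiGp1 - PhiG1*PhiGp0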

Flavor It turns out that in some models (for example in SMEFT), some representations of the symmetry group are present more than once. We tend to account for this mysterious multiplicity by adding a flavor index to the relevant fields, \(\Phi \rightarrow \Phi _{i}\), and write Lagrangians with flavored tensors, such as the Yukawa matrices. Each of them is associated with what I previously called a Lagrangian term, which may correspond to many operators once the indices are expanded.

Flavor constitutes a significant complication. If all the fields in an interaction are distinct, such as in \(L_{i}^{*}L_{j}Q_{k}^{*}Q_{l}\),Footnote 10 accounting for multiple flavors is trivial. The problematic cases are those containing repeated fields, such as \(L_{i}^{*}L_{j}L_{k}^{*}L_{l}\), as they will have some underlying symmetry under permutations of the flavor indices. One approach to these troublesome cases might be to consider one flavor combination at a time, effectively expanding the flavor indices. Not only is such an approach very taxing computationally, but it is also not clear how the results are to be presented; ideally one would like to undo the expansion of the indices and write as few terms in the Lagrangian as possible.

As the operator dimension increases, so does the number of intervening fields. At some point we are bound to find cases such as \(Q_{i}Q_{j}Q_{k}L_{l}\) in SMEFT, where the same field appears three or more times. The relevant permutation group is then no longer abelian, and we may have to consider mixed (or multi-term) symmetries in the flavor indices (see [166]).

This problem is most acute in a model where only one index distinguishes all scalars and all fermions; such a model is therefore a good test-bed for Sym2Int's new code. Indeed, we may represent a generic EFT as a theory with an arbitrary number of real scalars \(\phi _{a}\) and Weyl fermions \(\psi _{i}\) with covariant derivatives

$$\begin{aligned} D_{\mu }\psi _{i}&= \partial _{\mu }\psi _{i}-\textrm{i}\,g\,t_{ij}^{A}V_{\mu }^{A}\psi _{j} \quad \text {and} \nonumber \\ D_{\mu }\phi _{a}&= \partial _{\mu }\phi _{a}-\textrm{i}\,g\,\theta _{ab}^{A}V_{\mu }^{A}\phi _{b}, \end{aligned}$$
(2.21)

where \(t^{A}\) and \(\theta ^{A}\) are generic hermitian matrices (the \(\theta ^{A}\) must also be anti-symmetric). It was precisely in this general framework that the two-loop RG equations for dimension 4 operators were derived in [169,170,171]. We may however extend it to non-renormalizable operators, not just to study complicated flavor symmetries, but also to derive important results for a general EFT. Indeed we are currently in the process of deriving the one-loop RG equations for this EFT (up to dimension-six interactions), as well as the matching relations between it and a general renormalizable UV model [172]. Once these are known, it becomes unnecessary to go back to the computation of loops and amplitudes for every single model; the task of computing RG equations and matching conditions for a particular model is reduced to writing a Lagrangian, which involves only some algebra and group theory.

3.5.4 Outlook

The Sym2Int code, as it exists now, lists and counts all the possible operators in an effective field theory, up to some cutoff in the mass dimension. It is currently being extended to also build them explicitly, while handling in a satisfactory way the fact that some fields have flavor. With the approach being pursued, the presence of flavor does not significantly affect the computational time, but it does pose some complicated problems of a conceptual nature which still have to be addressed. Side-stepping these for now, the code has already been used to compute all Green's basis operators in the SMEFT up to dimension 10, for an arbitrary number of fermion flavors, and their counting matches the known correct results.

3.6 SOLD: Towards the one-loop matching dictionary in the SMEFT

figure cl

The benefit of using EFTs to compute experimental observables for different UV models is undeniable, but EFTs are much more powerful than that. They really shine when the top-down approach is combined with power-counting arguments, which allow for a complete classification of observable UV models in the form of IR/UV dictionaries. The EFT is a double expansion in the mass dimension of the operators and the loop order of the WCs, with operators of higher dimension being less relevant at low energies and WCs of higher loop order being smaller than lower loop ones. Eventually, contributions of high enough mass dimension and/or loop order are smaller than the experimental precision and can be disregarded as unobservable.

Given a finite order in mass dimension and loop order, the complete set of UV models that contribute to an EFT up to these orders can be exhaustively classified. Once the complete classification is achieved one can go one step further and compute the resulting WCs for all the UV models in the list. This way we obtain true IR/UV dictionaries that relate the WCs of the EFT (and therefore, via the bottom-up approach, experimental observables) to all UV models that contribute to the EFT at the particular order at which the dictionary has been computed. These dictionaries can be used in an iterative way to obtain a complete map of the implications of experimental data for models of new physics. Indeed, given a particular experimental constraint or anomaly, one can list all models that are restricted by (or, in the case of anomalies, explain) that particular measurement. With this list, we can then obtain all other experimental implications of these models, which can be tested in a correlated way with different experimental data.

The tree-level, dimension-six dictionary for the SMEFT was computed a few years back [137], building on previous efforts [173,174,175,176]. It includes the most general extension of the SM with new scalars, fermions or vectors that contribute at tree level to the SMEFT operators up to mass-dimension six. While essential for the classification of large effects, this leading dictionary falls short when compared with the current precision of many experimental measurements, all the more so taking into account that certain WCs are only generated at one-loop order in weakly coupled extensions of the SM. The first step towards the calculation of the one-loop, dimension-six IR/UV dictionary for the SMEFT has been recently published in [177].

3.7 Towards the one-loop, dimension-six IR/UV dictionary in the SMEFT

Contrary to the tree-level dictionary, in which the list of new fields is finite (a total of 48 new scalars, fermions or vectors appear in the dictionary), at one-loop order the list is infinite, due to the fact that some contributions only constrain the quantum numbers of the product of multiple fields rather than each of them independently. Despite this infinite number of models, it is still possible to classify them in terms of a finite number of conditions. Still, the calculation of the complete one-loop dictionary for the SMEFT at dimension six is a formidable task and, with the help of the computer tool matchmakereft [48], we have just finished the first step towards it [177]. In particular, we have considered the most general extension of the SM with an arbitrary number of scalar and fermionic fieldsFootnote 11 that contribute at one-loop order to those operators in the Warsaw basis [4] which cannot be generated at tree level in any weakly-coupled extension of the SM. This includes all operators in the basis that contain at least one gauge field-strength tensor. Our results, which include the full classification of models, a partial list of the specific representations (up to a certain representation dimension, to be chosen by the user) and the actual values of the corresponding WCs, are too long to be reported in print form, and we have published them in electronic form as a Mathematica package called SOLD (for Smeft One Loop Dictionary), available via its Gitlab repository (it can also be trivially installed directly from a Mathematica notebook). Tools to create models suitable for the full one-loop matching using matchmakereft are also available within SOLD (see [177] for details). The fact that matchmakereft performs the matching calculation in a gauge-blind fashion for most of the computation has significantly helped the development of the dictionary.

3.8 MatchingDB: A format for matching dictionaries

Juan Carlos Criado

MatchingDB is a format for the storage, exchange and exploration of EFT matching results up to one-loop order. Its specification, both in human- and machine-readable forms, together with a Python interface, is located at the MatchingDB GitLab repository:

gitlab.com/jccriado/matchingdb

It aims to provide:

  • A unified language/tool-independent format for the communication of EFT matching results.

  • An efficient workflow for the practical use of matching dictionaries.

The format is particularly useful to store and publish large matching dictionaries, whose size might make it impractical to provide them in the form of human-readable equations and tables. The Python interface makes it easy to interact with them, and quickly obtain the relevant information for specific applications. An example of such a dictionary is the complete tree-level dictionary [137] between the dimension-six SMEFT and any of its UV completions, which is provided in MatchingDB format under the dictionaries directory in the MatchingDB repository. Other matching databases will be made available at the same place.

MatchingDB can also be employed as a data-exchange format between tools, allowing one to compare results from different matching codes, and providing an interface to connect them to packages for RG running and observable calculations. It will be implemented as an output format in MatchingTools [112] and Matchmakereft [48].

Some of the features currently offered by the combination of the MatchingDB format and the accompanying Python package are: listing the heavy fields and UV couplings that generate a given WC; listing the EFT contributions of a given set of heavy fields; providing LaTeX output, both for the UV Lagrangian and for matching corrections; and providing numerical output that matches WCxf [133] for the SMEFT.

Fig. 2
figure 2

Diagram summarizing the MatchingDB JSON schema. Rectangles represent JSON objects, with the property names in the left column and the corresponding value types on the right. Capsule-shaped items represent a tuple, with the type of each of its items given in a larger font and a short explanation of its meaning in a smaller font directly below

3.8.1 Format definition

MatchingDB data can be stored either as a plain-text JSON [178] file or as an SQLite [179] database. The format is defined by its JSON schema, which is given in the matchingdb.json file at the root of the MatchingDB repository, using the JSON Schema language [180]. A MatchingDB JSON file must comply with this schema, which can be checked using any of the standard tools for this purpose, such as the jsonschema Python package [181]. Alternatively, MatchingDB data can be stored as an SQLite database, with a structure based on the JSON schema. The SQLite representation may provide faster access to the information in larger databases. Below, I describe the format informally, starting with the JSON representation. A diagram summarizing it is provided in Fig. 2. The root value of the data must be an object with 4 name/value pairs, with names "fields", "couplings", "constants" and "terms". The corresponding values should be arrays whose values are objects with the following structures:

  • field Represents a heavy field in the UV theory that has been integrated out. It contains the following key/value pairs:

    • name(string): a name identifying the field.

    • real(boolean): determines whether the field is real or complex.

    • representation(string): the group theory representation. The format for this is free in principle, but intended to be self-consistent in each database.

    • latex(string): math-mode LaTeX code representing the field.

  • coupling Represents a coupling present in the UV theory. Its name/value pairs are:

    • name(string): a name identifying the coupling constant.

    • fields(array of string): a sorted list of the heavy fields that appear in the interaction.

    • real(boolean): determines whether the coupling constant is real or complex.

    • latex(string): math-mode LaTeX code representing the coupling constant.

    • latex_interaction(string): math-mode LaTeX code representing the full interaction term in the UV theory, including the coupling constant.

  • constant Represents a scalar constant such as \(\pi \) or a constant tensor such as the Kronecker delta, which appears in the matching results. Its name/value pairs are

    • name(string): a name identifying the constant.

    • value(nested array of numbers or object): the numerical value of the constant. If the constant is a scalar, it should be a number. If it is a tensor, it should be provided as a nested array of numbers. Finally, if the constant is complex, it should be given as an object of the form: {Re:...,Im:...}, with the values having the real scalar or tensor type.

    • latex(string): math-mode LaTeX code representing the constant.

  • term Represents a term appearing in the matching corrections to some WCs in the EFT:

    • coefficient(string): a name identifying the WC in which the term appears.

    • fields(array of string): a sorted list of the heavy fields that contribute to the term.

    • factors(array of *_factor): a list of the factors that appear in the term, as described below.

    • free_indices(array of string): a list of the free indices of the term, which must coincide with the ones of the corresponding coefficient, and appear at least once in the "factors" list.

The "fields" array in both couplings and terms should be sorted in lexicographic order. This allows them to be compared efficiently to a set of heavy fields when querying the database.

The full analytical formulas for the matching corrections to the WCs in the EFT are stored in the "factors" property of the term objects. The matching correction to any WC \({\mathcal {C}}\) is a sum of terms \({\mathcal {T}}^{(N)}\), with each term being a product of factors \({\mathcal {F}}^{(N)}_A\):

$$\begin{aligned} {\mathcal {C}} = \sum _N \mathcal {T^{(N)}}, \qquad {\mathcal {T}}^{(N)} = \prod _A {\mathcal {F}}^{(N)}_A. \end{aligned}$$
(2.22)

Each factor is assumed to be of one of 7 possible forms. Every form has an associated JSON type, all of them being tuples, that is, inhomogeneous arrays with a fixed type for each of its items:

\({\mathcal {F}} = b^n\):

(numerical_factor): a number b to some power n. Represented as a tuple \([b, n]\) of type: [number, number].

\({\mathcal {F}} = k^n_{ij\dots }\):

(constant_factor): a constant k, to some power n, with some flavor indices i, j, \(\ldots \) Represented as a tuple \([k, n, [i, j, \ldots ]]\) of type: [string, number, array of string].

\({\mathcal {F}} = g^{n(*)}_{ij\dots }\):

(coupling_factor): a coupling constant g, to some power n, possibly complex conjugated (\(c = \textrm{True}/\textrm{False}\)), with some flavor indices i, j, \(\ldots \) Represented as a tuple \([g, n, c, [i, j, \ldots ]]\) of type: [string, number, boolean, array of string].

\({\mathcal {F}} = M^n_{F,i}\):

(mass_factor): the mass of a field F, to some power n, with a flavor index i. Represented as a tuple \([F, i, n]\) of type: [string, string, number].

\({\mathcal {F}} = (M^2_{F,i} - M^2_{G,j})^n\):

(mass_difference_factor): the difference between the squared masses of two fields, to some power n. Represented as a tuple \([F, i, G, j, n]\) of type: [string, string, string, string, number].

\({\mathcal {F}} = \log (M^n_{F,i}/\mu ^n)\):

(log_mass_factor): the log of the mass of a field F, to some power n, with a flavor index i. Represented as a tuple \([F, i, n]\) of type: [string, string, number].

\({\mathcal {F}} = \log (M^n_{F,i}/M^n_{G,j})\):

(log_mass_difference_factor): the log of the ratio between the masses of two fields. Represented as a tuple \([F, i, G, j, n]\) of type: [string, string, string, string, number].

This completes the specification of the MatchingDB format in JSON form.
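To make the structure concrete, the following is a minimal, hypothetical database written as a Python object (following the JSON-to-Python mapping of Table 3). All labels, the representation string and the LaTeX code are invented here purely for illustration; the single term mimics the generic tree-level structure \(g^{*}_{ajl}\,g_{aik}/M^{2}_{F,a}\) discussed in the example of Sect. 3.8.3.

# A minimal, hypothetical MatchingDB database as a Python/JSON object.
# All names ("S", "yS", "cll"), the representation and the LaTeX strings
# are invented for illustration only.
import json

example_db = {
    "fields": [
        {"name": "S", "real": False,
         "representation": "(1, 3)_1", "latex": r"\mathcal{S}"},
    ],
    "couplings": [
        {"name": "yS", "fields": ["S"], "real": False,
         "latex": r"y_{\mathcal{S}}",
         "latex_interaction": r"(y_{\mathcal{S}})_{aij}\, \mathcal{S}_a\, \ell_i \ell_j"},
    ],
    "constants": [
        {"name": "pi", "value": 3.141592653589793, "latex": r"\pi"},
    ],
    "terms": [
        {   # schematic contribution (yS)*_{ajl} (yS)_{aik} / M_{S,a}^2
            "coefficient": "cll",
            "fields": ["S"],
            "factors": [
                ["yS", 1, True,  ["a", "j", "l"]],   # coupling_factor, conjugated
                ["yS", 1, False, ["a", "i", "k"]],   # coupling_factor
                ["S", "a", -2],                      # mass_factor M_{S,a}^{-2}
            ],
            "free_indices": ["i", "j", "k", "l"],    # "a" is summed over
        },
    ],
}

print(json.dumps(example_db, indent=2))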

MatchingDB data can also be stored as an SQLite database. The SQLite representation consists of 4 tables, named fields, couplings, constants and terms. Their columns take their names from the keys of the associated objects. Every object is stored as a row, with each of its values encoded as a string containing the corresponding JSON code.
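For the JSON representation, the schema compliance mentioned at the beginning of this subsection can be checked with a few lines of Python; a minimal sketch (the file paths are placeholders assumed to exist locally) could read:

# Validate a MatchingDB JSON file against the schema with the jsonschema
# package referenced in the text; file paths are placeholders.
import json
import jsonschema

with open("matchingdb.json") as f:          # schema at the repository root
    schema = json.load(f)
with open("my_dictionary.json") as f:       # the database to be checked
    data = json.load(f)

jsonschema.validate(instance=data, schema=schema)   # raises if non-compliant
print("The file complies with the MatchingDB schema.")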

3.8.2 Python interface

The matchingdb Python package is provided under the python directory of the MatchingDB repository. It can be installed by cloning the repository, moving into the python directory and running

figure cm

The package exposes two classes: JsonDB and SQLiteDB, for creating and querying MatchingDB dictionaries, in the JSON and the SQLite representations, respectively. Both classes have the same methods, with the same arguments and the same behaviour. An existing database can be loaded as:

figure cn

A new one can be created through:

figure co

where my_data is the data to be included, as a JSON value that validates against the MatchingDB schema, represented as a Python object through the mapping displayed in Table 3.

New items can be inserted into a database through:

figure cp

where table is one of "fields", "couplings", "constants", or "terms", and item is a dict complying with the corresponding sub-schema, which can be found at $defs/<table>/items in the full schema. Any changes made to a database must be saved with

figure cq

in order for them to persist.

Table 3 Mapping between JSON and Python types
Table 4 Summary of the querying methods of the JsonDB and SQLiteDB classes of the matchingdb Python package
Table 5 Possible values and behavior of the fields_criterion argument to the select_couplings() and select_terms() methods of the JsonDB and SQLiteDB classes. In the right column, fields refers to the argument of these methods with the same name

There are 4 methods to query a database:

  • select_fields()

  • select_couplings()

  • select_constants()

  • select_terms()

They filter the items of each of the corresponding arrays according to certain conditions, and prepare the selected items in the desired output format. A summary of their arguments is provided in Table 4. All of them are optional. If provided, the name, fields and coefficients arguments select only those items for which the corresponding property coincides with the given one. fields_criterion further configures the behaviour of the fields argument, following Table 5.

The output_format argument must be one of the following strings:

  • "raw" (default). The method returns a list of the selected items, represented as Python values, following Table 3.

  • "pandas". The method returns a Pandas [182] dataframe with a simplified version of the output. Provides an easy way to visually explore the data.

  • "latex". Returns math-mode LaTeX code representing the output. select_couplings() returns a single string with the formula for the selected sector of the UV Lagrangian. select_terms() returns a dict with coefficient names as keys and strings with their selected terms as values. The other 2 methods do not accept this option.

  • "numeric". Available in select_terms() only. Returns a function for the numerical evaluation of WCs. The additional parameters argument of select_terms() must be set to an iterable containing the names of all the parameters that will be set to non-vanishing values. This allows the output function to be prepared for efficient evaluation many times.

The function returned by select_terms() when output_format="numeric" takes two arguments:

  • parameters: dict. A dict whose keys are the UV parameters (couplings, masses and matching scale), and whose values are NumPy [183] arrays with one axis for each of the indices of the corresponding parameter. Masses are named "M_<field>", where "<field>" is the name of the field. The matching scale is "mu". The values of constants in the "constants" array of the database are included automatically.

  • expand_flavor: bool (optional).

    • If False (default), the output of the function is a dictionary with WC names as keys, and NumPy arrays as values, with one axis per EFT flavor index of the coefficient.

    • If True, flavor indices are expanded, and the output becomes a dictionary with keys of the form "<coeff>_<flavor_indices>", and values being either floats (if real) or dictionaries with keys "Re", "Im" and floats as values (if complex). If the names given to the coefficients in the database follow the conventions of WCxf, the output will be compatible with the values section of a WCxf WC file.

3.8.3 Example

The python/examples directory contains examples showcasing several features of the matchingdb package. Here, I will present a brief example of how to extract different types of information from the tree-level dimension-six SMEFT matching dictionary [137] (given in MatchingDB format at dictionaries/smeft_dim6_tree.json). To load this dictionary, one can do:

figure cr

One can then get a summary view of all terms that appear in the WC for the \({\mathcal {O}}_{ll}\) Warsaw operator through:

figure cs
figure ct

From this table, one can see which UV fields and couplings generate this operator at tree level. Information on these fields can be obtained as:

figure cu
figure cv

All the matching corrections to any WC induced by the \({\mathcal {S}}_1\) field can be found via:

figure cw
figure cx

This implies that \({\mathcal {S}}_1\) only contributes to \({\mathcal {O}}_{ll}\). The formula for this contribution in LaTeX code can be obtained through:

figure cy
figure cz

which renders as: \(+ \frac{\left( y_{{\mathcal {S}}_1}\right) _{ajl}^{*} \left( y_{{\mathcal {S}}_1}\right) _{aik}}{ M_{{\mathcal {S}}_1,a}^{2}}\).

This formula can also be numerically evaluated given the values of the UV parameters: the coupling \(y_{{\mathcal {S}}_1}\) and the mass \(M_{{\mathcal {S}}_1}\). This is done as:

figure da
figure db

The final output here has the structure of the values section of the WCxf format, and is thus suitable for interfacing with numerical tools for running and for the calculation of observables. The evaluator() function is optimized for multiple evaluations, with the corresponding database lookup being performed only once, in the select_terms() call.
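Schematically, and with placeholder names (the import path, the constructor call, the field label "S1", the coupling name "yS1" and the keyword-argument usage are assumptions for illustration; the authoritative listings are the ones shown above and those in python/examples), the numeric workflow reads:

# A sketch of the numeric-evaluation workflow; names and paths are placeholders.
import numpy as np
from matchingdb import JsonDB

db = JsonDB("dictionaries/smeft_dim6_tree.json")   # assumed way of loading

# Build the evaluator once; the database lookup happens in this call.
evaluator = db.select_terms(
    fields=["S1"],                      # heavy field(s) of interest
    output_format="numeric",
    parameters=["yS1", "M_S1", "mu"],   # parameters that will be non-zero
)

# Evaluate the WCs for given numerical values of the UV parameters.
wcs = evaluator(
    {
        "yS1": np.random.rand(1, 3, 3),  # one heavy flavor, 3x3 in SM flavor space
        "M_S1": np.array([2500.0]),      # heavy mass
        "mu": np.array(2500.0),          # matching scale
    },
    expand_flavor=True,                  # WCxf-compatible flat output
)
print(wcs)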

3.9 RG equations in generic EFTs

Mikołaj Misiak and Ignacy Nałȩcz

The SMEFT RG equations at one loop were determined in Refs. [8,9,10], and the RG equations for the LEFT WCs have been determined in the past, depending on phenomenological needs, sometimes up to the four-loop level [29]. However, the two-loop SMEFT RG equations remain unknown. Instead of deriving the RG equations separately in various EFTs, one can consider a generic case, as has been done for renormalizable models (see below). Particular results are then found by substitutions. Our goal in the current (ongoing) project is to evaluate the one-loop RG equations for all the dimension-six operators in the generic case.

3.9.1 Operator classification

We shall consider EFTs of the LEFT and SMEFT type, where the gauge group is an arbitrary finite product of finite-dimensional Lie groups. Real scalars \(\phi _a\) and left-handed spin-\(\frac{1}{2}\) fermions \(\psi _k\) are going to be the matter fields. Obviously, any complex scalar can always be written in terms of two real ones, while right-handed spin-\(\frac{1}{2}\) fermions can always be described as charge-conjugated left-handed ones.

To simplify our calculation in its initial steps, we assume a discrete symmetry \(\{\phi \rightarrow -\phi , ~\psi \rightarrow i\psi \}\). It turns out to forbid all odd-dimensional operators. However, it gives no restriction on even-dimensional ones when they have already been required to be Lorentz-invariant. In more generic EFTs, where no such discrete symmetry is imposed, RG equations for odd-dimensional operators can be obtained from the even-dimensional case by treating one of the scalar fields as an auxiliary gauge-singlet that takes a fixed vacuum expectation value.

The generic EFT Lagrangian we are going to consider reads

$$\begin{aligned} {{\mathcal {L}}}&= -\frac{1}{4} F^A_{\mu \nu } F^{A\,\mu \nu } + \frac{1}{2} (D_\mu \phi )_a (D^\mu \phi )_a + i {\bar{\psi }}_j (\not \!\!D \psi )_j\nonumber \\&\quad - \frac{1}{2} m^2_{ab} \phi _a\phi _b -\frac{1}{4!} \lambda _{abcd} \phi _a \phi _b \phi _c \phi _d\nonumber \\&\quad -\frac{1}{2} \left( Y^a_{jk} \phi _a \psi ^T_j C \psi _k + \mathrm{h.c.}\right) + {{\mathcal {L}}}_{\mathrm{g.f.}} + {{\mathcal {L}}}_{\textrm{FP}}\nonumber \\&\quad + \frac{1}{\Lambda ^2} \sum Q_N + {\mathcal O}\left( \frac{1}{\Lambda ^4}\right) , \end{aligned}$$
(2.23)

where \(Q_N\) stand for linear combinations of dimension-six operators multiplied by their WCs.

Let us absorb the gauge couplings into the structure constants and generators. Then \(F^A_{\mu \nu } = \partial _\mu V^A_\nu - \partial _\nu V^A_\mu - f^{ABC} V^B_\mu V^C_\nu \), \((D_\rho F_{\mu \nu })^A = \partial _\rho F^A_{\mu \nu } - f^{ABC} V_\rho ^B F^C_{\mu \nu }\), \((D_\mu \phi )_a = \left( \delta _{ab}\partial _\mu + i\theta ^A_{ab} V^A_\mu \right) \phi _b\), and \((D_\mu \psi )_j = \left( \delta _{jk}\partial _\mu + i t^A_{jk} V^A_\mu \right) \psi _k\). RG equations for the couplings of the dimension-four interactions in Eq. (2.23) were calculated up to two loops in a series of papers by Machacek and Vaughn almost 40 years ago [169,170,171]. Some corrections to their results were found more recently in Refs. [184, 185]. Even at the one-loop level, it is only the latter paper [185] that we fully agree with. Generic RG equations for the gauge and Yukawa couplings at the four- and three-loop levels, respectively, were recently determined in Refs. [186, 187] by combining information on results in various specific models. Earlier three-loop results for the gauge coupling beta functions can be found in Refs. [188, 189].

We perform our one-loop calculation off shell, using the background-field gauge method. Therefore, we need to begin by classifying all the dimension-six operators in the off-shell basis. Such operators are gauge invariant but many linear combinations of them vanish by the Equations of Motion (EOM). Once the RG equations in the off-shell basis are found, one needs to pass to the on-shell basis, where no linear combination of operators vanishes by the EOM. Only in the latter case are the RG equations gauge-parameter independent.

The off-shell basis we use consists of the following 22 termsFootnote 12

(2.24)

where \(W^{(N)}\) contain both the WCs and the necessary Clebsch-Gordan coefficients that select singlets from various tensor products of the gauge group representations. In general, each \(W^{(N)}\) contains many independent WCs, and many gauge-singlet operators are present in each \(Q_N\).

After applying the EOM, we find an on-shell basis that consists of 11 operators only. They are conveniently chosen as \(\{Q_1,Q_2,Q_5,Q_6,Q_8,Q_9,Q_{10},Q_{11},Q_{17},Q_{18},Q_{19}\}\). There is a subtlety for \(Q_2\) whose W-coefficient has more symmetries in the on-shell basis, namely \(W^{(2)}_{abcd} = W^{(2)}_{cdab}\) and \(W^{(2)}_{(abcd)} = 0\), apart from just \(W^{(2)}_{abcd} = W^{(2)}_{(ab)(cd)}\) in the off-shell case.

3.9.2 Sample off-shell results

As a sample off-shell result, let us quote the RG equation we have obtained for \(W^{(1)}\) in the Feynman-’t Hooft gauge. Terms that are due to the presence of fermions \(\psi \) are going to be denoted by \((\ldots )_\psi \) in what follows. The RG equation reads

$$\begin{aligned} \mu \frac{d W^{(1)}_{abcdef}}{d\mu }&= \frac{1}{16\pi ^2} \left( 2 X^{(1)} + X^{(2)} + X^{(3)}\right. \nonumber \\&\left. - 6 X^{(4)} + 2 X^{(5)} + 2 X^{(6)} + 2 X^{(7)}\right. \nonumber \\&\quad \left. -\, 12 X^{(8)} + 6 X^{(9)} + (\ldots )_\psi \right) _{abcdef} \end{aligned}$$
(2.25)

where

$$\begin{aligned} X^{(1)}_{abcdef}&= \frac{1}{48} \sum \theta ^A_{ag} \theta ^A_{bh} W^{(1)}_{cdefgh},\nonumber \\ X^{(2)}_{abcdef}&= \frac{2\pi ^2}{15} \sum (\gamma _\phi )_{ag} W^{(1)}_{bcdefg},\nonumber \\ X^{(3)}_{abcdef}&= \frac{1}{48} \sum \lambda _{abgh} W^{(1)}_{cdefgh},\nonumber \\ X^{(4)}_{abcdef}&= \frac{1}{4} \sum \theta ^A_{ag} \theta ^A_{bh} \theta ^B_{cg} \theta ^B_{di} W^{(2)}_{hief},\nonumber \\ X^{(5)}_{abcdef}&= \frac{1}{16} \sum \lambda _{adhi} \lambda _{bcgi} W^{(2)}_{ghef},\nonumber \\ X^{(6)}_{abcdef}&= \frac{1}{8} \sum \theta ^A_{ei} \theta ^A_{fj} \lambda _{adhi} \lambda _{bcgj} W^{(3)}_{gh},\nonumber \\ X^{(7)}_{abcdef}&= \frac{1}{16} \sum \lambda _{aeij} \lambda _{bfhj} \lambda _{cdgi} W^{(3)}_{gh},\nonumber \\ X^{(8)}_{abcdef}&= \frac{1}{4} \sum \theta ^A_{cg} \theta ^A_{dh} \theta ^B_{bh} \theta ^C_{ag} W^{(5)BC}_{ef},\nonumber \\ X^{(9)}_{abcdef}&= \frac{1}{2} \sum \theta ^A_{fi} \theta ^B_{eh} \theta ^C_{cg} \theta ^C_{di} \theta ^D_{ag} \theta ^D_{bh} W^{(7)AB}. \end{aligned}$$
(2.26)

The sums go over such permutations of uncontracted indices that make each \(X^{(N)}_{abcdef}\) totally symmetric. The scalar field anomalous dimensions in \(X^{(2)}\) are given by

$$\begin{aligned} (\gamma _\phi )_{ab} = \frac{1}{32\pi ^2} \left[ Y^a_{ij} Y^{b*}_{ij} + Y^b_{ij} Y^{a*}_{ij} -4 \theta ^A_{ac} \theta ^A_{cb} \right] . \end{aligned}$$
(2.27)
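The permutation sums in Eq. (2.26) amount to a symmetrization over the free indices. As a generic illustration (in Python, and not the precise minimal permutation sets actually used in Eq. (2.26)), summing a tensor over all permutations of its free indices produces a totally symmetric object:

# Illustration of index symmetrization: summing over all permutations of the
# free indices of a tensor yields a totally symmetric result.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4, 4))          # a generic rank-3 tensor

X_sym = sum(np.transpose(X, perm) for perm in permutations(range(3)))

# Check symmetry under an arbitrary transposition of two indices.
print(np.allclose(X_sym, np.transpose(X_sym, (1, 0, 2))))   # True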

3.9.3 Automatic computations

Our calculation begins by generating the Feynman rules from the Lagrangian (2.23) with the help of FeynRules [155]. Next, FeynArts [190] is used to construct expressions for all the necessary one-loop diagrams. The calculation of their divergent parts is very simple, and is most efficiently achieved with the help of a self-written code. Simplification of the evaluated results requires applying various identities that stem from gauge invariance and/or the EOM (see the next section). For this purpose, the code xTensor [191] is very helpful, as it allows us to impose all the relevant symmetries of the considered tensors in a straightforward manner. However, full automation of the necessary simplifications has not yet been achieved, which is the main reason why our project is still quite far from completion. New ideas are currently being tested.

3.9.4 Simplification methods

Gauge invariance of the theory imposes some identities on the couplings and W-coefficients. To derive such an identity for the Yukawa couplings, one considers an infinitesimal gauge transformation

$$\begin{aligned} Y^a_{jk}\phi _a(\psi _j)^T CP_L\psi _k&\rightarrow Y^a_{jk}(\delta _{ab}-i\epsilon ^A \theta ^A_{ab})\phi _b \nonumber \\&\quad \times \left[ (\delta _{jl}-i\epsilon ^B t^B_{jl})\psi _l\right] ^T \nonumber \\&\quad \times CP_L(\delta _{kn}-i\epsilon ^C t^C_{kn})\psi _n.\nonumber \\ \end{aligned}$$
(2.28)

Since the Yukawa term is gauge invariant,

$$\begin{aligned} \phi _a(\psi _j)^T CP_L\psi _k \epsilon ^A[ -\theta ^A_{ab}Y^b_{jk}+(t^A)^T_{jl} Y^a_{lk}+Y^a_{jl}t^A_{lk} ] = 0,\nonumber \\ \end{aligned}$$
(2.29)

it follows that

$$\begin{aligned} (t^A)^T_{jl} Y^a_{lk}+Y^a_{jl}t^A_{lk}-\theta ^A_{ab}Y^b_{jk}=0. \end{aligned}$$
(2.30)

A generic, purely fermionic operator can be written as

$$\begin{aligned} W^{(n)}_{j_1j_2...k_1k_2...l_1l_2...}\psi _{k_1}^T\omega C\psi _{k_2}\ldots \overline{\psi }_{l_1}\omega C\overline{\psi }_{l_2}^T\ldots \overline{\psi }_{j_1}\gamma \psi _{j_2}\ldots , \end{aligned}$$
(2.31)

where \(\omega \), which contracts spinor indices, is either the identity or the \(\sigma _{\mu \nu }\) matrix. Some of the spinor fields may be replaced by their covariant derivatives of arbitrary degree. For such operators, the quantity that must vanish due to gauge invariance reads

$$\begin{aligned} &t^{E}_{m k_1} W^{(N)}_{j_1 j_2...m k_2...l_1l_2...}\,+\,t^{E}_{m k_2 }W^{(N)}_{j_1 j_2...k_1m...l_1l_2...}\nonumber \\ &\quad -\,t^{E*}_{m l_1} W^{(N)}_{j_1 j_2...k_1 k_2...ml_2...}\,-\,t^{E*}_{m l_2 }W^{(N)}_{j_1 j_2...k_1k_2...l_1m...}\nonumber \\ &\quad -\,t^{E*}_{m j_1} W^{(N)}_{m j_2...k_1k_2...l_1l_2...}\,+\,t^{E}_{m j_2 }W^{(N)}_{j_1 m...k_1k_2...l_1l_2...}+\,\ldots \,. \end{aligned}$$
(2.32)

Analogously, for the W-coefficients of operators with bosonic fields only, the quantity that must vanish reads

$$\begin{aligned} &if^{B E A_1} W^{(N) B A_2...A_k}_{a_1...a_m}\,+\,\ldots \,+\,if^{B E A_k}W^{(N) A_1...B}_{a_1...a_m} \nonumber \\ &\quad +\,\theta ^{E}_{b a_1} W^{(N) A_1...A_k}_{b a_2...a_m}\,+\,\ldots \,+\,\theta ^{E}_{b a_m}W^{(N) A_1...A_k}_{a_1...b}. \end{aligned}$$
(2.33)

Both types of terms arise on the r.h.s. for operators that involve both fermionic and bosonic fields.

Once the RG equations in the off-shell basis are found, we should pass to the on-shell ones by using the EOM. Let us illustrate this using the W-coefficient of \(Q_5\). We start from the observation that \(Q_7\) is reducible by the gauge-field EOM

$$\begin{aligned} (D_{\mu }F^{\mu \nu })^A= - i \theta ^A_{ab}\phi _b (D^{\nu }\phi )_a+ (\ldots )_\psi + {\mathcal {O}}(\frac{1}{\Lambda }). \end{aligned}$$
(2.34)

An operator \(\widetilde{Q_7}\) that vanishes on-shell is obtained by a simple redefinition

$$\begin{aligned} {\widetilde{Q}}_{7}:= Q_7+\frac{1}{2} Q_4^{\prime } +\frac{1}{4} Q_5^{\prime } + (\ldots )_\psi , \end{aligned}$$
(2.35)

with

$$\begin{aligned} Q_4^\prime&:= i W^{(7)\;AC}\theta ^{C}_{ab}(D^{\mu }\phi )_a(D^{\nu }\phi )_b F^A_{\mu \nu },\nonumber \\ Q_5^{\prime }&:= \frac{1}{4} \left( \sum W^{(7)\;AC} \theta ^{C}_{ac}\theta ^{B}_{bc} \right) \phi _a \phi _b F^A_{\mu \nu } F^{B\;\mu \nu }. \end{aligned}$$
(2.36)

Next, \(Q_4^\prime \) and \(Q_5^\prime \) are absorbed into \(Q_4\) and \(Q_5\):

$$\begin{aligned} {\overline{W}}^{(4)\;}{}^{A}_{ab}&:= W^{(4)\;}{}^{A}_{ab}-i W^{(7)\;AC}\theta ^{C}{}_{ab},\nonumber \\ {\overline{W}}^{(5)\;}{}^{AB}_{ab}&:= W^{(5)\;}{}^{AB}_{ab}- \frac{1}{4} W^{(7)\;AC} \theta ^{C}_{ac}\theta ^{B}_{bc}\,. \end{aligned}$$
(2.37)

To get an on-shell expression for the W-coefficient of \(Q_5\), another redefinition is necessary:

$$\begin{aligned} \widetilde{Q_{4}}:= Q_4+\tfrac{1}{4} Q^{\prime \prime }_5+(\ldots ), \end{aligned}$$
(2.38)

with

$$\begin{aligned} Q^{\prime \prime }_5:=\tfrac{i}{4}\sum {\overline{W}}^{(4)}_{ac}{}^A\theta ^B_{cb}\phi _a \phi _b F^A_{\mu \nu } F^{B\;\mu \nu }. \end{aligned}$$
(2.39)

It yields

$$\begin{aligned} {\widetilde{W}}^{(5)\;}{}^{AB}_{ab}&= {\overline{W}}^{(5)\;}{}^{AB}_{ab}+\frac{i}{4} \sum {\overline{W}}^{(4)\;}{}^{A}_{ac}\theta ^{B}_{bc}= W^{(5)\;}{}^{AB}_{ab}\nonumber \\&+\frac{i}{4} \sum W^{(4)\;}{}^{A}_{ac}\theta ^{B}_{bc}\,. \end{aligned}$$
(2.40)

Finally, applying \(\mu \frac{d}{d\mu }\) to both sides of the above equation, one obtains the on-shell RG equation for \({\widetilde{W}}^{(5)}\):

$$\begin{aligned} \mu \frac{d{\widetilde{W}}^{(5)\;}{}^{AB}_{ab}}{d\mu } &= \mu \frac{dW^{(5)\;}{}^{AB}_{ab}}{d\mu }\nonumber \\ &\quad + \frac{i}{4} \sum \left( \mu \frac{d W^{(4)\;}{}^{A}_{ac}}{d\mu }\theta ^{B}_{bc} + W^{(4)\;}{}^{A}_{ac}\, \theta ^{\underline{B}}_{bc}\, \gamma _{\underline{B}} \right) , \end{aligned}$$
(2.41)

where

$$\begin{aligned} \gamma _{B}=\frac{1}{48\pi ^2} \left[ -11 C_2(G_B) +\frac{1}{2}\text {tr}(\theta ^A_{\underline{B}}\theta ^A_{\underline{B}}) +2\text {tr}(t^A_{\underline{B}} t^A_{\underline{B}}) \right] \nonumber \\ \end{aligned}$$
(2.42)

and  \(C_2(G_{\underline{B}}) \delta ^{\underline{B}C} = f^{BDE} f^{CDE}\).
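As a small numerical illustration of this definition (in Python, for the SU(2) case, where \(f^{ABC}=\epsilon ^{ABC}\) and one expects \(C_2(G)=2\)):

# Check C2(G) delta^{BC} = f^{BDE} f^{CDE} for SU(2) structure constants.
import numpy as np

f = np.zeros((3, 3, 3))
for (a, b, c), s in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                     (1, 0, 2): -1, (2, 1, 0): -1, (0, 2, 1): -1}.items():
    f[a, b, c] = s

casimir = np.einsum("bde,cde->bc", f, f)
print(casimir)                      # 2 * identity, i.e. C2(SU(2)) = 2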

3.9.5 Sample On-Shell Results

Three out of six on-shell-irreducible bosonic operators, namely \(Q_6\), \(Q_8\) and \(Q_9\), transform trivially to the on-shell basis. The corresponding RG equations that we find in their case take the form

$$\begin{aligned} \mu \frac{dW^{(6)}{}^{AB}_{ab}}{d\mu }&= \frac{1}{16\pi ^2} \left( 2 Z^{(1)}\!+\!2 Z^{(2)}\!+\!8 Z^{(3)}\!-\!8 Z^{(4)}\right. \nonumber \\&\left. + \!2 Z^{(5)}\!+\!Z^{(6)}\!+\!Z^{(7)}\!+\!2 Z^{(8)}\!+\!6 Z^{(9)}\right) ^{AB}_{ab},\nonumber \\ \mu \frac{dW^{(8)}{}^{ABC}}{d\mu }&= \frac{1}{16\pi ^2} [ 12\,C_2(G_{\underline{B}})+48\pi ^2\,\gamma _{\underline{B}} ]\,W^{(8)}{}^{A\underline{B}C},\nonumber \\ \mu \frac{dW^{(9)}{}^{ABC}}{d\mu }&= \frac{1}{16\pi ^2} [ 12\,C_2(G_{\underline{B}})+48\pi ^2\,\gamma _{\underline{B}}]\,W^{(9)}{}^{A\underline{B}C}, \end{aligned}$$
(2.43)

where

$$\begin{aligned} Z^{(1)AB}_{ab}&= W^{(6)}{}^{AB}_{cd}\, \theta ^{C}{}_{ac}\, \theta ^{C}{}_{bd}\,,&\nonumber \\ Z^{(2)AB}_{ab}&= \sum W^{(6)}{}^{BC}_{bd}\, \theta ^{A}{}_{cd} \,\theta ^{C}{}_{ac}\,,\nonumber \\ Z^{(3)AB}_{ab}&= C_2(G_{\underline{B}}) W^{(6)}{}^{A\underline{B}}_{ab}\,,&\nonumber \\ Z^{(4)AB}_{ab}&= f^{ACE} f^{BDE} W^{(6)}{}^{CD}_{ab}\,,\nonumber \\ Z^{(5)AB}_{ab}&= 16\pi ^2 W^{(6)}{}{}^{A\underline{B}}_{ab}\, \gamma _{\underline{B}}\,,&\nonumber \\ Z^{(6)AB}_{ab}&= 8\pi ^2\sum W^{(6)}{}^{AB}_{bc} (\gamma _{\phi }){} _{ac}\,,\nonumber \\ Z^{(7)AB}_{ab}&= W^{(6)}{}^{AB}_{cd} \lambda _{abcd}\,,&\nonumber \\ Z^{(8)AB}_{ab}&= i\sum W^{(9)}{}^{BCD} \,\theta ^{A}{}_{ac}\, \theta ^{C}{}_{bd} \,\theta ^{D}{}_{cd}\,,\nonumber \\ Z^{(9)AB}_{ab}&= \tfrac{i}{2}\sum W^{(9)}{}^{BCD} \,\theta ^{A}{}_{cd} \,\theta ^{C}{}_{ac} \,\theta ^{D}{}_{bd}\,. \end{aligned}$$
(2.44)

The RG equations for \(Q_8\) and \(Q_9\) in Eq. (2.43) agree with those in Refs. [192, 193]. As far as \(Q_6\) in the generic case is concerned, we are not aware of any published one-loop RG equation so far. However, we have checked that in the SMEFT case we reproduce the RG equations found in Refs. [8,9,10]. Such a comparison tests the sum \(2Z^{(5)}+Z^{(6)}+Z^{(7)}\) in Eq. (2.43).

3.9.6 Summary

Our goal is to evaluate the one-loop RG equations for dimension-six operators in a generic class of EFTs. In the absence of fermions, the calculation has been completed [194] in the off-shell basis, with partial reduction to the on-shell one. As far as the operators with fermions are concerned, only partial off-shell results have been obtained so far [195]. The main issue that remains to be resolved is the automation of the tensor-expression simplifications that must be performed after evaluating the necessary Feynman diagrams. Eventually, once our project is completed, it will be possible to determine the one-loop RG equations for many practically relevant specific EFTs via straightforward substitutions.

3.10 Two-loop Renormalization for \(\chi \)QED in the BMHV Scheme

Hermès Bélusca-Maïto, Amon Ilakovac, Marija Mador-Božinović, Paul Kühler, Dominik Stöckinger, and Matthias Weißwange

Dimensional regularization (DReg) is an indispensable tool for practical calculations at the (multi-)loop level. Its popularity is due not least to the manifest preservation of the symmetries of the classical action of vector-like theories to all loop orders, which greatly aids both renormalizability proofs and the practical determination of counterterms [74]. Many powerful theorems, such as the quantum action principle, can be rigorously derived in this framework. However, it is known from experiment that the world is described by chiral gauge theories, such as the electroweak sector of the SM. For such theories no invariant regulator is known, and dimensional schemes like DReg clash with their chiral nature. Technically, this is reflected in the definition of \(\gamma _5\) (or, equivalently, \(\varepsilon ^{\mu \nu \rho \sigma }\)) in DReg, where inconsistencies arise from relying on the simultaneous validity of the customary 4-dimensional relations in the dimensionally regularized setting.

Therefore, one cannot literally apply the familiar relations involving \(\gamma _5\) but must define an appropriate scheme. In the following we summarize the main results presented in Refs. [76, 77, 83]. We adopt the Breitenlohner–Maison–’t Hooft–Veltman (BMHV) [69, 70] scheme, which treats Lorentz covariants as being comprised of a 4-dimensional (barred) and \(-2\epsilon \)-dimensional evanescent (hatted) part,

$$\begin{aligned} g^{\mu \nu }={\bar{g}}^{\mu \nu }+{\hat{g}}^{\mu \nu }. \end{aligned}$$
(2.45)

Inconsistencies are avoided by giving up the anti-commutativity of \(\gamma _5\),

$$\begin{aligned} \{\gamma _5,\gamma ^{\mu }\} = 2\gamma _5{\hat{\gamma }}^{\mu } \, , \{\gamma _5,\bar{\gamma }^{\mu }\} = 0 \, , [\gamma _5,{\hat{\gamma }}^{\mu }] = 0 \, . \end{aligned}$$
(2.46)

Its distinguishing feature is its consistency, and hence reliability, at the multi-loop level, but it introduces spurious symmetry breakings. There are a number of alternative schemes (cf. [74], also references in [76, 77]), like the naive scheme [196] or reading-point prescriptions [197, 198], which are computationally simpler, but their consistency at higher loop orders is generally not ensured.

At the level of the full quantum theory, expressed in terms of the 1-particle-irreducible (1-PI) quantum effective action \(\Gamma \), the Becchi–Rouet–Stora–Tyutin (BRST) symmetry is formulated by the Slavnov–Taylor identity,

$$\begin{aligned} {\mathcal {S}}(\Gamma ) = \int {\text {d}}^{4}{x}\; \frac{\delta \Gamma }{\delta \phi _i(x)}\frac{\delta \Gamma }{\delta K_{\phi _i}(x)}=0 \,, \end{aligned}$$
(2.47)

with generic quantum fields \(\phi \) and corresponding BRST sources \(K_\phi \). Its validity to all orders is an essential ingredient in ensuring unitarity and physicality of the S-matrix. Hence we require that our full quantum theory obeys the Slavnov–Taylor identity. It turns out that any violation of BRST symmetry, and hence Eq. (2.47), can be related to the insertion of a local operator into the effective action by the regularized quantum action principle [70, 199, 200],

$$\begin{aligned} {\mathcal {S}}(\Gamma _\text {DRen}) = \Delta \cdot \Gamma _\text {DRen} \,. \end{aligned}$$
(2.48)

Evaluating the r.h.s. of Eq. (2.48) determines the finite symmetry restoring counterterms without the need to compute products of Green’s functions including higher-order terms from the l.h.s. of Eq. (2.47).

We, therefore, aim for the systematic determination of the full counterterm structure—comprised of non-symmetric, singular counterterms needed for consistency at higher orders, and finite, symmetry-restoring counterterms—for various toy models of increasing complexity and loop orders and, eventually, the SM. So far we have studied the scheme at the one-loop level for a generic Yang-Mills theory [76] (see also [78, 201] for related works) and at the two-loop level for an abelian model [77]. The latter will serve to illustrate our methods in this article. For an extensive review on the topic of chiral gauge theories and renormalization see [83].

3.10.1 Application to \(\chi \)QED

We consider an Abelian gauge theory with a family of \(N_f\) right-handed fermions [77], which we denote as \(\chi \)QED. The d-dimensional treatment affects only the fermionic sector non-trivially, where the kinetic term is kept d-dimensional while the interaction term is purely 4-dimensional:

$$\begin{aligned} S_\text {ferm} = \int {\text {d}}^{d}{x}\; \left( i\, \overline{\psi }_i \gamma ^{\mu } \partial _{\mu } \psi _i + e\, {\mathcal {Y}}_{Rij}\, \overline{\psi }_i\, {\bar{\gamma }}^{\mu } {\bar{A}}_{\mu }\, {\mathbb {P}}_\text {R}\, \psi _j \right) , \end{aligned}$$
(2.49)

where \({\mathcal {Y}}_{Rij}=(\textrm{diag}({\mathcal {Y}}_R^{1},\dots ,{\mathcal {Y}}_R^{N_f}))_{ij}\) is the hypercharge matrix for the abelian model. The full regularized action at tree-level becomes

$$\begin{aligned} \begin{aligned} \hspace{-1pt} S_0&= (2.49) + \int {\text {d}}^{d}{x}\; \left( - \frac{1}{4} F_{\mu \nu } F^{\mu \nu } - \frac{1}{2 \xi } (\partial _\mu A^\mu )^2 \right. \\&\left. - {\bar{c}}\partial ^2 c + K_\phi s_d\phi \right) \\&\equiv (2.49) + S_{AA} + S_\text {g-fix} + S_{{\bar{c}}c} + S_{\rho c} + S_{{\bar{R}} c \psi } + S_{\overline{\psi } c R} , \end{aligned} \end{aligned}$$
(2.50)

with fields \(\phi \in \{A^{\mu },\overline{\psi }^i,\psi ^i\}\) and sources \(K_\phi \in \{\rho ^{\mu },R^i,{\overline{R}}^i\}\). One can see that the BRST symmetry of the regularized action is violated already at tree level, giving rise to the following breaking vertex:

[diagrammatic expression of the breaking vertex \({\widehat{\Delta }}\)]

The standard UV-renormalization of the model at one loop order leads to the singular counterterm action,

$$\begin{aligned} S_\text {sct}^{(1)} = \bigl ({\text {symmetric}}\bigr ) - \frac{\hbar \, e^2}{16 \pi ^2 \epsilon } \frac{{{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^2]}{3} \int {\text {d}}^{d}{x}\; \frac{1}{2} {\bar{A}}_\mu {\widehat{\partial }}^2 {\bar{A}}^\mu \,,\nonumber \\ \end{aligned}$$
(2.52)

where the unspecified terms correspond to 4-dimensional multiplicative renormalization. The last term, which is evanescent and not gauge invariant, is necessary to cancel the divergence in (). At one-loop order the symmetry restoration is rather straightforward, with Eq. (2.48) boiling down to

[divergent one-loop diagrams with one insertion of the \({\widehat{\Delta }}\)-vertex]

These are the only (divergent) contributing diagrams. The finite part of () leads to the finite, non-invariant BRST-restoring counterterm action \(S^{(1)}_\text {fct}\), whose structure [76] is the same at two loops and will be highlighted below.

Fig. 3 The two-loop diagrams with one insertion of the tree-level \({\widehat{\Delta }}\)-vertex and the relevant counterterms for the subdivergences

At order \(\hbar ^{>1}\) (using \( \hbar \) as the loop-counting parameter), Eq. (2.48) implies

$$\begin{aligned} {\mathcal {S}}_d(S_0+S_\text {ct}) = ({\widehat{\Delta }}+\Delta _\text {ct})\cdot \Gamma _\text {DRen} \,, \end{aligned}$$
(2.54)

and more explicitly at two-loop order:

$$\begin{aligned} \big ({\widehat{\Delta }}\cdot \Gamma _\text {subren}^{(2)} {+} \Delta _\text {sct}^{(1)}\cdot \Gamma _\text {subren}^{(1)}{+} \Delta _\text {fct}^{(1)}\cdot \Gamma _\text {subren}^{(1)} {+} \Delta _\text {sct}^{(2)}\big )_\text {div}&= 0 \, , \end{aligned}$$
(2.55a)
$$\begin{aligned} \mathop {\text {LIM}}_{d \rightarrow 4} \big ({\widehat{\Delta }}\cdot \Gamma _\text {subren}^{(2)} {+} \Delta _\text {sct}^{(1)}\cdot \Gamma _\text {subren}^{(1)}{+}\Delta _\text {fct}^{(1)}\cdot \Gamma _\text {subren}^{(1)} {+} \Delta _\text {fct}^{(2)}\big )_\text {fin}&= 0 \,. \end{aligned}$$
(2.55b)

The singular counterterms have the same structure as at one-loop (including the evanescent \(\int {\text {d}}^{d}{x}\; \frac{1}{2} {\bar{A}}_\mu {\widehat{\partial }}^2 {\bar{A}}^\mu \)), except for a novel non-gauge invariant, 4-dimensional piece,

$$\begin{aligned} S_\text {sct}^{(2,\,2)} \supset {-}\left( \frac{\hbar \, e^2}{16 \pi ^2}\right) ^2 \frac{1}{3 \epsilon } \sum _j ({\mathcal {Y}}_R^j)^2 \left( \frac{5}{2} ({\mathcal {Y}}_R^j)^2 - \frac{2}{3} {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^2] \right) \overline{S^{j}_{\overline{\psi }\psi _R}} \,.\nonumber \\ \end{aligned}$$
(2.56)

There exist additional divergent one-loop diagrams containing one insertion of a finite counterterm \(\overline{S_\text {fct}^{(1)}}\) (similar to the diagrams in Figs. 4 and 5), whose divergent parts define new counterterms, \(S_\text {sct}^{(2,\,1)}\):

$$\begin{aligned} S_\text {sct}^{(2,\,1)} = -\left( \overline{S_\text {fct}^{(1)}} \cdot \Gamma ^{(1)} \right) ^\text {div} \,. \end{aligned}$$
(2.57)

They possess a genuine one-loop structure, even though they are of order \(\hbar ^2\).

For the r.h.s. of Eq. (2.48) three structures arise: the proper two-loop diagrams with one insertion of the tree-level \({\widehat{\Delta }}\)-vertex (first diagram in Fig. 3), a new insertion of BRST-transformed non-invariant one-loop counterterms into one-loop diagrams (last diagram in the first line of Fig. 3), and the last terms in Eqs. (2.55a) and (2.55b), which provide a consistency check for the divergent counterterms and determine the finite ones, respectively. The complete order-\(\hbar ^2\) finite counterterms are, in Feynman gauge \(\xi = 1\),

$$\begin{aligned} \overline{S_\text {fct}^{(2)}}= & {} \left( \frac{\hbar \, e^2}{16\pi ^2}\right) ^2 \left\{ \int {\text {d}}^{4}{x}\; \left( \frac{11 {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^4]}{24} \frac{1}{2} {\bar{A}}_\mu \overline{\partial }^2 {\bar{A}}^\mu \right. \right. \nonumber \\{} & {} \left. \left. + \frac{3 e^2 {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^6]}{2} \frac{1}{4} ({\bar{A}}^2)^2 \right) \right. \nonumber \\{} & {} \left. - \sum _j ({\mathcal {Y}}_R^j)^2 \, \left( \frac{127}{36} ({\mathcal {Y}}_R^j)^2 - \frac{1}{27} {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^2] \right) \overline{S^{j}_{\overline{\psi }\psi _R}} \right\} \,.\nonumber \\ \end{aligned}$$
(2.58)

Remarkably, they have the same compact structure as at one loop and directly correspond to the restoration of three well-known QED Ward identities, i.e., the transversality of the photon two- and four-point functions, as well as the relation between the electron self-energy and the electron-photon interaction. Indeed, we can explicitly check the restoration of the symmetry by evaluating the relevant identities and confirming that they are satisfied in our model only after adding those counterterms [77].

3.10.2 RG equation in dimensional renormalization

The RG equation [202, 203] describes the invariance of bare correlation (Green’s) functions under a change of the arbitrary renormalization scale: in DReg this scale is the “unit of mass” \(\mu \) [204, 205], which is associated with each loop of any diagram: \(\mu ^\epsilon \int {\text {d}}^{d}{x}\; \). For example, the 1-PI quantum effective action \(\Gamma \) depends on \(\mu \) both explicitly and implicitly via the \(\mu \)-dependence of the field renormalizations \(Z_\phi ^{1/2}\) and of the renormalized parameters: \(\Gamma [\{\phi (\mu )\}; e(\mu ), \xi (\mu ), \mu ]\). Its invariance under a total \(\mu \)-variation is expressed by the RG equation (summation over the fields \(\phi \) of \(\chi \)QED is implied):

$$\begin{aligned} \mu \frac{{\text {d}}\Gamma }{{\text {d}}\mu } = 0 = \mu \partial _\mu \Gamma + \left( \beta _e e \partial _e + \beta _\xi \partial _\xi - \gamma _\phi N_\phi \right) \Gamma \,. \end{aligned}$$
(2.59a)

In Eq. (2.59a), \(\mu \partial _\mu \) is the RG differential operator. The \(N_\phi \) are field-numbering (“leg-counting”) differential operators, defined by \(N_\phi \equiv \int {\text {d}}^{d}{x}\; \phi (x) {\delta }/{\delta \phi (x)}\), for bosonic fields, ghosts, and for right-handed (and left-handed anti-) fermions \(\phi := {{\mathbb {P}}_\text {R}} \psi \), \(\phi := \overline{\psi } {{\mathbb {P}}_\text {L}}\). The coefficient functions \(\beta _{e,\xi }\) are the beta-functions for the coupling constant e and the gauge parameter \(\xi \), and \(\gamma _\phi \) are the anomalous dimensions for the fields \(\phi \), defined by

$$\begin{aligned} \beta _e = \frac{1}{e} \mu \frac{{\text {d}}e}{{\text {d}}\mu } \, ,{} & {} \beta _\xi = \mu \frac{{\text {d}}\xi }{{\text {d}}\mu } \, ,{} & {} \gamma _\phi = \frac{1}{2} \mu \frac{{\text {d}}\ln {Z_\phi }}{{\text {d}}\mu } .\nonumber \\ \end{aligned}$$
(2.59b)

“Modified” multiplicative renormalization (MultRen)

The standard renormalization transformation [169,170,171, 204] consists in renormalizing the fields multiplicatively, while the couplings are usually renormalized additively:

$$\begin{aligned} \begin{gathered} e \rightarrow e + \delta e \,, \qquad \xi \rightarrow Z_A \xi \,, \qquad A_\mu \rightarrow \sqrt{Z_A} A_\mu \,, \\ ({\psi _R}_i, \overline{\psi _R}_i) \rightarrow \sqrt{Z_\psi } ({\psi _R}_i, \overline{\psi _R}_i) \,, \qquad ({\psi _L}_i, \overline{\psi _L}_i) \rightarrow ({\psi _L}_i, \overline{\psi _L}_i) \,, \end{gathered}\nonumber \\ \end{aligned}$$
(2.60)

(BRST sources renormalize inversely to their corresponding dynamical fields.) Beta-functions and anomalous dimensions can be found from the \(1/\epsilon \) poles of the renormalizations \(\delta e\) and \(Z_\phi \).

The situation becomes more involved when new evanescent singular and finite symmetry-restoring counterterms are generated during renormalization. One way to proceed is to extend [206, 207] (see also Section 8 in [76]) the original tree-level action \(S_0\) by these newly generated operators, associated with new auxiliary couplings \(\rho _{\mathcal {O}}:= \sigma _i, \rho _i\). A new tree-level action \(S_0^*\) is thus defined, which, in the case of \(\chi \)QED, can take the following form

$$\begin{aligned} \begin{aligned} S_0^* =\;&S_0 + \rho _1 \delta \text {fct}_\psi \overline{S_{\overline{\psi } \psi }} + \sigma _1 \widehat{S_{\overline{\psi } \psi }} + \rho _2 \delta \text {fct}_A (\overline{S_{AA}} + \xi \overline{S_\text {g-fix}}) \\&+ \sigma _2 \widehat{S_{AA}} + \int {\text {d}}^{d}{x}\; \left( \sigma _3 \frac{1}{2} {\bar{A}}_\mu {\widehat{\partial }}^2 {\bar{A}}^\mu + \rho _3 \frac{e^2}{4} ({\bar{A}}^2)^2 \right) \,. \end{aligned} \nonumber \\ \end{aligned}$$
(2.61)

The coefficients \(\delta \text {fct}_\psi \) and \(\delta \text {fct}_A\) arise from the finite BRST-restoring counterterms \(S_\text {fct}\). The generated modified effective action \(\Gamma ^*_\text {DReg}[\phi , \rho _{\mathcal {O}}]\) and counterterms can be expanded in \(\rho _{\mathcal {O}}\), whose lowest-order terms correspond to the quantities evaluated in the original theory.

One obtains an RG equation for \(\Gamma ^*_\text {DReg}[e, \xi , \{\sigma _i\}, \{\rho _i\}]\), with beta-functions \({\widetilde{\beta }}\) for \(e, \xi \) and auxiliary couplings \(\sigma _i, \rho _i\), and anomalous dimensions \(\widetilde{\gamma _\phi }\):

$$\begin{aligned} \mu \partial _\mu \Gamma ^*_\text {DReg} = \left( - \widetilde{\beta _e} e \partial _e - \widetilde{\beta _\xi } \partial _\xi - \widetilde{\beta _{\sigma _i}} \partial _{\sigma _i} - \widetilde{\beta _{\rho _i}} \partial _{\rho _i} + \widetilde{\gamma _\phi } N_\phi \right) \Gamma ^*_\text {DReg} \,.\nonumber \\ \end{aligned}$$
(2.62)

The genuine renormalized theory generated by the original \(S_0\) can be recovered in the limit \(\sigma _i, \rho _i \rightarrow 0\), since \(\sigma _i, \rho _i\) are unphysical and are absent in \(S_0\). The true \(\beta \) and \(\gamma \) functions for \(\Gamma \) will depend on \({\widetilde{\beta }}\) and \(\widetilde{\gamma _\phi }\) and are obtained for the 4-dimensional renormalized effective action \(\Gamma \), defined by

$$\begin{aligned} \Gamma [e, \xi ] = \mathop {\text {LIM}}_{d \rightarrow 4} \lim _{\sigma _i,\rho _i \rightarrow 0} \Gamma ^*_\text {DReg}[e, \xi , \{\sigma _i\}, \{\rho _i\}] \,, \end{aligned}$$
(2.63)

where: (i) divergences are MS-subtracted from \(\Gamma \) with suitable singular counterterms, and (ii) \(d \rightarrow 4\), with (iii) remaining finite evanescent quantities set to zero. The corresponding RG equation, obtained from Eq. (2.62) when both sides are taken under those same limits, has the final structure:

$$\begin{aligned} \mu \partial _\mu \Gamma= & {} \left( - \beta _e e \partial _e - \beta _\xi \partial _\xi + \gamma _\phi N_\phi \right) \Gamma \nonumber \\{} & {} \quad \sim \quad \mathop {\text {LIM}}_{d \rightarrow 4} \lim _{\sigma _i,\rho _i \rightarrow 0} \mu \partial _\mu \Gamma ^*_\text {DReg} \,. \end{aligned}$$
(2.64)

The procedure [206, 207] then consists in evaluating the effects of the evanescent and non-symmetric operators, which dilute into the non-evanescent ones, via the following terms in the limit \(\sigma _i, \rho _i \rightarrow 0\) and the renormalized limit \(d \rightarrow 4\):

$$\begin{aligned} - \widetilde{\beta _{\sigma _i}} \partial _{\sigma _i} \Gamma ^*_\text {DReg},\quad - \widetilde{\beta _{\rho _i}} \partial _{\rho _i} \Gamma ^*_\text {DReg} \, . \end{aligned}$$
(2.65)

They correspond, from the Regularized Action Principle [70, 208, 209], to diagrammatic vertex insertions of their associated operators: \({\partial \Gamma ^*_\text {DReg}}/{\partial \rho _{\mathcal {O}}} = \left( {\mathcal {O}} + {\partial S_\text {ct}^*}/{\partial \rho _{\mathcal {O}}} \right) \cdot \Gamma _\text {DReg}\), where \({\partial S_\text {ct}^*}/{\partial \rho _{\mathcal {O}}}\) removes the divergences from \({\mathcal {O}} \cdot \Gamma _\text {DReg}\). Finally, they are re-cast as new contributions to \(\beta _e e \partial _e \Gamma \), \(\beta _\xi \xi \partial _\xi \Gamma \) and \(\gamma _\phi N_\phi \Gamma \), from which shifts to the \(\beta _e\) and \(\gamma _\phi \) are obtained.

RG equation in Algebraic Renormalization (AlgRen)

The other, more streamlined method for obtaining the RG equation is that of the “Algebraic Renormalization” framework [201, 210]. It is based on the behaviour of the theory and of the RG evolution with respect to the BRST symmetry (see, e.g., Section 7 of [76]). It applies at the level of the BRST-restored 4-dimensional renormalized effective action \(\Gamma \).

After symmetry restoration, \(\Gamma \) is BRST invariant, and the RG operator inherits the symmetries of \(\Gamma \): (i) the RG evolution is BRST invariant, (ii) it satisfies the gauge-fixing condition, and (iii) the ghost equation. The RG equation for \(\Gamma \) is thus an expansion in a basis of 4-dimensional operators with ghost number \(= 0\) satisfying these same constraints (\(\phi = A, \psi , c\)):

$$\begin{aligned} \underbrace{\mu \partial _\mu \Gamma }_{= {\mathfrak {R}}} = \underbrace{\left( -\beta _e e \partial _e + \gamma _\phi {\mathcal {N}}_\phi \right) \Gamma }_{= {\mathfrak {W}}} \,. \end{aligned}$$
(2.66)

The \({\mathcal {N}}_\phi \) are BRST-invariant field-counting operators [76, 211] that are linear combinations of the basic \(N_\phi \) operators previously introduced:

$$\begin{aligned} {\mathcal {N}}_A&= (N_A + 2 \xi \partial _\xi - \cdots ) \, , \\ {\mathcal {N}}_\psi&= (N_\psi ^R + N_{\overline{\psi }}^L - \cdots ) \, , {\mathcal {N}}_c = N_c \, . \end{aligned}$$

The Quantum Action Principle (QAP) [70, 199, 200, 208,209,210, 212,213,214] asserts that the variations of \(\Gamma \) (the terms \({\mathfrak {W}}\)) with respect to parameters and fields naturally present in \(S_0\) are equivalent to renormalized insertions of local d-dimensional operators in \(\Gamma \), derived from the finite dimensionally regularized action \(S_0 + S_\text {fct}\),

$$\begin{aligned} {\mathcal {D}} \Gamma = N[{\mathcal {D}} (S_0 + S_\text {fct})] \cdot \Gamma \,, \qquad \text {with}\qquad {\mathcal {D}} = e \partial _e \;; \; {\mathcal {N}}_\phi \,. \end{aligned}$$
(2.67)

Because \(\mu \) is not a parameter of \(S_0\), but is a modification of the loop integration, the QAP does not directly apply to \(\mu \partial _\mu \Gamma \) itself (term \({\mathfrak {R}}\)). Nonetheless, it can also be expressed as a renormalized insertion (Bonneau [205]):

$$\begin{aligned} {\mathfrak {R}} \equiv \mu \partial _\mu \Gamma = \sum _{N_l \ge 1} N_l \, N\big [\text {r.s.p.}\; \Gamma _\text {DReg}^{(N_l\text { loops})}\big ] \cdot \Gamma \,. \end{aligned}$$
(2.68)

In this equation, \(\text {r.s.p.}\;\Gamma _\text {DReg}^{(N_l\text { loops})}\) designates the residue of the simple \(1/(4-d)\) pole of the \(N_l\)-loop 1-PI diagrams, built from the Feynman rules derived from the action \((S_0 + S_\text {fct})\) and sub-renormalized using the lower-order singular counterterms \(S_\text {sct}\).

The procedure then consists in re-expressing Eq. (2.66) using (2.67) and (2.68), and in expanding all operator insertions in a basis of (independent) 4-dimensional ones. Note that the inserted evanescent operators \(\widehat{{\mathcal {M}}_j}\) appearing there are not linearly independent quantities and need to be expanded into independent insertions of 4-dimensional operators \(\overline{{\mathcal {M}}_i}\) (Bonneau identities [205, 215]): \(N[\widehat{{\mathcal {M}}_j}] \cdot \Gamma = {\textstyle \sum _i} c_{ji} N[\overline{{\mathcal {M}}_i}] \cdot \Gamma \), with \(c_{ji} \sim {\mathcal {O}}(\hbar )\). Grouping all terms together, one obtains the final form of the RG equation, as a system of equations for the \(\beta _e\) and \(\gamma _\phi \) functions, ensuring their self-consistency:

$$\begin{aligned} \mu \partial _\mu \Gamma= & {} \underbrace{{\textstyle \sum _i} r_i N[\overline{{\mathcal {M}}_i}] \cdot \Gamma }_{= {\mathfrak {R}}} \nonumber \\= & {} \underbrace{{\textstyle \sum _i} \left( - \beta _e w_{e,i} + \gamma _\phi w_{\phi ,i} \right) N[\overline{{\mathcal {M}}_i}] \cdot \Gamma }_{= {\mathfrak {W}}} \,. \end{aligned}$$
(2.69)

3.10.3 Results for the two-loop RG evolution

Mainly focusing on the AlgRen method, we now describe how all the \(\hbar ^2\)-order terms entering Eq. (2.69) can be explicitly evaluated in order to determine the two-loop \(\beta _e^{(2)}\) and \(\gamma _\phi ^{(2)}\) functions. For all details we refer to [211].

The left-hand side of Eq. (2.69) can be grouped into four different terms, \({\mathfrak {R}} = \mathfrak {R_1} + \mathfrak {R_2} + \mathfrak {R_3} + \mathfrak {R_4}\).

  • \(\mathfrak {R_1}\) corresponds to insertions of one-loop 4-dimensional singular counterterms into one-loop diagrams, \(\mathfrak {R_1} = N[-\text {r.s.p.}\, \overline{S_\text {sct}^{(1)}}] \cdot \Gamma ^{(1)}\).

  • \(\mathfrak {R_2}\) corresponds to insertions of one-loop evanescent singular counterterms into one-loop diagrams, Figs. 4 and 5: \(\mathfrak {R_2} = N[-\text {r.s.p.}\, \widehat{S_\text {sct}^{(1)}}] \cdot \Gamma ^{(1)}\).

  • \(\mathfrak {R_3}\) corresponds to two-loop singular counterterms, obtained from the \(1/(4-d)\) pole of one-loop diagrams involving insertions of one-loop finite counterterms (see discussion around Eq. (2.57)), \(\mathfrak {R_3} = - \text {r.s.p.}\, \overline{ S_\text {sct}^{(2,\,1)} }\). Those diagrams are similar to those of Figs. 4 and 5, but with \(\widehat{{\mathcal {O}}} \rightarrow \overline{S_\text {fct}^{(1)}}\).

  • \(\mathfrak {R_4}\) corresponds to the genuine two-loop \(\hbar ^2\) singular counterterms (see discussion around Eq. (2.56)). Note that in the language of Eq. (2.68), it is only these \(\mathfrak {R_4}\) terms that receive a factor \(N_l = 2\), whereas all other contributions \(\mathfrak {R_{1,2,3}}\) receive a factor \(N_l = 1\). This subtlety is not present in the case of manifest symmetry preservation.

Similarly, the right-hand side of Eq. (2.69) can be grouped into four terms, \({\mathfrak {W}} = \mathfrak {W_1} + \mathfrak {W_2} + \mathfrak {W_3} + \mathfrak {W_4}\).

  • \(\mathfrak {W_1}\) corresponds to contributions from the one-loop RG coefficients combined with the insertions of the respective differential operators, i.e. \(\mathfrak {W_1} = -\beta _e^{(1)} N[e \partial _e \overline{S_0}] \cdot \Gamma ^{(1)} + \gamma _\phi ^{(1)} N[{\mathcal {N}}_\phi \overline{S_0}] \cdot \Gamma ^{(1)}\). Note that there is an automatic agreement \(\mathfrak {R_1} = \mathfrak {W_1}\), in accord with the one-loop RG coefficients.

  • \(\mathfrak {W_2}\) corresponds to the contributions from one-loop RG coefficients combined with insertions of tree-level evanescent operators, \(\mathfrak {W_2} = 2 \gamma _A^{(1)} N[\widehat{S_{AA}}] \cdot \Gamma ^{(1)} + \gamma _{\psi _i}^{(1)} N[\widehat{S^{i}_{\overline{\psi } \psi }}] \cdot \Gamma ^{(1)}\), corresponding to Figs. 4 and 5.

  • \(\mathfrak {W_3}\) corresponds to contributions from one-loop RG coefficients combined with finite one-loop counterterms, \(\mathfrak {W_3} = \left( \gamma _{\psi _i}^{(1)} + \gamma _A^{(1)} \xi \frac{\partial }{\partial \xi } - \beta _e^{(1)} \right) {\mathcal {N}}_{\psi _i} \overline{S_\text {fct}^{(1)}}\).

  • \(\mathfrak {W_4}\) contains the genuine “two-loop” \(\hbar ^2\)-order \(\beta \)-functions and anomalous dimensions of \(\chi \)QED to be determined: \(\mathfrak {W_4} = -\beta _e^{(2)} e \partial _e \overline{S_0} + \gamma _\phi ^{(2)} {\mathcal {N}}_\phi \overline{S_0}\).

Fig. 4 Diagrams with insertion of \(\widehat{{\mathcal {O}}} \equiv \widehat{S_{AA}}\) or \(-\text {r.s.p.}\, \widehat{S_\text {sct}^{(1)}} \propto \int {\text {d}}^{d}{x}\; \frac{1}{2} {\bar{A}}_\mu {\widehat{\partial }}^2 {\bar{A}}^\mu \)

Fig. 5 Non-vanishing diagrams with insertion of \(\widehat{{\mathcal {O}}} \equiv \widehat{S^{i}_{\overline{\psi } \psi }}\)

In the MultRen method, there exists a one-to-one correspondence with the terms obtained in the AlgRen method. The singular counterterms \(\mathfrak {R_{3,4}}\) generate contributions \(\widetilde{\beta _{e,\xi }}\) and \(\widetilde{\gamma _{A,\psi _i}}\) to the \(\beta \) and \(\gamma \) functions. The terms \(- \widetilde{\beta _{\sigma _i}} \partial _{\sigma _i} \Gamma ^*_\text {DReg}\) for \(i = 1,2,3\), evaluated following Eq. (2.65) in Sect. 2.9.2, correspond to \(\mathfrak {W_2}\) and \(\mathfrak {R_2}\). Those are evaluated with the very same diagrammatic calculations as in AlgRen, Figs. 4 and 5. Likewise, one evaluates the terms \(- \widetilde{\beta _{\rho _i}} \partial _{\rho _i} \Gamma ^*_\text {DReg}\) (\(i = 1,2\)), that correspond to \(\mathfrak {W_3}\) in the AlgRen method.

All these quantities, except for the unknown two-loop RG coefficients in \(\mathfrak {W_4}\), are known or calculable from one-loop diagrams. The equation \({\mathfrak {R}} = {\mathfrak {W}}\) can therefore be solved to obtain these coefficients. The resulting \(\hbar ^2\)-order \(\beta \) and \(\gamma \) functions of \(\chi \)QED are (in Feynman gauge \(\xi = 1\)):

$$\begin{aligned} \beta _e^{(2)}&= \gamma _A^{(2)} = \gamma _c^{(2)} = \left( \frac{\hbar \, e^2}{16 \pi ^2}\right) ^2 2 {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^4] \, , \end{aligned}$$
(2.70a)
$$\begin{aligned} \gamma _{\psi _i}^{(2)}&= -\left( \frac{\hbar \, e^2}{16 \pi ^2}\right) ^2 \left( \frac{2}{9} {{\,\textrm{Tr}\,}}[{\mathcal {Y}}_R^2] ({\mathcal {Y}}_R^i)^2 + \frac{3}{2} ({\mathcal {Y}}_R^i)^4 \right) \, . \end{aligned}$$
(2.70b)

The two methods, AlgRen and MultRen, agree in the results obtained.
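
As a simple numerical cross-check of these closed-form expressions, Eq. (2.70) can be evaluated directly for any choice of hypercharges; the short Python sketch below does so (the coupling and hypercharge values are arbitrary placeholders).

```python
import numpy as np

def two_loop_rg_chiQED(e, hypercharges):
    """Evaluate the two-loop chi-QED RG coefficients of Eq. (2.70)
    in Feynman gauge (with hbar = 1) for the given hypercharges Y_R^j."""
    Y = np.asarray(hypercharges, dtype=float)
    pref = (e**2 / (16 * np.pi**2))**2
    beta_e_2 = 2 * pref * np.sum(Y**4)          # Eq. (2.70a); equals gamma_A^(2) and gamma_c^(2)
    gamma_psi_2 = -pref * (2/9 * np.sum(Y**2) * Y**2 + 3/2 * Y**4)  # Eq. (2.70b), one entry per flavour
    return beta_e_2, gamma_psi_2

# arbitrary placeholder inputs: e = 0.3 and three right-handed fermions
print(two_loop_rg_chiQED(0.3, [1.0, -2.0, 1.0]))
```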

3.10.4 Summary and outlook

We have demonstrated the practical renormalization of a chiral Abelian toy model up to two-loop order. The main result consists in the full set of non-invariant, singular counterterms as well as the finite, non-invariant symmetry-restoring counterterms which implement the Slavnov–Taylor identity at the two-loop level. These counterterms are found to be rather compact and of a similar structure at one- and two-loop order. Importantly, it is verified that they ensure the validity of the usual Ward identities. The beta-functions and anomalous dimensions of the renormalization group equation of the model have been derived using two approaches: in the Algebraic Renormalization framework and in a modified version of the more customary multiplicative renormalization method. The methods are equivalent and provide the same final results. However, the application of algebraic renormalization is more straightforward, as it does not require any “auxiliary couplings”. We are currently working on the three-loop renormalization as well as the two-loop study of the non-Abelian case. All of this is in preparation for the application to the SM.

4 Phenomenological studies and applications

Another important use case for computer tools is phenomenological analyses. Typical tasks performed by such codes include the extraction of theory parameters from data, the prediction of observables in terms of NP parameters, or setting bounds on the underlying parameter space. Tools are, for instance, used to determine the WCs of higher-dimensional operators by extracting them from observables in automated global analyses. Typical fitting tools that allow for such analyses are SMEFiT [216], as well as smelli, HighPT, and HEPfit, which are all discussed further in this section. Common observable calculators, which provide a large database of predefined observables, are flavio [217], SuperIso [218, 219], FlavBit [220], as well as the package EOS, which is further discussed below. Furthermore, there are several clustering tools and Monte Carlo enablers on the market, such as ClusterKing [221] and Pandemonium [222], the package SMEFTsim [223], as well as SmeftFR, which can be used for Monte Carlo simulations including dimension-eight SMEFT operators and which is discussed in the last subsection.

4.1 smelli: Towards a global SMEFT likelihood


The Python package smelli is a powerful tool for constraining SMEFT WCs and parameters of UV models matched to the SMEFT. Its goal is to provide a likelihood that is as global as possible while being fast enough to allow comprehensive fits and parameter scans.

NP extensions of the SM aim to resolve certain theoretical issues or tensions with experimental data. Typically, however, they have effects on many observables beyond their original purpose. It is therefore crucial to carry out global phenomenological analyses of NP models in order to assess their viability and to show their actual superiority over the SM. This is a challenging task as it involves computing predictions for a large number of observables and doing so for each model. Fortunately, this problem can be tremendously simplified by using the SMEFT in an intermediate step. In particular, a global likelihood function that yields the probability of observing the experimental data given SMEFT WCs can also be used as a likelihood function for model parameters of all NP models that can be matched to the SMEFT. For such models, a global phenomenological analysis can be divided into two parts.

  1. The NP model has to be matched to the SMEFT in order to express the SMEFT WCs at the matching scale \(\Lambda _\text {NP}\), \(\textbf{C} (\Lambda _\text {NP})\), in terms of model parameters \(\vec \xi \), i.e.

    $$\begin{aligned} \textbf{C} (\Lambda _\text {NP}) = f_\text {match}(\mathbf {\xi })\,, \end{aligned}$$
    (3.1)

    where the matching function \(f_\text {match}\) and the model parameters \(\vec \xi \) depend on the specific NP model. It might be necessary to include one-loop effects in this step, in particular if the leading contribution to relevant WCs is not generated by the tree-level matching.

  2. The SMEFT WCs at the scale \(\Lambda _\text {NP}\), \(\textbf{C}(\Lambda _\text {NP})\), have to be constrained by experimental data. This requires the computation of theory predictions for a large number of observables at various scales, both in the SMEFT and in the LEFT. Importantly, WCs at different scales and in different EFTs are connected by RG running and matching. The one-loop contributions introduced by the RG running have been shown to be crucial in constraining NP models (see e.g. Refs. [224, 225]). Theoretical predictions and experimental measurements of all relevant observables can then be used to construct a global likelihood function for the SMEFT WCs at the scale \(\Lambda _\text {NP}\),

    $$\begin{aligned} L_\text {SMEFT}\left( \textbf{C}(\Lambda _ \text {NP})\right) \,. \end{aligned}$$
    (3.2)

    Through Eq. (3.1), this also directly provides a likelihood function for the parameters \(\vec \xi \) of a NP model,

    $$\begin{aligned} L_\text {NP}\left( \vec \xi \right) = L_\text {SMEFT}\left( f_\text {match}(\vec \xi )\right) . \end{aligned}$$
    (3.3)

The matching in step 1 depends only on the NP model, but is independent of both the experimental data and the theoretical predictions of the observables. Full tree-level matching of generic models to the SMEFT has been performed in Ref. [137], and several tools are being developed to fully automate generic one-loop matching [48, 49, 134].
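
To make the two-step structure of Eqs. (3.1)–(3.3) concrete, the following toy Python sketch composes a matching function with a SMEFT likelihood; both the matching formula and the Gaussian stand-in likelihood are invented placeholders, not the matching or likelihood of any real model.

```python
import numpy as np

def f_match(xi, Lambda_NP=2000.0):
    """Toy version of Eq. (3.1): map model parameters to SMEFT WCs at Lambda_NP.
    The formula is a placeholder, not the matching of any specific model."""
    return {"lq1_2223": -xi["g"]**2 / (2 * Lambda_NP**2)}

def L_SMEFT(wcs):
    """Stand-in for the global SMEFT likelihood of Eq. (3.2): here a dummy 1D Gaussian."""
    c = wcs["lq1_2223"]
    return np.exp(-0.5 * ((c + 2e-8) / 1e-8)**2)

def L_NP(xi):
    """Eq. (3.3): the model-parameter likelihood is the composition of the two steps."""
    return L_SMEFT(f_match(xi))

print(L_NP({"g": 0.4}))
```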

The phenomenological part in step 2 is independent of the NP model, so that a SMEFT likelihood function, once constructed, can be used for generic phenomenological analyses of NP models. It is important to stress that different sectors of observables should not be considered separately, since RG effects mix all sectors, and matching a NP model to the SMEFT will generally lead to effects in many sectors. It is therefore crucial to consider a global SMEFT likelihood function that encompasses as many sectors as possible.

4.1.1 smelli – the SMEFT likelihood

To establish a comprehensive global likelihood function in the space of dimension-six SMEFT WCs, the open source Python package smelli – the SMEFT likelihood – was introduced in Ref. [226]. It builds on several other open-source projects that provide key components:

  • wilson [108] – running and matching beyond the SM. wilson is a Python package for the running and matching of WCs in the LEFT and the SMEFT (see the short sketch after this list). It implements the one-loop running of all dimension-six operators in the SMEFT [8,9,10], the matching to the LEFT at the electroweak scale [13], and the one-loop running of all dimension-six LEFT operators in QCD and QED [14, 20]. Furthermore, it takes into account effects from the rediagonalization of the Yukawa matrices after running above the EW scale [227, 228].

  • flavio [217] – a Python package for flavour and precision physics in and beyond the SM. flavio can compute theoretical predictions for a wide range of observables from different sectors, including flavour physics, electroweak precision tests, Higgs physics, and other precision tests of the SM. NP contributions are taken into account in terms of WCs of dimension-six operators in the SMEFT and the LEFT. flavio also comes with an extensive database of experimental measurements and allows the construction of likelihoods based on these measurements and the corresponding theoretical predictions.

  • WCxf [133] – the Wilson coefficient exchange format. smelli, wilson, and flavio all use the Wilson coefficient exchange format (WCxf) to represent WCs, which makes it easy to interface these codes with each other and with any other code that supports the WCxf standard.
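
As an illustration of how these building blocks fit together, the following short sketch uses wilson to define a single Warsaw-basis coefficient in the WCxf conventions at a high scale and evolve it down to the b-quark scale; the call signatures follow the wilson documentation, and the numerical value of the coefficient is an arbitrary placeholder.

```python
from wilson import Wilson

# one SMEFT Warsaw-basis coefficient (WCxf name), defined at 1 TeV;
# the value (in GeV^-2) is an arbitrary placeholder
w = Wilson({"lq1_2223": 1e-8}, scale=1000, eft="SMEFT", basis="Warsaw")

# RG-run within the SMEFT, match onto the low-energy EFT at the electroweak
# scale, and run down to mu = 4.8 GeV in the flavio basis
wc_low = w.match_run(scale=4.8, eft="WET", basis="flavio")
print(wc_low)
```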

In order to achieve a reasonably fast evaluation of the likelihood function in smelli, two simplifying approximations are used to deal with nuisance parameters \(\vec \theta \) that enter the theory predictions \(\textbf{O}_\text {th}(\textbf{C}, \mathbf {\theta })\):

  • For observables with negligible theoretical uncertainties compared to the experimental uncertainties, each likelihood \(L_\text {exp}^i\) from a given experimental measurement is evaluated with nuisance parameters fixed to their central values \(\vec \theta _0\),

    $$\begin{aligned} L_\text {exp}^i\left( \textbf{O}_\text {th} (\textbf{C}, \mathbf {\theta }_0)\right) \!. \end{aligned}$$
    (3.4)
  • For observables with significant theoretical uncertainties, both the theoretical and experimental uncertainties are approximated as multivariate Gaussian and a combined likelihood is constructed for all correlated observables. The experimental covariance matrix \(\Sigma _\text {exp}\) and the central experimental values \(\vec O_\text {exp}\) are extracted from the original experimental likelihoods. The theoretical covariance matrix \(\Sigma _\text {th}\) is obtained by sampling the nuisance parameters \(\vec \theta \) from their respective likelihood distributions, while their central values \(\vec \theta _0\) are used for the theoretical predictions \(\mathrm{\textbf{O}}_\text {th}(\textbf{C}, \mathbf {\theta }_0)\). Both covariance matrices enter the combined likelihood \({{\tilde{L}}}_\text {exp}\) defined by

    $$\begin{aligned} -2 \ln \tilde{L}_\text {exp}\left( \textbf{O}_\text {th}(\textbf{C}, \mathbf {\theta }_0)\right) = \textbf{D}^T (\Sigma _\text {exp}+ \Sigma _\text {th})^{-1} \textbf{D}\,, \nonumber \\ \textbf{D} = \textbf{O}_\text {th}(\textbf{C}, \mathbf {\theta }_0) - \textbf{O}_\text {exp}\,. \end{aligned}$$
    (3.5)

The global likelihood is then constructed by combining the individual approximated likelihood functions,

$$\begin{aligned} L_\text {SMEFT}\left( \textbf{C}\right) \approx \tilde{L}_\text {exp}\left( \textbf{O}_ \text {th}(\textbf{C}, \mathbf {\theta }_0)\right) \times \prod _i L_\text {exp}^i\left( \textbf{O}_\text {th}(\textbf{C}, \mathbf {\theta }_0)\right) \,.\nonumber \\ \end{aligned}$$
(3.6)
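
The Gaussian combination of Eq. (3.5) is simple to reproduce numerically; the minimal numpy sketch below evaluates \(-2\ln \tilde{L}_\text {exp}\) for a set of correlated observables (all numbers are arbitrary placeholders).

```python
import numpy as np

def minus2_log_L(O_th, O_exp, cov_exp, cov_th):
    """-2 ln L~ of Eq. (3.5): Gaussian likelihood with the combined
    experimental and theoretical covariance matrices."""
    D = np.asarray(O_th) - np.asarray(O_exp)
    cov = np.asarray(cov_exp) + np.asarray(cov_th)
    return float(D @ np.linalg.solve(cov, D))

# arbitrary placeholder numbers for two correlated observables
print(minus2_log_L(
    O_th=[1.02, 0.37], O_exp=[1.00, 0.40],
    cov_exp=[[0.04**2, 0.0], [0.0, 0.03**2]],
    cov_th=[[0.02**2, 2e-4], [2e-4, 0.02**2]],
))
```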

The smelli Python package that provides this global likelihood function is available in the Python package manager pip and can be installed using

  • python3 -m pip install smelli --user

which will download smelli with all dependencies from the Python package archive (PyPI) and install it in the user’s home directory. The source code of the package and more information about using it can be found in

4.1.2 Status and prospects of smelli

The smelli project is under active development and has been extended several times in recent years, in particular since the SMEFT-Tools 2019 workshop [1], where smelli v1.3 was presented.

The first version of smelli focused on flavour and electroweak precision observables. These included flavour-changing neutral and charged current B and K decays, meson-antimeson mixing observables in the B, K and D systems, charged-lepton flavour violating B, K, \(\tau \), \(\mu \) and Z decays, as well as Z and W electroweak precision observables and the anomalous magnetic moments of the charged leptons.

In the context of Ref. [230], smelli has been extended to Higgs physics, and the signal strengths of various decay (\(h\rightarrow \gamma \gamma \), \(Z\gamma \), ZZ, WW, bb, cc, \(\tau \tau \), \(\mu \mu \)) and production channels (gg, VBF, Zh, Wh, \(t{\bar{t}}h\)) have been implemented.

With smelli v2.0 more new observables and features have been introduced. Beta decays were implemented following Ref. [231], adding the lifetimes and correlation coefficients of neutron beta decay as well as super-allowed nuclear beta decays. Furthermore, additional K decays and the total and differential cross sections for \(e^+ e^-\rightarrow W^+ W^-\) pair production, as measured at LEP-2, have been added. Apart from new observables and some minor innovations, one of the most important new features of smelli v2.0 is a proper treatment of the Cabibbo–Kobayashi–Maskawa (CKM) matrix in SMEFT. Inspired by Ref. [232], smelli uses a CKM input scheme that takes four observables as proxies for the four CKM parameters. The default CKM input scheme uses \(R_{K\pi }=\Gamma (K^+\rightarrow \mu ^+\nu )/\Gamma (\pi ^+\rightarrow \mu ^+\nu )\) (mostly fixing \(V_{us}\)), \(BR(B^+\rightarrow \tau \nu )\) (fixing \(V_{ub}\)), \(BR(B\rightarrow X_c e \nu )\) (fixing \(V_{cb}\)), and \(\Delta M_d/\Delta M_s\) (mostly fixing the CKM phase \(\delta \)). The CKM elements are then expressed in terms of the four CKM input observables and the SMEFT WCs that enter the predictions of these observables. This removes a major limitation of smelli and allows semi-leptonic charged current meson decays to be included in the likelihood.

Since smelli v2.0 there have been several new developments that will be incorporated in future versions of smelli. A new numerical method has been developed in the context of Ref. [233], which allows a numerically efficient implementation of the NP-dependence of the theory covariance matrix. This will remove another major limitation of smelli and will enable the inclusion of observables whose theoretical uncertainties have a strong NP dependence, as e.g. the neutron Electric Dipole Moment (EDM). In addition, the new method of Ref. [233] increases the computational speed by orders of magnitude, resulting in a significantly shorter evaluation time of the global likelihood function and allowing for much more comprehensive analyses. These new features have already been successfully applied in Ref. [234], where a global likelihood was constructed that includes neutral and charged current Drell–Yan tails, which will be implemented in a future version of smelli.

4.2 HighPT: A tool for Drell-Yan tails beyond the Standard Model


High-\(p_T\) tails in Drell-Yan processes can provide useful complementary information to low-energy and electroweak observables when investigating the flavor structure beyond the SM. The Mathematica package HighPT allows the user to compute Drell-Yan cross sections for dilepton and monolepton final states at the LHC. The observables can be computed at tree level in the SMEFT, including the relevant operators up to dimension eight, with a consistent expansion up to \({\mathcal {O}}(\Lambda ^{-4})\). Furthermore, hypothetical TeV-scale bosonic mediators can be included at tree level in the computation of the cross-sections, thus accounting for their propagation effects. Using the Run-2 searches by ATLAS and CMS, the LHC likelihood for all possible leptonic final states can be constructed within the package, which therefore provides a simple framework for high-\(p_T\) Drell-Yan analyses. We illustrate the main features of HighPT with a simple example.

Semi-leptonic interactions have received a lot of attention in the literature in recent years, driven mainly by interesting data in B meson decays. In this context, it has been stressed several times that not only can low-energy observables constrain new physics scenarios, but high-\(p_T\) observables, especially Drell-Yan tails, can give complementary and independent information and sometimes even more stringent bounds [235,236,237,238,239]. A comprehensive analysis of these effects has been implemented for the first time in HighPT [240, 241], a Mathematica package that computes hadronic cross-sections, event yields and likelihoods for different LHC searches involving leptonic final states. The aim is to provide an easy-to-use integrated framework to directly obtain a likelihood, in order to easily extract bounds on new physics parameters (both WCs in the SMEFT and couplings of TeV-scale mediators), to be then juxtaposed with low-energy experiments.

4.2.1 Drell-Yan cross-section

The most general Drell-Yan process can be written, at parton level, as the scattering

$$\begin{aligned} {{\bar{q}}}_i q_j' \rightarrow \ell _\alpha {\bar{\ell }}_\beta ' \,, \end{aligned}$$
(3.7)

where \(i,j\) (\(\alpha ,\beta \)) are quark (lepton) flavour indices, and \(q,q'\) indicate either up- or down-type quarks, while \(\ell ,\ell '\) generically stand for either a charged lepton or a neutrino. The amplitude can be expressed in terms of form factors as

$$\begin{aligned}&{\mathcal {A}}({\bar{q}}_i q_j' \rightarrow {\ell }_{\alpha } {\bar{\ell }}_{\beta }')\,\nonumber \\&=\, \frac{1}{v^2}\,\sum _{XY}\,\Big \lbrace \,\, \left( \bar{\ell }_\alpha \gamma ^\mu {\mathbb {P}}_X \ell _\beta '\right) \left( \bar{q}_i\gamma _\mu {\mathbb {P}}_Y q_j'\right) \, [{\mathcal {F}}^{XY,\,qq'}_{V}({\hat{s}},{\hat{t}})]_{\alpha \beta ij}\nonumber \\&\qquad + \left( {\bar{\ell }}_\alpha {\mathbb {P}}_X\ell _\beta '\right) \left( \bar{q}_i {\mathbb {P}}_Y q_j'\right) \, [{\mathcal {F}}^{XY,\,qq'}_{S}({\hat{s}},{\hat{t}})]_{\alpha \beta ij}\nonumber \\&\qquad + \left( {\bar{\ell }}_\alpha \sigma _{\mu \nu } {\mathbb {P}}_X\ell _\beta '\right) \left( {{\bar{q}}}_i \sigma ^{\mu \nu } {\mathbb {P}}_Y q_j'\right) \, \delta ^{XY} \, [{\mathcal {F}}^{XY,\,qq'}_T({\hat{s}},{\hat{t}})]_{\alpha \beta ij}\nonumber \\&\qquad + \left( {\bar{\ell }}_\alpha \gamma _{\mu } {\mathbb {P}}_X\ell _\beta '\right) \left( {{\bar{q}}}_i \sigma ^{\mu \nu } {\mathbb {P}}_Y q_j'\right) \,\frac{ik_\nu }{v} \, [{\mathcal {F}}^{XY,\,qq'}_{D_q}({\hat{s}},{\hat{t}})]_{\alpha \beta ij} \nonumber \\&\qquad + \left( {\bar{\ell }}_\alpha \sigma ^{\mu \nu } {\mathbb {P}}_X \ell _\beta '\right) \left( {{\bar{q}}}_i \gamma _{\mu } {\mathbb {P}}_Y q_j'\right) \,\frac{ik_\nu }{v} \, [{\mathcal {F}}^{XY,\,qq'}_{D_\ell }({\hat{s}},{\hat{t}})]_{\alpha \beta ij}\,\Big \rbrace \,, \end{aligned}$$
(3.8)

which captures all possible \(SU(3)_c\times U(1)_{\textrm{em}}\) and Lorentz-invariant structures. The sum over \(X,Y=L,R\) extends over left- and right-handed chiralities, and we have defined the Mandelstam variables \({{\hat{s}}} = k^2 = (p_\ell + p_{\ell '})^2\), and \({{\hat{t}}} = (p_\ell - p_{q'})^2\). The form factors \({\mathcal {F}}_I\) can be decomposed as

$$\begin{aligned} {\mathcal {F}}_{I}({{\hat{s}}}, {{\hat{t}}}) = {\mathcal {F}}_{I,\text {Reg}}({{\hat{s}}}, {{\hat{t}}}) + {\mathcal {F}}_{I,\text {Poles}}({{\hat{s}}}, {{\hat{t}}}) \,, \end{aligned}$$
(3.9)

where

$$\begin{aligned} {\mathcal {F}}_{I,\,\mathrm Reg}({\hat{s}},{\hat{t}})\ =\ \sum _{n,m=0}^\infty {\mathcal {F}}_{I \,(n,m)}\,\left( \frac{{{\hat{s}}}}{v^2}\right) ^{\!n}\left( \frac{{{\hat{t}}}}{v^2}\right) ^{\!m}\,, \end{aligned}$$
(3.10)

is an analytic function in \({{\hat{s}}}\) and \({{\hat{t}}}\), describing local interactions (i.e. effective operators of \(d\ge 6\)), while \({\mathcal {F}}_{I,\text {Poles}}\) captures the effect of simple poles in the s, t or u channel, due to some TeV-scale mediator. The differential cross-section at parton level then is

$$\begin{aligned} \displaystyle \frac{\textrm{d}{{\hat{\sigma }}}}{\textrm{d}{{\hat{t}}}}({\bar{q}}_i q^\prime _j \rightarrow {\ell }_{\alpha } \overline{\ell ^{\prime }}_{\beta })&= \frac{1}{48\pi \,v^4}\,\sum _{XY} \sum _{IJ} M^{XY}_{IJ}({{\hat{s}}},{{\hat{t}}})\nonumber \\&\quad \times {\left[ {\mathcal {F}}^{XY,\,qq^\prime }_I ({{\hat{s}}},{{\hat{t}}}) \right] }_{\alpha \beta ij} {\left[ {\mathcal {F}}^{XY,\,qq^\prime }_J({{\hat{s}}},{{\hat{t}}})\right] }^{*}_{\alpha \beta ij}\,, \end{aligned}$$
(3.11)

where \(M_{IJ}^{XY}\) describes the interference between different form factors. This cross-section needs to be convoluted with the parton luminosity functions and integrated over the appropriate region in order to match the experimental searches (see [241] for further details).

4.2.2 Drell-Yan in the SMEFT

When working in the context of the SMEFT, the WCs can be mapped to the form factor description of the scattering process by suitable matching conditions [241]. Writing the SMEFT Lagrangian as

$$\begin{aligned} {\mathcal {L}}_\textrm{SMEFT}\ {}&=\ {\mathcal {L}}_\textrm{SM}\ +\ \sum _{d,k} \frac{{\mathcal {C}}_k^{(d)}}{\Lambda ^{d-4}}\,{\mathcal {O}}^{(d)}_k\ \nonumber \\&\quad +\ \sum _{d,k} \bigg [\frac{{\widetilde{{\mathcal {C}}}}_k^{(d)}}{\Lambda ^{d-4}}\,{\widetilde{{\mathcal {O}}}}^{(d)}_k\,+\,\mathrm {h.c.} \bigg ] \,, \end{aligned}$$
(3.12)

the cross-section, up to \({\mathcal {O}}(\Lambda ^{-4})\), can be schematically written as

$$\begin{aligned}&{\hat{\sigma }}\, \sim \, \int [\textrm{d}\Phi ]\,\Bigg \{ \vert {\mathcal {A}}_{\textrm{SM}} \vert ^2 + \frac{v^2}{\Lambda ^2}\sum _i 2\,\textrm{Re}\Big ({\mathcal {A}}_i^{(6)}\,{\mathcal {A}}_{\textrm{SM}}^*\Big ) \end{aligned}$$
(3.13)
$$\begin{aligned}&+ \frac{v^4}{\Lambda ^4}\bigg [\sum _{ij}2\,\textrm{Re}({\mathcal {A}}_i^{(6)}\,{\mathcal {A}}_j^{(6)\,*}) \nonumber \\&\qquad + \sum _{i} 2\,\textrm{Re}\Big ({\mathcal {A}}_i^{(8)}\,{\mathcal {A}}_{\textrm{SM}}^*\Big )\bigg ] + \dots \Bigg \} \end{aligned}$$
(3.14)

where \({\mathcal {A}}_i^{(6)}\) (\({\mathcal {A}}_i^{(8)}\)) indicates the contribution from dimension-six (dimension-eight) operators. The classes of operators contributing to Drell-Yan up to this order are summarized in Table 6.

Table 6 Classes of SMEFT operators relevant to the high-\(p_T\) observables and corresponding energy scaling of the amplitude. The parameter counting takes into account all structures entering the Drell-Yan cross-section up to \({\mathcal {O}}(1/\Lambda ^{4})\). The operator bases for dimension-six and dimension-eight operators are taken from [4] and [87], respectively

4.2.3 Collider limits

In order to compare the theory prediction for the cross-section with the searches performed by the experimental collaborations, detector effects, such as limited resolution or acceptance, must be taken into account. For binned distributions, this is done by introducing a response matrix K, such that

$$\begin{aligned} \sigma _q(x_{\textrm{obs}}) = \sum _{p=1}^M K_{pq} \sigma _p (x) \,, \end{aligned}$$
(3.15)

where x indicates a generic particle-level observable, divided into M bins, and \(x_{\textrm{obs}}\) is the experiment-level observable. \(\sigma _q\) here indicates the cross-section for bin q. The matrix K needs to be extracted from Monte Carlo simulation for each independent combination of form factors. With all the elements described so far, one can define a \(\chi ^2\) likelihood as

$$\begin{aligned} \chi ^2(\theta )=\sum _{A\in {\mathcal {A}}}\left( \frac{{\mathcal {N}}_A(\theta )+{\mathcal {N}}^{b}_A-{\mathcal {N}}^\textrm{obs}_A}{\Delta _A}\right) ^2\,, \end{aligned}$$
(3.16)

where \({\mathcal {N}}_A^b\) is the number of background events and \({\mathcal {N}}_A^\textrm{obs}\) the number of observed events in bin A, both provided by the experimental collaborations. The uncertainty \(\Delta _A\) is obtained by adding in quadrature the background and observed uncertainties, \(\Delta _A^2=(\delta {\mathcal {N}}^b_A)^2+ {\mathcal {N}}_A^{\textrm{obs}}\), where the last term corresponds to the Poissonian uncertainty of the data. \({\mathcal {N}}_A(\theta )\), on the other hand, is the predicted number of events in bin A, depending on the new physics parameters \(\theta \). HighPT includes recasts from ATLAS and CMS searches for all possible dilepton (ee, \(\mu \mu \), \(\tau \tau \), \(e\mu \), \(e\tau \), \(\mu \tau \)) and monolepton (\(e\nu \), \(\mu \nu \), \(\tau \nu \)) final states [240], such that a likelihood, written as a polynomial in the WCs, can be obtained for each of them.
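
The construction of Eqs. (3.15) and (3.16) can be condensed into a few lines; the following Python sketch folds particle-level cross-sections into observed bins and evaluates the \(\chi ^2\) (the response matrix and all numbers are arbitrary placeholders, and the conversion to event yields via a luminosity factor is purely schematic).

```python
import numpy as np

def chi_square(sigma_particle, K, lumi, N_bkg, dN_bkg, N_obs):
    """Schematic version of Eqs. (3.15)-(3.16): fold particle-level cross-sections
    with the response matrix K, convert to event yields, and build the chi^2."""
    sigma_obs = K.T @ sigma_particle   # Eq. (3.15): sigma_q(x_obs) = sum_p K_pq sigma_p(x)
    N_th = lumi * sigma_obs            # predicted signal events per observed bin
    delta_sq = dN_bkg**2 + N_obs       # background uncertainty^2 + Poissonian term
    return float(np.sum((N_th + N_bkg - N_obs)**2 / delta_sq))

# arbitrary placeholders: 3 particle-level bins mapped onto 2 observed bins
sigma_particle = np.array([0.08, 0.03, 0.01])          # cross-sections in fb
K = np.array([[0.9, 0.1], [0.2, 0.7], [0.0, 0.6]])     # response matrix K_pq
print(chi_square(sigma_particle, K, lumi=140.0,        # integrated luminosity in fb^-1
                 N_bkg=np.array([120.0, 30.0]),
                 dN_bkg=np.array([12.0, 6.0]),
                 N_obs=np.array([115.0, 40.0])))
```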

4.2.4 Using HighPT: an example

In order to briefly illustrate the main features of HighPT, we show here an explicit example; for a detailed review of all functionalities see [240]. The main routine of the package is the function ChiSquareLHC, which yields the \(\chi ^2\) likelihood as a list, with each element corresponding to a bin, e.g. in \(m_{\ell \ell }^2\). Consider the dimuon search by CMS [242] and the dimension-six coefficients \([{\mathcal {C}}_{lq}^{(1)}]_{2211}\), \([{\mathcal {C}}_{lq}^{(1)}]_{2222}\), as in [4]. The likelihood can be extracted as

[HighPT code listing: call to ChiSquareLHC for the CMS dimuon search with the two selected coefficients]

which computes the \(\chi ^2\) keeping only the specified operators, up to \({\mathcal {O}}(\Lambda ^{-4})\). The default setting is \(\Lambda = 1\) TeV, but this can be changed at any time, together with the order of the EFT truncation [240]. Within the same framework, one can compute the projected likelihood for the HL-LHC by

[HighPT code listings: calls to ChiSquareLHC with the two HL-LHC projection options]

where the first option corresponds to a rescaling of the background uncertainty by \(\Delta {\mathcal {N}}_A^b \rightarrow (L_\textrm{projected}/L_\textrm{current})^{1/2} \,\Delta {\mathcal {N}}_A^b=\sqrt{3000/140}\,\Delta {\mathcal {N}}_A^b\), while the second is the likelihood computed assuming that the ratio of background error over background is constant, i.e. \(\Delta {\mathcal {N}}_A^b/{\mathcal {N}}_A^b=\text {const}\). Minimizing these likelihoods, one can plot for example the 95% C.L. contours as in Fig. 6.

Fig. 6 \(95\%\) CL regions for a fit of the given WCs to the dimuon search [242]. The blue region corresponds to the current constraints, whereas the orange and green regions correspond to projections for the HL-LHC

4.2.5 Summary and outlook

We have introduced HighPT, a Mathematica package designed to translate the data from Drell-Yan searches at the LHC into a likelihood function in terms of WCs. It is worth stressing that, although the focus of this brief overview has been on the SMEFT, HighPT currently also includes a set of leptoquark mediators, allowing the possible propagation effects of such new states to be included in the computation of the cross-section [240]. We have shown in a short example how the \(\chi ^2\) can be computed, including an option for HL-LHC projections. Future directions of development for the package include the implementation of electroweak and low-energy observables, in order to obtain a global likelihood for combined analyses in a unified framework. Another possible extension is the inclusion of more high-\(p_T\) observables related to semi-leptonic interactions, such as processes with a jet in the final state, and the inclusion of processes mediated by four-quark operators.

4.3 EOS: Flavor Phenomenology with the EOS Software


Recent studies in flavor physics follow a consistent pattern in which large amounts of experimental data are analyzed to infer theory parameters of the SM and of BSM physics. Constraining the WCs of effective field theories has proven particularly useful, as they provide a model-independent framework to study BSM scenarios. In this context, performing the flavor analyses in a separate software package and exporting the resulting constraints as likelihoods in the space of WCs is crucial for model building.

EOS [243, 244] is such a software package. It is an open-source flavor code dedicated to the calculation of observables and the inference of theory parameters from an extendable list of models and constraints. It is particularly suited to the extraction of constraints on the parameters of effective field theories in the context of global analyses. It is written for three main use cases:

  • the numerical prediction of experimental observables with a wide range of theoretical and statistical techniques;

  • the inference of theory parameters from an extensible database of experimental and theoretical likelihoods;

  • and the production of Monte Carlo samples that can e.g. be used to study the experimental sensitivity to a specific observable.

EOS is written in C++ but offers a rich Python interface meant to be used e.g. within a Jupyter notebook.

4.3.1 Installation and documentation

EOS can be installed using the Python package installer:

[shell command: installation of the EOS Python package via pip]

and the Python module can be accessed using

[Python statement importing the eos module]

The EOS documentation [245, 246] contains further installation instructions, basic tutorials, as well as detailed examples of advanced usage.

4.3.2 How to derive flavor constraints in the SMEFT

The two main objects in EOS are Observable and Analysis. The former allows the user to compute any built-in (pseudo-)observable by specifying a set of parameters, options and kinematics. The pre-built EOS observables are classified by their QualifiedNames and can be listed using the Observables command. An up-to-date list can also be found in the documentation. Experimental measurements and theory constraints are expressed in terms of likelihoods via the Constraint class.

The Analysis class evaluates a set of constraints within parameter ranges provided by the user. Once the analysis object is defined, it can be optimized to identify the best-fit point(s), and it accepts sampling routines.

A SMEFT analysis therefore consists of the following steps:

  1. List the experimental and theory constraints relevant for the analysis. New constraints can be added using manual_constraints.

  2. List the relevant nuisance parameters. The parameters of interest are the WCs of the effective theory relevant to the observables (e.g. “ubmunumu::Re{cVL}” for a study of \(B\rightarrow \pi \mu \nu _\mu \)). The matching from the low-energy effective theory to the SMEFT is performed at a later stage.

  3. Create an Analysis object, specifying model: LEFT as a global option. This analysis can be optimized to find the best-fit point and the corresponding goodness-of-fit information.

  4. Create posterior-predictive samples of the analysis, using one of the sampling routines: sample_mcmc, sample_pmc or sample_nested (EOS > v1.0.5).

  5. After marginalizing over the nuisance parameters, the samples can be exported to any matching software (e.g. wilson [108]) and converted to SMEFT parameters using the EOS basis of the WCxf format [247].

Alternatively, WCs can be imported from wilson directly into a Parameters object using the FromWCxf routine.

4.3.3 EOS vs. other flavor software

EOS has been developed since 2011 [243] and has been used in many phenomenological studies (see e.g. [248,249,250,251,252,253,254] for the most recent ones). It is, however, not the only openly available flavour software and competes, among others, with flavio [217], SuperIso [218, 219], HEPfit [255] and FlavBit [220]. The unique features of EOS are described below.

  • EOS is particularly suited to studying and comparing different models of hadronic matrix elements (theory calculations, parameterizations, ...). It therefore implements the possibility to select from various hadronic models at run time. As far as theory calculations are concerned, EOS implements all the necessary tools for the evaluation of these elements using QCD sum rules.

  • The careful implementation of hadronic matrix elements makes it the primary tool for a simultaneous inference of hadronic and new physics parameters. The underlying correlations are of primary importance when combining many experimental results.

  • EOS also offers the possibility of producing pseudo-events from an extensible set of PDFs. These events can then be used, e.g. for sensitivity studies and in preparation for experimental measurements.

4.3.4 Recent and future developments

EOS development is done via its GitHub page, where the issue tracker allows the user to ask for new features, observables or constraints. The long-term plans are also discussed via the discussion panel.

In parallel with the implementation of new observables, parameterizations and recent experimental results, considerable work has been done on the improvement of the statistical tools. This development was performed in preparation for new phenomenological analyses, which now usually involve \({\mathcal {O}}(100)\) nuisance parameters. Such large numbers make approaches based on basic Monte Carlo techniques inefficient, if not impossible. EOS now offers an interface to the dynesty [256, 257] package to make full use of nested sampling algorithms.

In the long term, EOS will contain “pre-packaged” low-energy analyses. The idea is to simplify the use of low-energy constraints in the development of new physics models. In particular, following the steps described above can be particularly time- and CPU-demanding when the number of nuisance parameters is large. This is typically the case in flavor physics due to the involved parameterizations of the hadronic form factors. For example, the extraction of the WCs of the \(b\rightarrow s\mu \mu \) weak effective theory using \(B\rightarrow K\mu \mu \), \(B\rightarrow K^*\mu \mu \) and \(B_s\rightarrow \phi \mu \mu \) requires at least 130 parameters for a consistent description of the hadronic transitions [253]. Provided that these nuisance parameters are uncorrelated with the other parameters entering a global SMEFT analysis, repeating this analysis in its entirety would be pointless and computationally challenging.

We therefore propose to simplify the publication of likelihoods containing only the parameters of interest (the WCs in this case). The posterior densities can be fitted with a Gaussian Mixture Model and used in EOS or other flavor software.
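The last step of this proposal can be sketched with standard Python tooling. The snippet below is a minimal illustration (it is not part of EOS) of how posterior samples of the WCs of interest could be compressed into a Gaussian Mixture Model and re-used as a likelihood; the toy samples stand in for a real marginalized posterior.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Posterior samples of the parameters of interest (the WCs), e.g. obtained
    # after marginalizing over the hadronic nuisance parameters; here we
    # generate toy samples for illustration only.
    rng = np.random.default_rng(0)
    wc_samples = rng.multivariate_normal(mean=[0.0, -0.4],
                                         cov=[[0.04, 0.01], [0.01, 0.09]],
                                         size=20000)

    # Fit a Gaussian Mixture Model to the samples; the number of components
    # is a modelling choice to be validated against the original posterior.
    gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
    gmm.fit(wc_samples)

    # The fitted mixture can now serve as a "pre-packaged" likelihood:
    # log-density evaluated at arbitrary points of the WC space.
    test_points = np.array([[0.0, -0.4], [0.5, 0.0]])
    print(gmm.score_samples(test_points))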

4.4 HEPfit: effective field theory analyses with HEPfit


HEPfit is a tool developed to facilitate the combination of all the different types of available constraints that can be used to learn about the parameter space of the SM or of any new physics model. In the case of new physics, these constraints include experimental searches for the direct production of new particles (direct searches) as well as measurements of SM processes looking for deviations from the SM predictions (indirect searches). The code offers great flexibility in the form in which the experimental likelihoods for these searches can be implemented, allowing e.g. correlations, binned measurements or non-Gaussian likelihoods. Theory constraints, such as unitarity, and theory uncertainties (including correlations) can also be taken into account in the analysis of a desired model.

The above-mentioned types of information can be combined and used to sample the model parameter space via the built-in Bayesian Markov Chain Monte Carlo (MCMC) engine, which uses the Bayesian Analysis Toolkit (BAT) library [258]. This enables Bayesian statistical inference of the model parameters. The Bayesian analysis framework is parallelized with MPI, so it can be run on clusters and on CPUs capable of multi-thread computing. Alternatively, HEPfit can be used in library mode to compute predictions for observables, which can then be used to perform inference in any other statistical framework. To use HEPfit’s Bayesian framework, the user only needs to provide the priors for the model input parameters, the observables to be included in the likelihood calculation and the settings of the MCMC. Examples can be found in Section 7 of Ref. [255].

Another important feature of HEPfit is that, aside from the observables and models currently implemented in the code (the latter including the SM and several new physics scenarios), the user can implement their own custom observables and/or models as external modules.

On the technical side, HEPfit is developed in C++ and requires a series of mandatory dependencies, such as the GNU Scientific Library, the BOOST libraries, and ROOT. To use the HEPfit MCMC engine, BAT is also required. Finally, to enable the parallel use of HEPfit, one needs OpenMPI. See the Installation section in [255] for more details.

Aside from the SM, the current version of HEPfit already includes several BSM models, such as two-Higgs-doublet models [259], as well as several model-independent frameworks for the phenomenological description of new physics effects using EFTs. The implementation of these EFTs is briefly described in the next section, following the status of the most up-to-date (developer’s) version of the code, which can be found in the Downloads area of https://hepfit.roma1.infn.it. These features are expected to appear in the next public release of HEPfit.

4.4.1 Effective field theory implementation in HEPfit

Assuming that BSM physics is characterized by a mass scale \(\Lambda \) and that for energies \(E\ll \Lambda \) the particle spectrum and symmetries of nature are those of the SM, two types of EFT can be used to describe the physics at such energies: the SMEFT (see e.g. Ref. [260]), where the Higgs-like boson is embedded in a \(SU(2)_L\) doublet as in the SM; and the HEFT, where the Higgs boson is described by a singlet scalar state, i.e. not belonging to an \(SU(2)_L\) doublet. Most of the current development in HEPfit is focused on the SMEFT, whose power counting follows an expansion in operators of increasing canonical mass dimension, and thus BSM effects are suppressed by correspondingly larger powers of the EFT cut-off scale \(\Lambda \). Hence, the effective Lagrangian expansion takes the following form,

$$\begin{aligned} \mathcal{L}_{\textrm{eff}}=\mathcal{L}_{\textrm{SM}} + \frac{1}{\Lambda }\mathcal{L}_5 + \frac{1}{\Lambda ^{2}}\mathcal{L}_6 + \ldots ,~~~~\mathcal{L}_d=\sum _i C_i^{(d)} \mathcal{O}_i^{(d)}, \end{aligned}$$
(3.17)

where \(\mathcal{O}_i^{(d)}\) are operators of mass dimension d and \(C_i^{(d)}\) the corresponding WCs. The first term, \(\mathcal{L}_5\), only contains the lepton-number-violating Weinberg operator. In a lepton-number preserving theory the leading order (LO) new physics effects are therefore given by the dimension-six operators in \(\mathcal{L}_6\) and these are the effects implemented in HEPfit.

In the current implementation of SMEFT effects in HEPfit, which can be found in the so-called NPSMEFTd6 model class, new physics contributions from dimension-six operators are considered for several types of observables:

  • Electroweak precision measurements (Z-pole observables at LEP/SLD and measurements of the W mass and decay widths). These are implemented to the state-of-the-art precision in the SM and to LO in the SMEFT [261, 262].

  • Diboson production at LEP2 [263] and the LHC [264].

  • LHC Higgs measurements, including the signal strengths for the different production and decay modes, as well as the Simplified Template Cross Section bin parameterization from [265]. A comprehensive set of Higgs observables at future \(e^+ e^-\) or \(\mu ^+ \mu ^-\) colliders at different energies, with or without polarization, is also available in the code, for future collider studies [266,267,268].

The current version of the code allows the use of either the \(\left\{ M_Z, \alpha , G_F\right\} \) or the \(\left\{ M_Z, M_W, G_F\right\} \) schemes for the SM electroweak input parameters for most of these observables.

A comprehensive set of top-quark observables at the LHC is also available in HEPfit, via the NPSMEFT6dtopquark model class used in [269]. These include differential cross section measurements of \(t{\bar{t}}Z\) and \(t{\bar{t}}\gamma \) processes and inclusive cross sections for \(t{\bar{t}}W\), \(t{\bar{t}}H\) and single top processes. (See Fig. 7 right.) These top-quark observables are also being implemented as part of the main NPSMEFTd6 class for global analyses.

Fig. 7 (Left) Constraints on SMEFT modifications of electroweak, Higgs and anomalous triple gauge couplings at future circular \(e^+ e^-\) colliders. From Ref. [267]. (Right) Constraints from different top processes on top dipole operators. From Ref. [269]

For the above-mentioned set of observables, new physics corrections are currently implemented at the linear level in \(1/\Lambda ^2\),

$$\begin{aligned} O=O_{\textrm{SM}}+ \sum _i F_i \frac{C_i}{\Lambda ^2}. \end{aligned}$$
(3.18)

The coefficients \(F_i\) parametrizing the dependence on the WCs \(C_i\) are computed at leading order: analytically in the case of the electroweak precision measurements, and numerically for the LHC Higgs and top-quark observables, by fitting Eq. (3.18) to the results of MadGraph5_aMC@NLO [111] simulations using our own UFO implementation of the SMEFT or any of the models available in the literature, e.g. SMEFTsim [223] or SMEFT@NLO [270]. Our expressions are given in the so-called Warsaw basis [4], but we give the user the possibility of choosing some operators in other bases as model parameters, in which case the corresponding expressions are obtained via the SM equations of motion. Different flavor assumptions can be chosen for fermionic operators, not restricted to flavor universality.
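The numerical determination of the coefficients \(F_i\) can be illustrated with a simple least-squares fit. The sketch below is generic (it is not HEPfit code), and the benchmark points and "simulated" observable values are placeholders standing in for the MadGraph5_aMC@NLO output.

    import numpy as np

    # Toy setup: an observable is simulated at several benchmark points of the
    # WCs (here two coefficients C_1, C_2, with Lambda = 1 TeV absorbed into
    # them) and the linear parameterization O = O_SM + sum_i F_i * C_i is fitted.
    benchmarks = np.array([
        [0.0, 0.0],   # SM point
        [1.0, 0.0],
        [0.0, 1.0],
        [1.0, 1.0],
        [-1.0, 0.5],
    ])
    # Placeholder "simulated" values of the observable at the benchmark points
    observable = np.array([1.00, 1.12, 0.95, 1.07, 0.91])

    # Design matrix: a constant column for O_SM plus one column per WC
    design = np.hstack([np.ones((len(benchmarks), 1)), benchmarks])
    coeffs, *_ = np.linalg.lstsq(design, observable, rcond=None)

    O_SM, F = coeffs[0], coeffs[1:]
    print("O_SM =", O_SM, " F_i =", F)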

Flavor physics is another sector that has been the focus of attention during the development of HEPfit [271,272,273,274,275], with multiple \(\Delta F=2\) and \(\Delta F=1\) observables included in the code. As in the case of the electroweak precision measurements, the SM predictions have been implemented including all available corrections. New physics corrections are implemented as a function of the WCs of the LEFT, and the full matching with the SMEFT is currently work in progress (so far it is only implemented for the interactions relevant for the analysis of the B anomalies [274, 275]). Combined analyses of flavor physics with electroweak precision observables can be found in, e.g. [272, 276], see Fig. 8.

Fig. 8 Results from a combined fit to electroweak precision data and flavor observables. From Ref. [276]

As mentioned above, most of the SMEFT effects implemented in HEPfit are currently available at leading order and at \(\mathcal{O}(1/\Lambda ^2)\). Part of the work to extend such calculations includes the implementation of \(\mathcal{O}(1/\Lambda ^4)\) effects, see e.g. [277], and the full renormalization group running [8,9,10] via the integration of RGESolver [109] in HEPfit. The remaining contributions needed to obtain the full next-to-leading order (NLO) calculation of given observables are becoming increasingly available in the literature (e.g. [278, 279]) and will be gradually implemented.

Finally, aside from the SMEFT implementation, a model describing the HEFT corrections to single-Higgs processes is also available in HEPfit. These corrections account for the effects of the leading-order HEFT Lagrangian, using a power counting in terms of chiral dimensions [280]: all operators of chiral dimension two are included, together with several operators of chiral dimension four, to parameterize local contributions from new-particle loops in \(H\rightarrow gg, \gamma \gamma \) and \(Z\gamma \). The results from a global analysis using LHC run 1 and 2 Higgs data in this HEFT formalism can be found in Ref. [281].

4.5 SmeftFR v3: a tool for creating and handling vertices in SMEFT

Athanasios Dedes, Janusz Rosiek and Michal Ryczkowski

The abundance of parameters and interaction vertices in the SMEFT requires automation. The scope of the SmeftFR v3 code [282] is to derive the Feynman rules for the interaction vertices of dimension-5, dimension-6 and, so far, all bosonic dimension-8 operators; these can easily be imported into other codes, such as FeynArts [190] and MadGraph5 [111], for further symbolic or numeric calculations of matrix elements and cross-sections.

SmeftFR starts from the most commonly used dimension-5 and -6 “Warsaw” basis [4] and the dimension-8 basis of Ref. [87] of operators in the unbroken phase and, following the steps of Ref. [283], generates all relevant Feynman rules in the physical mass basis, quantized in the unitary or \(R_\xi \) gauges. It is written in the Mathematica language and uses the package FeynRules [155]. SmeftFR is open access and publicly available, and can be downloaded from www.fuw.edu.pl/smeft.

There are several advances in SmeftFR v3 [282] compared to its predecessor SmeftFR v2 [284]. Apart from general optimization and speed-ups of the code, SmeftFR v3 can calculate vertices consistently up to order \(1/\Lambda ^4\) in the EFT expansion, including terms quadratic in dim-6 WCs and linear in bosonic dim-8 WCs. Particularly important is that SmeftFR v3 is able to express the SMEFT interaction vertices directly in terms of a chosen set of observable input parameters, avoiding the need for reparametrizations of transition amplitudes calculated in terms of SM gauge and Higgs couplings. For convenience, SmeftFR v3 is augmented with two predefined input parameter schemes in the electroweak sector, including corrections of order \(O(1/\Lambda ^4)\):

  • The GF-scheme with input parameters \((G_F,M_Z,M_W,M_H)\),

  • The AEM-scheme with input parameters \((\alpha _{em},M_Z,M_W,M_H)\).

Moreover, SmeftFR v3 employs the flavour input scheme of Ref. [232], which determines the SMEFT-corrected CKM matrix elements directly from flavour observables.

4.5.1 SmeftFR v3 by an example

All details about the physics and usage of SmeftFR v3 are presented in [282]. To get the essence of what SmeftFR can do in practice, it is best to study a step-by-step example for a given set of dim-6 and dim-8, CP-even, operators. The processes we have in mind are vector-boson scattering at the LHC. The subsequent steps follow the Mathematica notebook file provided in the SmeftFR distribution, SmeftFR-init.nb. After loading the FeynRules and SmeftFR codes, we first need to choose the set of operators (in the gauge basis). For these processes, we set:

figure do

The naming of operators is given in App. B of Ref. [282], e.g. \(Q_{\varphi \Box } \rightarrow \) “phiBox”, \(Q_{\varphi ^4 D^4}^{(1)} \rightarrow \) “phi4n1”, etc. The next step is to initialize the SMEFT Lagrangian with a chosen set of available options:

figure dp

Here we choose to generate vertices in the \(R_\xi \) gauges, up to the EFT expansion order \(1/\Lambda ^4\) and with at most 4 external legs (this option does not affect the UFO and FeynArts file generation, where there is no such restriction). We have also chosen to use the \(G_F\) input parameter scheme and no SMEFT corrections to the CKM matrix. Moreover, we use real numerical parameter values for the WCs (as required by MadGraph5), taken from the file WCxfInput. The next step is to load the parameters’ model-file, calculate the Lagrangian in the gauge basis, find the field bilinears and diagonalize the mass matrices to maximal order \(1/\Lambda ^4\), and finally obtain the SMEFT Lagrangian in the mass basis and generate the Feynman rules, at this stage keeping the field redefinitions necessary to canonicalize the Lagrangian as symbols, without expanding them in powers of \(1/\Lambda \). Up to this point, the program takes about 7 minutes on a typical laptop. The vertices obtained in this form are stored in the file output/smeft_feynman_rules.m.

Now we are ready to expand the field-redefinition parameters and read the full vertices in the user’s \(G_F\)-scheme adopted previously. We use the FeynRules command to select the \(h\gamma \gamma \) vertex and obtain:

figure dq
figure dr

As expected from gauge invariance, the resulting vertex is entirely proportional to the Lorentz factor \((p_1^{\mu _2}p_2^{\mu _1} - g^{{\mu _1}{\mu _2}} p_1\cdot p_2)\). In this vertex, there are terms linear in dim-6 WCs plus quadratic terms of the form (dim-6)\(^2\). Although the chosen set of WCs associated with dim-8 operators does not appear in the \(h\gamma \gamma \) vertex, they do appear in the following example:

figure ds
figure dt

The quartic Z-vertex is generated for the first time at the dim-8 level, from the chosen operators \(Q_{\phi ^4 D^4}^{(1),(2),(3)}\)! The user may investigate further new vertices that do not appear up to the dim-6 level. Finally, we note that analogous vertices can be extracted at this stage in the standard SM parametrization (the \(({\bar{g}},{\bar{g}}',v)\)-scheme), or even in the version of this scheme with unexpanded field redefinitions.

If we now want to continue with the interfaces to the LaTeX, UFO, FeynArts and WCxf formats, we have to Quit[] the Mathematica kernel and open the notebook SmeftFR_interfaces.nb located in the home directory of the SmeftFR distribution. We again load the FeynRules and SmeftFR engines and reload the mass-basis Lagrangian by typing:

figure du

where the “user” input scheme from the previous session (i.e. the \(G_F\)-scheme) is used, the expansion is up to \(1/\Lambda ^4\), etc. (we do not include 4-fermion operators, so this option is irrelevant for the set of operators chosen in this example). The whole SMEFT Lagrangian in the mass basis is finally stored in the variable SMEFT$MBLagrangian for further use by the interface routines.

At this point, we can continue by exporting the numerical values of the WCs from the FeynRules model-file to a file in the WCxf format. The created file can be used to transfer numerical values of WCs to other codes that also support the WCxf format. In addition, as already possible in SmeftFR v2, we can generate a LaTeX file with the vertices and the corresponding Feynman graphs. Since the resulting expressions for the (dim-6)\(^2\) and dim-8 contributions are (usually) too long, we keep only the linear dim-6 terms in the LaTeX output.

4.5.2 SmeftFR UFO and MadGraph5

SmeftFR v3 provides a new routine for producing UFO model files, useful for running realistic Monte Carlo simulations, which replaces the standard FeynRules one. It assigns the correct “interaction orders” for both the SM couplings and the higher-order operators, as required by MC generators to properly truncate the transition-amplitude calculations, and reads:

figure dv

For details, including several comparisons to other existing codes such as SMEFT@NLO [285], Dim6Top [286] and SMEFTsim [287], the user should consult Ref. [282]. The generation of the UFO model files (especially in the “user” scheme) is a time-consuming process. For this particular example, it took about 2 hrs to generate the output/UFO directory. Moreover, the resulting UFO model-file may lead to lengthy calculations in MadGraph5 itself. If the goal is to examine the influence of one SMEFT operator at a time on the chosen set of processes, one may either start with the model containing several SMEFT operators and manually set only one of them to a non-zero value using MadGraph5’s set command (e.g. set CW 1e-06), or produce separate models, each containing one of the SMEFT operators, and load a different model before each run. Both options lead to the same results, but the latter may be especially attractive to users with limited CPU facilities. Whichever option is chosen, one must copy the produced UFO model directory to the models directory of MadGraph5 and then import it with the command import model UFO. We are now ready to generate matrix elements and cross-sections with MadGraph5.

For example, the cross-section for vector-boson scattering at the LHC is calculated with: generate p p > w+ w+ j j QCD=0 (additionally specifying NP=0 for the SM, NP<=1 for the \({\mathcal {O}}(\Lambda ^{-2})\) and NP<=2 for the \({\mathcal {O}}(\Lambda ^{-4})\) order). In order to highlight the significance of the quadratic (dim-6)\(^2\) corrections, we adopt large values for the input WCs, which could arise from a hypothetical strongly coupled sector. The resulting cross-sections are given in Table 7, with further definitions in its caption. As we can see, the quadratic effects for \((C_W)^2\) are a factor of 4400 larger than the linear contributions. For the pure scalar operators, the effects of the (dim-6)\(^2\) terms depend on the sign of \(C_{\varphi \Box }\), while the dim-8 coefficients \(C_{\varphi ^4 D^4}^{(i)}\) have an impact of about 100 on the cross-section. The tendency in the results for the pure scalar operators, presented in Table 7, follows the analytic amplitude expression for \(W_L^+W_L^+\) scattering in Eq. (3.19) below. To our knowledge, the effects of these (dim-6)\(^2\) and dim-8 modifications to the cross-section appear here for the first time in the literature.

Table 7 Cross-sections (in pb) obtained using MadGraph5 v3.4.1 with the UFO models provided by SmeftFR v3 at the orders \({\mathcal {O}}(\Lambda ^{-2})\) and \({\mathcal {O}}(\Lambda ^{-4})\) in the EFT expansion for the p p > w+ w+ j j QCD=0 process at the LHC with \(\sqrt{s}=13\) TeV and cuts \(\Delta \eta _{jj}> 2.5\), \(m_{jj}>500\) GeV. Simulations are performed in the default “\(G_F\)” electroweak input scheme with default numerical values of the input parameters. For each run, only one of the WCs is assigned a non-zero value, equal to \(\frac{C_i}{\Lambda ^2}=\frac{4\pi }{\text {TeV}^2}\) for dim-6 and \(\frac{C_i}{\Lambda ^4}=\frac{(4\pi )^2}{\text {TeV}^4}\) for dim-8 operators
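To make the role of the two truncation orders explicit, the interference and quadratic pieces can be extracted from the cross-sections obtained at NP<=1 and NP<=2 for a single non-zero WC. The snippet below is a schematic illustration with placeholder numbers; it does not reproduce the actual values of Table 7.

    # Schematic decomposition of the EFT cross-section for a single non-zero WC c:
    #   sigma(NP<=1) = sigma_SM + c * sigma_int                      (O(Lambda^-2))
    #   sigma(NP<=2) = sigma_SM + c * sigma_int + c^2 * sigma_quad   (O(Lambda^-4))
    # so the interference and quadratic pieces follow by simple subtractions.

    c = 4 * 3.141592653589793      # example: C_i/Lambda^2 = 4*pi / TeV^2, as in Table 7
    sigma_SM  = 1.0e-2             # placeholder cross-sections in pb
    sigma_NP1 = 1.1e-2             # from the run truncated at O(Lambda^-2)
    sigma_NP2 = 5.5e-1             # from the run truncated at O(Lambda^-4)

    interference = sigma_NP1 - sigma_SM    # = c * sigma_int
    quadratic    = sigma_NP2 - sigma_NP1   # = c^2 * sigma_quad (plus linear dim-8, if present)
    print("quadratic/linear enhancement:", quadratic / interference)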

4.5.3 SmeftFR to FeynArts

SmeftFR can generate a FeynArts output by just using the native FeynRules command,

figure dw

This is also a time-consuming stage (about twice the time of the UFO file generation). The generated files are stored in the output/FeynArts directory. We can use the files with suffixes *.gen, *.mod and *.pars in the patched FeynArts program, FormCalc [288] or FeynCalc [289]. As an example, we create the tree-level diagrams for vector-boson scattering, \(W^+ W^+ \rightarrow W^+ W^+\), and isolate the longitudinal W-bosons, \(W_L^+\). We obtain the tree amplitude at high energies, expanded for \(s\gg M_W^2\), with \(\theta \) being the scattering angle,

$$\begin{aligned} {\mathcal {M}}_{W_L^+W_L^+ \rightarrow W_L^+ W_L^+}(s,\theta )&= -2 \sqrt{2}\, G_F M_H^2 \left[ 1 - \frac{M_Z^2}{M_H^2}\left( 1-\frac{4}{\sin ^2\theta } \right) \right] \quad \mathbf {(SM)} \nonumber \\&\quad + \left( 2 C_{\varphi \Box } + C_{\varphi D} \right) \frac{s}{\Lambda ^2} \quad \mathbf {(dim-6)} \nonumber \\&\quad + \left[ 8 C_{\varphi ^6 \Box } + 2 C_{\varphi ^6 D^2} + 16 (C_{\varphi \Box })^2 + (C_{\varphi D})^2 - 8 C_{\varphi \Box }\, C_{\varphi D} - 16 \left( C_{\varphi ^4 D^4}^{(1)}+2 C_{\varphi ^4 D^4}^{(2)}+C_{\varphi ^4 D^4}^{(3)}\right) G_F M_W^2 \right] \frac{\sqrt{2}}{8\, G_F \Lambda ^2} \frac{s}{\Lambda ^2} \quad \mathbf {(dim-6)^2} \nonumber \\&\quad + \left[ (3 + \cos 2\theta ) \left( C_{\varphi ^4 D^4}^{(1)} + C_{\varphi ^4 D^4}^{(3)} \right) + 8 C_{\varphi ^4 D^4}^{(2)} \right] \frac{s^2}{8 \Lambda ^4} \quad \mathbf {(dim-8)} . \end{aligned}$$
(3.19)

This result agrees, up to linear dim-6 operators, with Ref. [290], whereas all other contributions, the quadratic (dim-6)\(^2\) and the linear dim-8 effects, are new. The advantage of using an \(R_\xi \)-gauge (here the Feynman gauge) is that we can confirm this result by using the Goldstone-boson equivalence theorem, comparing Eq. (3.19) with the amplitude for charged Goldstone boson scattering, \(G^+ G^+ \rightarrow G^+ G^+ \). Indeed, we find agreement. This is a serious non-trivial check, since the Feynman diagrams involved in \(W_L^+ W_L^+\) elastic scattering contain in addition the coefficients \(C_W, C_{\varphi WB}, C_{\varphi B}, C_{\varphi W}\) in a complicated way, but in the end their contributions cancel out. We have also verified that in the “AEM” input scheme the combinations of WCs appearing in (3.19) are exactly the same and therefore, numerically, the result is identical. This is another check of the correctness of the SMEFT vertices generated by SmeftFR v3.

4.5.4 Conclusions

We briefly presented a step-by-step example illustrating the practical use and capabilities of the recently released SmeftFR v3 code [282]. SmeftFR provides Feynman rules for a desired set of WCs, consistently including corrections up to order \(O(1/\Lambda ^4)\) in the EFT expansion. SmeftFR generates interaction vertices in terms of chosen physical input parameters. Furthermore, SmeftFR offers LaTeX output, as well as UFO and FeynArts model-files useful for numerical and analytical calculations.

In Table 7 and in Eq. (3.19), we show an example in which (dim-6)\(^2\) and dim-8 operator effects should not be ignored when mapping experimental data onto the associated WCs. For such studies, SmeftFR v3 is a requisite tool.

4.6 Application of EFT tools to the study of positivity bounds

Mikael Chala

Positivity bounds are restrictions on the S-matrix of well-defined relativistic quantum theories that follow from locality, unitarity and crossing symmetry. In order to discuss the findings from Refs. [104, 125], let us first consider any such theory with a low-energy spectrum coinciding with that of the SM and with heavy fields of mass \(\sim M\). Let us focus on two-to-two Higgs scattering for simplicity, \(\phi \phi \rightarrow \phi \phi \). In the forward limit, the corresponding scattering amplitude satisfies \({\mathcal {A}}(s)={\mathcal {A}}(-s)\) due to crossing symmetry, and it is analytic everywhere in the complex plane of the Mandelstam invariant s up to certain “mild” singularities (definitely not as severe as delta functions [291]).

To a first approximation, the only singularities of \({\mathcal {A}}(s)\) are simple poles at \(s=\pm M^2\), from which it can easily be proven that \({\mathcal {A}}^{\prime \prime }(s=0) \ge 0\) [292]. But this positivity restriction is actually satisfied much more widely. For example, let us assume that the singularities of \({\mathcal {A}}(s)\) are branch cuts sitting along the \(\text {Re}(s)\) axis, with branch points at \(s\sim M^2\). Then, following Fig. 9, we can compute the quantity

$$\begin{aligned} \Sigma&\equiv \frac{1}{2\pi \textrm{i}}\int _\Gamma \text {d}s\, \frac{{\mathcal {A}}(s)}{s^3} , \end{aligned}$$
(3.20)

which fulfils

$$\begin{aligned} \Sigma&= \frac{1}{\pi \textrm{i}} \int _{M^2}^\infty \text {d}s\, \frac{1}{s^3}\lim _{\epsilon \rightarrow 0}\left[ {\mathcal {A}}(s+\textrm{i}\epsilon )-{\mathcal {A}}(s-\textrm{i}\epsilon )\right] \nonumber \\&= \frac{1}{\pi \textrm{i}} \int _{M^2}^\infty \text {d}s\, \frac{1}{s^3}\lim _{\epsilon \rightarrow 0}\left[ {\mathcal {A}}(s+\textrm{i}\epsilon )-{\mathcal {A}}(s+\textrm{i}\epsilon )^*\right] \nonumber \\&= \frac{2}{\pi }\int _{M^2}^\infty \text {d}s\, \frac{\sigma (s)}{s^2}\ge 0, \end{aligned}$$
(3.21)

where in the first equality we have used that, by virtue of the Froissart bound [293], the integral over the circular paths of \(\Gamma \) vanishes; in the second equality we have relied on the Schwarz reflection principle \({\mathcal {A}}(s^*)={\mathcal {A}}(s)^*\); and in the last step we have invoked the optical theorem, which relates the imaginary part of the forward amplitude to the total cross section \(\sigma (s)\).

Fig. 9 Singularities of the amplitude for scalar two-to-two scattering in the forward limit, and the contour of integration used in the derivation of positivity bounds

Now, by analyticity, and using Cauchy’s theorem, \(\Sigma \) can also be computed from the residue of \({\mathcal {A}}(s)/s^3\) at the origin, which is nothing but one half of the second derivative of the amplitude at \(s=0\), from which we conclude again that \({\mathcal {A}}^{\prime \prime }(s=0)\ge 0\). Similar results can be drawn even in the case in which the branch cut extends all the way to \(s=0\) [294].
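The identification of \(\Sigma \) with half the second derivative of the amplitude at the origin is easy to verify symbolically; the following minimal sympy check uses a generic even Taylor expansion of \({\mathcal {A}}(s)\) of the form introduced below in Eq. (3.22).

    import sympy as sp

    s, a0, a2, a4, L = sp.symbols('s a0 a2 a4 Lambda', positive=True)

    # Low-energy (EFT) expansion of the forward amplitude, even in s
    A = a0 + a2 * s**2 / L**4 + a4 * s**4 / L**8

    # Residue of A(s)/s^3 at the origin ...
    res = sp.residue(A / s**3, s, 0)

    # ... equals A''(0)/2, i.e. the coefficient a2/Lambda^4
    assert sp.simplify(res - sp.diff(A, s, 2).subs(s, 0) / 2) == 0
    print(res)   # -> a2/Lambda**4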

Because the amplitude in the vicinity of the origin can be computed within the EFT, the aforementioned restriction translates to bounds on the parameters of the EFT. Thus, if this process occurs already at tree level, then

$$\begin{aligned} {\mathcal {A}}(s) = a_0 + a_2 \frac{s^2}{\Lambda ^4} + a_4 \frac{s^4}{\Lambda ^8}+\cdots \end{aligned}$$
(3.22)

where \(\Lambda \sim M\) represents the cutoff of the EFT. (Note that odd terms in s, and in particular the linear one and hence contributions from dimension-six EFT interactions, are absent due to the invariance of the amplitude under \(s\rightarrow -s\).) From this equation and \({\mathcal {A}}^{\prime \prime }(s=0)\ge 0\), it can be concluded that \(a_2\ge 0\).

If the EFT amplitude vanishes at tree level, then the WCs \(a_i\) must be understood as evaluated at a scale \(\mu \ll \Lambda \). For \(a_2\) in particular:

$$\begin{aligned} a_2(\mu ) \sim a_2(\Lambda ) + \frac{1}{16\pi ^2} (\beta _{2} + \beta _{2}^\prime )\log {\frac{\mu }{\Lambda }}, \end{aligned}$$
(3.23)

where \(\beta _2\) and \(\beta _2^\prime \) are the one-loop beta-function contributions induced by dimension-eight operators and by pairs of dimension-six operators, respectively. From this schematic point of view, several interesting conclusions can be drawn [104, 125]:

  1. The matching contribution \(a_2(\Lambda )\) must always be non-negative at tree level.

  2. It must also be non-negative if it does not run at one loop.

  3. On the contrary, if it does run, then it can be negative.

  4. \(\beta _2\) must be non-positive, because \(\beta _2'\) can be neglected in the limit of vanishing gauge and Yukawa couplings of the UV.

  5. \(\beta _2^\prime \) must be non-positive whenever \(\beta _2\) is zero.

4.6.1 Explicit computations using EFT tools

Conclusions 1–5 above can be explicitly checked within different UV completions of the SM. For concreteness, we focus mostly on the restrictions ensuing from the processes \(\varphi _i\varphi _j\rightarrow \varphi _i\varphi _j\), with \(\varphi _i\) representing any of the real degrees of freedom of the Higgs doublet \(\phi \). We assume that the Higgs is massless.

The condition \(a_2\ge 0\) translates into the bounds \(c_{\phi ^4 D^4}^{(2)}\ge 0\), \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}\ge 0\) and \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\ge 0\), where \(c_{\phi ^4 D^4}^{(1,2,3)}\) are the WCs of the only three \(\phi ^4 D^4\) SMEFT dimension-eight operators in the basis of Ref. [87]:

$$\begin{aligned} {\mathcal {O}}_{\phi ^4 D^4}^{(1)}&= (D_\mu \phi ^\dagger D_\nu \phi ) (D^\nu \phi ^\dagger D^\mu \phi ),\nonumber \\ {\mathcal {O}}_{\phi ^4 D^4}^{(2)}&= (D_\mu \phi ^\dagger D_\nu \phi ) (D^\mu \phi ^\dagger D^\nu \phi )\,,\nonumber \\ {\mathcal {O}}_{\phi ^4 D^4}^{(3)}&= (D_\mu \phi ^\dagger D^\mu \phi ) (D^\nu \phi ^\dagger D_\nu \phi ). \end{aligned}$$
(3.24)

The benefit of testing conclusions 1–5 with explicit computations within concrete UV models is that it strengthens the confidence in results that might be hard to follow at a purely abstract level. In turn, a careful validation of these conclusions implies a thorough cross-check of the EFT tools (which entails performing highly non-trivial computations of matching and running up to dimension eight) against robust results supported by very fundamental physics principles.

In what follows, we describe the different ways in which we have tested conclusions 1–5 using matchmakereft [48], SuperTracer [117], and MatchingTools [112].

Tree-level matching Let us consider five different single-field extensions of the SM that induce \(\phi ^4 D^4\) operators at tree level:

$$\begin{aligned} {\mathcal {S}}&\sim (1,1)_0 \rightarrow c_{\phi ^4 D^4}^{(1,2,3)} \sim (0,0,1)\,, \nonumber \\ \Xi&\sim (1,3)_0 \rightarrow c_{\phi ^4 D^4}^{(1,2,3)}\sim (2,0,-1)\,,\nonumber \\ {\mathcal {B}}&\sim (1,1)_0 \rightarrow c_{\phi ^4 D^4}^{(1,2,3)} \sim (-1,1,0)\,, \nonumber \\ {\mathcal {B}}_1&\sim (1,1)_1 \rightarrow c_{\phi ^4 D^4}^{(1,2,3)} \sim (1,0,-1)\,,\nonumber \\ {\mathcal {W}}&\sim (1,3)_0 \rightarrow c_{\phi ^4 D^4}^{(1,2,3)}\sim (1,1,-2). \end{aligned}$$
(3.25)

This should be read as follows: \({\mathcal {S}}\) is a full singlet of \(SU(3)_c\times SU(2)_L\) with hypercharge \(Y=0\) (indicated in the subscript), which, when integrated out, produces the WCs (in arbitrary units) specified in the last parentheses; likewise for the scalar triplet \(\Xi \) and for the three vectors. In these and all cases hereafter, the omitted UV couplings appear squared, so they are always positive.

The WCs above satisfy the positivity relations in a non-trivial way. For example, in the \({\mathcal {W}}\) case, \(c_{\phi ^4 D^4}^{(3)}\) and \(c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\) are negative, but precisely the constrained combinations \(c_{\phi ^4 D^4}^{(2)}\), \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}\) and \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\) are non-negative, as can be verified with the short check below.
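This check is a matter of simple arithmetic; the standalone script below (independent of the matching codes) verifies the three positivity combinations for the five tree-level results of Eq. (3.25).

    # Tree-level values of (c1, c2, c3) = c_{phi^4 D^4}^{(1,2,3)} from Eq. (3.25),
    # in arbitrary (positive) units of the squared UV couplings.
    models = {
        'S':  (0, 0, 1),
        'Xi': (2, 0, -1),
        'B':  (-1, 1, 0),
        'B1': (1, 0, -1),
        'W':  (1, 1, -2),
    }

    for name, (c1, c2, c3) in models.items():
        bounds = (c2, c1 + c2, c1 + c2 + c3)      # the three positivity combinations
        assert all(b >= 0 for b in bounds), name  # all hold, as stated in the text
        print(f"{name:3s}: c2 = {bounds[0]:+d}, c1+c2 = {bounds[1]:+d}, c1+c2+c3 = {bounds[2]:+d}")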

We have obtained these results with matchmakereft. Despite being “simply” a tree-level computation, the task is not as easy as it might seem. Within matchmakereft, where the matching is performed by computing one-light-particle irreducible (1PI) Green’s functions off-shell in both the UV and in the IR, one needs to specify the full set of EFT operators independent up to field redefinitions, as well as their reduction to physical ones in on-shell observables. Fortunately, these results, for the bosonic sector of the dimension-eight SMEFT, can be found in Ref. [88]; see also Ref. [295]. But even implementing this into matchmakereft can be very cumbersome.

As an alternative cross-check, we have verified the values of the WCs by using MatchingTools, which performs the matching by solving the classical equations of motion. The advantage is that no EFT basis needs to be provided a priori, but the problem is that the final result involves operators related by all kinds of redundancies (field redefinitions, integration by parts, different names for the same indices, ...). As an example, integrating out \({\mathcal {W}}\) within MatchingTools gives (suppressing couplings) [88]:

$$\begin{aligned} {\mathcal {L}}_\text {EFT}&= (D_\mu \phi ^\dagger D_\nu \phi ) (D^\mu \phi ^\dagger D^\nu \phi ) \nonumber \\&\quad +\underbrace{\cdots }_{\text {17 terms}}-\,\frac{1}{4} (D_\nu D_\mu \phi ^\dagger \phi )(D^\mu D^\nu \phi ^\dagger \phi ). \end{aligned}$$
(3.26)

Our approach to reduce this Lagrangian consists of using dedicated routines to export the output of MatchingTools to FeynRules [155], from where it is in turn exported to FeynArts [190] and FormCalc [296], in which 1PI amplitudes are computed and matched onto the basis of Green’s functions of Ref. [88]. The result is finally reduced to a physical basis using the equations-of-motion relations obtained therein. It reads:

$$\begin{aligned} {\mathcal {L}}_\text {EFT} = 2{\mathcal {O}}_{\phi ^4 D^4}^{(1)} + 2{\mathcal {O}}_{\phi ^4 D^4}^{(2)} - 4{\mathcal {O}}_{\phi ^4 D^4}^{(3)} + \cdots \end{aligned}$$
(3.27)

in agreement with matchmakereft (the ellipses stand for higher-point interactions).

One-loop matching Let us now take the scalar singlet and triplet cases up to one loop, in the limit in which the only relevant couplings in the UV are the trilinear terms (which we set to unity). Working with matchmakereft, we get:

$$\begin{aligned} c_{\phi ^4 D^4}^{(1)} = c_{\phi ^4 D^4}^{(2)} = -\frac{13}{48\pi ^2}, \end{aligned}$$
(3.28)

for the scalar case, and

$$\begin{aligned} c_{\phi ^4 D^4}^{(2)} = -\frac{61}{144\pi ^2}, \end{aligned}$$
(3.29)

in the triplet case. Here, we ignore the WCs that arise already at tree level. In both models, at least the condition \(c_{\phi ^4 D^4}^{(2)}\ge 0\) is broken, as expected from conclusion 3 above.

We have cross-checked this result with the help of SuperTracer [117]. To this end, the output of SuperTracer is simplified within the code itself, and the final result is processed following the same strategy as with MatchingTools. This provides a very strong and robust test of the validity of both matchmakereft and SuperTracer.

Let us now take scalar quadruplet extensions of the SM, with \(Y=1/2\) and \(Y= 3/2\). These scalars couple linearly to three Higgs fields. The only operators that they induce at tree level are of the form \(\phi ^6 D^{2n}\), which do not renormalize \(\phi ^4 D^4\). Consequently, following conclusion 2 above, we expect the positivity bounds to hold. Indeed, from matchmakereft we obtain (we ignore couplings again):

$$\begin{aligned} c_{\phi ^4 D^4}^{(1)} = \frac{1}{9\pi ^2}, \,\, c_{\phi ^4 D^4}^{(2)} = \frac{1}{36\pi ^2}, \,\, c_{\phi ^4 D^4}^{(3)} = -\frac{1}{18\pi ^2}, \end{aligned}$$
(3.30)

for \(Y=1/2\) and

$$\begin{aligned} c_{\phi ^4 D^4}^{(1)} = c_{\phi ^4 D^4}^{(3)} = 0, \quad c_{\phi ^4 D^4}^{(2)} = \frac{1}{4\pi ^2}, \end{aligned}$$
(3.31)

for \(Y=3/2\).

One-loop running Despite the breaking of positivity in the one-loop matching, as highlighted for example in Eq. (3.29), the positivity quantity \({\mathcal {A}}^{\prime \prime }(s=0)\) for \(\varphi _i\varphi _j\rightarrow \varphi _i\varphi _j\) is non-negative in the deep IR, because there it is dominated by the running of \(c_{\phi ^4 D^4}^{(2)}\) induced by tree-level operators. It can indeed be checked that \(\beta _{\phi ^4 D^4}^{(2)}\) is always non-positive [104, 297].
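The interplay between a (possibly negative) one-loop matching contribution and the non-positive running can be illustrated schematically as follows; the magnitude of the beta function below is a pure placeholder, chosen only so that the sign flip towards the IR is visible, in the spirit of Eq. (3.23).

    import numpy as np

    # Schematic one-loop evolution c2(mu) = c2(Lambda) + beta2/(16 pi^2) * log(mu/Lambda),
    # cf. Eq. (3.23). The matching value is taken from Eq. (3.29) (scalar triplet,
    # trilinear coupling set to one); beta2 is a negative placeholder standing for
    # the running induced by the tree-level WCs, with its size chosen for illustration.
    c2_matching = -61.0 / (144.0 * np.pi**2)
    beta2 = -8.0                                   # placeholder; only beta2 <= 0 matters

    for ratio in (0.5, 0.1, 0.01):                 # mu / Lambda
        c2_mu = c2_matching + beta2 / (16 * np.pi**2) * np.log(ratio)
        print(f"mu/Lambda = {ratio:5.2f}  ->  c2(mu) = {c2_mu:+.4f}")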

On the other hand, since \(\beta _{\phi ^4 D^4}^{(2)}\) does not vanish here, \(\beta _{\phi ^4 D^4}^{(2)\prime }\) does not need to be negative. Computed again with matchmakereft as well as with FeynArts+FormCalc, we obtain for example:

$$\begin{aligned} \beta _{\phi ^4 D^4}^{(2)\prime }= \frac{1}{6} (28c_{\phi ^4 D^4}^{(1)}+ 43 c_{\phi ^4 D^4}^{(2)}+ 15 c_{\phi ^4 D^4}^{(3)}) g_2^2 +\cdots \nonumber \\ \end{aligned}$$
(3.32)

which is positive, for example, in the scalar singlet case (\(c_{\phi ^4 D^4}^{(1,2)}=0\), \(c_{\phi ^4 D^4}^{(3)}=1\)). In the equation above, \(g_2\) stands for the \(SU(2)_L\) gauge coupling and the ellipses represent terms proportional to other SM couplings.

In cases where \(\beta _2\) vanishes, we do expect \(\beta _2'\) to be non-positive; see conclusion 5 above. One such case is given by the renormalisation of \(W^2\phi ^2 D^2\) operators (where W is the \(SU(2)_L\) gauge boson) by \(\phi ^4 D^4\) operators. We know that \(\beta _2\) vanishes in this case because loops with two insertions of \(\phi ^4 D^{2n}\) operators must have at least four Higgses.

Among the \(W^2\phi ^2 D^2\) operators, there is one that is restricted by the positivity of the amplitude for \(W\phi \rightarrow W\phi \). The corresponding \(\beta _2\) function reads:

$$\begin{aligned} \beta _{W^2\phi ^2 D^2}^{(1)} = -\frac{g_2^2}{6} (2 c_{\phi ^4 D^4}^{(1)} + 3 c_{\phi ^4 D^4}^{(2)} + c_{\phi ^4 D^4}^{(3)})+\cdots . \nonumber \\ \end{aligned}$$
(3.33)

The ellipses encode contributions from non-\(\phi ^4 D^4\) operators. This quantity is necessarily non-positive, because the combination in parentheses is non-negative (at tree level). Indeed, we can recast it in the form

$$\begin{aligned} (c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}) + (c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}) + c_{\phi ^4 D^4}^{(2)}, \nonumber \\ \end{aligned}$$
(3.34)

which is non-negative because the three terms in the sum are non-negative, as we saw before.

For all these calculations, we have relied on matchmakereft, with a full cross-check using FeynArts+FormCalc.

4.6.2 Towards fully-automated one-loop matching

Even with the help of current EFT tools, the explicit computations described above can become extremely tedious. This is because the Lagrangian resulting from integrating out the heavy degrees of freedom is highly redundant, and its simplification is non-trivial. To the best of our knowledge, there is no generic and publicly available method to reduce the effective Lagrangian to a physical basis in an automated way. In this final section, we comment briefly on the approach we have adopted to address this problem, and on the progress we have made so far.

Our idea for automating the process of reducing a redundant Lagrangian to a physical basis of operators consists in requiring explicitly that both Lagrangians provide exactly the same S-matrix for all different processes that can be computed within the EFT (up to the corresponding order in the expansion in inverse powers of the cutoff).

In practice, this amounts to equating all the needed tree-level on-shell connected and amputated Feynman graphs. As an example, let us focus here on the SMEFT Higgs sector up to dimension eight. For the redundant Lagrangian, we consider the one comprising all Higgs operators in the Green’s basis of Ref. [298] (dimension six), together with those in Ref. [88] (dimension eight). For the physical one, we stick to the basis of Ref. [87]. The notation below follows the conventions of these references.

Upon equating the resulting calculations in both theories, one obtains a set of equations from which the WCs of the physical theory can be solved for in terms of the physical and redundant WCs of the redundant EFT.

The main complication of on-shell matching, besides the huge number of diagrams, is the presence of light propagators, which manifest themselves as non-local combinations of momenta in the S-matrix. As a consequence, solving the aforementioned system of equations analytically and in an automated way becomes very complicated. It can instead be solved numerically, upon assigning concrete values to the different momenta involved in the process. In order to do so without conflicting with (i) momentum conservation, (ii) the on-shellness of the external legs and (iii) the fact that, as soon as the number of momenta in the amplitude is larger than four, not all of them can be linearly independent (in other words, to ensure that we are in the physical region), we use Monte Carlo algorithms (currently RAMBO [299]) to sample the phase space.
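As an illustration of the phase-space sampling step, a compact implementation of the RAMBO algorithm for massless final-state momenta is sketched below; this is a generic rendering of the published algorithm [299] and not the actual code used in our matching setup.

    import numpy as np

    def rambo_massless(n, sqrt_s, rng=np.random.default_rng()):
        """Generate n massless four-momenta (E, px, py, pz) uniformly in phase
        space, with total momentum (sqrt_s, 0, 0, 0), following RAMBO."""
        r = rng.random((n, 4))
        cos_t = 2.0 * r[:, 0] - 1.0
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * r[:, 1]
        energy = -np.log(r[:, 2] * r[:, 3])
        q = np.column_stack([energy,
                             energy * sin_t * np.cos(phi),
                             energy * sin_t * np.sin(phi),
                             energy * cos_t])
        # Boost and rescale so that the momenta sum to (sqrt_s, 0, 0, 0)
        Q = q.sum(axis=0)
        M = np.sqrt(Q[0]**2 - np.dot(Q[1:], Q[1:]))
        b = -Q[1:] / M
        gamma = Q[0] / M
        a = 1.0 / (1.0 + gamma)
        x = sqrt_s / M
        bq = q[:, 1:] @ b
        p0 = x * (gamma * q[:, 0] + bq)
        pvec = x * (q[:, 1:] + np.outer(q[:, 0], b) + np.outer(a * bq, b))
        return np.column_stack([p0, pvec])

    # Example: a physical four-particle final state at 1 TeV centre-of-mass energy
    p = rambo_massless(4, 1000.0)
    print(p.sum(axis=0))                              # ~ (1000, 0, 0, 0): momentum conservation
    print(p[:, 0]**2 - (p[:, 1:]**2).sum(axis=1))     # ~ 0: on-shell massless momenta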

Our current results for the Higgs sector of the SMEFT (up to the operator \(\phi ^8\)), expressed as shifts on physical WCs (\(c_i\)) induced by the presence of redundant ones (\(r_j\)), read:

$$\begin{aligned} c_{\phi \Box }&\rightarrow c_{\phi \Box }+\frac{1}{2}r_{\phi D}'\,, \end{aligned}$$
(3.35)
$$\begin{aligned} c_{\phi ^6}&\rightarrow c_{\phi ^6} + 2\lambda r_{\phi D}', \end{aligned}$$
(3.36)
$$\begin{aligned} c_{\phi ^6 D^2}^{(1)}&\rightarrow c_{\phi ^6 D^2}^{(1)}+2\lambda (2 r_{\phi ^4 D^4}^{(12)}-2 r_{\phi ^4 D^4}^{(4)}-r_{\phi ^4 D^4}^{(6)})\nonumber \\&\quad -4 c_{\phi \Box } r_{\phi D}'-\frac{1}{2} c_{\phi D}r_{\phi D}'{-\frac{7}{4} r_{\phi D}'^2 + r_{\phi D}''^2}, \end{aligned}$$
(3.37)
$$\begin{aligned} c_{\phi ^6}^{(2)}&\rightarrow c_{\phi ^6}^{(2)} + 2\lambda (r_{\phi ^4 D^4}^{(12)} -r_{\phi ^4 D^4}^{(6)})-c_{\phi D} r_{\phi D}', \end{aligned}$$
(3.38)

where we omit those that remain invariant. These shifts coincide fully with previous results derived using equations of motion [88, 297, 298], with the exception of the last two terms in Eq. (3.37), \(-\frac{7}{4} r_{\phi D}'^2 + r_{\phi D}''^2\). These terms are of higher order in the power counting used in the aforementioned references, but in other scenarios they must be considered, and they can only be captured by field redefinitions [300] or via our explicit matching of scattering amplitudes. Both of these approaches guarantee the exact invariance of the S-matrix, while equations of motion do not.

As of now, we have successfully applied this method to the SMEFT and other EFTs up to dimension eight including (massless or massive) scalars and gauge bosons, and we are working on extending it to fermions as well [301].

5 Summary and outlook

Effective field theories are basic tools in our description of nature. On the one hand, they let us calculate physical quantities without knowing the underlying theory; e.g., the SMEFT provides an efficient way to characterize new physics at the EW scale in terms of the coefficients of higher-dimension operators, without knowing the underlying UV completion. On the other hand, even if we want to consider specific NP models, EFTs offer a comprehensive approach to compare with data while, at the same time, providing a way to resum large logarithms via RG-improved perturbation theory.

However, implementing the EFT approach in an automated way suited for systematic phenomenological analyses is a formidable task. This includes, for instance, the identification of the higher-dimensional operators appearing in an EFT, the extraction of Feynman rules, the calculation of the matching coefficients between EFTs valid at different energy scales, and the calculation of the anomalous dimensions needed to evolve the WCs between different energy scales. Many of these tasks are not feasible in a reasonable amount of time without computer codes, as they can involve hundreds or even thousands of WCs, as in the SMEFT and the LEFT.

The interpretation of experimental data in terms of constraints on effective couplings also requires automation, in all but a few restricted cases. Extensions of the SM are typically parameterized in terms of the SMEFT, and each WC can appear in many observables. This is also a challenging task, as it involves computing predictions for a large number of observables and performing global fits with hundreds of experimental constraints.

With this report, we have documented the large effort of the theory community in automating calculations within the EFT framework, as presented at the SMEFT-Tools 2022 workshop held at the University of Zurich from 14 to 16 September 2022. The milestones reached so far can be summarized as follows:

  • For the two major EFT extensions of the SM, the SMEFT and, below the electroweak scale, the LEFT, the anomalous dimensions of the dimension-six operators have been calculated at leading order and implemented in user-friendly programs such as DsixTools and Wilson. These also provide the complete matching between the SMEFT and the LEFT at tree level and at one-loop order. Higher-dimensional operators (dimension seven or higher) can be studied with Sym2Int, which automatically builds explicit bases of operators for EFTs, given their fields and symmetries. The derivation of Feynman rules is enabled by SmeftFR, which also generates UFO model files suitable for further symbolic or numerical calculation of matrix elements and cross-sections.

  • The study of the viability of specific BSM scenarios can be largely simplified by first matching the NP models to the SMEFT and then determining the experimental constraints on the SMEFT WCs. Such matching can be performed for any realization of NP in an automated way at tree level with tools like Matchmakereft, Matchete and CoDEx. These programs allow one to obtain all effective operators at the EW scale. Matching at one loop is possible in many cases, and a completely generic implementation at the one-loop level is currently under development.

  • Several programs (e.g., smelli, HighPT, HEPfit, EOS) have been developed for phenomenological studies, implementing predictions in the presence of EFT WCs, which are then used to constrain NP effects in global fits. They include observables from a wide range of high-energy physics, such as flavor physics, physics at hadron colliders, electroweak precision tests, Higgs physics, and other precision tests of the SM.

Many issues and development directions were also addressed during the workshop. Most probably, the next-to-leading logarithms in the SMEFT and the LEFT will have to be tackled. Given the large number of operators, it is inconvenient to consider the two-loop running of the various EFTs separately. In fact, it is more advantageous to calculate the anomalous dimensions for a generic EFT with an arbitrary number of real scalars and left-handed fermions, invariant under a generic gauge structure. Results for the NLO running of the SMEFT or LEFT WCs can then be derived in a second step by specifying the field content and the gauge group. An ongoing project aims at evaluating the one-loop RG equations for all the dimension-six operators in such a generic EFT.

Another relevant development is the complete classification of the contributions to dimension-six operators in the SMEFT which arise from BSM models only at the one-loop level. Such a one-loop matching program uses on-shell matching to avoid the use of a (redundant) Green’s basis.

A consistent treatment of \(\gamma _5\) in the running of four-fermion operators is also necessary, since it gives rise to evanescent structures in dimensional regularization. The renormalization of a chiral Abelian theory up to two loops has recently been worked out in the so-called BMHV scheme for \(\gamma _5\), and it will be extended to the non-Abelian case in preparation for the application to the SM.