Abstract
In recent years, theoretical and phenomenological studies with effective field theories have become a trending and prolific line of research in the field of high-energy physics. In order to discuss present and future prospects concerning automated tools in this field, the SMEFT-Tools 2022 workshop was held at the University of Zurich on 14–16 September 2022. The current document collects and summarizes the content of this workshop.
1 Preface by the Editors
The current developments in beyond-the-Standard-Model (BSM) phenomenology point to an ever greater use of Effective Field Theories (EFTs). With no concrete hints of a forthcoming discovery of on-shell new physics (NP), we see no reason that this trend should change any time soon. In fact, even a discovery of new high-energy resonances would call for the use of EFTs to study many of their observable effects. The role of computer tools is central to the successful use of EFTs in BSM physics: simply put, the amount of repetitive computation is all but impossible to perform on a case-by-case basis without them. For these reasons, we gathered creators and developers of EFT tools for another workshop three years after the first SMEFT-Tools workshop [1].
The SMEFT-Tools 2022 workshop received contributions from a large part of the EFT theory community, especially as it pertains to the computer tools of the field. As a result, this review is a comprehensive, if not quite complete, report on the current status of the tools available in the field. Additionally, several speakers at the workshop presented new results dealing with more formal theory aspects. These results frame the current and future developments of EFT tools and are crucial to their ever-growing capabilities. However, the theory developments included in this report are merely a sample of what is being undertaken in the field as a whole; the field is simply too active to include all, or even most, of the developments here.
The introductory section provides some context and motivation for the use of EFTs and discusses some of the trends in the use of EFTs for BSM physics. We have grouped the other contributions into two main sections: on the one hand, Sect. 3 details computer tools for the study of ultraviolet (UV) models using EFTs. This section describes tools for matching UV models to EFTs, automating the renormalization-group (RG) evolution of EFT coefficients, generating EFT operator bases, and a proposal for a unified format for the storage of EFT matching results. On the other hand, Sect. 4 describes computer tools necessary for the phenomenological study of EFTs. These tools are equally invaluable for bottom-up and top-down analyses. This section contains four different tools for the key task of performing global fits to experimental data, along with a code for automatically deriving the Feynman rules of the Standard Model effective field theory (SMEFT). In both sections, contributions covering recent theory developments relevant for future implementations or practical applications of EFT tools are also included.
2 Introduction and motivation
José Santiago and Peter Stoffer
EFTs have been a basic tool in particle physics for many years. In most cases, EFTs were used in the context of well-defined, usually renormalizable, models, either because they were the only way to compute certain observables (for example, due to the strong coupling of QCD at low energies) or because their use greatly simplified the calculation of interest (gluon-fusion Higgs production at a high perturbative order in the infinite-mass limit is a clear example). Their application to the study of physics beyond the Standard Model (SM), while already present in the past, has experienced an exponential increase in the last decade. The reasons for this are twofold: first, the LHC and other experiments are producing increasingly stringent limits on the mass of new particles, searching in a multitude of different channels, which seems to clearly indicate the presence of a mass gap between the scale of new physics and the energies at which most experimental observables are measured; second, and this is especially relevant for this workshop, the last few years have seen the appearance of a plethora of new computer tools that simplify, and in many cases fully automate, the tedious calculations needed to apply EFTs to new-physics searches.
2.1 Connecting theory and experiment via EFTs
The problem of obtaining the implications of experimental data for models of new physics is highly non-trivial. The vast number of observables measured experimentally has to be computed, via complicated, sometimes multi-loop, calculations for each particular model of new physics. These difficult calculations have to be repeated for every experimental observable and every model, with the added complication that, despite the very large number of new-physics models developed by theorists, we are not guaranteed that the true description of Nature falls into one of them.
EFTs simplify the problem of obtaining the phenomenological implications of experimental data for new models by splitting the calculation into two (mostly independent) steps. In the first one, the bottom-up approach, the experimental observables are computed, to the required order in perturbation theory, in terms of the Wilson coefficients (WCs) of the corresponding effective Lagrangian. This process can be performed without reference to any new-physics model and therefore represents a mostly model-independent parametrization of experimental data in the form of global fits (or rather a global likelihood), see Sect. 4. In the second step, the top-down approach, the WCs of the effective Lagrangian are computed in terms of the couplings and masses of specific UV models that complete the EFT at high energies. This calculation, called matching, has to be done for every model of new physics (but no longer for every observable); thanks to recently developed tools, it can be fully automated (see Sect. 3). When the bottom-up and top-down approaches are combined, one can obtain the phenomenological implications of any experimental observable in any UV model and, thanks to existing computer tools, in a mostly automated way.
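The two-step logic described here can be summarized schematically; the notation below is illustrative, with \(F_i\) denoting the matching conditions and \(U_{ij}\) the RG evolution operator:

```latex
\begin{align}
  C_i(\Lambda_{\rm UV}) &= F_i\!\left(g_{\rm UV}, M_{\rm UV}\right)
    && \text{top-down: matching at the NP scale,}\\
  C_i(\mu) &= U_{ij}(\mu, \Lambda_{\rm UV})\, C_j(\Lambda_{\rm UV})
    && \text{RG evolution down to the observable scale,}\\
  \mathcal{O}_{\rm exp} &= \mathcal{O}\bigl(C_i(\mu)\bigr)
    && \text{bottom-up: observables in terms of WCs.}
\end{align}
```

The first line must be recomputed for every UV model, while the second and third lines are model-independent and can be tabulated once and for all.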
2.2 The SMEFT and the LEFT
The absence of evidence for physics beyond the SM in direct LHC searches suggests that new particles are either very weakly coupled [2] or much heavier than the electroweak scale. In the latter scenario, their effects at energies below the scale of new physics can be described by an EFT. Depending on the assumption about the nature of the Higgs particle, this is either the Standard Model effective field theory (SMEFT) [3, 4] or Higgs effective field theory (HEFT) [5, 6]. In particular, the SMEFT is the most general EFT invariant under the SM gauge symmetry, \(SU(3)_c\times SU(2)_L\times U(1)_Y\), involving only SM particles with the Higgs field taken as an \(SU(2)_L\) doublet.
The SMEFT Lagrangian up to dimension-six operators is given by
$$ {\mathcal {L}}_{\textrm{SMEFT}} = {\mathcal {L}}_{\textrm{SM}} + \sum _i C^{(5)}_i Q^{(5)}_i + \sum _i C^{(6)}_i Q^{(6)}_i \,, $$
with \({{\mathcal {L}}}_{\textrm{SM}}\) being the SM Lagrangian. There is only one term at dimension five, corresponding to the Weinberg operator [7]. This operator violates lepton number by two units and yields Majorana masses for the neutrinos after electroweak symmetry breaking. At dimension six, there are 59 terms that preserve baryon number and another 5 that violate baryon and lepton numbers by one unit. These are commonly presented in the so-called Warsaw basis [4]. The complete set of RG equations for the dimension-six SMEFT in the Warsaw basis has been calculated in [8,9,10,11]. As we describe in the sections below, these advances, together with simultaneous theoretical and computational developments towards the automation of one-loop matching calculations, pave the way to the systematic use of EFT methods in the analysis of NP models.
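Schematically, these RG equations take the form (conventions for the loop factor vary between references):

```latex
\mu \frac{\mathrm{d} C_i}{\mathrm{d}\mu} = \frac{1}{16\pi^2}\, \gamma_{ij}\, C_j \,,
```

where \(\gamma_{ij}\) is the anomalous-dimension matrix and a sum over \(j\) is understood.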
For processes below the electroweak scale, another EFT should be used, wherein the heavy SM particles, i.e., the top quark, the Higgs scalar, and the heavy gauge bosons, are integrated out. This low-energy effective field theory (LEFT) is a gauge theory invariant only under the unbroken SM groups \(SU(3)_c \times U(1)_{\textrm{em}}\), i.e., QCD and QED augmented by a complete set of effective operators. If matched to the SM at the electroweak scale, it corresponds to the Fermi theory of weak interactions [12], but when all operators invariant under the unbroken gauge groups are included, it also describes the low-energy effects of arbitrary heavy physics beyond the SM.
The LEFT is defined by the Lagrangian
$$ {\mathcal {L}}_{\textrm{LEFT}} = {\mathcal {L}}_{\textrm{QCD+QED}} + \sum _{d \ge 3} \sum _i L_i^{(d)} O_i^{(d)} \,, $$
where the QCD and QED Lagrangian is given by
$$ {\mathcal {L}}_{\textrm{QCD+QED}} = -\frac{1}{4} G^A_{\mu \nu } G^{A\,\mu \nu } - \frac{1}{4} F_{\mu \nu } F^{\mu \nu } + \theta _{\textrm{QCD}} \frac{g^2}{32\pi ^2} G^A_{\mu \nu } {\widetilde{G}}^{A\,\mu \nu } + \theta _{\textrm{QED}} \frac{e^2}{32\pi ^2} F_{\mu \nu } {\widetilde{F}}^{\mu \nu } + \sum _{\psi = u,d,e,\nu _L} {\overline{\psi }}\, i \slashed{D}\, \psi - \left[ \sum _{\psi = u,d,e} {\overline{\psi }}_R\, M_\psi \, \psi _L + \text {h.c.} \right] . $$
The additional operators are the Majorana-neutrino mass terms at dimension three, as well as operators at dimension five and above. At dimension five, there are photonic dipole operators for all the fermions (including a lepton-number-violating neutrino dipole operator) as well as gluonic dipole operators for the up- and down-type quarks. At dimension six, there are the CP-even and CP-odd three-gluon operators and a large number of four-fermion operators. The entire list of operators up to dimension six can be found in [13], including operators that violate baryon and lepton number.
This theory has been extensively studied in the context of B physics. The operator basis relevant for B-meson decay and mixing has been constructed in [14]. The complete LEFT operator basis up to dimension six in the power counting has been derived in [13], where the tree-level matching to the dimension-six SMEFT above the weak scale was also provided. By now, the LEFT operator basis is known up to dimension nine [15,16,17]. Recently, the tree-level matching to the SMEFT has been extended to dimension eight in the SMEFT power counting [18]. Partial results for lepton-flavor-violating operators were given already in [19].
The complete one-loop LEFT RG equations were derived in [20]. Partial results for the RG equations were known before and have been studied to higher loop orders [14, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. Within the SMEFT/LEFT framework, the one-loop RG equations at the high scale [8,9,10], the tree-level matching [13], and the RG equations below the weak scale [20] allow one to resum the leading logarithms and to describe the indirect low-energy effects of heavy physics beyond the SM within one unified framework. The RG and matching equations have been implemented in several software tools, many of which were presented at the SMEFT-Tools workshops. Consistent EFT analyses at leading-log accuracy that combine constraints from experiments at very different energy scales are becoming standard.
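At leading-log accuracy, the solution of the RG equations between two scales reduces to the familiar one-step expression (schematic; \(\gamma_{ij}\) denotes the one-loop anomalous-dimension matrix):

```latex
C_i(\mu) \simeq C_i(\Lambda) - \frac{1}{16\pi^2}\, \gamma_{ij}\, C_j(\Lambda)\, \ln\frac{\Lambda}{\mu} \,,
```

which is the approximation that the resummation implemented in the tools systematically improves upon.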
For certain high-precision observables at low energies it is desirable to extend the analysis beyond leading logarithms. Steps in this direction have been taken, e.g., in [33, 38, 45]. Partial results for the matching at the weak scale at one loop were derived in the context of B physics in [35, 46]. The complete one-loop matching between the SMEFT and the LEFT at dimension six was calculated in [47]. It can be used for fixed-order calculations at one-loop accuracy in cases where the logs are not large, and it is an ingredient in next-to-leading-log analyses within a resummed framework. Several tools are being developed that automate the one-loop matching between the EFT framework and UV models for new physics [48, 49].
At energies as low as the hadronic scale, additional complications appear due to the non-perturbative nature of QCD. In these low-energy processes, one should not work with perturbative quark and gluon degrees of freedom but rather perform either direct non-perturbative calculations of hadronic matrix elements of effective operators or switch to another effective theory in terms of hadronic degrees of freedom, i.e., chiral perturbation theory (\(\chi \)PT) [50,51,52]. In [53], the matching of semileptonic LEFT operators to \(\chi \)PT has been discussed, which can be obtained within standard \(\chi \)PT augmented by tensor sources [54]. The chiral realization of four-quark operators was studied in [55], while [56] analyzed C- and CP-odd LEFT operators up to dimension eight. If lattice QCD is employed to deal with the non-perturbative effects at low energies, one faces the problem that the EFT framework requires matrix elements of dimensionally renormalized operators. This necessitates another matching calculation to a scheme amenable to lattice computations. This matching has to be performed at a scale of a few GeV, which is already accessible to lattice computations but at the same time sufficiently high that perturbation theory can be assumed to work reasonably well. Traditionally, these matching calculations are based on regularization-independent momentum-subtraction (RI-MOM) schemes [34, 57,58,59,60], whereas in recent years the gradient flow [61, 62] has received attention [63,64,65,66,67].
2.3 Going beyond
The great sensitivity achieved in searches for CP violation, rare meson decays, magnetic and electric dipole moments, and lepton-flavor-violating processes requires improvements in the theoretical precision of EFT calculations. The need to include higher-order corrections is twofold. On the one hand, their inclusion, especially in QCD, allows one to better assess the uncertainties of the theoretical calculation. On the other hand, some new-physics effects are only generated once higher-loop effects have been accounted for in certain UV completions; including them thus naturally yields better constraints on the underlying theory.
In fact, it is often the case that the leading effects of new physics are due to loop-level processes. The last decade has seen results for the one-loop running in the SMEFT [8,9,10,11, 68] and the LEFT [20], and the one-loop SMEFT-to-LEFT matching [47]. Likewise, as we highlight in this manuscript, there has been substantial recent progress in connecting NP models to their EFTs. Going beyond leading-logarithm effects requires a systematic treatment of RG effects in the EFTs because of the scheme dependence of the anomalous-dimension matrix and of the matching coefficients, appearing, for instance, through the chosen prescription for \(\gamma _5\) in d dimensions [69,70,71,72,73,74,75,76,77,78] and the definition of evanescent operators [22, 47, 79,80,81,82]. To this end, consistent calculations across different EFTs and bases have prompted a new look at the role of evanescent contributions [49, 81]. With a view to systematic multi-loop computations within EFTs, there has been recent interest in the proper treatment of \( \gamma _5 \) [78, 83, 84], a notorious stumbling block in dimensional regularization.
Higher-order anomalous dimensions have been calculated for subsets of dimension-six operators [14, 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45], but due to the technically demanding nature of the calculations the complete matrix is not known. Given the large number of operators in the SMEFT and LEFT, it may be more convenient to first consider a generic EFT with an arbitrary number of real scalars and left-handed fermions, invariant under a generic gauge group, and compute the anomalous dimensions and the RG evolution in such a theory. Results for the NLO running of the SMEFT or LEFT WCs can then be extracted in a second step by specifying the field content and the gauge group.
There has also been recent progress in extending the EFT formulation beyond dimension-six operators, sparking the formulation of geometric EFTs [85, 86] and the determination of higher-dimensional bases [15, 87, 88], as well as the counting of EFT operators through the use of Hilbert series [89]. Recent work has also started constraining the effects of higher-dimensional operators through the use of unitarity bounds, e.g., [90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106].
3 Effective field theory matching and running
Despite the usefulness of the EFT approach, the interpretation of data in terms of NP models requires a direct connection between those models and their EFT description. This typically involves the calculation of sequential matching steps at the relevant mass thresholds, and of the RG evolution between these thresholds and the scale of the observables. In recent years, many tools that (at least partially) automate these calculations have been developed.
In the absence of light particles beyond those in the SM, the necessary calculations for RG running and matching below the NP mass threshold are known up to dimension-six operators [8,9,10,11, 13, 20, 35, 47]. These results have been implemented into several computer tools including DsixTools [42, 107] (which we describe in Sect. 3.1), wilson [108], and RGESolver [109]. The SMEFT RG evolution has also been incorporated [110] into the MadGraph Monte Carlo generator [111]. As far as tree-level matching is concerned, the Python package MatchingTools [112] allows one to perform a fully automated matching computation for arbitrary heavy particles and gauge groups. Furthermore, the matching code CoDEx (see Sect. 3.2) implements formulae based on path-integral methods [113,114,115,116] to automate the matching of some NP models onto the dimension-six SMEFT. Further matching tools that rely on functional methods are SuperTracer [117] and STrEAM [118].
Although it might be tempting to think of the target EFT as the SMEFT, many realistic BSM constructions contain several energy scales, calling for intermediate EFTs, or feature additional light states, such as axion-like or dark-matter particles, thus demanding extensions of the SMEFT (see, e.g., [2, 119,120,121,122]). Furthermore, some phenomenological studies require extending EFT calculations beyond dimension-six operators (see, e.g., [104, 123,124,125,126] for recent literature examples). Reflecting this, a new generation of tools is now aiming at solving the more general problem of completely automating the one-loop matching and RG evolution of arbitrary weakly-coupled models. The most notable examples in this direction are matchmakereft and matchete, described in Sects. 3.4 and 3.3, respectively.
Additional developments to assist matching calculations are also described in this section: in particular, the computer tool Sym2Int (see Sect. 3.5),^{Footnote 1} which automates the construction of EFT operator bases, and the MatchingDB format (see Sect. 3.7), which aims at standardizing the storage of matching results.
3.1 DsixTools: the effective field theory toolkit
DsixTools [42, 107] is a Mathematica package for the matching and renormalization-group evolution from the NP scale to the scale of low-energy observables. The current version of DsixTools fully integrates the SMEFT and the LEFT, treating both theories on an equal footing. It allows the user to perform the full one-loop renormalization-group evolution of the WCs in the SMEFT and in the LEFT (with SM \(\beta \) functions up to five-loop order in QCD), as well as the full one-loop SMEFT-LEFT matching at the electroweak scale. Therefore, the user can start with some numerical values for the SMEFT WCs at the high-energy scale \(\Lambda _{\textrm{UV}}\), in principle obtained after matching to a specific NP model, and translate them into numerical values for the LEFT WCs at the low-energy scale \(\Lambda _{\textrm{IR}}\), where some observables of interest can be computed. This is achieved by adopting the following conventions and implementing the corresponding results from the recent literature:

- The Warsaw basis [4] for the SMEFT, for which the full one-loop RG equations [8,9,10,11, 130] are known.
- The San Diego basis [13] for the LEFT, for which the full one-loop RG equations [20] are known.
All these results can be used in a visually accessible and operationally convenient way thanks to DsixTools. In addition to numerical running and matching routines, it also includes several functions for analytical applications, as well as user-friendly SMEFT/LEFT dictionary tools. Since version 2.1, DsixTools also admits input obtained with matchmakereft, thus extending its capabilities.
The simplest way to download and install DsixTools is to run the following command in a Mathematica session^{Footnote 2}
This command downloads and installs DsixTools, activates the documentation, and loads the package. Alternatively, DsixTools can also be installed manually. Finally, once installed, DsixTools can be loaded by running the usual package-loading command.
3.1.1 What DsixTools can do for you
For a full and updated list of all the tools provided by DsixTools, we refer to the manual on the package website [131]. Here we concentrate on some useful features that illustrate what DsixTools can do for you in practice. A demo notebook with these and other examples of use is also provided at [132].
User-friendly SMEFT & LEFT information. DsixTools contains several routines and functions that allow one to use the tool as a SMEFT/LEFT dictionary. For instance, one can load DsixTools and execute the command
to print many details about the \(C_{\varphi \ell }^{(1)}\) SMEFT WC. One may learn the definition of the associated \(Q_{\varphi \ell }^{(1)}\) operator, its dimensionality and its type (2-fermion in this case). For WCs carrying flavor indices, such as this one, this command also prints information about the possible symmetries under exchange of indices, the number of independent coefficients, and the relations (due to Hermiticity, for example) among them. This information is displayed in a user-friendly way. Similarly, with
one would get the same information about the LEFT WC \(L_G\). Finally, with the functions
and the analogous ones in the LEFT, DsixTools shows a visual grid or a drop-down menu with all the WCs of the theory. The user can then click on any of them to run the ObjectInfo function on the selected WC and obtain all its properties.
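As an illustration, a dictionary query session might look as follows; the symbol names for the WCs used here (CPhil1 and LG) are assumptions on our part, and the actual notation is documented in the manual [131]:

```mathematica
<< DsixTools`   (* load the package after installation *)

(* Print operator definition, type, flavor symmetries, etc. *)
ObjectInfo[CPhil1]   (* hypothetical symbol for the SMEFT WC C_{\[CurlyPhi]l}^(1) *)
ObjectInfo[LG]       (* hypothetical symbol for the LEFT WC L_G *)
```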
Introducing and changing input values. There are two methods to introduce input values in DsixTools: with the NewInput routine or from a file. In the latter case, one can choose between a native DsixTools format and the WCxf format [133]. Let us focus on the former case. With the NewInput routine, the user loads the input values directly in the Mathematica notebook. Only the non-zero WCs must be given; the rest are assumed to vanish. For instance, the command
sets \([C_{\ell q}^{(1)}]_{1112} = [C_{\ell q}^{(1)}]_{1121} = 1\) GeV\(^{-2}\) and \(C_{\varphi {{\widetilde{B}}}} = -0.5\) GeV\(^{-2}\). We note that dimensionful quantities in DsixTools are always given in GeV to the proper power. In DsixTools, the input values for the parameters of the effective theory at work (SMEFT or LEFT) are stored as replacement rules in a dispatch variable called InputValues. Then, after defining an input, the user can easily read it as
Once the input values have been set, the user can change them individually at any moment in the notebook. This is done with the ChangeInput routine. For example, the line
changes the value of \(C_{\varphi {{\widetilde{B}}}}\) to 0.6 GeV\(^{-2}\). Finally, DsixTools produces a warning message when the WCs provided by the user lead to an invalid set of input values. There are two possible reasons for this:

1. Non-Hermiticity errors: some WCs are related due to the Hermiticity of the Lagrangian. For instance, \([C_{\ell q}^{(1)}]_{1112} = [C_{\ell q}^{(1)}]^*_{1121}\) must necessarily hold.
2. Antisymmetry errors: some LEFT WCs are antisymmetric under the exchange of two flavor indices. For instance, \([L_{\nu \gamma }]_{11} = 0\) must necessarily hold.
When the user's input is not consistent with these restrictions, a warning is issued and DsixTools corrects the input by replacing it with a new one that ensures the complete consistency of the Lagrangian. The list of invalid input values can be seen by clicking on the button Input errors. We note, however, that in some cases other WCs, related to the invalid ones by the two restrictions given above, may be modified too.
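A minimal input-handling session along the lines described above might read as follows; the argument syntax and WC symbol names are illustrative assumptions (see the manual [131] for the exact conventions):

```mathematica
(* Set two non-zero SMEFT WCs; all others are assumed to vanish.
   Dimensionful quantities are given in powers of GeV. *)
NewInput[{CLQ1[1, 1, 1, 2] -> 1, CPhiBTilde -> -0.5}]  (* hypothetical names *)

(* Later, change a single WC without redefining the rest *)
ChangeInput[CPhiBTilde -> 0.6]
```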
A simple DsixTools program. Let us illustrate how easily one can use DsixTools with a simple but complete program, given by the following three lines after opening Mathematica and loading DsixTools:
Here we consider an example SMEFT input with \([C_{\ell q}^{(1)}]_{2233}= 1/\Lambda _{\textrm{UV}}^2\), given at \(\Lambda _{\textrm{UV}} = 10\) TeV. The rest of the SMEFT WCs are assumed to vanish at \(\Lambda _{\textrm{UV}}\). Notice that input for the energy scales must be given too; however, \(\Lambda _{\textrm{EW}}\) and \(\Lambda _{\textrm{IR}}\) are set to \(m_W\) and 5 GeV by default, so only \(\Lambda _{\textrm{UV}}\) must be provided. In the first line of this program, the NewInput routine is used to introduce the SMEFT WCs as well as the NP energy scale \(\Lambda _{\textrm{UV}}\). In the second line we make use of RunDsixTools, one of the most important routines in DsixTools: it runs the SMEFT RG equations, matches the resulting SMEFT Lagrangian at the electroweak scale onto the LEFT one, and runs down to \(\Lambda _{\textrm{IR}}\) with the LEFT RG equations. The results of this process can be obtained by means of the D6run function, which returns interpolating functions that can be evaluated at any value of the energy scale \(\mu \). For instance, in this program we choose to print \([C_{\ell q}^{(1)}]_{2233}\) at the electroweak scale. Last but not least, we emphasize that DsixTools provides more than numerical routines: all the analytical information in the code can be printed and used in Mathematica sessions. There are plenty of examples of this. For instance, with the command
the user can display the analytical expression for the LEFT WC \([L_{eu}^{V,LL}]_{2211}\) after matching at one loop to the SMEFT WCs. Similarly, the SMEFT and LEFT \(\beta \) functions can be readily accessed as
We refer to the demo notebook [132] for examples of use of other DsixTools routines and functions.
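In spirit, the three-line program described above amounts to the following; the precise argument syntax of the routines is an assumption for illustration, and the manual [131] gives the exact form:

```mathematica
(* 1. SMEFT input at LambdaUV = 10 TeV (only non-zero WCs are listed) *)
NewInput[{CLQ1[2, 2, 3, 3] -> 1/(10000.)^2, LambdaUV -> 10000.}]  (* hypothetical syntax *)

(* 2. SMEFT running, SMEFT-LEFT matching at the EW scale, LEFT running to LambdaIR *)
RunDsixTools

(* 3. Evaluate the interpolated WC at the electroweak scale *)
D6run[CLQ1[2, 2, 3, 3]][LambdaEW]
```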
Using matchmakereft results. The first step in the study of specific NP models with the tools described here is the matching of the model onto an EFT. If the NP degrees of freedom lie at high energies, this EFT is generally the SMEFT. Even though this theory is very well known nowadays, the calculation might be hard, especially if done at one loop. Since version 2.1, DsixTools admits input obtained with matchmakereft [48], a fully automated Python code to compute the tree-level and one-loop matching of arbitrary models onto arbitrary EFTs. Its use is very simple. Let us illustrate it with an example NP model that extends the SM field inventory with a right-handed neutrino \(N \sim ({\textbf{1}},{\textbf{1}})_0\) and a scalar leptoquark \(S \sim ({\textbf{3}},{\textbf{2}})_{\frac{1}{6}}\), where we denote their representations under \((\mathrm SU(3)_c, SU(2)_L)_{\mathrm{U(1)_Y}}\). The NP Lagrangian contains the pieces
where \(\alpha = 1,2,3\) is a flavor index. This model can be easily implemented and matched onto the SMEFT at the one-loop level with matchmakereft. The results are saved in a text file called MatchingResult.dat, which can be loaded into DsixTools with the command
With this line, the user not only loads the analytical information in MatchingResult.dat, but also assigns numerical values to the NP parameters. After executing this command, the user can inspect the resulting input values for the SMEFT WCs at \(\Lambda _{\textrm{UV}} = 10\) TeV. Their analytical expressions in terms of the parameters of the UV model can also be printed thanks to the dispatch MatchAnalyticalUV. For instance, the analytical expression and numerical value of \(C_\varphi \) can be printed with
With the DsixTools SMEFT input fully generated, one can now proceed to use the RunDsixTools routine. Thanks to this novel functionality, the user can easily combine DsixTools and matchmakereft to study NP models using the full power of EFTs.
3.1.2 Summary
Some of the most common tasks in the SMEFT and in the LEFT require the handling of a large number of WCs and/or the solution of a huge set of coupled RG equations. These tasks can be automated with the help of DsixTools, a Mathematica package designed to provide a simple and user-friendly experience. DsixTools contains many routines and functions to deal with the SMEFT and the LEFT, both at the algebraic and at the numerical level. Some examples of use that illustrate the capabilities of DsixTools have been given here. We refer to the manual on the package website [131], as well as to the comprehensive reference and documentation environment provided with DsixTools, for further information on the tool.
3.2 CoDEx: matching BSMs to SMEFT
CoDEx [134] is a Mathematica package that computes WCs for SMEFT effective operators up to one-loop level and mass dimension six in terms of UV model parameters. The computation of WCs is based on the evaluation of effective-action formulae derived using functional methods [113,114,115,116]. The package is applicable to BSM scenarios containing single or multiple mass-degenerate heavy fields of spin 0, \(\frac{1}{2}\), and 1.^{Footnote 3} It computes the effective operators in both the strongly-interacting light Higgs (SILH) [135, 136] and Warsaw [3, 4] bases. The code also provides an option to perform the RG evolution of these operators in the Warsaw basis, using the anomalous-dimension matrix computed in [8,9,10]. Thus, one can obtain all effective operators at the EW scale generated from any such BSM theory. The program requires very minimal input in a user-friendly format: the user needs to provide only the relevant part of the BSM Lagrangian that involves the heavy field(s) to be integrated out. CoDEx, with its installation instructions, web documentation, and model examples, is available on GitHub (https://effexteam.github.io/CoDEx/).^{Footnote 4}
3.2.1 User inputs & CoDEx outputs
The input information needed to implement any BSM model in CoDEx is minimal. Here, we depict a step-by-step procedure to compute the effective operators, together with the internal computation that is carried out at each step in CoDEx (see also the flowchart in Fig. 1):

The user needs to provide the following information (quantum numbers) about the heavy field(s): color, isospin, hypercharge, mass, and spin, based on which the representation(s) of the heavy field(s) are evaluated internally by the package. As mentioned, the SM gauge quantum numbers of the BSM field are needed as input. On top of that, the relevant part of the BSM Lagrangian that contains the heavy field(s) must be supplied by the user. The code automatically builds the heavy-field kinetic (derivative and mass) terms, which are not required from the user. The SM Lagrangian is also appended by default. Let us consider an example here: we have only one heavy field, a real singlet scalar (color \(\rightarrow \) 1, isospin \(\rightarrow \) 1, hypercharge \(\rightarrow \) 0, spin \(\rightarrow \) 0). Let us denote the heavy field by ‘hf’ and its mass by ‘m’. This represents the field content of our model in the correct way:
From this input, CoDEx constructs the field representation that is needed to write the BSM Lagrangian, and the CoDEx internal functions recognise this representation for further analysis. First, load the package:
To write the Lagrangian in a compact form one can define the heavy field \({{\mathcal {S}}}\) as:
Then we need to build the relevant part of the Lagrangian (involving the heavy field only). Note that we do not need to construct the heavy-field kinetic term (the covariant-derivative and mass terms) in the CoDEx Lagrangian. Thus, the only part of the Lagrangian we need here is^{Footnote 5}:

Next, we need to load the symmetry generators for computing loop-level WCs:

Based on these inputs, one can generate the tree- and one-loop-level WCs. The CoDEx functions for generating WCs are listed in Table 1.
(See the documentation for details.)

The last step is the computation of effective operators and associated WCs:

The output is obtained in the chosen basis and is formatted as a detailed table.
There is provision to export the result in LaTeX format. Table 2c is actually obtained from the output of the code above. We can compute the same in basis as well and for that we have to use:
Output of this can be found in Table 2a. Similarly, oneloop results can be obtained by changing the option value of ‘ ’ to . The default value of is , which combines both tree and oneloop results. These resulting WCs can then be run down to the electroweak scale, using . This function computes RG evolution for Warsaw basis effective operators (in leading log approximation) using anomalous dimension matrices available in Refs. [8,9,10].
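The leading-log evolution performed in this last step can be sketched in a few lines of Python. This is a schematic stand-in with a made-up 2×2 anomalous-dimension matrix, not CoDEx code; the true SMEFT matrices are those of Refs. [8,9,10], and the sign/transpose convention below is purely illustrative:

```python
import math

def run_leading_log(c_high, gamma, scale_high, scale_low):
    """Leading-log RG evolution of Wilson coefficients:
    C(mu) = C(Lambda) + gamma^T . C(Lambda) * log(mu/Lambda) / (16 pi^2).
    `gamma` is an n x n anomalous-dimension matrix (toy numbers here)."""
    L = math.log(scale_low / scale_high) / (16 * math.pi**2)
    n = len(c_high)
    return [c_high[i] + L * sum(gamma[j][i] * c_high[j] for j in range(n))
            for i in range(n)]

# hypothetical 2x2 anomalous-dimension matrix for two coefficients
gamma = [[1.0, 0.2],
         [0.0, -0.5]]

# run from a matching scale of 1 TeV down to 100 GeV
c_low = run_leading_log([1.0, 0.0], gamma, 1000.0, 100.0)
```

The second coefficient, which vanishes at the matching scale, is generated radiatively by the off-diagonal entry of the anomalous-dimension matrix.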

A detailed model-building guide is available in the package web documentation (https://effexteam.github.io/CoDEx/). Moreover, SMEFT matching results for multiple scalar extensions are available in Refs. [140, 141].
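To illustrate what such tree-level output encodes, the singlet example can be matched by hand. Assuming, for the sake of illustration, a portal interaction \({\mathcal {L}}_{\text {BSM}}\supset -\frac{1}{2}m^2 {{\mathcal {S}}}^2 - \kappa \,{{\mathcal {S}}}\,|H|^2\) (this particular coupling is our assumption, not necessarily the Lagrangian entered above), the classical equation of motion gives
\[
{{\mathcal {S}}}_{\text {cl}} = -\frac{\kappa }{m^2}\,|H|^2 + \frac{\kappa }{m^4}\,\partial ^2 |H|^2 + {\mathcal {O}}(m^{-6}),
\]
and substituting back into the Lagrangian yields
\[
\Delta {\mathcal {L}}_{\text {eff}} = \frac{\kappa ^2}{2m^2}\,|H|^4 \;-\; \frac{\kappa ^2}{2m^4}\,|H|^2\,\partial ^2 |H|^2 + {\mathcal {O}}(m^{-6}),
\]
where the first term renormalizes the Higgs quartic coupling and the second is, up to integration by parts, the Warsaw-basis operator \(Q_{H\Box } = (H^\dagger H)\Box (H^\dagger H)\). The loop-level WCs produced by CoDEx add to these tree-level pieces.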
3.2.2 Developers’ version: yet to be released

Heavy-light mixed WCs and dimension-8: A module incorporating effects from mixed one-loop processes involving both heavy and light fields is included. We generate these contributions by expanding the UV action around the classical solution for the light fields, obtained using their on-shell relations. We implement the universal effective-action formulae for the mixed heavy-light contributions and agree with those of Ref. [116] (see Tables 1–5 therein). We evaluated this formula in CoDEx for 16 BSM models to generate the mixed heavy-light WCs [140,141,142]. Modules for the evaluation of one-loop processes involving fields with non-identical spins and for incorporating SMEFT operators up to dimension-8 will be released shortly [123].

WCxF [133]: There are multiple packages available with different applications for the EFT matching and running of the WCs, and for mapping these WCs onto observables [143]. It is desirable to have a data/result exchange format among these packages. WCxF is such a data-exchange format, widely used among EFT packages; see Ref. [133]. CoDEx provides two functions for exporting and importing data in the WCxF format. We briefly discuss the utilities of these functions below.
The output (which is suppressed here) follows the WCxF template, and it can be interfaced with other available programs. In Ref. [140], we have validated this by successfully interfacing a .json file generated by the export function with the DSixTools [107] package.
The import function takes WCxF files as input and provides output in the CoDEx data format.
This output is in a CoDEx-readable format, and the ellipses above represent the other WCs in the list.
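For reference, a WCxF file is plain JSON with a handful of top-level entries: the EFT, the operator basis, the scale, and a dictionary of coefficient values. The snippet below writes and re-reads a minimal such file in Python; the coefficient names and numbers are placeholders for illustration, not actual CoDEx output:

```python
import json

# A minimal WCxF-style Wilson-coefficient file (placeholder values).
# Real files specify the EFT, the basis, the matching scale in GeV, and
# coefficient values as real numbers or {"Re": ..., "Im": ...} pairs.
wc_data = {
    "eft": "SMEFT",
    "basis": "Warsaw",
    "scale": 1000.0,
    "values": {
        "phi": 0.5e-6,                           # purely real coefficient
        "lq1_1111": {"Re": 1.2e-8, "Im": 0.0},   # complex coefficient
    },
}

with open("example.wcxf.json", "w") as f:
    json.dump(wc_data, f, indent=2)

# any WCxF-aware package can now read the same file back
with open("example.wcxf.json") as f:
    loaded = json.load(f)
```

Because the format is ordinary JSON, such files can be passed between matching codes, RG runners, and observable calculators without custom parsers.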

Identities: The effective-action evaluation for a BSM model generates gauge-invariant structures that do not directly map onto the desired effective-operator basis. We apply operator identities and equations of motion to the derived effective Lagrangian to cast the gauge-invariant terms into the desired structures. These identities depend on the choice of effective-operator basis. Transformations such as Fierz identities, SM field equations of motion, and SMEFT dimension-six operator identities are implemented in the developer version of CoDEx. In future developments, new modules will become available to capture evanescent-operator effects [81, 144,145,146], and these identities will be extended to incorporate effects from SMEFT dimension-8 operators [123].
3.3 Matchete: Matching Effective Theories Efficiently
Matchete (MATCHing Effective Theories Efficiently) is a Mathematica package fully automating matching computations up to one-loop order, utilizing functional methods. The user supplies the UV theory by first defining the symmetry groups (both local and global), the fields, and the coupling parameters with simple commands. With these definitions in place, the Lagrangian can be written in Mathematica language in a simple way and then passed to the matching function to integrate out the fields that the user has defined as “heavy”. As a consequence of the functional matching procedure, no prior knowledge of the operator basis of the resulting effective theory is required: Matchete automatically generates the full set of effective operators and reduces it to a basis without any user input.
The Matchete package is free software under the terms of the GNU General Public License v3.0 and is publicly available in the following GitLab repository:
https://gitlab.com/matchete/matchete
This note only serves as a brief overview of the package; we refer the reader to Ref. [49] for details.
3.3.1 Matching strategy
Functional methods and expansion by regions
Matchete achieves the matching by directly computing the Wilsonian effective action, i.e. the contribution to the generating functional encoding only short-distance physics, corresponding to energy scales \(E>\Lambda \), where \(\Lambda \) is the matching scale (often called the “new-physics scale” in BSM contexts) [115, 117, 118, 147, 148]. To this end, one first splits the field content of the theory into Fourier modes with frequencies above (hard) and below (soft) the scale \(\Lambda \):
Lowenergy matrix elements are computed from the generating functional:
from which the Wilsonian effective action \(S_\Lambda \) is defined:
This object can be calculated directly by means of a background field expansion, meaning each field is further split into classical fields \({\hat{\phi }}_i\) and quantum fluctuations \(\eta _i\),
and an expansion of \(S({{\hat{\phi }}}_S+\eta _S,{{\hat{\phi }}}_H+\eta _H)\) in the quantum fields is performed. Collecting hard and soft modes into a single multiplet, i.e. \({{\hat{\phi }}}\) and \(\eta \) for the classical field and quantum fluctuation, this expansion reads:
The first term corresponds to the tree-level contributions, the second vanishes by virtue of the equations of motion, and the third term encodes all the one-loop contributions. At tree level, the effective action is obtained by solving the equations of motion for the heavy fields, inserting the solutions back into the full Lagrangian, and expanding in the heavy mass. At one-loop level, the effective action is found from the hard-region loop integral over the logarithm of the second variation of the action,
where \(Q_{ij} = \frac{\delta ^2 S}{\delta {\bar{\eta }}_i\delta \eta _j}\) is the fluctuation operator. The subscript \(h\) on the loop integral denotes the fact that the integral is taken in the hard region, meaning the integration momentum k is assumed to be of the order of the hard scale \(\Lambda \). In practice, this is implemented by assigning a power counting to the soft scales (masses and momenta of soft modes) and expanding the integrand systematically to the desired order in the EFT counting. This expansion by regions [149, 150] simplifies the matching procedure, as it directly yields the one-loop contributions to the matching coefficients without the need to evaluate matrix elements of the effective theory [115].
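As a one-line illustration of the expansion by regions (our example, not taken from the package): for a hard loop momentum \(k \sim \Lambda \sim M\) and a soft mass \(m \ll M\), a mixed propagator product is Taylor-expanded in the soft scale before integration,
\[
\frac{1}{(k^2-M^2)(k^2-m^2)} \;\longrightarrow \; \frac{1}{(k^2-M^2)\,k^2}\left( 1 + \frac{m^2}{k^2} + \frac{m^4}{k^4} + \cdots \right) ,
\]
so that every term is a single-scale integral depending only on the heavy mass M. The soft region of the full-theory integral reproduces the corresponding EFT loop, which is why the EFT matrix elements drop out of the matching.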
The method outlined above yields an effective Lagrangian that is not manifestly gauge-invariant, as it contains open covariant derivatives, i.e. covariant derivatives that are not yet contracted into field-strength tensors or acting directly on fields; these terms cannot be dropped, since covariant derivatives commute non-trivially. This is mitigated by the so-called covariant derivative expansion, for details of which we refer the reader to the literature [151,152,153].
Basis Reduction After the matching procedure is performed, the obtained effective operators are not linearly independent. Matchete is able to handle the most common Lie algebras and performs simplifications of the Dirac algebra using d-dimensional identities where available. To find a basis, however, redundant operators still need to be eliminated. The first reduction technique relates operators with covariant derivatives to each other by means of integration-by-parts (IBP) identities, which are derived by requiring total-derivative operators to vanish, \(D_\mu J^\mu = 0\). These IBP identities allow one to eliminate certain derivative operators in favor of others. As an example, in a theory with charged Dirac fermions and a scalar, the following kind of reduction is achieved:
The choice of which operators to prefer is not unique, but it is advantageous to favor operators proportional to the equations of motion of the fields. In the above expression, the derivatives have either been traded for a field-strength tensor or act on the field operators in such a way that the Dirac equation can be used to simplify them further.
Operators with derivatives acting on fields in the way they appear in their equations of motion^{Footnote 6} can be further reduced by means of appropriate field redefinitions. For scalars \(\phi \), fermions \(\psi \), and vector fields \(A^\mu \), these operators are of the form \(J\, D_\mu D^\mu \phi \), \(J\, \gamma ^\mu D_\mu \psi \), and \(J_\nu D_\mu F^{\mu \nu }\), respectively. Redefining the fields by shifts proportional to the coefficient operator J then eliminates these operators while introducing operators with fewer derivatives, as well as operators at higher mass dimension. Applying the procedure iteratively, order by order in the power counting, then fully eliminates all operators proportional to the field equations of motion.
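To make the scalar case explicit (a textbook manipulation, not literal Matchete output): for an operator \(\frac{1}{\Lambda ^2}\, J\, D^2\phi \) with coefficient operator J, the shift
\[
\phi \;\rightarrow \; \phi + \frac{1}{\Lambda ^2}\, J
\]
applied to \({\mathcal {L}} \supset \frac{1}{2}(D_\mu \phi )(D^\mu \phi ) - \frac{1}{2}m^2\phi ^2 + \frac{1}{\Lambda ^2}\, J\, D^2\phi \) produces, after integration by parts,
\[
\delta {\mathcal {L}} = -\frac{1}{\Lambda ^2}\, J\, D^2\phi \;-\; \frac{m^2}{\Lambda ^2}\, J\, \phi + {\mathcal {O}}(\Lambda ^{-4}),
\]
i.e. it cancels the derivative operator and trades it for a lower-derivative one, plus higher-dimensional pieces that are taken into account at the next order in the iteration.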
Matchete applies all reductions described above fully automatically without the user having to derive and specify any operator reduction identities. As of the time of writing, the list of possible reductions is not completely implemented yet. In particular, the current version of Matchete does not yet implement Fierz reductions, as these require the proper treatment of evanescent operators [82]. This is left for a future release (see also Sect. 2.3.3).
3.3.2 Usage example
In this section, we briefly outline a simple usage example. Once again, the reader is referred to Ref. [49] for a more detailed user manual of the package including an installation guide. To demonstrate the features of Matchete, we match a simple model in which we supplement the SM with a singlet real scalar field \(\phi \), as has been discussed in Refs. [138, 139]. The Lagrangian of this model reads^{Footnote 7}:
Matchete provides the full definitions of the SM and its Lagrangian as a simple macro. After installing the package, it can be loaded via:
Next, we load the SM Lagrangian from the predefined model file included with Matchete:
where we rename the Higgs mass parameter and the quartic Higgs coupling for later convenience. We then define the heavy scalar using the command:
The arguments supplied to the function indicate the spin of the field, the fact that it is real, the definition of the mass parameter, and that it should be considered heavy. The remaining couplings in the Lagrangian (2.13) are defined with:
Matchete defines simple shortcuts for the fields and couplings when these commands are evaluated. Even though the full objects are more complicated, the user can input them in a simple form:
Note that the command automatically generates the kinetic and mass terms for the new field, so only the interaction terms have to be written out. We are now ready to integrate out the scalar at one-loop order. This is achieved by running the matching command with appropriate arguments:
Here the first option defines the order at which the matching procedure is carried out while the second option denotes the fact that we want to obtain the effective Lagrangian up to dimension six. After this line is successfully evaluated, the object contains redundant operators, as described earlier. Reductions using IBP identities and field redefinitions are then performed by running:
The user can choose to apply only the IBP identities, without field redefinitions, in order to obtain the Green’s basis. The resulting object contains the operators together with their matching coefficients. The full output is cumbersome, but individual contributions can be isolated with a dedicated extraction command. As an example, to show only the leptonic four-fermion operator, one uses:
where the second argument specifies the field content of the operator(s) to be extracted, and the last argument gives the number of derivatives. Here \(\hbar \) should be understood as a loop counting factor that is equal to \(1/(16\pi ^2)\). The matching example with more details is included with the Matchete package in the example notebook Examples/Singlet_Scalar_Extension.nb.
3.3.3 Outlook
We conclude this note with a roadmap for features intended for future releases:

The matching is currently done in strictly \(d=4-2\epsilon \) dimensions. This prevents reductions of Dirac structures, including Fierz rearrangements, because these hold only in four dimensions. When applied to d-dimensional operators, one has to account for evanescent contributions.

After evanescent contributions can be handled automatically, Matchete will be able to produce output that, in the case of SMEFT computations, can be directly compared to the Warsaw basis. In the future, Matchete will be able to automatically perform this identification as well as output the result in the WCxf [133] format. An interface with other phenomenology codes and/or commonly used formats, such as UFO [154], would be desirable as well.

At this time, Matchete does not allow integrating out heavy vector fields at the loop level. The reason for this is that these cannot be generally written down in a renormalizable fashion. In weaklycoupled theories, heavy vectors must arise from spontaneous symmetry breaking. This results in a complicated interplay between vectors, ghosts, and Goldstone bosons, especially in the background field gauge. So as to avoid having to derive and input all interactions manually, we wish to provide (semi)automated methods to determine the broken phase Lagrangian.

With small changes to the matching procedure, it is possible to determine the EFT counterterms and, thereby, the RG functions. Implementing this functionality in Matchete will allow for finding the RG functions for intermediatescale EFTs and vastly simplify sequential matching scenarios.
Once the above list of features is included in the package, Matchete can become an integral part of a fully automated pipeline from a UV model down to phenomenology, and as such a powerful tool for BSM studies, taking away the laborious task of one-loop matching.
3.4 Matchmakereft: a tool for treelevel and oneloop matching
Matchmakereft is a computer tool that automates the tree-level and one-loop matching of arbitrary weakly-coupled UV models onto their EFTs. Due to lack of space, we refer the reader to the original publication [48] and the manual that comes with the installation; here we summarize its main features and newest developments and provide a simple but illustrative example.
Matchmakereft is written in Python, making it very easy to install on different platforms, and it uses well-tested tools, including FeynRules [155], QGRAF [156], FORM [157], and Mathematica, to perform an off-shell matching using diagrammatic methods in the background field gauge.
Matchmakereft takes advantage of the large degree of gauge and kinematic redundancy in off-shell matching to perform a significant number of non-trivial cross-checks, ensuring the validity of the resulting computation. It is also equipped to compute the RG equations of arbitrary EFTs and the off-shell (in)dependence of a set of local operators. Matchmakereft treats the kinematic and gauge dependence independently, leaving the latter arbitrary until the very end of the calculation. This increases its efficiency, but it also makes matchmakereft an ideal tool to compute IR/UV dictionaries or to perform calculations in theories with arbitrary gauge structures.
Among the latest developments of matchmakereft, the calculation of amplitudes in chunks of a fixed number of diagrams and the ability to compute amplitudes in parallel have significantly increased its efficiency (see the manual for details).
Let us now demonstrate many of the features of matchmakereft with a concrete example involving two scalar fields, a light, but not massless, field \(\phi \) and a heavy field \(\Phi \). Our model is described by the Lagrangian:
which we want to match to the EFT Lagrangian without the heavy scalar,
We will use this Lagrangian during the off-shell matching. Subsequently, the kinetic term can be canonically normalized and the redundant operators can be eliminated. Two of the three operators of dimension six are redundant; we choose \(\phi ^6\) as the independent operator. Using the equations of motion, we readily find that:
Eliminating these operators from the Lagrangian would induce the shifts
The coupling \(\kappa \) of this model is dimensionful and is expected to be parametrically of the order of the heavy mass scale \(M_H\). Thus, \(\frac{\kappa }{M_H}\) is of \({\mathcal {O}}(1)\) and is consistently kept throughout the matching procedure.
The FeynRules file for the UV model, saved as two_scalars.fr, is shown below.
Note that we use the keyword FullName to characterize each field as “heavy” or “light”. This is mandatory: matchmakereft uses this keyword to distinguish between the fields that are integrated out and the light fields that are also present in the EFT. Also note that all the parameters used in the Lagrangian, masses as well as couplings, must be declared. In this example all parameters are real.
The FeynRules file for the EFT model, saved as one_scalar.fr, is:
Note that we have included WCs (denoted by alpha) also for the kinetic and mass-squared terms, as well as for all operators that are redundant solely due to the equations of motion.
In order for matchmakereft to perform the reduction to the physical basis, we need to provide a set of relations that express the redundant WCs in terms of the irreducible ones, see Eq. (2.18). This is done in one_scalar.red:
Note that only the WCs corresponding to physical operators, among those appearing in the EFT Lagrangian defined in the file one_scalar.fr, must appear on the left-hand side of the replacement rules in this file; the WCs corresponding to redundant operators appear only on the right-hand side. When these rules are applied, both redundant and non-redundant WCs have already been matched and are known as functions of the parameters of the UV theory. The rules are therefore instructions on how to update the non-redundant WCs to include the effect of the redundant ones.
With these files prepared, we are ready to proceed with the matching. In the matching directory, where two_scalars.fr, one_scalar.fr, and one_scalar.red are present, we can run matchmakereft:
upon which we enter the python interface
We first need to create the matchmaker models, i.e. the directories with all the necessary information for the UV and the EFT models. We do this by
which has the response
We can now observe that the directory two_scalars_MM is created. We proceed with creating the EFT model
The one_scalar_MM directory is now created as well, and we are ready for the matching calculation. This is performed by the match_model_to_eft command:
Upon completion, the results of the matching are stored in the UV model directory, in this case two_scalars_MM. The file two_scalars_MM/MatchingProblems.dat contains troubleshooting information in case the matching procedure failed. In our case it is an empty list, indicating no problems:
The result of the matching procedure is stored in two_scalars_MM/MatchingResults.dat, a Mathematica file with a list of lists of replacement rules. These matching results can also be seen in printed form in Appendix C of the original matchmakereft publication [48]. In this example, the kinetic operator receives a one-loop matching correction and, therefore, \(\phi \) is no longer canonically normalized. A field redefinition is needed to obtain a canonically normalized theory, on which we can apply the corresponding redundancies to go to the physical basis. Matchmakereft performs these two steps (canonical normalization and reduction to the physical basis) automatically. The resulting WCs in the physical basis, up to one-loop order and up to \(O\left( \frac{\kappa ^{2n}}{M_H^{2n}}\frac{m_L^2}{M_H^2}\right) \), can also be found in printed form in Appendix C of Ref. [48].
As mentioned above, matchmakereft can do many more things than just finite matching. One can for instance compute the RG equations for both models and check the consistency of the logarithmic terms from the RG equations and the finite matching. We refer to the manual for a detailed explanation.
We would like to finish this overview of matchmakereft by mentioning two projects we are currently working on in the Granada group that will either end up being part of matchmakereft or strongly rely on it for their development. Current matching programs perform the matching off-shell, producing the effective Lagrangian in a so-called Green’s basis, which contains redundant operators that are not required to describe physical, on-shell amplitudes. The usual procedure is to reduce this Green’s basis to a physical basis, either manually (as currently done in matchmakereft) or in an automated way (as currently done by matchete [49]). We are looking into sidestepping this reduction step by performing an on-shell matching. This has a number of technical complications that have been essentially solved for the tree-level matching; we are working on extending the on-shell matching to the one-loop level. More details can be found in Sect. 3.6.
Another project we are currently working on was mentioned by R. Fonseca in his talk but is not covered elsewhere in this document (it has a significant overlap with the content of Sect. 2.8 by other authors). Even with current codes that automate the process of one-loop matching, repeating the calculation for different models is tedious and, in the case of many fields, can be computationally very expensive. Our idea is to define a generic model, in which the gauge structure is not fixed a priori and the field multiplicity simply appears as a dummy flavor index. This general EFT can be built with a single multiplet each of real scalars, Weyl fermions, and gauge bosons. We have defined the most general EFT with this generic field content up to mass dimension six and we are computing its RG equations. All the loop integrals, tensor reductions, and kinematic projections are performed in this generic EFT in such a way that the RG equations for any specific EFT can be obtained by means of a straightforward group-theoretic calculation. The next step will be to define a generic theory with light and heavy fields and perform the finite one-loop matching for it.
3.5 Sym2Int: Automatic generation of EFT operators
Renato M. Fonseca
EFTs are a powerful tool for probing potential new physics in a model-independent way. At a time when there is a lack of clarity on how to extend the SM, its related EFTs have been receiving an increasing amount of attention. For example, the number of SMEFT operators has been counted with several techniques in the last few years, up to high mass dimensions. Building an explicit basis of operators is more complicated, but here too there has been notable progress. I will go through my recent work on using the software packages GroupMath and Sym2Int to automatically build explicit bases of operators for EFTs, given their fields and symmetries.
3.5.1 Using computers to generate Lagrangians
With EFTs one can study the low energy consequences of a model without having to handle or even know all of its intricacies; by integrating out the heavy fields one can obtain a Lagrangian which describes well the large distance behavior of the original theory. However, the reduction in the number of fields comes at the cost of introducing a potentially large set of local operators of high dimension. As such, the very first step in the study of an effective field theory is to establish a basis of operators which encodes all possible interactions between the light fields. Put simply, one needs a Lagrangian. As the reader is certainly aware, most theories are invariant under some group of transformations — for example the Lorentz group and/or a gauge group — therefore the task of finding all interactions is inseparable from the problem of finding invariant combinations of products of representations of some group.
There is a long history of using computers in particle physics, given the complexity of the calculations one needs to perform. Indeed, there are many codes specialized in various tasks, from calculating Feynman rules all the way to generating events at colliders. However, at the very beginning of such a stack of programs, it would be useful to have one more code which, to some degree, alleviates the heavy burden on the user of having to provide the full Lagrangian of the model. To the best of my knowledge, SARAH [158] and Susyno [159] were the first codes to build symmetryinvariant Lagrangians (superpotentials, to be more specific) with the user having only to specify the representation of the (super)fields under some gauge group.
In the case of Susyno, it builds the most general renormalizable supersymmetric (SUSY) Lagrangian allowed by the gauge symmetry, as well as the soft SUSY breaking terms, and then applies known formulas in order to derive the twoloop renormalization group equations for all the free parameters (such as the gauge and Yukawa couplings). The group theory code used to perform the first step grew over time and was eventually released as the standalone GroupMath program [160]. It is also used in other packages, such as SARAH 4 [161], Pyr@te 2+ [162], DRalgo [163] and Sym2Int [164]. The aim of this last program, which will be the main topic from now on, is to go from symmetries to interactions: given some input fields (that is, representations of the Lorentz and gauge groups) it computes all operators up to some mass dimension. Sym2Int is currently being extended in order to be able to provide explicit expressions for the operators.^{Footnote 8}
3.5.2 The current Sym2Int program
Details on how to use the program can be found in [164], as well as on the program’s webpage. For illustrative purposes, with the following input one can obtain the list of SMEFT interactions up to dimension 8:
gaugeGroup[SM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};
fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
GenerateListOfCouplings[SM, MaxOrder -> 8];
The number of interactions in this particular EFT — up to dimension 15 and for an arbitrary number of flavors (which is set to 3 in the code above) — can be found in a couple of hours. As far as I know, this is the only cross-check of the numbers provided for the first time in [89] using the Hilbert series.
It is worth noting that Sym2Int also computes some important information on the symmetry of flavor indices. For example, \(L_{i}L_{j}HH\) is found to be symmetric under the exchange \(i\leftrightarrow j\), while \(Q_{i}Q_{j}Q_{l}L_{k}\) has a more complicated symmetry. Let us distinguish operators, for which flavor indices are expanded (for n flavors there are \(n\left( n+1\right) /2\) operators of the form LLHH), from Lagrangian terms, which are tensors in flavor space (there is just one term of the form LLHH). The information mentioned earlier can then be used to infer the minimum number of terms needed to write a model’s Lagrangian. In the case of three Q’s and one L, even though there are four ways of contracting the various spinor and \(SU(2)_{L}\) indices, it is possible to write all of them as a single Lagrangian term \(\omega _{ijkl}Q_{i}Q_{j}Q_{l}L_{k}\) (see [166] for a more thorough discussion of this topic).
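The flavor counting quoted above is easy to verify mechanically; the following standalone Python sketch (an illustration, not Sym2Int code) enumerates the independent components of a coefficient symmetric under \(i\leftrightarrow j\):

```python
from itertools import combinations_with_replacement

def count_symmetric_pairs(n_flavors):
    """Number of independent components of a coefficient c_{ij}
    symmetric under i <-> j, as for the L_i L_j H H term."""
    return sum(1 for _ in combinations_with_replacement(range(n_flavors), 2))

# n(n+1)/2 independent LLHH-type operators for n = 1, 2, 3 flavors
counts = [count_symmetric_pairs(n) for n in (1, 2, 3)]
```

For three flavors this gives the six independent LLHH operators mentioned above; mixed symmetries such as that of QQQL require projecting onto the corresponding Young tableaux instead of a simple enumeration.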
3.5.3 An upgrade: building operators and terms explicitly
Counting operators and terms is not the same as building a Lagrangian. For the latter one needs to know the explicit form of each interaction, which implies knowing how the various field indices are contracted. It is also worth highlighting that neither the current version of Sym2Int nor the Hilbert series method can be used to determine how the derivatives — if there are any — are applied to the fields.
We desire a model’s Lagrangian, but I would like to point out that the Lagrangian often consists of a complicated function of the field components with a very low information density. Consider, for example, the interactions between a fermion transforming under the fundamental representation of SU(n) and the gauge bosons of this group. The relevant Clebsch-Gordan factors coincide with the entries \(T_{ij}^{A}\) of the matrices of the fundamental representation of SU(n), containing a total of \(n^{2}\left( n^{2}-1\right) \) entries. Given that fields can be redefined, one has to wonder what information is contained in all these numbers: in principle, observable quantities should depend on these Clebsch-Gordan factors only through combinations of the tensor T with no open indices, such as \(T_{ij}^{A}T_{ji}^{A}\) (see for instance [167]).
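The closed contraction \(T_{ij}^{A}T_{ji}^{A}\) can be checked numerically in a few lines; the sketch below (our illustration, not GroupMath code) evaluates it for SU(2), where \(T^{A}=\sigma ^{A}/2\) and the expected basis-independent value is \((n^{2}-1)/2 = 3/2\):

```python
# Fundamental SU(2) generators T^A = sigma^A / 2 (Pauli matrices over 2).
sigma = [
    [[0, 1], [1, 0]],        # sigma^1
    [[0, -1j], [1j, 0]],     # sigma^2
    [[1, 0], [0, -1]],       # sigma^3
]
T = [[[x / 2 for x in row] for row in s] for s in sigma]

# Closed contraction T^A_{ij} T^A_{ji}: unlike the individual entries
# T^A_{ij}, this number is invariant under field redefinitions.
invariant = sum(T[a][i][j] * T[a][j][i]
                for a in range(3) for i in range(2) for j in range(2))
```

Rotating the generators by a unitary change of basis changes every entry \(T_{ij}^{A}\) but leaves this contraction fixed, which is the point being made in the text.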
It is therefore conceivable that in the future we might find it unnecessary to know a model’s Lagrangian in full detail. With this cautionary remark, the fact remains that at present we do need these complicated expressions for the study of models in general, and EFTs in particular. For this reason, the Sym2Int code is in the process of being upgraded such that it not only counts but also computes explicitly operators and terms. In the following I will make a few remarks concerning this upgrade.
Output for humans vs output for other codes While designing a program which automatically generates lists of operators, one must take into account whether the results are to be used directly by a person or fed into some other software package, such as FeynRules [155] or Matchmakereft [48]. In the second case, the readability of the result is not so important. For example, the two possible ways of contracting three color octets can be described by a \(2\times 8\times 8\times 8\)-dimensional tensor \(c_{aijk}\) containing the relevant Clebsch-Gordan factors. But even in this rather modest example, a human might find it hard to read such a data format.
A related problem is that these Clebsch-Gordan factors are not unique. In the previous case, we are free to take any invertible combination of the two contractions, \(c_{aijk}\rightarrow X_{aa^{\prime }}c_{a^{\prime }ijk}\) with \(\text {det}\left( X\right) \ne 0\), and one can also make a rotation in the eight-dimensional space of the octet, leading to the change \(c_{aijk}\rightarrow U_{ii^{\prime }}U_{jj^{\prime }}U_{kk^{\prime }}c_{ai^{\prime }j^{\prime }k^{\prime }}\) for some unitary matrix U.
Currently, the package GroupMath contains a function Invariants which, subject to time and memory constraints, can compute the above group theory data for any product of representations of a semisimple Lie algebra. However, due to the two issues discussed above, there is room for improvement. Conveniently, the gauge symmetry of many models is completely described by SU(n) groups (and perhaps U(1)’s, which are easy to handle). Motivated by this, a future version of GroupMath will include SU(n)-specific code capable of providing the same information as Invariants but using instead the tensor method familiar to physicists (see chapter 4 in [168]). The numerical output will still consist of large tensors with Clebsch-Gordan factors, but it is now also possible to include a human-readable string which identifies each of them. For the product of three octets, writing them as \(3\times 3\) traceless matrices \(\Omega _{\;j}^{i}\), \(\Omega _{\;j}^{\prime i}\) and \(\Omega _{\;j}^{\prime \prime i}\), the two invariant combinations alluded to above would be \(\Omega _{\;j}^{i}\Omega _{\;k}^{\prime j}\Omega _{\;i}^{\prime \prime k}\) and \(\Omega _{\;j}^{i}\Omega _{\;k}^{\prime \prime j}\Omega _{\;i}^{\prime k}\); the GroupMath program will be able to provide the value of these two expressions as well as the corresponding formulas/strings.
The approach to Lorentz indices is similar: for each invariant expression the program keeps track of a tensor (with the SO(1, 3) Clebsch-Gordan factors) as well as a human-readable string involving the familiar scalars, spinors, gamma matrices, derivatives and field strength tensors. If there are fermions in the interaction, with the \(\gamma ^{\mu }\) and C matrices one can replace open spinor indices with vector indices. Then, from an expression containing only a set of open vector indices, one can construct all Lorentz invariants by contracting it in all possible ways with the metric and Levi-Civita tensors, \(\eta _{ab}\) and \(\epsilon _{abcd}\). As for derivatives, they should be distributed in all possible ways among the different fields in the operator. This approach of contracting indices and applying derivatives in every conceivable way leads to a highly redundant list of Lorentz invariant expressions; such a list can easily be pruned by looking for linear relations among the various polynomials.
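The pruning step can be sketched as follows: if each candidate Lorentz-invariant expression is expanded into a coefficient vector over a common basis of monomials, the redundant candidates are exactly those whose vectors are linear combinations of the previously kept ones. The helper below is a hypothetical illustration, not taken from any of the packages discussed here:

```python
import numpy as np

def prune(candidates, tol=1e-9):
    """Keep a maximal linearly independent subset of candidate
    polynomials, each given as a coefficient vector over a common
    monomial basis."""
    kept = []
    for vec in candidates:
        trial = np.array(kept + [vec])
        # keep `vec` only if it raises the rank, i.e. brings new content
        if np.linalg.matrix_rank(trial, tol=tol) == len(trial):
            kept.append(vec)
    return kept

# Toy example: the third "polynomial" equals the sum of the first two,
# so it is recognized as redundant and dropped.
p1 = [1.0, 0.0, 2.0]
p2 = [0.0, 1.0, -1.0]
p3 = [1.0, 1.0, 1.0]
independent = prune([p1, p2, p3])
print(len(independent))  # 2
```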
Dealing with gauge indices and Lorentz indices separately Building operators explicitly implies handling potentially very large polynomials of the numerous field components, so calculations are time-consuming even for low-dimensional interactions. At each step in the design of a code that handles explicit operators, one must therefore be aware of this problem and try to mitigate its impact.
In my opinion, an important step in reducing the computational and memory requirements of handling operators is to segregate gauge indices from the spacetime indices of spinor and vector fields. If \(c_{g_{1}g_{2}\cdots }\) and \(\kappa _{l_{1}l_{2}\cdots }\) are the Clebsch-Gordan factors for the two types of indices, instead of working with the full operator \({\mathcal {O}}\equiv c_{g_{1}g_{2}\cdots }\kappa _{l_{1}l_{2}\cdots }\Phi _{g_{1},l_{1}}\Phi _{g_{2},l_{2}}\cdots \), it is preferable to manipulate (somehow) the simpler polynomials \({\mathcal {O}}_{G}\equiv c_{g_{1}g_{2}\cdots }\Phi _{g_{1}}\Phi _{g_{2}}\cdots \) and \({\mathcal {O}}_{L}\equiv c_{l_{1}l_{2}\cdots }\Phi _{l_{1}}\Phi _{l_{2}}\cdots \), each involving only one type of index.
Some may consider this to be an elementary observation but, as the following example shows, it is not trivial to implement. Suppose that both the gauge and the Lorentz groups are described by the SU(2) group, and ignore for simplicity the fact that some fields (fermions) anticommute. Now take some field \(\Phi \) which is a doublet under both groups: both \({\mathcal {O}}_{G}=\epsilon _{g_{1}g_{2}}\Phi _{g_{1}}\Phi _{g_{2}}\) and \({\mathcal {O}}_{L}=\epsilon _{l_{1}l_{2}}\Phi _{l_{1}}\Phi _{l_{2}}\) are identically zero, so whatever algorithm we use to handle these two polynomials, we would conclude that there is no \(\Phi ^{2}\) operator. And yet, by considering the two indices together, we readily find that \({\mathcal {O}}\equiv \epsilon _{g_{1}g_{2}}\epsilon _{l_{1}l_{2}}\Phi _{g_{1},l_{1}}\Phi _{g_{2},l_{2}}\) is not null. What is happening here is that both the Lorentz indices \(l_{i}\) and the gauge indices \(g_{i}\) are contracted antisymmetrically, so by considering each set of indices separately we get vanishing polynomials, while the operator \({\mathcal {O}}\) is symmetric, not antisymmetric, under the exchange \(\left( g_{1},l_{1}\right) \leftrightarrow \left( g_{2},l_{2}\right) \).
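This behaviour is easy to verify numerically. In the following minimal sketch (plain NumPy, not Sym2Int code), the single-index contraction vanishes identically while the combined one survives:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # 2-dimensional Levi-Civita symbol
rng = np.random.default_rng(1)

# A single-index doublet: eps_{g1 g2} Phi_{g1} Phi_{g2} vanishes identically,
# since an antisymmetric tensor is contracted with a symmetric product.
v = rng.normal(size=2)
o_single = np.einsum('ab,a,b->', eps, v, v)

# A field carrying both indices, Phi_{g,l}: the combined contraction
# eps_{g1 g2} eps_{l1 l2} Phi_{g1 l1} Phi_{g2 l2} is generically nonzero
# (a short computation shows it equals 2 det Phi).
phi = rng.normal(size=(2, 2))
o_both = np.einsum('ab,cd,ac,bd->', eps, eps, phi, phi)

print(abs(o_single) < 1e-12, abs(o_both) > 1e-12)  # True True
```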
A possible solution is to distinguish equal fields in the simpler polynomials \({\mathcal {O}}_{G}\) and \({\mathcal {O}}_{L}\); in the previous example, \({\mathcal {O}}_{G}=\epsilon _{g_{1}g_{2}}\Phi _{g_{1}}\Phi _{g_{2}}^{\prime }\) and \({\mathcal {O}}_{L}=\epsilon _{l_{1}l_{2}}\Phi _{l_{1}}\Phi _{l_{2}}^{\prime }\) are no longer null, as long as we keep \(\Phi ^{\prime }\ne \Phi \). A solution along these lines is viable, and indeed it has been successfully tested in Sym2Int. Without going into details, I will simply note that one must still account, in the end, for the fact that \(\Phi ^{\prime }\) is the same as \(\Phi \).^{Footnote 9} Things become even more complicated when fields have flavor.
Flavor It turns out that in some models (for example in SMEFT), some representations of the symmetry group are present more than once. We tend to account for this mysterious multiplicity by adding a flavor index to the relevant fields, \(\Phi \rightarrow \Phi _{i}\), and write Lagrangians with flavored tensors, such as the Yukawa matrices. Each of them is associated with what I previously called a Lagrangian term, which may correspond to many operators once the indices are expanded.
Flavor constitutes a significant complication. If all the fields in an interaction are distinct, as in \(L_{i}^{*}L_{j}Q_{k}^{*}Q_{l}\),^{Footnote 10} accounting for multiple flavors is trivial. The problematic cases are those containing repeated fields, such as \(L_{i}^{*}L_{j}L_{k}^{*}L_{l}\), as they will have some underlying symmetry under permutations of the flavor indices. One approach to these troublesome cases might be to consider one flavor combination at a time, effectively expanding the flavor indices. Not only is such an approach very taxing computationally, but it is also not clear how the results are to be presented; ideally one would like to undo the expansion of the indices and write as few terms in the Lagrangian as possible.
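For a concrete feel of the index expansion, the number of independent flavor components of an \(L^{*}LL^{*}L\)-type coefficient can be counted by listing all flavor combinations and identifying those related by the exchange of the two identical bilinears, \(C_{ijkl}=C_{klij}\). This small script is illustrative only, not Sym2Int code:

```python
from itertools import product

def independent_flavor_components(nf):
    """Count independent components of a coefficient C_{ijkl} subject to
    the pair-exchange symmetry C_{ijkl} = C_{klij}, which arises when the
    two identical field bilinears in the operator are swapped."""
    seen = set()
    for i, j, k, l in product(range(nf), repeat=4):
        # keep one canonical representative per orbit of the exchange
        seen.add(min((i, j, k, l), (k, l, i, j)))
    return len(seen)

# The n^4 naive components collapse to n^2 (n^2 + 1) / 2 independent ones:
print(independent_flavor_components(3))  # 45 rather than 81
```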
As the operator dimension increases, so does the number of intervening fields. At some point we are bound to find cases such as \(Q_{i}Q_{j}Q_{k}L_{l}\) in SMEFT, where the same field appears three or more times. The relevant permutation group is then no longer abelian, and we may have to consider mixed (or multiterm) symmetries in the flavor indices (see [166]).
This problem is most acute in a model where only one index distinguishes all scalars and all fermions; such a model is therefore a good testbed for Sym2Int's new code. Indeed, we may represent a generic EFT as a theory with an arbitrary number of real scalars \(\phi _{i}\) and Weyl fermions \(\psi _{i}\) with covariant derivatives
where \(t^{A}\) and \(\theta ^{A}\) are generic hermitian matrices (the \(\theta ^{A}\) must also be antisymmetric). It was precisely in this general framework that the two-loop RG equations for dimension-four operators were derived in [169,170,171]. We may, however, extend it to non-renormalizable operators, not just to study complicated flavor symmetries, but also to derive important results for a general EFT. Indeed, we are currently in the process of deriving the one-loop RG equations for this EFT (up to dimension-six interactions), as well as the matching relations between it and a general renormalizable UV model [172]. Once these are known, it becomes unnecessary to go back to the computation of loops and amplitudes for every single model; the task of computing RG equations and matching conditions for a particular model is reduced to writing a Lagrangian, which involves only some algebra and group theory.
3.5.4 Outlook
The Sym2Int code, as it exists now, lists and counts all the possible operators in an effective field theory, up to some dimension cutoff. It is currently being extended to also build them explicitly, while handling in a satisfactory way the fact that some fields have flavor. With the approach being pursued, the presence of flavor does not significantly affect the computation time, but it does pose some complicated problems of a conceptual nature which still have to be addressed. Sidestepping these for now, the code has already been used to compute all Green's-basis operators in SMEFT up to dimension 10, for an arbitrary number of fermion flavors, with a counting that matches the known result.
3.6 SOLD: Towards the one-loop matching dictionary in the SMEFT
The benefit of using EFTs to compute experimental observables for different UV models is undeniable, but EFTs are much more powerful than that. They really shine when the top-down approach is combined with power-counting arguments, which allow for a complete classification of observable UV models in the form of IR/UV dictionaries. The EFT is a double expansion in the mass dimension of the operators and the loop order of the WCs, with operators of higher dimension being less relevant at low energies and WCs of higher loop order being smaller than lower-loop ones. Eventually, contributions of high enough mass dimension and/or loop order are smaller than the experimental precision and can be disregarded as unobservable.
Given a finite order in mass dimension and loop order, the complete set of UV models that contribute to an EFT up to these orders can be exhaustively classified. Once the complete classification is achieved, one can go one step further and compute the resulting WCs for all the UV models in the list. This way we obtain true IR/UV dictionaries that relate the WCs of the EFT (and therefore experimental observables, via the bottom-up approach) to all UV models that contribute to the EFT at the particular order to which the dictionary has been computed. These dictionaries can be used iteratively to obtain a complete map of the implications of experimental data for models of new physics. Indeed, given a particular experimental constraint or anomaly, one can list all models that are restricted by (or, in the case of anomalies, explain) that particular measurement. From this list, we can then obtain all other experimental implications of these models, which can be tested in a correlated way against different experimental data.
The tree-level, dimension-six dictionary for the SMEFT was computed a few years back [137], building on previous efforts [173,174,175,176]. It includes the most general extension of the SM with new scalars, fermions or vectors that contribute at tree level to the SMEFT operators up to mass dimension six. While essential for the classification of large effects, this leading dictionary falls short when compared with the current precision of many experimental measurements, all the more so considering that certain WCs are only generated in weakly coupled extensions of the SM at the one-loop order. The first step towards the calculation of the one-loop, dimension-six IR/UV dictionary for the SMEFT has recently been published in [177].
3.7 Towards the one-loop, dimension-six IR/UV dictionary in the SMEFT
Contrary to the tree-level dictionary, in which the list of new fields is finite (a total of 48 new scalars, fermions or vectors appear in the dictionary), at one-loop order the list is infinite, due to the fact that some contributions only constrain the quantum numbers of the product of multiple fields rather than each of them independently. Despite this infinite number of models, it is still possible to classify them in the form of a finite number of conditions on these models. Still, the calculation of the complete one-loop dictionary for the SMEFT at dimension six is a formidable task and, with the help of the computer tool matchmakereft [48], we have just finished the first step towards it [177]. In particular, we have considered the most general extension of the SM with an arbitrary number of scalar and fermionic fields^{Footnote 11} that contribute at one-loop order to those operators in the Warsaw basis [4] which cannot be generated at tree level in any weakly coupled extension of the SM. This includes all operators in the basis that contain at least one gauge field-strength tensor. Our results, which include the full classification of models, a partial list of the specific representations (up to a certain representation dimension, to be chosen by the user) and the actual values of the corresponding WCs, are too long to be reported in print form, so we have published them in electronic form as a Mathematica package called SOLD (for SMEFT One-Loop Dictionary), available via its GitLab repository (it can also be trivially installed directly from a Mathematica notebook). Tools to create models suitable for the full one-loop matching using matchmakereft are also available within SOLD (see [177] for details). The fact that matchmakereft performs the matching calculation in a gauge-blind fashion for most of the computation has significantly helped the development of the dictionary.
3.8 MatchingDB: A format for matching dictionaries
Juan Carlos Criado
MatchingDB is a format for the storage, exchange and exploration of EFT matching results up to one-loop order. Its specification, in both human- and machine-readable forms, together with a Python interface, is located at the MatchingDB GitLab repository:
gitlab.com/jccriado/matchingdb
It aims to provide:

A unified, language- and tool-independent format for the communication of EFT matching results.

An efficient workflow for the practical use of matching dictionaries.
The format is particularly useful to store and publish large matching dictionaries, whose size might make it impractical to provide them in the form of humanreadable equations and tables. The Python interface makes it easy to interact with them, and quickly obtain the relevant information for specific applications. An example of such a dictionary is the complete treelevel dictionary [137] between the dimensionsix SMEFT and any of its UV completions, which is provided in MatchingDB format under the dictionaries directory in the MatchingDB repository. Other matching databases will be made available at the same place.
MatchingDB can also be employed as a data-exchange format between tools, allowing comparison of the results from different matching codes and providing an interface to connect them to packages for RG running and observable calculations. It will be implemented as an output format in MatchingTools [112] and Matchmakereft [48].
Some of the features currently offered by the combination of the MatchingDB format and the accompanying Python package are: listing the heavy fields and UV couplings that generate a given WC; listing the EFT contributions of a given set of heavy fields; providing LaTeX output, both for the UV Lagrangian and for matching corrections; and providing numerical output that matches WCxf [133] for the SMEFT.
3.8.1 Format definition
MatchingDB data can be stored either as a plain-text JSON [178] file or as an SQLite [179] database. The format is defined by its JSON schema, which is given in the matchingdb.json file at the root of the MatchingDB repository, using the JSON Schema language [180]. A MatchingDB JSON file must comply with this schema, which can be checked using any of the standard tools for this purpose, such as the jsonschema Python package [181]. Alternatively, MatchingDB data can be stored as an SQLite database, with a structure based on the JSON schema. The SQLite representation may provide faster access to the information in larger databases. Below, I describe the format informally, starting with the JSON representation. A diagram summarizing it is provided in Fig. 2. The root value of the data must be an object with 4 name/value pairs, with names "fields", "couplings", "constants" and "terms". The corresponding values should be arrays whose items are objects with the following structures:

field: represents a heavy field in the UV theory that has been integrated out. It contains the following key/value pairs:

- "name" (string): a name identifying the field.
- "real" (boolean): determines whether the field is real or complex.
- "representation" (string): the group theory representation. The format for this is free in principle, but intended to be self-consistent within each database.
- "latex" (string): math-mode LaTeX code representing the field.


coupling: represents a coupling present in the UV theory. Its name/value pairs are:

- "name" (string): a name identifying the coupling constant.
- "fields" (array of string): a sorted list of the heavy fields that appear in the interaction.
- "real" (boolean): determines whether the coupling constant is real or complex.
- "latex" (string): math-mode LaTeX code representing the coupling constant.
- "latex_interaction" (string): math-mode LaTeX code representing the full interaction term in the UV theory, including the coupling constant.


constant: represents a scalar constant such as \(\pi \), or a constant tensor such as the Kronecker delta, appearing in the matching results. Its name/value pairs are:

- "name" (string): a name identifying the constant.
- "value" (number, nested array of numbers, or object): the numerical value of the constant. If the constant is a scalar, it should be a number. If it is a tensor, it should be provided as a nested array of numbers. Finally, if the constant is complex, it should be given as an object of the form {"Re": ..., "Im": ...}, with the values having the real scalar or tensor type.
- "latex" (string): math-mode LaTeX code representing the constant.


term: represents a term appearing in the matching corrections to some WC in the EFT:

- "coefficient" (string): a name identifying the WC in which the term appears.
- "fields" (array of string): a sorted list of the heavy fields that contribute to the term.
- "factors" (array of *_factor): a list of the factors that appear in the term, as described below.
- "free_indices" (array of string): a list of the free indices of the term, which must coincide with those of the corresponding coefficient, and appear at least once in the "factors" list.

The "fields" array in both couplings and terms should be sorted in lexicographic order. This allows them to be compared efficiently against a set of heavy fields when querying the database.
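The comparison against a query set can be sketched as follows; the criterion names used here are hypothetical stand-ins for the options of fields_criterion (listed in Table 5, not reproduced in this text):

```python
def matches(term_fields, query_fields, criterion="subset"):
    """Compare a term's sorted "fields" list against a query.
    "equal": the term involves exactly the queried fields;
    "subset": the term involves no field outside the queried set."""
    t, q = set(term_fields), set(query_fields)
    return t == q if criterion == "equal" else t <= q

print(matches(["S1"], ["F1", "S1"]))           # True: S1 alone qualifies
print(matches(["F2", "S1"], ["F1", "S1"]))     # False: F2 is not in the set
print(matches(["S1"], ["F1", "S1"], "equal"))  # False: not an exact match
```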
The full analytical formulas for the matching corrections to the WCs in the EFT are stored in the "factors" property of the term objects. The matching correction to any WC \({\mathcal {C}}\) is a sum of terms \({\mathcal {T}}^{(N)}\), with each term being a product of factors \({\mathcal {F}}^{(N)}_A\): \({\mathcal {C}} = \sum _N {\mathcal {T}}^{(N)}\), with \({\mathcal {T}}^{(N)} = \prod _A {\mathcal {F}}^{(N)}_A\).
Each factor is assumed to take one of 7 possible forms. Every form has an associated JSON type, all of them being tuples, that is, inhomogeneous arrays with a fixed type for each of their items:

- \({\mathcal {F}} = b^n\) (numerical_factor): a number b to some power n. Represented as a tuple [b, n] of type [number, number].
- \({\mathcal {F}} = k^n_{ij\dots }\) (constant_factor): a constant k, to some power n, with some flavor indices i, j, ... Represented as a tuple \([k, n, [i, j, \ldots ]]\) of type [string, number, array of string].
- \({\mathcal {F}} = g^{n(*)}_{ij\dots }\) (coupling_factor): a coupling constant g, to some power n, possibly complex conjugated (\(c = \text {True}/\text {False}\)), with some flavor indices i, j, ... Represented as a tuple \([g, n, c, [i, j, \ldots ]]\) of type [string, number, boolean, array of string].
- \({\mathcal {F}} = M^n_{F,i}\) (mass_factor): the mass of a field F, to some power n, with a flavor index i. Represented as a tuple [F, i, n] of type [string, string, number].
- \({\mathcal {F}} = (M^2_{F,i} - M^2_{G,j})^n\) (mass_difference_factor): the difference between the squared masses of two fields, to some power n. Represented as a tuple [F, i, G, j, n] of type [string, string, string, string, number].
- \({\mathcal {F}} = \log (M^n_{F,i}/\mu ^n)\) (log_mass_factor): the log of the mass of a field F, to some power n, divided by the matching scale to the same power. Represented as a tuple [F, i, n] of type [string, string, number].
- \({\mathcal {F}} = \log (M^n_{F,i}/M^n_{G,j})\) (log_mass_difference_factor): the log of the ratio between the masses of two fields, each to some power n. Represented as a tuple [F, i, G, j, n] of type [string, string, string, string, number].
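To make the factor encoding concrete, the following sketch evaluates a term numerically from tuples of the forms above, restricted to scalar (flavor-blind) real data. The explicit kind tags are an assumption of this sketch; in the actual format the seven factor types are distinguished structurally within the "factors" array:

```python
import math

def eval_factor(kind, data, params):
    # Evaluate one factor; `params` maps names to real numbers.
    if kind == "numerical_factor":       # [b, n] -> b**n
        b, n = data
        return b ** n
    if kind == "coupling_factor":        # [g, n, c, indices] -> g**n
        g, n, conj, indices = data
        return params[g] ** n            # real couplings: conjugation is a no-op
    if kind == "mass_factor":            # [F, i, n] -> M_F**n
        f, i, n = data
        return params["M_" + f] ** n
    if kind == "log_mass_factor":        # [F, i, n] -> log(M_F**n / mu**n)
        f, i, n = data
        return math.log(params["M_" + f] ** n / params["mu"] ** n)
    raise ValueError("unsupported factor kind: " + kind)

def eval_term(factors, params):
    out = 1.0
    for kind, data in factors:
        out *= eval_factor(kind, data, params)
    return out

# A typical tree-level structure, -y^2 / M^2:
term = [("numerical_factor", [-1.0, 1]),
        ("coupling_factor", ["y", 2, False, []]),
        ("mass_factor", ["S1", "a", -2])]
value = eval_term(term, {"y": 0.5, "M_S1": 2.0})
print(value)  # -0.0625
```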
This completes the specification of the MatchingDB format in JSON form.
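As a rough illustration of what compliance means at the top level, the stdlib-only check below verifies the root structure of a MatchingDB JSON document; the authoritative check is, of course, validation against the matchingdb.json schema, for instance with the jsonschema package:

```python
import json

REQUIRED_ARRAYS = ("fields", "couplings", "constants", "terms")

def has_matchingdb_root(text):
    # The root value must be an object containing the four required arrays.
    data = json.loads(text)
    return (isinstance(data, dict)
            and all(isinstance(data.get(key), list) for key in REQUIRED_ARRAYS))

good = json.dumps({key: [] for key in REQUIRED_ARRAYS})
bad = json.dumps({"fields": []})
print(has_matchingdb_root(good), has_matchingdb_root(bad))  # True False
```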
MatchingDB data can also be stored as an SQLite database. The SQLite representation consists of 4 tables, named fields, couplings, constants and terms. Their columns take their names from the keys of the associated objects. Every object is stored as a row, with each of its values encoded as a string containing the corresponding JSON code.
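The SQLite layout can be sketched with the stdlib sqlite3 module; the field values below are illustrative placeholders, not entries of any actual dictionary:

```python
import json
import sqlite3

# One table per top-level array; every object becomes a row, with each
# of its values stored as a string of JSON code.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE fields '
            '(name TEXT, "real" TEXT, representation TEXT, latex TEXT)')

field = {"name": "S1", "real": False,
         "representation": "singlet", "latex": r"\mathcal{S}_1"}
con.execute("INSERT INTO fields VALUES (?, ?, ?, ?)",
            tuple(json.dumps(field[key])
                  for key in ("name", "real", "representation", "latex")))

name_json, real_json = con.execute(
    'SELECT name, "real" FROM fields').fetchone()
print(json.loads(name_json), json.loads(real_json))  # S1 False
```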
3.8.2 Python interface
The matchingdb Python package is provided under the python directory of the MatchingDB repository. It can be installed by cloning the repository, moving into the python directory and running
The package exposes two classes: JsonDB and SQLiteDB, for creating and querying MatchingDB dictionaries, in the JSON and the SQLite representations, respectively. Both classes have the same methods, with the same arguments and the same behaviour. An existing database can be loaded as:
A new one can be created through:
where my_data is the data to be included, as a JSON value that validates against the MatchingDB schema, represented as a Python object through the mapping displayed in Table 3.
New items can be inserted into a database through:
where table is one of ”fields”, ”couplings”, ”constants”, or ”terms”, and item is a dict complying with the corresponding subschema, which can be found at $defs/<table>/items in the full schema. Any changes made to a database must be saved with
in order for them to persist.
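The code snippets referenced in this subsection did not survive into this text. As a stand-in, the sketch below mimics the documented workflow (load, create, insert, save) with a minimal class of the same shape; the constructor signature of the real JsonDB in the matchingdb package may differ:

```python
import json
import os
import tempfile

class JsonDBSketch:
    """Minimal stand-in for matchingdb's JsonDB: load or create a
    database, insert items, and save changes so that they persist."""
    def __init__(self, path, data=None):
        self.path = path
        if data is None:                     # load an existing database
            with open(path) as f:
                self.data = json.load(f)
        else:                                # create a new one
            self.data = data

    def insert(self, table, item):
        # `table` is one of "fields", "couplings", "constants", "terms".
        self.data[table].append(item)

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f)

path = os.path.join(tempfile.gettempdir(), "demo_matchingdb.json")
db = JsonDBSketch(path, data={"fields": [], "couplings": [],
                              "constants": [], "terms": []})
db.insert("fields", {"name": "S1", "real": False,
                     "representation": "singlet", "latex": r"\mathcal{S}_1"})
db.save()

reloaded = JsonDBSketch(path)
print(len(reloaded.data["fields"]))  # 1
```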
There are 4 methods to query a database:

- select_fields()
- select_couplings()
- select_constants()
- select_terms()
They filter the items of each of the corresponding arrays according to certain conditions, and prepare the selected items in the desired output form. A summary of their arguments is provided in Table 4. All of them are optional. If provided, the name, fields and coefficients arguments select only those items for which the corresponding property coincides with the given one. fields_criterion further configures the behaviour of the fields argument, following Table 5.
The output_format argument must be one of the following strings:

- "raw" (default): the method returns a list of the selected items, represented as Python values, following Table 3.
- "pandas": the method returns a Pandas [182] dataframe with a simplified version of the output. This provides an easy way to visually explore the data.
- "latex": returns math-mode LaTeX code representing the output. select_couplings() returns a single string with the formula for the selected sector of the UV Lagrangian. select_terms() returns a dict with coefficient names as keys and strings with their selected terms as values. The other 2 methods do not accept this option.
- "numeric": available in select_terms() only. Returns a function for the numerical evaluation of WCs. The additional parameters argument of select_terms() must be set to an iterable containing the names of all the parameters that will be set to non-vanishing values. This allows the output function to be prepared once and then evaluated efficiently many times.
The function returned by select_terms() when output_format="numeric" takes two arguments:

- parameters: dict. A dict whose keys are the UV parameters (couplings, masses and the matching scale), and whose values are NumPy [183] arrays with one axis for each of the indices of the corresponding parameter. Masses are named "M_<field>", where "<field>" is the name of the field. The matching scale is "mu". The values of constants in the "constants" array of the database are included automatically.
- expand_flavor: bool (optional). If False (default), the output of the function is a dictionary with WC names as keys and NumPy arrays as values, with one axis per EFT flavor index of the coefficient. If True, flavor indices are expanded, and the output becomes a dictionary with keys of the form "<coeff>_<flavor_indices>", and values that are either floats (if real) or dictionaries with keys "Re" and "Im" and float values (if complex). If the names given to the coefficients in the database follow the conventions of WCxf, the output will be compatible with the values section of a WCxf WC file.

3.8.3 Example
The python/examples directory contains examples showcasing several features of the matchingdb package. Here, I will present a brief example of how to extract different types of information from the tree-level dimension-six SMEFT matching dictionary [137] (given in MatchingDB format at dictionaries/smeft_dim6_tree.json). To load this dictionary, one can do:
One can then get a summary view of all terms that appear in the WC for the \({\mathcal {O}}_{ll}\) Warsaw operator through:
From this table, one can see which UV fields and couplings generate this operator at tree level. Information on these fields can be obtained as:
All the matching corrections to any WC induced by the \({\mathcal {S}}_1\) field can be found via:
This implies that \({\mathcal {S}}_1\) only contributes to \({\mathcal {O}}_{ll}\). The formula for this contribution in LaTeX code can be obtained through:
which renders as: \(+ \frac{\left( y_{{\mathcal {S}}_1}\right) _{ajl}^{*} \left( y_{{\mathcal {S}}_1}\right) _{aik}}{ M_{{\mathcal {S}}_1,a}^{2}}\).
This formula can also be numerically evaluated given the values of the UV parameters: the coupling \(y_{{\mathcal {S}}_1}\) and the mass \(M_{{\mathcal {S}}_1}\). This is done as:
The final output here has the format of the values field of the WCxf format, and is thus suitable for interfacing with numerical tools for running and the calculation of observables. The evaluator() function is optimized for multiple evaluations, with the corresponding database lookup being performed once, in the select_terms() call.
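The numerical step can also be sketched by evaluating the rendered formula above directly with NumPy. The input values here are illustrative placeholders, and the index layout assumed for the coupling array is \((y_{{\mathcal {S}}_1})_{aij}\) with a single heavy flavor a:

```python
import numpy as np

# (C_ll)_{ijkl} = + (y_S1)*_{ajl} (y_S1)_{aik} / M_{S1,a}^2,
# evaluated for one heavy flavor and two light flavors.
y = np.array([[[0.1, 0.2],
               [0.0, 0.3]]])          # (y_S1)_{a i j}, shape (1, 2, 2)
M = np.array([1000.0])                # M_{S1,a}

C_ll = np.einsum('ajl,aik,a->ijkl', y.conj(), y, M ** -2.0)
print(C_ll.shape)        # (2, 2, 2, 2)
print(C_ll[0, 0, 0, 0])  # 0.1**2 / 1000**2 = 1e-08
```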
3.9 RG equations in generic EFTs
Mikołaj Misiak and Ignacy Nałȩcz
The SMEFT RG equations at one loop were determined in Refs. [8,9,10], and the RG equations for the LEFT WCs have been determined in the past, depending on phenomenological needs, sometimes up to the four-loop level [29]. However, the two-loop SMEFT RG equations remain unknown. Instead of deriving the RG equations separately in various Effective Field Theories (EFTs), one can consider a generic case, as has been done for renormalizable models (see below). Particular results are then found by substitution. Our goal in the current (ongoing) project is to evaluate one-loop RG equations for all the dimension-six operators in the generic case.
3.9.1 Operator classification
We shall consider EFTs of the LEFT and SMEFT type, where the gauge group is an arbitrary finite product of finite-dimensional Lie groups. Real scalars \(\phi _a\) and left-handed spin-\(\frac{1}{2}\) fermions \(\psi _k\) are going to be the matter fields. Obviously, any complex scalar can always be written in terms of two real ones, while right-handed spin-\(\frac{1}{2}\) fermions can always be described as charge-conjugated left-handed ones.
To simplify our calculation in its initial steps, we assume a discrete symmetry \(\{\phi \rightarrow -\phi , ~\psi \rightarrow i\psi \}\). It turns out to forbid all odd-dimensional operators. However, it gives no restriction on even-dimensional ones once they have already been required to be Lorentz-invariant. In more generic EFTs, where no such discrete symmetry is imposed, RG equations for odd-dimensional operators can be obtained from the even-dimensional case by treating one of the scalar fields as an auxiliary gauge singlet that takes a fixed vacuum expectation value.
The generic EFT Lagrangian we are going to consider reads
where \(Q_N\) stand for linear combinations of dimension-six operators multiplied by their WCs.
Let us absorb the gauge couplings into the structure constants and generators. Then \(F^A_{\mu \nu } = \partial _\mu V^A_\nu - \partial _\nu V^A_\mu - f^{ABC} V^B_\mu V^C_\nu \), \((D_\rho F_{\mu \nu })^A = \partial _\rho F^A_{\mu \nu } - f^{ABC} V_\rho ^B F^C_{\mu \nu }\), \((D_\mu \phi )_a = \left( \delta _{ab}\partial _\mu + i\theta ^A_{ab} V^A_\mu \right) \phi _b\), and \((D_\mu \psi )_j = \left( \delta _{jk}\partial _\mu + i t^A_{jk} V^A_\mu \right) \psi _k\). RG equations for the couplings of the dimension-four interactions in Eq. (2.23) were calculated up to two loops in a series of papers by Machacek and Vaughn almost 40 years ago [169,170,171]. Some corrections to their results were found more recently in Refs. [184, 185]. Even at the one-loop level, it is only the latter paper [185] that we fully agree with. Generic RG equations for the gauge and Yukawa couplings at the four- and three-loop levels, respectively, were recently determined in Refs. [186, 187] by combining information on results in various specific models. Earlier three-loop results for the gauge coupling beta functions can be found in Refs. [188, 189].
We perform our one-loop calculation off shell, using the background-field gauge method. Therefore, we need to begin with classifying all the dimension-six operators in the off-shell basis. Such operators are gauge invariant, but many linear combinations of them vanish by the Equations of Motion (EOM). Once the RG equations in the off-shell basis are found, one needs to pass to the on-shell basis, where no linear combination of operators vanishes by the EOM. Only in the latter case are the RG equations gauge-parameter independent.
The off-shell basis we use consists of the following 22 terms^{Footnote 12}
where \(W^{(N)}\) contain both the WCs and the necessary Clebsch-Gordan coefficients that select singlets from various tensor products of the gauge group representations. In general, each \(W^{(N)}\) contains many independent WCs, and many gauge-singlet operators are present in each \(Q_N\).
After applying the EOM, we find an on-shell basis that consists of only 11 operators. They are conveniently chosen as \(\{Q_1,Q_2,Q_5,Q_6,Q_8,Q_9,Q_{10},Q_{11},Q_{17},Q_{18},Q_{19}\}\). There is a subtlety for \(Q_2\), whose W-coefficient has more symmetries in the on-shell basis, namely \(W^{(2)}_{abcd} = W^{(2)}_{cdab}\) and \(W^{(2)}_{(abcd)} = 0\), apart from just \(W^{(2)}_{abcd} = W^{(2)}_{(ab)(cd)}\) in the off-shell case.
3.9.2 Sample off-shell results
As a sample off-shell result, let us quote the RG equation we have obtained for \(W^{(1)}\) in the Feynman-'t Hooft gauge. Terms that are due to the presence of fermions \(\psi \) are going to be denoted by \((\ldots )_\psi \) in what follows. The RG equation reads
where
The sums go over such permutations of uncontracted indices that make each \(X^{(N)}_{abcdef}\) totally symmetric. The scalar field anomalous dimensions in \(X^{(2)}\) are given by
3.9.3 Automatic computations
Our calculation begins with generating the Feynman rules from the Lagrangian (2.23) with the help of FeynRules [155]. Next, FeynArts [190] is used to construct expressions for all the necessary one-loop diagrams. Calculation of their divergent parts is very simple, and is most efficiently achieved with the help of a self-written code. Simplification of the evaluated results requires applying various identities that stem from gauge invariance and/or the EOM (see the next section). For this purpose, the code xTensor [191] is very helpful, as it allows us to impose all the relevant symmetries of the considered tensors in a straightforward manner. However, full automation of the necessary simplifications has not yet been achieved, which is the main reason why our project is still quite far from completion. New ideas are currently being tested.
3.9.4 Simplification methods
Gauge invariance of the theory imposes some identities on the couplings and Wcoefficients. To derive such an identity for the Yukawa couplings, one considers an infinitesimal gauge transformation
Since the Yukawa term is gauge invariant,
it follows that
A generic, purely fermionic operator can be written as
where \(\omega \) that contracts spinor indices is either the identity or the \(\sigma _{\mu \nu }\) matrix. Some of the spinor fields may be replaced by their covariant derivatives of arbitrary degree. For such operators, the quantity that must vanish due to gauge invariance reads
Analogously, for the W-coefficients of operators with bosonic fields only, the quantity that must vanish reads
Both types of terms arise on the r.h.s. for operators that involve both fermionic and bosonic fields.
Once the RG equations in the off-shell basis are found, we should pass to the on-shell ones by using the EOM. Let us illustrate this using the W-coefficient of \(Q_5\). We start from the observation that \(Q_7\) is reducible by the gauge-field EOM
An operator \(\widetilde{Q_7}\) that vanishes on-shell is obtained by a simple redefinition
with
Next, \(Q_4^\prime \) and \(Q_5^\prime \) are absorbed into \(Q_4\) and \(Q_5\):
To get an on-shell expression for the W-coefficient of \(Q_5\), another redefinition is necessary:
with
It yields
Finally, applying \(\mu \frac{d}{d\mu }\) to both sides of the above equation, one obtains the on-shell RG equation for \({\widetilde{W}}^{(5)}\):
where
and \(C_2(G_{\underline{B}}) \delta ^{\underline{B}C} = f^{BDE} f^{CDE}\).
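The adjoint-Casimir relation above is easy to verify numerically for a concrete group. A small sketch (our illustration, using SU(2), where \(f^{ABC}=\varepsilon ^{ABC}\) and \(C_2(G)=N=2\)):

```python
# Numerical check of C_2(G) * delta^{BC} = f^{BDE} f^{CDE} for SU(2),
# where f^{ABC} is the Levi-Civita symbol and C_2(SU(2)) = 2.
# Our illustration; the group is chosen for the example only.
def eps(a, b, c):
    # Levi-Civita symbol on indices 0, 1, 2
    if (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1.0
    if (a, b, c) in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        return -1.0
    return 0.0

def casimir_matrix(f, dim):
    # (C_2)^{BC} = f^{BDE} f^{CDE}, summed over D and E
    return [[sum(f(B, D, E) * f(C, D, E)
                 for D in range(dim) for E in range(dim))
             for C in range(dim)] for B in range(dim)]

M = casimir_matrix(eps, 3)  # expected: 2 times the identity matrix
```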
3.9.5 Sample on-shell results
Three out of the six on-shell-irreducible bosonic operators, namely \(Q_6\), \(Q_8\), and \(Q_9\), transform trivially to the on-shell basis. The corresponding RG equations that we find in their case take the form
where
The RG equations for \(Q_8\) and \(Q_9\) in Eq. (2.43) agree with those in Refs. [192, 193]. As far as \(Q_6\) in the generic case is concerned, we are not aware of any published one-loop RG equation so far. However, we have checked that in the SMEFT case we reproduce the RG equations found in Refs. [8,9,10]. Such a comparison tests the sum \(2Z^{(5)}+Z^{(6)}+Z^{(7)}\) in Eq. (2.43).
3.9.6 Summary
Our goal is to evaluate one-loop RG equations for dimension-six operators in a generic class of EFTs. In the absence of fermions, the calculation has been completed [194] in the off-shell basis, with partial reduction to the on-shell one. As far as the operators with fermions are concerned, only partial off-shell results have been obtained so far [195]. The main issue that remains to be resolved is the automation of the tensor-expression simplifications that must be performed after evaluating the necessary Feynman diagrams. Once our project is completed, one-loop RG equations for many practically relevant specific EFTs will be obtainable via straightforward substitutions.
3.10 Two-loop Renormalization for \(\chi \)QED in the BMHV Scheme
Hermès Bélusca-Maïto, Amon Ilakovac, Marija Mador-Božinović, Paul Kühler, Dominik Stöckinger, and Matthias Weißwange
Dimensional regularization (DReg) is an indispensable tool for practical calculations at the (multi-)loop level. Its popularity is due not least to its manifest preservation, to all loop orders, of the symmetries of the classical action in vector-like theories, which greatly aids both renormalizability proofs and the practical determination of counterterms [74]. Many powerful theorems, such as the quantum action principle, can be rigorously derived in this framework. However, it is known from experiment that the world is described by chiral gauge theories, such as the electroweak sector of the SM. For such theories no invariant regulator is known, and dimensional schemes like DReg clash with their chiral nature. Technically, this is reflected in the definition of \(\gamma _5\) (or, equivalently, \(\varepsilon ^{\mu \nu \rho \sigma }\)) in DReg, where inconsistencies arise from relying on the simultaneous validity of the customary 4-dimensional relations in the dimensionally regularized setting.
Therefore, one cannot literally apply the familiar relations involving \(\gamma _5\) but must define an appropriate scheme. In the following we summarize the main results presented in Refs. [76, 77, 83]. We adopt the Breitenlohner–Maison–'t Hooft–Veltman (BMHV) scheme [69, 70], which splits Lorentz covariants into a 4-dimensional (barred) part and a \(2\epsilon \)-dimensional evanescent (hatted) part,
Inconsistencies are avoided by giving up the anticommutativity of \(\gamma _5\),
Its distinguishing feature is its consistency, and hence its reliability at the multi-loop level, but it introduces spurious symmetry breakings. There are a number of alternative schemes (cf. [74], and references in [76, 77]), like the naive scheme [196] or reading-point prescriptions [197, 198], which are computationally simpler, but their consistency at higher loop orders is generally not ensured.
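For concreteness, the scheme-defining relations can be summarized as follows (a standard presentation of the BMHV algebra; the notational details of Refs. [76, 77] may differ):

```latex
g^{\mu\nu} = \bar g^{\mu\nu} + \hat g^{\mu\nu}, \qquad
\gamma^{\mu} = \bar\gamma^{\mu} + \hat\gamma^{\mu}, \qquad
\{\gamma_5, \bar\gamma^{\mu}\} = 0, \qquad
[\gamma_5, \hat\gamma^{\mu}] = 0 .
```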
At the level of the full quantum theory, expressed in terms of the 1-particle-irreducible (1PI) quantum effective action \(\Gamma \), the Becchi–Rouet–Stora–Tyutin (BRST) symmetry is formulated by the Slavnov–Taylor identity,
with generic quantum fields \(\phi \) and corresponding BRST sources \(K_\phi \). Its validity to all orders is an essential ingredient in ensuring the unitarity and physicality of the S-matrix. Hence we require that our full quantum theory obeys the Slavnov–Taylor identity. It turns out that any violation of BRST symmetry, and hence of Eq. (2.47), can be related to the insertion of a local operator into the effective action by the regularized quantum action principle [70, 199, 200],
Evaluating the r.h.s. of Eq. (2.48) determines the finite symmetry restoring counterterms without the need to compute products of Green’s functions including higherorder terms from the l.h.s. of Eq. (2.47).
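Schematically, the Slavnov–Taylor identity and the quantum-action-principle relation referred to above take the form (our sketch in a common convention; the precise operator definitions are those of Refs. [76, 77]):

```latex
\mathcal{S}(\Gamma) \equiv \int \mathrm{d}^d x \sum_{\phi}
  \frac{\delta \Gamma}{\delta K_\phi}\,\frac{\delta \Gamma}{\delta \phi} = 0,
\qquad
\mathcal{S}(\Gamma_{\mathrm{DReg}}) = \widehat{\Delta} \cdot \Gamma_{\mathrm{DReg}} .
```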
We, therefore, aim for the systematic determination of the full counterterm structure, comprising the non-symmetric, singular counterterms needed for consistency at higher orders and the finite, symmetry-restoring counterterms, for various toy models of increasing complexity and loop orders and, eventually, the SM. So far we have studied the scheme at the one-loop level for a generic Yang-Mills theory [76] (see also [78, 201] for related works) and at the two-loop level for an Abelian model [77]. The latter will serve to illustrate our methods in this article. For an extensive review of chiral gauge theories and renormalization, see [83].
3.10.1 Application to \(\chi \)QED
We consider an Abelian gauge theory with a family of \(N_f\) right-handed fermions [77], which we denote as \(\chi \)QED. The d-dimensional treatment affects only the fermionic sector non-trivially: the kinetic term is kept d-dimensional, while the interaction term is purely 4-dimensional:
where \({\mathcal {Y}}_{Rij}=(\textrm{diag}({\mathcal {Y}}_R^{1},\dots ,{\mathcal {Y}}_R^{N_f}))_{ij}\) is the hypercharge matrix of the Abelian model. The full regularized action at tree level becomes
with fields \(\phi \in \{A^{\mu },\overline{\psi }^i,\psi ^i\}\) and sources \(K_\phi \in \{\rho ^{\mu },R^i,{\overline{R}}^i\}\). We can see that the symmetry is violated for the regularized action at tree level, giving rise to the following breaking vertex:
The standard UV renormalization of the model at one-loop order leads to the singular counterterm action,
where the unspecified terms correspond to 4-dimensional multiplicative renormalization. The last term, evanescent and non-gauge-invariant, is necessary for canceling the divergence in (). At one-loop order the symmetry restoration is rather straightforward, with Eq. (2.48) boiling down to^{Footnote 13}
These are the only^{Footnote 14} (divergent) contributing diagrams. The finite part of () leads to the finite, non-invariant BRST-restoring counterterm action \(S^{(1)}_\text {fct}\), whose structure [76] is the same at two loops and will be highlighted below.
At order \(\hbar ^{>1}\) (using \( \hbar \) as the loop-counting parameter), Eq. (2.48) implies
and more explicitly at two-loop order:
The singular counterterms have the same structure as at one loop (including the evanescent \(\int {\text {d}}^{d}{x}\; \frac{1}{2} {\bar{A}}_\mu {\widehat{\partial }}^2 {\bar{A}}^\mu \)), except for a novel non-gauge-invariant, 4-dimensional piece,
There exist additional divergent one-loop diagrams containing one insertion of a finite counterterm \(\overline{S_\text {fct}^{(1)}}\) (similar to the diagrams of Figs. 4 and 5), whose divergent parts define new counterterms, \(S_\text {sct}^{(2,\,1)}\):
They possess a genuine one-loop structure, even though they are of order \(\hbar ^2\).
For the r.h.s. of Eq. (2.48) three structures arise: the proper two-loop diagrams with one insertion of the tree-level \({\widehat{\Delta }}\)-vertex (first diagram in Fig. 3), a new insertion of BRST-transformed non-invariant one-loop counterterms into one-loop diagrams (last diagram in the first line of Fig. 3), and the last term in Eqs. (2.55a) and (2.55b), which determines the finite counterterms and provides a consistency check for the divergent ones, respectively. The complete order-\(\hbar ^2\) finite counterterms are, in Feynman gauge \(\xi = 1\),
Remarkably, they have the same compact structure as at one loop and directly correspond to the restoration of three well-known QED Ward identities, i.e., the transversality of the photon two- and four-point functions, as well as the relation between the electron self-energy and the electron-photon interaction. Indeed, we can explicitly check the restoration of the symmetry by evaluating the relevant identities and confirming that only after adding those counterterms are they satisfied in our model [77].
3.10.2 RG equation in dimensional renormalization
The RG equation [202, 203] describes the invariance of bare correlation (Green's) functions under a change of the arbitrary renormalization scale: in DReg it is the "unit of mass"^{Footnote 15} \(\mu \) [204, 205] that is associated with each loop of any diagram: \(\mu ^\epsilon \int {\text {d}}^{d}{x}\; \). For example, the 1PI quantum effective action \(\Gamma \) depends on \(\mu \) both explicitly and implicitly via the \(\mu \)-dependence of the field renormalizations \(Z_\phi ^{1/2}\) and the renormalized parameters: \(\Gamma [\{\phi (\mu )\}; e(\mu ), \xi (\mu ), \mu ]\). Its invariance under a total \(\mu \)-variation is represented by the RG equation (summation over the fields \(\phi \in {\chi } \text {QED}\) is implied^{Footnote 16}):
In Eq. (2.59a), \(\mu \partial _\mu \) is the RG differential operator. The \(N_\phi \) are field-numbering ("leg-counting") differential operators, defined by \(N_\phi \equiv \int {\text {d}}^{d}{x}\; \phi (x) {\delta }/{\delta \phi (x)}\) for bosonic fields and ghosts, and, for right-handed (and left-handed anti-) fermions, \(\phi := {{\mathbb {P}}_\text {R}} \psi \), \(\phi := \overline{\psi } {{\mathbb {P}}_\text {L}}\). The coefficient functions \(\beta _{e,\xi }\) are the beta functions for the coupling constant e and the gauge parameter \(\xi \), and the \(\gamma _\phi \) are the anomalous dimensions for the fields \(\phi \), defined by
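In a standard convention, the RG equation and the coefficient functions just described take the schematic form (our transcription; signs and normalizations follow common usage and need not match Eq. (2.59a) verbatim):

```latex
\Big[\, \mu\,\partial_\mu + \beta_e\, e\,\partial_e + \beta_\xi\, \xi\,\partial_\xi
  - \sum_\phi \gamma_\phi\, N_\phi \Big]\, \Gamma = 0, \qquad
\beta_e = \frac{\mu}{e}\,\frac{\mathrm{d}e}{\mathrm{d}\mu}, \qquad
\gamma_\phi = \mu\,\frac{\mathrm{d}\ln Z_\phi^{1/2}}{\mathrm{d}\mu} .
```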
“Modified” multiplicative renormalization (MultRen)
The standard renormalization transformation [169,170,171, 204] consists in renormalizing the fields multiplicatively, while the couplings are usually renormalized additively:
(BRST sources renormalize inversely to their corresponding dynamical fields.) Beta functions and anomalous dimensions can be read off from the \(1/\epsilon \) poles of the renormalization constants \(\delta e\) and \(Z_\phi \).
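Schematically, for the \(\chi \)QED field content, this transformation can be written as (our sketch in a common convention, with the gauge parameter renormalized by the photon-field constant as usual in Abelian theories):

```latex
A^\mu_{\mathrm{bare}} = Z_A^{1/2}\, A^\mu, \qquad
\psi^i_{\mathrm{bare}} = Z_{\psi_i}^{1/2}\, \psi^i, \qquad
e_{\mathrm{bare}} = \mu^{\epsilon}\,(e + \delta e), \qquad
\xi_{\mathrm{bare}} = Z_A\, \xi .
```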
The situation becomes more involved when new evanescent singular and finite symmetry-restoring counterterms are generated during renormalization. One way to proceed is to extend [206, 207] (see also Section 8 in [76]) the original tree-level action \(S_0\) with those newly generated operators, associated with new auxiliary couplings \(\rho _{\mathcal {O}}:= \sigma _i, \rho _i\). A new tree-level action \(S_0^*\) is thus defined, which, in the case of \(\chi \)QED, can take the following form^{Footnote 17}
The coefficients \(\delta \text {fct}_\psi \) and \(\delta \text {fct}_A\) arise from the finite BRST-restoring counterterms \(S_\text {fct}\). The resulting modified effective action \(\Gamma ^*_\text {DReg}[\phi , \rho _{\mathcal {O}}]\) and counterterms can be expanded in \(\rho _{\mathcal {O}}\), whose lowest-order terms correspond to the quantities evaluated in the original theory.
One obtains an RG equation for \(\Gamma ^*_\text {DReg}[e, \xi , \{\sigma _i\}, \{\rho _i\}]\), with beta functions \({\widetilde{\beta }}\) for \(e, \xi \) and the auxiliary couplings \(\sigma _i, \rho _i\), and anomalous dimensions \(\widetilde{\gamma _\phi }\):
The genuine renormalized theory generated by the original \(S_0\) can be recovered in the limit \(\sigma _i, \rho _i \rightarrow 0\), since \(\sigma _i, \rho _i\) are unphysical and are absent in \(S_0\). The true \(\beta \) and \(\gamma \) functions for \(\Gamma \) will depend on \({\widetilde{\beta }}\) and \(\widetilde{\gamma _\phi }\) and are obtained for the 4dimensional renormalized effective action \(\Gamma \), defined by
where (i) divergences are MS-subtracted from \(\Gamma \) with suitable singular counterterms, (ii) the limit \(d \rightarrow 4\) is taken, and (iii) the remaining finite evanescent quantities are set to zero. The corresponding RG equation, obtained from Eq. (2.62) when both sides are taken in those same limits, has the final structure:
The procedure [206, 207] then consists in evaluating the effects of the evanescent and non-symmetric operators, which mix into the non-evanescent ones, via the following terms in the limit \(\sigma _i, \rho _i \rightarrow 0\) and the renormalized limit \(d \rightarrow 4\):
They correspond, from the Regularized Action Principle [70, 208, 209], to diagrammatic vertex insertions of their associated operators: \({\partial \Gamma ^*_\text {DReg}}/{\partial \rho _{\mathcal {O}}} = \left( {\mathcal {O}} + {\partial S_\text {ct}^*}/{\partial \rho _{\mathcal {O}}} \right) \cdot \Gamma _\text {DReg}\), where \({\partial S_\text {ct}^*}/{\partial \rho _{\mathcal {O}}}\) removes the divergences from \({\mathcal {O}} \cdot \Gamma _\text {DReg}\). Finally, they are recast as new contributions to \(\beta _e e \partial _e \Gamma \), \(\beta _\xi \xi \partial _\xi \Gamma \) and \(\gamma _\phi N_\phi \Gamma \), from which shifts to the \(\beta _e\) and \(\gamma _\phi \) are obtained.
RG equation in Algebraic Renormalization (AlgRen) The second, more streamlined method for obtaining the RG equation is that of the "Algebraic Renormalization" framework [201, 210]. It is based on the properties of the theory and of the RG evolution with respect to BRST symmetry (see, e.g., Section 7 of [76]). It applies at the level of the BRST-restored, 4-dimensional renormalized effective action \(\Gamma \).
After symmetry restoration, \(\Gamma \) is BRST invariant, and the RG operator inherits its symmetries: (i) the RG evolution is BRST invariant, (ii) it satisfies the gauge-fixing condition, and (iii) it satisfies the ghost equation. The RG equation for \(\Gamma \) is thus an expansion in a basis of 4-dimensional operators with ghost number \(= 0\) satisfying these same constraints (\(\phi = A, \psi , c\)):
The \({\mathcal {N}}_\phi \) are BRST-invariant field-counting operators [76, 211] that are linear combinations of the basic \(N_\phi \) operators previously introduced:
The Quantum Action Principle (QAP) [70, 199, 200, 208,209,210, 212,213,214] asserts that the variations of \(\Gamma \) (terms \({\mathfrak {W}}\)) with respect to the parameters and fields naturally present in \(S_0\) are equivalent to a renormalized insertion of local d-dimensional operators in \(\Gamma \), derived from the finite dimensionally regularized action \(S_0 + S_\text {fct}\),
Because \(\mu \) is not a parameter of \(S_0\), but a modification of the loop integration, the QAP does not directly apply to \(\mu \partial _\mu \Gamma \) itself (term \({\mathfrak {R}}\)). Nonetheless, it can also be expressed as a renormalized insertion (Bonneau [205]):
In this equation, \(\text {r.s.p.}\, \Gamma _{\text {DReg}}^{N_l\text {-loop}}\) designates the residue of the simple \(1/(4-d)\) pole of the \(N_l\)-loop 1PI diagrams, built from the Feynman rules derived from the action \(S_0 + S_\text {fct}\) and subrenormalized using the lower-order singular counterterms \(S_\text {sct}\).
The procedure then consists in re-expressing Eq. (2.66) using Eqs. (2.67) and (2.68), and rewriting all operator insertions in a basis of independent 4-dimensional ones. Note that the inserted evanescent operators \(\widehat{{\mathcal {M}}_j}\) appearing there are not linearly independent quantities and need to be expanded into independent insertions of 4-dimensional operators \(\overline{{\mathcal {M}}_i}\) (Bonneau identities [205, 215]): \(N[\widehat{{\mathcal {M}}_j}] \cdot \Gamma = {\textstyle \sum _i} c_{ji} N[\overline{{\mathcal {M}}_i}] \cdot \Gamma \), with \(c_{ji} \sim {\mathcal {O}}(\hbar )\). Grouping all terms together, one obtains the final form of the RG equation, as a system of equations for the \(\beta _e\) and \(\gamma _\phi \) functions that ensures their self-consistency:
3.10.3 Results for the two-loop RG evolution
Mainly focusing on the AlgRen method, we now describe how all the \(\hbar ^2\)-order terms entering Eq. (2.69) can be explicitly evaluated in order to determine the two-loop \(\beta _e^{(2)}\) and \(\gamma _\phi ^{(2)}\) functions. For all details we refer to [211].
The left-hand side of Eq. (2.69) can be grouped into four different terms, \({\mathfrak {R}} = \mathfrak {R_1} + \mathfrak {R_2} + \mathfrak {R_3} + \mathfrak {R_4}\):

– \(\mathfrak {R_1}\) corresponds to insertions of one-loop 4-dimensional singular counterterms into one-loop diagrams, \(\mathfrak {R_1} = N[\text {r.s.p.}\, \overline{S_\text {sct}^{(1)}}] \cdot \Gamma ^{(1)}\).

– \(\mathfrak {R_2}\) corresponds to insertions of one-loop evanescent singular counterterms into one-loop diagrams, Figs. 4 and 5: \(\mathfrak {R_2} = N[\text {r.s.p.}\, \widehat{S_\text {sct}^{(1)}}] \cdot \Gamma ^{(1)}\).

– \(\mathfrak {R_3}\) corresponds to two-loop singular counterterms, obtained from the \(1/(4-d)\) pole of one-loop diagrams involving insertions of one-loop finite counterterms (see the discussion around Eq. (2.57)), \(\mathfrak {R_3} = -\text {r.s.p.}\, \overline{ S_\text {sct}^{(2,\,1)} }\). Those diagrams are similar to those of Figs. 4 and 5, but with \(\widehat{{\mathcal {O}}} \rightarrow \overline{S_\text {fct}^{(1)}}\).

– \(\mathfrak {R_4}\) corresponds to the genuine two-loop \(\hbar ^2\) singular counterterms (see the discussion around Eq. (2.56)). Note that, in the language of Eq. (2.68), only these \(\mathfrak {R_4}\) terms receive a factor \(N_l = 2\), whereas all other contributions \(\mathfrak {R_{1,2,3}}\) receive a factor \(N_l = 1\). This subtlety is not present in the case of manifest symmetry preservation.
Similarly, the right-hand side of Eq. (2.69) can be grouped into four terms, \({\mathfrak {W}} = \mathfrak {W_1} + \mathfrak {W_2} + \mathfrak {W_3} + \mathfrak {W_4}\):

– \(\mathfrak {W_1}\) corresponds to contributions from the one-loop RG coefficients combined with the insertions of the respective differential operators, i.e. \(\mathfrak {W_1} = \beta _e^{(1)} N[e \partial _e \overline{S_0}] \cdot \Gamma ^{(1)} + \gamma _\phi ^{(1)} N[{\mathcal {N}}_\phi \overline{S_0}] \cdot \Gamma ^{(1)}\). Note that there is an automatic agreement \(\mathfrak {R_1} = \mathfrak {W_1}\), in accord with the one-loop RG coefficients.

– \(\mathfrak {W_2}\) corresponds to the contributions from one-loop RG coefficients combined with insertions of tree-level evanescent operators, \(\mathfrak {W_2} = 2 \gamma _A^{(1)} N[\widehat{S_{AA}}] \cdot \Gamma ^{(1)} + \gamma _{\psi _i}^{(1)} N[\widehat{S^{i}_{\overline{\psi } \psi }}] \cdot \Gamma ^{(1)}\), corresponding to Figs. 4 and 5.

– \(\mathfrak {W_3}\) corresponds to contributions from one-loop RG coefficients combined with finite one-loop counterterms, \(\mathfrak {W_3} = \left( \gamma _{\psi _i}^{(1)} + \gamma _A^{(1)} \xi \frac{\partial }{\partial \xi } - \beta _e^{(1)} \right) {\mathcal {N}}_{\psi _i} \overline{S_\text {fct}^{(1)}}\).

– \(\mathfrak {W_4}\) contains the genuine "two-loop" \(\hbar ^2\)-order \(\beta \) functions and anomalous dimensions of \(\chi \)QED to be determined: \(\mathfrak {W_4} = \beta _e^{(2)} e \partial _e \overline{S_0} + \gamma _\phi ^{(2)} {\mathcal {N}}_\phi \overline{S_0}\).
In the MultRen method, there exists a one-to-one correspondence with the terms obtained in the AlgRen method. The singular counterterms \(\mathfrak {R_{3,4}}\) generate contributions \(\widetilde{\beta _{e,\xi }}\) and \(\widetilde{\gamma _{A,\psi _i}}\) to the \(\beta \) and \(\gamma \) functions. The terms \( \widetilde{\beta _{\sigma _i}} \partial _{\sigma _i} \Gamma ^*_\text {DReg}\) for \(i = 1,2,3\), evaluated following Eq. (2.65) in Sect. 2.9.2, correspond to \(\mathfrak {W_2}\) and \(\mathfrak {R_2}\). They are evaluated with the very same diagrammatic calculations as in AlgRen, Figs. 4 and 5. Likewise, one evaluates the terms \( \widetilde{\beta _{\rho _i}} \partial _{\rho _i} \Gamma ^*_\text {DReg}\) (\(i = 1,2\)), which correspond to \(\mathfrak {W_3}\) in the AlgRen method.
All these quantities, except for the unknown two-loop RG coefficients in \(\mathfrak {W_4}\), are known or calculable from one-loop diagrams. The equation \({\mathfrak {R}} = {\mathfrak {W}}\) can therefore be solved to obtain these coefficients. The resulting \(\hbar ^2\)-order \(\beta \) and \(\gamma \) functions of \(\chi \)QED are (in Feynman gauge \(\xi = 1\)):
The AlgRen and MultRen methods agree in the results obtained.
3.10.4 Summary and outlook
We have demonstrated the practical renormalization of a chiral Abelian toy model up to two-loop order. The main result consists in the full set of non-invariant singular counterterms, as well as the finite, non-invariant symmetry-restoring counterterms that implement the Slavnov–Taylor identity at the two-loop level. These counterterms are found to be rather compact and of a similar structure at one- and two-loop order. Importantly, it is verified that they ensure the validity of the usual Ward identities. The beta functions and anomalous dimensions of the renormalization group equation of the model have been derived using two approaches: the Algebraic Renormalization framework and a modified version of the more customary multiplicative renormalization method. The methods are equivalent and provide the same final results. However, the application of algebraic renormalization is more straightforward, as it does not require any "auxiliary couplings". We are currently working on the three-loop renormalization as well as the two-loop study of the non-Abelian case. All of this is in preparation for the application to the SM.
4 Phenomenological studies and applications
Another important use case for computer tools is phenomenological analysis. Typical tasks performed by such codes include the extraction of theory parameters from data, the prediction of observables in terms of NP parameters, and the setting of bounds on the underlying parameter space. Tools are, for instance, used to determine the WCs of higher-dimensional operators by extracting them from observables in automated global analyses. Typical fitting tools that allow for such analyses are SMEFiT [216], as well as smelli, HighPT, and HEPfit, which are all discussed further in this section. Common observable calculators, which come with a large database of predefined observables, are flavio [217], SuperIso [218, 219], FlavBit [220], as well as the package EOS, which is further discussed below. Furthermore, there are several clustering tools and Monte Carlo enablers on the market, such as ClusterKing [221] and Pandemonium [222], the package SMEFTsim [223], as well as SmeftFR, which can be used for Monte Carlo simulations including dimension-eight SMEFT operators and which is discussed in the last subsection.
4.1 smelli: Towards a global SMEFT likelihood
The Python package smelli is a powerful tool for constraining SMEFT WCs and parameters of UV models matched to the SMEFT. Its goal is to provide a likelihood that is as global as possible while being fast enough to allow comprehensive fits and parameter scans.
NP extensions of the SM aim to resolve certain theoretical issues or tensions with experimental data. Typically, however, they affect many observables beyond their original purpose. It is therefore crucial to carry out global phenomenological analyses of NP models in order to assess their viability and to demonstrate their actual superiority over the SM. This is a challenging task, as it involves computing predictions for a large number of observables, and doing so for each model. Fortunately, this problem can be tremendously simplified by using the SMEFT as an intermediate step. In particular, a global likelihood function that yields the probability of observing the experimental data given the SMEFT WCs^{Footnote 18} can also be used as a likelihood function for the model parameters of any NP model that can be matched to the SMEFT. For such models, a global phenomenological analysis can be divided into two parts.

1. The NP model has to be matched to the SMEFT in order to express the SMEFT WCs at the matching scale \(\Lambda _\text {NP}\), \(\textbf{C} (\Lambda _\text {NP})\), in terms of model parameters \(\vec \xi \), i.e.
$$\begin{aligned} \textbf{C} (\Lambda _\text {NP}) = f_\text {match}(\vec \xi )\,, \end{aligned}$$(3.1)
where the matching function \(f_\text {match}\) and the model parameters \(\vec \xi \) depend on the specific NP model. It might be necessary to include one-loop effects in this step, in particular if the leading contribution to the relevant WCs is not generated by the tree-level matching.

2. The SMEFT WCs at the scale \(\Lambda _\text {NP}\), \(\textbf{C}(\Lambda _\text {NP})\), have to be constrained by experimental data. This requires the computation of theory predictions for a large number of observables at various scales, both in the SMEFT and in the LEFT. Importantly, WCs at different scales and in different EFTs are connected by RG running and matching. The one-loop contributions introduced by the RG running have been shown to be crucial in constraining NP models (see e.g. Refs. [224, 225]). Theoretical predictions and experimental measurements of all relevant observables can then be used to construct a global likelihood function for the SMEFT WCs at the scale \(\Lambda _\text {NP}\),
$$\begin{aligned} L_\text {SMEFT}\left( \textbf{C}(\Lambda _ \text {NP})\right) \,. \end{aligned}$$(3.2)
Through Eq. (3.1), this also directly provides a likelihood function for the parameters \(\vec \xi \) of a NP model,
$$\begin{aligned} L_\text {NP}\left( \vec \xi \right) = L_\text {SMEFT}\left( f_\text {match}(\vec \xi )\right) . \end{aligned}$$(3.3)
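The two-step logic of Eqs. (3.1)–(3.3) maps directly onto code. The following toy sketch is our own illustration: the coefficient name, the matching formula, and the Gaussian likelihood are invented for the example and are not smelli output:

```python
# Toy sketch of Eqs. (3.1)-(3.3). Everything here is hypothetical and
# chosen for illustration; none of it comes from smelli.
import math

def f_match(xi):
    # hypothetical tree-level matching of a mediator of mass M with
    # coupling g onto a single Wilson coefficient, C = -g^2 / M^2
    g, M = xi
    return {"C_example": -g**2 / M**2}

def L_SMEFT(C):
    # hypothetical one-dimensional Gaussian likelihood in C_example
    c = C["C_example"]
    return math.exp(-0.5 * ((c - 1.0e-3) / 5.0e-4) ** 2)

def L_NP(xi):
    # Eq. (3.3): the model likelihood is the SMEFT likelihood
    # composed with the matching function
    return L_SMEFT(f_match(xi))
```

The key design point is that only `f_match` changes from model to model; the expensive likelihood is constructed once and reused.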
The matching in step 1 depends only on the NP model, but is independent of both the experimental data and the theoretical predictions of the observables. Full tree-level matching of generic models to the SMEFT has been performed in Ref. [137], and several tools are being developed to fully automate generic one-loop matching [48, 49, 134].
The phenomenological part in step 2 is independent of the NP model, so that a SMEFT likelihood function, once constructed, can be used for generic phenomenological analyses of NP models. It is important to stress that different sectors of observables should not be considered separately, since RG effects mix all sectors, and matching a NP model to the SMEFT will generally lead to effects in many sectors. It is therefore crucial to consider a global SMEFT likelihood function that encompasses as many sectors as possible.
4.1.1 smelli – the SMEFT likelihood
To establish a comprehensive global likelihood function in the space of dimension-six SMEFT WCs, the open-source Python package smelli – the SMEFT likelihood – was introduced in Ref. [226]. It builds on several other open-source projects that provide key components:

– wilson [108] – running and matching beyond the SM. wilson is a Python package for the running and matching of WCs in the LEFT and the SMEFT. It implements the one-loop running of all dimension-six operators in the SMEFT [8,9,10], the matching to the LEFT at the electroweak scale [13], and the one-loop running of all dimension-six LEFT operators in QCD and QED [14, 20]. Furthermore, it takes into account effects from the rediagonalization of the Yukawa matrices after running above the EW scale [227, 228].

– flavio [217] – a Python package for flavour and precision physics in and beyond the SM. The Python package flavio can compute theoretical predictions for a wide range of observables from different sectors, including flavour physics, electroweak precision tests, Higgs physics, and other precision tests of the SM. NP contributions are taken into account in terms of WCs of dimension-six operators in the SMEFT and the LEFT. flavio also comes with an extensive database of experimental measurements and allows the construction of likelihoods based on these measurements and their corresponding theoretical predictions.

– WCxf [133] – the Wilson coefficient exchange format. smelli, wilson, and flavio all use the Wilson coefficient exchange format (WCxf) to represent WCs, which makes it easy to interface these codes with each other and with any other code that supports the WCxf standard.
In order to achieve a reasonably fast evaluation of the likelihood function in smelli, two simplifying approximations are used to deal with the nuisance parameters \(\vec \theta \) that enter the theory predictions \(\textbf{O}_\text {th}(\textbf{C}, \vec \theta )\):

– For observables with negligible theoretical uncertainties compared to the experimental uncertainties, each likelihood \(L_\text {exp}^i\) from a given experimental measurement is evaluated with the nuisance parameters fixed to their central values \(\vec \theta _0\),
$$\begin{aligned} L_\text {exp}^i\left( \textbf{O}_\text {th} (\textbf{C}, \vec \theta _0)\right) \!. \end{aligned}$$(3.4)

– For observables with significant theoretical uncertainties,^{Footnote 19} both the theoretical and experimental uncertainties are approximated as multivariate Gaussian and a combined likelihood is constructed for all correlated observables. The experimental covariance matrix \(\Sigma _\text {exp}\) and the central experimental values \(\vec O_\text {exp}\) are extracted from the original experimental likelihoods. The theoretical covariance matrix \(\Sigma _\text {th}\) is obtained by sampling the nuisance parameters \(\vec \theta \) from their respective likelihood distributions, while their central values \(\vec \theta _0\) are used for the theoretical predictions \(\textbf{O}_\text {th}(\textbf{C}, \vec \theta _0)\). Both covariance matrices enter the combined likelihood \({{\tilde{L}}}_\text {exp}\) defined by
$$\begin{aligned} -2 \ln \tilde{L}_\text {exp}\left( \textbf{O}_\text {th}(\textbf{C}, \vec \theta _0)\right) = \textbf{D}^T (\Sigma _\text {exp} + \Sigma _\text {th})^{-1} \textbf{D}\,, \nonumber \\ \textbf{D} = \textbf{O}_\text {th}(\textbf{C}, \vec \theta _0) - \textbf{O}_\text {exp}\,. \end{aligned}$$(3.5)
The global likelihood is then constructed by combining the individual approximated likelihood functions,
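To make Eq. (3.5) concrete, here is a minimal two-observable version (our sketch with invented numbers; smelli's internal implementation differs):

```python
# Minimal two-observable version of Eq. (3.5): -2 ln L = D^T Sigma^{-1} D
# with Sigma = Sigma_exp + Sigma_th. Illustrative only.
def chi2(O_th, O_exp, cov_exp, cov_th):
    # total covariance Sigma = Sigma_exp + Sigma_th (2x2)
    S = [[cov_exp[i][j] + cov_th[i][j] for j in range(2)] for i in range(2)]
    # explicit 2x2 inverse
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]
    # difference vector D = O_th - O_exp
    D = [O_th[i] - O_exp[i] for i in range(2)]
    return sum(D[i] * Sinv[i][j] * D[j] for i in range(2) for j in range(2))
```

For example, with unit experimental and theoretical variances and a prediction offset of 2 in the first observable, the covariances add and the result is \(2^2/(1+1) = 2\).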
The smelli Python package that provides this global likelihood function is available from the Python Package Index (PyPI) and can be installed using

python3 -m pip install --user smelli

which will download smelli with all its dependencies from PyPI and install it in the user's home directory. The source code of the package and more information about using it can be found in

– the smelli GitHub repository https://github.com/smelli/smelli,

– the smelli API documentation https://smelli.github.io/smelli,

– the introductory tutorial in Ref. [229].
4.1.2 Status and prospects of smelli
The smelli project is under active development and has been extended several times in recent years, in particular since the SMEFTTools 2019 workshop [1], where smelli v1.3 was presented.
The first version of smelli focused on flavour and electroweak precision observables. These included flavour-changing neutral- and charged-current B and K decays, meson-antimeson mixing observables in the B, K and D systems, charged-lepton flavour violating B, K, \(\tau \), \(\mu \) and Z decays, as well as Z and W electroweak precision observables and the anomalous magnetic moments of the charged leptons.
In the context of Ref. [230], smelli has been extended to Higgs physics, and the signal strengths of various decay ( \(h\rightarrow \gamma \gamma \), \(Z\gamma \), ZZ, WW, bb, cc, \(\tau \tau \), \(\mu \mu \)) and production channels (gg, VBF, Zh, Wh, \(t{\bar{t}}h\)) have been implemented.
With smelli v2.0, further observables and features were introduced. Beta decays were implemented following Ref. [231], adding the lifetimes and correlation coefficients of neutron beta decay as well as superallowed nuclear beta decays. Furthermore, additional K decays and the total and differential cross sections for \(e^+ e^- \rightarrow W^+ W^-\) pair production, as measured at LEP2, have been added. Apart from new observables and some minor innovations, one of the most important new features of smelli v2.0 is a proper treatment of the Cabibbo–Kobayashi–Maskawa (CKM) matrix in the SMEFT. Inspired by Ref. [232], smelli uses a CKM input scheme that takes four observables as proxies for the four CKM parameters. The default CKM input scheme uses \(R_{K\pi }=\Gamma (K^+\rightarrow \mu ^+\nu )/\Gamma (\pi ^+\rightarrow \mu ^+\nu )\) (mostly fixing \(V_{us}\)), \(BR(B^+\rightarrow \tau \nu )\) (fixing \(V_{ub}\)), \(BR(B\rightarrow X_c e \nu )\) (fixing \(V_{cb}\)), and \(\Delta M_d/\Delta M_s\) (mostly fixing the CKM phase \(\delta \)). The CKM elements are then expressed in terms of the four CKM input observables and the SMEFT WCs that enter the predictions of these observables. This removes a major limitation of smelli and allows semileptonic charged-current meson decays to be included in the likelihood.
Since smelli v2.0, there have been several new developments that will be incorporated in future versions of smelli. A new numerical method has been developed in the context of Ref. [233], which allows for a numerically efficient implementation of the NP dependence of the theory covariance matrix. This will remove another major limitation of smelli and will enable the inclusion of observables whose theoretical uncertainties have a strong NP dependence, such as the neutron electric dipole moment (EDM). In addition, the new method of Ref. [233] increases the computational speed by orders of magnitude, resulting in a significantly shorter evaluation time of the global likelihood function and allowing for much more comprehensive analyses. These new features have already been successfully applied in Ref. [234], where a global likelihood was constructed that includes neutral- and charged-current Drell–Yan tails, which will be implemented in a future version of smelli.
4.2 HighPT: A tool for Drell–Yan tails beyond the Standard Model
High-\(p_T\) tails in Drell–Yan processes can provide useful complementary information to low-energy and electroweak observables when investigating the flavor structure beyond the SM. The Mathematica package HighPT computes Drell–Yan cross sections for dilepton and monolepton final states at the LHC. The observables can be computed at tree level in the SMEFT, including the relevant operators up to dimension eight, with a consistent expansion up to \({\mathcal {O}}(\Lambda ^{-4})\). Furthermore, hypothetical TeV-scale bosonic mediators can be included at tree level in the computation of the cross-sections, thus accounting for their propagation effects. Using the Run-2 searches by ATLAS and CMS, the LHC likelihood for all possible leptonic final states can be constructed within the package, which therefore provides a simple framework for high-\(p_T\) Drell–Yan analyses. We illustrate the main features of HighPT with a simple example.
Semileptonic interactions have received a lot of attention in the literature in recent years, driven mainly by interesting data in B-meson decays. In this context, it has been stressed several times that not only low-energy observables can help constrain new-physics scenarios: high-\(p_T\) observables, especially Drell–Yan tails, can give complementary and independent information and sometimes even more stringent bounds [235,236,237,238,239]. A comprehensive analysis of these effects has been implemented for the first time in HighPT [240, 241], a Mathematica package that computes hadronic cross-sections, event yields and likelihoods from different LHC searches involving leptonic final states. The aim is to provide an easy-to-use integrated framework to directly obtain a likelihood, from which bounds on new-physics parameters (both WCs in the SMEFT and couplings of TeV-scale mediators) can easily be extracted and then juxtaposed with low-energy experiments.
4.2.1 Drell–Yan cross-section
The most general Drell–Yan process can be written, at parton level, as the scattering \(q_i\, \bar{q}'_j \rightarrow \ell _\alpha \, \bar{\ell }'_\beta \),
where i, j (\(\alpha ,\beta \)) are quark (lepton) flavour indices, and \(q,q'\) indicate either up- or down-type quarks, while \(\ell ,\ell '\) generically stand for either a charged lepton or a neutrino.^{Footnote 20} The amplitude can be expressed in terms of form factors as
which captures all possible \(SU(3)_c\times U(1)_{\textrm{em}}\)- and Lorentz-invariant structures. The sum over \(X,Y=L,R\) extends over left- and right-handed chiralities, and we have defined the Mandelstam variables \({{\hat{s}}} = k^2 = (p_\ell + p_{\ell '})^2\) and \({{\hat{t}}} = (p_\ell - p_{q'})^2\). The form factors \({\mathcal {F}}_I\) can be decomposed as
where
is an analytic function of \({{\hat{s}}}\) and \({{\hat{t}}}\), describing local interactions (i.e. effective operators of \(d\ge 6\)), while \({\mathcal {F}}_{I,\text {Poles}}\) captures the effect of simple poles in the s-, t- or u-channel, due to some TeV-scale mediator. The differential cross-section at parton level then is
where \(M_{IJ}^{XY}\) describes the interference between different form factors. This cross-section needs to be convoluted with the parton luminosity functions and integrated over the appropriate region in order to match the experimental searches (see [241] for further details).
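The convolution step can be sketched numerically as follows. The toy luminosity and partonic cross-section below are invented and are not the HighPT implementation; the point is that the partonic cross-section is weighted by a steeply falling parton luminosity and integrated over each invariant-mass bin, which is why contact-interaction effects are enhanced in the high-mass tail.

```python
# Illustrative sketch (toy numbers): fold a partonic cross-section
# with a parton luminosity and integrate over invariant-mass bins.
S = 13000.0**2  # (GeV)^2, squared hadronic collision energy

def lumi(tau):
    """Toy q-qbar parton luminosity, steeply falling like real PDFs."""
    return (1.0 - tau)**8 / tau

def sigma_partonic(shat, c6=0.0, lam=1000.0):
    """Toy partonic cross-section: SM piece ~ 1/shat plus an EFT
    contact term whose relative size grows with energy."""
    return 1.0 / shat * (1.0 + c6 * shat / lam**2)**2

def sigma_bin(m_lo, m_hi, c6=0.0, n=200):
    """Midpoint-rule integral of lumi x partonic sigma over one bin."""
    dm = (m_hi - m_lo) / n
    total = 0.0
    for i in range(n):
        m = m_lo + (i + 0.5) * dm
        shat = m * m
        total += lumi(shat / S) * sigma_partonic(shat, c6) * 2 * m * dm
    return total

# EFT effects are enhanced in the high-mass tail:
ratio_low = sigma_bin(200, 400, c6=1.0) / sigma_bin(200, 400)
ratio_high = sigma_bin(2000, 3000, c6=1.0) / sigma_bin(2000, 3000)
```

With these toy inputs the relative EFT enhancement in the 2–3 TeV bin is far larger than at low mass, mimicking the sensitivity pattern exploited by Drell–Yan tail analyses.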
4.2.2 Drell–Yan in the SMEFT
When working in the context of the SMEFT, the WCs can be mapped to the form factor description of the scattering process by suitable matching conditions [241]. Writing the SMEFT Lagrangian as
the cross-section, up to \({\mathcal {O}}(\Lambda ^{-4})\), can be schematically written as
where \({\mathcal {A}}_i^{(6)}\) (\({\mathcal {A}}_i^{(8)}\)) indicates the contribution from dimension-six (dimension-eight) operators. The classes of operators contributing to Drell–Yan up to this order are summarized in Table 6.
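Schematically, the truncated expansion can be evaluated as in the following sketch (toy coefficients, not a real computation): the \(1/\Lambda ^4\) piece collects both the squared dimension-six terms and the linear dimension-eight terms, while everything of \({\mathcal {O}}(\Lambda ^{-6})\) and beyond is dropped.

```python
# sigma = sigma_SM + (1/Lambda^2) sum_i a_i C6_i
#       + (1/Lambda^4) [ sum_ij b_ij C6_i C6_j + sum_k c_k C8_k ],
# with all higher orders dropped.  All coefficients are toy numbers.
def sigma_truncated(c6, c8, a, b, c, sigma_sm=1.0, lam=1.0):
    lin = sum(ai * ci for ai, ci in zip(a, c6)) / lam**2
    quad = sum(b[i][j] * c6[i] * c6[j]
               for i in range(len(c6)) for j in range(len(c6))) / lam**4
    dim8 = sum(ck * c8k for ck, c8k in zip(c, c8)) / lam**4
    return sigma_sm + lin + quad + dim8

# one dim-6 and one dim-8 coefficient, Lambda = 1 (in TeV units)
val = sigma_truncated(c6=[0.5], c8=[0.2], a=[0.3], b=[[0.1]], c=[0.05])
```

Here `val` = 1 + 0.15 + 0.025 + 0.01 = 1.185, with the last two terms being the \({\mathcal {O}}(\Lambda ^{-4})\) contributions.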
4.2.3 Collider limits
In order to compare the theory prediction for the cross-section with the searches performed by the experimental collaborations, detector effects, such as limited resolution or acceptance, must be taken into account. For binned distributions, this is done by introducing a response matrix K, such that
where x indicates a generic particle-level observable, divided into M bins, and \(x_{\textrm{obs}}\) is the experiment-level observable. \(\sigma _q\) here indicates the cross-section for bin q. The matrix K needs to be extracted from Monte Carlo simulation for each independent combination of form factors. With all the elements described so far, one can define a \(\chi ^2\) likelihood as
where \({\mathcal {N}}_A^b\) is the number of background events and \({\mathcal {N}}_A^\textrm{obs}\) the number of observed events in bin A, both provided by the experimental collaborations. The uncertainty \(\Delta _A\) is obtained by adding in quadrature the background and observed uncertainties, \(\Delta _A^2=(\delta {\mathcal {N}}^b_A)^2+ {\mathcal {N}}_A^{\textrm{obs}}\), where the last term corresponds to the Poissonian uncertainty of the data. \({\mathcal {N}}_A(\theta )\), on the other hand, is the predicted number of events in bin A, depending on the new physics parameters \(\theta \). HighPT includes recasts from ATLAS and CMS searches for all possible dilepton (ee, \(\mu \mu \), \(\tau \tau \), \(e\mu \), \(e\tau \), \(\mu \tau \)) and monolepton ( \(e\nu \), \(\mu \nu \), \(\tau \nu \)) final states [240], such that a likelihood, written as a polynomial in the WCs, can be obtained for each of them.
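The construction of this likelihood can be condensed into a few lines of toy code (the two-bin response matrix, luminosity and event counts below are invented; the package works with the actual search binnings): fold the particle-level cross-sections with K, add the background, and compare with the observed counts bin by bin.

```python
# Toy sketch of the binned chi^2 described above (invented numbers).
def chi2(sigma, K, lumi, n_bkg, dn_bkg, n_obs):
    out = 0.0
    for A in range(len(n_obs)):
        # fold particle-level cross-sections into the observed bin A
        n_th = lumi * sum(K[A][q] * sigma[q] for q in range(len(sigma)))
        # background uncertainty plus Poissonian uncertainty of the data
        delta2 = dn_bkg[A]**2 + n_obs[A]
        out += (n_th + n_bkg[A] - n_obs[A])**2 / delta2
    return out

K = [[0.9, 0.1], [0.05, 0.8]]      # toy migration between 2 bins
sigma = [2.0, 0.5]                 # fb, particle-level cross-sections
lumi = 140.0                       # fb^-1
n_bkg, dn_bkg = [250.0, 60.0], [15.0, 8.0]
n_obs = [500.0, 120.0]
val = chi2(sigma, K, lumi, n_bkg, dn_bkg, n_obs)
```

Since the event yields are polynomial in the WCs, the resulting \(\chi ^2\) is itself a polynomial in the WCs, which is what the package returns.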
4.2.4 Using HighPT: an example
In order to briefly illustrate the main features of HighPT, we show here an explicit example. For a detailed review of all the functionalities see [240]. The main routine of the package is the function ChiSquareLHC, yielding the \(\chi ^2\) likelihood as a list, with each element corresponding to a bin, e.g. in \(m_{\ell \ell }^2\).^{Footnote 21} Consider the dimuon search by CMS [242] and the dimension-six coefficients \([{\mathcal {C}}_{lq}^{(1)}]_{2211}\), \([{\mathcal {C}}_{lq}^{(1)}]_{2222}\), as in [4]. The likelihood can be extracted as
which computes the \(\chi ^2\) keeping only the specified operators, up to \({\mathcal {O}}(\Lambda ^{-4})\). The default setting is \(\Lambda = 1\) TeV, but this can be changed at any time, together with the order of the EFT truncation [240]. Within the same framework, one can compute the projected likelihood for the HL-LHC by
where the first option corresponds to a rescaling of the background uncertainty by \(\Delta {\mathcal {N}}_A^b \rightarrow (L_\textrm{projected}/L_\textrm{current})^{1/2} \,\Delta {\mathcal {N}}_A^b=\sqrt{3000/140}\,\Delta {\mathcal {N}}_A^b\), while the second is the likelihood computed assuming that the ratio of background error over background is constant, i.e. \(\Delta {\mathcal {N}}_A^b/{\mathcal {N}}_A^b=\text {const}\). Minimizing these likelihoods, one can plot for example the 95% C.L. contours as in Fig. 6.
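The two projection options amount to different scalings of the background uncertainty with the luminosity ratio, which the following toy lines make explicit (the background count and its error are invented numbers):

```python
import math

# Sketch of the two HL-LHC background-projection options (toy numbers).
L_now, L_hl = 140.0, 3000.0        # fb^-1, current and HL-LHC luminosity
r = L_hl / L_now                   # luminosity ratio
n_bkg, dn_bkg = 250.0, 15.0        # toy background count and uncertainty

# option 1: absolute error grows like the square root of the ratio
dn_stat = math.sqrt(r) * dn_bkg
# option 2: constant relative error; the background count itself grows
# by r, so the absolute error grows by r as well
dn_rel = (dn_bkg / n_bkg) * (r * n_bkg)
```

For \(r>1\) the second option always yields the larger absolute uncertainty, i.e. the more conservative projection.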
4.2.5 Summary and outlook
We have introduced HighPT, a Mathematica package designed to translate the data from Drell–Yan searches at the LHC into a likelihood function in terms of WCs. It is worth stressing that, although the focus of this brief overview has been on the SMEFT, HighPT also includes a set of leptoquark mediators, allowing possible propagation effects of such new states to be included in the computation of the cross-section [240]. We have shown in a short example how the \(\chi ^2\) can be computed, including an option for HL-LHC projections. Future directions of development for the package include the implementation of electroweak and low-energy observables, in order to obtain a global likelihood for combined analyses in a unified framework. Another possible extension is the inclusion of more high-\(p_T\) observables related to semileptonic interactions, such as processes with a jet in the final state, and the inclusion of processes mediated by four-quark operators.
4.3 EOS: Flavor Phenomenology with the EOS Software
Recent studies in flavor physics follow a consistent pattern in which large amounts of experimental data are analyzed to infer theory parameters of the SM and of BSM scenarios. Constraining the WCs of effective field theories has proven particularly useful, as they provide a model-independent framework to study BSM physics. In this context, performing the flavor analyses in a separate software package and exporting the resulting constraints as likelihoods in WC space is crucial for model building.
EOS [243, 244] is such a software package.^{Footnote 22} It is open-source flavor software dedicated to the calculation of observables and the inference of theory parameters from an extendable list of models and constraints. It is particularly suited to the extraction of constraints on the parameters of effective field theories in the context of global analyses. It is written for three main use cases:

the numerical prediction of experimental observables with a wide range of theoretical and statistical techniques;

the inference of theory parameters from an extensible database of experimental and theoretical likelihoods;

and the production of Monte Carlo samples that can e.g. be used to study the experimental sensitivity to a specific observable.
EOS is written in C++ but offers a rich Python interface meant to be used e.g. within a Jupyter notebook.
4.3.1 Installation and documentation
EOS can be installed using the Python package installer:
and the Python module can be accessed using
The EOS documentation [245, 246] contains further installation instructions, basic tutorials, as well as detailed examples for advanced usage.
4.3.2 How to derive flavor constraints in the SMEFT
The two main objects in EOS are Observable and Analysis. The former can compute any built-in (pseudo-)observable by specifying a set of parameters, options and kinematics. The pre-built EOS observables are classified by their QualifiedNames and can be listed using the Observables command. An updated list can also be found in the documentation. Experimental measurements and theory constraints are expressed in terms of likelihoods with the Constraint class.
The Analysis class evaluates a set of constraints within ranges of parameters provided by the user. Once the analysis object is defined, it can be optimized to identify the best-fit point(s), and it accepts sampling routines.
A SMEFT analysis therefore consists of the following steps:

1.
List the experimental and theory constraints relevant for the analysis. New constraints can be added using manual_constraints.

2.
List the relevant nuisance parameters. The parameters of interest are the WCs of the effective theory relevant to the observables (e.g. "ubmunumu::Re{cVL}" for a study of \(B\rightarrow \pi \mu \nu _\mu \)). The matching from the low-energy effective theory to the SMEFT is performed at a later stage.

3.
Create an Analysis object, specifying model: LEFT as a global option. This analysis can be optimized to find the best-fit point and the corresponding goodness-of-fit information.

4.
Create posterior-predictive samples of the analysis, using one of the sampling routines: sample_mcmc, sample_pmc or sample_nested (EOS > v1.0.5).

5.
After marginalizing over the nuisance parameters, the samples can be exported to any matching software (e.g. wilson [108]) and converted to SMEFT parameters using the EOS basis of the WCxf format [247].
Alternatively, WCs can be imported from wilson directly into a Parameters object using the FromWCxf routine.
4.3.3 EOS vs. other flavor software
EOS has been developed since 2011 [243] and has been used in many phenomenological studies (see e.g. [248,249,250,251,252,253,254] for the most recent ones). It is, however, not the only openly available flavor software and competes, among others, with flavio [217], SuperIso [218, 219], HEPfit [255] and FlavBit [220]. The unique features of EOS are described below.

EOS is particularly suited to studying and comparing different models of hadronic matrix elements (theory calculations, parameterizations, ...). It therefore allows the hadronic model to be selected at run time. As far as theory calculations are concerned, EOS implements all the necessary tools for the evaluation of these elements using QCD sum rules.

The careful implementation of hadronic matrix elements makes it a prime tool for the simultaneous inference of hadronic and new-physics parameters. The underlying correlations are of primary importance when combining many experimental results.

EOS also offers the possibility of producing pseudo-events from an extensible set of PDFs. These events can then be used, e.g., for sensitivity studies and in preparation for experimental measurements.
4.3.4 Recent and future developments
EOS development takes place on its GitHub page, where the issue tracker allows users to request new features, observables or constraints. Long-term plans are also discussed via the discussion panel.
In parallel to the implementation of new observables, parameterizations and recent experimental results, considerable work has been done on the improvement of the statistical tools. This development was performed in preparation for new phenomenological analyses, which now usually involve \({\mathcal {O}}(100)\) nuisance parameters. Such large numbers make approaches based on basic Monte Carlo techniques inefficient, if not impossible. EOS now offers an interface to the dynesty [256, 257] package to make full use of nested sampling algorithms.
In the long term, EOS will contain “pre-packaged” low-energy analyses. The idea is to simplify the use of low-energy constraints in the conception of new-physics models. In particular, following the steps described above can be particularly time- and CPU-demanding when the number of nuisance parameters is large. This is typically the case in flavor physics due to the involved parameterizations of the hadronic form factors. For example, the extraction of the WCs of the \(b\rightarrow s\mu \mu \) weak effective theory using \(B\rightarrow K\mu \mu \), \(B\rightarrow K^*\mu \mu \) and \(B_s\rightarrow \phi \mu \mu \) requires at least 130 parameters for a consistent description of the hadronic transitions [253]. Provided that these nuisance parameters are uncorrelated with the other parameters entering a global SMEFT analysis, repeating this analysis in its entirety would be pointless and computationally challenging.
We therefore propose to simplify the publication of likelihoods containing only the parameters of interest (the WCs in this case). The posterior densities can be fitted with a Gaussian mixture model and used in EOS or other flavor software.
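As an illustration of this proposal, a one-dimensional Gaussian mixture can be fitted to posterior samples with a short expectation–maximization loop. The sketch below uses synthetic samples and pure Python; a real application would fit a multivariate mixture over several WCs.

```python
import math, random

random.seed(1)
# synthetic bimodal "posterior" samples for one Wilson coefficient
samples = ([random.gauss(-1.0, 0.3) for _ in range(500)]
           + [random.gauss(0.8, 0.2) for _ in range(500)])

def fit_gmm(xs, k=2, iters=200):
    """Expectation-maximization for a 1-D, k-component Gaussian mixture."""
    xs_sorted = sorted(xs)
    n = len(xs)
    # quantile-based initialization of the component means
    mu = [xs_sorted[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    sig = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities r_j(x) proportional to w_j N(x | mu_j, sig_j)
        resp = []
        for x in xs:
            ps = [w[j] / sig[j] * math.exp(-0.5 * ((x - mu[j]) / sig[j]) ** 2)
                  for j in range(k)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: responsibility-weighted weights, means and widths
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            sig[j] = math.sqrt(sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, xs)) / nj)
    return w, mu, sig

w, mu, sig = fit_gmm(samples)
```

The fitted triplets (weights, means, widths) are exactly the compact information that could be published in place of the full analysis.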
4.4 HEPfit: effective field theory analyses with HEPfit
HEPfit is a tool developed to facilitate the combination of all the different types of available constraints that can be used to learn about the parameter space of the SM or of any new-physics model. In the case of new physics, these constraints include experimental searches looking for the direct production of new particles, i.e. direct searches, or for deviations from SM predictions in measurements of SM processes, i.e. indirect searches. The code has great flexibility in the form in which experimental likelihoods for these searches can be implemented, allowing e.g. correlations, binned measurements or non-Gaussian likelihoods. Theory constraints, such as unitarity, and theory uncertainties (including correlations) can also be taken into account in the analysis of a desired model.
The above-mentioned types of information can be combined and used to sample the model parameter space via the built-in Bayesian Markov chain Monte Carlo (MCMC) engine, which uses the Bayesian Analysis Toolkit (BAT) library [258]. This enables Bayesian statistical inference of the model parameters. The Bayesian analysis framework is parallelized with MPI, so it can be run on clusters and on CPUs capable of multi-thread computing. Alternatively, HEPfit can also be used in library mode to compute predictions for observables. These can then be used to perform inference in any other statistical framework. To use HEPfit’s Bayesian framework, the user only needs to provide the priors for the different model input parameters, the observables to be included in the likelihood calculation, and the settings of the MCMC. Examples can be found in Section 7 of Ref. [255].
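The Bayesian workflow (prior plus likelihood yielding posterior samples) can be illustrated with a minimal Metropolis–Hastings sketch. This is a toy one-parameter model with invented numbers, not the BAT engine that HEPfit actually uses:

```python
import math, random

# Toy Bayesian setup: flat prior on one "Wilson coefficient" c,
# Gaussian likelihood from a single invented measurement.
random.seed(7)
obs, err = 0.5, 0.2                # toy measurement and its uncertainty

def log_post(c):
    if abs(c) > 5.0:               # flat prior on [-5, 5]
        return -math.inf
    return -0.5 * ((c - obs) / err) ** 2

# Metropolis-Hastings: propose Gaussian steps, accept with the
# usual log-ratio criterion, and record the chain.
chain, c = [], 0.0
for _ in range(20000):
    prop = c + random.gauss(0.0, 0.3)
    if math.log(random.random()) < log_post(prop) - log_post(c):
        c = prop
    chain.append(c)

burned = chain[5000:]              # discard burn-in
mean = sum(burned) / len(burned)   # posterior mean estimate
```

The posterior mean of the chain reproduces the toy measurement; in HEPfit the same logic runs over many parameters and observables in parallel.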
Another important feature of HEPfit is that, aside from the observables and models already implemented in the code (the latter including the SM and several new-physics scenarios), the user can implement their own custom observables and/or models as external modules.
On the technical side, HEPfit is developed in C++ and requires a series of mandatory dependencies such as the GNU Scientific Library, the BOOST libraries, and ROOT. To use the HEPfit MCMC engine, BAT is also required. Finally, to enable the parallel use of HEPfit one needs OpenMPI. See the Installation section in [255] for more details.
Aside from the SM, the current version of HEPfit already includes several BSM models, such as two-Higgs-doublet models [259], as well as several model-independent frameworks for the phenomenological description of new-physics effects using EFTs. The implementation of these EFTs is briefly described in the next section, following the status of the most up-to-date (developer’s) version of the code, which can be found in the Downloads area of https://hepfit.roma1.infn.it. These features are expected to appear in the next public release of HEPfit.
4.4.1 Effective field theory implementation in HEPfit
Assuming that BSM physics is characterized by a mass scale \(\Lambda \) and that for energies \(E\ll \Lambda \) the particle spectrum and symmetries of nature are those of the SM, two types of EFT can be used to describe the physics at such energies: the SMEFT (see e.g. Ref. [260]), where the Higgs-like boson is embedded in an \(SU(2)_L\) doublet as in the SM; and the HEFT, where the Higgs boson is described by a singlet scalar state, i.e. not belonging to an \(SU(2)_L\) doublet. Most of the current development in HEPfit is focused on the SMEFT, whose power counting follows an expansion in operators of increasing canonical mass dimension, so that BSM effects are suppressed by correspondingly larger powers of the EFT cutoff scale \(\Lambda \). Hence, the effective Lagrangian expansion takes the following form, \( \mathcal{L}_{\textrm{SMEFT}} = \mathcal{L}_{\textrm{SM}} + \mathcal{L}_5 + \mathcal{L}_6 + \cdots \), with \( \mathcal{L}_d = \frac{1}{\Lambda ^{d-4}}\sum _i C_i^{(d)} \mathcal{O}_i^{(d)} \),
where \(\mathcal{O}_i^{(d)}\) are operators of mass dimension d and \(C_i^{(d)}\) the corresponding WCs. The first term, \(\mathcal{L}_5\), only contains the lepton-number-violating Weinberg operator. In a lepton-number-preserving theory the leading-order (LO) new-physics effects are therefore given by the dimension-six operators in \(\mathcal{L}_6\), and these are the effects implemented in HEPfit.
In the current implementation of SMEFT effects in HEPfit, which can be found in the so-called NPSMEFTd6 model class, new-physics contributions from dimension-six operators are considered for several types of observables:

Electroweak precision measurements (Z-pole observables at LEP/SLD and measurements of the W mass and decay widths). These are implemented at state-of-the-art precision in the SM and at LO in the SMEFT [261, 262].

LHC Higgs measurements, including the signal strengths for the different production and decay modes, as well as the Simplified Template Cross Section bin parameterization from [265]. A comprehensive set of Higgs observables at future \(e^+ e^-\) or \(\mu ^+ \mu ^-\) colliders at different energies, with or without polarization, is also available in the code, for future collider studies [266,267,268].
The current version of the code allows the use of either the \(\left\{ M_Z, \alpha , G_F\right\} \) or the \(\left\{ M_Z, M_W, G_F\right\} \) schemes for the SM electroweak input parameters for most of these observables.
A comprehensive set of top-quark observables at the LHC is also available in HEPfit, via the NPSMEFT6dtopquark model class used in [269]. These include differential cross-section measurements of \(t{\bar{t}}Z\) and \(t{\bar{t}}\gamma \) processes and inclusive cross sections for \(t{\bar{t}}W\), \(t{\bar{t}}H\) and single-top processes (see Fig. 7, right). These top-quark observables are also being implemented as part of the main NPSMEFTd6 class for global analyses.
For the above-mentioned set of observables, new-physics corrections are currently implemented at the linear level in \(1/\Lambda ^2\), \( O = O_{\textrm{SM}} + \sum _i F_i\, \frac{C_i}{\Lambda ^2} \).
The coefficients \(F_i\) parametrizing the dependence on the WCs \(C_i\) are computed at leading order, either analytically, as in the case of the electroweak precision measurements, or, for LHC Higgs and top-quark observables, numerically, by fitting Eq. (3.18) to the results of MadGraph5_aMC@NLO [111] simulations using our own UFO implementation of the SMEFT or any of the models available in the literature, e.g. SMEFTsim [223] or SMEFT@NLO [270]. Our expressions are given in the so-called Warsaw basis [4], but we offer the possibility of choosing as model parameters some operators in other bases, in which case the corresponding expressions are obtained via the SM equations of motion. Different flavor assumptions can be chosen for fermionic operators, not restricted to flavor universality.
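The numerical determination of the coefficients amounts to a linear fit of simulated observables against benchmark WC values, as in this toy sketch (the "simulation" is a stand-in function with invented numbers, not an actual MadGraph5_aMC@NLO run):

```python
# Toy illustration of the fitting strategy: "simulate" an observable
# at benchmark values of one Wilson coefficient and extract the
# linear coefficient F and the SM value by least squares.
def simulated_obs(c):
    # stand-in for an MC simulation; true values: O_SM = 2.0, F = 0.7
    return 2.0 + 0.7 * c

points = [-1.0, -0.5, 0.0, 0.5, 1.0]   # benchmark WC values
ys = [simulated_obs(c) for c in points]

# least-squares fit of O = O_SM + F * c via the 2-parameter
# normal equations
n = len(points)
sx, sy = sum(points), sum(ys)
sxx = sum(c * c for c in points)
sxy = sum(c * y for c, y in zip(points, ys))
F = (n * sxy - sx * sy) / (n * sxx - sx * sx)
O_sm = (sy - F * sx) / n
```

With noisy MC results the same fit averages out statistical fluctuations; here, with exact toy inputs, it reproduces the input values exactly.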
Flavor physics is another sector that has been the focus of attention during the development of HEPfit [271,272,273,274,275], with multiple \(\Delta F=2\) and \(\Delta F=1\) observables included in the code. As in the case of the electroweak precision measurements, the SM predictions have been implemented including all available corrections. New-physics corrections are implemented as a function of the WCs of the LEFT, and the full matching with the SMEFT is currently work in progress (so far it is only implemented for the interactions relevant for the analysis of the B anomalies [274, 275]). Combined analyses of flavor physics with electroweak precision observables can be found in, e.g., [272, 276]; see Fig. 8.
As mentioned above, most of the SMEFT effects implemented in HEPfit are currently available at leading order and at \(\mathcal{O}(1/\Lambda ^2)\). Part of the work to extend such calculations includes the implementation of \(\mathcal{O}(1/\Lambda ^4)\) effects, see e.g. [277], and the full renormalization-group running [8,9,10] via the integration of RGESolver [109] in HEPfit. The remaining contributions needed to obtain the full next-to-leading-order (NLO) calculation of given observables are becoming increasingly available in the literature (e.g. [278, 279]) and will be gradually implemented.
Finally, aside from the SMEFT implementation, a model describing the HEFT corrections to single-Higgs processes is also available in HEPfit. These corrections include the effects from the leading-order HEFT Lagrangian, using a power counting in terms of chiral dimensions [280]. They comprise all operators of chiral dimension two, but we also include several operators of chiral dimension four, to parameterize local contributions from new-particle loops in \(H\rightarrow gg, \gamma \gamma \) and \(Z\gamma \). The results of a global analysis using LHC Run 1 and 2 Higgs data in this HEFT formalism can be found in Ref. [281].
4.5 SmeftFR v3: a tool for creating and handling vertices in SMEFT
Athanasios Dedes, Janusz Rosiek and Michal Ryczkowski
The abundance of parameters and interaction vertices in SMEFT requires automation. The scope of the SmeftFR v3 code [282] is to derive the Feynman rules for interaction vertices from dimension-5 and -6 operators and, so far, all bosonic dimension-8 operators; these can easily be imported into other codes, such as FeynArts [190] and MadGraph5 [111], for further symbolic or numeric calculations of matrix elements and cross-sections.
SmeftFR starts from the most commonly used dimension-5 and -6 “Warsaw” basis [4] and the dimension-8 basis of Ref. [87] of operators in the unbroken phase and, following the steps of Ref. [283], generates all relevant Feynman rules in the physical mass basis, quantized in the unitary or \(R_\xi \) gauges. It is written in the Mathematica language and uses the package FeynRules [155]. SmeftFR is an open-access, publicly available code and can be downloaded from www.fuw.edu.pl/smeft.
There are several advances in SmeftFR v3 [282] compared to its predecessor SmeftFR v2 [284]. Apart from general optimization and speed-ups of the code, SmeftFR v3 can calculate vertices consistently up to order \(1/\Lambda ^4\) in the EFT expansion, including terms quadratic in dim-6 WCs and linear in bosonic dim-8 WCs. Particularly important is that SmeftFR v3 is able to express the SMEFT interaction vertices directly in terms of the chosen set of observable input parameters, avoiding the need for reparametrizations of transition amplitudes calculated in terms of SM gauge and Higgs couplings. For convenience, SmeftFR v3 is augmented with two predefined^{Footnote 23} input-parameter schemes in the electroweak sector, including corrections of order \(O(1/\Lambda ^4)\):

The GF-scheme, with input parameters \((G_F,M_Z,M_W,M_H)\),

The AEM-scheme, with input parameters \((\alpha _{em},M_Z,M_W,M_H)\).^{Footnote 24}
Moreover, SmeftFR v3 employs the flavour input scheme of Ref. [232], which determines the SMEFT-corrected CKM matrix elements directly from flavour observables.^{Footnote 25}
4.5.1 SmeftFR v3 by an example
All details about the physics and usage of SmeftFR v3 are presented in [282]. To get the essence of what SmeftFR can do in practice, it is best to study a step-by-step example for a given set of dim-6 and dim-8, CP-even, operators. The processes we have in mind are vector-boson scattering at the LHC. The subsequent steps follow the Mathematica notebook file given in the SmeftFR distribution, SmeftFRinit.nb. After loading the FeynRules and SmeftFR codes, we first need to choose the operator set (in the gauge basis). For the processes we have in mind, we set:
The naming of operators is given in App. B of Ref. [282], e.g. \(Q_{\varphi \Box } \rightarrow \) “phiBox”, \(Q_{\varphi ^4 D^4}^{(1)} \rightarrow \) “phi4n1”, etc. The next step is to initialize the SMEFT Lagrangian with a chosen set of available options:
Here we choose to generate vertices in the \(R_\xi \) gauges, up to the EFT expansion order \(1/\Lambda ^4\) and with at most 4 external legs (this option does not affect the UFO and FeynArts file generation, where there is no such restriction). We have also chosen to use the \(G_F\) input-parameter scheme, and no SMEFT corrections to the CKM matrix. Moreover, we use real numerical parameter values for the WCs (as required by MadGraph5), taken from the file named WCxfInput. The next step is to load the parameter model file and calculate the Lagrangian in the gauge basis, find the field bilinears and diagonalize the mass matrices up to order \(1/\Lambda ^4\), and finally find the SMEFT Lagrangian in the mass basis and generate the Feynman rules, at this stage keeping the field redefinitions necessary to canonicalize the Lagrangian as symbols, without expanding them in powers of \(1/\Lambda \). Up to now, the program takes about 7 minutes on a typical laptop.^{Footnote 26} The vertices obtained in this form are stored in the file “/output/smeft_feynman_rules.m”.
Now we are ready to expand the field-redefinition parameters and read the full vertices in the user’s previously adopted \(G_F\)-scheme. We use the FeynRules command to select the \(h\gamma \gamma \) vertex and obtain:
As expected from gauge invariance, the resulting vertex is entirely proportional to the Lorentz factor \((p_1^{\mu _2}p_2^{\mu _1} - g^{{\mu _1}{\mu _2}} p_1\cdot p_2)\). In this vertex, there are terms linear in dim-6 WCs plus quadratic terms of the form (dim-6)\(^2\). Although the chosen set of WCs associated with dim-8 operators does not appear in the \(h\gamma \gamma \) vertex, in the following example they do:
The quartic Z vertex is generated for the first time at the dim-8 level from the chosen operators \(Q_{\varphi ^4 D^4}^{(1),(2),(3)}\)! The user may enjoy investigating further new vertices that did not appear up to the dim-6 level. Finally, we note here that analogous vertices can be extracted at this stage in the standard SM parametrization (the \(({\bar{g}},{\bar{g}}',v)\)-scheme) or even in the unexpanded-field-redefinition version of this scheme.
If we now want to continue with the interfaces to the LaTeX, UFO, FeynArts and WCxf formats, we have to quit the Mathematica kernel (Quit[]) and open the notebook SmeftFR_interfaces.nb located in the home directory of the SmeftFR distribution. We again have to load the FeynRules and SmeftFR engines and reload the mass-basis Lagrangian by typing:
where the “user” input scheme from the previous session is used (i.e. the \(G_F\)-scheme), the expansion is up to \(1/\Lambda ^4\), etc. (we do not include four-fermion operators, so this option is irrelevant for the chosen set of operators in this example). The whole SMEFT Lagrangian in the mass basis is finally stored in the variable SMEFT$MBLagrangian for further use by the interface routines.
At this point, we can continue by exporting numerical values of the WCs from the FeynRules model file to a file in the WCxf format. The created file can be used to transfer numerical values of WCs to other codes that also support the WCxf format. In addition, as already possible in SmeftFR v2, we can generate a LaTeX file with vertices and the corresponding Feynman graphs. Since the resulting expressions for the (dim-6)\(^2\) and dim-8 contributions are (usually) too long, we have kept only the linear dim-6 terms in the LaTeX output.
4.5.2 SmeftFR UFO and MadGraph5
SmeftFR v3 provides a new routine for producing UFO model files, which may be useful for running realistic Monte Carlo simulations and replaces the standard FeynRules one. It assigns the correct “interaction orders” for both the SM couplings and the higher-order operators, as required by MC generators to properly truncate the transition-amplitude calculations, and reads:
For details, including several comparisons to other existing codes such as SMEFT@NLO [285], Dim6Top [286] and SMEFTsim [287], the user should consult Ref. [282]. The generation of the UFO model files (especially in the “user” scheme) is a time-consuming process. For this particular example, it took about 2 hours to generate the “/output/UFO” directory. Moreover, the resulting UFO model file may lead to lengthy calculations in MadGraph5 itself. If the goal of the user is to examine the influence of a single SMEFT operator on the chosen set of processes at a time, one may either start with the model containing several SMEFT operators and manually set only one of them to be non-zero using MadGraph5’s set command (e.g. set CW 1e-06), or produce separate models, each containing one of the SMEFT operators, and load a different model before each run. Both options lead to the same results, but the latter may be especially attractive to users with limited CPU facilities. Whichever we choose, one must copy the produced UFO model directory to the models directory of MadGraph5 and then import it with the command import model UFO. We are now ready to generate matrix elements and cross-sections with MadGraph5.
For example, the cross-section for vector-boson scattering at the LHC is calculated with: generate p p > w+ w+ j j QCD=0 (with NP=0 giving the SM, NP<=1 the \({\mathcal {O}}(\Lambda ^{-2})\) and NP<=2 the \({\mathcal {O}}(\Lambda ^{-4})\) order). In order to highlight the significance of the quadratic (dim-6)\(^2\) corrections, we adopt for the input WCs large values which could arise from a hypothetical strongly coupled sector. The resulting cross-sections are given in Table 7, with further definitions in its caption. As we can see, the quadratic effects for \((C_W)^2\) are bigger than the linear contributions by a factor of 4400. For the pure scalar operators, the effects of the (dim-6)\(^2\) terms depend on the sign of \(C_{\varphi \Box }\), while the dim-8 coefficient \(C_{\varphi ^4 D^4}^{(i)}\) has an impact of about 100 on the cross-section. The tendency in the results for the pure scalar operators, presented in Table 7, follows the analytic amplitude expression for \(W_L^+W_L^+\) scattering in Eq. (3.19) below. To our knowledge, the effects of these (dim-6)\(^2\) and dim-8 modifications to the cross-section appear here for the first time in the literature.
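The difference between the NP<=1 and NP<=2 truncations can be illustrated with a toy squared amplitude. This sketch is not SmeftFR or MadGraph5 output; the numerical values are made up, and only the power counting mirrors the discussion above:

```python
# Toy illustration: how the linear O(1/Lambda^2) and quadratic
# O(1/Lambda^4) truncations of a dim-6 contribution differ.
def sigma_truncated(a_sm, a6, lam, order):
    """|A_SM + A6/Lambda^2|^2 truncated at the given power of 1/Lambda^2."""
    linear = 2.0 * a_sm * a6 / lam**2      # SM x dim-6 interference
    quadratic = (a6 / lam**2) ** 2         # (dim-6)^2 term
    if order == 1:
        return a_sm**2 + linear
    return a_sm**2 + linear + quadratic

# With a large WC (mimicking a strongly coupled UV sector),
# the quadratic term can dominate over the interference:
a_sm, a6, lam = 1.0, 500.0, 10.0
lin = sigma_truncated(a_sm, a6, lam, order=1) - a_sm**2
quad = sigma_truncated(a_sm, a6, lam, order=2) - a_sm**2 - lin
print(quad / lin)  # ratio of (dim-6)^2 to linear contribution
```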
4.5.3 SmeftFR to FeynArts
SmeftFR can generate a FeynArts output by just using the native FeynRules command,
This is also a time-consuming stage (taking roughly twice as long as the UFO file generation). The generated files are stored in the "output/FeynArts" directory. We can use the files suffixed *.gen, *.mod and *.pars in the patched FeynArts program, FormCalc [288] or FeynCalc [289]. As an example, we create tree-level diagrams for the vector-boson scattering \(W^+ W^+ \rightarrow W^+ W^+\) and isolate the longitudinal W bosons, \(W_L^+\). We obtain the tree amplitude at high energies expanded for \(s\gg M_W^2\), with \(\theta \) being the scattering angle,
This result agrees, up to linear dim-6 operators, with Ref. [290], whereas all other contributions, the quadratic (dim-6)\(^2\) and the linear dim-8 effects, are new. The advantage of using an \(R_\xi \) gauge (here the Feynman gauge) is that we can confirm this result by using the Goldstone-boson equivalence theorem, comparing Eq. (3.19) with the amplitude for charged Goldstone-boson scattering, \(G^+ G^+ \rightarrow G^+ G^+ \). Indeed, we find agreement. This is a serious non-trivial check, since the Feynman diagrams involved in \(W_L^+ W_L^+\) elastic scattering additionally contain the coefficients \(C_W, C_{\varphi WB}, C_{\varphi B}, C_{\varphi W}\) in a complicated way, but in the end their contributions cancel out. We have also verified that in the "AEM" input scheme the combination of WCs appearing in (3.19) is exactly the same and, therefore, the numerical result is identical. This is another check of the correctness of the SMEFT vertices generated by SmeftFR v3.
4.5.4 Conclusions
We briefly presented a step-by-step example illustrating the practical use and capabilities of the recently released SmeftFR v3 code [282]. SmeftFR derives Feynman rules for a desired set of WCs, consistently including corrections up to order \(O(1/\Lambda ^4)\) in the EFT expansion. SmeftFR generates interaction vertices in terms of chosen physical input parameters. Furthermore, it offers LaTeX output, as well as UFO and FeynArts model files useful for numerical and analytical calculations.
In Table 7 and in Eq. (3.19), we show an example in which (dim-6)\(^2\) and dim-8 operator effects should not be ignored when mapping experimental data onto their associated WCs. For such research, SmeftFR v3 is a requisite tool.
4.6 Application of EFT tools to the study of positivity bounds
Mikael Chala
Positivity bounds are restrictions on the S-matrix of well-defined relativistic quantum theories that follow from locality, unitarity and crossing symmetry. In order to discuss the findings of Refs. [104, 125], let us first consider any such theory with a low-energy spectrum coinciding with that of the SM and with heavy fields of mass \(\sim M\). For simplicity, let us focus on two-to-two Higgs scattering, \(\phi \phi \rightarrow \phi \phi \). In the forward limit, the corresponding scattering amplitude satisfies \({\mathcal {A}}(s)={\mathcal {A}}(-s)\) due to crossing symmetry, and it is analytic everywhere in the complex plane of the Mandelstam invariant s up to certain “mild” singularities (definitely not as severe as delta functions [291]).
To a first approximation, the only singularities of \({\mathcal {A}}(s)\) are single poles at \(s=\pm M^2\), from which it can easily be proven that \({\mathcal {A}}^{\prime \prime }(s=0) \ge 0\) [292]. But this positivity restriction is actually satisfied much more widely. For example, let us assume that the singularities of \({\mathcal {A}}(s)\) are branch cuts sitting along the \(\text {Re}(s)\) axis, with branch points at \(s\sim M^2\). Then, following Fig. 9, we can compute the quantity
which fulfils
where in the first equality we have used that, by virtue of the Froissart bound [293], the integral over the circular paths of \(\Gamma \) vanishes; in the second equality we have relied on the Schwarz reflection principle, \({\mathcal {A}}(s^*)={\mathcal {A}}(s)^*\); and in the last step we have invoked the optical theorem, which relates the imaginary part of the forward amplitude to the total cross section \(\sigma (s)\).
Now, by analyticity and using Cauchy's theorem, \(\Sigma \) can also be computed from the residue of \({\mathcal {A}}(s)/s^3\) at the origin, which is proportional to the second derivative of the amplitude at \(s=0\), from which we conclude again that \({\mathcal {A}}^{\prime \prime }(s=0)\ge 0\). Similar results can be drawn even in the case in which the branch cut extends all the way to \(s=0\) [294].
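The chain of relations just described can be written schematically as follows; this is a sketch under the stated assumptions (forward limit, crossing symmetry, Froissart-bounded growth), with illustrative normalizations and cut endpoints:

```latex
% Dispersive representation of the contour integral over Gamma:
\Sigma \;\equiv\; \frac{1}{2\pi i}\oint_{\Gamma}\! ds\,\frac{\mathcal{A}(s)}{s^{3}}
\;=\; \frac{2}{\pi}\int_{\sim M^{2}}^{\infty}\! ds\,\frac{\operatorname{Im}\mathcal{A}(s)}{s^{3}}
\;=\; \frac{2}{\pi}\int_{\sim M^{2}}^{\infty}\! ds\,\frac{\sigma(s)}{s^{2}} \;\ge\; 0\,,
% while Cauchy's theorem evaluates the same integral from the residue at s=0:
\qquad
\Sigma \;=\; \frac{1}{2}\,\mathcal{A}''(s=0)\,.
```

Here the two branch cuts contribute equally by crossing symmetry (hence the factor of 2), and \(\operatorname{Im}\mathcal{A}(s)= s\,\sigma(s)\) is the optical theorem in the massless normalization assumed for this sketch.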
Because the amplitude in the vicinity of the origin can be computed within the EFT, the aforementioned restriction translates to bounds on the parameters of the EFT. Thus, if this process occurs already at tree level, then
where \(\Lambda \sim M\) represents the cutoff of the EFT. (Note that odd terms in s, in particular the linear one and hence contributions from dimension-six EFT interactions, are absent due to the invariance of the amplitude under \(s\rightarrow -s\).) From this equation and \({\mathcal {A}}^{\prime \prime }(s=0)\ge 0\), it can be concluded that \(a_2\ge 0\).
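A schematic form of this tree-level forward amplitude makes the conclusion immediate; the coefficient names \(a_0, a_2\) and the normalization are illustrative:

```latex
% Forward EFT amplitude, even in s by crossing symmetry:
\mathcal{A}(s) \;=\; a_0 \;+\; a_2\,\frac{s^{2}}{\Lambda^{4}}
\;+\; \mathcal{O}\!\left(\frac{s^{4}}{\Lambda^{8}}\right)
\quad\Longrightarrow\quad
\mathcal{A}''(s=0) \;=\; \frac{2\,a_2}{\Lambda^{4}} \;\ge\; 0
\;\;\Longrightarrow\;\; a_2 \;\ge\; 0\,.
```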
If the EFT amplitude vanishes at tree level, then the WCs \(a_i\) must be understood as evaluated at a scale \(\mu \ll \Lambda \). For \(a_2\) in particular:
where \(\beta \) and \(\beta ^\prime \) are the one-loop beta functions induced by dimension-eight and by pairs of dimension-six operators, respectively. From this schematic point of view, several interesting conclusions can be drawn [104, 125]:

1. The matching contribution \(a_2(\Lambda )\) must always be non-negative at tree level.
2. It must also be non-negative if it does not run at one loop.
3. On the contrary, if it does run, then it can be negative.
4. \(\beta _2\) must be non-positive, because \(\beta _2'\) can be neglected in the limit of vanishing gauge and Yukawa couplings of the UV.
5. \(\beta _2^\prime \) must be non-positive whenever \(\beta _2\) is zero.
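Conclusion 3 can be illustrated with a toy leading-log running exercise; all numbers, the sign conventions, and the loop-factor normalization below are hypothetical, chosen only to reproduce the qualitative behaviour described in the text:

```python
import math

# Toy leading-log illustration: a one-loop matching contribution
# a2(Lambda) may be negative, but if beta2 <= 0 (tree-induced running),
# a2(mu) is driven positive in the deep IR.
def a2_at(mu, a2_matching, beta2, lam):
    """Leading-log solution of d a2 / d ln(mu) = beta2."""
    return a2_matching + beta2 * math.log(mu / lam)

lam = 1e4                                 # matching scale (GeV), illustrative
a2_matching = -0.5 / (16 * math.pi**2)    # loop-suppressed and negative
beta2 = -1.0 / (16 * math.pi**2)          # non-positive, tree-induced

print(a2_at(lam, a2_matching, beta2, lam) < 0)    # negative at the matching scale
print(a2_at(100.0, a2_matching, beta2, lam) > 0)  # positive in the deep IR
```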
4.6.1 Explicit computations using EFT tools
Conclusions 1–5 above can be explicitly checked within different UV completions of the SM. For concreteness, we focus mostly on the restrictions ensuing from the processes \(\varphi _i\varphi _j\rightarrow \varphi _i\varphi _j\), with \(\varphi _i\) representing any of the real degrees of freedom of the Higgs doublet \(\phi \). We assume that the Higgs is massless.
The condition \(a_2\ge 0\) translates into the bounds \(c_{\phi ^4 D^4}^{(2)}\ge 0\), \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}\ge 0\) and \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\ge 0\), where \(c_{\phi ^4 D^4}^{(1,2,3)}\) are the WCs of the only three \(\phi ^4 D^4\) SMEFT dimensioneight operators in the basis of Ref. [87]:
The benefit of testing conclusions 1–5 with explicit computations within concrete UV models is that it strengthens the confidence in results that might be hard to follow at a purely abstract level. In turn, a careful validation of these conclusions implies a thorough cross-check of the EFT tools (which entails performing highly non-trivial computations of matching and running up to dimension eight) against robust results supported by very fundamental physics principles.
In what follows, we describe the different ways in which we have tested conclusions 1–5 using matchmakereft [48], SuperTracer [117], and MatchingTools [112].
Tree-level matching Let us consider five different single-field extensions of the SM that induce \(\phi ^4 D^4\) operators at tree level:
This should be read as follows: \({\mathcal {S}}\) is a full singlet of \(SU(3)_c\times SU(2)_L\) with hypercharge \(Y=0\) (given in the subscript), which when integrated out produces the WCs (in arbitrary units) specified in the last parenthesis; likewise for the scalar triplet \(\Xi \) and for the three vectors. In these and all cases hereafter, the omitted UV couplings appear squared, so they are always positive.
The WCs above satisfy the positivity relations in a non-trivial way. For example, in the \({\mathcal {W}}\) case, \(c_{\phi ^4 D^4}^{(3)}\) and \(c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\) are negative, yet \(c_{\phi ^4 D^4}^{(2)}\), \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}\) and \(c_{\phi ^4 D^4}^{(1)}+c_{\phi ^4 D^4}^{(2)}+c_{\phi ^4 D^4}^{(3)}\) are all non-negative.
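The three positivity bounds can be encoded in a few lines. The following sketch uses illustrative numbers only, chosen to mimic the sign pattern quoted for the \({\mathcal {W}}\) case; they are not actual matching results:

```python
# Sketch of the positivity checks on the phi^4 D^4 WCs.
def positivity_ok(c1, c2, c3):
    """The three forward-amplitude positivity bounds on (c1, c2, c3)."""
    return c2 >= 0 and c1 + c2 >= 0 and c1 + c2 + c3 >= 0

# Illustrative values: c3 < 0 and c2 + c3 < 0, yet all bounds hold.
c1, c2, c3 = 3.0, 1.0, -2.0
print(positivity_ok(c1, c2, c3))   # individual WCs may be negative
print(c3 < 0 and c2 + c3 < 0)      # while the bounded combinations are not
```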
We have obtained these results with matchmakereft. Despite being “simply” a tree-level computation, the task is not as easy as it might seem. Within matchmakereft, where the matching is performed by computing one-light-particle-irreducible (1PI) Green's functions off-shell in both the UV and the IR, one needs to specify the full set of EFT operators independent up to field redefinitions, as well as their reduction to physical ones in on-shell observables. Fortunately, these results for the bosonic sector of the dimension-eight SMEFT can be found in Ref. [88]; see also Ref. [295]. But even implementing this into matchmakereft can be very cumbersome.
As an alternative cross-check, we have verified the values of the WCs by using MatchingTools, which performs the matching by solving the classical equations of motion. The advantage is that no EFT basis needs to be provided a priori, but the drawback is that the final result involves operators related by all kinds of redundancies (field redefinitions, integration by parts, different names for the same indices, ...). As an example, integrating out \({\mathcal {W}}\) within MatchingTools gives (suppressing couplings) [88]:
Our approach to reducing this Lagrangian consists of using dedicated routines to export the output of MatchingTools to FeynRules [155], from which it is in turn exported to FeynArts [190] and FormCalc [296], in which 1PI amplitudes are computed and matched onto the basis of Green's functions of Ref. [88]. The result is finally reduced onto a physical basis using the relations obtained from the equations of motion therein. It reads:
in agreement with matchmakereft (the ellipses stand for higherpoint interactions).
One-loop matching Let us now take the scalar singlet and triplet cases up to one loop, in the limit in which the only relevant couplings in the UV are the trilinear terms (which we set to unity). Working with matchmakereft, we get:
for the scalar case, and
in the triplet case. Here, we ignore the WCs that arise already at tree level. In both models, at least the condition \(c_{\phi ^4 D^4}^{(2)}\ge 0\) is broken, as expected from conclusion 3.
We have cross-checked this result with the help of SuperTracer [117]. To this aim, the output of SuperTracer is simplified within the code itself, and the final result is processed following the same strategy as with MatchingTools. This provides a very strong and robust test of the validity of both matchmakereft and SuperTracer.
Let us now consider scalar quadruplet extensions of the SM, with \(Y=1/2\) and \(Y=3/2\). These scalars couple linearly to three Higgses. The only operators that they induce at tree level are of the form \(\phi ^6 D^{2n}\), which do not renormalize \(\phi ^4 D^4\). Consequently, following conclusion 2, we expect the positivity bounds to hold. Indeed, from matchmakereft we obtain (again ignoring couplings):
for \(Y=1/2\) and
for \(Y=3/2\).
One-loop running Despite the breaking of positivity in one-loop matching, as highlighted for example in Eq. (3.29), the amplitude for \(\varphi _i\varphi _j\rightarrow \varphi _i\varphi _j\) is non-negative in the deep IR, because there it is dominated by the running of \(c_{\phi ^4 D^4}^{(2)}\) induced by tree-level operators. It can indeed be checked that \(\beta _{\phi ^4 D^4}^{(2)}\) is always non-positive [104, 297].
On the other hand, this implies that \(\beta _{\phi ^4 D^4}^{(2)\prime }\) need not be negative. Computing again with matchmakereft as well as with FeynArts+FormCalc, we obtain for example:
which is positive, for example, in the scalar singlet case ( \(c_{\phi ^4 D^4}^{(1,2)}=0\), \(c_{\phi ^4 D^4}^{(3)}=1\)). In the equation above, \(g_2\) stands for the \(SU(2)_L\) gauge coupling and the ellipses represent terms proportional to other SM couplings.
In cases where \(\beta _2\) vanishes, we do expect \(\beta _2'\) to be non-positive; see conclusion 5. One such case is given by the renormalisation of \(W^2\phi ^2 D^2\) operators (where W is the \(SU(2)_L\) gauge boson) by \(\phi ^4 D^4\) operators. We know that \(\beta _2\) vanishes in this case because loops with two insertions of \(\phi ^4 D^{2n}\) operators must have at least four Higgses.
Among the \(W^2\phi ^2 D^2\) operators, there is one that is restricted by the positivity of the amplitude for \(W\phi \rightarrow W\phi \). The corresponding \(\beta _2\) function reads:
The ellipses encode non-\(\phi ^4 D^4\) operators. This quantity is necessarily non-positive, because the expression in parenthesis is non-negative (at tree level). Indeed, we can recast it in the form
which is non-negative because the three terms in the sum are non-negative, as we saw before.
For all these calculations, we have relied on matchmakereft, with a full cross-check using FeynArts+FormCalc.
4.6.2 Towards fully-automated one-loop matching
Even with the help of current EFT tools, the explicit computations described before can become extremely tedious. This is because the Lagrangian resulting from integrating out the heavy degrees of freedom is highly redundant. To the best of our knowledge, there is no generic and publicly available method to reduce the effective Lagrangian to a physical basis in an automated way (Footnote 27). In this final section, we comment briefly on the approach we have adopted to face this problem, and on the progress we have made so far.
Our idea for automating the reduction of a redundant Lagrangian to a physical basis of operators consists in requiring explicitly that both Lagrangians yield exactly the same S-matrix for all processes that can be computed within the EFT (up to the corresponding order in the expansion in inverse powers of the cutoff).
In practice, this amounts to equating all needed tree-level on-shell connected and amputated Feynman graphs. As an example, let us focus here on the SMEFT Higgs sector up to dimension eight. For the redundant Lagrangian, we consider the one comprising all Higgs operators in the Green's basis of Ref. [298] (dimension six), together with those of Ref. [88] (dimension eight). For the physical one, we stick to the basis of Ref. [87]. The notation below follows the conventions in these references.
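Since tree-level amplitudes are linear in the WCs, equating them process by process yields a linear system for the physical WCs in terms of the redundant couplings. The following is a purely illustrative toy version of that strategy; the "amplitude" coefficients and coupling names are made up and do not correspond to any actual SMEFT operators:

```python
from fractions import Fraction as F

# Toy reduction: two "processes" fix two physical WCs c = (c1, c2)
# in terms of redundant couplings g = (g1, g2, g3).
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve [[a11, a12], [a21, a22]] c = (b1, b2) exactly by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    c1 = (b1 * a22 - a12 * b2) / det
    c2 = (a11 * b2 - b1 * a21) / det
    return c1, c2

g1, g2, g3 = F(1), F(2), F(-1)
# Hypothetical "physical" amplitudes: A1 = c1 + 2 c2 and A2 = 3 c2,
# equated to the "redundant" amplitudes A1 = g1 + g3 and A2 = g2.
c1, c2 = solve_2x2(F(1), F(2), g1 + g3, F(0), F(3), g2)
print(c1, c2)  # the physical WCs expressed through the redundant couplings
```

Exact rational arithmetic (here via `fractions`) mirrors the symbolic nature of the real computation, where the relations between bases must hold identically.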
Upon equating the resulting calculations in both theories, one obtains a set of equations from which the WCs of the physical theory can be so