A comprehensive approach to new physics simulations

We describe a framework for developing, implementing and validating any perturbative Lagrangian-based particle physics model for further theoretical, phenomenological and experimental studies. The starting point is FeynRules, a Mathematica package that generates the Feynman rules for any Lagrangian and then, through dedicated interfaces, automatically passes the corresponding relevant information to any supported Monte Carlo event generator. We demonstrate the power, robustness and flexibility of this approach by presenting a few examples of new physics models (the Hidden Abelian Higgs Model, the general Two-Higgs-Doublet Model, the most general Minimal Supersymmetric Standard Model, the Minimal Higgsless Model, Universal and Large Extra Dimensions, and QCD-inspired effective Lagrangians) and their implementation/validation in FeynArts/FormCalc, CalcHep, MadGraph/MadEvent, and Sherpa.


Introduction
At the Large Hadron Collider (LHC), discoveries will most probably not be an easy task. The typical final states produced at this proton-proton collider running at very high energies will involve a large number of jets, heavy-flavor quarks, as well as leptons and missing energy, providing an overwhelming background to many new physics searches. Complex signal final-state signatures will then require a very careful understanding of the detector and an accurate modeling of the data themselves. In this process, Monte Carlo (MC) simulations will play a key role in describing control data sets and devising robust search strategies. Already the first step, i.e., establishing "an excess over the Standard Model (SM) background", might be very difficult, depending on the type of signature involved [1]. At this stage, matrix-element-based MCs (which give reliable predictions for shapes and can still be tuned to some extent to the data) will be used to describe backgrounds and possibly candidate signals. For some specific signals, an accurate prediction of the background normalization and shapes, validated via control samples, could also be needed. At the same time, accurate measurements and comparisons with the best theoretical predictions (e.g., at next-to-next-to-leading order, resummation calculations, . . . ) of a set of standard-candle observables will also be mandatory to claim a good understanding and control of physics and detector effects. Very accurate predictions, possibly including even weak corrections, and a reliable estimate of errors (such as those introduced by the parton distribution functions) will then be needed.
Once the presence of excess(es) is confirmed, model-building activities will be triggered, following both top-down and bottom-up approaches. In each case, tools that are able to make predictions for wide classes of Beyond the Standard Model (BSM) physics, as well as those that help in building up an effective field theory from the data (such as the so-called OSET method [2]), could be employed. Finally, as a theoretically consistent picture arises, measurements of the parameters (masses, spin, charges) will be carried out. In this case it will be necessary to have at least next-to-leading order (NLO) predictions (i.e., a reliable normalization) for the signal processes. As our knowledge of the detector and of the newly discovered physics scenario gets stronger, more accurate determinations will be possible and sophisticated analysis tools could be employed, along the very same lines as current top quark analyses at the Tevatron collider; see, e.g., Ref. [3].
As schematically outlined above, Monte Carlo simulations will play a key, though different, role at each stage of the exploration of the TeV scale, i.e., the discovery and identification of BSM physics, and the measurement of its properties. The realization of the need for better simulation tools for the LHC has spurred an intense activity over the last years, which has resulted in several important advances in the field. At the matrix-element level, these include the development of general purpose event generators, such as CompHep/CalcHep [4,5,6], MadGraph/MadEvent [7,8,9,10], Sherpa [11,12] and Whizard [13], as well as high-efficiency multiparton generators which go beyond the usual Feynman diagram techniques, such as Alpgen [14], Helac [15] and Comix [16]. As a result, the problem of automatically generating leading-order matrix elements (and then cross sections and events) for a very large class of renormalizable processes has been solved. Quite amazingly, enormous progress has also been achieved very recently in the automation of NLO computations. First, the generation of the real corrections with the appropriate subtractions has been achieved in an automatic way [17,18,19,20,21]. Then, several new algorithms for calculating loop amplitudes numerically have been proposed (see, e.g., Ref. [22] for a review) and some of them have been successfully applied to the computation of SM processes of physical interest [23,24,25].
An accurate simulation of a hadronic collision requires a careful integration of the matrix-element hard process with the full parton showering and hadronization infrastructure, as efficiently provided by Pythia [26,27], Herwig [28,29] and Sherpa. Here also, significant progress has been made in the development of matching algorithms such as those by Catani, Krauss, Kuhn and Webber (CKKW) [30,31,32], Mangano (MLM) [33] and others [34,35,36], in their comparison [37,38] and in their application to SM [33,39] and BSM [40] processes. A breakthrough in merging fixed-order calculations and parton showers was achieved by Frixione, Webber and Nason [41,42], who showed how to correctly interface an NLO computation with a parton shower to avoid double counting, and delivered the first event generator at NLO, MC@NLO. More recently, a new method along the same lines, dubbed POWHEG, has been proposed [43] and applied to Drell-Yan and Higgs production [44,45,46,47].
The progress in the field of Monte Carlo tools outlined above shows that we are, or will soon be, able to simulate all the relevant SM processes at the LHC with an unprecedented level of accuracy. It is therefore worth considering the status of the predictions for physics Beyond the Standard Model. Quite interestingly, the challenges in this case are of a rather different nature. The main reason is that at present there is no leading candidate for BSM physics; instead, a plethora of models have been suggested, based on very different ideas and in continuous evolution. The implementation of complex BSM models in existing general purpose event generators like those enumerated above remains a long, often painstaking and error-prone process. The derivation of the numerous Feynman rules describing the new interactions, and their implementation in code following each generator's conventions, is a tedious and time-consuming activity. In addition, the validation of a given implementation often relies on a comparison of the obtained analytical and numerical results with those available in the literature. Again, due to the presence of various conventions, the restricted number of public results and the lack of a dedicated framework, such a comparison is often done manually, in a partial and unsystematic way. Finally, besides a handful of officially endorsed and publicly distributed BSM models (e.g., the Minimal Supersymmetric Standard Model), many implementations remain private or are used only by a restricted set of theorists and/or phenomenologists, and never get integrated into the official chain of simulation tools used by the experimental collaborations. Instead, various "home-made" or "hacked" versions of existing MC software are commonly used for specific BSM process studies, leading to problems in the validation, traceability and maintenance procedures.
In this work we address the problem of having an efficient framework in which any new physics model can be developed and its phenomenology tested against data. A first step in the direction of deriving Feynman rules automatically starting from a model Lagrangian has been made in the context of the CompHep/CalcHep event generator with the LanHep package [48]. Our aim is to go beyond this scheme and create a general and flexible environment where communication between theorists and experimentalists in both directions is fast and robust. Among the desiderata for a new physics phenomenological framework linking theory to experiment and vice versa, to which we provide a solution, are the full traceability of event samples and the possibility of following both top-down and bottom-up approaches in a natural way. This paper is organized as follows. In Section 2, by discussing a simple example, we expose the strategy which we propose to address the challenges that model builders, phenomenologists and experimentalists have to face when studying the phenomenology of a new physics model. In Section 3, we briefly recall how the FeynRules package works and present some of the new features recently implemented. Section 4 contains a brief description of the various interfaces already available. Section 5 contains information about the models that have already been implemented. In Section 6 we present our strategy to validate BSM model implementations, and illustrate our procedure on the already implemented models. Finally, in Section 7, we discuss the outlook of our work. In the appendices we collect some technical information as well as a few representative validation tables, which constitute the quantitative results of this paper.

A simple example: from the Standard Model to the Hidden Abelian Higgs Model
From the phenomenological point of view, we can distinguish two classes of BSM models. The first class consists of straightforward extensions of the SM, obtained by adding one (or more) new particles and interactions to the SM Lagrangian. In this bottom-up approach, one generally starts from the SM and adds a set of new operators according to some (new) symmetry. The second class of models is obtained in a top-down approach, where the fundamental Lagrangian is determined by the underlying global and local symmetries, and the SM is only recovered in some specific limit.
In this section we describe in detail our framework to develop, test, implement and validate any perturbative Lagrangian-based particle physics model. We concentrate on the case of bottom-up models, and show how our framework allows us to easily extend the SM and to go in a straightforward way from model building to the study of the phenomenology. We will comment on top-down models in subsequent sections.

The model
As an illustration, we use the Hidden Abelian Higgs (HAH) Model, described in Ref. [49]. This model can be seen as the simplest way to consistently add a new gauge interaction to the SM. More specifically, we consider an SU(3)_C × SU(2)_L × U(1)_Y × U(1)_X gauge theory where all SM particles are singlets under the new gauge group U(1)_X. A new Higgs field φ is added that is also a singlet under the SM gauge group and breaks the U(1)_X symmetry when it acquires its vacuum expectation value (vev), ⟨φ⟩ = ξ/√2. The most general Lagrangian describing this model is given by

    L_HAH = L_HAH,Gauge + L_HAH,Fermions + L_HAH,Higgs + L_HAH,Yukawa ,

with

    L_HAH,Gauge = -1/4 G^a_{μν} G^{a,μν} - 1/4 W^i_{μν} W^{i,μν} - 1/4 B_{μν} B^{μν} - χ/2 X_{μν} B^{μν} - 1/4 X_{μν} X^{μν} ,
    L_HAH,Higgs = (D_μ Φ)† D^μ Φ + (D_μ φ)† D^μ φ + μ² Φ†Φ - λ (Φ†Φ)² + μ_X² φ†φ - ρ (φ†φ)² - κ (φ†φ)(Φ†Φ) ,    (2.2)

where Φ denotes the SM Higgs field. The covariant derivative reads

    D_μ = ∂_μ - i g_s T^a G^a_μ - i g σ^i/2 W^i_μ - i g' Y B_μ - i g_X X X_μ ,    (2.3)

and we define the field strength tensors,

    G^a_{μν} = ∂_μ G^a_ν - ∂_ν G^a_μ + g_s f^{abc} G^b_μ G^c_ν ,
    W^i_{μν} = ∂_μ W^i_ν - ∂_ν W^i_μ + g ε^{ijk} W^j_μ W^k_ν ,
    B_{μν} = ∂_μ B_ν - ∂_ν B_μ ,    X_{μν} = ∂_μ X_ν - ∂_ν X_μ .    (2.4)

g_s, g, g' and g_X denote the four coupling constants associated with the SU(3)_C × SU(2)_L × U(1)_Y × U(1)_X gauge groups, while T^a, σ^i, Y and X are the corresponding generators and ε^{ijk} represents the totally antisymmetric tensor. We do not write explicitly the terms in the Lagrangian describing the matter sector of the theory, as they are identical to those of the SM described in detail in Section 5.1,

    L_HAH,Fermions = L_SM,Fermions  and  L_HAH,Yukawa = L_SM,Yukawa .    (2.5)

The kinetic mixing term in L_HAH,Gauge induces a mixing between the two U(1) gauge fields, and thus a coupling between the matter fermions and the new gauge boson. The kinetic terms for the U(1) fields can be diagonalized via a GL(2,R) rotation,

    B_μ = B̃_μ - η X̃_μ ,    X_μ = √(1+η²) X̃_μ ,    with η = χ/√(1-χ²) .

After this field redefinition, the gauge sector takes the diagonal form, but the covariant derivative now contains an additional term coupling the field X̃_μ to the hypercharge,

    D_μ ⊃ - i g' Y B̃_μ - i ( √(1+η²) g_X X - η g' Y ) X̃_μ .
When the Higgs fields acquire their vevs, the gauge symmetry gets broken to SU(3)_C × U(1)_EM, and we obtain a non-diagonal mass matrix for the neutral weak gauge bosons, which can be diagonalized by an O(3) rotation relating the gauge eigenstates (B̃_μ, W³_μ, X̃_μ) to the mass eigenstates (A_μ, Z_μ, Z'_μ). The mixing angles are determined by η and by the ratio ∆_Z = M²_X/M²_{Z⁰}, where M_X and M_{Z⁰} denote the masses of the gauge bosons before the kinetic mixing,

    M²_{Z⁰} = (g² + g'²) v²/4 ,    M²_X = g_X² q_X² ξ² ,    (2.12)

and q_X denotes the U(1)_X charge carried by φ. The photon remains massless, while the two other states acquire masses M_Z and M_{Z'}, fixed by M_{Z⁰}, M_X and η, Eq. (2.13). As a result of electroweak symmetry breaking, non-diagonal mass terms for the Higgs fields appear that can be diagonalized via an orthogonal transformation, where the mixing angle and mass eigenvalues are given by

    tan 2θ_h = κ v ξ / (ρ ξ² - λ v²) ,    M²_{1,2} = λ v² + ρ ξ² ∓ √( (ρ ξ² - λ v²)² + κ² v² ξ² ) .    (2.15)
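The scalar mixing formulas quoted above can be checked by diagonalizing the CP-even scalar mass matrix directly. Assuming a scalar potential of the form V = -μ² Φ†Φ + λ (Φ†Φ)² - μ_X² φ†φ + ρ (φ†φ)² + κ (Φ†Φ)(φ†φ) (our sign conventions), expanding around the vevs v and ξ gives a 2×2 mass matrix whose eigenvalues and mixing angle reproduce Eq. (2.15):

```latex
\mathcal{M}_h^2 =
\begin{pmatrix}
 2\lambda v^2 & \kappa\, v\,\xi \\
 \kappa\, v\,\xi & 2\rho\, \xi^2
\end{pmatrix},
\qquad
M_{1,2}^2 = \lambda v^2 + \rho\,\xi^2
 \mp \sqrt{\left(\rho\,\xi^2 - \lambda v^2\right)^2 + \kappa^2 v^2 \xi^2},
\qquad
\tan 2\theta_h = \frac{\kappa\, v\,\xi}{\rho\,\xi^2 - \lambda v^2}.
```

The eigenvalues follow from the standard diagonalization of a symmetric 2×2 matrix [[a,c],[c,b]], with eigenvalues (a+b)/2 ∓ √(((b-a)/2)² + c²) and tan 2θ = 2c/(b-a).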
Once the Lagrangian is written down and diagonalized in terms of mass eigenstates, one can easily identify the minimal set of parameters which the model depends on. Not all the parameters introduced above are independent, as most of them are related by algebraic relations, e.g., the relation between the mass eigenvalues of the gauge bosons, Eq. (2.13), and the fundamental parameters appearing in the Lagrangian. Our choice of independent input parameters is given in Table 1. All other parameters appearing in L_HAH,Gauge and L_HAH,Higgs can be reexpressed in terms of these parameters. Note, however, that there are strong experimental constraints from LEP on the masses and the couplings of additional neutral gauge bosons to fermions, which need to be taken into account when building the model. In Ref. [49] it was pointed out that η = 0.01 is still allowed. In general, in order to determine a benchmark point that takes into account the direct and indirect experimental constraints, one needs to perform (loop) computations for several physical observables. We will comment more on this in the next subsection. Let us note that, although Eq. (2.2) is a very simple extension of the SM, from a more technical point of view an implementation of the HAH model in a matrix-element generator is already non-trivial. In this case, it is not sufficient to start from the existing SM implementation and just add the vertices contained in L_HAH,Higgs, because the mixing in the gauge and scalar sectors implies that all SM vertices involving a Higgs boson and/or a Z boson need to be modified. For example, although there is no direct coupling between the abelian Higgs field φ and the matter fermions, all the Yukawa couplings receive contributions from the two mass eigenstates h_1 and h_2, weighted by the mixing angle θ_h, resulting in an almost complete rewriting of the SM implementation.
In the next subsection we will describe how this difficulty can be easily overcome and the phenomenology of the Hidden Abelian Higgs model studied.

From model building to phenomenology
The starting point of our approach is FeynRules (see Section 3). Since in this case we are interested in a simple extension of the SM, it is very easy to start from the FeynRules implementation of the SM, which is included in the distribution of the package, and to extend the model file by including the new particles and parameters, as well as the HAH model Lagrangian of Eq. (2.2). Note that, at variance with a direct implementation into a matrix-element generator, where the vertices need to be implemented one at a time, in FeynRules we can work with a Lagrangian written in terms of the gauge eigenstates and only perform the rotation to the mass eigenbasis as a second step. This implies that it is not necessary to modify L_HAH,Fermions, since the new fields only enter through L_HAH,Gauge and L_HAH,Higgs.
Several functions are included in FeynRules to perform sanity checks on the Lagrangian (e.g., hermiticity). The diagonalization of the mass matrices can be easily performed directly in Mathematica, and FeynRules allows us to easily obtain the Feynman rules for the model. As already mentioned, not only are the Feynman rules of the Higgs and gauge sectors modified with respect to the Standard Model, but the interaction vertices in the fermionic sector also change, due to the mixing of the scalars and the neutral gauge bosons. After this preliminary study of our model, where the mass spectrum of the theory was obtained and basic sanity checks were performed, the model is typically confronted with all relevant direct and indirect constraints coming from experiment. This is a necessary step to find the areas of parameter space which are still viable. Once interesting regions in parameter space are identified, the study of the collider phenomenology of the model, e.g., at the LHC, can start with the calculation of cross sections and decay branching ratios. Let us first consider the calculation of the decay widths of both SM and new particles. Using the FeynArts implementation of the new model obtained via the FeynRules interface, it becomes a trivial exercise to compute analytically all tree-level two-body decays for the Higgs bosons and the Z' boson (alternatively, one could calculate them numerically via, e.g., CalcHep or MadGraph/MadEvent). The results for the branching ratios of the dominant decay modes are shown, for the benchmark scenario considered here, in Table 2. Once decay widths are known, cross sections can be calculated. However, in many cases it is insufficient to have only predictions for total cross sections, as a study of differential distributions, with possibly complicated multi-particle final states, is necessary to dig the signal out of the backgrounds.
Furthermore, even a parton-level description of the events might be too simplified, and additional radiation coming from the colored initial- and final-state particles, as well as effects coming from hadronization and the underlying event, need to be accounted for. For this reason, phenomenological studies are in general performed using generators which include (or are interfaced to) parton shower and hadronization Monte Carlo codes. The parton-level events for the hard scattering can be generated by a general purpose matrix-element (ME) program, and those events are then passed on to the parton shower codes, which evolve the parton-level events into physical hadronic final states. However, similar to FeynArts/FormCalc, for new models the ME generators require the form of the new vertices, and different programs use different conventions for the vertices, making it difficult to export an implementation from one ME generator to another. To solve this issue, FeynRules includes interfaces to several ME generators that output the interaction vertices obtained by FeynRules directly in a format that can be read by the external codes. For the moment, such interfaces exist for CalcHep/CompHep, MadGraph/MadEvent and Sherpa. It should be emphasized that some of these codes have the Lorentz and/or color structures hardcoded, which limits the range of models that can be handled by a given MC. In this respect (and others) each of the MC tools has its own strengths and weaknesses: having several possibilities available maximizes the chances that at least one generator is able to efficiently deal with a given model, and the cases in which several MC tools can be used, as for most of the examples discussed in this paper, allow for a detailed comparison and robust validation of the implementations.
For the sake of illustration, we used the MadGraph/MadEvent implementation of the HAH model to generate signal events for the gg → h_2 → h_1 h_1 → γγbb signal proposed in Ref. [50] as a signature of this model at the LHC. Using the same set of cuts and the same smearing method as in Ref. [50], we have been able to easily generate final-state invariant mass distributions for both signal and background events. The result can be seen in Fig. 1, which compares well with Fig. 5 of Ref. [50].

Validation of New Physics Models
In the previous subsection we discussed how the implementation of the model in FeynRules allows one to go all the way from model building to phenomenology without having to deal with the technicalities of the various ME generators. In this section we argue that our approach not only allows one to exploit the strengths of each ME generator, but also provides a powerful new way of validating BSM models, by directly comparing various ME generators among themselves. Since the various ME generators use different conventions for the interaction vertices, it becomes possible to compare the implementations of the same model in different matrix-element generators, and thus the different tools, among themselves. Furthermore, in many cases generator-specific implementations of BSM models already exist, at least for restricted classes of models, in which case the FeynRules model can be directly validated against the existing tool A, and then exported to any other tool B for which an interface exists. In the same spirit, any BSM model should be able to reproduce the SM results for observables which are independent of the new physics. Let us illustrate this procedure through the example of the HAH model. We start by implementing our model into CalcHep, MadGraph/MadEvent and Sherpa by means of the corresponding interfaces. We then compute the cross sections for all interesting two-to-two processes for this model, and check that we obtain the same numbers from every ME generator. Note that, since in this case we have modified the scalar sector of the SM, we pay particular attention to the unitarity cancellations inherent to the SM in weak boson scattering, showing in this way that our implementation does not spoil these cancellations.
Since the different codes used for the computation of the cross sections all rely on different conventions for the interaction vertices, we demonstrated the consistency of our model not only by checking that the cancellations indeed take place in all the implementations, but also by verifying that the same results for the total cross section are obtained after the cancellation, a strong check very rarely performed for a general BSM model implementation. We will comment on the validation of more general models in subsequent sections.
Finally, let us note that a robust implementation of a BSM model into a code not only requires the model to be validated to a very high level, but the implementation should also be clearly documented in order to ensure its portability and reproducibility. For this reason FeynRules includes a TeX output, which allows the user to write the content of the FeynRules model files in a human-readable format, including the particle content and the parameters which define the model, as well as the Lagrangian and the Feynman rules.

From phenomenology to experiment
Our approach does not only apply to phenomenological studies at the theory level, but in principle also makes it possible to pass the model on to an experimental collaboration for full experimental studies. In general, the experimental software frameworks used to simulate detector effects have very strict requirements regarding sanity checks for new modules (e.g., private Monte Carlo programs) to be included in the software, and a very long and tedious validation procedure is needed. However, if we come to the point where we have to discriminate between various competing models at the LHC, this approach becomes extremely inefficient due to the large number of tools to be validated. In our approach this validation can be avoided, streamlining in this way the whole chain from model building to the experimental studies and vice versa.
Let us again illustrate this statement through the example of the HAH model. Since this model is now implemented in FeynRules, we can easily pass it into various ME generators using the translation interfaces, and we demonstrated the validation power inherent to this approach in the previous section. These models can then be used in a ME generator in the same way as any other built-in model, without any modification to the original code, i.e., without creating a private version of the ME generator. If the model is validated it can easily be passed on to the experimental community, which can then read the FeynRules output and use it in their favorite ME generator already embedded in their software framework.
By following this procedure, tedious validations for each model implementation in a given MC are avoided. In addition, the portability and the reproducibility of the experimentally tested models are guaranteed. As the origin of the chain is solely the FeynRules model file, all the information is concentrated in one place, and thus everybody can reproduce all the results at any stage, from the model building to the collider signatures, by starting from the very same file. In addition, since the FeynRules model file contains the Lagrangian of the model, it is very easy to go back to the model file and understand its physical content, a step which might be very difficult with manually created model files for the ME generators, written in an often rather cryptic programming language hiding the essential physics. In this way it becomes very easy to reuse the very same model information later, and to reproduce analyses by just changing benchmark points or by adding a single new particle/interaction to the same model.

FeynRules in a nutshell
FeynRules is a Mathematica package which allows the user to derive Feynman rules directly from a Lagrangian [51]. The user provides the Lagrangian for his/her model (written in Mathematica) as well as all the information about the particle and parameter content of the model. This information uniquely defines the model, and hence is enough to derive all the interaction vertices from the Lagrangian. FeynRules can in principle be used with any model which fulfills basic quantum field theoretical requirements (e.g., Lorentz and gauge invariance), the only current limitation coming from the kinds of fields supported by FeynRules (see Table 3). In particular, it can also be used to obtain Feynman rules for effective theories involving higher-dimensional operators. In a second step, the interaction vertices obtained by FeynRules can be exported by the user to various matrix-element generators by means of a set of translation interfaces included in the package. In this way the user can directly obtain an implementation of his/her model into these various tools, making it straightforward to go from model building to phenomenology. Presently, interfaces to CalcHep/CompHep, FeynArts/FormCalc, MadGraph/MadEvent and Sherpa are available. In the following we briefly describe the basic features of the package and the model files, the interfaces to the matrix-element generators being described in Section 4. For more details on both the FeynRules package and the interfaces, we refer the reader to the FeynRules manual and to the FeynRules website [51,52].

Table 3: Field types supported by FeynRules.
    S   Scalar fields
    F   Fermion fields (Dirac and Majorana)
    V   Vector fields
    U   Ghost fields
    T   Spin-two fields

Model description
The FeynRules model definition is an extension of the FeynArts model file format and consists of the definitions of the particles, parameters and gauge groups that characterize the model, together with the Lagrangian. This information can be placed in a text file or in a Mathematica notebook, or in a combination of the two, as convenient for the user.
Let us start with the particle definitions. Following the original FeynArts convention, particles are grouped into classes describing "multiplets" having the same quantum numbers, but possibly different masses. Each particle class is defined in terms of a set of class properties, given as a Mathematica replacement list. For example, the up-type quarks can be grouped into a single class, defining a Dirac fermion (F) represented by the symbol q. Note that the antiparticles are automatically declared and represented by the symbol qbar. This class has three members, u, c, and t (ubar, cbar, and tbar for the antiparticles, respectively), distinguished by a generation index (whose range is defined at the beginning of the model definition). These fields carry an additional index labelled Colour. The complete set of allowed particle classes is given in Table 3. Additional information, like the mass and width of the particles, as well as the U(1) quantum numbers carried by the fields, can also be included. Finally, some more specific information not directly needed by FeynRules but required by some of the matrix-element generators (e.g., the Particle Data Group (PDG) codes [53]) can also be defined. A complete description of the particle classes and properties can be found in the FeynRules manual.
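A declaration of such an up-type quark class could be sketched as follows (the property names are those of the FeynRules model-file syntax; the specific symbols and numerical mass values shown here are illustrative choices, not necessarily those of the official SM implementation):

```mathematica
(* Up-type quark class: a Dirac fermion q with three members *)
F[1] == {
  ClassName     -> q,
  ClassMembers  -> {u, c, t},
  SelfConjugate -> False,
  Indices       -> {Index[Generation], Index[Colour]},
  FlavorIndex   -> Generation,
  (* a generic symbol, then one {symbol, value} pair per member *)
  Mass          -> {Mq, {MU, 0.003}, {MC, 1.27}, {MT, 172.}},
  PDG           -> {2, 4, 6}
}
```

The antiparticle symbols (qbar, ubar, cbar, tbar) are generated automatically from this single declaration.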
A Lagrangian is not only defined by its particle content, but also by the local and global symmetries defining the model. FeynRules allows the user to define gauge group classes in a way similar to the particle classes. As an example, the QCD gauge group can be declared by specifying its coupling constant, structure constant and gauge boson, the gluon field G being defined together with the quark field during the particle declaration. The declaration of abelian gauge groups is analogous. FeynRules uses this information to construct the covariant derivative and the field strength tensor which the user can use in his/her Lagrangian. The third main ingredient to define a model is the set of parameters which it depends on. In general, not all the parameters appearing in a Lagrangian are independent, but they are related through certain algebraic relations specific to each model, e.g., the relation cos θ_w = M_W/M_Z relating at tree level the masses of the weak gauge bosons to the electroweak mixing angle. FeynRules therefore distinguishes between external and internal parameters. External parameters denote independent parameters which are given as numerical inputs to the model, such as an external parameter aS with numerical value 0.118. Several other properties representing additional information needed by matrix-element generators are also available, and we refer to the FeynRules manual for an extensive list of parameter class properties. Internal parameters are defined in a similar way, except that the Value is given by an algebraic expression linking the parameter to other external and/or internal parameters; the cos θ_w parameter above is an example. Note that it is also possible to define tensors as parameters in exactly the same way, as described in more detail in the manual. At this point, we need to make a comment about the conventions used for the different particle and parameter names inside FeynRules.
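To make the gauge-group and parameter declarations just described concrete, they could be sketched as follows (FeynRules model-file syntax; the property lists in the distributed SM implementation may be longer):

```mathematica
(* QCD gauge group: non-abelian SU(3) with gluon G *)
M$GaugeGroups = {
  SU3C == {
    Abelian           -> False,
    GaugeBoson        -> G,
    StructureConstant -> f,
    CouplingConstant  -> gs
  }
};

M$Parameters = {
  (* external parameter: an independent numerical input *)
  aS == { ParameterType -> External, Value -> 0.118 },
  (* internal parameter: fixed by an algebraic relation *)
  cw == { ParameterType -> Internal, Value -> MW/MZ }
};
```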
In principle, the user is free to choose the names for the gauge groups, particles and parameters at his/her convenience, without any restriction. The matrix-element generators, however, have certain information hardcoded (e.g., references to the strong coupling constant or to the electroweak input parameters). For this reason, conventions regarding the implementation of certain SM parameters have been established to ensure the proper translation to the matrix-element generators. These are detailed in the manual and recalled in Appendix A.
In complicated models with large parameter spaces, it is sometimes preferable to restrict the model to certain slices of that parameter space. This can be done as usual by adjusting the parameters to lie at that particular point in parameter space and doing the desired calculation. However, sometimes these slices of parameter space are special in that many vertices become identically zero, and including them in the Feynman diagram calculation can be very inefficient. In order to allow both the general parameter space and a restricted parameter space, we introduce model restrictions. A model restriction is a Mathematica list containing replacements for the parameters which simplify the model. For example, in the SM the CKM matrix has non-zero off-diagonal matrix elements, but it is sometimes useful to restrict a calculation to a purely diagonal CKM matrix. Rather than creating a new implementation of the SM with a diagonal CKM matrix, a restriction setting the off-diagonal CKM elements to zero can be created and used when desired. When this restriction is applied, all vertices containing off-diagonal CKM elements vanish identically and are removed before the model is passed on to a matrix-element generator. As a result, these vertices never appear in Feynman diagrams and the calculation is more efficient. Several restriction files can be created, corresponding to various slices of parameter space. The user then selects the restriction file they are interested in and applies it to the model before running the translation interfaces. The model itself can be implemented in as many text files as convenient, or directly in the Mathematica notebook, in which case the list of model files to load would be empty. The restriction definitions can likewise be placed in a file or directly in the Mathematica notebook.
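A diagonal-CKM restriction of this kind might be written as the following replacement list (a sketch; the restriction files shipped with FeynRules may organize and name this differently):

```mathematica
(* Restriction: replace the CKM matrix by the identity, so that
   every off-diagonal element, and hence every vertex containing
   one, vanishes identically before export. *)
M$Restrictions = {
  CKM[i_, j_] :> IndexDelta[i, j]
};
```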
The Lagrangian can now be entered directly into the notebook using standard Mathematica commands, augmented by some special symbols representing specific objects like Dirac matrices; an example is the QCD Lagrangian. FeynRules then computes all the interaction vertices associated with the Lagrangian L and stores them in the variable verts. The vertices can be used for further computations within Mathematica, or they can be exported to one of the various matrix-element generators for further phenomenological studies of the model. The translation interfaces can be called directly from within the notebook, e.g., for the FeynArts interface,

WriteFeynArtsOutput[ L ];
This will produce a file formatted for use in FeynArts. All other interfaces are called in a similar way. As already mentioned, even though FeynRules itself is not restricted and can derive Feynman rules for any Lagrangian, the matrix-element generators usually have some information on the Lorentz and color structures hardcoded, and are therefore much more limited in the set of vertices they can handle. Each matrix-element generator has its own strengths, and in the FeynRules approach the same model can easily be exported to various codes, thereby exploiting the strengths of each individual tool. In practice, the interfaces check whether all the vertices are compliant with the structures supported by the corresponding matrix-element generator. If not, a warning is printed and the vertex is discarded. Each interface finally produces a (set of) text file(s), often organized in a single directory, which can be read into the matrix-element generator at runtime and allows the new model to be used in the same way as any built-in model. For more details on the various interfaces, we refer to Section 4 and to the manual.
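To make the workflow above concrete, a minimal sketch of entering the QCD Lagrangian and extracting its vertices might read as follows (the symbols FS, DC, Ga and the field names G, q, qbar follow the FeynRules conventions for field strengths, covariant derivatives and Dirac matrices; treat the precise index placement as illustrative):

```mathematica
(* Sketch: QCD Lagrangian in FeynRules notation *)
LQCD = -1/4 FS[G, mu, nu, a] FS[G, mu, nu, a]
       + I qbar.Ga[mu].DC[q, mu];

(* Compute the interaction vertices and store them for later use *)
verts = FeynmanRules[ LQCD ];
```

The list verts can then be inspected within Mathematica or handed to any of the translation interfaces discussed below.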

Interfaces
In this section we provide a concise description of the FeynRules interfaces to several matrix-element generators and symbolic tools available to perform simulations/calculations in Lagrangian-based theories. The most important features of the general structure of new physics models in these codes, as well as their limitations, are emphasized. A complete description of the options and more technical details can be found in the FeynRules user's manual, available on the FeynRules website. Interfaces to other codes, once available, will be included in the main release of the package and documented in the user's manual.

FeynArts/FormCalc
FeynArts is a Mathematica package for generating, computing and visualizing Feynman diagrams, both at tree level and beyond [54]. For a given process in a specific model, FeynArts starts by generating all the possible topologies, taking into account the number of external legs and internal loops associated with the process under consideration, together with the set of constraints required by the user, such as the exclusion of one-particle-reducible topologies.
This stage is purely topological and does not require any physical input. Based on a pre-defined library containing topologies without external legs for tree-level, one-loop, two-loop and three-loop calculations, the algorithm successively adds the desired number of external legs. Then, the particles present in the model must be distributed over the obtained topologies in such a way that the resulting diagrams contain the external fields corresponding to the considered process and only vertices allowed by the model. Finally, a Mathematica expression for the sum over all Feynman diagrams is created. The second step of the perturbative calculation consists of evaluating the amplitudes generated by FeynArts. This can be handled with the help of the Mathematica package FormCalc, which simplifies the symbolic expressions previously obtained in such a way that the output can be directly used in a numerical program [55]. FormCalc first prepares an input file that is read by the program Form, which performs most of the calculation [56,57,58,59,60]. The Lorentz indices are contracted, the fermion traces are evaluated, the color structures are simplified using the SU(3) algebra, and the tensor reduction is performed. The results are expressed in a compact form through abbreviations and then read back into Mathematica, where they can be used for further processing. This combines the speed of Form with the powerful instruction set of Mathematica.

Model framework
The FeynArts models have a very simple structure which can easily be extended to include BSM models. In particular, the current distribution of FeynArts already contains several models, including a complete implementation of the Standard Model, as well as a Two-Higgs-Doublet Model and a completely generic implementation of the Minimal Supersymmetric Standard Model.
The FeynArts models are separated into two files:
• The generic model file: This file is not specific to any model, but contains the expressions for the propagators and the Lorentz structures of the vertices for generic scalar, fermion and vector fields. Note that since this file is not specific to any model, different BSM models can be related to the same generic model file.
• The classes model file: This file is dedicated to a specific model, and contains the declarations of the particles and the analytic expressions of the couplings between the different fields. This information is stored in the two lists M$ClassesDescription for the particle declarations and M$ClassesCouplings for the couplings.
FeynArts requires all the particles to be grouped into classes, and as a consequence all the couplings must also be given at the level of the particle classes. When this is the case, the number of Feynman diagrams generated at runtime is much smaller, which speeds up the code. Since the FeynRules model files are an extension of the FeynArts classes model files, the structure of the particle class definitions is very similar to that of the FeynRules particle classes discussed in Section 3.

FeynRules interface
FeynRules includes an interface that allows one to output the interaction vertices derived from the Lagrangian as a FeynArts model file. Note, however, that at the present stage only the classes model file is generated by FeynRules; the generic model file is hardcoded. The generic model file used by FeynRules-generated models, feynrules.gen, is included in the FeynRules distribution in the Interfaces/FeynArts subdirectory and needs to be copied once and for all into the Models directory of FeynArts. feynrules.gen is based on the corresponding Lorentz.gen file included in FeynArts, with some extensions to higher-dimensional scalar couplings such as those appearing in non-linear sigma models (see Section 5.6).
The FeynRules interface to FeynArts can be called within a Mathematica notebook via the command

WriteFeynArtsOutput[ L ];
FeynRules then computes all the vertices associated with the Lagrangian L and checks whether they all have Lorentz structures compatible with the generic couplings in the generic model file; if so, it extracts the corresponding classes coupling. If not, a message is printed on the screen and the vertex is discarded. At this point we should emphasize that, in order to obtain FeynArts couplings at the level of the particle classes, the Lagrangian must also be given completely in terms of particle classes. If the interface encounters a Lagrangian term which violates this rule, it stops and redefines all the classes such that each particle lives in its own class. It then starts over and recomputes all the interaction vertices, this time for a Lagrangian where all particle classes are expanded out explicitly. In this way a consistent FeynArts model file is obtained which can be used with FeynArts. It should however be noted that the generation of the diagrams can be considerably slower in this case, which makes it desirable to write the Lagrangian in terms of particle classes whenever possible.
The model file produced by FeynRules has the usual FeynArts structure. Besides the lists which contain the definitions of the particle classes and the couplings, the FeynRules-generated model files contain some additional information, which can be useful at various stages during the computation:
- M$ClassesDescription: This is in general a copy of the corresponding list in the original FeynRules model file.
- M$ClassesCouplings: Each entry in the list represents a given interaction between the particle classes, together with the associated coupling constant, represented by an alias gcxx, xx being an integer. Note that at this level FeynRules does not compute the counterterms necessary for loop calculations; these should be added by the user by hand.
- M$FACouplings: A replacement list containing the definitions of the couplings gcxx in terms of the parameters of the model.

CalcHep/CompHep
The CalcHep [4,6] and CompHep [4,5] programs automate the tree-level Feynman diagram calculation and the production of parton-level collider events. Models with very general Lorentz structures are allowed, and general color structures can be incorporated via auxiliary fields. Vertices with more than four particles are not supported at this time. In this subsection, we describe the model file structure and how the FeynRules interface to CalcHep and CompHep works.

Model framework

Models in CalcHep and CompHep are essentially comprised of four files:
• prtclsN.mdl: a list of all the particles in the model, along with the information about the particles that is necessary for the calculation of Feynman diagrams.
• varsN.mdl: a list of the independent (external) parameters in the model along with their numerical value.
• funcN.mdl: a list of the dependent (internal) parameters of the model along with their functional definition. These definitions can contain any standard mathematical functions defined in the C code.
• lgrngN.mdl: a list of all the vertices in the model. It includes the specification of the particles involved in the vertex, an overall constant to multiply the vertex with and the Lorentz form of the vertex.
Note that the letter N in the names of the files is an integer which refers to the number of the model.
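To give a rough feel for these formats, single entries in varsN.mdl, funcN.mdl and lgrngN.mdl might look schematically as follows (illustrative only; the exact column layouts and separators are defined by CalcHep and documented in its manual, and the parameter names are placeholders):

```
varsN.mdl  (external):  EE |0.3123   |electromagnetic coupling
funcN.mdl  (internal):  CW |MW/MZ    |cosine of the weak mixing angle
lgrngN.mdl (vertex)  :  G  |G  |G  |  |GG |m1.m2*(p1-p2).m3+m2.m3*(p2-p3).m1+m3.m1*(p3-p1).m2
```

The vertex line lists the particles involved, an overall factor (here GG) and the Lorentz part, in which mi and pi denote the Lorentz index and momentum of the i-th particle.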

FeynRules interface
The CalcHep/CompHep interface can be invoked with the command

WriteCHOutput[ L ];

where L denotes the Lagrangian. When invoked, this interface creates the directory M$ModelName, with -CH appended, if it does not already exist. It creates the files prtclsN.mdl, varsN.mdl, funcN.mdl and lgrngN.mdl. It then derives the Feynman rules with four particles or fewer and writes them to lgrngN.mdl. It simplifies the vertex list by renaming the vertex couplings as x1, x2, x3, etc., and writes the definitions of these couplings in funcN.mdl along with the other internal parameters.
Although CalcHep and CompHep can calculate diagrams in both Feynman and unitary gauge, they are much faster in Feynman gauge, and it is highly recommended to implement new models in Feynman gauge. However, if a user decides to implement a model in unitary gauge, he/she should remember that, owing to the way CalcHep/CompHep were written, the ghosts of the massless non-abelian gauge bosons must still be implemented. (In particular, the gluonic ghosts must be implemented in either gauge for this interface.)

One major constraint of the CalcHep/CompHep system is that the color structure is implicit. For many vertices (e.g., quark-quark-gluon), this is not a problem. However, for more complicated vertices, there may be an ambiguity. For this reason, the authors of CalcHep/CompHep chose to split such vertices up using auxiliary fields. Although this can be done for very general vertices, it is not yet fully automatized in FeynRules. Currently, only the four-gluon vertex and the squark-squark-gluon-gluon vertices are automatically split up in this way.
The model files are then ready to be used and can be copied directly to the CalcHep/CompHep models directories. The default format for this interface is the CalcHep format. A user can direct the interface to write the files in the CompHep format by means of the CompHep option. A user writing CompHep model files should note one subtlety: if the model is written to the CompHep directory and the user edits the model inside CompHep and tries to save it, CompHep will complain about any C math library functions in the model. Nevertheless, it does understand them. We have checked that if the model works in CalcHep, it will work in CompHep and give the same results.
CalcHep has the ability to calculate the widths of the particles on the fly. By default, this interface will write model files configured for automatic widths. This can be turned off by setting the option CHAutoWidths to False. This option is set to False if CompHep is set to True.
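Combining the options above, a CompHep-format export might be requested as follows (a sketch; the option names CompHep and CHAutoWidths are those quoted in the text, and as noted, automatic widths are switched off whenever CompHep is selected):

```mathematica
(* Sketch: export the model in CompHep format; automatic widths are
   disabled implicitly when CompHep -> True *)
WriteCHOutput[ L, CompHep -> True ];

(* CalcHep format without automatic widths *)
WriteCHOutput[ L, CHAutoWidths -> False ];
```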
This interface also contains a set of functions that read and write the external parameters from and to the CalcHep variable files (varsN.mdl). After loading the model into FeynRules, the external parameters can be updated by running

ReadCHExtVars[ Input -> < file > ]
This function accepts all the options of the CalcHep interface, plus the option Input, which instructs FeynRules where to find the CalcHep variable file. The default is varsN.mdl in the current working directory. If reading a CompHep variable file, the option CompHep should be set to True. After reading in the values of the variables in the CalcHep file, it updates the values in FeynRules accordingly.
The current values of the external parameters in FeynRules can also be written to a CalcHep external variable file (varsN.mdl) using

WriteCHExtVars[ Output -> < file >]
This can be done to bypass writing out the entire model if only the model parameters are changed.

MadGraph/MadEvent
The MadGraph/MadEvent v4.4 software [7,8,9,10] allows users to generate tree-level amplitudes and parton-level events for any process (with up to nine external particles). It uses the Helas library [61,62] to calculate matrix elements using the helicity formalism in the unitary gauge. Starting from version 4, users have the possibility to use several pre-defined BSM models, including the most generic Two-Higgs-Doublet Model and the Minimal Supersymmetric Standard Model, but can also take advantage of the UsrMod interface to implement simple Standard Model extensions.
The existing scheme for new model implementations in MadGraph/MadEvent has two major drawbacks. First, users need to explicitly provide algebraic expressions for the coupling values used by MadGraph to calculate amplitudes. Second, the first version of the UsrMod interface only works for models extending the Standard Model by a limited set of new particles and/or interactions. This makes it difficult to modify existing BSM models, or to generalize models previously implemented with this method.
The current version of MadGraph relies on a new, clearly defined structure for all model libraries. New model libraries can be generated via the corresponding FeynRules interface, which produces all the required code files automatically. Finally, a new version of the UsrMod scripts exists which can be used as a complement to FeynRules for simple extensions of existing models. All three of these new frameworks are introduced and described in the present section.

Model framework
All model libraries supported in the latest versions of MadGraph/MadEvent now have the same structure. They are composed of a set of text and Fortran files grouped in a single directory, stored in the Models subdirectory of the root MadGraph/MadEvent installation:
• particles.dat: a text file containing a list of all particles entering the model and their properties (name, spin, mass, width, color representation, PDG code, . . . ).
• param_card.dat: a text file containing the numerical values of the necessary external parameters for a specific model. The parameter card has a format compliant with the SUSY Les Houches Accord (LHA) and is dependent on the physics model. One should pay attention to the fact that some of these parameters are related to one another (e.g., the masses and the widths are generally related to more fundamental Lagrangian parameters). If possible, this file should also contain (preferably at the end) a list of Les Houches QNUMBERS blocks describing the properties of non-SM particles, to facilitate the interfacing of matrix-element and parton-shower based generators, as proposed in Ref. [63].
• intparam_definition.inc: a text file containing all the algebraic expressions relating internal parameters to external and/or internal parameters. There are two different kinds of internal parameters: most of the expressions can be computed once and for all, but in some cases where a parameter depends on the scale of the process (e.g., the strong coupling constant), it may be desirable to re-evaluate it on an event-by-event basis.
• interactions.dat: a text file containing a list of all interactions entering the model. Each interaction is characterized by an ordered list of the involved particles, the name of the corresponding coupling, the corresponding type of coupling (for coupling order restrictions) and possible additional switches to select particular Helas routines.
• couplingsXX.f (where XX can be any integer number): these files contain the algebraic expressions for the couplings, expressed as Fortran formulas. By convention, the file couplings1.f contains all expressions which should be re-evaluated when an external parameter (e.g., the renormalization scale) is modified on an event-by-event basis. The files couplingsXX.f where XX is greater than 1 contain all expressions which should be only re-evaluated if the default external parameter values are explicitly read from the LHA param card.dat parameter card. The actual number of these files may vary, but a single file should be small enough to be compiled using standard Fortran compilers. The full list of these files should be included in the makefile through the makeinc.inc include file.
• input.inc and coupl.inc: Fortran files containing all the necessary variable declarations. All parameters and couplings can be printed on screen or written to file using the routines defined in param_write.inc and coupl_write.inc, respectively. If needed, the latter can also be printed in a stricter format using routines defined in helas_couplings.f, so that they can be used by external tools (e.g., Bridge [64]).
Additional Fortran files, which are not model dependent, must also be provided in order to build the full library. Most of them simply include one or more of the above files, except lha_read.f, which contains all the routines required to read the LHA format. A makefile allows the user to easily compile the whole package, producing a library or a test program called testprog which can be used to quickly check the behavior of the library by producing a standard log output.
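To fix ideas, single entries in particles.dat and interactions.dat might look schematically as follows (illustrative only; the exact column layout is fixed by the MadGraph/MadEvent model format, and the coupling name MGVX1 is a placeholder for a generated coupling variable):

```
# particles.dat (schematic):
#  name  antiname  spin  line  mass  width  color  label  pdg
   g     g         V     C     ZERO  ZERO   O      g      21

# interactions.dat (schematic):
#  particles      coupling  order
   g  g  g        MGVX1     QCD
```

Here the three-gluon vertex refers to the coupling MGVX1, whose algebraic expression would be written by the interface into one of the couplingsXX.f files.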
The UsrMod v2 framework

The UsrMod v2 framework has been designed as the successor of the widely-used original UsrMod template described in Ref. [9]. Taking advantage of the fixed structure just defined, it provides the user with two new possibilities. First, any pre-existing model can be used as a starting point. This of course includes all the models described in the present paper, soon to be part of the MadGraph/MadEvent distribution, but also all future models. This gives a natural framework for building simple extensions following a bottom-up approach, i.e., by successively adding new particles and new interactions and testing their implications at each step. Second, the possible modifications are no longer restricted to the addition of new particles/interactions: any alteration of the model content (including particle removal, modification of existing properties, . . . ) allowed in the context of MadGraph/MadEvent is supported in a user-friendly way.
The UsrMod v2 approach can advantageously replace the full FeynRules package when only minor modifications to an existing MadGraph/MadEvent model are necessary, e.g., in order to study the phenomenology of a specific new particle and/or interaction, or when the use of the FeynRules machinery is not possible, e.g., if a Mathematica license is not available. However, when possible, we believe the use of FeynRules should be favored over UsrMod v2 for the consistent implementation of complete new models in MadGraph/MadEvent, especially in view of the extended validation possibilities available in this context.
From the implementation point of view, the UsrMod v2 package consists of various Python scripts (with one single main script) and works on any platform offering support for this programming language. The actual implementation of a new model is decomposed into four distinct phases:
1. Saving: the model directory used as a starting point should be copied to a new location. The USRMOD script should then be run a first time to create a content archive used as a reference to identify the forthcoming modifications.
2. Modifying: the particles.dat, interactions.dat and ident_card.dat files can be modified to arbitrarily add, remove or modify the particle, interaction and parameter content.
3. Creating: the USRMOD script should then be run a second time to actually modify all the model files to consistently reflect the changes applied in the previous phase.
4. Adjusting: the couplingsXX.f file(s) can finally be edited, if necessary, to add or modify the relevant coupling expressions. The param_card.dat file can also be edited to modify the default values of the external parameters.
At any time, the archive file created during the first phase can be used to restore the initial content of all model files. Several archive files can also be saved simultaneously in the same directory to reflect, for example, successive versions of a single model. Finally, the intrinsic structure of the UsrMod v2 package favors various technical (not physical) consistency checks in the output files, in order to minimize compilation and runtime errors.

FeynRules interface
The MadGraph/MadEvent interface can be called from the FeynRules package using the

WriteMGOutput[ L ]
routine described in the FeynRules documentation, where L is the name of the model Lagrangian. Since MadGraph/MadEvent currently only supports calculations in the unitary gauge, all the Goldstone and ghost fields are discarded in the particles.dat output, which is generated directly from the model description. After expanding all possible field indices (e.g., those associated with flavor), an exhaustive list of non-zero vertices is generated and output as interactions.dat. If possible, the relevant coupling is extracted and, in case it does not already exist, stored in a new coupling variable of the form MGVXXX in a couplingsXX.f file. All the other required model-dependent files are then generated, including the param_card.dat, where the default values (which are also the default values for the reading routines in param_read.inc) are set as specified in the FeynRules model file, and where the QNUMBERS blocks correctly reflect the new particle content. All the produced files, together with the relevant model-independent files, are stored in a local directory model_name_MG, ready to be used in MadGraph/MadEvent. As mentioned previously, the testprog test program can be compiled and run to check the consistency of the created library.
The two main restrictions of the MadGraph/MadEvent interface are related to the allowed Lorentz and color structures of the vertices. As already mentioned, even though FeynRules itself can deal with basically any interaction involving scalars, fermions, vectors and spin-two tensors, the Helas library, used by MadGraph/MadEvent to build and evaluate amplitudes, is more restricted. In case no correspondence is found for a specific interaction, a warning message is displayed by the interface and the corresponding vertex is discarded. If this particular vertex is required for a given application, the user still has the possibility to implement it manually, following the Helas library conventions, and to slightly modify the interface files to deal with this addition. In case the vertex structure is also not present in MadGraph/MadEvent, a more involved manual modification of the code is required. The second limitation of the present interface comes from the fact that the color-factor calculations are currently hardcoded internally in MadGraph. While FeynRules can deal with fields in any representation of the QCD color group, MadGraph itself is basically limited to the color representations appearing in the Standard Model and the Minimal Supersymmetric Standard Model; e.g., a color sextet is not supported.
Let us mention that work to alleviate both limitations is already in progress. The FeynRules package could, for example, be used to automatically generate missing Helas routines, while a more open version of the MadGraph matrix-element generator, e.g., one taking advantage of a high-level programming environment, could advantageously deal with arbitrary color structures.

Sherpa
Sherpa [11,12] is a general-purpose Monte Carlo event generator aiming at the complete simulation of physical events at lepton and hadron colliders. It is entirely written in C++, featuring a modular structure where dedicated modules encapsulate the simulation of specific physical aspects of the collisions.

The central part is formed by the hard interaction, described using perturbative methods. The respective generator for matrix elements and phase-space integration is Amegic++ [65], which employs the spinor helicity formalism [66,67] in a fully automated approach to generate processes for a variety of implemented physics models, see Sec. 4.4.1. Phase-space integration is accomplished using self-adaptive Monte Carlo integration methods [68,69,70,71]. The QCD evolution of partons originating from the hard interaction down to the hadronization scale is simulated by the parton-shower program Apacic++ [72]. It accounts for parton emissions off all colored particles present in the Standard Model and the Minimal Supersymmetric Standard Model. New shower generators have recently been developed in the framework of Sherpa [73,74]. They account for QCD coherence and kinematics effects in a way consistent with NLO subtraction schemes, which makes them preferred over Apacic++. They will be employed in future versions of Sherpa, using an improved merging prescription for matrix elements and showers [36]. An important aspect of Sherpa is its implementation of a general version of the CKKW algorithm for merging higher-order matrix elements and parton showers [30,31]. It has been validated in a variety of processes [39,75,76] and proved to yield reliable results in comparison with other generators [37,38]. Furthermore, Sherpa features an implementation of the model for multiple-parton interactions presented in Ref. [77], which was modified to allow for merging with hard processes of arbitrary final-state multiplicity and eventually including CKKW merging [78]. Sherpa also provides an implementation of a cluster-fragmentation model [79], a hadron and tau decay package including the simulation of mixing effects for neutral mesons [80], and an implementation of the YFS formalism to simulate soft-photon radiation [81].

Model framework
Physics model definitions within Sherpa are hosted by the module MODEL. Here the particle content and the parameters of any model are defined and made accessible for use within the Sherpa framework. This task is accomplished by instances of the basic class Model_Base. Furthermore, the interaction vertices of the various models are defined here, which in turn can be used by Amegic++ to construct Feynman diagrams and corresponding helicity amplitudes. The corresponding base class from which all interaction models are derived is called Interaction_Model. A schematic overview of the MODEL module is given in Fig. 2. The list of currently implemented physics models reads: the Standard Model including effective couplings of the Higgs boson to gluons and photons [82], an extension of the SM by a general set of anomalous triple- and quartic gauge couplings [83,84], the extension of the SM by a single complex scalar [85], the extension of the Standard Model by a fourth lepton generation, the SM plus an axigluon [86], the Two-Higgs-Doublet Model, the Minimal Supersymmetric Standard Model, and the Arkani-Hamed-Dimopoulos-Dvali (ADD) model of large extra dimensions [87,88]; for details see Ref. [12]. Besides routines to set up the spectra and Feynman rules of the models listed above, corresponding helicity-amplitude building blocks are provided within Amegic++ that enable the evaluation of production and decay processes within the supported models. In particular, this includes all the generic three- and four-point interactions of scalar, fermionic and vector particles present in the SM and the Minimal Supersymmetric Standard Model, plus the effective operators for the loop-induced Higgs couplings and the anomalous gauge couplings. The implementation of the ADD model necessitated the extension of the helicity formalism to interaction vertices involving spin-two particles [89].
A necessary ingredient when dealing with the Minimal Supersymmetric Standard Model are the specific Feynman rules for Majorana fermions and fermion-number-violating interactions. To unambiguously fix the relative signs amongst Feynman diagrams involving Majorana spinors, the algorithm described in Ref. [90] is used. Accordingly, the explicit occurrence of charge-conjugation matrices in the Feynman rules is avoided; instead, a generalized fermion flow is employed that assigns an orientation to complete fermion chains. This uniquely determines the external spinors, fermion propagators and interaction vertices involving fermions. The implementation of new models in Sherpa in the traditional way is rather straightforward, and besides the public model implementations shipped with the Sherpa code there exist further private implementations that have been used for phenomenological studies, cf. Refs. [91,92,93]. From version Sherpa-1.2 onwards, Sherpa will support model implementations from FeynRules output, further facilitating the incorporation of new models in Sherpa.

FeynRules interface
To generate FeynRules output to be read by Sherpa, the tailor-made FeynRules routine

WriteSHOutput[ L ]

(note that within Sherpa, Feynman rules are always considered in the unitary gauge)
has to be called, resulting in a set of ASCII files that represent the considered model through its particle data, model parameters and interaction vertices 7 . To allow for an on-the-flight model implementation from the FeynRules outputs, instances of the two basic classes Model Base and Interaction Model Base are provided dealing with the proper initialization of all the particles and parameters, and the interaction vertices of the new model, respectively. The actual C++ classes for these tasks are called FeynRules Model and Interaction Model FeynRules, see Fig. 2. The master switch to use a FeynRules generated model within Sherpa is MODEL = FeynRules to be set either in the (model) section of the Sherpa run card or on the command line once the Sherpa executable is called. Furthermore the keywords FR PARTICLES, FR IDENTFILE, FR PARAMCARD, FR PARAMDEF and FR INTERACTIONS, specifying the names of corresponding input files, need to be set. The actual format and assumed default names of these input cards will be discussed in the following: • FR PARTICLES specifies the name of the input file listing all the particles of the theory including their SM quantum numbers and default values for their masses and widths, default name is Particle.dat. An actual particle definition, e.g., for the gluon, looks like Hereby kf defines the code the particle is referred to internally and externally, typically its PDG number [53]. The values for Mass and Width need to be given in units of GeV. The columns 3*e and Y specify three times the electric charge and twice the weak-isospin. SU(3) defines if the particle acts as a singlet (0), triplet (3) or octet (8) under SU (3) C . 2*Spin gives twice the particle's spin and maj indicates if the particle is charged (0), self-adjoint (-1) or a Majorana fermion (1). 
The flags on, stbl and m_on are basically internal and define whether a particle is considered or excluded, whether it is considered stable, and whether its kinematical mass is taken into account in the matrix-element evaluation. IDName and TeXName indicate the names used for screen output and potential LaTeX output, respectively.
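As an illustration, a gluon entry consistent with the column description above might look as follows. The column layout is our reconstruction and should be checked against the files actually written by WriteSHOutput; only the column meanings are taken from the text:

```
kf   Mass   Width   3*e   Y   SU(3)   2*Spin   maj   on   stbl   m_on   IDName   TeXName
21   0.     0.      0     0   8       2        -1    1    1      0      G        g
```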
• In FR_IDENTFILE all the external parameters of the model are defined; the default file name is ident_card.dat. Names and counters of the corresponding parameter blocks to be read from FR_PARAMCARD are listed and completed by the actual variable names and their numerical types, i.e., real (R) or complex (C). Besides, variable names for all particle masses and widths are defined here. To give an example, the section defining the electroweak inputs of the SM may look as follows.
• In the file specified through FR_PARAMCARD the numerical values of all elementary parameters, particle masses and decay widths are given; the default file is param_card.dat. Following the example above, the electroweak inputs of the SM can be set accordingly. The parameter definitions are interpreted using an internal algebra interpreter; no additional compilation is needed for this task. All standard C++ mathematical functions are supported, e.g., sqr, log, exp, abs. For complex-valued parameters, e.g., CKM11, the real and imaginary parts can be accessed through Real(CKM11) and Imag(CKM11), and the complex conjugate is obtained through Conjugate(CKM11).
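To make the pairing between FR_IDENTFILE and FR_PARAMCARD concrete, a sketch of the two files for the electroweak sector might look as follows. The block name SMINPUTS and the variable names are chosen for illustration, following common SLHA-style conventions; the actual names are fixed by the FeynRules model and the interface:

```
# ident_card.dat: block name, counter, variable name, numerical type
SMINPUTS 1 aEWM1 R   # inverse electromagnetic coupling
SMINPUTS 2 Gf    R   # Fermi constant
SMINPUTS 3 aS    R   # strong coupling

# param_card.dat: numerical values for the same block
Block SMINPUTS
  1   1.279000e+02   # aEWM1
  2   1.166370e-05   # Gf
  3   1.180000e-01   # aS
```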
• The keyword FR_INTERACTIONS (Interactions.dat) specifies the input file containing all the vertices of the considered model in a very simple format. The keyword VERTEX signals the start of a new Feynman rule, followed by the PDG codes of the involved particles. Note that the first particle is always considered incoming, the others outgoing. Counters 1 and 2 indicate the right- and left-handed couplings of the vertex rule, the right- and left-handed projectors being given by P_{R/L} = (1 ± γ_5)/2, respectively. Couplings are given in terms of the elementary and derived parameters. Counter 3 explicitly gives the color structure of the interaction in terms of the SU(3)_C structure constants or generators. The spin structure of the vertex is given under counter 4, identified through a keyword used by Sherpa to relate the corresponding sub-amplitude to the correct helicity-amplitude building block.
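As an illustration of this format, a triple-gluon entry might be sketched as follows. The precise syntax of each line is our guess; only the roles of the four counters are taken from the text:

```
VERTEX 21 21 21      # PDG codes: first particle incoming, others outgoing
  1  -G              # counter 1: right-handed coupling
  2  -G              # counter 2: left-handed coupling
  3  f[0,1,2]        # counter 3: color structure (SU(3)_C structure constant)
  4  Gauge3          # counter 4: keyword selecting the Lorentz structure
```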
Sherpa's interface to FeynRules is designed to be as general as possible; it is, however, by construction restricted in two ways. The functional form of the model parameters, and thereby of the couplings, is limited by the capabilities of the algebra interpreter that has to parse them. This limitation might, however, be overcome by using an external code to calculate the needed variables and redefining them as external parameters whose numerical values are given in FR_PARAMCARD. More severe limitations originate from the restricted ability of Sherpa/Amegic++ to handle new types of interactions. Only three- and four-point functions can be incorporated. For the color structures, only the SU(3)_C objects 1, δ_ij, δ_ab, T^a_ij, f_abc and products thereof, e.g., f_abc f_cde, are supported. Lorentz structures not present in the SM or the Minimal Supersymmetric Standard Model are currently not supported by the interface. Furthermore, Sherpa cannot handle spin-3/2 particles. QCD parton showers are only invoked for the colored particles present in the SM and the Minimal Supersymmetric Standard Model. Hadronization of new colored states is not supported; they have to be decayed before entering the stage of primary hadron generation.

Models
In this section we briefly present the implementation of the Standard Model and several other important New Physics models in FeynRules. Our aim is to show that very complete and sophisticated implementations are possible, and that FeynRules offers a very natural and convenient framework where models can be first developed (from the theoretical point of view) and then tested/constrained against experimental data. Since the main focus is the implementation procedure, the actual model descriptions, as well as the information about the values of the parameters, are kept to a minimum. More exhaustive information about each of the following models, all of which are publicly available, can be found on the FeynRules website.

Model description
As it serves as the basis for any new bottom-up implementation, we briefly describe here the Standard Model implementation. The SM of particle physics is described by an SU(3)_C × SU(2)_L × U(1)_Y gauge theory, where the electroweak symmetry is spontaneously broken so that the fundamental fermions and the weak gauge bosons acquire a mass. The particle content of the SM is summarized in Table 4. The pure gauge sector of the theory is written in terms of the SM field strength tensors, defined following the conventions introduced in Eq. (2.4). The Lagrangian describing the matter fermions involves the SU(3)_C × SU(2)_L × U(1)_Y covariant derivative D_µ, for which we use the conventions of Eq. (2.3). The superscript 0 refers to the gauge eigenstates. Note in particular that explicit mass terms for the matter fermions are forbidden by gauge symmetry. The Higgs field is described by its own Lagrangian. If µ^2 < 0, the Higgs field acquires a vev that breaks the electroweak symmetry spontaneously. Expanding the Higgs field around its vev, we generate mass terms for the Higgs boson H and the electroweak gauge fields. The mass eigenstates for the gauge bosons are the W and Z bosons, as well as the photon, which remains massless; these fields are related to the original SU(2)_L × U(1)_Y gauge fields through a rotation by the weak mixing angle θ_w. The interactions between the fermions and the Higgs field are described by the Yukawa interactions. After electroweak symmetry breaking the Yukawa interactions generate non-diagonal mass terms for the fermions that need to be diagonalized by unitary rotations on the left- and right-handed fields. Since there is no right-handed neutrino, we can always rotate the leptons such that the mass matrix for the charged leptons becomes diagonal and lepton flavor is still conserved.
For the quarks, however, the diagonalization of the mass matrices introduces flavor mixing in the charged-current interactions, described by the well-known CKM matrix.
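The tree-level bookkeeping behind electroweak symmetry breaking described above can be sketched numerically. The helper below and its input values are purely illustrative (the couplings g, g′ and the vev v are not precision SM inputs):

```python
import math

def ew_masses(g, gp, v):
    """Tree-level relations after electroweak symmetry breaking.

    g, gp : SU(2)_L and U(1)_Y gauge couplings
    v     : Higgs vacuum expectation value (GeV)
    Returns (M_W, M_Z, sin_theta_w).
    """
    mw = g * v / 2.0                          # W boson mass
    mz = math.sqrt(g**2 + gp**2) * v / 2.0    # Z boson mass
    sw = gp / math.sqrt(g**2 + gp**2)         # sine of the weak mixing angle
    return mw, mz, sw

# Illustrative, roughly SM-like inputs
mw, mz, sw = ew_masses(g=0.65, gp=0.35, v=246.0)
# Consistency check of the mixing relation M_W = M_Z cos(theta_w)
assert abs(mw - mz * math.sqrt(1.0 - sw**2)) < 1e-9
```

The identity M_W = M_Z cos θ_w holds exactly at tree level, which the final assertion checks.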

FeynRules implementation
The SM implementation in FeynRules is divided into the four Lagrangians described in the previous section. In particular, one can use the dedicated functions for the field strength tensors and for the covariant derivative acting on the left- and right-handed fermions. Matrix-element generators, however, need the mass eigenstates of the particles as input, and it is therefore mandatory to rotate all the gauge eigenstates into mass eigenstates according to the prescriptions discussed in the previous section. This can be done very easily in FeynRules by writing the Lagrangian in the gauge eigenbasis and then letting FeynRules perform the rotation into the mass eigenstates (note, at this point, that FeynRules does not diagonalize the mass matrices automatically; this information has to be provided by the user). However, as the SM Lagrangian is the starting point for many bottom-up extensions, the actual implementation was performed directly in terms of the fermion mass eigenstates. The benefit is a slight speed gain, since the rotations in the fermion sector no longer need to be performed. The default values of the external parameters in the model file are given in Table 7, in Appendix B.1. Three restriction files for the SM implementation are provided with the default model distribution:
• Massless.rst: the electron and the muon, as well as the light quarks (u, d, s), are massless.
Another particularity of the SM implementation is that it was performed both in unitary gauge and in Feynman gauge. The model file contains a switch, FeynmanGauge, which, if set to False, puts to zero all the terms involving ghost and Goldstone fields. The default value is False.

Possible extensions
The SM is at the basis of almost all BSM models, and thus the number of possible extensions of the SM implementation is essentially unlimited. A first extension of this model was presented in Section 2 with the HAH model, based on the simplest possible extension of the gauge sector of the SM. Other possibilities are the addition of higher-dimensional operators compatible with the SM symmetries (see Section 5.6) or the inclusion of right-handed neutrinos via see-saw models.

The general Two-Higgs-Doublet Model
The Two-Higgs-Doublet Model (2HDM) has been extensively studied for more than twenty years, even though it has often been considered only as the scalar sector of some larger model, like the Minimal Supersymmetric Standard Model or some Little Higgs models. The general 2HDM considered here already displays by itself an interesting phenomenology that justifies its study, e.g., new sources of CP violation in scalar-scalar interactions, tree-level flavor-changing neutral currents (FCNCs) due to non-diagonal Yukawa interactions, or a light pseudoscalar state and unusual Higgs decays (see Ref. [94]).

Model description
The 2HDM considered here is based on two SU(2)_L doublets φ_1 and φ_2 with the same hypercharge Y = +1. If one imposes only gauge invariance, the most general renormalizable Lagrangian of the Higgs sector differs from that of the SM and can be written as in Eq. (5.11), where the scalar potential reads, in the notation of Ref. [95], in terms of the real parameters µ_{1,2} and λ_{1,2,3,4} and the a priori complex parameters µ_3 and λ_{5,6,7}. We assume that the electromagnetic gauge symmetry is preserved, i.e., that the vevs of φ_1 and φ_2 are aligned in the SU(2)_L space in such a way that a single SU(2)_L gauge transformation suffices to rotate them to the neutral components, with v_1 and v_2 two real parameters such that v^2 = v_1^2 + v_2^2. The most general form of the Yukawa interactions of the two doublets involves φ̃_i ≡ iσ_2 φ_i^*, where the 3 × 3 complex Yukawa coupling matrices ∆_i and Γ_i are expressed in the fermion physical basis, i.e., in the basis where the fermion mass matrices are diagonal. We choose the Γ_i matrices as free parameters, while the other Yukawa couplings, the ∆_i matrices, are deduced from the matching with the observed fermion masses. Conventionally, the two indices a and b of the elements of the Yukawa matrices Γ^i_ab and ∆^i_ab refer to the generations of the SU(2)_L doublet and singlet, respectively.

FeynRules implementation
The 2HDM Lagrangian implemented in FeynRules is composed of Eqs. (5.12), (5.14), together with the canonically normalized kinetic energy terms for the two doublets and the other SM terms. An important feature of this model is the freedom to redefine the two scalar fields φ 1 and φ 2 using arbitrary U (2) transformations since this transformation leaves the gauge-covariant kinetic energy terms invariant. This notion of basis invariance has been emphasized in Ref. [95] and considered in great detail more recently in Refs. [96,97,98]. Since a given set of Lagrangian parameter values is only meaningful for a given basis, let us take advantage of this invariance property to select the Higgs basis (by defining the additional file HiggsBasis.fr) where only one of the two Higgs fields acquires a non-zero vev, i.e., which reduces the number of free parameters in the most general 2HDM to ten (seven real parameters, three complex ones and three minimization conditions). Besides the usual three massless would-be Goldstone bosons, the physical spectrum contains a pair of charged Higgs bosons with mass and three neutral states with the squared mass matrix The symmetric matrix M is diagonalized by an orthogonal matrix T . The diagonalization yields masses m i for the three physical neutral scalars S i of the model (where the index i refers to mass ordering), The doublet components are related to these physical states through The Yukawa couplings of the model are fully determined by the Γ i matrices in Eq. (5.14), since the ∆ i are, by definition, fixed to the diagonal fermion mass matrices in the Higgs basis.
In the current implementation of the 2HDM in FeynRules, the user has to provide numerical values for all the λ_i parameters in the basis of Eq. (5.16), together with the charged Higgs mass m_H±. The other parameters of the potential, such as the µ_i, are then deduced using Eqs. (5.17) and (5.18). As a consequence, the orthogonal matrix T must be calculated externally. This, together with the change of basis required if the user wants to provide potential parameters and Yukawa coupling values in bases different from that of Eq. (5.16), can be done using the TwoHiggsCalc calculator introduced in Ref. [9], which has been modified to produce a parameter file compatible with the present implementation. This calculator can also be used to compute the required tree-level Higgs boson decay widths.
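Since FeynRules itself does not diagonalize mass matrices, the orthogonal matrix T has to be supplied externally (e.g., by TwoHiggsCalc). For readers who want a standalone cross-check, a generic cyclic-Jacobi routine for a real symmetric matrix can be sketched in a few lines of Python; the function name and the sample matrix are our own, not part of the FeynRules or TwoHiggsCalc code:

```python
import math

def jacobi_eigen(M, sweeps=50, tol=1e-12):
    """Diagonalize a real symmetric matrix M (list of lists) by cyclic
    Jacobi rotations.  Returns (eigenvalues, T) with T orthogonal and
    T^T M T diagonal.  Purely illustrative stand-in for the external
    computation of the neutral-scalar mixing matrix T."""
    n = len(M)
    A = [row[:] for row in M]
    T = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = math.sqrt(sum(A[i][j] ** 2
                            for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < tol:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # A -> A J  (rotate columns p and q)
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * Akp - s * Akq, s * Akp + c * Akq
                for k in range(n):  # A -> J^T A  (rotate rows p and q)
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * Apk - s * Aqk, s * Apk + c * Aqk
                for k in range(n):  # accumulate the rotation: T -> T J
                    Tkp, Tkq = T[k][p], T[k][q]
                    T[k][p], T[k][q] = c * Tkp - s * Tkq, s * Tkp + c * Tkq
    return [A[i][i] for i in range(n)], T
```

Applied to the 3 × 3 neutral-scalar squared-mass matrix, the columns of T give the admixtures of the physical states S_i once the eigenvalues are sorted by mass.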

The most general Minimal Supersymmetric Standard Model
Most present supersymmetric models are based on the four-dimensional supersymmetric field theory of Wess and Zumino [99]. The simplest model is the straightforward supersymmetrization of the Standard Model, with the same gauge interactions, including R-parity conservation, and is called the MSSM [100,101]. Its main features are to link bosons with fermions and unify internal and external symmetries. Moreover, it allows for a stabilization of the gap between the Planck and the electroweak scale and for gauge coupling unification at high energies, provides the lightest supersymmetric particle as a dark matter candidate and appears naturally in string theories. However, since supersymmetric particles have not yet been discovered, supersymmetry must be broken at low energies, which makes the superpartners heavy in comparison to their Standard Model counterparts.
Supersymmetric phenomenology at colliders has been extensively investigated for basic processes at leading order [102,103,104,105,106,107,108,109,110,111,112,113] and next-to-leading order [114,115,116,117,118,119,120] of perturbative QCD. More recently, for some processes, soft-gluon radiation has been resummed to all orders in the strong coupling constant and the results have been matched with the next-to-leading order calculations [121,122,123,124,125,126]. However, even if those calculations are useful for sufficiently inclusive analyses, they are not suitable if we are interested in a proper description of the full collider environment, for which Monte Carlo event generators are needed. For several years now, all the multi-purpose generators already mentioned have contained a built-in version of the MSSM. The model files are described in Refs. [127,128] for FeynArts/FormCalc, in Ref. [129] for CalcHep, in Ref. [130] for MadGraph/MadEvent, and in Ref. [131] for Sherpa. The Sherpa and FeynArts/FormCalc implementations keep generic mixing in the scalar sector, while the other generators rely on a simplified model with only helicity mixing for the third-generation sfermions.
Our MSSM implementation in FeynRules is the most general one in the sense that it keeps all the flavor-violating and helicity-mixing terms in the Lagrangian, as well as all the possible additional CP-violating phases. This yields 105 new free parameters [132], and in order to deal with all of those in a transparent way, our implementation follows the commonly used universal set of conventions provided by the Supersymmetry Les Houches Accord (SLHA) [133,134], except for some minor points. We will dedicate a complementary paper to a complete description of the model [135].

Field content
Each of the Standard Model quarks and leptons is represented by a four-component Dirac spinor f^0_i, where i stands for the generation index and the superscript 0 denotes interaction eigenstates. It has two associated scalar superpartners, the sfermion f̃^0_{Li} and the antisfermion f̃^{0†}_{Ri}, related to the two-component holomorphic Weyl fermion χ_{f_i} and antifermion χ_{f̄_i}, respectively. Let us recall that we relate the Dirac fermion representations to the Weyl ones in the standard way. The two Higgs doublets are accompanied by their fermionic partners, the higgsinos ψ_{H_i}. Finally, the spin-one vector bosons of the Standard Model are associated with Majorana fermions, the gauginos ψ_B, ψ_{W^k} and ψ_g. The names and representations under the Standard Model gauge groups SU(3)_C × SU(2)_L × U(1)_Y of the various fields are summarized in Table 5. Starting from the expression of the Lagrangian in the gauge-eigenstate basis of fields given above, we diagonalize the non-diagonal mass matrices arising after electroweak symmetry breaking and provide transformation rules allowing to re-express the Lagrangian in the physical basis.
Supersymmetry-conserving Lagrangian. In order to have more compact notations, we introduce the SU(2)_L doublets of left-handed fermions and sfermions. The Lagrangian for the scalar sector can be divided into a purely kinetic part and a part derived from the D-terms, the F-terms and the superpotential W. Indeed, auxiliary F- and D-fields must be added to the theory in order to preserve supersymmetry when considering off-shell states, thereby keeping the total number of fermionic and bosonic degrees of freedom equal. Solving their equations of motion leads, for a set of n Dirac fermions {ψ_n} and 2n associated supersymmetric scalar partners {φ_{Ln}, φ^†_{Rn}}, to expressions involving the charge-conjugated field ψ^c and the derivatives of the superpotential W. Let us note that in the case of Majorana fermions, there is only one associated scalar field. In the framework of the MSSM, the superpotential is written in terms of the 3 × 3 Yukawa matrices y_u, y_d and y_l, the Higgs off-diagonal mass-mixing µ, and the SU(2)_L invariant tensor ǫ. We could add to this superpotential other gauge-invariant and renormalizable terms, with Yukawa-like couplings λ, λ′ and λ′′, and a slepton-Higgs off-diagonal mass term κ. Those couplings would however violate either the lepton number L or the baryon number B, as well as the individual lepton flavors. Moreover, they allow for various B-violating or L-violating processes which have never been observed experimentally. We could simply forbid these terms by postulating B and L conservation, but neither B nor L is a fundamental symmetry of nature, since both are violated by non-perturbative electroweak effects [136]. Therefore, an alternative symmetry is imposed instead, R-parity [137], defined by P_R = (-1)^{3(B-L)+2S}, S being the spin of the particle, which forbids any term different from those in Eq. (5.30). All the Standard Model particles thus have a positive R-parity, while the superpartners have a negative one.
In this paper, we only describe the implementation of the R-parity-conserving MSSM. However, the R-parity-violating extension of our implementation is straightforward; it is available and described in Ref. [135]. We can now write the remaining part of the scalar Lagrangian. From the W^ij terms of the superpotential, bilinear in the fermionic fields, we can also generate the Yukawa couplings between the matter fermions and the Higgs fields. The terms of the MSSM Lagrangian containing higgsino and gaugino fields can be divided into three parts: pure kinetic terms, Yukawa interactions obtained from the superpotential terms W^ij, and additional supersymmetry-conserving gauge-like interactions which have no counterpart in the Standard Model and which are not taken into account through the covariant derivatives.

Supersymmetry-breaking Lagrangian
As stated in the introduction, the masses of the superpartners must be considerably larger than those of the Standard Model particles. Realistic supersymmetric models must hence include supersymmetry breaking, which is expected to occur spontaneously at some high scale. The Lagrangian density then respects supersymmetry invariance, but the vacuum state does not. Moreover, in order not to introduce quadratic divergences in loop calculations, supersymmetry has to be broken softly. In practice, since we do not know the supersymmetry-breaking mechanism and the corresponding scale, we add all possible terms breaking supersymmetry explicitly at low energy [138]. The first line of Eq. (5.38) contains the gaugino mass terms; the second and third lines the sfermion mass terms, where m²_Q, m²_L, m²_u, m²_d and m²_e are 3 × 3 hermitian matrices in generation space; the fourth line the mass terms for the Higgs fields; and the fifth line the trilinear scalar interactions, T_u, T_d and T_e also being 3 × 3 matrices in generation space. Let us note that the additional Higgs mass terms are required in order to break the electroweak symmetry spontaneously. We remind the reader that for the R-parity-violating MSSM, additional soft supersymmetry-breaking terms must also be included.

Particle mixing
In order to break the electroweak symmetry down to electromagnetism, the classical Higgs potential, i.e., all the Lagrangian terms quadratic or quartic in the Higgs fields, must have a non-trivial minimum. Due to gauge invariance, one of the charged Higgs vevs can be rotated away, which yields a zero vev for the other charged Higgs. Again, we can use gauge invariance to choose the remaining vevs real and positive, so that we can replace the neutral Higgs fields in L_MSSM by their expansions around the vacuum expectation values v_u and v_d, with h⁰_u and h⁰_d complex scalar fields. Let us note that those relations are only valid in unitary gauge, which is the case we are dealing with here. We can then extract the mass matrices for the gauge bosons B⁰ and W^k, diagonalize them, and derive the physical mass eigenstates, the photon A and the W and Z bosons. The transformation rules relating the mass and interaction bases are given by Eq. (5.6); the weak mixing angle θ_w and the physical masses M_Z and M_W are defined accordingly. The neutralino mixing matrix relates the four physical neutralinos χ̃⁰_i to the interaction eigenstates. Similarly, we can collect the terms yielding the charged gaugino-higgsino mass matrix X and diagonalize it through the two unitary matrices U and V; those matrices relate the interaction eigenstates to the physical charginos χ̃^±_i. Let us note that the fields ψ_{W^±} are obtained after rotating ψ_{W^1} and ψ_{W^2} as in Eq.
and render the charged-current interactions proportional to the CKM matrix. The leptonic sector, however, is diagonalized with the help of only two matrices, since the neutrinos are assumed massless. Let us note that the matrices U_u, U_d, V_l and U_l do not appear in the rotated Lagrangian.
In the sfermion sector, we define the super-CKM basis [139] as the basis in which the sfermion interaction eigenstates undergo the same rotations as their fermionic counterparts. As a consequence, the squark charged-current interactions are also proportional to the CKM matrix. However, the fermion and sfermion fields can be misaligned due to possible off-diagonal mass terms in the supersymmetry-breaking Lagrangian L_MSSM,Soft. The diagonalization of the four mass matrices M²_u, M²_d, M²_l and M²_ν thus requires the introduction of three 6 × 6 matrices R_u, R_d and R_l and one 3 × 3 matrix R_ν, as in Eq. (5.51). These matrices relate the physical mass eigenstates to the interaction eigenstates. The less general built-in implementations of the MSSM in MadGraph/MadEvent and CalcHep can easily be recovered by setting all the off-diagonal elements of these general mixing matrices to zero, except for the third-generation flavor-conserving and helicity-mixing elements. In the framework of R-parity-violating scenarios, additional mixings between charged leptons and charginos, neutrinos and neutralinos, and Higgses, sleptons and sneutrinos can arise. Moreover, the sneutrino fields can acquire vevs, since, unlike in the R-parity-conserving MSSM, they are not protected by lepton number conservation. We refer the reader to Ref. [135] for more information.

Current implementation
We describe here the implementation of the most general R-parity-conserving MSSM in FeynRules. We divide the complete Lagrangian into six pieces, distributed over the following model files:
• PrmExt.fr: the external parameters, following the SLHA conventions,
• PrmInt.fr: all the internal parameters, derived from the external ones,
• PrmAux.fr: auxiliary parameters, such as identity matrices,
• FldFer.fr: the physical fermionic states (quarks, leptons, charginos, neutralinos, gluino),
• FldVec.fr: the physical gauge bosons (gluon, photon, W and Z bosons),
• FldSca.fr: the physical scalar fields (sfermions and Higgses).
Before running the model, the user has to provide values for the four switches FeynmanGauge, $CKMDiag, $sWScale and $svevScale.
• The switch FeynmanGauge is not used in the present implementation and has to be set to False, since the whole model is implemented in unitary gauge. However, further developments will allow for various gauges.
• The switch $CKMDiag determines whether the CKM matrix is allowed to differ from the identity (True) or not (False). The CKM matrix can also be forced to the identity through a restriction file.
• The two switches $sWScale and $svevScale can be set to the values "weak" or "susy", specifying the scale at which the electroweak parameters and the vevs of the Higgs fields are evaluated.

Possible extensions
A large number of supersymmetric models can be built from the most general MSSM. As stated above, we can take the proper limits in the various mixing matrices and recover the commonly studied constrained MSSM scenarios. Moreover, other gauges are planned to be implemented (e.g., Feynman gauge or non-linear gauges). Other possible extensions consist in adding new particles, such as an additional singlet (the Next-to-Minimal Supersymmetric Standard Model) or right-handed neutrinos (and the corresponding sneutrinos).

The Minimal Higgsless Model
To date, we have not detected a single fundamental scalar field and we do not know if we ever will. Electroweak symmetry breaking may instead occur as the result of the condensation of a strongly coupled bound state as in technicolor theories [140,141] or it may occur as the result of boundary conditions in a compactified extra dimension as in so-called Higgsless theories [142,143]. These theories are closely related [144] and each contains a tower of vector resonances which are responsible for unitarizing W W and W Z scattering in the absence of a fundamental scalar field (such as the Higgs) [145,146,147]. Unitarity constrains the mass of the lowest of these resonances to be below ∼ 1.2 TeV, making it discoverable (or excludable) at the LHC [148,149,150].
Although it is impossible to implement the entire tower of resonances, it is also unnecessary, as the low-energy phenomenology is dominated by the lowest modes. Deconstruction [151,152] gives a consistent, gauge-invariant way of implementing only a subset of the resonances in a low-energy effective theory. The minimal deconstructed Higgsless model contains all the non-scalar fields of the Standard Model plus the first new resonances, the W′, Z′ and the heavy partners of the fermions, and is called the Minimal Higgsless Model or Three-Site Model [153]. A schematic "moose" diagram of the Minimal Higgsless Model is presented in Fig. 3.
Precision electroweak constraints [154,155,156,157] are satisfied in Higgsless models by allowing the fermions to delocalize into the bulk in a certain "ideal" way [158,159,160]. The consequence of this is that the heavy partners of the W and Z, the W ′ and Z ′ , are fermiophobic and have very small or vanishing couplings to the light SM fermions. Nevertheless, these heavy vector resonances can be discovered (or excluded) at the LHC [148,149,150].

Model description Gauge Sector
The gauge group of the Three-Site Model is SU(2)_0 × SU(2)_1 × U(1)_2, where SU(2)_0 is represented by the leftmost circle in Fig. 3 and has coupling g, SU(2)_1 is represented by the center circle in Fig. 3 and has coupling g̃, and U(1)_2 is represented by the rightmost dashed circle in Fig. 3 and has coupling g′. We define the coupling ratios x and t. The horizontal bars in Fig. 3 represent non-linear sigma models Σ_j, which come from unspecified physics at a higher scale and which give mass to the six gauge bosons other than the photon. This is encoded in the leading-order effective Lagrangian term for the Σ_j fields. The non-linear sigma models can be written in exponential form, which exposes the Goldstone bosons that become the longitudinal components of the massive gauge bosons; π_j and W_j are written in matrix form, with j equal to 0 or 1. The mass matrices of the gauge bosons can be obtained by going to unitary gauge (Σ_j → 1) for the charged and neutral gauge bosons respectively, while the photon is massless. After diagonalizing the gauge boson mass matrices, we find the masses of the remaining charged and neutral gauge bosons. We note that x can be obtained from the ratio of the charged gauge boson masses. The couplings can then be determined in terms of the electric charge e, x and t (5.69).
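The diagonalization of the 2 × 2 charged-sector mass-squared matrix can be illustrated numerically. The matrix entries below are placeholders (the actual elements depend on g, g̃ and the sigma-model scale and are not reproduced here); only the closed-form eigenvalue step is the point:

```python
import math

def charged_masses(m11, m12, m22):
    """Mass eigenvalues of a symmetric 2x2 mass-squared matrix
    [[m11, m12], [m12, m22]]: the light (W-like) and heavy (W'-like)
    charged states.  Entries are illustrative placeholders."""
    tr = m11 + m22
    det = m11 * m22 - m12 ** 2
    disc = math.sqrt(tr ** 2 - 4.0 * det)
    return math.sqrt((tr - disc) / 2.0), math.sqrt((tr + disc) / 2.0)

# Placeholder entries in GeV^2, chosen to give W-like and W'-like masses
mW, mWp = charged_masses(6464.0, -800.0, 250000.0)
ratio = mW / mWp   # the mass ratio from which x can be extracted
```

The trace and determinant of the matrix are preserved by the diagonalization, which provides a quick numerical cross-check.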

Gauge Fixing Sector
As we mentioned previously, the horizontal lines in Fig. 3 represent non-linear sigma fields. Although tree-level calculations can be done in unitary gauge, there are times when a different gauge is useful. Many calculations with gauge bosons in the external states can be computed more simply using the equivalence theorem, replacing the massive gauge bosons with the Goldstone bosons that they eat. Another case where a gauge other than unitary gauge is advantageous is CalcHep, where the computation time of processes decreases dramatically in Feynman gauge. For this reason, we have implemented this model in both Feynman and unitary gauge.
We must first determine the Goldstone bosons that are eaten by the gauge bosons. We do this using the Lagrangian of Eq. (5.59). Expanding the non-linear sigma field, we obtain the mixing terms between the gauge bosons and the Goldstone bosons. After inserting the eigen-wavefunctions of these fields, we obtain an expression in which δ is 1 if the gauge boson is neutral and 0 otherwise. The gauge-fixing function is constructed to fix the gauge and cancel the mixing between the Goldstone bosons and the gauge bosons. A gauge-fixing term is introduced for each site, where by π_1^ns we mean just the neutral sector of π_1. With this definition, the gauge-fixing Lagrangian follows.

Ghost Sector
The ghost Lagrangian terms are obtained by multiplying the BRST transformation of the gauge-fixing term on the left with the antighost. To do this, we must find the BRST transformations of the gauge-fixing terms. We begin by writing the infinitesimal BRST transformations of the fields appearing in the gauge-fixing term.
For the gauge bosons, c_j denotes the ghost for site j, written in matrix notation with j equal to 0 or 1. The BRST transformation is then expanded to quadratic order in the Goldstone bosons, and the ghost Lagrangian follows.

Fermion Sector
The vertical lines in Fig. 3 represent the fermionic fields of the theory. The vertical lines at the bottom of the circles represent the left-handed chiral fermions, while the vertical lines attached to the tops of the circles are the right-handed chiral fermions. Each fermion transforms in the fundamental representation of the gauge group to which it is attached and as a singlet under all the other gauge groups except U(1)_2. The charges under U(1)_2 are as follows: if the fermion is attached to an SU(2), its charge is 1/6 for quarks and −1/2 for leptons; if the fermion is attached to U(1)_2, its charge is the same as its electromagnetic charge, namely 0 for neutrinos, −1 for charged leptons, 2/3 for up-type quarks and −1/3 for down-type quarks. The usual gauge-invariant kinetic terms are used. The fermions attached to the internal site (SU(2)_1) are vectorially coupled, so that Dirac masses are allowed; we have taken these masses to be M_F. The symmetries also allow various linkings of fermions via the non-linear sigma fields. We have assumed a very simple form, inspired by an extra dimension and represented by the diagonal lines in Fig. 3: the left chiral field at site j is linked to the right chiral field at site j + 1 through the non-linear sigma field at link j. The mass parameters for these diagonal links are taken to be ǫ_L M_F and ǫ_R M_F for the left and right links, respectively. Altogether, the masses of the fermions and the leading-order interactions of the fermions with the non-linear sigma fields follow, where ǫ_L is the same for all fermions but ǫ_R is a diagonal matrix which distinguishes flavors (for example, between the top and bottom quarks). The mass matrix can be obtained by going to unitary gauge and diagonalized by a biunitary transformation, yielding the fermion masses.
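The biunitary diagonalization of a fermion mass matrix amounts to a singular-value decomposition. A minimal numerical sketch follows, with an illustrative 2 × 2 texture loosely inspired by the diagonal-link structure; the values of M_F, ǫ_L and ǫ_R are invented, and the texture is not claimed to be the exact mass matrix of the model:

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of M = [[a, b], [c, d]], i.e. the masses obtained
    from a biunitary diagonalization L^dag M R = diag(m1, m2), computed
    from the eigenvalues of M^T M (closed form for a 2x2 matrix)."""
    s = a * a + b * b + c * c + d * d     # trace of M^T M
    p = abs(a * d - b * c)                # |det M| = product of singular values
    disc = math.sqrt(max(s * s - 4.0 * p * p, 0.0))
    return math.sqrt((s - disc) / 2.0), math.sqrt((s + disc) / 2.0)

# Invented inputs: heavy Dirac mass M_F and small link parameters
MF, epsL, epsR = 3000.0, 0.1, 0.05
m_light, m_heavy = singular_values_2x2(epsL * MF, 0.0, MF, epsR * MF)
# for this texture the light eigenvalue is of order eps_L * eps_R * M_F
```

For such hierarchical textures the light state stays far below M_F, which mirrors how light SM fermions coexist with heavy partners of scale M_F.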

FeynRules implementation
The FeynRules implementation was initially based on a LanHep implementation [148]. It was translated to FeynRules syntax and slightly modified to fit the requirements of the FeynRules package and interfaces. All symmetries, fields and parameters were implemented according to the definitions of the previous subsection. The independent parameters that the user can adjust are the electromagnetic and strong couplings, the masses of the Z, W, W′ and SM fermions (where not explicitly set to zero), and the scale of the heavy fermions M_F. The Z pole mass and the Fermi constant are also implemented, as required by some Monte Carlo codes, but they are not used directly in this model; a change in these last two parameters does not affect the value of the other parameters. Two gauges were implemented in this model file, and a variable FeynmanGauge, similar to the SM case, was created to switch between them. When the switch is set to True, Feynman gauge is chosen and the Lagrangians contain the Goldstone bosons eaten by the gauge bosons as well as the ghosts; if it is set to False, all Goldstone and ghost terms are set to zero.

Extra Dimensional models
One popular approach to solving the hierarchy problem of the Standard Model is to extend space-time to higher dimensions [87,161,162]. In this framework, the usual four-dimensional space is contained in a four-dimensional brane embedded in a larger structure with N additional dimensions, the bulk. Moreover, gravitational and gauge interactions unify close to the only fundamental scale of the theory, the weak scale. In theories with Large Extra Dimensions (LED) [87,88,163], the gravitational interactions are the only ones propagating into the bulk, which dilutes their coupling strength and makes them appear weaker inside the four-dimensional brane. As a consequence, the graviton field is accompanied by a tower of massive Kaluza-Klein states. In scenarios with Universal Extra Dimensions [161], each field of the Standard Model possesses a tower of excitations with the same quantum numbers but different masses. Even if none of these states has been observed so far, TeV-range excitations could be detected at the present Tevatron or the future Large Hadron Collider. In the minimal Universal Extra Dimension scenario (MUED) [164], there is a single flat additional dimension, y, which is spatial and compactified on an S^1/Z_2 orbifold of radius R. Momentum conservation in the extended space-time generates a conserved quantum number, the Kaluza-Klein parity, which implies that different Kaluza-Klein modes cannot mix and that the lightest excitation could be a candidate for dark matter [165,166,167].

Model description
Large Extra Dimension Model. In a LED theory with one compact, space-like additional dimension, all the Standard Model fields are confined to a four-dimensional brane and only the graviton has Kaluza-Klein excited states. The Kaluza-Klein states are more massive than the standard graviton but behave in the same way, i.e., they couple gravitationally to the Standard Model fields. We start by specifying the generic effective Lagrangian of an unbroken gauge theory [168]. The Lagrangian of each sector can be expressed in terms of the corresponding energy-momentum tensor, and the various energy-momentum tensors T^i_µν for an unbroken gauge theory follow, where Φ is a (complex) scalar, Ψ a fermion, and A_µ a vector field. D_µ and F_µν denote the usual covariant derivatives and field strength tensors. From this general model, we can derive a realistic Large Extra Dimensional theory containing all the Standard Model fields. We choose to work in the unitary gauge also for the gravitational part, which eliminates the non-physical degrees of freedom absorbed by the massive fields, and we rewrite the Lagrangian accordingly, where G_µν^(n) is the n-th Kaluza-Klein mode of the physical graviton and j now denotes explicitly the Standard Model sectors. The energy-momentum tensor of the Standard Model can be written as a sum of four parts: the fermionic sector, the gauge sector, and the two sectors involving a Higgs boson.
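For orientation, the interaction structure underlying this construction is the standard graviton-matter coupling of Ref. [168]. Schematically, and up to the normalization conventions of the displayed equations (the form below is quoted as a reminder, with κ set by the reduced Planck mass),

```latex
\mathcal{L}_{\rm int} \;\sim\; -\frac{\kappa}{2} \sum_{n} G^{(n)\,\mu\nu}\, T_{\mu\nu}\,,
\qquad
T_{\mu\nu} \;=\; T^{\rm scalar}_{\mu\nu} + T^{\rm fermion}_{\mu\nu} + T^{\rm vector}_{\mu\nu}\,,
```

so that every Kaluza-Klein mode G^(n)_µν couples with the same strength to the total energy-momentum tensor of the matter fields.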

Minimal Universal Extra Dimension Model
We consider a theory in five dimensions, the fifth one being spatial and compactified on an S^1/Z_2 orbifold of radius R, i.e., the points y and -y are identified, where y is the extra coordinate. This symmetry is essential to define chiral fermions. Unlike in LED models, in UED models all fields have access to the extra dimension and depend on the fifth coordinate y. We split the most general Lagrangian into four pieces [161,169]. The gauge sector is described by the field strength tensor terms, where we use the index M = (µ, 5), µ being the usual four-dimensional Lorentz index and 5 the index of the fifth dimension. The fermionic sector is decomposed into its leptonic and quark parts. The hypercharge Y, the Pauli matrices σ^k and the color matrices T^a are the generators of the gauge groups, while the five-dimensional coupling constants are related to the four-dimensional ones through the compactification scale. Finally, the Higgs Lagrangian completes the five-dimensional theory.

FeynRules implementation

Large Extra Dimension Model
The FeynRules implementation of the LED model described above is based on the Lagrangian defined in Eq. (5.84). The theory contains, besides the graviton, the full set of SM fields, together with their respective couplings to the graviton field via the energy-momentum tensors. For the graviton we restricted the implementation to the lowest mode, i.e., only the massless graviton is taken into account. An extension of this model to include Kaluza-Klein excitations of the graviton, as well as to couple the graviton to any BSM extension of the SM, e.g., the HAH model described in Section 2, is straightforward.

Minimal Universal Extra Dimension Model
To implement the MUED model in FeynRules, we start with the most general five-dimensional Lagrangian described above. Then, FeynRules derives the effective four-dimensional Lagrangian and the corresponding Feynman rules automatically. This is achieved by expanding the five-dimensional fields and imposing the dimensional reduction by integrating out the extra coordinate y [170], where (A_µ(x^µ, y), A_5(x^µ, y)), H(x^µ, y), Ψ(x^µ, y) and ψ(x^µ, y) denote a five-dimensional gauge field, a Higgs field, a fermionic doublet and a fermionic singlet field, respectively. To integrate out the extra-dimensional coordinate, we follow the integration procedure described in Refs. [161,169,170], using orthogonality relations. Let us note that if one would like to add other pieces to the Lagrangian, it is sufficient to work with the five-dimensional expressions, together with the appropriate definitions of the field expansions. In our implementation we consider Kaluza-Klein excitations only up to the first mode. The inclusion of the next Kaluza-Klein states is of course straightforward: each considered Kaluza-Klein mode has to be defined in the model file. For a given field, we identify the zeroth mode as the Standard Model particle and define the Kaluza-Klein excitations as new particles. The zeroth-mode particle content is thus identical to the SM one, see Table 4, while the particle content of the first modes is summarized in Table 6.
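The orthogonality relations used in this reduction can be checked numerically. The sketch below is generic (it is not the FeynRules machinery): it verifies that the cosine profiles of different Kaluza-Klein levels do not mix on [0, πR], while each level n ≥ 1 carries the normalization πR/2.

```python
import math

def overlap(n, m, R=1.0, steps=20000):
    """Trapezoidal estimate of the Kaluza-Klein profile overlap
    int_0^{pi R} cos(n y / R) cos(m y / R) dy."""
    length = math.pi * R
    h = length / steps
    total = 0.0
    for i in range(steps + 1):
        y = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.cos(n * y / R) * math.cos(m * y / R)
    return total * h

R = 2.0e-3   # illustrative radius (arbitrary units)
print(overlap(1, 2, R))   # ~ 0: different modes do not mix
print(overlap(1, 1, R))   # ~ pi R / 2: normalization of each mode
```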

Table 6 summarizes the first-mode particle content (the zeroth-mode quantum numbers carry over to each excitation):

  Particle                   Spin   Representations
  Lepton doublet L^(1)       1/2    (1, 2, -1/2)
  Up-quark singlet u^(1)     1/2    (3, 1, 2/3)
  Down-quark singlet d^(1)   1/2    (3, 1, -1/3)
  Higgs H^(1)                0      (1, 2, 1/2)

As for the SM particles, the new gauge eigenstates mix into physical states (5.103). The first-mode analog of the electroweak mixing angle is assumed to be very small, so the first modes of the Z boson and the photon are simply W^(1)_µ and B^(1)_µ. The tree-level masses M^(1) of the first gauge boson excitations follow, where m^(0) is the zeroth-mode mass. In the fermion sector, there might be mixing between SU(2)_L doublets and singlets; however, this mixing is expected to be highly suppressed, and we therefore ignore it in our implementation. As in the SM, the zeroth-mode fermions get their masses from the Yukawa couplings after the Higgs gets its vev. For the Kaluza-Klein first modes, the tree-level mass terms follow a matrix structure in which F^(n) and f^(n) denote the doublet and singlet fields, respectively: the diagonal contributions come from the kinetic terms in the fifth dimension and the off-diagonal ones are induced by the Higgs vev. After diagonalization of the mass matrices, the masses of the physical eigenstates are obtained (5.105).
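The tree-level relation referred to above can be made concrete. Assuming the standard compactification result M^(n)^2 = m^(0)^2 + n^2/R^2 (the usual MUED tree-level form; radiative corrections to the Kaluza-Klein masses are not included in this sketch), a short numerical illustration reads:

```python
import math

def kk_mass(m0, n, R_inv):
    """Tree-level mass of the n-th Kaluza-Klein excitation of a state
    with zero-mode mass m0, assuming M(n)^2 = m0^2 + (n / R)^2,
    with R_inv = 1/R the inverse compactification radius."""
    return math.sqrt(m0 ** 2 + (n * R_inv) ** 2)

R_inv = 500.0   # GeV; illustrative choice of 1/R
for name, m0 in [("B boson", 0.0), ("Z boson", 91.1876), ("top quark", 173.0)]:
    print(name, kk_mass(m0, 1, R_inv))
```

At tree level the first-mode spectrum is thus nearly degenerate around 1/R, which is why the omitted radiative corrections matter for the collider phenomenology.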

Possible extensions
As we have mentioned before, we are only considering expansions of the fields up to the first Kaluza-Klein excited states. However, one could easily incorporate the higher modes by expanding the fields up to the n-th component; the second-mode Kaluza-Klein excitations, for instance, can be included by following the example of the first modes.

An access to the low energy world
Effective theories allow us to make predictions about the masses and the interactions at low energy without having to know everything about the fundamental theory; we only need to know the (exact or approximate) symmetries of the fundamental theory. One of the most successful examples is Chiral Perturbation Theory (χPT), the effective theory of the strong interactions. Most of the processes involving pseudoscalar mesons happen at scales at which perturbative QCD cannot be trusted anymore. Despite our resulting inability to compute their properties directly from the fundamental Lagrangian, the shape of the effective Lagrangian can be inferred using the global approximate chiral symmetry of QCD. The other famous examples, especially in this pre-LHC era, are the effective models developed to solve the hierarchy problem, in which the Higgs boson is a pseudo-Goldstone boson of a new strong sector. These models were built on the foundations laid down by QCD. In this case, however, much less is known: neither the fundamental theory nor all the low-energy degrees of freedom have been identified. This is probably the origin of the large number of models available, ranging from Technicolor to Extra Dimensions. However, the low-energy effects of many of them, called Strongly Interacting Light Higgs (SILH) models, can be described by the same effective Lagrangian, up to the values of some coefficients [171].

Model description
χPT at the lowest order. The leading effective Lagrangian invariant under SU(n_F)_L × SU(n_F)_R × U(1)_V, where n_F is the number of massless quark flavors, is [172] L_χPT = L^(p^0, 1/N_c) + L^(p^2, 0), where U transforms as U → g_L U g_R†. The first part of the Lagrangian is due to the axial U(1) breaking required by the η′ mass [136] and is at the source of the strong CP problem. The second term is the usual non-linear sigma Lagrangian, and the last one takes into account the quark masses. Only the first three flavors are sufficiently light compared to the confinement scale of QCD, of about 1 GeV, for the chiral symmetry to be approximate, so we set n_F = 3 in the SM. At low energy, the symmetry of the strong interactions has to be broken spontaneously to its vectorial subgroup U(n_F)_V in the limit where the number of colors (N_c) is large. The unitary matrix U can thus be expanded as a function of the pseudo-Goldstone bosons around its vacuum [173]. The coefficients a_k of this expansion are partially fixed by unitarity,

(5.109)
The remaining free parameters, b, c, . . . , can be used to check the computation of any physical quantity, which should be independent of these quantities [174,175], or they can be fixed to obtain the most suitable form for U. For example, the common form U = exp(i√2 π/f) requires b = 1/6 and c = 1/120. The mass matrix of the three neutral states, π^3, η_8 and η_0, is not diagonal; consequently, the physical eigenstates are obtained by a rotation.
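The quoted values b = 1/6 and c = 1/120 can be checked directly: with U = exp(i√2 π/f), they are the 1/3! and 1/5! coefficients of the exponential series. A minimal numerical check follows (the identification of b and c with the cubic and quintic Taylor coefficients is the assumption being tested):

```python
import cmath

def U_trunc(x, b, c):
    """Truncated expansion U ~ 1 + i x - x^2/2 - i b x^3 + x^4/24 + i c x^5,
    with x standing schematically for sqrt(2) pi / f."""
    return 1 + 1j * x - x ** 2 / 2 - 1j * b * x ** 3 + x ** 4 / 24 + 1j * c * x ** 5

b, c = 1.0 / 6.0, 1.0 / 120.0   # 1/3! and 1/5!
x = 0.1                          # small expansion parameter
print(abs(U_trunc(x, b, c) - cmath.exp(1j * x)))   # residual is O(x^6/720)
```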

The SILH Model
The Strongly-Interacting Light Higgs Model is an effective theory of a possible strong sector responsible for the electroweak symmetry breaking (EWSB). The aim of this model is to determine whether or not the EWSB is due to a strong sector, by testing the interactions of the Higgs and the gauge bosons.
The SILH requirements are that the new strong sector has a spontaneously broken symmetry at low energy and that the Higgs boson is an exact Goldstone boson when the SM interactions are switched off. As for the mass matrix of the light quarks in QCD, the SM interactions are included order by order as small breaking parameters and induce a mass term for the Higgs boson. Little Higgs and holographic composite Higgs models are examples of such theories.
At scales below the resonance masses, the effects of the model are described by a set of dimension-six operators involving the Higgs boson. The coefficients c_i of these operators are free parameters of the model and can thus be fixed to reproduce the low-energy behavior of a more specific model.

FeynRules implementation
χPT in FeynRules. χPT is implemented in FeynRules at the lowest order (5.106), namely at order p^2. In the implemented model, the Lagrangian is expanded up to O(π^6) with arbitrary coefficients b and c. Consequently, vertices depend linearly on scalar products of the momenta and contain up to six scalars. The Lorentz structures of these vertices are not included in the default generic FeynArts file, which contains only renormalizable structures. Let us note that at present this implementation only works with the FeynArts interface.
Since the isospin limit m_u = m_d = m̂ is taken in the FeynRules model file, only η_8 and η_0 mix, and the mass eigenstates are obtained by a rotation. The isospin breaking can easily be added; the major modification is to extend the 2 × 2 matrix in (5.112) to a 3 × 3 mixing matrix, since the three neutral states mix in this case.
However, these effects are small and thus usually neglected. For the same reason, the mass matrix of the quarks is assumed to be real, but the phase allowed by the U(1)_A breaking can easily be added in the definition of the quark mass matrix.
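The rotation to the mass eigenstates mentioned above is a standard symmetric 2x2 diagonalization, with mixing angle θ = ½ atan2(2 M12, M11 - M22). A generic sketch follows (the matrix entries are placeholders, not the actual η_8 and η_0 values):

```python
import math

def mix_2x2(m11, m22, m12):
    """Diagonalize a real symmetric 2x2 (mass-squared) matrix.
    Returns (eigen_low, eigen_high, theta), with theta the rotation
    angle bringing the matrix to diagonal form."""
    theta = 0.5 * math.atan2(2.0 * m12, m11 - m22)
    avg = 0.5 * (m11 + m22)
    rad = math.hypot(0.5 * (m11 - m22), m12)
    return avg - rad, avg + rad, theta

# Hypothetical entries (GeV^2), purely for illustration
lo, hi, theta = mix_2x2(0.30, 0.90, 0.10)
print(lo, hi, math.degrees(theta))
```

Extending to the 3x3 case with isospin breaking is an ordinary symmetric eigenvalue problem, for which the same invariants (trace and determinant) provide a cross-check.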
The lowest-order Lagrangian, Eq. (5.106), can reproduce the experimental data within 20%. These discrepancies can be resolved with next-order corrections. The inclusion of the next-order operators, either O(p^4, 0) [176] or O(p^2, 1/N_c) [177,178], in the Lagrangian is also straightforward, since U is already defined in the model file. In the same way, the weak and electromagnetic interactions could also be added. However, the Lorentz structure of the vertices is hardcoded in the FeynArts interface, so the new structures generated by these new operators need to be added in the interface and in an associated FeynArts generic file, as was already done for the lowest-order Lagrangian.

SILH in FeynRules
The SILH implementation is based on the SM implementation, but restricted to the unitary gauge. The two Lagrangians of Eqs. (5.110) and (5.111) have been added to the SM one, with all the c_i considered as free external parameters. The first one contributes to the kinetic terms of the Higgs and of the gauge bosons, so the physical fields are renormalized versions of the bare fields appearing in the Lagrangian. The gauge couplings also have to be redefined, compared to their SM values, to obtain canonical kinetic terms for the gauge bosons (5.115). All of these redefinitions have been done at first order in ξ = v^2/f^2; any result is thus valid only at this order. It should be noted that all of these non-renormalizable interactions cannot be transferred directly to any other HEP tool for the moment.

Validation of the implemented models
In this section, we first review some generic features of the various validation procedures used to assess the model robustness and then move to the results of the validation process for the Beyond the Standard Model theories presented in the previous section.

Strategy for the validation of the implemented models
In order to validate the implementation of a model inside FeynRules, a first natural check is to compare the obtained Feynman rules, using directly the Mathematica output, against the ones found in the literature. Subsequently, using the existing interfaces to symbolic tools such as FeynArts, we can go further in the validation procedure with analytical checks of some observables, confronting the results obtained with the model files generated by FeynRules with the corresponding expressions found in the literature.
The next step in our validation procedure regards the model files generated through the different interfaces between FeynRules and matrix-element generators. For a given model, we calculate predictions using the set of Monte Carlo tools interfaced to FeynRules, both with the model files generated by FeynRules and with the built-in (stock) implementations of the considered model, when the latter exist. After consistently fixing the set of external parameters to the values corresponding to a chosen benchmark point, results for various quantities, such as decay widths, total cross sections or (unintegrated) squared matrix elements at a given phase-space point, are computed and confronted. For a phase-space integrated observable σ, we evaluate all the possible quantities ∆_ab, where σ_a,b refers to predictions obtained with two specific generators and given model files.
The ∆_ab's quantify possible discrepancies between different implementations. For example, ∆_MG-FR, CH-ST represents the discrepancy between predictions obtained with CalcHep, using the built-in implementation of the considered model (CH-ST), and with MadGraph, using the FeynRules-generated model files (MG-FR). For unintegrated squared matrix elements |M|^2, we generalize this quantity by summing over a given number of phase-space points.
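In practice, the discrepancy measure can be coded in a few lines. The exact normalization of Eq. (6.1) is not reproduced here; the sketch assumes a symmetric relative difference, ∆_ab = 2|σ_a - σ_b|/(σ_a + σ_b), which should be treated as an illustrative choice.

```python
def delta(sigma_a, sigma_b):
    """Symmetric relative discrepancy between two predictions
    (assumed form; Eq. (6.1) may normalize differently)."""
    return 2.0 * abs(sigma_a - sigma_b) / (sigma_a + sigma_b)

def delta_ps(points_a, points_b):
    """Generalization to unintegrated |M|^2 values: accumulate the
    per-point discrepancies over a list of phase-space points."""
    return sum(delta(a, b) for a, b in zip(points_a, points_b))

# Two hypothetical cross sections (pb) for the same process
print(delta(1.234, 1.237))   # ~ 0.0024, well below a 1% agreement threshold
```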
For each model presented in the previous section, the input parameters used in the comparison and some numerical examples are given in Appendix B. The complete list of results can be found on the FeynRules website [52].

The Standard Model
All the Feynman rules obtained with FeynRules were checked against the expressions given in the literature, while the total cross sections for a set of 35 key processes have been evaluated in CalcHep, MadGraph/MadEvent and Sherpa and compared to the existing stock versions. A selection of processes, together with the set of external parameters used for this check, is given in Appendix B.1. Note in particular that MadGraph/MadEvent and Sherpa work in unitary gauge, whereas CalcHep allows for both unitary and Feynman gauges. We therefore also demonstrated explicitly the gauge invariance of our implementation.

The general Two-Higgs-Doublet Model
The validation of the present implementation is done by comparing 2 → 2 matrix elements, for 10 different phase-space points, between the FeynRules unitary-gauge implementations in MadGraph/MadEvent, Sherpa and CalcHep, and the existing stock implementation in MadGraph/MadEvent described in Ref. [9] and distributed with the code. Since all the vertices appearing in the 2HDM are also present in various BSM models presented in this work, such as the MSSM, this limited comparison is clearly sufficient to fully validate our implementation.
The selected benchmark point includes non-trivial values for all the λ_i parameters of Eq. (5.12), as well as Yukawa couplings for the second- and third-generation fermions leading to FCNC effects. Agreement, claimed if the quantity ∆_PS defined in Eq. (6.2) is smaller than 0.1%, has been found for all the tested processes involving various combinations of scalars, fermions and gauge bosons in both the initial and final states. Furthermore, a selection of 185 2 → 2 cross sections has been computed using CalcHep, MadGraph/MadEvent and Sherpa, and compared to the existing MadGraph/MadEvent implementation. In all cases the results agree within 1% between the different codes. Some examples are shown in Appendix B.2. In addition to these checks, the behavior of various cross sections at high energies for processes involving new scalars has also been verified. In each case, the cross section behavior is in agreement with unitarity expectations.

The most general Minimal Supersymmetric Standard Model
We have compared the Feynman rules computed by FeynRules to those which can be found in the literature, both for the general MSSM [179,180] and for a constrained MSSM where all the scalar mixings are neglected [101,181], and we have found agreement for all the vertices. Then, we have re-calculated all tree-level squark and gaugino hadroproduction helicity amplitudes in the case of general (and possibly complex) scalar mixing with the help of FeynArts/FormCalc and the model file generated by FeynRules. The results have been compared to the analytical formulas given in Refs. [111,112], and we found complete agreement.
To validate the FeynRules-generated model files for the various Monte Carlo generators, we compared the results obtained in the very particular limit where CP symmetry and flavor are assumed to be conserved within the whole model, the CKM matrix appearing in the charged-current interactions being thus neglected as well. In addition, in the scalar sector, the flavor-conserving helicity mixings are neglected for first and second generation sfermions. In this scenario, built-in implementations exist in MadGraph/MadEvent and CalcHep. As a benchmark scenario, we choose the typical minimal supergravity point SPS 1a [182], defined by the input parameters given in Appendix B.3. For Sherpa, the validation process is on-going and extensions to other programs that will be interfaced to FeynRules in the future are foreseen. Let us note also that, since CalcHep is not able to deal with files following the SLHA conventions, we had to modify the built-in files in order to correctly fix the free parameters of the model.
We start by evaluating all the 320 two-body decay widths corresponding to kinematically allowed decays in our scenario, in order to check the norm of the various three-point vertices. We find complete agreement for both SM and MSSM processes. In particular, we have confronted the MG-FR implementation with the already validated MG-ST implementation [131]; the agreement between the two predictions was evaluated through the quantity ∆_MG-ST,MG-FR defined in Eq. (6.1), which has been found to be smaller than 1%, the differences hence being associated with pure Monte Carlo statistical error. Some examples can be found in Table 15 of Appendix B.3.
Subsequently, we have investigated the total cross sections for 636 key 2 → 2 processes with the help of both MadGraph/MadEvent and CalcHep, and with both FeynRules-generated and built-in model files. In order to properly compare the different implementations and the unitarity cancellations, we manually set all the widths of the particles to zero. We checked that the six quantities ∆_ab defined in Eq. (6.1), for any set of two predictions a and b, are below the percent level. This check allows us not only to verify the absolute value of all the three-point vertices and some four-point ones, but also to start being sensitive to the relative sign of a large part of those vertices. This is also the very first systematic comparison between predictions obtained with MadGraph/MadEvent and with CalcHep for the MSSM. We have considered the production of either a pair of any Standard Model particles, or a pair of any supersymmetric particles, from any Standard Model initial state. Even though most of these channels are not phenomenologically relevant, because most of the initial states are impossible to realize at a collider, they allow for a sensitive check of each three-point vertex and a large part of the four-point vertices. Let us note that the remaining untested vertices, such as the four-scalar interactions in L^Scalar_FDW in Eq. (5.33), are irrelevant for 2 → 2 leading-order calculations with a Standard Model initial state. We have considered two different cases, one where we have fixed the energy of the initial particles to 600 GeV, and one where we have fixed it to 1 TeV.
We found that the six possible ∆_ab quantities are below the percent level when comparing any two of the four implementations, MG-FR, MG-ST, CH-FR and CH-ST, except for the processes of Eq. (6.3), where the discrepancies are due to the presence of singularities in the matrix elements, which make any of the considered Monte Carlo generators unable to correctly evaluate the total cross section, and except for about 15 processes where the built-in implementation of the MSSM in CalcHep gives results different from those of all the other implementations, due to mistakes in its model files for several triple scalar couplings. For the two processes in Eq. (6.3), we have evaluated the (unintegrated) squared matrix elements |M|^2 at given phase-space points for all models and computed the quantity ∆_PS of Eq. (6.2). Summing over 100 different random phase-space points, we have found that ∆_PS is below 1%, which is sufficient to conclude that the four implementations agree. Some examples of numerical results can be found in Tables 16, 17 and 18 in Appendix B.3; let us recall that the complete list is available on the FeynRules webpage. Finally, in order to test the sign of each three-point vertex and a large part of the four-point vertices, we have evaluated squared matrix elements for 100 random phase-space points and calculated the quantity ∆_PS defined in Eq. (6.2). We have investigated 2708 processes, relative to the production of two supersymmetric particles plus one Standard Model particle at a center-of-mass energy of 2 TeV. Using MG-FR and MG-ST, we have found complete agreement for each process, i.e., ∆_PS was below 0.1%. All the results can be found on the FeynRules website. The comparison with matrix elements calculated by CH-FR and CH-ST is left for a further study.

The Minimal Higgsless Model
Our FeynRules implementation was compared to the LanHep implementation, both in unitary and in Feynman gauge. It was run in Feynman gauge in CalcHep and CompHep, and in unitary gauge in CalcHep, MadGraph/MadEvent and Sherpa. The parameters used for this validation and the cuts on the final-state particles are given in Appendix B.4. In all cases, the cross sections were calculated and compared, and agreement to better than 1% was found.
Some examples of the obtained cross sections are given in the various tables of Appendix B.4. In Table 20, a selection of strong processes is presented. In Tables 21 and 22, charged and neutral electroweak processes are presented, respectively.

Extra Dimensional Models
Large Extra Dimension Model. The Feynman rules for LED obtained by FeynRules were explicitly checked against those available in the literature. Unlike for the models presented above, we cannot use any matrix-element generator to compute cross sections or decay rates, because the interfaces linking FeynRules to Monte Carlo generators do not yet support theories with spin-two particles. Therefore we have chosen to validate our LED model implementation via analytical expressions. We found complete agreement both for the Feynman rules of the generic LED theory, which can be found in Refs. [168,183], and for those of the full LED implementation. Let us note that from the generic LED model validated above, we can extrapolate the results to those of the QCD and electromagnetic parts of the full LED model; this check was performed and was conclusive.

Universal Extra Dimension Model
For the MUED model, we started by confronting the Feynman rules with those available in the literature. In a second step, we have calculated cross sections for 2 → 2 processes and compared the results obtained with the help of MadGraph, Sherpa and CalcHep, and we confronted our results with those obtained with an existing CalcHep implementation [169]. The input parameters for the benchmark point which we have chosen for our numerical validation are given in Appendix B.5. Let us note that the only Higgs field which we have considered is the Standard Model one, even though its first Kaluza-Klein excitation is also implemented. The comparison has been carried out for 118 processes in total [165,166,167,184]. First, using MadGraph and considering Standard Model processes, we have calculated squared matrix elements at given phase-space points and confronted the results obtained with the FeynRules-generated model file with those obtained with the built-in Standard Model implementation. Subsequently, using the FeynRules-generated model files for MadGraph, Sherpa and CalcHep and the existing CalcHep implementation of the MUED model, we have compared total cross sections for the 118 chosen processes at a center-of-mass energy of 1400 GeV. We have found agreement for each of them, and some examples are shown in Tables 24 and 25 in Appendix B.5.

Low-energy effective theories
χPT at lowest order. The χPT model was checked analytically. The expressions of the amplitudes computed in the pen-and-paper work of Ref. [185] have been compared to the ones obtained using FeynRules and FeynArts. The loop amplitudes have been integrated using cutoff regularization, and only the terms quadratic in the cutoff Λ have been computed. More precisely, we computed the one-loop corrections to the two-point functions π-π, η-η, η′-η′ and η-η′. The momentum-independent part of each of these amplitudes, except the last one, corresponds to mass corrections and is b-independent, as it should be. For example, we computed the renormalization of the pion wave function, while its mass correction vanishes. The η′ → ηππ decay amplitude has also been computed at tree level and at one loop. In this last result, the c-dependence also cancels.
Eventually, a total of about 60 diagrams were computed with both methods and perfect agreement was found.

SILH Model
The check of the SILH model consists of an analytic comparison between the decay widths computed in Ref. [171] and computations based on the vertices given by FeynRules. We used the same simplifications, i.e., c_T = 0 and (c_W + c_B) M_W^2/M_ρ^2 = 0, and neglected L_vect. For tree-level decay widths of the Higgs into two fermions, both implementations lead to exactly the same results. For decays into a gauge boson pair, the contributions to the Feynman rules from the higher-dimensional operators depend, e.g. for the hWW vertex, on the momenta p_1, p_2 and p_3 of the Higgs boson and the two W bosons, respectively. As a consequence, the corrections are not just proportional to the SM decay widths, since the vertices have a more complicated kinematic structure. We find results in agreement with Ref. [186]. In the situation where we have the hierarchy of couplings g < g_ρ < 4π, the second term in Eqs. (6.6) and (6.7) is parametrically suppressed with respect to the first one and could thus be neglected, leading to just a rescaling of the SM decay widths [171]. Let us note, however, that in the case where the ratio g^2/(4π)^2 is not too small, this additional term could have a numerical impact on the decay rates.

Outlook
We have described a new framework where BSM physics scenarios can be developed, studied and automatically implemented in Monte Carlo or symbolic calculation tools for theoretical and experimental investigations. The main purpose of this work has been to contribute to streamlining the communication (in both directions) between the theoretical and the experimental HEP communities. The cornerstone of our approach is the Mathematica package FeynRules where any perturbative quantum field theory Lagrangian, renormalizable or not, can be written in a straightforward way and the corresponding Feynman rules obtained automatically. All the relevant information can then be passed through dedicated interfaces to matrix element generators for Feynman diagram calculations at the tree level or at higher orders. The scheme itself looks very simple and is in fact not a new idea. The novelty, however, lies in several technical and design aspects which, we believe, constitute a significant improvement over the past.
First, the use of Mathematica as a working environment for model development gives all the flexibility that is needed for symbolic manipulations. The many built-in features of Mathematica, such as matrix diagonalization and pattern recognition functions, play an important role not only in building robust interfaces to very different codes but also in opening up new possibilities. For instance, besides implementing BSM models, new high-level functionalities/applications can be easily developed by the users themselves, made public and possibly included in subsequent official releases. In other words, the code is naturally very open to community contributions. A typical application that could take advantage of this open structure is the (semi-)automatic development of model calculators inside the FeynRules package itself, including mass spectrum and decay width calculations.
Second, the interfaces to MC codes, all of them quite different in philosophy, architecture and aim, offer the possibility of testing and validating model implementations at an unprecedented level of accuracy. They also maximize the probability that a given model can be dealt with by at least one matrix element generator. For example, purely symbolic generators such as FeynArts/FormCalc can be used for tree-level (or even loop) calculations, which can then be compared with or extended by numerical results from CalcHep, MadGraph/MadEvent or Sherpa. In this respect we note that one of the current major and common limitations of the matrix-element generators (but not of FeynRules) is connected to the use of a fixed library of Lorentz and/or gauge structures for the vertices. These libraries are in principle extendable, but at present this is done by hand and entails tedious and often quite lengthy work. The automatization of this part (and possibly the reduction of higher-dimensional operators to renormalizable ones) through FeynRules would be the final step towards the fully automatic implementation of any Lagrangian in Monte Carlo codes. Work in this direction is already in progress.
With such a framework in place, we hope that several of the current problems and drawbacks in new physics simulations faced by the experimental groups will be alleviated if not completely solved. First, the need for dedicated codes for specific models, which then call for long and tedious validations both from the physics point of view and in their interplay with collaboration software, will be greatly reduced. General purpose tools, from Herwig and Pythia to Sherpa, MadGraph/MadEvent, Whizard, CalcHep/CompHep (and potentially also Helac and Alpgen), several of which have been successfully embedded in frameworks such as Athena (ATLAS) and CMSSW (CMS), offer several ready-to-go solutions for any FeynRules-based model. Reducing the proliferation of highly dedicated tools will greatly simplify the maintenance and reliability of the software and, more importantly, the reproducibility of the MC samples in the medium and long term. In addition, we believe an effort towards making new Lagrangians in FeynRules and the corresponding benchmark points publicly available (for example by the proponents of the model themselves at the same time as the release of the corresponding publication) would be a great advantage for the whole HEP community.
It is our hope that FeynRules will effectively facilitate interactions between theorists and experimentalists. Until now there has not been a preferred way to link the two communities. Various solutions have been proposed and used, most of them plagued by significant limitations. The best available proposal so far has been to use parton-level events in the Les Houches Event File (LHEF) format [187]. This is a natural place to draw a line, given that theorists and phenomenologists can generate events through various private or public tools, and then pass them for showering and hadronization to codes such as Pythia or Herwig. These codes are not only already embedded in the experimental software (for the subsequent detector simulation), but have also been (or will be) tuned carefully to reproduce control data samples.
While we think this is still a useful approach that should certainly be left open and supported, we are convinced that the framework we propose is a promising extension. The deep reason is that, in our approach, there is no definite line between theory and experiment. On the contrary, it creates a very extended region where the two overlap and work can be done in the same common framework. This leaves much more freedom in deciding where exactly the two ends meet and what kind of checks and information can be exchanged. As a result, several practical advantages come for free.
As an example, we recall that the LHEF format only standardizes the information on the events themselves and some very basic global properties, but any information on the physical model (i.e., the explicit form of the Lagrangian) or the parameter choices is in general absent. This is of course due to implementation differences among the various codes, which severely compromise any standardization attempt at this level. It is clear that, in the long run, this might lead to serious problems of traceability of the MC samples and to various ambiguities in understanding experimental analyses (such as the placing of exclusion limits). This problem is completely overcome by the approach advocated here, since models are now fully and unambiguously defined. In this sense FeynRules itself offers the sought-for standardization.
Another, and maybe even more striking, example is that, within FeynRules, model building and/or refinement can in principle be done in real time together with the related experimental analyses. One could imagine, for example, that if the TeV world is as rich as we hope and data start showing hints of new particles or effects, a large number of competing Lagrangians (and not only benchmark points as used in the typical top-down SUSY analyses) could be easily and quickly implemented and readily confronted with data. This could be done in a virtuous loop, where theorists and experimentalists "meet" at a convenient point of the simulation chain. In other words, various top-down and bottom-up studies fit naturally in this framework, partially addressing often-reported worries about the actual possibilities of extracting precise information on BSM physics from LHC data.
Finally, let us also mention a more long-term advantage of the proposed framework. Automatic NLO calculations for SM processes are now clearly in sight, and, in this context, it is reasonable to ask whether those developments can be extended to BSM processes. We believe the answer is yes, and, since this generalization will probably rely on the simultaneous implementation of the model characteristics in different codes (e.g., codes dealing with different parts of the calculation, such as real and virtual corrections, or analytic and numerical results), our approach might also naturally play a crucial role in this context.

Acknowledgments
We are particularly thankful to the organizers and participants of the MC4BSM series of workshops for the many stimulating discussions that have helped in shaping the approach presented in this paper.

A. The FeynRules convention for Standard Model inputs
There are several parameters and particles that have special significance in Feynman diagram calculators. Examples of these are the strong and electromagnetic couplings, the names of the fundamental representation and the structure constants of the strong gauge group and so on. For this reason, we have chosen to fix the names of these objects at the FeynRules level. Adherence to these standards will increase the chances of successful translation.
The strong gauge group has special significance in many Feynman diagram calculators, and a user who implements it should adhere to the following rules. The indices for the fundamental and adjoint representations of this gauge group should be called Colour and Gluon, respectively. Furthermore, the names of the QCD gauge boson, coupling constant, structure constant, totally symmetric tensor and fundamental representation should be given by G, gs, f, dSU3 and T, respectively. Note that α_S is given as the external parameter and g_S as the internal parameter. The description of α_S may be edited, but it should be remembered that, for the Monte Carlo programs that run the strong coupling constant, the value of α_S should be set at the Z pole. For calculation programs that do not run the strong coupling, on the other hand, it should be set according to the scale of the interaction. A description may also be added to the parameter g_S.
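Such a declaration can be sketched as follows. This is a minimal illustration modeled on the style of the FeynRules Standard Model implementation; the numerical value of α_S and the exact set of options shown are indicative and should be checked against the FeynRules manual:

```mathematica
(* Strong gauge group: fundamental index Colour, adjoint index Gluon *)
M$GaugeGroups = {
  SU3C == {
    Abelian           -> False,
    GaugeBoson        -> G,        (* the gluon field *)
    StructureConstant -> f,
    SymmetricTensor   -> dSU3,
    Representations   -> {T, Colour},
    CouplingConstant  -> gs
  }
};

(* alpha_S is the external input (set at the Z pole); g_S is internal *)
M$Parameters = {
  aS == {
    ParameterType    -> External,
    Value            -> 0.118,
    BlockName        -> SMINPUTS,
    InteractionOrder -> {QCD, 2},
    Description      -> "Strong coupling constant at the Z pole"
  },
  gs == {
    ParameterType    -> Internal,
    Value            -> Sqrt[4 Pi aS],
    InteractionOrder -> {QCD, 1},
    Description      -> "Strong coupling constant"
  }
};
```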
The electromagnetic interaction also has special significance in many Feynman diagram calculators, and we outline the following standard definitions. The electric coupling constant should be called ee and the electric charge should be called Q, and their declarations should follow these naming conventions. Moreover, the weak coupling constant name gw and the hypercharge symbol Y are used by some calculators, and the user is encouraged to use these names where appropriate. The masses and widths of particles should be assigned whenever possible; if left out, FeynRules will assign the value 1 to each. Finally, particles are also identified by a PDG number. The user is strongly encouraged to use existing PDG codes in their model wherever possible. If not included, a PDG code will be automatically assigned by FeynRules beginning at 6000001. Note that the reason for choosing α_EW^{-1} as the external input parameter, and not α_EW itself, is only to be compliant with the Les Houches Accord.
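By way of illustration, the electromagnetic sector could be declared along the following lines. This is again only a sketch in the style of the FeynRules SM implementation; the values, block names and the Z boson declaration are indicative:

```mathematica
M$Parameters = {
  aEWM1 == {
    ParameterType -> External,
    Value         -> 127.9,
    BlockName     -> SMINPUTS,
    Description   -> "Inverse of the electroweak coupling constant"
  },
  ee == {
    ParameterType    -> Internal,
    Value            -> Sqrt[4 Pi/aEWM1],
    InteractionOrder -> {QED, 1},
    Description      -> "Electric coupling constant"
  }
};

(* A particle declaration carrying a mass, a width and a PDG code *)
M$ClassesDescription = {
  V[2] == {
    ClassName     -> Z,
    SelfConjugate -> True,
    Mass          -> {MZ, 91.1876},   (* Z pole mass *)
    Width         -> {WZ, 2.4952},
    PDG           -> 23
  }
};
```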

B. Validation tables
In this Appendix we report the main results of our work, i.e., the validation tables. In general, the tables list quantities (such as decay widths or cross sections) that have no direct phenomenological interest, but that are physical, easily reproducible and provide an exhaustive check of the (complex) values of all the couplings of the model. In several instances, other powerful checks (such as gauge invariance, unitarity cancellations at high energy, and so on) have been performed that are not presented here. When possible, comparisons with the so-called "stock implementations", i.e., implementations already available in the Monte Carlo tools, have been made, as well as comparisons between the different Monte Carlo programs, also in different gauges. All numbers quoted in this section are expressed in picobarn and correspond to a collision in the center-of-mass frame. In all cases agreement to better than 1% was obtained.

B.1 The Standard Model
In this section we give the results for the 35 cross sections tested for the SM between CalcHep, MadGraph/MadEvent and Sherpa for a total center-of-mass energy of 550 GeV. A p_T cut of 20 GeV was applied to each final state particle. The set of external parameters used for the test is given in Table 7. A selection of processes is shown in Tables 8 and 9.

Lepton and weak boson processes in the Standard Model

Process        MG-ST     CH-ST     Sherpa    MG-FR     CH-FR     SH-FR
H H → Z Z      6.268e+1  6.266e+1  6.266e+1  6.266e+1  6.266e+1  6.266e+1
H H → W+ W−    9.449e+1  9.450e+1  9.447e+1  9.447e+1  9.448e+1  9.448e+1

Table 9: Cross sections for a selection of SM production processes. The built-in SM implementations in MadGraph, CalcHep and Sherpa are denoted MG-ST, CH-ST and Sherpa, respectively, while the FeynRules-generated ones are denoted MG-FR, CH-FR and SH-FR. The center-of-mass energy is fixed to 550 GeV, and a p_T cut of 20 GeV is applied to each final state particle.

B.2 The general Two-Higgs-doublet model
In this section we give a selection of the results for the 185 cross sections tested for the 2HDM between CalcHep, MadGraph/MadEvent and Sherpa for a total center-of-mass energy of 800 GeV. A p_T cut of 20 GeV was applied to each final state particle. The set of external parameters used for the test is given in Table 10. A selection of processes is shown in Tables 11, 12 and 13.

Process         MG-ST     MG-FR     CH-FR     SH-FR
τ+ ντ → t b     5.097e−1  5.097e−1  5.098e−1  5.099e−1

Table 13: Cross sections for a selection of processes in the 2HDM with two weak bosons in the initial state. The built-in 2HDM implementation in MadGraph is denoted MG-ST, while the FeynRules-generated ones are MG-FR, CH-FR and SH-FR. The center-of-mass energy is fixed to 800 GeV, and a p_T cut of 20 GeV is applied to each final state particle.

B.3 The most general Minimal Supersymmetric Standard Model
In order to fully determine the MSSM Lagrangian at the low-energy scale, it is sufficient to fix the SM sector and the supersymmetry-breaking scenario. We have chosen the typical minimal supergravity point SPS 1a [182], which is completely defined once we fix the values of four parameters at the gauge coupling unification scale and one sign. The complete set of input parameters is shown in Table 14. We then use the numerical program SOFTSUSY [188] to solve the renormalization group equations linking this restricted set of supersymmetry-breaking parameters at the high-energy scale to the complete set of masses, mixing matrices and parameters appearing in L_MSSM at the weak scale. The output is encoded in a file following the SLHA conventions, readable by FeynRules after the use of an additional translation interface taking into account the small differences between the SLHA2 format and the one of our implementation. This interface is available on the FeynRules website, as well as the corresponding SLHA2 output file.
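The chain described above can be sketched as a short Mathematica session. This is purely illustrative: the file names are placeholders, and the calls shown (ReadLHAFile, WriteCHOutput) refer to the parameter-reading routine and CalcHep interface distributed with FeynRules, whose exact options should be checked in the FeynRules documentation:

```mathematica
(* Load FeynRules and the MSSM model implementation *)
$FeynRulesPath = SetDirectory["~/FeynRules"];
<< FeynRules`;
LoadModel["MSSM.fr"];

(* Import the SPS 1a spectrum produced by SOFTSUSY, after translation
   from the SLHA2 format to the format of the implementation *)
ReadLHAFile[Input -> "sps1a_translated.dat"];

(* Extract the Feynman rules and export the model, e.g., to CalcHep *)
vertices = FeynmanRules[LMSSM];
WriteCHOutput[LMSSM];
```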

B.5 Extra Dimensional models
The benchmark point which we have used for our numerical comparison of the MUED implementation in FeynRules to the existing one in CalcHep is defined through the external parameters shown in Table 23. The masses of the first Kaluza-Klein excitations, in particular those of the quark excitations, are computed via one-loop calculations [169,189].

Table 24: Cross sections for a selection of processes in MUED with two gauge bosons in the initial state. The existing MUED implementation in CalcHep is denoted CH-ST, while the FeynRules-generated ones in MadGraph, CalcHep and Sherpa are denoted MG-FR, CH-FR and SH-FR. The center-of-mass energy is fixed to 1400 GeV, and a p_T cut of 20 GeV is applied to each final state particle.

MUED processes with gauge boson excitations
Some examples of the results obtained for the cross sections of 2 → 2 processes producing two Standard Model particles or two Kaluza-Klein excitations are shown in Tables 24 and 25. We set the center-of-mass energy to 1400 GeV and applied a p_T cut of 20 GeV on each final state particle.

MUED processes with fermion excitations

Process          CH-ST     MG-FR     CH-FR     SH-FR
… → t t          9.208e+0  9.211e+0  9.215e+0  9.211e+0
b1S b1S → b b    9.092e+0  9.100e+0  9.104e+0  9.107e+0

Table 25: Cross sections for a selection of fermionic processes in MUED. The existing MUED implementation in CalcHep is denoted CH-ST, while the FeynRules-generated ones in MadGraph, CalcHep and Sherpa are denoted MG-FR, CH-FR and SH-FR. The center-of-mass energy is fixed to 1400 GeV, and a p_T cut of 20 GeV is applied to each final state particle.