The GAMBIT Universal Model Machine: from Lagrangians to Likelihoods

We introduce the GAMBIT Universal Model Machine (GUM), a tool for automatically generating code for the global fitting software framework GAMBIT, based on Lagrangian-level inputs. GUM accepts models written symbolically in FeynRules and SARAH formats, and can use either tool along with MadGraph and CalcHEP to generate GAMBIT model, collider, dark matter, decay and spectrum code, as well as GAMBIT interfaces to corresponding versions of SPheno, micrOMEGAs, Pythia and Vevacious (C++). In this paper we describe the features, methods, usage, pathways, assumptions and current limitations of GUM. We also give a fully worked example, consisting of the addition of a Majorana fermion simplified dark matter model with a scalar mediator to GAMBIT via GUM, and carry out a corresponding fit.


Introduction
The Standard Model (SM) has been exceptionally successful in explaining the fundamental particle physics that underpins the workings of the Universe. Despite these successes, there are still both experimental and theoretical considerations in tension with the SM: the nature of dark matter (DM), the generation of neutrino masses, the hierarchy problem, the cosmological matter-antimatter asymmetry, various experimental anomalies (such as those in flavour physics or the muon g − 2), and the stability of the electroweak vacuum. Clearly, the search for physics beyond the SM (BSM) is a multidisciplinary endeavour. Performing global fits of BSM theories ensures complementarity between experimental and theoretical efforts. This entails consistently predicting multiple observables, comparing them rigorously with experimental results, and using sophisticated statistical sampling techniques in order to obtain meaningful, quantitative inference on theories and their parameters.
The first version of GAMBIT shipped with various parameterisations of the minimal supersymmetric Standard Model (MSSM) at the weak [10] and grand unified theory (GUT) scales [11], a scalar singlet DM extension of the SM [12] and an effective field theory of flavour interactions [6,13]. Since then, studies of vacuum stability for scalar singlet DM [14], generic Higgs portal DM models [15], DM effective field theory models [16,17], axions and axion-like particles [18], additional MSSM models [19], both left- [20] and right-handed neutrinos [7], and inflationary and other cosmological models [9] have expanded the GAMBIT model database substantially.
One of the most notable features of GAMBIT is that it is open source. This allows users to study new models and add their own experimental and theoretical constraints to GAMBIT in a modular manner. Although adding a new model to GAMBIT is largely formulaic, it is not a completely trivial task, as the user requires some understanding of the underlying software design. To this end, we present the GAMBIT Universal Model Machine (GUM): a tool for interfacing symbolic Lagrangian-level tools to GAMBIT, to further automate the procedure of comparing theory to experiment [21].
Not only does automation increase efficiency and effectiveness in BSM physics, it also reduces the scope for human error, which is inevitably introduced when coding complicated expressions by hand. The creation of Lagrangian-level tools has been a very important step in the automation of BSM physics. The original motivation for such tools was to automatically write outputs that could be used for generating matrix element functions, which could in turn be used by Monte Carlo event generators to simulate new physics at particle colliders. The first tool to achieve this was LanHEP [22][23][24][25], originally created to compute vertices for CompHEP [26][27][28] from a simple Lagrangian input. With the release of FeynRules [29][30][31][32], this quickly expanded to generating output for other matrix element codes, such as MadGraph/MadEvent [33][34][35][36][37], CalcHEP [38,39], FeynArts [40][41][42][43], SHERPA [44] and WHIZARD/O'Mega [45,46]. SARAH [47][48][49][50][51][52] was developed around the same time, initially with a particular focus on supersymmetry, but soon expanding to a much larger range of models.
The success of FeynRules and SARAH in generating Feynman rules for use by matrix element generators led to the creation of a new file type, the 'Universal FeynRules Output' (UFO) [53]. UFO files encode information about the particles, parameters and interaction vertices of a given model. They can be generated by both FeynRules and SARAH, and handled by a range of matrix element generators, such as MadGraph, GoSam [54,55] and Herwig++ [56,57].
Although FeynRules and SARAH were both created to solve essentially the same problem, they serve different purposes. FeynRules is concerned with computing Feynman rules for any given Lagrangian, including effective ones, and performing physics at tree level. SARAH on the other hand places far more emphasis on renormalisable theories. As a result, any UV-complete model can be implemented in both FeynRules and SARAH, and any output generated by FeynRules for such models can also be created by SARAH. However, SARAH is also able to compute renormalisation group equations (RGEs) at 2-loop order and particle self-energies at 1-loop order, allowing its 'downstream beneficiaries' SPheno and FlexibleSUSY to generate corrected mass spectra at the 1-loop level.
Although the outputs of SARAH are more sophisticated than those of FeynRules, it also has limitations. Unlike in FeynRules, it is not generally possible to define non-renormalisable theories or higher-dimensional effective theories in SARAH. We therefore provide interfaces to both FeynRules and SARAH, to allow the user to incorporate a vast range of theories into GAMBIT, from effective field theories (EFTs) via FeynRules to complex UV-complete theories in SARAH. We stress that if a model can be implemented in SARAH, then the user should use SARAH over FeynRules, both to use GAMBIT to its full potential and to perform more detailed physics studies. The basic outputs available from GUM in each case are summarised in Table 1.¹

  Generated GAMBIT backend  | FeynRules | SARAH | Usage in GAMBIT
  CalcHEP                   | yes       | yes   | Decays, cross-sections
  micrOMEGAs (via CalcHEP)  | yes       | yes   | DM observables
  Pythia (via MadGraph)     | yes       | yes   | Collider physics
  SPheno                    | no        | yes   | Particle mass spectra, decay widths
  Vevacious (C++)           | no        | yes   | Vacuum stability

Table 1: GAMBIT backends with GUM support and the Lagrangian-level tools used to generate them. Apart from the external packages listed, GUM also produces GAMBIT Core and physics module code tailored to the model and observables of interest.

This manual is organised as follows: in Sec. 2, we describe the code structure and outputs of GUM. In Sec. 3 we give usage details, including installation, the GUM file, and particulars of FeynRules and SARAH model files. In Sec. 4 we provide a worked example, where we use GUM to add a simplified DM model to GAMBIT, and perform a quick statistical fit to DM observables. Finally, in Sec. 5, we discuss future extensions of GUM and summarise. We include details of the new GAMBIT interfaces to CalcHEP, Vevacious and SARAH-SPheno (the auto-generated version of SPheno created using SARAH) in the Appendix.
GUM is open source and part of the GAMBIT 2.0 release, available from gambit.hepforge.org under the terms of the standard 3-clause BSD license.²

Code design
GAMBIT consists of a set of Core software components, a sampling module ScannerBit [3], and a series of physics modules [4][5][6][7][8][9]. Each physics module is in charge of a domain-specific subset of GAMBIT's physical calculations. GUM generates various snippets of code that it then adds to parts of the GAMBIT Core, as well as to some of the physics modules, enabling GAMBIT to employ the capabilities of those modules with the new model.
¹ Some readers will note the absence of FlexibleSUSY from this list; this is due to the complex C++ templates used in FlexibleSUSY, and the fact that supporting it fully as a backend in GAMBIT requires significant development of the class-loading abilities of the backend-on-a-stick script (BOSS) [1]. Once this challenge has been overcome, future versions of GUM will also generate code for FlexibleSUSY and its other flexi-brethren.

² http://opensource.org/licenses/BSD-3-Clause
Within the Core, GUM adds code for any new particles to the GAMBIT particle database, and code for the new model to the GAMBIT models database, informing GAMBIT of the parameters of the new model so that they can be varied in a fit. GUM also generates interfaces (frontends) to the external codes (backends) that it is able to generate. The backends supported by GUM in this manner are those listed as outputs in Table 1.
Within the physics modules, GUM writes new code for the SpecBit [8] module, responsible for spectrum generation within GAMBIT, DecayBit [8], responsible for calculating the decays of particles, DarkBit [4], responsible for DM observables, and ColliderBit [5], the module that simulates hard-scattering, hadronisation and showering of particles at colliders, and implements subsequent LHC analyses.
GUM is primarily written in Python, with the exception of the Mathematica interface, which is written in C++ and accessed via Boost.Python.
Initially, GUM parses a .gum input file, using the contents to construct a singleton gum object. Details of the input format can be found in Sec. 3.3. GUM then performs some simple sanity and consistency checks, such as ensuring that if the user requests DM observables, they have also specified a DM candidate. GUM then opens an interface to either FeynRules or SARAH via the Wolfram Symbolic Transfer Protocol (WSTP), loads the FeynRules or SARAH model file that the user has requested into the Mathematica kernel, and performs some additional sanity checks using the inbuilt diagnostics of each package.
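The kind of consistency check performed at this stage can be sketched as follows. This is a minimal illustration only; the function and option names are hypothetical, not GUM's actual internals, though the two conditions shown (DM observables need a DM candidate, and Vevacious output needs SPheno output) are checks described in this manual:

```python
def check_gum_request(outputs, wimp_candidate=None):
    """Reject obviously inconsistent .gum requests before any
    Mathematica session is opened. `outputs` is the set of
    requested outputs (hypothetical representation)."""
    errors = []
    # DM observables make no sense without a declared DM candidate.
    if "darkbit" in outputs and wimp_candidate is None:
        errors.append("DM observables requested, but no DM candidate specified")
    # Vevacious output relies on symbols produced by SARAH's SPheno routines.
    if "vevacious" in outputs and "spheno" not in outputs:
        errors.append("Vevacious output requires SPheno output")
    return errors
```

For example, requesting DarkBit output without a WIMP candidate would trigger the first error, while a request with a candidate specified passes the check.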
Once GUM is satisfied with the FeynRules or SARAH model file, it extracts all physical particles, masses and parameters (e.g. mixings and couplings). The minimal information required to define a new particle is its mass, spin, colour representation, PDG code and, if it is not self-conjugate, its electric charge. For a parameter to be extracted, it must have an associated LHA block in the FeynRules or SARAH model file, and an index within that block. Additionally, for FeynRules files, the interaction order used in UFO files must be set. For details of the syntax required for all particle and parameter definitions, see Sec. 3.4.3 for FeynRules model files, and Sec. 3.5.3 for SARAH model files.
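For concreteness, the per-particle and per-parameter information just described can be pictured as the following records. This is an illustrative sketch, not GUM's internal representation; the class names, the example PDG code and the example LHA block name are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParticleInfo:
    """Minimal data needed to define a new particle (illustrative)."""
    name: str
    pdg_code: int
    mass: float            # pole mass in GeV
    spin_x2: int           # twice the spin: 0 (scalar), 1 (fermion), 2 (vector)
    colour_rep: int        # dimension of the SU(3)_c representation: 1, 3, 8, ...
    self_conjugate: bool
    electric_charge: Optional[float] = None  # required only if not self-conjugate

@dataclass
class ParameterInfo:
    """A parameter is only extracted if it has an LHA block and index."""
    name: str
    lha_block: str
    lha_index: int

# A Majorana DM fermion: self-conjugate, so no electric charge is required.
# (PDG code and block name are placeholders.)
chi = ParticleInfo("chi", 52, 100.0, 1, 1, True)
gchi = ParameterInfo("gchi", "DMCOUPLINGS", 1)
```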
After extracting particle and parameter information, GUM cross-checks that all particles in the new model exist in the GAMBIT particle database, and adds entries for those that do not. GUM uses this same particle and parameter information to write new entries in both the GAMBIT model database and the SpecBit module. All other calculations rely on a combination of new code within GAMBIT and backends. In the following sections we provide details of the new code generated by GUM in the GAMBIT model database (Sec. 2.1), within the GAMBIT physics modules (Sec. 2.2), and in the form of new backends and their corresponding frontend interfaces in GAMBIT (Sec. 2.3).
Many of the GAMBIT code outputs are only generated if the user elects to generate relevant new backend codes with GUM. Details of which backends must be generated with GUM for it to generate different GAMBIT source files can be found in Table 2.
At the end of its run, GUM outputs to screen a set of suggested commands for reconfiguring and rebuilding GAMBIT and the new backends. It also emits an example GAMBIT input file for running a scan of the new model (yaml_files/new_model_example.yaml, where new_model is the name of the new model).

The GAMBIT model database
For every new model requested, GUM adds a new entry to GAMBIT's hierarchical model database. GUM requires that no model of the same name already exists in the hierarchy, and that there are no entries for it in Models, SpecBit or DarkBit; it will throw an error if any are found. If the requested model is indeed new, GUM creates a new model file new_model.hpp in the Models directory (see Table 2), with the parameters extracted from FeynRules or SARAH.
In addition to the model file, GUM creates a list of expected contents for the model's particle spectrum object in SpectrumContents/new_model.cpp. This includes not just the pole masses of BSM particles, but also the parameters of the model itself, mixing matrices and various SM parameters. GUM also writes a corresponding simple spectrum container, new_model_SimpleSpec, that defines functions for accessing the spectrum contents and exposes them to the GAMBIT Spectrum class.

SpecBit
SpecBit includes structures for storing and passing around the so-called mass spectrum, i.e. the pole masses, mixings and Lagrangian parameters of the model. If SPheno is used to obtain the spectrum, then the R_ξ gauge will be set to ξ = 1, and the Lagrangian parameters are given in the MS-bar scheme for non-supersymmetric models and in the DR-bar scheme for supersymmetric models. The scale at which these parameters are given depends on the spectrum generator, and is stored along with the spectrum in SpecBit.
Following the structure of the simple spectrum container, GUM writes module functions in SpecBit that allow the construction of an object of the Spectrum class. The capability new_model_spectrum and its module functions are declared in the header file SpecBit_new_model_rollcall.hpp and defined in SpecBit_new_model.cpp. The spectrum is either defined directly in terms of phenomenological model parameters, or generated from the Lagrangian parameters using SPheno.
By default, in the absence of a spectrum generator, GUM writes a simple module function in SpecBit, get_new_model_spectrum, that fills a Spectrum object with SM values and the input parameters of the model. If using SARAH to generate GAMBIT code, the pole masses of BSM particles are computed using the tree-level relations provided by SARAH. However, these tree-level masses from SARAH are only used for very simple models, such as those without additional Higgs states, as more complicated models include non-trivial mixings and modify the electroweak symmetry breaking (EWSB) conditions. In the latter case a spectrum generator (e.g. SPheno) should be used. When producing GAMBIT output from FeynRules, however, there are no such relations available, and thus the particle masses are model parameters and the Spectrum object is filled with those.
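The default logic just described can be sketched as follows. This is illustrative Python pseudologic, not GUM's generated C++ code, and all names are hypothetical; the point is that the FeynRules route treats BSM masses as input parameters, while the SARAH route evaluates tree-level mass relations:

```python
def get_new_model_spectrum(sm_inputs, model_params, tree_level_masses=None):
    """Fill a simple spectrum dictionary with SM values and model inputs.
    If SARAH-provided tree-level relations are available, use them to
    compute BSM pole masses; otherwise the masses are themselves inputs
    (the FeynRules route)."""
    spectrum = dict(sm_inputs)
    spectrum.update(model_params)
    if tree_level_masses is not None:
        # SARAH route: tree-level relations, adequate only for simple
        # models without extra Higgs states or modified EWSB conditions.
        for name, relation in tree_level_masses.items():
            spectrum[name] = relation(model_params)
    return spectrum

# Toy usage: one SM input, two model parameters, one tree-level relation.
spec = get_new_model_spectrum(
    {"mZ": 91.1876},
    {"mchi": 200.0, "gchi": 1.0},
    tree_level_masses={"Mchi_pole": lambda p: p["mchi"]},
)
```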
If SPheno output is requested from SARAH for a model, GUM writes a module function, get_new_model_spectrum_SPheno, with the backend requirements necessary to generate the full spectrum, with all particle masses, mixing matrices, etc. Hence, for improved-precision spectra, it is recommended that the user implement their model in SARAH, and request the spectrum to be provided by SPheno.
If Vevacious output is requested, for each new BSM model GUM writes new model-specific code in the SpecBit vacuum stability file, SpecBit/src/SpecBit_VS.cpp, and adds appropriate entries to the corresponding rollcall header. GUM provides two new module functions to interact with Vevacious. The first, prepare_pass_new_model_spectrum_to_vevacious, with capability pass_spectrum_to_vevacious, interfaces the Spectrum object to the Vevacious object. The second, vevacious_file_location_new_model, directs GAMBIT to the location of the input Vevacious files generated by SARAH.

Table 2: Some files are only written or edited when a model-specific version of a particular backend is requested in the .gum file. This is achieved by including the corresponding option backend_name:true, e.g. spheno:true, pythia:true, etc. Entries in the rightmost column indicate where this is the case, i.e. for rows with an entry present in the rightmost column, the file listed in that row will only be modified/generated if the backend(s) named in the rightmost column are requested in the .gum file. Where no such entries exist in the right column, the addition or modification of the GAMBIT source is always performed, regardless of the contents of the .gum file. The entry "wimp_candidate" indicates that the output is only generated if the option of the same name is set in the .gum file, as wimp_candidate:pdg. Note that the entry "decaying_dm_candidate:pdg" can be used in place of wimp_candidate if the DM candidate decays instead of annihilates. All filenames containing new_model are newly created by GUM; all others are existing files that GUM amends.

DecayBit
Whenever decay information is requested for a new model, GUM amends the header DecayBit_rollcall.hpp and source DecayBit.cpp files to add the decays of the particles in the model. The information for the decays can be provided separately by two backends in the GUM pipeline: CalcHEP and SPheno. CalcHEP generates tree-level decays for each new BSM particle, plus new contributions to the decays of existing particles in DecayBit, such as the SM Higgs and the top quark. GUM adds these to DecayBit by adding the new decay channels wherever possible to any existing DecayTable::Entry provided by a module function with capability particle_name_decay_rates. If no such function exists, it instead creates a new module function CH_new_model_particle_name_decays with this capability, where particle_name comes from the SARAH/FeynRules model file. GUM then modifies the module function all_decays to add the decays of any particles for which it has written new module functions. GUM presently only supports the generation of 1 → 2 decays, as support for 3-body decays within DecayBit is currently limited to fully integrated partial widths; consistent inclusion of integrated partial widths provided by SPheno and differential decay widths provided by CalcHEP into DecayBit will require further development of the DecayTable infrastructure, and a careful treatment of integration limits. Note also that GUM does not currently write any BSM contributions to W and Z boson decays via the CalcHEP interface, but this is planned for future releases.
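The merging logic described above can be sketched as follows. This is a toy model of the generated code, with the DecayTable reduced to a nested dictionary of partial widths; the function name and the example numbers are hypothetical:

```python
def add_decay_channels(decay_table, particle, new_channels):
    """Add newly computed 1 -> 2 channels for `particle` to an existing
    DecayTable entry where one exists, otherwise create a fresh entry."""
    entry = decay_table.setdefault(particle, {})
    for final_state, partial_width in new_channels.items():
        # New BSM channels sit alongside any pre-existing channels.
        entry[final_state] = partial_width
    return decay_table

# Existing entry for the SM Higgs gains a new BSM channel (toy widths, GeV),
# while a new BSM particle Y1 gets a fresh entry of its own.
table = {"h0": {("b", "bbar"): 2.3e-3}}
add_decay_channels(table, "h0", {("chi", "chi"): 1.0e-4})
add_decay_channels(table, "Y1", {("chi", "chi"): 0.5})
```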
SPheno by default computes leading-order decays (tree-level or one-loop) for all BSM and SM particles, and adds universal higher-order corrections for specific decays, which are very important for Higgs decays. In addition, it provides an alternative computation of full one-loop decays for all particles. The choice of method is left to the user via the SPheno option OneLoopDecays. As SPheno provides decay widths for all particles in the spectrum, GUM creates a new module function all_new_model_decays_from_SPheno, which returns a DecayTable filled with all decay information computed by SPheno.
The default behaviour of GUM is to ensure that it always generates decay code of some sort when needed. This ensures that a complete DecayTable for the new model can be provided within GAMBIT for dependency resolution, by providing the capability all_decays in DecayBit. A viable GAMBIT DecayTable is required for the functioning of many external codes such as Pythia and micrOMEGAs: if any new particle is a mediator in a process of interest, then its width is needed. To this end, GUM activates CalcHEP output automatically if decays are needed by other outputs that have been activated in the .gum file, but neither CalcHEP nor SPheno output has been explicitly requested.
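This fallback can be summarised in a few lines. The sketch below is illustrative only (the function name is hypothetical); the backend names follow the backend_name:true convention of the .gum file:

```python
def finalise_backends(requested, decays_needed):
    """If decay widths are needed by some requested output, but neither
    CalcHEP nor SPheno output was asked for, activate CalcHEP output."""
    backends = set(requested)
    if decays_needed and not (backends & {"calchep", "spheno"}):
        backends.add("calchep")
    return backends
```

For example, requesting only Pythia output still yields CalcHEP output, since Pythia needs the widths of any new mediators, whereas a request that already includes SPheno is left unchanged.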

DarkBit
If the user specifies a Weakly-Interacting Massive Particle (WIMP) DM candidate for a model, GUM writes the relevant code in DarkBit. Each individual model tree in DarkBit has its own source file, so GUM generates a new source file src/new_model.cpp, and amends the DarkBit rollcall header accordingly. At a minimum, GUM includes a new module function DarkMatter_ID_new_model in this file. It then adds the remainder of the source code according to the backends that the user selects in their .gum file; the currently available options are CalcHEP and micrOMEGAs.
If the user requests CalcHEP output, then a new module function TH_ProcessCatalog_new_model providing the Process Catalogue is written. The Process Catalogue houses all information about the annihilation and decay of DM, and the decays of all other particles in the spectrum. All computations of indirect detection and relic density likelihoods in DarkBit begin with the Process Catalogue; for details, see Sec. 6.3 of [4]. All processes that GUM adds to the Process Catalogue are 2 → 2 processes computed at tree level by CalcHEP.
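Conceptually, the Process Catalogue can be pictured as a registry of 2 → 2 annihilation channels, each carrying a velocity-dependent cross-section. The following toy sketch is not DarkBit's actual TH_ProcessCatalog structure; the class name, the PDG code and the numerical values are hypothetical:

```python
class ToyProcessCatalog:
    """Toy registry of DM annihilation channels and cross-sections."""

    def __init__(self, dm_pdg):
        self.dm_pdg = dm_pdg
        self.channels = {}  # final state -> sigma_v(v_rel), e.g. in cm^3/s

    def add_annihilation(self, final_state, sigma_v):
        self.channels[tuple(final_state)] = sigma_v

    def total_sigma_v(self, v_rel):
        # Sum the contributions of all registered channels.
        return sum(f(v_rel) for f in self.channels.values())

cat = ToyProcessCatalog(dm_pdg=52)
cat.add_annihilation(("b", "bbar"), lambda v: 2.0e-26)            # s-wave
cat.add_annihilation(("tau+", "tau-"), lambda v: 1.0e-27 * v**2)  # p-wave
```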
The Process Catalogue is used to compute the relic abundance of annihilating DM via the DarkBit interface to the Boltzmann solver in DarkSUSY. The Process Catalogue interface does not currently fully support co-annihilations or 3-body final states, so GUM does not generate these processes. Such functionality is planned for future releases of DarkBit, and will be supported by GUM at that time. Note, however, that GUM does support the full generation of micrOMEGAs versions including these effects via the CalcHEP-micrOMEGAs interface. If co-annihilations or 3-body final states are expected to be important in a new physics model, the user should therefore use micrOMEGAs to compute the relic abundance from DarkBit, in preference to the DarkSUSY Boltzmann solver.
For decaying DM candidates, the user would need to implement their own relic density calculation, as appropriate for the specific model in question. Although DarkBit can calculate spectral yields from DM decay, at present the likelihoods for indirect detection in DarkBit do not support decaying dark matter (in large part because neither gamLike nor nulike presently support decaying DM). Existing direct detection likelihoods can still be used out of the box without any relic density calculation, if the user assumes that the decaying DM candidate constitutes all of the DM.
When writing micrOMEGAs output, GUM adds new entries to the ALLOW_MODELS macro for existing micrOMEGAs functions in DarkBit. To use the micrOMEGAs relic density calculator, GUM adds an entry to the module function RD_oh2_Xf_MicrOmegas; for more information, see Sec. 4.2 of Ref. [4]. GUM also provides an interface to the module function DD_couplings_MicrOmegas, which returns a DM_nucleon_couplings object containing the basic effective spin-independent and spin-dependent couplings to protons and neutrons: G^p_SI, G^n_SI, G^p_SD and G^n_SD. This object is readily fed to DDCalc for computing likelihoods from direct detection experiments; for more information, see Sec. 5 of Ref. [4].
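The returned object is essentially a container of four numbers. Schematically, and purely as an illustrative Python stand-in for the C++ DM_nucleon_couplings type (the field and method names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DMNucleonCouplings:
    """Effective DM-nucleon couplings, as handed on to DDCalc."""
    G_SI_p: float  # spin-independent coupling to protons
    G_SI_n: float  # spin-independent coupling to neutrons
    G_SD_p: float  # spin-dependent coupling to protons
    G_SD_n: float  # spin-dependent coupling to neutrons

    def isospin_conserving(self, tol=1e-12):
        """A common sanity check: SI couplings to p and n nearly equal."""
        return abs(self.G_SI_p - self.G_SI_n) < tol

# A purely SI, isospin-conserving example (placeholder values):
c = DMNucleonCouplings(1.0e-9, 1.0e-9, 0.0, 0.0)
```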
For more complicated models, where the standard spin-independent and spin-dependent cross-sections are not sufficient, micrOMEGAs is not able to compute the relevant couplings. In this case, the user should perform a more rigorous calculation of WIMP-nucleon (or WIMP-nucleus) couplings by alternative means. This is required when, for example, scattering cross-sections depend on the momentum exchange between the incoming DM and SM particles, or on their relative velocity. The full set of 18 non-relativistic EFT (NREFT) DM-nucleon operators is defined in both DarkBit and DDCalc 2, and described in full in the Appendix of Ref. [15]. These operators fully take into account velocity and momentum transfer up to second order, and should typically be used in cases where the entirety of the physics is not captured by σ_SI and σ_SD alone. Whilst the new 2.1 version of GAMBIT [16] allows for automated translation of high-scale relativistic effective DM-parton couplings to low-scale NREFT couplings via an interface to DirectDM [75,76], there is no established automated matching procedure for connecting other high-scale models (as defined in FeynRules or SARAH model files) to the Wilson coefficients of the relativistic EFT. GUM therefore does not automatically write any module functions connecting the GAMBIT Spectrum object to the NREFT interface of DDCalc; once such a procedure exists, GUM will be extended accordingly.

ColliderBit
In ColliderBit, simulations of hard-scattering collisions of particles are performed using the Monte Carlo event generator Pythia [77]. These events are passed through detector simulation and then fed into the GAMBIT analysis pipeline, which predicts the signal yields for the new model. These can then be compared to the results of experimental searches for new particles.
For a new BSM model, the matrix elements for new processes unique to the model must be inserted into Pythia in order for it to be able to draw Monte Carlo events from the differential cross-sections of the model. To achieve this, GUM communicates with MadGraph to generate matrix element code for Pythia, and writes the appropriate patch to insert it into Pythia. Alongside the matrix elements, this Pythia patch also inserts any newly defined LHA blocks.
When Pythia output is requested, GUM writes a series of new ColliderBit module functions in the source file ColliderBit/src/models/new_model.cpp, and a corresponding rollcall header file. The new functions give ColliderBit the ability to collect the relevant Spectrum and DecayTable objects from other modules and provide them to the newly-generated copy of Pythia.

In addition to the likelihood from LHC new particle searches, ColliderBit also provides likelihoods associated with Higgs physics. This is done via interfaces to HiggsBounds [78][79][80][81] and HiggsSignals [82,83], which use information on Higgs signal rates and masses from the Tevatron, LEP and the LHC. When a new model is added to GAMBIT with GUM, if SPheno output is requested from SARAH, GUM constructs a new HiggsCouplingsTable used as input to the Higgs likelihoods, and amends the appropriate module function entries in ColliderBit_Higgs_rollcall.hpp and ColliderBit_Higgs.cpp.

Backends
In the GAMBIT framework, backends are external tools that GAMBIT links to dynamically at runtime in order to compute various physical observables and likelihoods. Of the full list of backends that can be interfaced with GAMBIT, a small selection work for generic BSM models. In particular, GUM is able to produce output for SPheno [68,69], Vevacious [74], CalcHEP [39], micrOMEGAs [67], Pythia [77], HiggsBounds [78][79][80][81] and HiggsSignals [82,83]. We therefore briefly describe here the specific outputs generated by GUM for each of these backends, along with any corresponding GUM and GAMBIT YAML input file entries needed to use them. Unless otherwise stated, GUM has been developed to work with specific versions of each backend.

SARAH-SPheno 4.0.3

Required .gum file entry: spheno:true
SPheno is a spectrum generator capable of computing one-loop masses and tree-level decay branching fractions in a variety of BSM models. The model-specific code is generated by SARAH and combined with the out-of-the-box SPheno code into a single backend for GAMBIT. For each model, GUM thus provides an interface between GAMBIT and this combined SPheno version via a new frontend, SARAHSPheno_new_model_4_0_3.cpp. Details of this interface, which differs significantly from the SPheno interface described in [8], can be found in Appendix B.2.
In order to generate SPheno output from SARAH, the user must provide a SPheno.m file in the same directory as the SARAH model files. For details of the contents of these files, we refer the reader to the SARAH manual [52].
Once the appropriate GAMBIT code is generated by GUM, the new capability new_model_spectrum is added to SpecBit to compute the spectrum using SPheno. The new SPheno-generated Spectrum object can be obtained for a specific model in a run via the following entry in the GAMBIT YAML input file:

  ObsLikes:
    - purpose: Observable
      capability: new_model_spectrum

As usual, if more than one module function can provide the same capability, as can happen, for example, if FlexibleSUSY is also present, the SPheno-specific one can be selected by a rule in the GAMBIT input file:
  Rules:
    - capability: new_model_spectrum
      function: get_new_model_spectrum_SPheno

In addition to the masses and mixings of the particles in the spectrum, SPheno can compute their tree-level decay branching fractions, including some radiative corrections to the decays of Higgs bosons. The GUM-generated code in DecayBit includes the new module function all_new_model_decays_from_SPheno, which returns a GAMBIT DecayTable as computed by SPheno. This provides an alternative to the usual all_decays function in DecayBit. When run with default settings, GAMBIT will preferentially select all_new_model_decays_from_SPheno whenever possible, rather than all_decays, as the former is more model-specific. If necessary, the user can also manually instruct GAMBIT to use this function by specifying a rule for the decay_rates capability:

  Rules:
    - capability: decay_rates
      function: all_new_model_decays_from_SPheno

To build a newly-added SARAH-generated SPheno ("SARAH-SPheno") within GAMBIT, the appropriate commands to run in the GAMBIT build directory are

  cmake ..
  make sarah-spheno_new_model

where the backend name sarah-spheno is used to differentiate the corresponding code from the out-of-the-box version of SPheno.
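The selection behaviour described above can be illustrated with a toy resolver. This is a sketch of the idea only, not GAMBIT's actual dependency resolver:

```python
def resolve_capability(capability, providers, rules):
    """Choose which module function provides `capability`: a YAML rule
    naming a valid function wins; otherwise there must be exactly one
    provider (toy version of GAMBIT's dependency resolution)."""
    for rule in rules:
        if rule.get("capability") == capability and rule.get("function") in providers:
            return rule["function"]
    if len(providers) == 1:
        return next(iter(providers))
    raise RuntimeError(f"Ambiguous providers for {capability}: {sorted(providers)}")

# With two candidate providers, a rule picks out the SPheno one.
choice = resolve_capability(
    "new_model_spectrum",
    {"get_new_model_spectrum", "get_new_model_spectrum_SPheno"},
    [{"capability": "new_model_spectrum",
      "function": "get_new_model_spectrum_SPheno"}],
)
```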
Vevacious

Vevacious computes the stability of the scalar potential for generic extended scalar sectors [74]. A recent C++ version [84] has been interfaced to GAMBIT as a backend [85], and is the version used by GUM. The GAMBIT interface to Vevacious is explained in more detail in Appendix B.3.
To test the stability of the EWSB vacuum, Vevacious checks whether other, deeper minima exist, in which case it computes the tunnelling probability to either the nearest (to the EWSB vacuum) or the deepest of such minima. The user can select which of these to compute by using the sub_capabilities option in the GAMBIT input entry for the capability VS_likelihood:

  ObsLikes:
    - purpose: LogLike
      capability: VS_likelihood
      sub_capabilities:
        - global
        - nearest

If both minima are chosen, Vevacious computes the probability of tunnelling to both if they are different. The capability compare_panic_vacua in GAMBIT checks if the nearest minimum is also the global minimum. If that is the case, the transition to the minimum is only computed once, which reduces the computation time significantly. In many instances, such as for the MSSM, the tunnelling path optimisation within Vevacious can be a very time-consuming step; computing the tunnelling probability to both minima is therefore not always recommended, as it requires running Vevacious twice for parameter points where the minima differ, and can thus be prohibitively slow for very large scalar sectors.
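The time saving from compare_panic_vacua comes from a simple short-circuit, which can be sketched as follows (illustrative only; the function names are hypothetical):

```python
def panic_vacua_transitions(nearest, global_min, tunnelling_time):
    """Compute the tunnelling time to the nearest and global minima,
    running the (expensive) calculation only once when they coincide."""
    if nearest == global_min:
        t = tunnelling_time(global_min)  # a single Vevacious run suffices
        return {"nearest": t, "global": t}
    return {"nearest": tunnelling_time(nearest),
            "global": tunnelling_time(global_min)}

# Count how often the expensive calculation runs when the minima coincide.
calls = []
def expensive(minimum):
    calls.append(minimum)
    return 1.0

result = panic_vacua_transitions("vev1", "vev1", expensive)
```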
For each minimum, Vevacious by default computes both the zero-temperature (quantum) tunnelling probability and the finite-temperature (thermal fluctuation) probability. Quantum or thermal tunnelling alone can be selected by providing options to the sub_capabilities entries. If both minima are selected, the same tunnelling strategy must be selected for both. GAMBIT computes the likelihood by combining the decay widths for all (independent) transitions as reported by Vevacious. For each new model, SARAH generates the model files required by Vevacious, and GUM moves them into the patch directory in GAMBIT. Note that if the user wishes to request Vevacious output, they must also request SPheno output (and provide a SPheno.m file). This is because the GUM interface uses Mathematica symbols provided by SARAH's SPheno routines.
Once a new model has been generated by GUM, Vevacious can be built from within the GAMBIT build directory with the command

  make vevacious

which will either download and build Vevacious if it is not installed, or simply move the new model files from the GAMBIT patch directory to the Vevacious directory if it is already built. Note that building Vevacious for the first time will also download and install MINUIT [86], PHC [87] and HOM4PS2 [88].

CalcHEP 3.6.27
Required .gum file entry: calchep: true. Optional: wimp_candidate: pdg for annihilating DM, or decaying_dm_candidate: pdg for decaying DM.

GUM uses the backend convenience function CH_Decay_Width, provided by the new CalcHEP frontend (described in Appendix B.1), to compute tree-level decay widths.
For each new BSM decay, GUM generates a model-specific DecayTable::Entry. For each newly-added decaying particle, GUM writes a module function CH_new_model_particle_name_decays, which requires the ability to call the backend convenience function CH_Decay_Width. All new decays are then gathered up by the existing DecayBit function all_decays, which GUM modifies by adding an if(ModelInUse(new_model)) switch for newly-added decaying particles in the new model.
The appropriate GAMBIT input rule for a CalcHEP-generated DecayTable is a simple Rules entry selecting the all_decays function. In the case of wimp_candidate: pdg, GUM uses the backend convenience function CH_Sigma_V, provided by the CalcHEP frontend, to build the Process Catalogue. It does this by computing 2 → 2 scattering rates as a function of the relative velocity v_rel, which are in turn fed to the appropriate module functions.
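The DecayTable rule mentioned above might look as follows; the capability name decay_rates is an assumption, and should be checked against the DecayBit documentation:

```yaml
Rules:
  - capability: decay_rates
    function: all_decays
```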
In the case of decaying_dm_candidate:pdg, GUM instead utilises the branching ratios from the DecayTable::Entry for the decaying DM candidate, which can be constructed from CalcHEP, as described above.
The information contained within the Process Catalogue can be used by the GAMBIT native relic density solver for annihilating DM (using the function RD_oh2_general via DarkSUSY), and for all indirect detection rates. For annihilating DM, these utilise the velocity-weighted annihilation cross-section σv_rel. This is usually evaluated at v_rel = 0 (such as in the case of γ rays), but for solar neutrinos, v_rel is set to the solar temperature T_Sun. For decaying DM, indirect detection rates are explicitly disabled until backend support for decaying DM becomes available.
In the first release of GUM, there is no support for 4-fermion interactions in CalcHEP, as neither FeynRules nor SARAH is able to produce output for these.
micrOMEGAs uses CalcHEP files as input, so it is subject to the same caveats as CalcHEP, covered above. micrOMEGAs assumes that there is an additional symmetry under which the SM is even and any dark matter candidate is odd. micrOMEGAs distinguishes an odd particle by having its name begin with a tilde, such as ~chi; if no particle name in a theory begins with a tilde, the micrOMEGAs routines will fail.4 The particle name is set by the ParticleName option in FeynRules, and the OutputName option in SARAH. If the indicated DM candidate does not have a particle name beginning with a tilde, GUM throws an error.
GUM provides a simple interface to the relic density calculation in micrOMEGAs for annihilating DM. The GAMBIT input entry for computing the relic density with micrOMEGAs is:

  Rules:
    - capability: RD_oh2
      function: RD_oh2_MicrOmegas

If the user provides a decaying DM candidate, GUM does not provide an interface to the micrOMEGAs relic density routines; a user who wishes to compute the relic density for a decaying DM candidate must implement their own relic density calculation by hand.
GUM also provides a simple interface to the direct detection routines in micrOMEGAs, which calculate the spin-independent and spin-dependent cross-sections; these are added for both annihilating and decaying DM. The results are fed to DDCalc [4,15], which computes expected rates for a wide range of direct detection experiments.
As each installation of micrOMEGAs is a separate backend, each requires a specific build command to be run in the GAMBIT build directory:

  make micromegas_new_model

Future versions of GAMBIT and GUM will interface to micrOMEGAs 5 [89], which contains routines for computing the relic abundance of DM via freeze-in, and allows for two-component DM.

If the user requests Pythia output for a given model, either FeynRules or SARAH generates a collection of UFO files. GUM calls MadGraph directly using the UFO model files, and generates new output for Pythia in the form of matrix elements. GUM then writes the appropriate entries in the backend patch system, connects the new matrix elements with those existing in Pythia, and adds the corresponding entry to the file default_bossed_versions.hpp. Because Pythia is a C++ code, and the Pythia class it defines is used directly in ColliderBit, new versions of the backend must be processed for automated class-loading from the corresponding shared library by BOSS (the backend-on-a-stick script [1]). This process generates new header files that must be #included in GAMBIT itself, and therefore picked up by CMake. A new version of Pythia is therefore correctly built by running the commands

  cmake ..
  make pythia_new_model
  cmake ..
  make -jn gambit

in the GAMBIT build directory, where n specifies the number of processes to use when building. In the current version of ColliderBit, functions from nulike [90,91] are also required in order to perform inline marginalisation over systematic errors in the likelihood calculation; nulike can be built with

  make -jn nulike

also in the GAMBIT build directory.

The matrix elements generated by MadGraph can include extra hard partons in the final state, so jet-parton matching is needed to avoid double counting between the matrix elements and the parton shower in Pythia. Currently this is not automated in GUM, and must be implemented by the user as needed.

Pythia 8.212
Traditional MLM matching, e.g. as found in MadGraph [36], applies a kT jet-measure cut on partons (xqcut) at the matrix-element level, and separates events with different multiplicities. The optimal value of this cut should be related to the hard scale of the process, e.g. the mass of the produced particles, and tuned to ensure smoothness of differential jet distributions and invariance of the cross-section.
The hard scattering events are then showered, and a jet-finding algorithm (in our case the kT-algorithm [92,93] implemented in Pythia's SlowJet class) is used on the final-state partons to match the resulting jets to the original partons from the hard scatter. A jet is considered matched to the closest parton if the jet measure kT(parton, jet) is smaller than a cutoff qCut. This parton-shower cut, qCut, should be set slightly above xqcut. The event is rejected unless each jet is matched to a parton, except for the highest-multiplicity sample, where extra jets are allowed below the kT scale of the softest matrix-element parton in the event.
While a full implementation of this matching procedure for use in GUM is still in development, the user can perform simple jet matching for models generated with GUM using the MLM matching machinery already present in Pythia. This is accessed via the ColliderBit initialisation of Pythia, which allows the relevant jet-matching inputs to be passed through Pythia settings in the GAMBIT YAML file. With this approach one can approximate the matching for a single extra hard parton in dark matter pair production, by applying the matching cuts only on the Pythia side, on events from the matrix elements generated by MadGraph. Here, we also apply a pT cut on the partons in the hard scatter in Pythia (pTHatMin). While this pT cut and the xqcut are not equivalent, the difference is small for single-jet events, because the geometrical part of the kT jet measure becomes unimportant.
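A sketch of what such settings might look like when passed through the Pythia settings in the GAMBIT YAML file; the numerical values are illustrative assumptions only, and must be tuned per model and analysis:

```yaml
pythia_settings:
  - PhaseSpace:pTHatMin = 50   # hard-scatter pT cut (hypothetical value)
  - JetMatching:merge = on     # enable MLM-style matching in Pythia
  - JetMatching:qCut = 80      # parton-shower matching scale, set above xqcut
  - JetMatching:nJetMax = 1    # single extra hard parton
```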
A side-effect of using pTHatMin is that it applies to all final-state particles, so it must be set well below any missing transverse energy cuts in the analysis. A poor initial cross-section estimate in Pythia can lower the phase-space selection efficiency, inflating the computation time; to combat this, pTHatMin may be raised, requiring a sensible balance between efficiency and the analysis cuts. Due to the limitations of this approach, the accuracy of the jet matching cannot be guaranteed, and should be confirmed on a per-model, per-analysis basis.
We refer the reader to the ColliderBit manual [5] for additional details on the Pythia options used within the GAMBIT YAML file.
The particle numbering scheme used by both GAMBIT and Pythia is that of the PDG. For dark matter particles to be correctly recognised as invisible by both libraries, their PDG codes must be within the range 51-60. Other particles that Pythia and GAMBIT tag as invisible are the SM neutrinos, neutralinos, sneutrinos, and the gravitino. Where possible, all particles in SARAH and FeynRules files passed to GUM by the user should adhere to the PDG numbering scheme. For more details, see Sec. 43 of the PDG review [94].
GUM checks that any newly-added particle in the GAMBIT particle database is consistent with the definition in Pythia; if there is an inconsistency between the two, GUM will throw an error. For example, the PDG code 51 is not filled in the GAMBIT particle database by default, but is reserved for scalar DM in Pythia, so GUM will throw an error if the user attempts to add a new particle with PDG code 51 but with spin 1/2.

A further advantage of using SPheno to compute decays is that all relevant couplings for HiggsBounds and HiggsSignals are automatically computed. Whenever SPheno output is generated, GUM also generates an interface to the GAMBIT implementations of HiggsBounds and HiggsSignals via the GAMBIT type HiggsCouplingsTable.
GUM achieves this by generating a function that produces an instance of the GAMBIT native type HiggsCouplingsTable from the decay output of SPheno.
The HiggsCouplingsTable object provides all decays of neutral and charged Higgses, SM-normalised effective couplings to SM final states, branching ratios to invisible final states, and top decays into light Higgses. For more details, we refer the reader to the SpecBit manual [8].
GAMBIT categorises models into two types: 'SM-like' refers to models with only the SM Higgs plus other particles, and 'MSSM-like' refers to models with extended Higgs sectors. The appropriate type is automatically selected for each model by the GAMBIT dependency resolver, by activating the relevant one of the ColliderBit module functions that can provide the capability HB_ModelParameters.
For 'SM-like' models, GUM edits the ColliderBit module function SMLikeHiggs_ModelParameters to simply pass details of the single Higgs boson from the Spectrum object of the new model. For 'MSSM-like' models, GUM edits the ColliderBit function MSSMLikeHiggs_ModelParameters, which communicates the properties of all neutral and charged Higgses to HiggsBounds/HiggsSignals in order to deal with extended Higgs sectors.
To ensure the interface to the HiggsCouplingsTable works as expected, the user should make sure that the PDG codes of their Higgs sector mimic those of both GAMBIT and the SLHA:

  # CP-even neutral Higgses
  h0: [25, 35, 45]
  # CP-odd neutral Higgses
  A0: [36, 46]
  # Charged Higgs
  H+: 37

The MSSMLikeHiggs_ModelParameters function automatically supports all MSSM and NMSSM models within GAMBIT, as well as any model with a similar Higgs sector (e.g. a Two-Higgs-Doublet Model, or any subset of the NMSSM Higgs sector). If the user has an extended Higgs sector beyond this, i.e. with more Higgses than the NMSSM, they will need to extend both GUM and GAMBIT manually.
On the GAMBIT side, if the Higgs sector has multiple charged Higgses, more than three CP-even or more than two CP-odd neutral Higgses, the user must write a new function in ColliderBit/src/ColliderBit_Higgs.cpp to construct the HiggsCouplingsTable correctly. If there are new CP-even Higgses, this will also require a new entry in Elements/src/smlike_higgs.cpp to determine the 'most SM-like' Higgs.
In GUM, the user must add the PDG codes of additional mass eigenstates to the function get_higgses in gum/src/particledb.py, under the appropriate entries neutral_higgses_by_pdg and charged_higgses_by_pdg, and also make appropriate changes to the function write_spectrum_header in gum/src/spectrum.py to reflect any changes to the construction of the spectrum.

GUM itself need not be rebuilt once a new model has been added, and all CMake flags used in GUM are entirely independent from those used within GAMBIT. From the GAMBIT root directory, the following commands will build GUM:

  cd gum
  mkdir build
  cd build
  cmake ..
  make -jn

where n specifies the number of processes to use when building.

Running GUM
The input for GUM is a .gum file, written in YAML format. This file contains all of the information required for GUM to write the relevant module functions for GAMBIT, in a similar vein to the input YAML file used by GAMBIT. GUM is executed on an initialisation file new_model.gum with the -f flag, as in

  ./gum -f new_model.gum

There are three operational modes of GUM: dry-run, regular and reset. During a dry run, no code is actually written to GAMBIT: GUM checks that the Mathematica model file (either FeynRules or SARAH) is suitable for use, and writes a number of proposed source files, but does not copy them to the GAMBIT source directories. This mode can be used for safe testing of new .gum and model files, without modifying any part of GAMBIT.
A regular run of GUM will perform all necessary checks, add the new model to GAMBIT and generate all relevant GAMBIT code requested in the .gum file. After a regular GUM execution, GUM prints a set of commands to standard output for the user to run. It is recommended that the user copies these commands and runs them as instructed, as the order of the suggested build and CMake steps can be important, due to new templated C++ types being provided by backends (currently just Pythia).
In addition to the above, GUM outputs a reset (.mug) file after a successful run. This file is used in the reset mode, and enables the user to remove a GUM-generated model from GAMBIT. Hence, after adding a new model, the user can run the command

  ./gum -r new_model.mug

which will remove all source code generated by GUM associated with the model new_model. Note that if the user manually alters any of the auto-generated code, the reset functionality may not work as expected.

Input file and node details
GUM input files are YAML files in all but name: they are written in YAML format and respect YAML syntax. The only mandatory nodes in a GUM input file are the math node, specifying details of the Mathematica package used, and the output node, which selects the GAMBIT backends that GUM should generate code for.
The full set of recognised nodes in a .gum file comprises the mandatory math and output nodes described above, the optional output_options and invisibles nodes, and a DM candidate node: wimp_candidate: pdg for annihilating DM, or decaying_dm_candidate: pdg in the case of decaying DM. Note that only one DM candidate can be specified at present. Future versions of GAMBIT will allow for multiple DM candidates, and for the lightest stable particle (LSP) to be determined by the Spectrum object.
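A minimal .gum file therefore takes roughly the following shape; the model name and PDG code are placeholders, and the node spellings follow the examples elsewhere in this paper:

```yaml
math:
  package: feynrules
  model: new_model
wimp_candidate: 52
output:
  calchep: true
```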
Any additional particles that are to be treated as invisible when calculating missing momentum in collider analyses are given by the invisibles node. They are given in a list, with anti-particle PDG codes also required when necessary. If this node is not present, ColliderBit will use the default list provided in contrib/heputils/include/HEPUtils/Event.h.

  # Specify the PDGs of any additional invisibles
  invisibles: [9900001, -9900001]
The output node specifies the backends for which GUM should generate code. The default for each possible backend output is false; if the output node is empty, or all backend outputs are set to false, GUM terminates with an error message. The output_options node allows the user to pass specific settings for each backend to GUM. We briefly go through these in turn.

To tell MadGraph which processes to generate Pythia matrix elements for, the user should provide a list of all BSM processes in MadGraph syntax under the output_options::pythia::collider_processes sub-node. The user needs to know the names of each particle within MadGraph in order to fill in this information.
While Pythia is able to perform its own showering of initial jets, these will be very soft. If the user specifically requires hard ISR jets, such as for a monojet signal associated with DM pair production, the corresponding matrix elements should be explicitly requested. In doing so, the user must take care that collider events are not double-counted, i.e. that jet matching is performed. We explain our treatment of jet matching in Pythia in Sec. 2.3.5.
For example, to generate matrix elements for monojet and mono-photon production in association with pair production of a DM candidate X, one would include the corresponding processes in this list. The collider_processes sub-node is currently required whenever pythia: true is set in the .gum file. As with the existing Pythia functionality in ColliderBit, the new function getPythia_new_model introduced to ColliderBit by GUM recognises a YAML option pythia_settings, which can be provided by the user in their GAMBIT input file. In particular, the boolean sub-option new_model:all of the pythia_settings option allows one to activate all processes specified in the collider_processes entry of the .gum file.
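A sketch of the monojet and mono-photon request described above; the particle names X and Xbar are assumptions for whatever names MadGraph assigns to the DM candidate and its conjugate in the model at hand:

```yaml
output_options:
  pythia:
    collider_processes:
      - p p > X Xbar j    # monojet
      - p p > X Xbar a    # mono-photon
```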
Other sub-nodes of the .gum file's pythia entry offer the ability to use the native multiparticle description within MadGraph (multiparticles), and to select events with the relevant particles in the initial and final state (pythia_groups). An example including all available sub-nodes for the pythia entry is shown below:

  collider_processes:
    - p p > chi0 chi0bar j
    - p p > chi+ chi-
    - p p > chi- H+
  # Define some groups so we can import processes
  # with these particles in the initial or final
  # states when we do a scan
  pythia_groups:
    - Neutralino: [chi0_1, chi0_2, chi0_3, chi0_4]

In this example, including the pythia_groups node will generate an additional group of events known as newSUSYNeutralino:all, which can also be set in the pythia_settings option of the new ColliderBit module function getPythia_newSUSY. Setting this flag to on picks out all processes in which any of the particles in the pythia_group appears as an initial or final state. This is useful when one wishes to simulate events for only a specific subset of the processes for which matrix elements have been generated for the new model.

Outputs
FeynRules is designed to study particle physics phenomenology at tree level, and does not directly interface to any spectrum generators. FeynRules is therefore well suited to EFTs and simplified models, as gauge invariance and renormalisability are not typically required in these cases. Because of this, when working from the outputs of FeynRules, GUM is only able to provide minimal interfaces to the SpecBit module and the GAMBIT model database.
FeynRules is able to output two file formats usable by GUM: CalcHEP (.mdl files) and UFO files. GUM uses .mdl files with CalcHEP to compute tree-level decay rates and DM annihilation cross-sections, and with micrOMEGAs to compute DM relic densities and direct detection rates. The UFO files are currently only used by the MadGraph-Pythia 8 chain, for collider physics. See Sec. 2 for details.

Porting a FeynRules model to GAMBIT
To add a model to GAMBIT based upon a FeynRules file, GUM looks for the model file new_model.fr, along with any restriction (.rst) files that the user may wish to include, first in gum/contrib/FeynRules/Models/new_model/ and then in gum/Models/new_model/, where the folder gum/ is located inside the GAMBIT root directory. FeynRules allows a model to be implemented as an extension of another, base model; this behaviour is also possible with GUM via the additional option base_model. In this case, for a model SingletDM built on the base model SM, GUM expects SM.fr to be located in Models/SM/ and SingletDM.fr to be in Models/SingletDM/ (where both paths can be independently relative to gum/ or to gum/contrib/FeynRules/). After loading the model, GUM performs some diagnostics on the model to ensure its validity, checking that the Lagrangian is Hermitian, and that all kinetic and mass terms are correctly diagonalised according to the FeynRules conventions. For more details, we refer the reader to the FeynRules manual [32].
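The base_model option would then appear in the math node of the input file; the following sketch assumes the SM/SingletDM example above:

```yaml
math:
  package: feynrules
  model: SingletDM
  base_model: SM
```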

Requirements for FeynRules files
GUM interacts with loaded FeynRules files via the EParamList and PartList commands. To successfully parse the parameter list, every parameter must have a BlockName and OrderBlock associated with it.
A model implemented in FeynRules will be parametrised in GAMBIT by the full set of parameters denoted as external, via ParameterType -> External in the input .fr file. Additionally, all masses of non-SM particles are added as input parameters, as they are not computed by a spectrum generator.
For example, the SM extended by a scalar singlet S via a Higgs portal with the interaction Lagrangian L ⊃ λ_hs H†H S² would be parametrised in GAMBIT by the coupling λ_hs, as well as the mass of the new field m_S.
The user should not use non-alphanumeric characters (apart from underscores) when defining parameter names (including the ExternalParameter field), as this will typically result in errors when producing output. The exception to this is a tilde, which is often used to signify a conjugate field, or in the case of micrOMEGAs a DM candidate.
For the MadGraph-Pythia pathway to work correctly, each new external parameter must also have its InteractionOrder set; see Sec. 6.1.7 of the FeynRules manual for details [32]. In a fully compliant particle entry, the ParticleName begins with a tilde, so that micrOMEGAs can correctly identify the particle as a WIMP DM candidate; the PDG code is assigned to 52 (generic spin-1/2 DM, as per the PDG); and the particle mass mchi is added as an external parameter. Note that for a particle with SelfConjugate -> True, GUM does not require the electric charge to be set. If the particle were Dirac, i.e. SelfConjugate -> False, GUM would require the additional entry QuantumNumbers -> {Q -> 0}.
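A sketch of such a particle definition in a FeynRules model file; the class number, names and default mass value are illustrative assumptions, not taken from the GUM tutorial files:

```mathematica
F[100] == {
  ClassName     -> chi,
  ParticleName  -> "~chi",        (* tilde so micrOMEGAs treats it as DM *)
  SelfConjugate -> True,          (* Majorana: no electric charge entry needed *)
  Mass          -> {mchi, 100.},  (* mchi becomes an external parameter *)
  Width         -> 0,
  PDG           -> 52             (* generic spin-1/2 DM *)
}
```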
For a particle η that should decay, an appropriate entry for the particle width would look like Width -> {weta, 1.}, enabling the contents of the DecayTable to be passed to CalcHEP. Note that in this case, weta will not be set as a free parameter of the model in GAMBIT, but derived from the model parameters and accessible channels.
Although FeynRules is able to compute the Feynman rules for a theory containing 4-fermion interactions, it does not support generating CalcHEP files for such models.6 The first release of GUM therefore does not support theories implemented in FeynRules with 4-fermion interactions; such support is planned for future releases.

Outputs
As shown in Table 1, SARAH is able to generate output for CalcHEP, micrOMEGAs, Pythia, SPheno and Vevacious. As SARAH can generate CalcHEP, MadGraph/Pythia and micrOMEGAs output, it can mirror the capabilities of FeynRules in the context of GUM.
SARAH has been labelled a 'spectrum generator generator', as it can automatically write Fortran source code for SPheno for a given model. GUM is able to automatically patch the SPheno source code generated by SARAH, and write a frontend interface to that SARAH-SPheno version.

Porting a SARAH model to GAMBIT
To add a model to GAMBIT based upon a SARAH file, the model file new_model.m must be located in gum/contrib/SARAH/Models/new_model/ or gum/Models/new_model/. The usual SARAH files parameters.m and particles.m should also be present in one of these locations. To generate spectra via SPheno, a SPheno.m file must also be provided in the same directory.
GUM loads a new model in SARAH by invoking the command Start["new_model"], which is selected by the .gum entries

  math:
    package: sarah
    model: new_model

In order to validate the model, GUM uses the SARAH command CheckModel[]. SARAH provides the results of the CheckModel[] function only via stdout and error messages; GUM therefore captures the output and message streams from Mathematica in order to gather this information, and decides whether any errors should be considered fatal. Non-fatal errors, including gauge anomalies, possible allowed terms in the Lagrangian or missing Dirac spinor definitions, are directed to GUM's own standard output as warnings. Fatal errors, such as non-conservation of symmetries or those associated with particle and parameter definitions, cause GUM to abort, as subsequent steps are guaranteed to fail in these cases. A parameter whose SARAH definition carries a LesHouches output entry (e.g. a Yukawa coupling assigned to a block YN) instructs GUM to add a LesHouches block YN to the SimpleSpec definition, which will be filled by a spectrum generator.

Requirements for SARAH files
GUM is concerned with the properties of physical particles in the mass basis (designated [EWSB] in SARAH). In a particle implementation in the particles.m file, the important entries are: the Mass entry, where Mass -> LesHouches signifies that the particle mass will be provided by the GAMBIT Spectrum object (whether that is filled using SPheno or a tree-level calculation); the PDG entry, which specifies a list over all generations of the mass eigenstates; and the ElectricCharge field.

As opposed to FeynRules, where all parameters and particle masses become GAMBIT model parameters, the SARAH pathway attempts to optimise this list through various means. In the absence of a spectrum generator (e.g. SPheno, see below), almost all the parameters in ParameterDefinitions become model parameters.
Only those with explicit dependencies on other parameters are removed, i.e. those with the Dependence or DependenceSpheno fields. In addition, SARAH provides tree-level relations for all masses, via TreeMass[particle_name, EWSB], so even in the absence of a spectrum generator, none of the particle masses become explicit model parameters. For models with BSM states that mix together into mass eigenstates7, the tree-level masses are not used, and an error is thrown to inform the user of the need to use a spectrum generator.
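A sketch of a particles.m entry of the kind described above, for a hypothetical single-generation mass eigenstate chi; the names and Description string are illustrative assumptions, not taken from a shipped SARAH model:

```mathematica
ParticleDefinitions[EWSB] = {
  {chi, { Description    -> "DM fermion",
          PDG            -> {52},        (* one entry per generation *)
          Mass           -> LesHouches,  (* filled by the Spectrum object *)
          ElectricCharge -> 0,
          OutputName     -> "~chi" }}    (* tilde for micrOMEGAs *)
};
```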
If the user elects in their .gum file to generate any outputs from SARAH for specific backends, GUM requests that SARAH generate the respective code using the relevant SARAH commands: MakeCHep[] for CalcHEP and micrOMEGAs, MakeUFO[] for MadGraph/Pythia, MakeSPheno[] for SPheno, and MakeVevacious[] for Vevacious.

GUM also registers the values of various flags needed to properly set up the interface to SPheno. These are "SupersymmetricModel", "OnlyLowEnergySPheno", "UseHiggs2LoopMSSM" and "SA`AddOneLoopDecay".

A worked example
To demonstrate the process of adding a new model to GAMBIT with GUM, in this section we provide a simple worked example. Here we use GUM to add a model to GAMBIT, perform a parameter scan, and plot the results with pippi [95]. This example is designed with ease of use in mind, and can be performed on a personal computer in a reasonable amount of time. For this reason we select a simplified DM model, implemented in FeynRules.
In this example, we consider constraints from the relic density of dark matter, gamma-ray indirect detection and traditional high-mass direct detection searches. It should be noted that this is an example, not a full global fit, so we do not use all of the information available to us: a real global fit of this model would consider nuisance parameters relevant to DM, as well as a full set of complementary likelihoods, such as those from other indirect DM searches, low-mass direct detection searches, and cosmology.
The FeynRules model file, .gum file, GAMBIT input file and pip file used in this example can be found within the Tutorial folder in GUM.

The model
The model is a simplified DM model, where the Standard Model is extended by a Majorana fermion χ acting as DM, and a scalar mediator Y with a Yukawa-type coupling to all SM fermions, in order to adhere to minimal flavour violation. The DM particle is kept stable by a Z_2 symmetry under which it is odd, χ → −χ, and all other particles are even. Both χ and Y are singlets under the SM gauge group.
Here, we assume that any mixing between Y and the SM Higgs is small and can be neglected. This model has been previously considered in e.g. [96,97], and is also one of the benchmark simplified models used in LHC searches [98][99][100]. The new terms in the model Lagrangian are

  L ⊃ (1/2) χ̄(i∂̸ − m_χ)χ + (1/2) ∂_μY ∂^μY − (1/2) m_Y² Y² − (g_χ/2) Y χ̄χ − (c_Y/√2) Σ_f y_f Y f̄f,

where the sum runs over the SM fermions f. Note that this theory is not SU(2)_L invariant. One possibility for a 'realistic' model involves Y-Higgs mixing, which justifies choosing the Yff couplings to be proportional to the SM Yukawas y_f.
The free parameters of the model are simply the dark-sector masses and couplings, {m_χ, m_Y, c_Y, g_χ}. In this example we follow the FeynRules pathway, working at tree level.

The .gum file
Firstly, we need to add the FeynRules model file to the GUM directory. The model is named 'MDMSM' (Majorana DM, scalar mediator). Starting in the GUM root directory, we first create the directory that the model will live in, and move the example file from the Tutorial folder into it:

  mkdir Models/MDMSM
  cp Tutorial/MDMSM.fr Models/MDMSM/

As we are working with FeynRules, the only backends that we are able to create output for are CalcHEP, micrOMEGAs and MadGraph/Pythia. For the sake of speed, in this tutorial we will not include any constraints from collider physics. This is also a reasonable approximation, as for the mass range that we consider here, the constraints from e.g. monojet, dijet and dilepton searches are subleading (see e.g. Ref. [96] and Appendix A). We therefore set pythia: false. The contents of the supplied .gum file are simple. Note the selection of the PDG code of the DM particle as 52, so that if we were to use Pythia, χ would be correctly identified as invisible.
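A sketch of .gum contents consistent with the description above; the exact file ships in the GUM Tutorial folder, and the node spellings below are assumptions based on the examples earlier in this paper:

```yaml
math:
  package: feynrules
  model: MDMSM
wimp_candidate: 52
output:
  calchep: true
  micromegas: true
  pythia: false
```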
We can run this from the GUM directory:

  ./gum -f Tutorial/MDMSM.gum

and GUM will automatically create all code needed to perform a fit using GAMBIT. On a laptop with an Intel Core i5 processor, GUM takes about a minute to run. All that remains now is to (re)compile the relevant backends and GAMBIT, and the new model will be fully implemented and ready to scan. GUM prints a set of suggested build commands to standard output to build the new backends and GAMBIT itself. Starting from the gum directory, these are

  cd ../build
  cmake ..
  make micromegas_MDMSM
  make calchep
  make -jn gambit

where n specifies the number of processes to use when building. Note that GUM does not adjust any CMake flags used in previous GAMBIT compilations, so the above commands assume that the user has already configured GAMBIT appropriately and built any relevant samplers before running GUM. A user wishing instead to configure and build GAMBIT from scratch after running GUM, in order to e.g. run the example scan of Sec. 4 using differential evolution sampling and MPI parallelisation, would need to do (again, starting from the gum directory)

  cd ../build
  cmake -D WITH_MPI=ON ..
  make diver
  cmake ..
  make micromegas_MDMSM
  make calchep
  make gamlike
  make ddcalc
  make -jn gambit

For more thorough CMake instructions, see the README in gum/Tutorial, and CMAKE_FLAGS.md in the GAMBIT root directory.

Phenomenology and Constraints
The constraints that we will consider for this model are entirely in the DM sector, as those from colliders are less severe: the relic density of DM, direct detection, and gamma-ray observations of dwarf spheroidal galaxies (dSphs) [103]. Tree-level cross-sections are computed by CalcHEP, γ-ray yields are consequently computed via DarkSUSY [104,105], and the constraints are applied by gamLike [4].
As the relic density constraint is imposed only as an upper bound, we rescale all DM observables by the DM fraction f = Ω_χ/Ω_DM.
Here we will use the GAMBIT input file gum/ Tutorial/MDMSM_Tute.yaml. Although it does contain a little more than the GAMBIT input file automatically generated by GUM (yaml_files/MDMSM_example.yaml), it is still fairly standard, so we will cover only the important sections here. For an overview of YAML files in GAMBIT, we refer the reader to Sec. 6 of the GAMBIT manual [1].
Firstly, the parameters section indicates all models required for this scan: not just the MDMSM parameters, but also SM parameters, nuclear matrix elements and DM halo parameters. The parameter range of interest for the MDMSM model comprises masses ranging from 45 GeV to 10 TeV, and dimensionless couplings ranging from 10^-4 to 4π; we scan each of these four parameters logarithmically. The scanner section selects the differential evolution sampler Diver [3], with a fairly loose stopping tolerance of 10^-3 and a working population of 10,000 points. To perform the scan, we copy the GAMBIT input file to the yaml_files folder within the GAMBIT root directory. This is a necessary step, as we need to !import the appropriate Standard Model YAML file from the relative path include (i.e. the folder yaml_files/include in the GAMBIT root directory). From the GAMBIT root directory, we run

  cp gum/Tutorial/MDMSM_Tute.yaml yaml_files/

and then launch GAMBIT with n processes:

  mpirun -n n gambit -f yaml_files/MDMSM_Tute.yaml

The above scan should converge in a reasonable time on a modern personal computer; it took 11 hr using 4 cores on a laptop with an i5-6200U CPU @ 2.30 GHz, sampling 292k points in total. The results of this scan are shown below.
Note that whilst the scan has converged statistically, the convergence criterion that we set in the input file above is not particularly stringent, so many of the contours presented in this section are not sampled well enough to be clearly defined. A serious production scan would typically be run for longer, with more effort made to map the likelihood contours finely. Nonetheless, the samples generated are more than sufficient to extract meaningful physics.
Once the scan has finished, we can plot the results using pippi [95]. As Diver aims to find the maximum-likelihood point, we perform a profile likelihood analysis with pippi. Assuming that pippi is in $PATH, do

  cd gum/Tutorial
  pippi MDMSM.pip

which produces plots of the four model parameters against one another, as well as against a raft of observables such as the relic abundance and the spin-independent cross-section (rescaled by f).

Results
The upper panel of Fig. 1 shows the profile likelihood in the plane of the DM mass m_χ against the mediator mass m_Y. The relic density requirement maps out the structure in this plane. There are two sets of solutions: first, where the DM is heavier than the mediator, m_χ > m_Y (bordered by the red dashed line in Fig. 1); and second, where DM annihilates on resonance, 2m_χ ≈ m_Y (centred on the purple dashed line in Fig. 1).
When m_χ < m_Y and the YY annihilation channel is not kinematically accessible, annihilation predominantly occurs via an s-channel Y to bb̄ or tt̄, depending on the DM mass. In this case, the only way to efficiently deplete DM in the early Universe is for annihilation to be on resonance, m_χ ≈ m_Y/2. Away from the resonance, when the YY channel is closed, even couplings of 4π are not large enough to produce a sufficiently high annihilation cross-section to deplete the thermal population of χ below the observed value.
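The resonance condition can be illustrated with a generic relativistic Breit-Wigner factor (a sketch, not the model's full matrix element): the s-channel Y propagator squared peaks strongly at s = m_Y², i.e. at 2m_χ ≈ m_Y for slow-moving DM, which is why annihilation there remains efficient even for modest couplings.

```cpp
#include <cassert>
#include <cmath>

// Generic s-channel propagator factor
//   |D(s)|^2 = 1 / ((s - mY^2)^2 + mY^2 * GammaY^2).
// On resonance (s = mY^2) this is 1/(mY^2 GammaY^2); away from it the
// (s - mY^2)^2 term dominates and the annihilation rate collapses.
double propagatorSq(double s, double mY, double gammaY)
{
  double d = s - mY * mY;
  return 1.0 / (d * d + mY * mY * gammaY * gammaY);
}
```

For example, with m_Y = 100 GeV and Γ_Y = 1 GeV, the factor on resonance exceeds its value at 2m_χ = 120 GeV by roughly three orders of magnitude, matching the narrow relic-density strip along 2m_χ = m_Y in Fig. 1.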
When kinematically allowed, χχ → Y → tt̄ is the dominant process responsible for depleting the DM abundance in the early Universe. When m_χ < m_t and m_χ < m_Y, the only way to produce the correct relic abundance is exactly on resonance, 2m_χ = m_Y, annihilating mostly to bb̄. The effect of the t threshold can clearly be seen in Fig. 1: as the χχ → tt̄ channel opens up, the contours do not trace the resonance 2m_χ = m_Y quite as tightly. This is because the cross-section to tt̄ is significantly larger, as the mediator couplings to SM fermions are proportional to their Yukawas. This means that near the resonance region it is far easier to satisfy the DM abundance constraint, leading to the spread about the purple line for m_χ > m_t in Fig. 1.
When the DM candidate is heavier than the mediator, the process χχ → YY is kinematically accessible, and proceeds via t-channel χ exchange. When this channel is open, the correct relic abundance, Ω_χh², can be acquired independently of c_Y by adjusting m_χ and g_χ. This can be seen in Fig. 2.
In this regime, the relic abundance constrains the DM coupling g_χ, as seen in the central panel of Fig. 2, with annihilation cross-section σv ∝ g_χ²c_Y²/m_χ². We plot m_χ against g_χ in Fig. 3; the lower bound is set by the resonance region, and is (unsurprisingly) poorly sampled at low values of m_χ.
To show the impact of allowing DM to be underabundant, we perform a separate scan where we instead employ a Gaussian likelihood for the relic abundance. This can be achieved by using the following entry in the Rules section of the GAMBIT input file:

  # Choose to implement the relic density likelihood
  # as a detection, not an upper bound
  - capability: lnL_oh2
    function: lnL_oh2_Simple

We show the m_χ-m_Y plane for this scan in the lower panel of Fig. 1. For a given point in the m_χ-m_Y plane, the couplings g_χ and c_Y must be correctly tuned to fit the relic density requirement: clearly, the scanner struggles to find such points compared to when DM can be underabundant. Notably, the sampler struggles to find the very fine-tuned points on resonance when tt̄ is not kinematically accessible.
In the lower panel of Fig. 3 we show the m_χ-g_χ plane when requiring that χ fits the observed relic abundance, coloured by m_Y/m_χ. There is a well-defined red area scattered along a straight line with m_Y/m_χ < 1, corresponding to efficient annihilation to YY, for which σv ∝ g_χ⁴/m_χ². This is reflected in the lower panel of Fig. 1: almost all of the valid samples for m_χ < m_t are in the regime where m_χ > m_Y, i.e. above the red dashed line. The slope of the line followed by the red area in the lower panel of Fig. 3 is exactly half that of the lower bound on g_χ, because the latter is instead set by resonant annihilation to fermions, which involves one less power of g_χ in the corresponding matrix element, i.e. σv ∝ g_χ²c_Y²/m_χ².

Direct detection processes proceed via t-channel Y exchange. The spin-independent cross-section [e.g. 98] scales as σ_SI ∝ μ_χN²(g_χc_Y/m_Y²)², where μ_χN is the DM-nucleon reduced mass (N = n, p), with a nucleon form factor built from the light-quark form factors [4,107]. Thus for a given DM mass m_χ, direct detection constrains the parameter combination g_χc_Y/m_Y², rescaled by the DM fraction f ≡ Ω_χ/Ω_DM. Fig. 4 shows the spin-independent cross-section on protons as a function of the DM mass. As it is possible for χ to be underabundant at all masses, it is easy to evade direct detection limits by simply tuning the couplings. We also plot the projection from LZ [108], which shows the significant effect that future direct detection experiments can have on the parameter space of this model, including the ability to probe the current best-fit point.
The best-fit region in Fig. 4 lies just below the XENON1T limit: this is due to a small excess (less than 2σ) in the data, which can be explained by this model. This excess is discussed in more detail in a GAMBIT study of scalar singlet DM [14].
[Fig. 5 caption: White contours with coloured shading correspond to the thermally-averaged cross-section at dark matter freeze-out. The contours show the 1 and 2σ confidence regions, and the white star the best-fit point. For comparison, grey contours show the 1 and 2σ confidence regions for the annihilation cross-section in the v → 0 limit, rescaled by the square of the fraction of the predicted relic abundance, f ≡ Ω_χ/Ω_DM; this is the effective annihilation cross-section that enters indirect detection rates in the late Universe.]

Note that for all annihilation channels, the annihilation cross-section is proportional to the square of the relative velocity of the DM particles in the direction perpendicular to the momentum transfer, i.e. σv ∝ v_⊥². This means that annihilation is velocity-suppressed, especially in the late Universe where v_⊥ ∼ 0. As annihilation processes are also suppressed by the square of the DM fraction f, indirect detection signals do not contribute significantly to the likelihood function. The velocity dependence of the cross-section is fully taken into account by micrOMEGAs in computing the relic density, however, so the thermally-averaged value at freeze-out is much larger than the late-time value. We show the thermally-averaged value at freeze-out in Fig. 5, which, as expected, overlaps the canonical thermal value σv = 3 × 10⁻²⁶ cm³ s⁻¹. For comparison, in grey contours we also plot f²(σv)_{v→0}, the effective cross-section for indirect detection. In this case, all parameter combinations give cross-sections several orders of magnitude below the canonical thermal value, heavily suppressing all possible indirect detection signals.
If we wish to remove the model from GAMBIT, we simply run the command

  ./gum -r mug_files/MDMSM.mug

and the GAMBIT source reverts to its state prior to the addition of the model.

Summary
The standard chain by which a theorist tests a BSM theory against data has been greatly optimised, and largely automated, in recent years, with the development of Lagrangian-level tools such as FeynRules and SARAH. On the phenomenological side, GAMBIT has been designed as a modular global-fitting suite for extensive studies of BSM physics. GUM adds the final major missing piece to the automation procedure. By providing an interface between GAMBIT, SARAH and FeynRules, it makes global fits directly from Lagrangians possible for the first time. This will make the process of performing statistically rigorous and comprehensive phenomenological studies far easier than in the past.
We have shown that GUM produces sensible results for a simplified model, in good agreement with previous results found in the literature. This is based on a scan that can be performed on a personal computer in a reasonable time frame.
The modular nature of GUM means that extension is straightforward. Since the first version of this paper appeared, GUM has already been extended in GAMBIT 2.1 to include a four-fermion EFT plugin connecting FeynRules and CalcHEP [16]. Other planned extensions include computation of modifications to SM precision observables and decays, multi-component and co-annihilating dark matter models, and an interface to the GAMBIT flavour physics module FlavBit via FlavorKit. We also plan to add new interfaces to public codes not included in this release, including to spectrum generators and decay calculators associated with FlexibleSUSY, and to the dark matter package MadDM. We will also update supported backends to the latest versions, in particular micrOMEGAs 5 [89].

It has been demonstrated that monojet searches are not necessarily the most constraining searches for the MDMSM [96,111]. In fact, given the Yukawa-proportional couplings, tree-level production of top-quark pairs together with the mediator Y, despite the large final-state masses, should be the most sensitive final state at the 13 TeV LHC. To investigate the constraints from this process, we select a 139 fb⁻¹ ATLAS search for final states with two leptons, jets and missing momentum, which targets this specific final state [112].
The computational requirement of a GAMBIT scan increases significantly when full collider simulations with ColliderBit are included. For this example scan we therefore only vary the mass m Y of the mediator particle. The simulations are performed using the GUM-generated Pythia interface, as described in sections 2.2.4 and 2.3.5. For each parameter point in the scan we generate 12 million Pythia events. The events are then passed through fast detector simulation in ColliderBit and selection cuts emulating the ATLAS search. This search targets events with two opposite-charge leptons, jets and missing transverse momentum. No large excesses are observed in this search; across all signal regions the observed event counts agree with the Standard Model expectations to around the 2σ level.
The ATLAS analysis defines both exclusive and inclusive signal regions, based on the 'stransverse mass' kinematic variable and the signal lepton flavours. For our scan we consider the exclusive signal regions. There is no publicly available full likelihood function for this analysis, nor any data on correlations, so we take the conservative approach of using only the likelihood contribution from the single signal region with the best expected sensitivity at each point in our scan. Figure 6 shows the resulting ATLAS likelihood function in our scan of the mediator mass m_Y, with the other model parameters set to m_χ = 1 GeV and c_Y = g_χ = 1. The red dashed line shows Δ ln L = −2, corresponding to the Δ ln L limit for the approximate 2σ confidence interval on m_Y. For c_Y = g_χ = 1 and m_χ = 1 GeV, mediator masses below ∼250 GeV are disfavoured at the 2σ level, in agreement with the constraint that the ATLAS analysis obtains on the mediator mass in a similar BSM scenario. Compared to the [45, 10⁴] GeV scan range for m_Y in the MDMSM scan in Sec. 4, and further taking into account that the four-dimensional scan in Sec. 4 allowed couplings as small as 10⁻⁴, we see that the collider likelihood would have had a minimal impact on the profile likelihood results of that scan.
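The "best expected signal region" prescription can be sketched as follows (an illustrative stand-in, not the ColliderBit implementation): for each signal region, compute the expected log-likelihood ratio on an Asimov dataset (observed counts set to background), pick the region where that expected ratio is most negative, and return only that region's observed log-likelihood ratio.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct SignalRegion { double n, b, s; };  // observed, background, signal counts

// Poisson log-likelihood up to an n-dependent constant (the ln n! term
// cancels in the signal-plus-background vs background-only difference).
double lnPoisson(double n, double mu) { return n * std::log(mu) - mu; }

// Return the observed delta-lnL of the signal region with the best
// (most negative) expected delta-lnL, judged with the Asimov choice n -> b.
double bestExpectedSR_lnL(const std::vector<SignalRegion>& srs)
{
  double best_expected = 0.0, chosen_observed = 0.0;
  bool first = true;
  for (const auto& sr : srs) {
    double expected = lnPoisson(sr.b, sr.s + sr.b) - lnPoisson(sr.b, sr.b);
    double observed = lnPoisson(sr.n, sr.s + sr.b) - lnPoisson(sr.n, sr.b);
    if (first || expected < best_expected) {
      best_expected = expected;
      chosen_observed = observed;
      first = false;
    }
  }
  return chosen_observed;
}
```

This conservative choice avoids double-counting correlated signal regions at the cost of discarding some information, which is why a published full likelihood (or correlation data) would be preferable when available.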

B.1: CalcHEP
CalcHEP provides squared matrix elements for a given process at tree level. The GAMBIT interface to CalcHEP contains two simple convenience functions, CH_Decay_Width and CH_Sigma_V, which apply the correct kinematics to convert a matrix element into a 1 → 2 decay width, or a 2 → 2 DM annihilation cross-section.
The function CH_Decay_Width is used by DecayBit to add a new Entry to its DecayTable. To obtain the decay width, one simply passes the name of the model and the decaying particle as they are known internally in CalcHEP, along with a std::vector<std::string> containing the names of the decay products (also as known to CalcHEP). Note that at present only two-body final states are allowed by CalcHEP, but the interface generalises nearly automatically to higher-multiplicity final states.
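The kinematic conversion that CH_Decay_Width applies can be sketched as follows (our own illustration of standard 1 → 2 kinematics, not the GAMBIT source): Γ = |M|² p / (8π M²), with p the daughter momentum in the parent rest frame.

```cpp
#include <cassert>
#include <cmath>

// 1 -> 2 decay width from a squared matrix element |M|^2:
//   Gamma = |M|^2 * p / (8 * pi * M^2),
// where p = sqrt(lambda(M^2, m1^2, m2^2)) / (2M) is the momentum of either
// daughter in the parent rest frame (lambda is the Kallen function).
// For identical final-state particles an extra symmetry factor 1/2 applies.
double twoBodyWidth(double msq, double M, double m1, double m2)
{
  const double pi = 3.14159265358979323846;
  if (M < m1 + m2) return 0.0;  // kinematically closed channel
  double lam = (M * M - (m1 + m2) * (m1 + m2)) * (M * M - (m1 - m2) * (m1 - m2));
  double p = std::sqrt(lam) / (2.0 * M);
  return msq * p / (8.0 * pi * M * M);
}
```

For massless daughters p = M/2, so the width reduces to |M|²/(16πM).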
The function CH_Sigma_V returns the product σv lab for DM annihilation χ + χ → A + B. It does not support co-annihilations. This function is used by the DarkBit Process Catalogue. The arguments for CH_Sigma_V are identical to CH_Decay_Width, except that the in states must be a std::vector<std::string> containing the DM candidate and its conjugate. The function also requires the relative velocity in the centre-of-mass frame double v_rel (in units of c), and the DecayTable, to pass updated mediator widths to CalcHEP.
For matrix elements that are numerically unstable at zero relative velocity, we compute the cross-section at a reference velocity of v_lab = 10⁻⁶ (in units of c).

B.2: SARAH-SPheno
SPheno is a spectrum generator capable of providing one-loop mass spectra as well as decay rates at tree and loop level. GAMBIT has included a frontend interface to the release version of SPheno 3.3.8 since GAMBIT 1.0, and to 4.0.3 since GAMBIT 1.5. Details of the interface can be found in Appendix B of [8]. There are important differences between the frontend interfaces to the release version of SPheno and to the SARAH-generated version (which we refer to as SARAH-SPheno). We give details of these differences below.
SARAH generates the Fortran SPheno files to compute the spectrum and decays for a given model. These differ from the out-of-the-box SPheno, which only works with various versions of the MSSM. After generating these files with SARAH, GUM moves them to the main GAMBIT directory, to be combined with the downloaded version of SPheno at build time.
In order to improve the usability of SARAH-SPheno in GAMBIT, we have patched two variables into the Fortran code. The first, ErrorHandler_cptr, is a pointer to a void function that returns control to GAMBIT after a call to the SPheno subroutine TerminateProgram. This prevents GAMBIT from being terminated when SPheno fails: instead, GAMBIT raises an invalid_point exception and carries on. The second new variable is SilenceOutput, which provides a GAMBIT input option allowing the user to silence all output of SARAH-SPheno to stdout. This option defaults to false.
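The control-flow trick behind ErrorHandler_cptr can be sketched in pure C++ (an illustration of the mechanism with stand-in names; the real code is Fortran patched by GUM plus the GAMBIT frontend): the handler jumps back to a recovery point in the frontend rather than letting the process die, and the frontend then flags the parameter point as invalid.

```cpp
#include <cassert>
#include <csetjmp>

static std::jmp_buf gambit_recovery_point;

// What ErrorHandler_cptr would point to: never returns to its caller,
// but hands control back to the frontend via longjmp.
extern "C" void error_handler() { std::longjmp(gambit_recovery_point, 1); }

// Stand-in for SPheno's TerminateProgram: a fatal error mid-calculation.
void TerminateProgram() { error_handler(); }

// Frontend wrapper: returns false (so the caller can raise an
// invalid_point exception and carry on) instead of the whole process dying.
bool run_spheno_point(void (*calculation)())
{
  if (setjmp(gambit_recovery_point) != 0) return false;  // recovered from failure
  calculation();
  return true;
}
```

Note that longjmp bypasses C++ destructors, so this pattern is only safe when, as here, no non-trivial objects live between the setjmp and the failure.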

Rules:
  - capability: unimproved_new_model_spectrum
    function: get_new_model_spectrum_SPheno
    options:
      SilenceOutput: true # default: false

The interface to the spectrum computation from SARAH-SPheno remains fairly similar to that described for the release version of SPheno in Ref. [8]. Some variables and functions have changed names and library symbols, and the computations have been re-ordered slightly, but they otherwise remain unchanged.
The major change to the spectrum is the computation of individual mass uncertainties; the previous SPheno interface merely applied a universal uncertainty to all masses. These uncertainties are computed by SPheno for all spectrum masses and added to the GAMBIT spectrum object if requested, using the option GetMassUncertainty. This option defaults to false.

Rules:
  - capability: unimproved_new_model_spectrum
    function: get_new_model_spectrum_SPheno
    options:
      GetMassUncertainty: true # default: false

Setting GetMassUncertainty: true causes the mass uncertainties to be added to the spectrum in the SLHA block DMASS.
The most significant difference between the frontend interface to SARAH-SPheno and that to the release version of SPheno is that the former includes the computation of decays. The backend convenience function run_SPheno_decays provides a new capability, SARAHSPheno_new_model_decays, which maps the decay widths computed by SPheno into a GAMBIT DecayTable. Internally, this backend function fills the SARAH-SPheno internal variables with the spectrum details and computes all the branching fractions using the SARAH-SPheno function CalculateBR. Note that the branching fractions for charged processes are double-counted in SPheno, so are rescaled by a factor of 1/2 within the frontend interface. Various GAMBIT input options are added for the computation of the decays, the most notable of which are OneLoopDecays, which switches on the alternative computation of full one-loop decays, and MinWidth, which specifies the minimum width in GeV for a decay to be added to the table. These can be given as options of the decay_rates capability.

Lastly, the new SARAH-SPheno interface provides information about Higgs couplings via the backend convenience function get_HiggsCouplingsTable, which provides the capability SARAHSPheno_new_model_HiggsCouplingsTable. This function simply fills a GAMBIT HiggsCouplingsTable object from various internal variables in SARAH-SPheno.
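Following the Rules pattern used elsewhere in this appendix, a sketch of how these decay options might be set (the MinWidth value here is an arbitrary illustration; take the exact option list from the GUM-generated YAML):

```yaml
Rules:
  - capability: decay_rates
    options:
      OneLoopDecays: true   # switch on full one-loop decays
      MinWidth: 1.0e-30     # hypothetical minimum width in GeV to record a decay
```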

B.3: Vevacious (C++)
Vevacious computes the stability of the EWSB vacuum in BSM theories in which deeper vacua exist. It does so by first finding all minima of the tree-level potential, then calculating one-loop and thermal corrections, and finally computing the likelihood that our vacuum has not decayed before the present epoch. Vevacious has recently been rewritten in C++, without dependence on external tools for the tunnelling calculation; this is the version that GUM uses. The GAMBIT interface for Vevacious (C++) is described in detail in [85], so we only summarise it briefly here.
Out of the box, Vevacious simply requires an SLHA2 file as input. To avoid file operations, GAMBIT instead passes the spectrum object to the central Vevacious object via the capability pass_spectrum_to_vevacious. It is for this capability in SpecBit that GUM writes a new module function for each model.
Besides the spectrum, Vevacious requires other information to initialise prior to running the main routines. The various operational options for Vevacious are set via the module function initialize_vevacious within SpecBit. These are described in Table A1. The specific minima to which Vevacious must compute the tunnelling probability are extracted from the GAMBIT input file by the panic_vacua capability, and the specific tunnelling strategy by tunnelling_strategy. The capability compare_panic_vacua checks whether the minima requested are identical.
The main Vevacious computations are performed using the method RunPoint from the VevaciousPlusPlus class, native to Vevacious. GAMBIT has access to this class dynamically via the class structure generated by BOSS and calls this method in the capability check_vacuum_stability_vevacious.
The likelihood of tunnelling to any minimum is provided by the capability VS_likelihood. The selection of the minimum to which to compute the transition, and of the tunnelling strategy, is done using its sub_capabilities, as described in section 2.3.2. Lastly, the details of the tunnelling computations by Vevacious can be extracted as a map using the capability VS_results.

[Table A1 excerpt - PathResolution: the number of equally-spaced nodes along candidate paths in field space that are moved when finding the optimal tunnelling path. Default: 1000.]