1 Introduction

The Standard Model (SM) has been exceptionally successful in explaining the fundamental particle physics that underpins the workings of the Universe. Despite these successes, there are still both experimental and theoretical considerations in tension with the SM: the nature of dark matter (DM), the generation of neutrino masses, the hierarchy problem, cosmological matter-antimatter asymmetry, various experimental anomalies (such as flavour or \(g-2\)), and the stability of the electroweak vacuum. Clearly, the search for physics beyond the SM (BSM) is a multidisciplinary endeavour.

Performing global fits of BSM theories ensures complementarity between experimental and theoretical efforts. This entails consistently predicting multiple observables, comparing them rigorously with experimental results, and using sophisticated statistical sampling techniques in order to obtain meaningful, quantitative inference on theories and their parameters.

The GAMBIT software framework [1, 2] aims to address this need, providing a flexible, efficient and scalable code for BSM phenomenology. GAMBIT provides dedicated modules for statistical sampling [3], DM [4], collider [5], flavour [6] and neutrino [7] physics, as well as spectrum generation, decay and precision physics [8] and cosmology [9].

The first version of GAMBIT shipped with various parameterisations of the minimal supersymmetric Standard Model (MSSM) at the weak [10] and grand unified theory (GUT) scales [11], a scalar singlet DM extension of the SM [12] and an effective field theory of flavour interactions [6, 13]. Since then, studies of vacuum stability for scalar singlet DM [14], generic Higgs portal DM models [15], DM effective field theory models [16, 17], axions and axion-like particles [18], additional MSSM models [19], both left- [20] and right-handed neutrinos [7], and inflationary and other cosmological models [9] have expanded the GAMBIT model database substantially.

One of the most notable features of GAMBIT is that it is open source. This allows users to study new models and add their own experimental and theoretical constraints to GAMBIT in a modular manner. Although adding a new model to GAMBIT is largely formulaic, it is not a completely trivial task, as the user requires some understanding of the underlying software design. To this end, we present the GAMBIT Universal Model Machine (GUM): a tool for interfacing symbolic Lagrangian-level tools to GAMBIT, to further automate the procedure of comparing theory to experiment [21].

Not only does automation increase efficiency and effectiveness in BSM physics, it also reduces the scope for human error, which is inevitably introduced when coding complicated expressions by hand. The creation of Lagrangian-level tools has been a very important step in the automation of BSM physics. The original motivation for creating Lagrangian-level tools was to automatically write outputs that could be used for generating matrix element functions, which could in turn be used by Monte Carlo event generators to simulate new physics at particle colliders. The first tool to achieve this was LanHEP [22,23,24,25], originally created to compute vertices for CompHEP [26,27,28] from a simple Lagrangian input. With the release of FeynRules [29,30,31,32], this quickly expanded to generating output for other matrix element codes, such as MadGraph/MadEvent [33,34,35,36,37], CalcHEP [38, 39], FeynArts [40,41,42,43], SHERPA [44] and WHIZARD/O’Mega [45, 46]. SARAH [47,48,49,50,51,52] was also developed around the same time, initially with a particular focus on supersymmetry, but soon expanding to a much larger range of models.

The success of FeynRules and SARAH in generating Feynman rules for use by matrix element generators led to the creation of a new filetype, the ‘Universal FeynRules Output’ (UFO) [53]. These UFO files encode information about the particles, the parameters and interaction vertices for a given model. They can be generated by both FeynRules and SARAH, and handled by a range of matrix element generators such as MadGraph, GoSam [54, 55] and Herwig++ [56, 57].

Table 1 GAMBIT backends with GUM support and Lagrangian-level tools used to generate them. Apart from the external packages listed, GUM also produces GAMBIT Core and physics module code tailored to the model and observables of interest

As the search for new physics spans more than just collider physics, it has been necessary for Lagrangian-level tools to generate output for tools in other areas of phenomenology. The UFO-compatible package MadDM [58,59,60] has been built on top of MadGraph, for computing DM relic densities and direct and indirect detection signals. From SARAH, inputs can now also be generated for DM phenomenology with micrOMEGAs [61,62,63,64,65,66,67], spectrum generation with SPheno [68, 69] and FlexibleSUSY [70,71,72], flavour physics observables with SPheno and FlavorKit [73], and calculations of the stability of the electroweak symmetry breaking (EWSB) vacuum with Vevacious [74].

Although FeynRules and SARAH were both created to solve essentially the same problem, they serve different purposes. FeynRules is concerned with computing Feynman rules for any given Lagrangian, including effective ones, and performing physics at tree level. SARAH on the other hand places far more emphasis on renormalisable theories. As a result, any UV-complete model can be implemented in both FeynRules and SARAH, and any output generated by FeynRules for such models can also be created by SARAH. However, SARAH is also able to compute renormalisation group equations (RGEs) at 2-loop order and particle self-energies at 1-loop order, allowing its ‘downstream beneficiaries’ SPheno and FlexibleSUSY to generate corrected mass spectra at the 1-loop level.

Although the outputs of SARAH are more sophisticated than those of FeynRules, it also has limitations. Unlike in FeynRules, it is not generally possible to define non-renormalisable theories or higher-dimensional effective theories in SARAH. We therefore provide interfaces to both FeynRules and SARAH to allow the user to incorporate a vast range of theories into GAMBIT, from effective field theories (EFTs) via FeynRules to complex UV-complete theories in SARAH. We stress that if a model can be implemented in SARAH, then the user should use SARAH over FeynRules – both to use GAMBIT to its full potential, and to perform more detailed physics studies. The basic outputs available from GUM in each case are summarised in Table 1.

This manual is organised as follows: in Sect. 2, we describe the code structure and outputs of GUM. In Sect. 3 we give usage details, including installation, the GUM file, and particulars of FeynRules and SARAH model files. In Sect. 4 we provide a worked example, where we use GUM to add a simplified DM model to GAMBIT, and perform a quick statistical fit to DM observables. Finally, in Sect. 5, we discuss future extensions of GUM and summarise. We include details of the new GAMBIT interfaces to CalcHEP, Vevacious and SARAH-SPheno (the auto-generated version of SPheno created using SARAH) in the Appendix.

GUM is open source and part of the GAMBIT 2.0 release, available from gambit.hepforge.org under the terms of the standard 3-clause BSD license.

2 Code design

GAMBIT consists of a set of Core software components, a sampling module ScannerBit [3], and a series of physics modules [4,5,6,7,8,9]. Each physics module is in charge of a domain-specific subset of GAMBIT's physical calculations. GUM generates various snippets of code that it then adds to parts of the GAMBIT Core, as well as to some of the physics modules, enabling GAMBIT to employ the capabilities of those modules with the new model.

Within the Core, GUM adds code for any new particles to the GAMBIT particle database, and code for the new model to the GAMBIT models database, informing GAMBIT of the parameters of the new model so that they can be varied in a fit. GUM also generates interfaces (frontends) to the external codes (backends) that it is able to generate. The backends supported by GUM in this manner are those listed as outputs in Table 1.

Within the physics modules, GUM writes new code for the SpecBit [8] module, responsible for spectrum generation within GAMBIT, DecayBit [8], responsible for calculating the decays of particles, DarkBit [4], responsible for DM observables, and ColliderBit [5], the module that simulates hard-scattering, hadronisation and showering of particles at colliders, and implements subsequent LHC analyses.

GUM is primarily written in Python, with the exception of the Mathematica interface, which is written in C++ and accessed via Boost.Python.

Table 2 Details of the new files that GUM writes or modifies in each part of GAMBIT, for the file entry . Some files are only written or edited when a model-specific version of a particular backend is requested in the file. This is achieved by including the corresponding option , e.g. , etc. Entries in the rightmost column indicate where this is the case, i.e. for rows with an entry present in the rightmost column, the file listed in that row will only be modified/generated if the backend(s) named in the rightmost column are requested in the file. Where no such entries exist in the right column, the addition or modification of the GAMBIT source is always performed, regardless of the contents of the file. The entry “ ” indicates that the output is only generated if the option of the same name is set in the file, as . Note that the entry “ ” can be used in place of if the DM candidate decays instead of annihilates. All filenames containing are newly created by GUM; all others are existing files that GUM amends

Initially, GUM parses a input file, using the contents to construct a singleton object. Details of the input format can be found in Sect. 3.3. GUM then performs some simple sanity and consistency checks, such as ensuring that if the user requests DM observables, they have also specified a DM candidate. GUM then opens an interface to either FeynRules or SARAH via the Wolfram Symbolic Transfer Protocol (WSTP), loads the FeynRules or SARAH model file that the user has requested into the Mathematica kernel, and performs some additional sanity checks using the inbuilt diagnostics of each package.
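As a simple illustration of the kind of sanity check described above, consider the following Python sketch. This is not GUM's actual code: the option keys "output" and "wimp_candidate" are invented placeholders standing in for GUM's real input format.

```python
# Illustrative sketch only; GUM's real checks use its own option names.
# The keys "output" and "wimp_candidate" are hypothetical stand-ins.
def check_dm_options(options):
    """Return a list of problems found in a parsed options dictionary."""
    problems = []
    wants_darkbit = "darkbit" in options.get("output", [])
    if wants_darkbit and "wimp_candidate" not in options:
        problems.append("DarkBit output requested but no DM candidate given")
    return problems
```

A check of this form fails fast, before any expensive Mathematica session is opened, which is why GUM performs it immediately after parsing.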

Once GUM is satisfied with the FeynRules or SARAH model file, it extracts all physical particles, masses and parameters (e.g. mixings and couplings). The minimal information required to define a new particle is its mass, spin, colour representation, PDG code, and electric charge (if non-self-conjugate). For a parameter to be extracted, it must have an associated LHA block in the FeynRules or SARAH model file, and an index within that block. Additionally, for FeynRules files, the interaction order used in UFO files must be set. For details of the syntax required for particle and parameter definitions, see Sect. 3.4.3 for FeynRules model files and Sect. 3.5.3 for SARAH model files.
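As an illustration, a minimal FeynRules definition carrying the information listed above might look as follows. The particle and parameter names are invented for this example; see Sect. 3.4.3 for the authoritative syntax.

```mathematica
(* Hypothetical example: a real scalar singlet "ss" and one external
   parameter with an LHA block assignment. All names are illustrative. *)
M$ClassesDescription = {
  S[4] == {
    ClassName     -> ss,
    SelfConjugate -> True,
    Mass          -> {MSS, 100},
    Width         -> {WSS, 0},
    PDG           -> 9900001
  }
};

M$Parameters = {
  lamS == {
    ParameterType    -> External,
    BlockName        -> SSPARS,   (* associated LHA block *)
    OrderBlock       -> 1,        (* index within that block *)
    Value            -> 0.1,
    InteractionOrder -> {QED, 2}  (* required for UFO output *)
  }
};
```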

After extracting particle and parameter information, GUM cross-checks that all particles in the new model exist in the GAMBIT particle database, and adds entries if they do not. GUM uses this same particle and parameter information to also write new entries in both the GAMBIT model database and the SpecBit module. All other calculations rely on a combination of new code within GAMBIT and backends. In the following sections we provide details of the new code generated by GUM in the GAMBIT model database (Sect. 2.1), within GAMBIT physics modules (Sect. 2.2), and in the form of new backends and their corresponding frontend interfaces in GAMBIT (Sect. 2.3).

Many of the GAMBIT code outputs are only generated if the user elects to generate the relevant new backend codes with GUM. Details of which backends must be selected for GUM to generate the various GAMBIT source files can be found in Table 2.

At the end of its run, GUM outputs to screen a set of suggested commands for reconfiguring and rebuilding GAMBIT and the new backends. It also emits an example GAMBIT input file for running a scan of the new model ( , where is the name of the new model).
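Such an input file follows the standard GAMBIT YAML layout. A heavily abridged, hypothetical sketch is shown below; the model name, parameter name and scanner choice are all placeholders, not what GUM necessarily emits.

```yaml
# Hypothetical sketch of a GUM-generated scan file; all names are placeholders.
Parameters:
  MyModel:
    lamS:
      range: [1.0e-4, 1.0]
      prior_type: log

ObsLikes:
  - capability: lnL_oh2        # relic density likelihood
    purpose: LogLike

Scanner:
  use_scanner: diver           # any ScannerBit sampler can be chosen here
```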

2.1 The GAMBIT model database

For every new model requested, GUM adds a new entry to GAMBIT's hierarchical model database. GUM requires that no model of the same name already exists in the hierarchy, and that there are no entries for it in Models, SpecBit or DarkBit; if there are, it throws an error. If the requested model is indeed new, GUM creates a new model file in the Models directory (see Table 2), with the parameters extracted from FeynRules or SARAH.

In addition to the model file, GUM creates a list of expected contents for the model’s particle spectrum object in . This includes not just pole masses of BSM particles, but also the parameters of the model itself, mixing matrices and various SM parameters. GUM also writes a corresponding simple container for the spectrum that defines functions for accessing the spectrum contents and exposes them to the GAMBIT   class.

2.2 Modules

2.2.1 SpecBit

SpecBit includes structures for storing and passing around the so-called mass spectrum, i.e. the pole masses, mixings and Lagrangian parameters of the model. If SPheno is used to obtain the spectrum, the \(R_\xi \)-gauge is fixed to \(\xi =1\) and the Lagrangian parameters are given in the \({\overline{MS}}\) scheme for non-supersymmetric models and the \({\overline{DR}}^\prime \) scheme for supersymmetric models. The scale at which these parameters are given depends on the spectrum generator, and is stored alongside them in the spectrum object held by SpecBit.

Following the structure of the simple spectrum container, GUM writes module functions in SpecBit that allow the construction of an object of the class. The capability and its module functions are declared in the header file and defined in . The spectrum is either defined directly in terms of phenomenological model parameters, or generated from the Lagrangian parameters using SPheno.

By default, in the absence of a spectrum generator, GUM writes a simple module function in SpecBit, , that fills a object with SM values and the input parameters of the model. When generating GAMBIT code from SARAH, the pole masses of BSM particles are computed using the tree-level relations provided by SARAH. These tree-level masses are only used for very simple models, such as those without additional Higgs states; more complicated models involve non-trivial mixings and modified EWSB conditions, in which case a spectrum generator (e.g. SPheno) should be used. When producing GAMBIT output from FeynRules, however, there are no such relations available, so the particle masses are themselves model parameters and the object is filled with them.

If the SPheno output is requested from SARAH for a model, GUM writes a module function, , with the backend requirements necessary to generate the full spectrum, with all particle masses, mixing matrices, etc. Hence, for improved precision spectra, it is recommended that the user implement their model using SARAH, and request the spectrum to be provided by SPheno.

If Vevacious output is requested, for each new BSM model GUM writes new model-specific code in the SpecBit vacuum stability file, , and adds appropriate entries to the corresponding rollcall header. GUM provides two new module functions to interact with Vevacious. Firstly, with capability , which interfaces the object to the Vevacious object. Secondly, , which directs GAMBIT to the location of the input Vevacious files generated by SARAH.

2.2.2 DecayBit

Whenever decay information is requested for a new model, GUM amends the header and source files to add the decays of the particles in the model. The information for the decays can be provided separately by two backends in the GUM pipeline: CalcHEP and SPheno.

CalcHEP generates tree-level decays for each new BSM particle, plus new contributions to any existing particles in DecayBit such as the SM Higgs and the top quark. GUM adds these to DecayBit by adding the new decay channels wherever possible to any existing provided by a module function with capability . If no such function exists, it instead creates a new module function , with this capability-type signature, where comes from the SARAH/FeynRules model file. GUM then modifies the module function to add the decays of any particles for which it has written new module functions. GUM presently only supports the generation of \(1\rightarrow 2\) decays, as support for 3-body decays within DecayBit is currently limited to fully integrated partial widths; consistent inclusion of integrated partial widths provided by SPheno and differential decay widths provided by CalcHEP into DecayBit will require further development of the infrastructure, and a careful treatment of integration limits. Note also that GUM does not currently write any BSM contribution for W and Z boson decay via the CalcHEP interface, but this is planned for future releases.

SPheno by default computes leading order decays (tree-level or one-loop) for all BSM and SM particles, and adds universal higher-order corrections for specific decays, which are very important for Higgs decays. In addition it provides an alternative computation of full one-loop decays for all particles. The choice of method is left to the user via the SPheno option . As SPheno provides decay widths for all particles in the spectrum, GUM creates a new module function , which returns a filled with all decay information computed by SPheno.

The default behaviour of GUM is to ensure that it always generates decay code of some sort when needed. This ensures that a complete for the new model can be provided within GAMBIT for dependency resolution, by providing the capability in DecayBit. A viable GAMBIT is required for the functioning of many external codes such as Pythia and micrOMEGAs: if any new particle is a mediator in a process of interest, then its width is needed. To this end, GUM activates CalcHEP output automatically if decays are needed by other outputs that have been activated in the file, but neither CalcHEP nor SPheno output has been explicitly requested.

2.2.3 DarkBit

If the user specifies a Weakly-Interacting Massive Particle (WIMP) DM candidate for a model, GUM writes the relevant code in DarkBit. Each individual model tree in DarkBit has its own source file, so GUM generates a new source file , and amends the DarkBit rollcall header accordingly. At a minimum, GUM includes a new module function in this file. It then adds the remainder of the source code according to which backends the user selects to output code for in their file; currently available options are CalcHEP and micrOMEGAs.

If the user requests CalcHEP output, then a new module function providing the Process Catalogue is written. The Process Catalogue houses all information about annihilation and decay of DM, and decays of all other particles in the spectrum. All computations of indirect detection and relic density likelihoods in DarkBit begin with the Process Catalogue. For details, see Sec. 6.3 of [4]. All processes that GUM adds to the Process Catalogue are \(2\rightarrow 2\) processes computed at tree level by CalcHEP.

The Process Catalogue is used to compute the relic abundance of annihilating DM via the DarkBit interface to the Boltzmann solver in DarkSUSY. The Process Catalogue interface does not currently fully support co-annihilations nor 3-body final states, so GUM does not generate these processes. Such functionality is planned for future releases of DarkBit, and will be supported by GUM at that time. Note however that GUM does support the full generation of micrOMEGAs versions including these effects via the CalcHEP-micrOMEGAs interface. If co-annihilations or three-body final states are expected to be important in a new physics model, the user should therefore use micrOMEGAs to compute the relic abundance from DarkBit, in preference to the DarkSUSY Boltzmann solver.

For decaying DM candidates, the user would need to implement their own relic density calculation, as appropriate for the specific model in question. Although DarkBit can calculate spectral yields from DM decay, at present the likelihoods for indirect detection in DarkBit do not support decaying dark matter (in large part because neither gamLike nor nulike presently support decaying DM). Existing direct detection likelihoods can still be used out of the box without any relic density calculation, if the user assumes that the decaying DM candidate constitutes all of the DM.

When writing micrOMEGAs output, GUM adds new entries to the macro for existing micrOMEGAs functions in DarkBit. To use the micrOMEGAs relic density calculator, GUM adds an entry to the module function . For more information, see Sec. 4.2 of Ref. [4]. GUM also provides an interface to the module function , which returns a object containing the basic effective spin-independent and spin-dependent couplings to protons and neutrons, \(G_{\mathrm{SI}}^{\mathrm{p}}\), \(G_{\mathrm{SI}}^{\mathrm{n}}\), \(G_{\mathrm{SD}}^{\mathrm{p}}\) and \(G_{\mathrm{SD}}^{\mathrm{n}}\). This object can be fed directly to DDCalc for computing likelihoods from direct detection experiments. For more information, see Sec. 5 of Ref. [4].

For more complicated models where the standard spin-independent and spin-dependent cross-sections are not sufficient, micrOMEGAs is not able to compute the relevant couplings. In this case, the user should perform a more rigorous calculation of WIMP-nucleon (or WIMP-nucleus) couplings by alternative means. This is required when, for example, scattering cross-sections depend on the momentum transfer between incoming DM and SM particles, or on their relative velocity. The full set of 18 non-relativistic EFT (NREFT) DM-nucleon operators is defined in both DarkBit and DDCalc 2, and described in full in the Appendix of Ref. [15]. These operators fully take into account velocity and momentum transfer up to second order, and should typically be used in cases where the physics is not entirely captured by \(\sigma _{\mathrm{{SI}}}\) and \(\sigma _{\mathrm{{SD}}}\). Whilst the new 2.1 version of GAMBIT [16] allows for automated translation of high-scale relativistic effective DM-parton couplings to low-scale NREFT couplings via an interface to DirectDM [75, 76], there is no established automated matching procedure for connecting other high-scale models (as defined in FeynRules or SARAH model files) to the Wilson coefficients of the relativistic EFT. GUM therefore does not automatically write any module functions connecting the GAMBIT object to the NREFT interface of DDCalc; once such a procedure exists, GUM will be extended accordingly.

2.2.4 ColliderBit

In ColliderBit, simulations of hard-scattering collisions of particles are performed using the Monte Carlo event generator Pythia [77]. These events are passed through detector simulation and then fed into the GAMBIT analysis pipeline, which predicts the signal yields for the new model. These can then be compared to the results of experimental searches for new particles.

For a new BSM model, the matrix elements for new processes unique to the model must be inserted into Pythia in order for it to be able to draw Monte Carlo events from the differential cross-sections of the model. To achieve this, GUM communicates with MadGraph to generate matrix element code for Pythia, and writes the appropriate patch to insert it into Pythia. Alongside the matrix elements, this Pythia patch also inserts any newly defined LHA blocks.

When Pythia output is requested, GUM writes a series of new ColliderBit module functions in the source file , and a corresponding rollcall header file. The new functions give ColliderBit the ability to

  (i) collect the relevant and objects from other modules and provide them to the newly-generated copy of Pythia (capability , function ),

  (ii) initialise the new Pythia for Monte Carlo events (capability , function ), and

  (iii) call the new Pythia in order to generate a Monte Carlo event (capability , function ).

In addition to the likelihood from LHC new particle searches, ColliderBit also provides likelihoods associated with Higgs physics. This is done via interfaces to HiggsBounds [78,79,80,81] and HiggsSignals [82, 83], which use information on Higgs signal rates and masses from the Tevatron, LEP and the LHC. When a new model is added to GAMBIT with GUM, if SPheno output is requested from SARAH, GUM constructs a new used as input to the Higgs likelihoods, and amends the appropriate module function entries in and .

2.3 Backends

In the GAMBIT framework, backends are external tools that GAMBIT links to dynamically at runtime, in order to compute various physical observables and likelihoods. Of the full list of backends that can be interfaced with GAMBIT, a small selection can work with generic BSM models. In particular, GUM is able to produce output for SPheno [68, 69], Vevacious [74], CalcHEP [39], micrOMEGAs [67], Pythia [77], HiggsBounds [78,79,80,81] and HiggsSignals [82, 83]. We briefly describe here the specific outputs generated by GUM for each of these backends, along with any corresponding GUM and GAMBIT YAML input file entries needed to use them. Unless otherwise stated, GUM has been developed to work with the specific backend versions indicated in the headings below.

2.3.1 (SARAH-)SPheno 4.0.3

Required file entry:

SPheno is a spectrum generator capable of computing one-loop masses and tree-level decay branching fractions in a variety of BSM models. The model-specific code is generated by SARAH and combined with the out-of-the-box SPheno code into a single backend for GAMBIT. For each model GUM thus provides an interface between GAMBIT and the SPheno version via a new frontend, . Details about this interface, which differs significantly from the SPheno interface described in [8], can be found in Appendix B.2.

In order to generate SPheno output from SARAH, the user must provide a file in the same directory as the SARAH model files. For details of the contents of these files, we refer the reader to the SARAH manual [52].

Once the appropriate GAMBIT code has been generated by GUM, the new capability is added to SpecBit to compute the spectrum using SPheno. The new SPheno-generated object can then be obtained for a specific model in a run via the YAML entry in the GAMBIT input file:

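A sketch of such an entry, using the standard GAMBIT YAML syntax with placeholder names (GUM derives the real capability name from the model name):

```yaml
ObsLikes:
  - capability: MyModel_spectrum   # placeholder: GUM generates the real name
    purpose: Observable            # e.g. to record the spectrum contents
```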

As usual, if more than one module function can provide the same capability (as can happen, for example, if FlexibleSUSY is also present), the SPheno-specific one can be selected by a rule in the GAMBIT input file.

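Such a disambiguating rule might look as follows, again with placeholder names; the principle is simply to pin the capability to the SPheno-backed function:

```yaml
Rules:
  - capability: MyModel_spectrum            # placeholder capability name
    function: get_MyModel_spectrum_SPheno   # placeholder: selects SPheno over FlexibleSUSY
```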

In addition to the masses and mixings, SPheno can compute the tree-level decay branching fractions for all particles in the spectrum, including some radiative corrections to the decays of Higgs bosons. The GUM-generated code in DecayBit includes the new module function , which returns a GAMBIT   as computed by SPheno. This provides an alternative to the usual function in DecayBit. When run with default settings, GAMBIT will preferentially select whenever possible, rather than , as the former is more model-specific. If necessary, the user can also manually instruct GAMBIT to use this function by specifying a rule for the capability:

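In the standard rule syntax this would read roughly as follows; the function name is a placeholder for the GUM-generated one:

```yaml
Rules:
  - capability: decay_rates
    function: all_decays_from_SPheno   # placeholder for the GUM-generated name
```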

To build a newly-added SARAH-generated SPheno (“SARAH-SPheno ”) within GAMBIT, the appropriate commands to run in the GAMBIT build directory are

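As a sketch, assuming the target name appends the model name to the backend (the exact spelling is printed by GUM at the end of its run):

```
cmake ..
make SARAHSPheno_MyModel   # "MyModel" is a placeholder for the new model's name
```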

where the backend name is used to differentiate the corresponding code from the out-of-the-box version of SPheno.

2.3.2 Vevacious 1.0 (C++)

Required entries: .

Vevacious computes the stability of the scalar potential for generic extended scalar sectors [74]. A recent C++ version [84] has been interfaced to GAMBIT as a backend [85], and is the one used by GUM. The GAMBIT interface to Vevacious is explained in more detail in Appendix B.3.

To test the stability of the EWSB vacuum, Vevacious checks whether other deeper minima exist, in which case it computes the tunnelling probability to either the nearest (to the EWSB vacuum) or the deepest of such minima. The user can select whether to compute the tunnelling probability for either of them by using the options in the GAMBIT input entry for the capability as

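Purely as an illustration of the shape of such an entry (the capability and option names below are invented placeholders, not the real GAMBIT identifiers):

```yaml
Rules:
  - capability: check_vacuum_stability   # placeholder capability name
    options:
      which_minimum: both                # hypothetical option: nearest, global or both
```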

If both minima are chosen and they differ, Vevacious computes the probability of tunnelling to each. The capability in GAMBIT checks whether the nearest minimum is also the global minimum; if so, the transition to that minimum is only computed once, which reduces the computation time significantly. In many instances, such as for the MSSM, the tunnelling path optimisation within Vevacious can be very time-consuming. Computing the tunnelling probability to both minima is therefore not always recommended: it requires running Vevacious twice for parameter points where the minima differ, and can be prohibitively slow for very large scalar sectors.

For each minimum, Vevacious by default computes both the zero-temperature (quantum) tunnelling probability as well as the finite-temperature (thermal fluctuation) probability. To select or , it is possible to provide options to the as

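Again schematically, with invented option names standing in for the real ones:

```yaml
Rules:
  - capability: check_vacuum_stability   # placeholder capability name
    options:
      tunnelling: [quantum, thermal]     # hypothetical option names
```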

If both minima are selected, the same tunnelling strategy must be selected for both. GAMBIT computes the likelihood by combining the decay widths for all (independent) transitions as reported by Vevacious.

For each new model, SARAH generates the model files required by Vevacious and GUM moves them into the patch directory in GAMBIT. Note that if the user wishes to request Vevacious output, they must also request SPheno output (and provide a file). This is due to the GUM interface utilising Mathematica symbols provided by SARAH ’s SPheno routines.

Once a new model has been generated by GUM, Vevacious can be built from within the GAMBIT build directory with the command

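Assuming the standard GAMBIT backend build targets, the command is of the form:

```
make vevacious   # target name assumed; check the cmake output for the exact spelling
```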

which will either download and build Vevacious if it is not installed, or simply move the new model files from the GAMBIT patch directory to the Vevacious directory if it is already built. Note that building Vevacious for the first time will also download and install MINUIT [86], PHC [87], and HOM4PS2 [88].

2.3.3 CalcHEP 3.6.27

Required file entry:

Optional: for annihilating DM, or

             for decaying DM

GUM uses the backend convenience function provided by the new CalcHEP frontend (described in Appendix B.1) to compute tree-level decay widths.

For each new BSM decay, GUM generates a model-specific . For each newly-added decaying particle, GUM writes a module function , which requires the ability to call the backend convenience function . All new decays are then gathered up by the existing DecayBit function , which GUM modifies by adding an switch for newly-added decaying particles in the new model.

The appropriate GAMBIT input rule for a CalcHEP-generated is simply

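Schematically, and assuming the collecting function keeps its usual DecayBit role (the function name is an assumption, shown as a placeholder):

```yaml
Rules:
  - capability: decay_rates
    function: all_decays   # name assumed: the DecayBit function that gathers all decays
```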

If the user specifies the PDG code of a WIMP candidate via or , then GUM creates a DarkBit entry for the new model.

In the case of , GUM utilises the backend convenience function provided by the CalcHEP frontend, to build the Process Catalogue. It does this by computing \(2\rightarrow 2\) scattering rates as a function of the relative velocity \(v_{\mathrm{{rel}}}\), which are in turn fed to the appropriate module functions.

In the case of , GUM instead utilises the branching ratios from the for the decaying DM candidate, which can be constructed from CalcHEP, as described above.

The information contained within the Process Catalogue can be used by the GAMBIT native relic density solver for annihilating DM (using the function via DarkSUSY), and for all indirect detection rates. For annihilating DM, these utilise the velocity-weighted annihilation cross-section \(\sigma v_\text {rel}\). This is usually evaluated at \(v_{\mathrm{{rel}}}=0\) (as in the case of \(\gamma \) rays), but for solar neutrinos it is evaluated at a \(v_{\mathrm{{rel}}}\) corresponding to the solar temperature \(T_{\mathrm{{Sun}}}\). For decaying DM, indirect detection rates are explicitly disabled until backend support for decaying DM becomes available.

In the first release of GUM, there is no support for 4-fermion interactions in CalcHEP, as neither FeynRules nor SARAH is able to produce output for these.

From the GAMBIT build directory the command

figure ex

will build CalcHEP if it is not installed; otherwise it will move the new CalcHEP model files from the GAMBIT patch directory to the CalcHEP model directory.

2.3.4 micrOMEGAs 3.6.9.2

Required file entry:

Optional: for annihilating DM, or

             for decaying DM

MicrOMEGAs is a code capable of computing various DM observables for BSM models with WIMP candidates, such as the relic abundance, direct detection cross-sections, and indirect detection observables. Each micrOMEGAs installation in GAMBIT is a separate backend, as the source is compiled directly with the model files. Therefore, for each newly-added micrOMEGAs model, GUM creates a single new backend for micrOMEGAs, via the new frontend .

MicrOMEGAs uses CalcHEP files as input, so it is subject to the same caveats as CalcHEP, covered above. MicrOMEGAs assumes that there is an additional symmetry under which the SM is even and any dark matter candidate is odd. MicrOMEGAs identifies an odd particle by its name beginning with a tilde, such as . If no particle name in a theory begins with a tilde, the micrOMEGAs routines will fail.Footnote 4 The particle name is set by the option in FeynRules, and the option in SARAH. If the indicated DM candidate does not have a particle name beginning with a tilde, GUM throws an error.
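The tilde requirement is easy to express as a pre-flight check. The following is a minimal sketch of such a check (the function name and error message are ours, not GUM's actual implementation):

```python
def check_micromegas_dm_name(name):
    """micrOMEGAs treats a particle as Z2-odd only if its name begins
    with a tilde; mirror GUM's sanity check before generating output."""
    if not name.startswith("~"):
        raise ValueError(
            f"DM candidate '{name}' needs a name starting with '~' "
            "for micrOMEGAs to identify it as odd"
        )
    return True
```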

GUM provides a simple interface to the relic density calculation in micrOMEGAs, in the case of annihilating DM. The GAMBIT input entry for computing the relic density with micrOMEGAs is:

figure fg

If the user provides a decaying DM candidate, then GUM does not provide an interface to the micrOMEGAs relic density routines. If the user wishes to compute the relic density for a decaying DM candidate, they must implement their own relic density calculation by hand.

GUM also provides a simple interface to the direct detection routines in micrOMEGAs, which compute the spin-independent and spin-dependent cross-sections; these are added for both annihilating and decaying DM. The results are fed to DDCalc [4, 15], which computes expected rates for a wide range of direct detection experiments.

As each installation of micrOMEGAs is a separate backend, each requires a specific build command to be run in the GAMBIT build directory:

figure fh

Future versions of GAMBIT and GUM will interface to micrOMEGAs 5 [89], which contains routines for computing the relic abundance of DM via freeze-in, and allows for two-component DM.

2.3.5 Pythia 8.212

Required entries: .

If the user requests Pythia output for a given model, either FeynRules or SARAH generates a collection of UFO files. GUM calls MadGraph directly using the UFO model files, and generates new output for Pythia in the form of matrix elements. GUM then writes the appropriate entries in the backend patch system, connects the new matrix elements with those existing in Pythia, and adds the corresponding entry to the file . Because Pythia is a C++ code, and the class it defines is used directly in ColliderBit, new versions of the backend must be processed for automated classloading from the corresponding shared library by BOSS (the backend-on-a-stick script [1]). This process generates new header files that must be #included in GAMBIT itself, and thus picked up by CMake. A new version of Pythia is therefore correctly built by running the commands

figure fp

in the GAMBIT build directory, where specifies the number of processes to use when building. In the current version of ColliderBit, functions from nulike [90, 91] are also required in order to perform inline marginalisation over systematic errors in the likelihood calculation; this can be built with

figure fq

also in the GAMBIT build directory.

The user must provide a list of all processes to include in the new version of Pythia in the file under the heading ; see Sect. 3.3 for details.

Once a new Pythia has been created, it has access to all implemented LHC searches within ColliderBit. The relevant GAMBIT input file entry to include Pythia simulations is

figure ft

along with rules specifying how to resolve the corresponding dependencies and backend requirements, e.g.

figure fu

The matrix elements generated by MadGraph can include extra hard partons in the final state so that jet-parton matching is needed to avoid double counting between the matrix elements and the parton shower in Pythia. Currently, this is not automatic in GUM and must be implemented by the user as needed.

Traditional MLM matching, e.g. as found in MadGraph [36], applies a \(k_T\) jet measure cut on partons ( ) at the matrix element level, and separates events with different multiplicities. The optimal value of this cut should be related to the hard scale of the process, e.g. the mass of the produced particles, and tuned to ensure smoothness of differential jet distributions and invariance of the cross-section.

The hard scattering events are then showered and a jet finding algorithm (the \(k_T\)-algorithm [92, 93] implemented in Pythia's class in our case) is used on the final state partons to match the resulting jets to the original partons from the hard scatter. A jet is considered to be matched to the closest parton if the jet measure \(k_{T}(\text {parton},\text {jet})\) is smaller than a cutoff . This parton shower cut, , should be set slightly above . The event is rejected unless each jet is matched to a parton, except for the highest multiplicity sample, where extra jets are allowed below the \(k_{T}\) scale of the softest matrix element parton in the event.
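The accept/reject logic described above can be sketched as follows. The \(k_T\) measure used here (the smaller \(p_T\) times the rapidity-azimuth separation) is a simplified stand-in for Pythia's exact definition, the softest-parton comparison uses \(p_T\) as a proxy for the \(k_T\) scale, and all function names are ours:

```python
import math

def kt_measure(parton, jet):
    """Simplified kT distance between a parton and a jet, each given
    as (pT, rapidity, phi). A toy stand-in for the real jet measure."""
    pt1, y1, phi1 = parton
    pt2, y2, phi2 = jet
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return min(pt1, pt2) * math.hypot(y1 - y2, dphi)

def event_passes_matching(partons, jets, qcut_ps, highest_multiplicity=False):
    """Accept the event only if every jet is matched to a parton
    (kT measure below the parton shower cut qcut_ps). In the highest
    multiplicity sample, unmatched jets are allowed if they are softer
    than the softest matrix element parton."""
    unmatched = [j for j in jets
                 if all(kt_measure(p, j) >= qcut_ps for p in partons)]
    if not unmatched:
        return True
    if highest_multiplicity:
        softest = min(p[0] for p in partons)
        return all(j[0] < softest for j in unmatched)
    return False
```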

While a full implementation of this matching procedure for use in GUM is still in development, simple jet matching for models generated with GUM can be performed with the MLM matching machinery already present in Pythia. This is accessed via the ColliderBit initialisation of Pythia, which allows the relevant jet matching inputs to be passed through Pythia settings in the GAMBIT YAML file, for example:

figure ga

Using the above approach one can approximate the matching for a single extra hard parton in dark matter pair production, by applying the matching cuts only on the Pythia side, on events from the matrix elements generated by MadGraph. Here, we also apply a \(p_T\) cut on the partons in the hard scatter in Pythia ( ). While this \(p_T\) cut and the are not equivalent, the difference is small for single-jet events because the geometrical part of the \(k_T\) jet measure becomes unimportant.

A side-effect of using is that it applies to all final state particles, and so it must be set well below any cuts in the analysis. A poor initial cross-section estimate in Pythia may lower the phase-space selection efficiency, inflating the computation time. To combat this, may be raised; a sensible balance must then be struck between efficiency and the analysis cut constraints. Due to the limitations of this approach, the accuracy of the jet matching cannot be guaranteed, and should be confirmed on a per-model, per-analysis basis.

We refer the reader to the ColliderBit manual [5] for additional details on the Pythia options used within the GAMBIT YAML file.

The particle numbering scheme used by both GAMBIT and Pythia is that of the PDG. For dark matter particles to be correctly recognised as invisible by both libraries, their PDG codes must be within the range 51–60. Other particles that Pythia and GAMBIT tag as invisible are the SM neutrinos, neutralinos, sneutrinos, and the gravitino. Where possible, all particles in SARAH and FeynRules files passed to GUM by the user should adhere to the PDG numbering scheme. For more details, see Sec. 43 of the PDG review [94].

GUM checks that any newly-added particle in the GAMBIT particle database is consistent with the definition in Pythia. If there is an inconsistency between the two, GUM will throw an error. For example, the PDG code 51 is not assigned in the GAMBIT particle database by default, but is reserved for scalar DM in Pythia. GUM will therefore throw an error if the user attempts to add a new particle with PDG code 51 but with spin 1/2.
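Both consistency checks can be sketched as below. The reserved spin assignments for codes 51-53 follow the PDG numbering scheme (51 scalar, 52 spin-1/2, 53 vector DM); the functions themselves are our illustration, not GUM code:

```python
# Reserved spins for the generic DM codes in the PDG numbering scheme.
RESERVED_DM_SPINS = {51: 0.0, 52: 0.5, 53: 1.0}

def is_invisible_dm_code(pdg_code):
    """DM candidates must carry a PDG code in the range 51-60 to be
    tagged invisible by both GAMBIT and Pythia."""
    return 51 <= abs(pdg_code) <= 60

def check_pdg_consistency(pdg_code, spin):
    """Raise if a reserved DM code is paired with the wrong spin,
    e.g. code 51 (scalar DM) with spin 1/2."""
    if pdg_code in RESERVED_DM_SPINS and RESERVED_DM_SPINS[pdg_code] != spin:
        raise ValueError(
            f"PDG code {pdg_code} is reserved for spin-"
            f"{RESERVED_DM_SPINS[pdg_code]} DM, but spin {spin} was given"
        )
```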

2.3.6 HiggsBounds 4.3.1 and HiggsSignals 1.4.0

Required file entry:

Another advantage of using SPheno to compute decays is that all relevant couplings for HiggsBounds and HiggsSignals are automatically computed. Whenever SPheno output is generated, GUM also generates an interface to the GAMBIT implementations of HiggsBounds and HiggsSignals via the GAMBIT type .

GUM achieves this by generating a function that produces an instance of the GAMBIT native type from the decay output of SPheno. The object provides all decays of neutral and charged Higgses, SM-normalised effective couplings to SM final states, branching ratios to invisible final states, and top decays into light Higgses. For more details, we refer the reader to the SpecBit manual [8].

GAMBIT categorises models into two types: ‘SM-like’ refers to models with only the SM Higgs plus other particles, and ‘MSSM-like’ refers to models with extended Higgs sectors. The appropriate type is automatically selected for each model by the GAMBIT dependency resolver, by activating the relevant one of the module functions in ColliderBit that can provide capability .

For ‘SM-like’ models, GUM edits the ColliderBit module function to simply pass details of the single Higgs boson from the object of the new model. For ‘MSSM-like’ models, GUM edits the ColliderBit function , which communicates the properties of all neutral and charged Higgses to HiggsBounds/HiggsSignals in order to deal with extended Higgs sectors.

To ensure the interface to the works as expected, the user should make sure that the PDG codes of their Higgs sector mimic those of both GAMBIT and the SLHA:

figure gp
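For reference, the relevant SLHA conventions assign the following PDG codes to an NMSSM-like Higgs sector; the particle labels follow GAMBIT-style naming, and the dictionary itself is only an illustration:

```python
# Standard SLHA PDG codes for an NMSSM-like Higgs sector (three CP-even,
# two CP-odd, one charged Higgs); labels are GAMBIT-style and illustrative.
SLHA_HIGGS_PDGS = {
    "h0_1": 25,  # lightest CP-even Higgs (usually SM-like)
    "h0_2": 35,  # second CP-even Higgs
    "h0_3": 45,  # third CP-even Higgs (NMSSM)
    "A0_1": 36,  # lightest CP-odd Higgs
    "A0_2": 46,  # second CP-odd Higgs (NMSSM)
    "H+": 37,    # charged Higgs
}
```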

The function automatically supports all MSSM and NMSSM models within GAMBIT, as well as any model with a similar Higgs sector (e.g. a Two-Higgs Doublet Model or any subset of the NMSSM Higgs sector). If the user has extended Higgs sectors beyond this, i.e. with more Higgses than the NMSSM, then they will need to extend both GUM and GAMBIT manually.

On the GAMBIT side, if the Higgs sector has multiple charged Higgses, or more than three CP-even or more than two CP-odd neutral Higgses, the user must write a new function in to construct the correctly. If there are new CP-even Higgses, this will also require a new entry in to determine the 'most SM-like' Higgs.

In GUM, the user must add the PDG codes of additional mass eigenstates to the function in under the appropriate entries and , and also make appropriate changes to the functions in to reflect any changes to the construction of the .

The appropriate GAMBIT input entries for using HiggsBounds and HiggsSignals likelihoods are simply

figure hb

where the choice of function fulfilling the capability is automatically taken care of by the dependency resolver.

HiggsBounds and HiggsSignals can both be built with

figure hd

but neither actually needs to be rebuilt once a new model is added by GUM.

3 Usage

3.1 Installation

GUM is distributed with GAMBIT 2.0.0 and later. The program can be found within the folder in the GAMBIT root directory, and makes use of CMake. In addition to the minimum requirements of GAMBIT itself, GUM also requires at least

  • Mathematica 7.0

  • Python 2.7 or Python 3

  • The Python future module

  • Version 1.41 of the compiled Boost libraries Boost.Python, Boost.Filesystem and Boost.System

  • libuuid

  • libX11 development libraries.

Note that all CMake flags used in GUM are entirely independent of those used within GAMBIT. From the GAMBIT root directory, the following commands will build GUM:

figure hf

where specifies the number of processes to use when building.

3.2 Running GUM

The input for GUM is a file, written in the YAML format. This file contains all of the information required for GUM to write the relevant module functions for GAMBIT, in a similar vein to the input YAML file used in GAMBIT. GUM is executed with an initialisation file with the flag, as in

figure hj

The full set of command-line flags is:

  • to perform a dry run

  • to display help

  • to use the instructions from to run GUM

  • to use the instructions from to run GUM in reset mode

There are three operational modes of GUM: dry run, regular and reset. During a dry run, no code is actually written to GAMBIT. GUM checks that the Mathematica model file (either FeynRules or SARAH) is suitable for use, and writes a number of proposed source files for GAMBIT, but does not actually copy them to the GAMBIT source directories. This mode can be used for safe testing of new and model files, without modifying any GAMBIT source.

A regular run of GUM will perform all necessary checks, add the new model to GAMBIT and generate all relevant GAMBIT code requested in the file. After a regular GUM execution, GUM prints a set of commands to standard output for the user to run. It is recommended that the user copies these commands and runs them as instructed, as the order of the suggested build and CMake steps can be important, due to new templated C++ types being provided by backends (currently just Pythia).

In addition to the above, GUM outputs a reset ( ) file after a successful run. This file is used in the reset mode, and enables the user to remove a GUM-generated model from GAMBIT. Hence, after adding a new model, the user can run the command

figure hr

which will remove all source code generated by GUM associated with the model . Note that if the user manually alters any of the auto-generated code, the resetting functionality may not work as expected.

3.3 Input file and node details

GUM files are YAML files in all but name: they are written in YAML format and respect YAML syntax. The only mandatory nodes for a GUM input file are the node, specifying details of the Mathematica package used, and the node, which selects the GAMBIT backends that GUM should generate code for.

The full set of recognised nodes in a file is

  • : describes the Mathematica package used and the model name, plus any other information relating specifically to the package

  • : gives the PDG code for the annihilating DM candidate in the model

  • : gives the PDG code for the decaying DM candidate in the model

  • : gives the PDG codes for particles to be treated as invisibles in collider analyses

  • : selects which backends GUM should write output for

  • : gives specific options to use for each backend installation.

The node syntax is

figure id

For specific information on FeynRules files, see Sect. 3.4, and for SARAH files, see Sect. 3.5.

Information about the DM candidate of interest is given by either the node or the node. Note that only one of these nodes can be passed; if both are present in the file, GUM will throw an error. Although these nodes are optional, if neither is present, then no output will be written for DarkBit (including the Process Catalogue and direct detection interfaces). The syntax is

figure ih

in the case of annihilating DM, and similarly

figure ii

in the case of decaying DM. Note that only one DM candidate can be specified at present. Future versions of GAMBIT will allow for multiple DM candidates, and for the lightest stable particle (LSP) to be determined by the object.
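The mutual-exclusion logic for the two DM nodes can be sketched as below; the key names used here are placeholders, not the actual node names:

```python
def dark_matter_mode(gum_input):
    """Return which DM interface GUM should generate, enforcing that
    the two DM nodes are mutually exclusive. The key names are
    placeholders for the real node names."""
    has_ann = "annihilating_dm_node" in gum_input
    has_dec = "decaying_dm_node" in gum_input
    if has_ann and has_dec:
        raise ValueError("only one DM candidate node may be given")
    if has_ann:
        return "annihilating"
    if has_dec:
        return "decaying"
    return None  # neither present: no DarkBit output is written
```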

Any additional particles that are to be treated as invisible when calculating missing momentum in collider analyses are given by the node. They are given in a list, with anti-particle PDG codes also required when necessary. If this node is not present, ColliderBit will use the default list provided in .

figure im

The option specifies for which backends GUM should generate code.

figure io

The default for each possible backend output is . If the node is empty, or if all backend output is set to , GUM will terminate with an error message.

The node allows the user to pass specific settings relevant for each backend to GUM. We briefly go through these in turn. The syntax for this is

figure it

To tell MadGraph which processes to generate Pythia matrix elements for, the user should provide a list of all BSM processes in MadGraph syntax under the sub-node. The user needs to know the names of each particle within MadGraph in order to fill in this information.

While Pythia is able to produce initial-state jets through its own showering, these will be very soft. If the user specifically requires hard ISR jets, such as for a monojet signal associated with DM pair production, these matrix elements should be explicitly requested. In doing so, the user must take care that collider events are not double counted, i.e. that jet matching is performed. We explain our treatment of jet matching in Pythia in Sect. 2.3.5.

For example, to generate matrix elements for monojet and mono-photon production in association with pair production of a DM candidate X, one would include

figure iv

The sub-node is currently always required if is set in the file.

As with the existing Pythia functionality in ColliderBit, the new function introduced to ColliderBit by GUM recognises a YAML option , which can be provided by a user in their GAMBIT input file. In particular, the boolean sub-option of the option allows one to activate all processes specified in the entry of the file.

Other sub-nodes of the file’s entry offer the ability to use the native multiparticle description within MadGraph ( ), and to select events with the relevant particles in the initial and final state ( ). An example including all available sub-nodes for the entry is shown below:

figure jk

In this example, including the example node will generate an additional group of events known as , which can also be set in the option of the new ColliderBit module function . Setting this flag to picks out all processes in which any of the particles in the appears as an initial or final state. This is useful when one wishes to simulate events only for a specific subset of the processes for which matrix elements have been generated for the new model.

For SPheno, the user can request to turn loop decays off via the flag ,

figure js

3.4 FeynRules pathway

Here we describe the process by which GUM can parse a model defined in FeynRules. For details on how to correctly implement a model in FeynRules, we refer the reader to the FeynRules manual [32]. There are many examples of models available on the FeynRules website.Footnote 5

3.4.1 Outputs

FeynRules is designed to study particle physics phenomenology at tree level, and does not directly interface to any spectrum generators. FeynRules is therefore well suited to EFTs and simplified models, as gauge invariance and renormalisability are not typically required in these cases. Because of this, when working from the outputs of FeynRules, GUM is only able to provide minimal interfaces to the SpecBit module and the GAMBIT model database.

FeynRules is able to output two file formats usable by GUM: CalcHEP ( files) and UFO files. GUM uses files with CalcHEP to compute tree-level decay rates and DM annihilation cross-sections, and with micrOMEGAs to compute DM relic densities and direct detection rates. The UFO files are currently only used by the MadGraph-Pythia 8 chain, for collider physics. See Sect. 2 for details.

3.4.2 Porting a FeynRules model to GAMBIT

To add a model to GAMBIT based upon a FeynRules file, GUM tries to find the model file, and any restriction ( ) files that the user may wish to include, first in

figure jx

where the folder is located inside the GAMBIT root directory. If these model files do not come with the FeynRules installation, GUM instead looks in

figure jz

To emulate the FeynRules command the file simply needs the entry

figure kc

Many models hosted on the FeynRules website and elsewhere utilise 'base' files and extensions, where one model builds upon another. For instance, a model called that builds on the Standard Model could be loaded in a Mathematica session using the FeynRules command . This behaviour is also possible with GUM via the additional option . In this case, GUM expects to be located in and to be in (where both paths can be independently relative to or to ). A user would indicate this in their input file like so:

figure km

An additional FeynRules-only option for the node includes the ability to load FeynRules restriction ( ) files. A FeynRules restriction is useful when considering a restricted subspace of the model at hand. For example, to set the CKM matrix to unity, we can load the restriction file that is shipped with FeynRules, as

figure kq

Another FeynRules-only option is the ability to specify the name of the Lagrangian that FeynRules should compute the Feynman rules for. The definition of the Lagrangian can either be a single definition from the FeynRules file:

figure kr

or can be given as a string of Lagrangians:

figure ks

After loading the model, GUM performs some diagnostics on the model to ensure its validity, checking that the Lagrangian is Hermitian, and that all kinetic and mass terms are correctly diagonalised according to the FeynRules conventions. For more details, we refer the reader to the FeynRules manual [32].

3.4.3 Requirements for FeynRules files

GUM interacts with loaded FeynRules files via the and commands. To successfully parse the parameter list, every parameter must have a and associated with it.

A model implemented in FeynRules will be parametrised in GAMBIT by the full set of parameters denoted as external, by in the input file. Additionally, the masses of all non-SM particles are added as input parameters, as they are not computed by a spectrum generator.

For example, the SM extended by a scalar singlet S via a Higgs portal with the interaction Lagrangian \({\mathcal {L}} \supset \lambda _{hs} H^\dagger H S^2\) would be parametrised in GAMBIT by the coupling \(\lambda _{hs}\), as well as the mass of the new field \(m_S\).

The user should not use non-alphanumeric characters (apart from underscores) when defining parameter names (including the field), as these will typically result in errors when producing output. The exception is a tilde, which is often used to signify a conjugate field, or, in the case of micrOMEGAs, a DM candidate.
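This naming rule can be captured by a simple pattern. The sketch below is our own formulation of the rule, not GUM's actual validation code; we allow the tilde at either end, since micrOMEGAs DM candidates carry a leading tilde while conjugate fields are conventionally marked with a trailing one:

```python
import re

# Alphanumerics and underscores only, with an optional tilde at
# either end (illustrative encoding of the rule described above).
_VALID_NAME = re.compile(r"^~?[A-Za-z0-9_]+~?$")

def valid_name(name):
    """Return True if a parameter or particle name obeys the rule."""
    return bool(_VALID_NAME.match(name))
```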

For the MadGraph-Pythia pathway to work correctly, each new external parameter must have its set. See Sec. 6.1.7 of the FeynRules manual for details [32]. A fully compliant FeynRules entry for a parameter looks as follows:

figure lb

where the , and are all defined. We also set to , as FeynRules is not able to generate CalcHEP files for complex parameters. All parameters that are complex should be redefined as their real and imaginary parts, with all factors of i explicitly placed in the Lagrangian.

For a matrix, the does not need to be specified,

figure li

In this case, GUM will add 4 model parameters to the model, one for each matrix entry, labelled by , i.e. for the above entry. Note that the values for each entry can be set to anything; they will all be set by GAMBIT during a scan.
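A sketch of this expansion for a hypothetical 2\(\times \)2 matrix parameter; the '<name><i>x<j>' labelling is our own illustrative choice, not necessarily the scheme GUM uses:

```python
def matrix_parameter_names(name, rows=2, cols=2):
    """Expand a matrix-valued external parameter into one scan
    parameter per entry. The '<name><i>x<j>' labels are illustrative."""
    return [f"{name}{i}x{j}"
            for i in range(1, rows + 1)
            for j in range(1, cols + 1)]
```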

An example of a particle implementation, for the Majorana DM candidate used in Sect. 4, is

figure ll

Here we see that the begins with a tilde, so that micrOMEGAs can correctly identify it as a WIMP DM candidate; the PDG code is set to 52 (generic spin-1/2 DM, as per the PDG); and the particle mass will be added as an external parameter. Note that because this particle has , GUM does not require the electric charge to be set. If the particle were Dirac, i.e. , GUM would require the additional entry .

For a particle \(\eta \) that should decay, an appropriate entry for the particle width would look like , enabling the contents of the to be passed to CalcHEP. Note that in this case, will not be set as a free parameter of the model in GAMBIT, but derived from the model parameters and accessible channels.

Although FeynRules is able to compute the Feynman rules for a theory containing 4-fermion interactions, it does not support generating CalcHEP files for these models.Footnote 6 The first release of GUM therefore does not support theories implemented in FeynRules with 4-fermion interactions; however, such support is planned for future releases.

3.5 SARAH pathway

3.5.1 Outputs

As shown in Table 1, SARAH is able to generate output for CalcHEP, micrOMEGAs, Pythia, SPheno and Vevacious. As SARAH is able to generate CalcHEP, MadGraph/Pythia and micrOMEGAs output, it can mirror the capabilities of FeynRules in the context of GUM.

SARAH has been labelled a 'spectrum generator generator', as it can automatically write Fortran source code for SPheno for a given model. GUM is able to automatically patch the SPheno source code generated by SARAH, and write a frontend interface to that SARAH-SPheno version.

3.5.2 Porting a SARAH model to GAMBIT

To add a model to GAMBIT based upon a SARAH file, the model file must be located in

figure lv

or

figure lw

The usual SARAH files and should also be present in one of these locations. To generate spectra via SPheno, a file must also be provided in the same directory.

GUM loads a new model in SARAH by invoking the command , which is selected by the entries

figure mc

In order to validate the model GUM uses the SARAH command . SARAH provides the results of the function only to and via error messages. GUM therefore captures the output and message streams from Mathematica in order to gather this information, and decides whether the errors should be considered fatal or not. Non-fatal errors, including gauge anomalies, possible allowed terms in the Lagrangian or missing Dirac spinor definitions, are directed to GUM ’s own standard output as warnings. Fatal errors, such as non-conservation of symmetries or those associated with particle and parameter definitions, cause GUM to abort, as subsequent steps are guaranteed to fail in these cases.
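The fatal/non-fatal triage can be sketched as a simple classifier over the captured message streams. The trigger phrases below are invented for the sketch; GUM matches the actual SARAH message text:

```python
# Invented trigger phrases standing in for the actual SARAH messages.
FATAL_PATTERNS = ("not conserved", "particle definition", "parameter definition")
NON_FATAL_PATTERNS = ("gauge anomal", "allowed terms", "dirac spinor")

def classify_sarah_message(message):
    """Decide whether a captured CheckModel message is fatal (GUM
    aborts) or a warning (forwarded to GUM's standard output)."""
    text = message.lower()
    if any(p in text for p in FATAL_PATTERNS):
        return "fatal"
    if any(p in text for p in NON_FATAL_PATTERNS):
        return "warning"
    return "warning"  # default: unrecognised messages are not fatal
```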

3.5.3 Requirements for SARAH files

As with FeynRules, GUM extracts information from SARAH about the parameters and particles in the model. These are collected by SARAH in the and lists, respectively.

Definitions for new model parameters are located in the file within the SARAH model folder. A well-defined entry for a new SARAH parameter looks as follows:

figure mj

where the block and respective index are required fields.

For a matrix, the index does not need to be specified:

figure ml

This instructs GUM to add a block to the definition, which will be filled by a spectrum generator.

GUM is concerned with the properties of physical particles in the mass basis (designated in SARAH). An example particle implementation from the file is:

figure mr

Here the important entries are

  • the entry, where signifies that the particle mass will be provided by the GAMBIT object (whether that is filled using SPheno or a tree level calculation),

  • the entry, which specifies a list over all generations for the mass eigenstates (in this example there is just one), and

  • the field.

Note that SARAH has default definitions for many particles and parameters in and . Their properties can be inherited, or overwritten, via the field.

Information about mixing matrices is stored by SARAH in the variable . From this variable GUM learns the names of the mixing matrices associated with each particle. For Weyl fermions, GUM requests the name of the associated Dirac fermion, stored in the variable . As an example, the mixing matrices for the electroweakino sector of the MSSM are extracted as

figure nc

which associates the matrix with the Weyl fermion (neutralinos), and the matrices and with (negative charginos) and (positive charginos). As these are Weyl fermions, the Dirac eigenstates are

figure nj

GUM thus knows to assign the mixing matrix to Dirac-eigenstate neutralinos , as well as the matrices and to Dirac-eigenstate charginos .

As opposed to FeynRules, where all parameters and particle masses become GAMBIT model parameters, the SARAH pathway attempts to optimise this list through various means. In the absence of a spectrum generator (e.g. SPheno, see below), almost all the parameters in become model parameters. Only those with explicit dependencies on other parameters are removed, i.e. those with the or fields. In addition, SARAH provides tree-level relations for all masses, via , so even in the absence of a spectrum generator, none of the particle masses become explicit model parameters. For models with BSM states that mix together into mass eigenstatesFootnote 7, the tree-level masses are not used and an error is thrown to inform the user of the need to use a spectrum generator.
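The parameter-list reduction can be sketched as a simple filter; the 'dependence' key below stands in for SARAH's dependence fields, whose actual names we do not reproduce:

```python
def free_model_parameters(parameters):
    """Keep only parameters without an explicit dependence on other
    parameters; particle masses never enter, since SARAH supplies
    tree-level mass relations. Each parameter is a dict, and
    'dependence' is a placeholder for SARAH's dependence fields."""
    return [p["name"] for p in parameters if not p.get("dependence")]
```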

If the user elects in their file to generate any outputs from SARAH for specific backends, GUM requests that SARAH generate the respective code using the relevant SARAH commands. These are for CalcHEP and micrOMEGAs, for MadGraph/Pythia, for SPheno and for Vevacious.

When SPheno output is requested, GUM interacts further with SARAH in order to obtain all necessary information for spectrum generation:

  1. Replace parameters and masses with those in SPheno. The parameter names are obtained using the function operating on the lists and . The particle masses are obtained simply by using the command.

  2. Extract the names and default values of the parameters in the and blocks, as defined in the model file . For each of these, store the boundary conditions, also from , that match the and parameters to those in the parameter list. Note that as of GUM 1.0, only the boundary conditions in are parsed.

  3. Remove from the parameter list those parameters that will be fixed by the tadpole equations, as they are not free parameters. These are collected from the list as defined in .

  4. Get the names of the blocks, entries and parameter names for all SLHA input blocks ending in , e.g. , , etc. SARAH provides this information in the list .

  5. Register the values of various flags needed to properly set up the interface to SPheno. These are and .

4 A worked example

To demonstrate the process of adding a new model to GAMBIT with GUM, in this section we provide a simple worked example. Here we use GUM to add a model to GAMBIT, perform a parameter scan, and plot the results with pippi [95]. This example is designed with ease of use in mind, and can be performed on a personal computer in a reasonable amount of time. For this reason we select a simplified DM model, implemented in FeynRules.

In this example, we consider constraints from the relic density of dark matter, gamma-ray indirect detection, and traditional high-mass direct detection searches. It should be noted that this is an example, not a full global scan, so we do not use all of the information available to us. A real global fit of this model would also consider nuisance parameters relevant to DM, as well as a full set of complementary likelihoods, such as those from other indirect DM searches, low-mass direct detection searches, and cosmology.

The FeynRules model file, file, GAMBIT input file and file used in this example can be found within the folder in GUM.

4.1 The model

The model is a simplified DM model, where the Standard Model is extended by a Majorana fermion \(\chi \) acting as DM, and a scalar mediator Y with a Yukawa-type coupling to all SM fermions, in order to adhere to minimal flavour violation. The DM particle is kept stable by a \({\mathcal {Z}}_2\) symmetry under which it is odd, \(\chi \rightarrow -\chi \), and all other particles are even. Both \(\chi \) and Y are singlets under the SM gauge group.

Here, we assume that any mixing between Y and the SM Higgs is small and can be neglected. This model has been previously considered in e.g. [96, 97] and is also one of the benchmark simplified models used in LHC searches [98,99,100]. The model Lagrangian is

\( {\mathcal {L}} = {\mathcal {L}}_{\mathrm {SM}} + \frac{1}{2} {\bar{\chi }}\left( i\gamma ^\mu \partial _\mu - m_\chi \right) \chi + \frac{1}{2}\partial _\mu Y \partial ^\mu Y - \frac{1}{2} m_Y^2 Y^2 - \frac{g_\chi }{2}\, Y {\bar{\chi }}\chi - c_Y \sum _f \frac{y_f}{\sqrt{2}}\, Y {\bar{f}}f. \)   (1)

Note that this theory is not \(SU(2)_L\) invariant. One possibility for a ‘realistic’ model involves Y-Higgs mixing, which justifies choosing the \(Y{\overline{f}}f\) couplings to be proportional to the SM Yukawas \(y_f\).

The free parameters of the model are simply the dark sector masses and couplings, \(\{m_\chi \), \(m_Y\), \(c_Y\), \(g_\chi \}\). In this example we follow the FeynRules pathway, working at tree level.

4.2 The .gum file

Firstly, we need to add the FeynRules model file to the GUM directory tree. The model is named ‘MDMSM’ (Majorana DM, scalar mediator). Starting in the GUM root directory, we first create the directory that the model will live in, and move the supplied example model file into it:

[code listing omitted]

As we are working with FeynRules, the only backends that we are able to create output for are CalcHEP, micrOMEGAs and MadGraph/Pythia. For the sake of speed, in this tutorial we will not include any constraints from collider physics. This is also a reasonable approximation, as for the mass range that we consider here, the constraints from e.g. monojet, dijet and dilepton searches are subleading (see e.g. Ref. [96] and Appendix A). We therefore do not request any Pythia output in the .gum file. The contents of the supplied file are simple:

[code listing omitted]

Note the selection of the PDG code of the DM particle as 52, so that if we were to use Pythia, \(\chi \) would be correctly identified as invisible.

We can run this from the GUM directory,

[code listing omitted]

and GUM will automatically create all code needed to perform a fit using GAMBIT. On a laptop with an Intel Core i5 processor, GUM takes about a minute to run. All that remains now is to (re)compile the relevant backends and GAMBIT, and the new model will be fully implemented, and ready to scan. GUM prints a set of suggested build commands to standard output to build the new backends and GAMBIT itself. Starting from the directory, these are

[code listing omitted]

where \(n\) specifies the number of processes to use when building.

Note that GUM does not adjust any CMake flags used in previous GAMBIT compilations, so the above commands assume that the user has already configured GAMBIT appropriately and built any relevant samplers before running GUM. A user wishing to instead configure and build GAMBIT from scratch after running GUM, in order to e.g. run the example scan of Sect. 4 using differential evolution sampling and MPI parallelisation, would need to instead do (again, starting from the build directory)

[code listing omitted]

For more thorough CMake instructions, see the documentation and README in the GAMBIT root directory.

4.3 Phenomenology and constraints

The constraints that we will consider for this model are entirely in the DM sector, as those from colliders are less severe. (Collider constraints are investigated in Appendix A.) The dark matter constraints are:

  • Relic abundance: computed by micrOMEGAs, and employed as an upper bound, in the spirit of effective DM models.

  • Direct detection: rates computed by micrOMEGAs, likelihoods from XENON1T 2018 [101] and LUX 2016 [102], as computed with DDCalc [4, 14, 15].

  • Indirect detection: Fermi-LAT constraints from gamma-ray observations of dwarf spheroidal galaxies (dSphs) [103]. Tree-level cross-sections are computed by CalcHEP, \(\gamma \)-ray yields are subsequently computed via DarkSUSY [104, 105], and the constraints are applied by gamLike [4].

As the relic density constraint is imposed only as an upper bound, we rescale all DM observables by the fraction of DM, \(f=\varOmega _\chi /\varOmega _\text {DM}\).
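The one-sided treatment of the relic density, and the rescaling of signals by the DM fraction, can be sketched as follows. This is a minimal illustration in Python, not GAMBIT's actual implementation; the central value, uncertainty and example numbers are placeholders standing in for the Planck-based inputs GAMBIT uses:

```python
OMEGA_DM_H2 = 0.1188  # observed DM relic density (illustrative Planck-like value)
SIGMA_H2 = 0.0010     # illustrative combined uncertainty

def dm_fraction(omega_chi_h2):
    """Fraction of DM made up by chi, f = Omega_chi / Omega_DM, capped at 1."""
    return min(omega_chi_h2 / OMEGA_DM_H2, 1.0)

def relic_upper_bound_loglike(omega_chi_h2):
    """One-sided Gaussian: no penalty when chi is underabundant,
    a Gaussian penalty when the predicted abundance overshoots."""
    if omega_chi_h2 <= OMEGA_DM_H2:
        return 0.0
    return -0.5 * ((omega_chi_h2 - OMEGA_DM_H2) / SIGMA_H2) ** 2

# Direct detection rates scale with f, indirect detection with f**2.
f = dm_fraction(0.06)        # an underabundant example point
sigma_si_eff = f * 1.0e-46   # cm^2, hypothetical unscaled cross-section
sigma_v_eff = f**2 * 3e-26   # cm^3/s, hypothetical unscaled <sigma v>
```

The cap at \(f=1\) simply reflects that \(\chi \) cannot make up more than all of the observed DM once the upper-bound likelihood is satisfied.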

Here we will use the GAMBIT input file supplied with the example. Although it contains a little more than the GAMBIT input file automatically generated by GUM, it is still fairly standard, so we will cover only the important sections here. For an overview of YAML files in GAMBIT, we refer the reader to Sec. 6 of the GAMBIT manual [1].

Firstly, the Parameters section indicates all models required for this scan: not just the MDMSM parameters, but also SM parameters, nuclear matrix elements and DM halo parameters. The parameter range of interest for the MDMSM model will be masses ranging from 45 GeV to 10 TeV, and dimensionless couplings ranging from \(10^{-4}\) to \(4\pi \). We will scan each of these four parameters logarithmically.

[code listing omitted]
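Scanning a parameter logarithmically amounts to mapping a unit-interval random number onto the range uniformly in \(\log _{10}\) of the parameter. A minimal sketch of such a prior transformation, using the range endpoints quoted above (the function name is ours, not GAMBIT's):

```python
import math

def log_prior(u, lo, hi):
    """Map u in [0, 1] onto [lo, hi] uniformly in log10(x)."""
    return 10.0 ** (math.log10(lo) + u * (math.log10(hi) - math.log10(lo)))

# Mass range 45 GeV .. 10 TeV, coupling range 1e-4 .. 4*pi
m_chi = log_prior(0.5, 45.0, 1.0e4)          # geometric midpoint, ~671 GeV
g_chi = log_prior(0.0, 1.0e-4, 4 * math.pi)  # lower edge of the coupling range
```

With such a prior, each decade of the range receives equal sampling weight, which is appropriate when the scale of a mass or coupling is unknown a priori.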

The ObsLikes section includes likelihoods concerning the relic density, indirect detection from dSphs, and direct detection experiments.

[code listing omitted]

The Rules section uniquely specifies the functions for the dependency resolver to use:

[code listing omitted]

The Scanner section selects the differential evolution sampler Diver [3], with a fairly loose stopping tolerance of \(10^{-3}\) and a working population of 10,000 points.

[code listing omitted]

To perform the scan we copy the GAMBIT input file to the appropriate folder within the GAMBIT root directory. This is a necessary step, as the input file includes the appropriate Standard Model YAML file via a relative path (i.e. relative to the GAMBIT root directory). From the GAMBIT root directory, we

[code listing omitted]

and run GAMBIT with the desired number of MPI processes,

[code listing omitted]

The above scan should converge in a reasonable time on a modern personal computer; this took 11 hr to run using 4 cores on a laptop with an i5-6200U CPU @ 2.30GHz, sampling 292k points in total. The results of this scan are shown below.

Note that whilst the scan has converged statistically, the convergence criterion that we set in the input file above is not particularly stringent, so many of the contours presented in this section are not sampled well enough to be clearly defined. A serious production scan would typically be run for longer, and more effort made to map the likelihood contours more finely. Nonetheless, the samples generated are more than sufficient to extract meaningful physics.

Once the scan has finished, we can plot the results using pippi [95]. As Diver aims to find the maximum likelihood point, we will perform a profile likelihood analysis with pippi. Assuming that pippi is in the user's PATH, do

[code listing omitted]

which will produce plots of the four model parameters against one another, as well as against a raft of observables such as the relic abundance and the spin-independent cross-section (rescaled by f).

4.4 Results

The upper panel of Fig. 1 shows the profile likelihood in the plane of the DM mass \(m_\chi \) against the mediator mass \(m_Y\). The relic density requirement maps out the structure in the same plane. There are two sets of solutions: firstly when the DM is heavier than the mediator, \(m_\chi > m_Y\) (bordered by the red dashed line in Fig. 1), and secondly where DM annihilates on resonance, \(2m_\chi \approx m_Y\) (centred on the purple dashed line in Fig. 1).

When \(m_\chi < m_Y\) and the YY annihilation channel is not kinematically accessible, annihilation predominantly occurs via an s-channel Y to \(b{\overline{b}}\) or \(t{\overline{t}}\), depending on the DM mass. In this case, the only way to efficiently deplete DM in the early Universe is when annihilation is on resonance, \(m_\chi \approx m_Y/2\). Away from the resonance when the YY channel is closed, even couplings of \(4\pi \) are not large enough to produce a sufficiently high annihilation cross-section to deplete the thermal population of \(\chi \) to below the observed value.

When kinematically allowed, \(\chi {\overline{\chi }} \rightarrow Y \rightarrow t{\overline{t}}\) is the dominant process responsible for depleting the DM abundance in the early Universe. When \(m_\chi < m_t\) and \(m_\chi < m_Y\), the only way to produce the correct relic abundance is when exactly on resonance, \(2m_\chi = m_Y\), annihilating mostly to \(b{\overline{b}}\). The effect of the t threshold can clearly be seen in Fig. 1: as the \(\chi {\overline{\chi }}\rightarrow t{\overline{t}}\) channel opens up, the contours do not trace the resonance \(2m_\chi = m_Y\) quite as tightly. This is because the cross-section to \(t{\overline{t}}\) is significantly larger, as the mediator couplings to the SM fermions are proportional to their Yukawas. This means that near the resonance region it is far easier to satisfy the DM abundance constraint, which leads to the spread about the purple line for \(m_\chi > m_t\) in Fig. 1.

Fig. 1

Profile likelihood in the \(m_\chi \)–\(m_Y\) plane with the relic density as an upper bound (upper panel) and as an observation (lower panel). Above the red dashed line at \(m_\chi =m_Y\), DM can annihilate into Y bosons. The purple dashed line at \(2m_\chi =m_Y\) indicates the region where DM can annihilate on resonance. Contour lines show the 1 and \(2\sigma \) confidence regions. The white star shows the best-fit point. The grey contours in the lower panel show the 1 and \(2\sigma \) contours from the upper panel

Fig. 2

Profile likelihood in the \(\varOmega _\chi h^2\)–\(c_Y\) (top), \(\varOmega _\chi h^2\)–\(g_\chi \) (centre), and \(\varOmega _\chi h^2\)–\(m_\chi \) (bottom) planes. The orange dashed line shows the standard \(\varLambda \)CDM limit from Planck [106]. Contour lines show 1 and \(2\sigma \) confidence regions, and the white star the best-fit point

Fig. 3

Upper panel: profile likelihood in the \(m_\chi \)\(g_\chi \) plane when the relic density is an upper bound. Lower panel: the \(m_\chi \)\(g_\chi \) plane coloured by \(m_Y/m_\chi \), when treating the relic density as an observation. Shading and white contour lines represent 1 and \(2\sigma \) confidence regions. Grey contours in the lower panel are the 1 and \(2\sigma \) regions from the upper panel, where the relic density is used as an upper bound. The white star corresponds to the best-fit point

When the DM candidate is heavier than the mediator, the process \(\chi {\overline{\chi }} \rightarrow Y Y\) is kinematically accessible, and proceeds via t-channel \(\chi \) exchange. When this channel is open, the correct relic abundance, \(\varOmega _\chi h^2\), can be acquired independently of \(c_Y\) by adjusting \(m_\chi \) and \(g_\chi \). This can be seen in Fig. 2.

In this regime, the relic abundance constrains the DM coupling \(g_\chi \), as seen in the central panel of Fig. 2, with annihilation cross-section \(\langle \sigma v\rangle \propto g_\chi ^2 c_Y^2 / m_\chi ^2\). We plot \(m_\chi \) against \(g_\chi \) in Fig. 3; the lower bound is set by the resonance region, and is (unsurprisingly) poorly sampled for low values of \(m_\chi \).

To show the impact of allowing DM to be underabundant, we perform a separate scan where we instead employ a Gaussian likelihood for the relic abundance. This can be achieved by instead using the following entry in the ObsLikes section of the GAMBIT input file:

[code listing omitted]

We show the \(m_\chi \)\(m_Y\) plane for this scan in the lower panel of Fig. 1. For a given point in the \(m_\chi \)\(m_Y\) plane, the couplings \(g_\chi \) and \(c_Y\) must be correctly tuned to fit the relic density requirement: clearly, the scanner struggles to find such points compared to when DM can be underabundant. Notably, the sampler struggles to find the very fine-tuned points on resonance when \({\overline{t}}t\) is not kinematically accessible.

In the lower panel of Fig. 3 we show the \(m_\chi \)–\(g_\chi \) plane when requiring that \(\chi \) fits the observed relic abundance, coloured by \(\frac{m_Y}{m_\chi }\). There is a well-defined red area scattered along a straight line with \(\frac{m_Y}{m_\chi } < 1\), corresponding to efficient annihilation to YY, for which \(\langle \sigma v\rangle \propto g_\chi ^4/m_\chi ^2\). This is reflected in the lower panel of Fig. 1: almost all of the valid samples for \(m_\chi < m_t\) are in the regime where \(m_\chi > m_Y\), i.e. above the red dashed line. Here we see that the slope of the line followed by the red area in the lower panel of Fig. 3 is exactly half that of the lower bound on \(g_\chi \), due to the fact that the latter is instead set by resonant annihilation to fermions, which involves one less power of \(g_\chi \) in the corresponding matrix element, i.e. \(\langle \sigma v\rangle \propto g_\chi ^2 c_Y^2 / m_\chi ^2\).
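The factor of two between the slopes follows directly from imposing a fixed \(\langle \sigma v\rangle \), as the relic density requirement effectively does, in each regime:

$$\begin{aligned} \langle \sigma v\rangle \propto \frac{g_\chi ^4}{m_\chi ^2} = \text {const}&\Rightarrow \log g_\chi = \tfrac{1}{2}\log m_\chi + \text {const}, \\ \langle \sigma v\rangle \propto \frac{g_\chi ^2 c_Y^2}{m_\chi ^2} = \text {const}&\Rightarrow \log g_\chi = \log m_\chi + \text {const} \quad (c_Y\ \text {fixed}), \end{aligned}$$

so on logarithmic axes the YY line rises half as steeply as the lower bound set by resonant annihilation to fermions.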

Direct detection processes proceed via t-channel Y exchange. The functional form of the spin-independent cross-section is [e.g. 98]:

$$\begin{aligned} \sigma _{\mathrm{SI}}^{N} = \frac{\mu _{\chi N}^2 m_N^2}{\pi }\left( \frac{g_\chi c_Y}{v m_Y^2}\right) ^2 f_N^2\,, \end{aligned}$$
(2)

where \(\mu _{\chi N}\) is the DM-nucleon reduced mass, \(N=n,p\), and the form factor

$$\begin{aligned} f_N = \sum _{q=u,d,s} f^q_{N} + \frac{2}{27}f^G_N\,. \end{aligned}$$
(3)

Here the light-quark form factors are

$$\begin{aligned} f_p^u&= 0.0233, f_p^d = 0.0343, f_p^s = 0.0458 \end{aligned}$$
(4)
$$\begin{aligned} f_n^u&= 0.0160, f_n^d = 0.0499, f_n^s = 0.0458, \end{aligned}$$
(5)

and the gluon factors \(f^G_N = 1-\sum _{q=u,d,s} f^q_{N}\) are

$$\begin{aligned} f^G_p&= 0.8966, f^G_n = 0.8883. \end{aligned}$$
(6)

These follow directly from the values \(\sigma _s=43\) MeV, \(\sigma _l=58\) MeV chosen in the GAMBIT input file presented in Sect. 4.3. Details of the conversion between the two parameterisations, and a discussion of possible values for \(\sigma _s\) and \(\sigma _l\), can be found in Refs. [4, 107].

Thus for a given DM mass \(m_\chi \), direct detection constrains the parameter combination \(g_\chi c_Y/m_Y^2\), rescaled by the DM fraction \(f \equiv \varOmega _\chi /\varOmega _{\mathrm {DM}}\).
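Equations (2)–(6) are simple enough to evaluate directly. The sketch below reproduces the gluon form factors from the light-quark values and evaluates \(\sigma _{\mathrm{SI}}^{p}\) for an illustrative parameter point; the Higgs vev \(v = 246\) GeV, the nucleon mass, and the GeV\(^{-2}\rightarrow \)cm\(^2\) conversion are our assumed conventions for Eq. (2):

```python
import math

# Light-quark form factors, Eqs. (4)-(5)
FQ_P = {"u": 0.0233, "d": 0.0343, "s": 0.0458}
FQ_N = {"u": 0.0160, "d": 0.0499, "s": 0.0458}

GEV2_TO_CM2 = 3.894e-28  # (hbar c)^2 in GeV^2 cm^2
V_HIGGS = 246.0          # GeV, electroweak vev (assumed convention in Eq. 2)
M_N = 0.939              # GeV, nucleon mass

def gluon_factor(fq):
    # Eq. (6): f^G_N = 1 - sum_q f^q_N
    return 1.0 - sum(fq.values())

def form_factor(fq):
    # Eq. (3): f_N = sum_q f^q_N + (2/27) f^G_N
    return sum(fq.values()) + (2.0 / 27.0) * gluon_factor(fq)

def sigma_si(m_chi, m_y, g_chi, c_y, fq):
    # Eq. (2): spin-independent chi-nucleon cross-section, returned in cm^2
    mu = m_chi * M_N / (m_chi + M_N)        # DM-nucleon reduced mass, GeV
    amp = g_chi * c_y / (V_HIGGS * m_y**2)  # GeV^-3
    return (mu**2 * M_N**2 / math.pi) * amp**2 * form_factor(fq)**2 * GEV2_TO_CM2

# e.g. sigma_si(200.0, 500.0, 1.0, 1.0, FQ_P) for m_chi = 200 GeV, m_Y = 500 GeV
```

The quadratic dependence on \(g_\chi c_Y/m_Y^2\) is manifest in the code: doubling either coupling quadruples the cross-section, which is exactly the combination direct detection constrains.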

Figure 4 shows the spin-independent cross-section on protons as a function of the DM mass. As it is possible for \(\chi \) to be underabundant for all masses, it is easy to evade direct detection limits by simply tuning the couplings. We also plot the projection from LZ [108], which shows the significant effect that future direct detection experiments can have on the parameter space of this model, including the ability to probe the current best-fit point.

The best-fit region in Fig. 4 lies just below the XENON1T limit: this is due to a small excess (less than \(2\sigma \)) in the data, which can be explained by this model. This excess is discussed in more detail in a GAMBIT study of scalar singlet DM [14].

Fig. 4

Profile likelihood in the \(\sigma _\mathrm {SI}^p\)\(m_\chi \) plane. The solid red line shows the exclusion from XENON1T [101] and the dotted orange line shows the projection from LZ [108]. The spin-independent scattering cross-section of dark matter with protons \(\sigma _\mathrm {SI}^p\) is rescaled by the fraction of predicted relic abundance \(f\equiv \varOmega _\chi /\varOmega _{\mathrm {DM}}\). Contour lines show the 1 and \(2\sigma \) confidence regions. The white star shows the best-fit point

Note that for all annihilation channels, the annihilation cross-section is proportional to the square of the relative velocity of DM particles in the direction perpendicular to the momentum transfer, i.e. \(\langle \sigma v \rangle \varpropto v_{\perp }^2\). This means that annihilation is velocity suppressed, especially in the late Universe where \(v_{\perp } \sim 0\). As annihilation processes are also suppressed by the square of the DM fraction f, indirect detection signals therefore do not contribute significantly to the likelihood function. The velocity dependence of the cross-section is fully taken into account by micrOMEGAs in computing the relic density, however, so the thermally averaged value at freezeout is much larger than the late-time value. We show the thermally-averaged value at freezeout in Fig. 5, which, as expected, overlaps the canonical thermal value \(\langle \sigma v \rangle = 3 \times 10^{-26} \text {cm}^3\text {s}^{-1}\). For comparison, in grey contours we also plot \(f^2(\sigma v)_{\mathrm{v \rightarrow 0}}\), the effective cross-section for indirect detection. In this case, all parameter combinations give cross-sections several orders of magnitude below the canonical thermal value, heavily suppressing all possible indirect detection signals.
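The size of the p-wave suppression can be estimated with a back-of-the-envelope calculation. The numbers below are our own rough assumptions, not values from the scan: a Maxwell–Boltzmann \(\langle v^2\rangle \approx 6/x_f\) at freezeout with \(x_f \approx 20\), and \(v \approx 10^{-3}c\) in the Milky Way halo today:

```python
# p-wave annihilation: sigma*v ~ v_perp^2, so the late-time rate is suppressed
# relative to freezeout by the ratio of typical v^2 at each epoch.
X_F = 20.0                 # m_chi / T at freezeout (typical WIMP value, assumed)
V2_FREEZEOUT = 6.0 / X_F   # thermal <v_rel^2> for a Maxwell-Boltzmann gas
V2_TODAY = (1.0e-3) ** 2   # (v/c)^2 in the halo today (assumed)

suppression = V2_TODAY / V2_FREEZEOUT  # several orders of magnitude below 1
```

Even before the additional \(f^2\) rescaling, this suppression of roughly six orders of magnitude explains why indirect detection contributes so little to the likelihood here.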

Fig. 5

Profile likelihoods for the effective annihilation cross-section as a function of dark matter mass, at different epochs in the evolution of the Universe. The dashed orange line signifies the canonical thermal cross-section \(\langle \sigma v \rangle = 3 \times 10^{-26} \text {cm}^3\text {s}^{-1}\). White contours with coloured shading correspond to the thermal average cross-section at dark matter freezeout. The contours show the 1 and \(2\sigma \) confidence regions, and the white star the best-fit point. For comparison, in grey contours we show the 1 and \(2\sigma \) confidence regions for the annihilation cross-section in the \(v\rightarrow 0\) limit, rescaled by the square of the fraction of the predicted relic abundance, \(f\equiv \varOmega _\chi /\varOmega _{\mathrm {DM}}\). This is the effective annihilation cross-section that enters indirect detection rates in the late Universe

If we wish to remove the model from GAMBIT, we simply run the command

[code listing omitted]

and the GAMBIT source reverts to its state prior to the addition of the model.

5 Summary

The standard chain for a theorist to test a BSM theory against data has been greatly optimised, and largely automated, in recent years with the development of Lagrangian-level tools such as FeynRules and SARAH. On the phenomenological side, GAMBIT has been designed as a modular global fitting suite for extensive studies of BSM physics. GUM adds the final major missing piece to the automation procedure. By providing an interface between GAMBIT, SARAH and FeynRules, it makes global fits directly from Lagrangians possible for the first time. This will make the process of performing statistically rigorous and comprehensive phenomenological studies far easier than in the past.

We have shown that GUM produces sensible results for a simplified model, in good agreement with previous results found in the literature. This is based on a scan that can be performed on a personal computer in a reasonable time frame.

The modular nature of GUM means that extension is straightforward. Since the first version of this paper appeared, GUM has already been extended in GAMBIT 2.1 to include a four-fermion EFT plugin connecting FeynRules and CalcHEP [16]. Other extensions planned include computation of modifications to SM precision observables and decays, multi-component and co-annihilating dark matter models, and interfacing to the GAMBIT flavour physics module FlavBit via FlavorKit. We also plan to add new interfaces to public codes not included in this release, including to spectrum generators and decay calculators associated with FlexibleSUSY, and to the dark matter package MadDM. We will also update supported backends to the latest versions, in particular micrOMEGAs 5 [89], HiggsBounds 5 [109], HiggsSignals 2 [110] and Pythia 8.3.