Active learning BSM parameter spaces

Active learning (AL) offers attractive features for parameter scans of new physics models. We show on a variety of models that AL scans bring large efficiency gains to the traditionally tedious work of finding the boundaries of allowed regions in BSM models. In the MSSM, this approach produces more accurate bounds. Building on our prior publication, we further refine the exploration of the parameter space of the SMSQQ model, and update the maximum mass of a dark matter singlet to 48.4 TeV. Finally, we show that this technique is especially useful in more complex models like the MDGSSM.


Introduction
The Standard Model (SM) is a remarkably economical theory, making many predictions based on only a few parameters. In principle all of the parameters of the model can be fixed by observations, and then many more observables can be predicted. Once we add new particles and interactions to the SM (the bread and butter of many theoretical particle physicists) we necessarily add many new parameters. It is then in general a very difficult task to fix as many of these as possible from existing observations and then make predictions for new ones. In many cases these observations have not yet been made; for example, we usually cannot fix the mass of a particle we have not yet observed. In others, the relationship with the observable is complicated; for example the Higgs mass in supersymmetric models, which is highly sensitive to loop corrections (see e.g. [1]). Therefore the first task with a new model is generally to explore the parameter space for plausible allowed regions according to some basic list of observations, before a more detailed examination can be made.
There now exists a whole chain of tools for computing the properties of general new theories. Most relevant for this work are SARAH [2][3][4][5][6][7][8], SPheno [9,10] and MicrOmegas [11,12], but there exist many others, such as HiggsSignals [13,14], HiggsBounds [15,16], Vevacious/Vevacious++ [17] and FlexibleSUSY [18,19]. Thanks to the SUSY Les Houches Accords [20,21], the output of these codes is standardised and can be passed from one to the other in the form of text files. However, the actual task of doing this job is generally left to the user, and of course there is then the task of choosing the strategy for exploring the parameter space of models.
Almost universally the community has adopted the practice of computing a likelihood function based on the desired set of observables and using techniques developed in other fields to sample the parameter space with a point density proportional to this likelihood. In its simplest form, this involves assigning a Gaussian or log likelihood to each observable, with a given mean and variance that may be related to experimental measurements and uncertainties (e.g. when selecting for observables such as ∆ρ), or that just reflect the uncertainty of the tools (e.g. taking an uncertainty on the Higgs mass of the order of a few GeV). These likelihoods are generally combined assuming no correlations. The search strategy is then often a Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm; other groups use more efficient versions based e.g. on MultiNest [22].
GAMBIT [23,24], especially with the advent of the GAMBIT Universal Machine [25], attempts to solve this problem for the user by creating backends for common tools and taking over the task of interfacing between them. Likelihoods are computed by its included tools and combined across all observables; the user then has the option of certain included search strategies [26]. This is an admirable and powerful tool, best suited for large-scale scans on a computing cluster.
One problem with the use of likelihood functions is that for many use cases in High Energy Physics (HEP) they vary sharply around a small or narrow region in high-dimensional parameter spaces. While in principle they are continuous functions and therefore contain useful information when exploring regions away from the experimentally allowed region, allowing a smart algorithm to guide itself towards more likely points, in practice, and especially when combining many unrelated observables, they are effectively step or delta functions. This was noted in [27], where a different approach, named "Machine Learning Scan," was proposed and later implemented in the tool xBit [28]. This involves training a neural network directly on observables rather than on a likelihood function; the network is then used to select points that should lie in the "good" region. The user can then reasonably rapidly find many points in the allowed region, again with sample density proportional to the likelihood in that region, reproducing the results of MCMC-based strategies after sampling fewer points.
In this work we are interested in a different goal, namely for users who merely want to find, as a first step, the boundaries of the allowed region. Indeed, for many phenomenologists exploring a model for the first time, the most likely point or region is profoundly uninteresting; for many models, points become more likely as we push new physics effects to large masses or very weak couplings. As a more pertinent example, in our previous work [8] we were interested in finding the heaviest possible dark matter mass for a given model; this is a long way away from the most likely region, and we were interested there in exploring the decision boundary itself. There we adapted a simple MCMC algorithm by biasing the likelihood function to favour heavier masses. Here we shall revisit that computation and show more clearly the effect of such a bias on the sampling distribution. However, the main result of this work is a new algorithm using active learning to train a neural network discriminator and choose points near the decision boundary, so as to best explore the limits of the allowed parameter space of a model.
Active Learning (AL) (for a useful review see [29]) is the general name for machine learning where the algorithm chooses its own inputs. This means finding a measure of the uncertainty of the algorithm about its prediction for any given inputs, and then choosing points where the uncertainty is greatest. Since a neural network used as a predictor does not have a natural definition for this uncertainty, this is generally applied to approaches such as random forest classifiers, where the decision is made based on the results of a set of models, and the uncertainty can then be related to the differing predictions within the set. This has recently been applied in the HEP context [30,31].
Here we note that a neural network classifier does have a natural notion of uncertainty, and we propose an algorithm that uses this to efficiently select points in high-dimensional parameter spaces. Since neural networks are much more sophisticated than random forests, with unlimited potential for generalisation, for more complicated parameter spaces this should lead to more efficient sampling and discovery and, despite the extra overhead in training neural networks, save time and give a more accurate description of the allowed range of a model. Indeed, once the model has been trained through active learning, the discriminator can then be used to describe the parameter space.
This paper is organised as follows. In section 2 we describe our algorithm and setup. We then apply AL scans to several models of increasing complexity: first, simple toy models (section 3), then the CMSSM (section 4), the SMSQQ [8] (section 5), and finally the MDGSSM (section 6). We also compare the performance to standard MCMC scans, and to vanilla neural networks and random forest classifiers (RFCs). We conclude with a brief discussion of the possible use cases of AL scans, and perspectives for future work. The code for this work is implemented in a general framework and will be released publicly along with an upcoming publication.

Active learning with a neural network
In this section we describe our AL algorithm to train a neural network. We will assume the reader is familiar with neural networks and deep learning; they are becoming ubiquitous in science, see e.g. [32][33][34] for recent HEP theory applications.
Our approach is based on data that can be simply categorised as "good" or "bad." This decision is made for a given point by an "oracle," which in a simple model could just be a formula, whereas in HEP applications it will be based on ranges of observables computed using the given tools. In traditional likelihood-based approaches, lying somewhat outside the allowed range for one observable can be compensated by being very likely in other observables; indeed this has a reasonable justification, in that if we sample hundreds of observables randomly then we should expect a few to fall outside their "allowed" ranges. In our approach it is for the user to decide what "allowed" or interesting range they want to investigate, and this gives added flexibility without introducing biases.
Our first step is to create an initial dataset, typically consisting of random points, and query the oracle for the result for each. We then feed these points and the oracle results to a neural network and let it train on them. We describe the parameters of the networks used in the subsequent sections, but they all consist of an input layer feeding into a series of 2 to 5 hidden layers of large (order 100) size connected by ReLU activation functions, and a final output layer with one neuron whose output is mapped through a sigmoid function. The implementation is done in PyTorch and we use the default weight initialisation.
During training the neural network tries to minimise a loss function; we use the Binary Cross Entropy,

L = − (1/N) Σ_i [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ],

where y_i is the outcome of the data and ŷ_i the outcome fitted by the neural network. In our case ŷ_i is a number between 0 and 1 (hence the sigmoid function on the output of the final layer), while y_i is either 0 or 1 depending on whether a point is bad or good. This loss function is minimised by changing the weights of the network, which in turn changes ŷ_i. After initial training, at each further step in the scanning/training cycle, our aim will be to choose K points to pass to the oracle, from L proposed to the discriminator. The results of the oracle's evaluation of the K points are then used to further train the discriminator. However, we have several choices about both how to propose the L points, and then how to select the batch of K. An initial strategy would be to choose the L points completely randomly (similar to the MLS approach [27]); this would help to randomly discover interesting regions, but is very inefficient in high-dimensional parameter spaces. Indeed, if we have n parameters and an interesting region that is a hypercube with side 0.1 of the parameter range, then a random scan would require O(10^n) points to find it; if L = 100000 and n ≥ 6, then subsequent passes would be unlikely to pick a point near the region. As an alternative we can choose points "near" good points that we have found, somewhat like the jumps during an MCMC; however, in that case we do not want to spend time in uninteresting regions away from the boundary near many good points, if the "good" region is large. Therefore we adopt a hybrid approach of choosing 10% of the L points purely randomly, and the remaining 90% from the vicinity of good points.
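As a rough illustration, this hybrid proposal step might look as follows in pure Python. The function name, the clipping to the unit hypercube and the Gaussian jump distribution are our assumptions for the sketch, not specified above:

```python
import random

def propose_points(n_params, L, good_points, jump_scale=0.05, random_frac=0.10):
    """Propose L candidates in the rescaled unit hypercube: a fraction
    purely at random, the rest as jumps around known good points.
    Falls back to fully random proposals if no good points exist yet."""
    n_random = L if not good_points else int(random_frac * L)
    proposals = [[random.random() for _ in range(n_params)]
                 for _ in range(n_random)]
    for _ in range(L - n_random):
        centre = random.choice(good_points)
        # Gaussian jump around a good point, clipped to [0, 1]
        proposals.append([min(1.0, max(0.0, x + random.gauss(0.0, jump_scale)))
                          for x in centre])
    return proposals
```

The clipping keeps jumps inside the rescaled parameter ranges; in practice one could instead redraw out-of-range proposals.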
Furthermore, if the discriminator is not providing useful information then it is hardly worthwhile choosing points near its decision boundary. Hence a further proportion p of the K points in the batch is selected randomly or within jumps of good points. We choose p based on the training score of the classifier: if q is the proportion of points incorrectly classified after the last training, then p = 2 × min(q, 0.5).
In other words, if the discriminator is no better than a coin toss then we need to propose entirely random points.
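In code, this choice of p is a one-liner (a sketch; the function name is ours):

```python
def random_fraction(q):
    """Fraction of the next batch chosen without consulting the
    discriminator, given the misclassification rate q from the last
    training round: p = 2 * min(q, 0.5)."""
    return 2.0 * min(q, 0.5)
```

A perfect classifier (q = 0) yields a fully discriminator-driven batch, while a coin-toss classifier (q = 0.5) yields a fully random one.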
In order to select the remaining (1 − p)K points of the batch we also need a prescription for scoring the candidates. First we assign an uncertainty score s_i to each point,

s_i = ŷ_i (1 − ŷ_i).

This score peaks at ŷ_i = 0.5, where it reaches 0.25, and falls off to 0 at ŷ_i = 0 and ŷ_i = 1.
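A minimal sketch of such an uncertainty score, with the stated properties (maximal at 0.5, vanishing for confident predictions):

```python
def uncertainty(y_hat):
    """Uncertainty score s = y_hat * (1 - y_hat): maximal (0.25) for a
    discriminator output of 0.5, zero for confident outputs 0 or 1."""
    return y_hat * (1.0 - y_hat)
```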
It is essentially equivalent to the Binary Cross Entropy above for one variable, but simpler to calculate. An initial choice would be to select the (1 − p)K points which score highest. However, this could lead to clustered points and insufficient diversity. This is a classic issue in AL, and is solved by introducing a diversity/distance measure [35] to quantify the distance between points in feature space. In random forest approaches, naively the entropy of the predictions on the sample can be maximised, or the Kullback-Leibler (KL) divergence can be used. In our case this is not possible, but we can instead use a score based on the physical distance in parameter space! We experimented with using a positive score for larger distances, but found that this simply drove the sampling to the boundaries, so instead we introduce a distance measure based on an electrostatic repulsion between points. If x_i, x_j are the vectors representing the input parameters for two points, and d² = |x_i − x_j|², then the "repulsion" is given by

r(x_i, x_j) = 1 / (d² + a).

Here a is a small regulating constant that we take to be 0.0001; of course, this requires that we have rescaled all of our input parameters to the range [0, 1]. We then start with the point that has the highest score s_i, remove it from the pool P and place it in the selected set. Then we iteratively add points until we have selected (1 − p)K points, as follows:

1. For each point x_j ∈ P, compute the total repulsion r_j ≡ Σ_i r(x_i, x_j), summing over the points x_i in the selected set.

2. Compute the maximum total repulsion r_max = max({r_j}) and the standard deviation σ of the uncertainty scores {s_i}.
3. Assign to each point a score

S_j = s_j − α σ r_j / r_max ,

and add the point with the highest score S_j to the selected set, removing it from the pool P.
Note that at each step it is not necessary to recompute the whole sum r_j: by storing the old totals we just need to add the repulsion from the last point added to our selected set. The parameter α is a diversity weighting that can be adjusted depending on the scan: if we think that the sample contains only one small interesting region we may wish to set α small. By normalising with the standard deviation and r_max we ensure that the relative weight of the diversity measure and the neural network uncertainty depends only on α.
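Putting the pieces together, the greedy selection loop can be sketched as below. The exact combination of uncertainty and repulsion, S_j = s_j − α σ r_j / r_max, is our reading of the prescription above, and the incremental update of the totals r_j follows the remark about not recomputing the whole sum:

```python
import statistics

def repulsion(xi, xj, a=1e-4):
    """Electrostatic-style repulsion between two rescaled points;
    the small constant a regulates coincident points."""
    d2 = sum((u - v) ** 2 for u, v in zip(xi, xj))
    return 1.0 / (d2 + a)

def select_batch(points, scores, n_select, alpha=1.0):
    """Greedy diverse selection: start from the most uncertain point,
    then repeatedly add the point maximising S_j = s_j - alpha*sigma*r_j/r_max."""
    sigma = statistics.pstdev(scores)
    pool = list(range(len(points)))
    first = max(pool, key=lambda j: scores[j])
    selected = [first]
    pool.remove(first)
    # running total repulsion of each pool point from the selected set,
    # updated incrementally instead of being recomputed from scratch
    r = {j: repulsion(points[first], points[j]) for j in pool}
    while pool and len(selected) < n_select:
        r_max = max(r[j] for j in pool)
        best = max(pool, key=lambda j: scores[j] - alpha * sigma * r[j] / r_max)
        selected.append(best)
        pool.remove(best)
        del r[best]
        for j in pool:
            r[j] += repulsion(points[best], points[j])
    return selected
```

With a sizeable α, a candidate sitting right next to an already-selected point is penalised even if its own uncertainty is high, which is exactly the clustering problem this measure addresses.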
Once we have our set of K points, we pass these to the oracle, and then train the discriminator on the outputs over a given number of epochs. We also have a choice as to whether to train the discriminator on the whole dataset or just on the new points. The cost of training on the full dataset is time, especially once a large number of points have accrued; however, training only on new data, especially when those points are chosen to be near the boundary, can be deleterious. Hence we train on the full dataset every fixed number of iterations of this procedure.
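The overall cycle, with periodic full-dataset retraining, can be sketched as a skeleton; the function arguments are stand-ins for the oracle, proposal, selection and training components described above, not our actual implementation:

```python
def scan_loop(train, oracle, propose, select, n_iter, full_every=5):
    """Skeleton of the scan/train cycle: each iteration labels a new
    batch and trains on it; every full_every-th iteration retrains on
    the accumulated dataset to counteract boundary-biased new data."""
    dataset = []
    for it in range(n_iter):
        batch = select(propose())
        labelled = [(x, oracle(x)) for x in batch]
        dataset.extend(labelled)
        if (it + 1) % full_every == 0:
            train(dataset)    # periodic full-dataset retraining
        else:
            train(labelled)   # cheap update on new points only
    return dataset
```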
As a final remark, it is also necessary to balance the dataset for training the discriminator: in cases where we have many bad points and few good ones (especially initially), the discriminator can achieve a low loss by simply classifying everything as bad. Hence in our training set we make copies of the underrepresented point set, so that the discriminator is fed an equal number of good and bad points at each training epoch.
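A simple way to implement this oversampling (a sketch; the text specifies only that the underrepresented set is copied, so the deterministic truncation for the remainder is our choice):

```python
def balance(good, bad):
    """Duplicate points of the smaller class until both classes have
    equal size, so each training epoch sees as many good as bad points."""
    minority, majority = (good, bad) if len(good) <= len(bad) else (bad, good)
    if not minority:
        return good, bad  # degenerate case: one class is empty
    n = len(majority)
    upsampled = minority * (n // len(minority)) + minority[: n % len(minority)]
    return (upsampled, bad) if len(good) <= len(bad) else (good, upsampled)
```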

Active learning toy models
We begin by illustrating the principle of active learning on a variety of toy models in two dimensions. The results are depicted in figures 1 and 2. In these figures one can see a fair number of randomly distributed points, which is intentional, to make sure that no potentially interesting region is missed. Crucially, though, we see how well the algorithm figuratively zooms into the interesting regions to determine where the border between good and bad points runs.
To obtain these results, we use a scan with the settings shown in table 1. On all toy models, 20,000 points and 2,000 training steps suffice. The settings work for a variety of shapes, including irregular ones like the bean and the squiggle (see figure 1) and ones with holes or multiple pieces, like the beams and the blobs (see figure 2). The ability to correctly identify multiple regions is particularly notable, as this sets the approach somewhat apart from an MCMC scan. The known danger with the latter is that it zooms into one good region, missing other good regions in the process. This can be mitigated with additional measures, but in active learning this danger is circumvented automatically, so that parameter spaces with more than one good region are much easier to deal with.
There are a few exceptions where the settings of our scan needed to be adjusted to account for special features of a model. In the case of the donut, more initial random points were needed to identify the hole; without these random points the algorithm finds the outer border but misses the inner one. In the case of the pizza, one of the borders is not recognised without more initial random points. Nor is the straight edge of the demicircle recognised with lower numbers of random points.
For the donut and the demicircle, the algorithm also needs to train twice as often on the full dataset in order to prevent it from weighting new data too heavily. The diversity parameter α, which ensures that new points keep a certain distance from one another, is also increased for the pizza and the demicircle; this helps identify the pointy corners of these shapes. For the donut this is not necessary, as it has no pointy edges.
It is plausible, albeit far from proven, that similar settings adjustments may be necessary for models which exhibit holes or pointy edges in their parameter space.Nevertheless, the fact that we were able to correctly identify 7 out of 10 shapes with the exact same settings illustrates the versatility of active learning scans without the need for extensive finetuning.
All figures show the results after 20,000 points for the sake of having a nice plot; however, the algorithm accurately identified all borders between good and bad points already after 10,000 points. It is quite plausible that an interpolation of a grid or random scan with this number of points could have reached a similar accuracy. Unless one can press this interpolation into an analytical form, however, the knowledge of the boundaries will be of limited use. In contrast, the active learning scan (or any scan with a neural network, for that matter) returns a model in addition to a list of points. This allows us to do two additional things. First, we can query the model about another point, even if we have already run the scan and the point was not included in it. Second, we can refine the model by training it on more points if we so desire. Neither would be as straightforward with an interpolation.

Table 1: Network settings for the toy models. Note that only 3 out of 10 models need additional finetuning, which speaks for the flexibility of our classifier.
The active learning approach trumps other approaches employing neural networks in the sense that it finds points itself, unlike, for example, a vanilla neural network. In addition, it automatically finds interesting points, i.e. points close to the boundaries. This puts it in a similar category to a GAN [36,37], where two neural networks try to fool one another with increasingly interesting points. The advantage of our approach, however, is its relative simplicity. With active learning, we only have one model's hyperparameters to tune (finetuning the two competing networks of a GAN is notoriously difficult [38,39]) and we need to train less, meaning that we potentially use less compute. And despite this simplicity, we find compelling accuracies even in higher-dimensional models, as can be seen in sections 5 and 6.

Active learning in the CMSSM
Having tested our algorithm on toy oracles, we will now apply it to a simple physics example: the valid dark matter parameter space of the constrained MSSM. This is a scenario where the masses and gauge couplings unify at the GUT scale, leaving only five free parameters, m_0, m_1/2, A_0, tan β and sign(µ); it is therefore used as a standard theoretical example, especially for exploration of the valid dark matter parameter space, e.g. [40][41][42]. To make the problem resemble the toy models we only consider the m_0/m_1/2 plane, fixing tan β = 10, A_0 = 0 and sign(µ) = 1. We scan over m_0, m_1/2 ∈ [100 GeV, 2000 GeV] for both an example MCMC scan and our active learning algorithm (using the same SARAH and MicrOmegas codes for both to give a fair comparison). This is the same parameter plane considered in [28] and in one example in [42]. As in the former reference we only impose a constraint on the dark matter density and ignore all other constraints. The difference between those two references lies in the latitude allowed: the latter takes Ωh² = 0.112 ± 0.012, while in the former Ωh² < 0.2 is considered acceptable. We consider points with Ωh² < 0.12 to be "good" and larger densities to be "bad." In the MCMC scan we use a log likelihood for Ωh² with mean 0.112 and variance 0.05; this is peaked around the decision boundary and therefore provides a simple alternative to our AL procedure for finding points near it. In the AL scan we classify valid points as those with Ωh² < 0.12. We use L = 10000, K = 100 and a variance of 200 in the steps around good points.
With these constraints, the parameter plane is not especially interesting: there is an acceptable region at very small m_1/2, which eventually leads to a region at m_0 > 1250 GeV for m_1/2 up to 400 GeV where no points are generated by the spectrum generator, because there is no electroweak symmetry breaking. On the left side of the plane, at small values of m_0 and large m_1/2, there is a coannihilation region, but in reference [42] (and as we find with our constraint on Ωh²) this disappears into a region running roughly from (m_1/2, m_0) = (100, 540) GeV up to (350, 1500) GeV where the LSP is a charged slepton.
Hence in our training we ignore the constraint on the charged LSP and just take the dark matter density from MicrOmegas (which gives the density of the neutralino), overlaying the unphysical region on the plot afterwards. Since the points are not especially physical and the idea is to compare strategies against the results of the previous references, this can be regarded as a toy model.
The results can be seen in figure 3. In the MCMC scan, a large number of points end up in the area where m_0 ∈ [100, 500] and m_1/2 ∈ [750, 2000]. This is inefficient, both because there are lots of would-be good points in that area and because the rest of the parameter space remains relatively unexplored in consequence. In contrast, the AL scan clearly favours the regions on the border between good and bad, while nonetheless exploring the rest of the parameter space in sufficient detail. The comparatively less explored bare band roughly 10% away from the border region is an artifact of the diversity measure, which penalises points for being too close to those already chosen.
In these figures we have also shown a discriminator line for both scans. This was obtained by retraining a neural network, with the same settings as the original discriminator from the AL scan, on either set of points. To make the two lines comparable, the line for the AL scan was produced not with the original discriminator but with the one retrained on the whole set of points. Such a retraining of networks can in general lead to better discriminator performance, because the gradual introduction of more interesting points, i.e. points near the boundaries, can lead to distortions of the sort where the first, less interesting points have a larger impact on the network than the points which follow. With the points distributed as they are, it is not surprising that the line from the AL points traces the boundary between good and bad points more accurately. If we define "interesting" points to be those near the decision boundary, i.e. those with m_1/2 < 250 GeV, or all values to the left of a line from (100, 250) GeV to (350, 1500) GeV, then the active learning scan delivered 42% of its points in this region, compared to only 32% for the MCMC.
In conclusion, even in a relatively simple and low-dimensional model like the CMSSM, the AL scan produces a superior choice of points to the MCMC scan. This has the advantage that one could use fewer points to find the boundaries, and we expect that a network trained on a set of points deliberately selected to locate the boundaries should have superior performance there; this can clearly be seen from the two plots. The cost, however, is that training the discriminator after each new set of points takes a certain amount of time. For such a simple scenario, where we only take into account the dark matter density, there is no gain from using deep learning. However, if we were to include more constraints, especially collider constraints, we would expect the time spent training networks to select points to be worth it.
Parameter space of the SMSQQ model

Highest singlet mass
We now move on to explore the SMSQQ, a model with colourful mediators, with an active learning scan. This model is particularly useful to illustrate how colourful unitarity bounds contribute to a mass limit for dark matter. In our previous paper [8] we used an iterative approach with several MCMC scans, narrowing in on the region we were interested in. The likelihood function used for the MCMC was artificially weighted in a way that forced it to prioritise points with a higher mass. In this way we were able to establish an upper bound on the singlet mass.
In this work, we further refined the search for a highest singlet mass by choosing new, updated ranges.The old and new ranges can be seen in table 4. The results are shown in table 2. One can see that by limiting Λ to a high but valid range, we were able to gain almost 1 TeV on the highest singlet mass.In this table, we also show the points with the highest singlet mass in an AL and an MCMC scan when we only generated 200k points and used looser ranges.One can see that these two masses are very close together, the one produced by the AL scan even being slightly higher than the one from the MCMC.This is remarkable because, unlike the MCMC, the AL scan is by no means skewed towards higher masses.

Performance of the AL scan
We then move to the AL and MCMC scans with 200k points each to explore their properties. The MCMC and AL scans have the ranges provided in table 4, and the settings of the AL scan can be found in table 3. We find that an AL scan with many points is not completely straightforward, though. Whether a network can handle a load of points depends at least in part on its number of parameters, i.e. its size; other factors will be discussed in section 5.4. An additional difficulty is the relative scarcity of good points in the ranges we want to explore: in a random scan with these ranges, only around 3 in 10,000 points are good. While an MCMC would be able to find at least one good region somewhere despite this scarcity, the AL scan would not be able to operate on so few points, because it does not (as of now) work with gradients of any kind. We therefore start with a smaller AL scan of 50k points on a range which we suspect contains more good points on average (see table 4 for these ranges). We then feed these 50k points, of which some 23 percent are good, to a larger network. This network then generates the remaining 150k points based on the lessons it has already drawn from training on the first 50k points. These remaining points are on the loose ranges that the MCMC scan also works on.

Table 4: Variable ranges for the main AL scans in the SMSQQ and the scans to obtain the maximum mass of the singlet. For the new max m_S scan, a tightening of Λ proved very useful. Tighter ranges were used for the smaller main scan to ensure that good points are found. In the larger scan this was not necessary, because we were able to generate good points in the vicinity of those we had already found in the previous scan.
The difference between the first 10k and last 10k points of the AL scan can be seen in figure 4. The first feature that meets the eye is that the model really starts exploring the parameter space, and gets increasingly confident about areas it barely covered in the first 10k points. This is especially apparent in the κ − m_S plane. From the summary plots in the last row of the figure it is evident how much exploring the parameter space also helps find a point with a higher singlet mass. We have cross-checked this phenomenon with AL scans of the same model which differed from the one shown here in various settings or point numbers.
Figure 5 is similar to the last row of figure 4, except that now the average discriminator value is shown rather than the point density. One can see that there are regions which the first 10k points already investigated sufficiently (in red), and regions which the last 10k points investigated and about which the discriminator is quite confident (in blue). Grey regions are regions where the discriminator values of the first 10k points and the last 10k points are roughly equal, or where only one of the two exists and is close to zero. Note that the random points outside the region are grey with a blue border; this, however, is an artifact of the grid interpolation, as can be verified by checking the top rows of figure 4: outside the region with good points, the discriminator is always close to zero.
Figure 6 shows the discriminator values for all of the 200k points. One can clearly see that the discriminator only returns values significantly larger than zero inside the region with good points, with the exception of a few outliers around the fuzzy corners of these regions, and a single outlier at a high singlet mass. Two substructures in these distributions are an interesting byproduct. First, the values the discriminator returns around the fuzzy edges tend to be a bit blotchy. This could be due to the fact that a portion of points is sampled in the vicinity of already-known good points; around the fuzzy edges there is a significant proportion of bad points and an uneven distribution of good points, which exacerbates the effect of sampling around good points. Second, there are almost straight lines of points that the discriminator deems good, often close to a hard border. This is in part due to the fact that more points are sampled close to borders. It turns out, however, that these are areas where there are indeed fewer bad points. We have verified that such a stark dark discriminator line always appears in regions close to a constraint on the dark matter density [43]. It is therefore conceivable that the discriminator zeroes in on dark matter constraints particularly well, thus reducing the need to generate and evaluate bad points in those areas. This is rendered even more plausible by the fact that dark matter constraints tend to be rather abrupt in comparison to those imposed by unitarity [8,[44][45][46][47][48][49][50][51][52][53][54][55][56][57][58] or vacuum stability [59][60][61][62][63][64][65]. Gradient-agnostic discriminators like the one we are dealing with here are particularly well-suited to handling such constraints, unlike MCMCs. However, the implementation of a gradient of sorts into the AL scan, so that it handles less abrupt constraints similarly well, is left for future work.
Figure 7 shows the distribution of good and bad points, and the contour line where the discriminator places the border between good and bad points. Comparing this to the previous plots, one can see that the discriminator does a good job of identifying the borderline of good points. Marked in blue are points where the discriminator returned a value around 0.5, i.e. where it was not sure whether the point was good or bad. These points are not right on the borderline, as they are in our two-dimensional toy models (see figures 1 and 2 for comparison); this is mostly due to the higher dimensionality of this model. It is noteworthy as well that there is no excess of blue points in the regions where the discriminator showed a large confidence in good points, as shown in figure 6. There are some, but especially in the κ − m_S plane one can see that their number is not as large as we would expect had the discriminator not been so confident around the hard upper edge of the good region. One can also see that the point with maximum singlet mass lies right on the borderline, as expected.

AL vs. MCMC
Figure 8 shows a comparison of the points explored by the AL scan versus those of the MCMC. The first thing to catch the eye is how much larger the region covered by the MCMC is. This does not speak in favor of the MCMC, however. While it finds 283 good points out of 200k in total (versus 70 out of 200k for a random scan), the AL finds 85,291 good points. Not only is this an enormous difference in numbers; as we established earlier, the good points that the AL scan finds tend to be of higher quality, because they are more often than not close to the borders of the good regions and thus help in finding boundaries. On the other hand, the AL scan failed to find a region of good points at κ > 140,000 GeV and m_S < 30,000 GeV. This failure might have been mitigated by adding more random points to explore the rest of the parameter space, increasing the diversity measure, or generating more initial random points. Nevertheless, this shows that AL scans can be quite sensitive to tuning parameters. We deem it feasible to build a similar scan which tunes the relevant parameters automatically; however, we leave this for future work.
On the whole, and despite the one unidentified region, the advantages of AL scans are quite apparent from this figure. In higher-dimensional parameter spaces where good points are scarce, boundaries are difficult to predict, and points take a long time to evaluate with established HEP tools, AL scans offer a much more efficient way to explore parameter spaces.
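Schematically, the AL procedure discussed here, seed points followed by repeated rounds of selecting the K most uncertain out of L cheap candidates plus a few purely random points for diversity, can be sketched as follows. This is a minimal toy version with an invented oracle and made-up settings, not the actual scan code used in this work.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hypothetical oracle standing in for the HEP tool chain: expensive in
# practice, so we want to call it on as few points as possible.
def oracle(points):
    return (np.linalg.norm(points, axis=1) < 0.7).astype(int)

def al_scan(n_rounds=15, L=2000, K=200, n_random=20, dim=2):
    """Toy AL loop: from L cheap candidate points, keep the K points the
    discriminator is least sure about, plus a few purely random points so
    the scan keeps exploring the whole space (the diversity measure)."""
    X = rng.uniform(-1, 1, size=(K, dim))  # initial random points
    y = oracle(X)
    disc = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                         random_state=0)
    for _ in range(n_rounds):
        disc.fit(X, y)
        cand = rng.uniform(-1, 1, size=(L, dim))
        p = disc.predict_proba(cand)[:, 1]
        # Most uncertain = prediction closest to 0.5, i.e. near the border.
        uncertain = cand[np.argsort(np.abs(p - 0.5))[:K]]
        explore = rng.uniform(-1, 1, size=(n_random, dim))
        new = np.vstack([uncertain, explore])
        X = np.vstack([X, new])
        y = np.concatenate([y, oracle(new)])
    disc.fit(X, y)  # final training pass on the full dataset
    return disc, X, y
```

Because the evaluated points cluster around the decision boundary, the oracle budget is spent exactly where the border between good and bad regions is learned.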

AL vs. other networks
In a final step, we benchmark how well our AL models do against other networks in the SMSQQ. We set out by generating two new AL scans with 200k points each, this time stopping the training only after the error rate has dropped below 0.5 percent (compared to 5 percent earlier). We then train two neural networks (NNs) and a random forest classifier (RFC) with exactly the same settings on these points and on a set of random points. We also try to find the number of points after which the error rate falls below 20 percent; however, this threshold is already reached after 50k points because the network is larger than the one used to generate these 50k initial training points. It is for this reason that the neural network and the active learning scan, trained on 50k points, do identically well (see table 5). We also see that the error rates of the RFC, kept at its original setting of 150 estimators, are vastly worse than those of the AL and the NN. It is worth mentioning that the NN fails to train on 200k points, regardless of whether they are random or AL-generated. This cannot be solely because of the number of points, as the AL trains just fine on 200k points. Rather, it fails either because the network parameters are not ideal for receiving all points at once, or because being presented with all these points at once is fundamentally too much for a network of this size. As is often the case in this area of research, it is difficult to prove one or the other. The AL network does fine, however, by receiving the points in a piecemeal fashion, which gives it the chance to adjust its weights gradually.
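The benchmarking procedure, training an RFC with 150 estimators and an NN on the same points and quoting error rates on a test set rebalanced to equal numbers of good and bad points, can be sketched along these lines. The oracle and dataset sizes here are toy stand-ins; the real classifiers, settings, and datasets differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def oracle(points):
    # Toy stand-in for the expensive good/bad evaluation.
    return (np.linalg.norm(points, axis=1) < 0.7).astype(int)

def rebalance(X, y):
    """Subsample the test set to equal numbers of good and bad points,
    as done before quoting the error rates in the tables."""
    good, bad = np.where(y == 1)[0], np.where(y == 0)[0]
    n = min(len(good), len(bad))
    idx = np.concatenate([rng.choice(good, n, replace=False),
                          rng.choice(bad, n, replace=False)])
    return X[idx], y[idx]

# Train both classifiers on the same points.
Xtr = rng.uniform(-1, 1, size=(5000, 2))
ytr = oracle(Xtr)
rfc = RandomForestClassifier(n_estimators=150, random_state=0).fit(Xtr, ytr)
nn = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                   random_state=0).fit(Xtr, ytr)

# Evaluate on a rebalanced, independent test set.
Xraw = rng.uniform(-1, 1, size=(5000, 2))
Xte, yte = rebalance(Xraw, oracle(Xraw))
err_rfc = 1 - rfc.score(Xte, yte)
err_nn = 1 - nn.score(Xte, yte)
```

Rebalancing matters because good points are rare: on the raw test set, a classifier that labels everything "bad" already scores deceptively well.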
As expected, the NN trained on 50k random points does consistently worse than the one trained on 50k AL-generated points, because the quality of AL-generated points is higher in the sense that more of them lie in areas where there is a lot to learn. There is no difference between the AL trained on 50k and on 200k points when tested only on random points; differences appear when testing with another set of AL-generated points (which are presumably more interesting). This difference further manifests itself when testing only on those points which are AL-generated and which an RFC, trained on AL-generated points, misclassified (these points are potentially even more interesting). Keeping this in mind, we make two more AL scans of 50k points with a smaller network, with settings like those of the previous AL 50k scan (see table 3). We find that the error rate of the AL network drops below 20 percent after about 13k points when tested on a set of 200k random points. As before, the NN fails to train with this number of points. Unsurprisingly, the NN trained on 13k random points does fairly poorly, especially on interesting points. Surprisingly, though, the NN trained on 13k AL-generated points does better than the AL model, particularly on interesting points. This might be due to the fact that there are no initial points in the AL, meaning that the first round of K points might receive an overly large amount of attention. There is a fairly easy fix for this, though: adding a sufficiently large number of random points as an initial training set to the AL should do the trick, as was necessary for some of the toy models in section 3.
It is somewhat surprising that the RFC does comparably well when it is trained and tested on random points. This seeming superiority vanishes quickly, however, when it is tested on more interesting points. As before, the performance of the AL improves with more points. This improvement is not uniform, however: the AL trained on 50k points actually does worse than the one trained on 13k points when tested on random points. It is better, however, at judging more interesting points.
Overall, this benchmarking effort demonstrates two advantages that generalise beyond the confines of this particular model. First, AL scans are very good at finding interesting points and at scoring well when tested on similarly interesting points. This makes them good at finding boundaries in new models. Second, because AL scans feed points to their network in a piecemeal fashion, we can get away with smaller networks than we could if we somehow found interesting points and subsequently trained a network on them. These two advantages make AL scans particularly useful for working on new, complex, and higher-dimensional models in a cost- and compute-efficient way. Nevertheless, there is one obvious limitation to AL scans: many parameters need to be tuned, which requires some know-how. As mentioned earlier, writing an autotuning AL scan is left for future work.

Table 8: Variable ranges for the two AL scans in the MDGSSM. The ranges were chosen so that they include all 10 reference points in [66].
In a final step, we apply the AL scan to the Minimal Dirac Gaugino Supersymmetric Standard Model (MDGSSM). This is a non-minimal supersymmetric scenario with many parameters at low energy. Collider constraints on strongly-coupled particles in this model were considered in [67]. Subsequently, in [66], the constraints on electroweak-charged particles were considered from both dark matter and collider searches. As a consequence of the constraints on colourful particles, it is prudent to consider them heavy (of order 2 to 3 TeV), where their exact values do not significantly affect the phenomenology. This leaves six interesting parameters for the low-energy theory that greatly affect the masses and phenomenology of the electroweak sector. To place constraints, it is therefore necessary to scan over these parameters. However, it was found in [66] that only a small proportion of parameter choices lead to an acceptable Higgs mass. Therefore, a random forest classifier was first trained on a random dataset and used to filter proposed points in a larger MCMC scan over the dark matter parameter space. Here we shall examine whether AL can do a better job at this task, by selecting points to train a discriminator that decides whether a given point has a Higgs mass in the range 122-128 GeV. In particular, the SARAH code for the MDGSSM produces a lot of null points, where the result is not a bad Higgs mass but rather NaN. In contrast to a continuous fit, a discriminator like the one an AL scan employs is naturally equipped to deal with such outcomes.
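The gatekeeper labeling described above is straightforward to express as a classifier target: null (NaN) outputs are simply treated as another kind of bad point, with no special handling needed. A minimal sketch follows; the function name and the treatment of `None` are our own illustrative choices, only the 122-128 GeV window comes from the text.

```python
import math

def label_point(higgs_mass, low=122.0, high=128.0):
    """Binary label for the Higgs-mass gatekeeper. Null results (NaN, or
    no result at all) are just bad points, which a discriminator handles
    naturally, unlike a continuous fit to the mass itself."""
    if higgs_mass is None or math.isnan(higgs_mass):
        return 0  # null point -> bad
    return int(low <= higgs_mass <= high)
```

A regression onto the Higgs mass would have to invent a numerical value for such null points; a classifier does not.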
We proceed in a two-step fashion again, putting a smaller network to work on 20k points, then feeding those points to a larger network and generating 100k points in total. (We generate fewer points than for the SMSQQ for the simple reason that points in the MDGSSM take longer to generate.) As before, the larger network requires a smaller learning rate; in this model, a smaller stochastic gradient descent momentum stabilizes the larger network as well. The ranges of this scan are chosen such that they include all reference points listed in the literature [66]; they are listed in table 8. Note that we need not use tighter ranges for the smaller scan in this model, because there is a sufficient number of good points in the chosen ranges (around 7 percent of randomly selected points are good). Nevertheless, the two-step approach is justified because, as we saw in section 5.4, feeding a generous number of initial training points from the first run to the larger network brings better training results.
Figure 9 shows the result of this scan in various planes. These distributions do not at all exhibit the neat, spatially secluded regions of good points we saw for the SMSQQ in the previous section. Instead, good and bad points are pretty much jumbled together, with structures visible mostly in planes containing √2 λ_T and very little structure otherwise. This makes it hard to separate good regions from bad ones in a spatial way, as illustrated by the large areas that the blue line in the plots encompasses. This is not a fault of the scan but an intrinsic feature of the model: several variables in the MDGSSM only have an indirect effect on the Higgs mass, so that their influence does not show up in a plot. Crucially, however, a neural network - or an AL scan for that matter - is still able to use these variables to estimate whether they give rise to a good or bad Higgs mass. This is demonstrated in the section below.

AL vs. other networks
In the same fashion as for the SMSQQ, we now benchmark our AL scan against various neural networks and RFCs. The results are shown in table 9. Testing regularly on a dataset of 20k random points, we find that the error rate drops below 5 percent after around 24k non-null points, i.e. after adding about 7 sets of K new points to the initial dataset of 20k points. This is a substantially better error rate than the one we obtained for the SMSQQ. We should not overinterpret this, however, because the distributions of good and bad points are vastly different from those in the SMSQQ. Note that the AL scan that was retrained on another 24k points scored slightly more than 5 percent, which is due to statistical fluctuations. This tells us that there is some uncertainty in all values shown in this table. Nevertheless, they give a good idea of the overall trends.
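The stopping criterion used here, regularly testing on a fixed set of random points and halting once the error rate drops below a target, can be sketched as follows. This toy version uses an invented oracle, plain random point additions rather than AL selection, and made-up sizes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def oracle(points):
    # Toy stand-in for the good/bad Higgs-mass criterion.
    return (np.linalg.norm(points, axis=1) < 0.7).astype(int)

# Fixed random reference set, reused for testing after every round.
Xref = rng.uniform(-1, 1, size=(4000, 2))
yref = oracle(Xref)

X = rng.uniform(-1, 1, size=(500, 2))
y = oracle(X)
disc = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                     random_state=0)

history = []  # (number of training points, error rate) after each round
for _ in range(20):
    disc.fit(X, y)
    err = 1.0 - disc.score(Xref, yref)
    history.append((len(X), err))
    if err < 0.05:  # stop once the target error rate is reached
        break
    new = rng.uniform(-1, 1, size=(500, 2))  # here: plain random additions
    X = np.vstack([X, new])
    y = np.concatenate([y, oracle(new)])
```

Keeping the reference set fixed across rounds makes the error-rate curve comparable between rounds and between scans.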
As expected, the AL trained on 58k points outperforms the one trained on only 24k points. This means that new interesting points add value to the overall performance of the network. The neural networks trained on the AL-generated points do similarly well as the AL networks. In the case of the 58k training points, the NN slightly outperforms the corresponding AL network. This highlights the importance of retraining the AL on its full dataset from time to time; this was not done for this model because it requires further finetuning of the network to ensure stability in subsequent training rounds. Regularly retraining the AL network on its full dataset is therefore, as of now, only recommended for simpler models - we did this for the toy models in section 3 but not in these more advanced ones. It is conceivable, however, that this might become more feasible with an autotuning AL scan, which we have left for future work. Unlike the results for the SMSQQ (see tables 5 and 6), a neural network trained on random points does similarly well to the one trained on AL points. Although all networks shown here handle the impact of indirectly influencing variables well, this shows a limitation of AL scans: when regions of good points are not clearly distinguishable from regions of bad points, the benefit of generating interesting points is limited. It is possible that a set of random points, fed to a sufficiently large neural network, will do a similarly good job as an AL scan in the role of a "Higgs-gatekeeper." There are two reasons why we would nevertheless advise using an AL scan. First, it is not always clear which spatially separated structures the AL might uncover, which would allow it to generate at least some interesting points. As one can see from the fact that the error rates of tests on random points and on AL-generated points differ in a statistically significant way, the AL scan has managed to produce at least somewhat interesting points even with this model. Second, as we have seen in section 5.4, AL scans might allow us to get away with smaller networks by feeding them piecemeal portions of points at a time. Both reasons imply that AL scans are a more compute- and cost-efficient procedure.
In summary, even in fairly complex models like the MDGSSM, AL scans are a cost-efficient way of producing gatekeepers, be it for the Higgs mass or for another observable. This helps us save compute resources which, in other scans, might have been wasted running very effective but nevertheless time-consuming HEP tools. Feeding the "Higgs-gatekeeper" we presented here into an MCMC or other scan is left for future work. The standout feature we see from the performance analysis on the MDGSSM is that the concept of such gatekeepers from AL scans is quite generalizable and can be used for all kinds of models.
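Feeding a gatekeeper into a later scan amounts to filtering proposals before the expensive HEP tools ever run. The following is a hypothetical sketch with a toy oracle and gatekeeper; in practice the trained AL discriminator would play the gatekeeper role, and the accepted points would go on to SPheno/MicrOmegas rather than to a toy check.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

def oracle(points):
    # Toy stand-in for the expensive HEP evaluation.
    return (np.linalg.norm(points, axis=1) < 0.7).astype(int)

# A toy gatekeeper; in practice this would be the trained AL discriminator.
Xtr = rng.uniform(-1, 1, size=(4000, 2))
gate = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                     random_state=0).fit(Xtr, oracle(Xtr))

def filtered_proposals(n, threshold=0.5):
    """Keep drawing proposals, but pass on only those the gatekeeper rates
    as likely good; only these would reach the slow HEP tools."""
    kept = []
    while len(kept) < n:
        batch = rng.uniform(-1, 1, size=(256, 2))
        p = gate.predict_proba(batch)[:, 1]
        kept.extend(batch[p > threshold])
    return np.array(kept[:n])

pts = filtered_proposals(500)
```

The threshold trades completeness for compute: raising it saves more HEP-tool calls but risks discarding borderline good points.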

Conclusions
In this work we propose a novel approach to exploring the parameter spaces of new models. We have demonstrated on a variety of different models that active learning scans provide a cost- and compute-efficient way to find boundaries and identify areas with good points. We have also shown the use of this approach even for models that do not have a spatially secluded region of good points, as is the case for the MDGSSM. In such cases, AL scans can be used to produce so-called gatekeepers, i.e. networks which are able to predict with relative certainty whether the observables resulting from a set of variables are good or bad. These can be plugged into subsequent MCMC or other scans to avoid spending unnecessary time and resources evaluating HEP tools on points that are most likely bad anyway.
In comparison to MCMC scans, AL scans do a much better job at finding and identifying regions of good points - exactly as advertised. If the initial training set is not large enough or does not cover the entire parameter space, however, there is a risk of missing potentially good regions (see section 5.3). To further ensure that all good regions are found, one can finetune AL parameters such as the diversity measure or the proportion of random points in each training set. This shows the importance (and tediousness) of finetuning at this point in time. We think that implementing an AL scan that automatically tunes its own parameters without user intervention is feasible; however, we leave this task for future work.
In comparison to RFCs, AL scans vastly outperform in terms of accuracy. Vanilla neural networks can sometimes slightly outperform AL networks when trained on the same points (see sections 5.4 and 6.2); however, such NNs cannot generate interesting points on their own. To combine the advantages of the two, one could retrain the AL network on its full dataset every so often. At this point in time, this is only feasible for small or simple models, though, because retraining on larger models requires more finetuning of the network parameters. We also find that regular NNs sometimes fail to train with a large number of training points (see section 5.4). This might be due to suboptimal settings for training with such large datasets, or it could be that the network is fundamentally not able to cope with so many points at once. If it is a matter of network parameters, an autotuning AL scan might fix this problem in the future. One last drawback of AL scans is that we need a sufficiently large number of good points in order to start training (see section 5.2). This requirement could be reduced in the future by introducing gradients into the selection of the K from L training points. This, too, is left for future work.
Finally, the code for this work will be released as part of a general framework for running simple scans, along with a future publication. At that point it would also be interesting to consider more sophisticated deep learning and AL approaches, such as those in the swyft library [68], and whether they can be used to improve the performance of the tasks that we are interested in here.

Figure 1 :
Figure 1: AL-generated points of various toy models with 20,000 points each. Note how well the discriminator (solid line) and the oracle (dotted line) match, even with pointy or irregularly shaped objects. Also note that all points the discriminator is uncertain about or misclassified lie on the oracle/discriminator lines.

Figure 2 :
Figure 2: 20,000 AL-generated points for more models. Note how the oracle and the discriminator match snugly, even around multiple interesting regions or regions with holes.

Figure 3 :
Figure 3: Point distribution in the m_0 − m_{1/2} plane of the MSSM with 20,000 points each. Top panel: AL scan; bottom panel: MCMC scan. The blue line indicates where the discriminator puts the boundary between good and bad points, as evaluated on a grid with 100 divisions in either direction. The dark green shaded area is the region with a charged LSP, which we did not take into account in the discriminator or MCMC scans.

Figure 4 :
Figure 4: Comparison of the first and last 10k points in the AL SMSQQ scan with 200k points. Left: the κ − m_S plane; right: the κ − Λ plane. The uppermost plot shows the point distribution of the first 10k points, the middle one that of the last 10k points. In these plots, the light blue star indicates the location of the point with the highest singlet mass of that subset of data, the dark blue star the location of the highest singlet mass overall. The lowest plots are a comparison and summary of the two above: in a 100-by-100 grid, in each bin the numbers of points in the first and last 10k points are subtracted from one another. The dark blue star indicates the location of the point with the highest singlet mass m_S in the whole dataset, the red one that of the first 10k points and the light blue one that of the last 10k. Note how, again, the area of interest broadens with time as the discriminator explores the borders of the regions of good points.

Figure 5 :
Figure 5: Comparison of the discriminator results in the first versus the last 10k AL-generated points. The dark blue star indicates the location of the point with the highest singlet mass m_S in the whole dataset, the red one that of the first 10k points and the light blue one that of the last 10k. Note how the discriminator explores the border as more points are introduced. In each cell of a 100-by-100 bin grid, the average discriminator values of the first 10k points and the last 10k points are taken, then subtracted from one another. Thus, red or orange indicates areas where the discriminator was confident about finding good points among the first 10k points. Dark or light blue indicates areas where it was confident about finding good points among the last 10k. Grey areas indicate that the confidence in these points is roughly equal for the first and last 10k points.

Figure 6 :
Figure 6: Distribution of the discriminator values interpolated from a 100x100 grid. The dark purple star indicates the location of the point with the highest singlet mass m_S. Take note of the substructures in the regions with good points: along some borders, e.g. the upper border in the κ − m_S plane, the discriminator is quite sure about finding good points. Along other borders, for example the upper border in the m_S − Λ plane, it is not as sure. On "fuzzy" borders, e.g. the upper border in the m_S − m_O plane, the discriminator tends to produce blotchy results - as one would expect.

Figure 7 :
Figure 7: Scatter-point distribution of good and bad AL-generated points in the SMSQQ. The bright green point indicates the location of the point with the highest singlet mass m_S. Note how the distribution of points where the discriminator is uncertain (blue points) matches the regions where there are good points and never extends into regions with bad points. Also note how well the discriminator identifies the borderline between good and bad points even in this higher-dimensional model, as visualized here by an interpolation grid with 100 bins on each axis. The discriminator returns a value close to 1 where it suspects a good point, and close to 0 where it suspects a bad point.

Figure 8 :
Figure 8: Comparison of MCMC- versus AL-generated points in various parameter planes of the SMSQQ model. The dark pink star indicates the location of the point with the highest singlet mass m_S found by the MCMC, the dark blue star the one found by the AL. The solid lines indicate the location of good points. Note how many more good points the AL finds than the MCMC, and how closely the region of all AL-generated points matches the surroundings of the good points while still exploring the parameter space.

Figure 9 :
Figure 9: Distribution of good and bad points in various planes of the MDGSSM. Generally speaking, there is visible structure in planes along −λ_S, and just a little to almost no structure in all other planes. Even in the more insightful plots, it is hard to visually separate good regions from bad ones.

Table 2 :
Maximum singlet mass found in the old MCMC scan and the new MCMC scan with 1 million points each. The maximum singlet masses of the 200k MCMC and AL scans are also shown. The latter are very close to one another, despite the fact that only the MCMC was explicitly forced to prioritize high masses.

Table 3 :
Network settings for the two main AL scans in the SMSQQ. The learning rate of the larger network is smaller because it otherwise diverges. Explanations of each setting can be found in table 1 and in the text.

Table 5 :
Benchmark comparison of RFC, AL, and various neural networks (NNs). Each classifier was trained on one set of either 50k or 200k points, which are either AL-generated or random. The 50k points that serve the AL as an initial dataset already lead to a very good result; therefore the active learning and the simple neural network are identical in this case. The neural network fails to train on 200k points; it stays at a 50% error throughout the process. The 24k RFC-misclassified points come from the RFC trained on 50k active learning points and tested on 50k active learning points from a separate run. Note that the number of RFC-misclassified points tallies with the error rate of 44.0% on 200k test points because, after rebalancing the test dataset to contain equal amounts of good and bad points, we are left with about 24k test points.

Table 6 :
Benchmark as before but on the first 50k points, with smaller ranges and only K = 500 initial points. The neural network fails to train properly with 50k points, but succeeds with the smaller datasets.

6 Learning the Higgs mass in the MDGSSM
6.1 Setup and performance of the AL scan

Table 7 :
Network settings for the two AL scans in the MDGSSM. As in the SMSQQ, the learning rate of the larger network is smaller; otherwise it diverges. Explanations of the settings can be found in table 1 and in the text.

Table 9 :
Benchmark of AL, RFC, and neural networks on the MDGSSM. There are 58k-63k non-null AL-generated points out of 100k total points, and 62k random points out of 100k. The 14k points were selected based on the number of points after which the original AL (with 58k points total) reached an error below 5%. The fact that the 14k AL does not do as well is due to statistical fluctuations. One can see that while the RFC trails far behind, the AL and NN do similarly well on the AL-generated points. The NNs trained on random points yield similar results but become less reliable on RFC-misclassified points when the number of training points is large.