1 Introduction

Simulations are widely used for parameter estimation in particle and nuclear physics. A typical analysis will follow one of two paths: forward-folding or unfolding. In the forward-folding pipeline, the target physics model must be specified at the time of inference. We focus on unfolding, where detector distortions are corrected in a first step (unfolding) and the resulting cross section can then be analyzed in the context of many models by any end user. In the unfolding pipeline, the first step is to identify observables sensitive to a given parameter (or set of parameters). These are typically identified using physical reasoning. Then, the differential cross sections of these observables are measured, which includes unfolding with uncertainty quantification. Finally, the measured cross sections are fit to simulation templates with different values of the target parameters. This approach has been deployed to measure fundamental parameters like the top quark mass [1] and the strong coupling constant \(\alpha _s(m_Z)\) [2, 3] as well as parton distribution functions [4,5,6,7] and effective or phenomenological parameters in parton shower Monte Carlo programs [8].

A key drawback of the standard pipeline is that the observables are constructed manually. There is no guarantee that the observables are maximally sensitive to the target parameters. Additionally, the observables are usually chosen based on particle-level information alone and so detector distortions may not be small. Such distortions can reduce the sensitivity to the target parameter once they are corrected for by unfolding. In some cases, the particle-level observable must be chosen manually because it must be calculable precisely in perturbation theory; this is usually not the case when Monte Carlo simulations are used for the entire statistical analysis. There have been proposals to optimize the detector-level observable for a given particle-level observable [9] since they do not need to be the same. Alternatively, one could measure the full phase space and project out the desired observable after the fact [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29] (see Ref. [30] for an overview).

We propose to use machine learning to design observables that are maximally sensitive to a given parameter (or set of parameters), or to model discrimination, while also being minimally sensitive to detector distortions. Simultaneous optimization ensures that we only use regions of phase space that are measurable. A tailored loss function is used to train neural networks. We envision that this approach could be used in any case where simulations are used for parameter estimation. For concreteness, we apply the new technique to the case of differentiating two parton shower Monte Carlo models of deep inelastic scattering. While neither model is expected to match data exactly, the availability of many events with corresponding detailed simulations makes this a useful benchmark problem. We do not focus on the unfolding or parameter estimation steps themselves, but there are many proposals for doing unfolding [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30] and parameter estimation [31,32,33,34,35,36,37,38,39,40,41] with machine learning. Instead, our focus is on the construction of observables that are engineered to be sensitive to target parameters or to distinguishing models while also being insensitive to detector effects. The latter quality ensures that uncertainties arising from the dependence on unfolding ‘priors’ are small. This is explicitly illustrated using a standard, binned unfolding method in the deep inelastic scattering demonstration.

This paper is organized as follows. Section 2 introduces our approach to observable construction. The datasets used for demonstrating the new method are introduced in Sect. 3. Results with these datasets are presented in Sect. 4. The paper ends with conclusions and outlook in Sect. 5.

2 Methodology

We begin by constructing new observables that are simultaneously sensitive to a parameter of interest and minimally sensitive to detector effects. This is accomplished by training neural networks f with the following loss function:

$$\begin{aligned} L[f] = L_\text {classic}[f(z),\mu ]+\lambda \, L_\text {new}[f(x),f(z)] , \end{aligned}$$
(1)

where \(\lambda >0\) is a hyperparameter that controls how much we regularize the network. We have pairs of inputs (Z, X), where Z represents the particle-level inputs and X represents the detector-level inputs. Capital letters represent random variables while lower-case letters represent realizations of the corresponding random variables. We consider the case where X and Z have the same structure, i.e. they are both sets of 4-vectors (so it makes sense to compute f(z) and f(x)). This is the standard case where X is a set of energy-flow objects that are meant to correspond to the 4-vectors of particles before being distorted by the detector. Furthermore, we fix the same definition of the observable at particle and detector level.

The first term in Eq. 1, \(L_\text {classic}\), governs the sensitivity of the observable f to the target parameter \(\mu \) at particle level. For regression tasks, \(\mu \) will be a real number, representing e.g. a (dimensionless) mass or coupling. For two-sample tests, \(\mu \in \{0,1\}\), where 0 represents the null hypothesis and 1 represents the alternative hypothesis. A classification setup may also be useful for a regression task, by using two samples at different values of the target parameter. The second term in Eq. 1, \(L_\text {new}\), governs how sensitive the new observable is to detector effects. It has the property that it is small when f(x) and f(z) are the same and large otherwise. When \(\lambda \rightarrow \infty \), the observable is completely insensitive to detector effects. This means that any uncertainty associated with removing such effects (including the dependence on unfolding ‘priors’) is eliminated. The best value of \(\lambda \) will be problem specific and should ideally be chosen based on one or more downstream tasks with the unfolded data.

Fig. 1 Input features and resolution model for toy regression example. Two different experimental resolution functions, A and B, are shown

We introduce the method with a toy model for continuous parameter estimation (Sect. 2.1), which demonstrates the essential ideas in a simplified context. This is followed by a more complete binary classification example using simulated deep inelastic scattering events from the H1 experiment at HERA (Sect. 2.2), where the goal is to be maximally sensitive to distinguishing two datasets.

2.1 Toy example for continuous parameter estimation

The training samples are generated with a uniform distribution for the parameter of interest \(\mu \), so each event is specified by \((\mu _i,z_i,x_i)\). Then, we parameterize the observable f as a neural network and optimize the following loss function:

$$\begin{aligned} L[f] = \sum _{i} (f(z_i) - \mu _i)^2 + \lambda \sum _{i} (f(x_i)-f(z_i))^2 , \end{aligned}$$
(2)

where the form of both terms is the usual mean squared error loss used in regression tasks. The first term trains the regression to predict the parameter of interest \(\mu \) while the second term trains the network to make the predictions given detector-level features x and particle-level features z similar. We use the prediction based on the particle-level features \(z_i\) in the first term of the loss function. Results for the alternative choice, using the detector-level features \(x_i\), are shown in Appendix A.
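For concreteness, Eq. 2 translates directly into TensorFlow. The sketch below is illustrative rather than the analysis code: it uses means instead of sums (which only rescales the two terms) and assumes the predictions and targets are tensors of matching shape.

```python
import tensorflow as tf

def toy_regression_loss(mu, f_z, f_x, lam):
    """Combined loss of Eq. 2: sensitivity to the target parameter at particle level
    plus a penalty when detector-level and particle-level predictions differ."""
    sensitivity = tf.reduce_mean(tf.square(f_z - mu))        # first term of Eq. 2
    detector_penalty = tf.reduce_mean(tf.square(f_x - f_z))  # second term of Eq. 2
    return sensitivity + lam * detector_penalty
```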

The loss function in Eq. 2 is similar to the setting of decorrelation, where a classifier is trained to be independent of a given feature [42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60]. One could apply decorrelation techniques in this case to ensure the classifier is not able to distinguish between features at detector level and at particle level. However, this would only ensure that the probability density of f is the same at particle level and detector level. To be well measured, we need more than statistical similarity between distributions: we need them to be similar event by event. The second term in Eq. 2 is designed for exactly this purpose.

All deep neural networks are implemented in Keras [61]/TensorFlow [62] and optimized using Adam [63]. The network models use two hidden layers with 50 nodes per layer, Rectified Linear Unit (ReLU) activation functions for the intermediate layers, and a linear activation function for the last layer.
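A minimal Keras sketch of this architecture is shown below. The layer sizes and activations follow the description above; the input dimension (two features, as in the toy example) and the default Adam settings are assumptions of the sketch.

```python
import tensorflow as tf

# Two hidden layers of 50 ReLU nodes and a single linear output, as described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                     # two input features (toy example)
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(1, activation="linear"),
])
# The network is trained with Adam by minimizing the combined loss of Eq. 2
# evaluated on minibatches of (mu, z, x) triplets.
optimizer = tf.keras.optimizers.Adam()
```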

Figure 1 illustrates the input features and resolution model for the toy study. Two particle-level features \(z_0\) and \(z_1\) are modeled as normal distributions: \(Z_0\sim \mathcal {N}(\mu ,0.5)\) and \(Z_1\sim \mathcal {N}(\mu ,0.1)\), where feature 1 is significantly more sensitive to the parameter of interest \(\mu \). The experimental resolution on the features is given by \(X_0\sim \mathcal {N}(Z_0, 0.1)\) and \(X_1\sim \mathcal {N}(Z_1, 0.5)\) so that feature 0 is well measured, while feature 1 has a relatively poor resolution. For this model, the net experimental sensitivity to \(\mu \) is the same for both features, but feature 0 is much less sensitive to detector effects. Our proposed method will take this into account in the training of the neural network.
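The toy samples can be generated in a few lines. The sketch below follows the stated distributions; the range of the uniform \(\mu \) prior is an assumption (the text does not specify it), and resolution model B, used later only for evaluation, is implemented with both detector widths scaled by 1.4 and a bias of 0.2 on \(x_1\), as quoted below.

```python
import numpy as np

def make_toy_sample(n_events, rng, model="A"):
    """Generate (mu_i, z_i, x_i) triplets following the distributions above.
    Resolution model 'B' scales both detector widths by 1.4 and shifts x1 by +0.2."""
    mu = rng.uniform(0.0, 1.0, n_events)       # uniform prior on mu (range assumed here)
    z0 = rng.normal(mu, 0.5)                   # Z0 ~ N(mu, 0.5)
    z1 = rng.normal(mu, 0.1)                   # Z1 ~ N(mu, 0.1): more sensitive to mu
    scale, bias = (1.0, 0.0) if model == "A" else (1.4, 0.2)
    x0 = rng.normal(z0, 0.1 * scale)           # X0 ~ N(Z0, 0.1): well measured
    x1 = rng.normal(z1, 0.5 * scale) + bias    # X1 ~ N(Z1, 0.5): poorly measured
    return mu, np.stack([z0, z1], axis=1), np.stack([x0, x1], axis=1)

mu, z, x = make_toy_sample(1_000_000, np.random.default_rng(0))
```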

Fig. 2 Results of toy regression example as a function of the \(\lambda \) hyperparameter

To demonstrate the sensitivity to uncertainties associated with detector effects, we make predictions using f trained with resolution model A on a sample generated with resolution model B, shown in Fig. 1, where the width is increased by a factor of 1.4 for both features and a bias of 0.2 is introduced for the \(x_1\) feature. Figure 2 shows the results as a function of the \(\lambda \) hyperparameter. With \(\lambda =0\), the network relies almost entirely on feature 1, which has better particle-level resolution. As \(\lambda \) increases, more emphasis is placed on feature 0, which has much better detector-level resolution. The resolution of the prediction starts close to \(\sqrt{0.5^2+0.1^2}\) for resolution model A and reaches a minimum close to \(\sqrt{0.5^2+0.1^2}/\sqrt{2}\) near \(\lambda =0.5\), where both features have equal weight in the prediction. The bias in the prediction for resolution model B is large for \(\lambda =0\) but falls significantly with increasing \(\lambda \). This validates the key concept of the proposed method.
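For orientation, these two scales follow from simple Gaussian error propagation under the toy-model assumptions above: either feature alone yields a detector-level estimate of \(\mu \) with width \(\sqrt{0.5^2+0.1^2}\), and an equal-weight average of the two independent estimates reduces this width by a factor of \(\sqrt{2}\),

$$\begin{aligned} \sqrt{0.5^2+0.1^2} \approx 0.51 , \qquad \frac{\sqrt{0.5^2+0.1^2}}{\sqrt{2}} \approx 0.36 . \end{aligned}$$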

2.2 Full example for binary classification

In the second example, we wish to design an observable that will discriminate between two different Monte Carlo parton shower models, while minimizing uncertainties from detector effects. The objective will be to use the distribution of the observable to indicate which model best represents the data. The discrimination test will be successful if the particle-level distribution of the observable for only one of the models is statistically consistent with the unfolded distribution of the observable from data. Ideally, the difference in the shape of the particle-level distributions of the observable for the two models will be large compared to the size of the uncertainty from detector effects in the unfolding.

To design the observable for this task, we train a binary classification neural network to distinguish the two parton shower models, where we minimize detector effects with our additional term in the loss function. The observable is trained to classify events, but this is not a case where each event in the data may be from one of two categories, such as in a neural network trained to discriminate signal from background. All of the events in the data are from the same class. The task is to use the shape of the observable to test which model is more consistent with the data. The neural network is trained to find differences in the input features that allow it to distinguish the two models and this information is reflected in the shape of the neural network output distribution. This makes the shape comparison the appropriate statistical test for the observable.

Fig. 3 Particle level distributions of the nine NN input features for the Djangoh and Rapgap generators

In the binary case, we have two datasets generated from simulation 1 (sim. 1) and simulation 2 (sim. 2). The loss function for classification is given by

$$\begin{aligned} L[f] = -\sum _{i\in \text {sim. 1}} \log f(z_i) - \sum _{i\in \text {sim. 2}} \log \left( 1-f(z_i)\right) + \lambda \sum _{i} \left( f(x_i)-f(z_i)\right) ^2 , \end{aligned}$$
(3)

where the first two terms represent the usual binary cross entropy loss function for classification and the third term represents the usual mean squared error loss term for regression tasks. The notation \(i\in \text {sim. j}\) means that \((z_i,x_i)\) are drawn from the jth simulation dataset. As in the regression case, the hyperparameter \(\lambda \) must be tuned and controls the trade-off between sensitivity to the dataset and sensitivity to detector effects. The network model is the same as in the previous example except that the final layer uses a sigmoid activation function. The binary case is a special case of the previous section where there are only two values of the parameter of interest. It may also be effective to train the binary case for a continuous parameter using two extreme values of the parameter. In this paper, we use high-quality, well-curated datasets for the binary case because of their availability, but it would be interesting to explore the continuous case in the future.
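As with the toy example, Eq. 3 translates directly into TensorFlow. The sketch below is illustrative: it uses means instead of sums, and the label convention (events from sim. 1 carry label 1) is an assumption of the sketch.

```python
import tensorflow as tf

def classification_loss(labels, f_z, f_x, lam):
    """Combined loss of Eq. 3: binary cross entropy between the two simulations at
    particle level plus the detector-particle consistency penalty.

    labels: 1 for events from sim. 1 and 0 for events from sim. 2 (convention assumed)
    f_z, f_x: sigmoid network outputs for particle-level and detector-level inputs
    """
    bce = tf.keras.losses.binary_crossentropy(labels, f_z)  # first two terms of Eq. 3
    penalty = tf.square(f_x - f_z)                          # third term of Eq. 3
    return tf.reduce_mean(bce) + lam * tf.reduce_mean(penalty)
```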

To determine the efficacy of the new observable, we unfold the pseudodata. Unfolding corrects for detector effects by performing regularized matrix inversion of the response matrix. We employ the widely-used TUnfold method [64], which is a least-squares-based fit with additional regularization. We use TUnfold version 17.9 [64] through the interface included in the Root 6.24 [65] distribution. The response matrix is defined from a 2D binning of the NN output, given detector-level and particle-level features. The matrix uses 24 and 12 bins for the detector-level and particle-level inputs, respectively, which gives reasonable stability and cross-bin correlations in the unfolding results. The ultimate test is to show that the difference between sim. 1 at detector level unfolded with the sim. 2 response matrix and sim. 1 at particle level (or vice versa) is smaller than the difference between sim. 1 and sim. 2 at particle level. In other words, this test shows that the ability to distinguish sim. 1 and sim. 2 significantly exceeds the modeling uncertainty from the unfolding.
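A PyROOT sketch of this unfolding setup is shown below. It is a toy illustration, not the analysis code: the response matrix and pseudodata are filled with an invented Gaussian smearing of a stand-in NN output, and the regularization strength is an arbitrary placeholder; only the binning (24 detector-level by 12 particle-level bins) and the TUnfoldDensity calls reflect the setup described above.

```python
import ROOT

# Toy response matrix (24 detector-level bins vs. 12 particle-level bins) and
# pseudodata, filled here with an invented smearing of a stand-in NN output.
hResponse = ROOT.TH2D("response", ";NN output (detector);NN output (particle)",
                      24, 0.0, 1.0, 12, 0.0, 1.0)
hData = ROOT.TH1D("data", ";NN output (detector)", 24, 0.0, 1.0)
rng = ROOT.TRandom3(1)
for _ in range(100000):
    z = rng.Uniform()                                # stand-in particle-level NN output
    x = min(max(z + rng.Gaus(0.0, 0.05), 0.0), 1.0)  # stand-in detector-level NN output
    hResponse.Fill(x, z)
    hData.Fill(x)

unfold = ROOT.TUnfoldDensity(
    hResponse,
    ROOT.TUnfold.kHistMapOutputVert,    # particle level on the vertical axis
    ROOT.TUnfold.kRegModeCurvature,     # regularize the curvature of the result
    ROOT.TUnfold.kEConstraintArea,      # preserve the total normalization
    ROOT.TUnfoldDensity.kDensityModeBinWidth,
)
unfold.SetInput(hData)
unfold.DoUnfold(1e-4)                            # regularization strength (placeholder)
hUnfolded = unfold.GetOutput("unfolded")         # unfolded particle-level distribution
hCovariance = unfold.GetEmatrixTotal("covariance")
```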

3 Datasets

We use deep inelastic scattering events from high-energy electron-proton collisions to demonstrate the performance of the new approach. These simulated data are from the H1 experiment at HERA [66, 67] and are used in the same way as Ref. [68]. They are briefly described in the following.

Two parton shower Monte Carlo programs provide the particle-level simulation: Rapgap 3.1 [69] and Djangoh 1.4 [70]. The energies of the incoming beams are \(E_e=27.6\) GeV and \(E_p=920\) GeV for the lepton and proton, respectively, matching the running conditions of HERA II. Radiation from quantum electrodynamic processes is simulated by the Heracles routines [71,72,73] in both cases. The outgoing particles from these two datasets are then fed into a Geant 3 [74]-based detector simulation.

Following the detector simulation, events are reconstructed with an energy-flow algorithm [75,76,77] and the scattered electron is reconstructed using the default H1 approach [23, 78, 79]. Mis-measured backgrounds are suppressed with standard selections [78, 79]. This whole process makes use of the modernized H1 computing environment at DESY [80]. Each dataset comprises approximately 10 million events.

Figure 3 shows histograms of the nine features used as input for the neural network training. These features include the energy E, longitudinal momentum \(p_z\), transverse momentum \(p_T\), and pseudorapidity \(\eta \) of the scattered electron and of the total Hadronic Final State (HFS), as well as the difference in azimuthal angle between the two, \(\Delta \phi (e,\textrm{HFS})\). The HFS is quite sensitive to the \(\eta \) acceptance of the detector. In order to have HFS features that are comparable for the particle-level and detector-level definitions, we only use generated final-state particles with \(|\eta |<3.8\) in the definition of the particle-level HFS 4-vector. Both simulations provide event weights that must be used for physics analysis. In our study, we do not weight the simulated events, in order to maximize the effective statistics of the samples in the neural network training. This has a small effect on the spectra, but a large impact on the number of effective events available for training. The electron feature distributions agree very well for the two simulations, while there are some visible differences in the HFS features.
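For illustration, a helper of the kind sketched below could assemble the nine particle-level features for one event. The 4-vector ordering, the feature ordering, and the helper itself are assumptions of the sketch, not the H1 analysis code.

```python
import numpy as np

def particle_level_features(electron, particles):
    """Nine NN inputs for one event: (E, pz, pT, eta) of the scattered electron and
    of the HFS 4-vector, plus the azimuthal angle difference between the two.

    electron:  scattered-electron 4-vector as (E, px, py, pz)
    particles: (N, 4) array of generated final-state 4-vectors, electron excluded
    """
    def kinematics(E, px, py, pz):
        pt = np.hypot(px, py)
        eta = np.arcsinh(pz / pt)     # pseudorapidity
        phi = np.arctan2(py, px)
        return pt, eta, phi

    e_E, e_px, e_py, e_pz = electron
    e_pt, e_eta, e_phi = kinematics(e_E, e_px, e_py, e_pz)

    # Restrict the particle-level HFS to |eta| < 3.8 so that it is comparable to
    # the detector-level definition.
    _, etas, _ = kinematics(particles[:, 0], particles[:, 1],
                            particles[:, 2], particles[:, 3])
    h_E, h_px, h_py, h_pz = particles[np.abs(etas) < 3.8].sum(axis=0)
    h_pt, h_eta, h_phi = kinematics(h_E, h_px, h_py, h_pz)

    dphi = np.abs(np.arctan2(np.sin(e_phi - h_phi), np.cos(e_phi - h_phi)))
    return np.array([e_E, e_pz, e_pt, e_eta, h_E, h_pz, h_pt, h_eta, dphi])
```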

Fig. 4 Neural network output distributions for four values of the \(\lambda \) hyperparameter, which sets the scale for the Detector-Particle disagreement penalty in the loss function. The top row shows the results for \(\lambda = 0\), where there is no penalty if the NN predictions for Detector-level input features and Particle-level input features disagree. The bottom three rows show increasing values of \(\lambda \): 1, 20, and 100

Fig. 5 Results of unfolding the NN output. The top (bottom) row shows the unfolding for the Rapgap (Djangoh) generator. The left column shows the response matrix for the unfolding, where the distribution of the NN output given the Detector-based input features (horizontal axis) is normalized to unit area for each bin of the NN output given the Particle-based input features (vertical axis). The center column shows the unfolded distribution compared to the true distribution. The right column shows the matrix of correlation coefficients from the unfolding

4 Results

We now apply the method introduced in Sect. 2.2 to the DIS dataset described in Sect. 3. Figure 4 shows the results of four neural network trainings with different values of \(\lambda \), which sets the relative weight of the MSE term in the loss function (Eq. 3) that controls the sensitivity to detector effects. With \(\lambda =0\), the classification performance for particle-level inputs is strong, while there are significant disagreements between the particle and detector level neural network outputs. As \(\lambda \) increases, the particle and detector level agreement improves at the cost of weaker classification performance. In what follows, we will use the network trained with \(\lambda =100\). For a parameter estimation task, the entire distribution will be used for inference and therefore excellent event-by-event classification is not required.

Next, we investigate how these spectra are preserved after unfolding. Figure 5 shows the results of unfolding the neural network output, given detector-level features, to give the neural network output distribution for particle-level features. The input distribution for the unfolding is \(10^5\) events randomly chosen from a histogram of the neural network output for detector-level inputs from the simulation. The unfolding response matrices for the two simulations agree fairly well and are concentrated along the diagonal. The output of the unfolding shows very good agreement with the true distribution of the neural network output given particle-level inputs, demonstrating acceptable closure for the unfolding. The correlations in the unfolding result are mostly between neighboring bins of the distribution.

Fig. 6 Model discrimination sensitivity and uncertainty for the neural network output distribution (left) and the hadronic final state \(\eta \) distribution (right). The uncertainty, shown by the shaded red distribution, is the model dependence of the unfolding added in quadrature with the statistical error from the unfolding (error bars on the black points). The black points show the ratio of the unfolded distribution from the Rapgap simulation to the particle-level distribution from the Djangoh simulation. The significance of the deviation from unity is a measure of the model discrimination sensitivity. Additional distributions and comparisons are given in Appendix B

One of the biggest challenges for the \(\lambda =0\) case is that it is highly sensitive to regions of phase space that are not well constrained by the detector. As a result, the output of the unfolding is highly dependent on the simulation used in the unfolding (the prior). The left side of Fig. 6 shows the model dependence of the unfolding and the ability of the neural network to perform the model classification task. The unfolding model dependence is estimated by comparing with the result obtained using the response matrix from the other simulation. We test the model discrimination sensitivity by dividing the unfolded distribution by the particle-level distribution from the other simulation. The degree to which this ratio deviates from unity is a measure of the model discrimination power of the method. For the neural network, the model dependence of the unfolding is small and generally less than 10%. The size of the deviation from unity for the neural network is large compared to the size of the uncertainty, which is dominated by the model dependence, indicating that the neural network can distinguish the two simulations.
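The quantities plotted in Fig. 6 can be assembled bin by bin as sketched below. This is a hypothetical helper with illustrative names; the exact construction of the model-dependence band in the published figure may differ.

```python
import numpy as np

def discrimination_ratio(unfolded_nominal, unfolded_swapped, stat_err, truth_other):
    """Sketch of the Fig. 6 construction (illustrative helper, names assumed).

    unfolded_nominal: distribution unfolded with the same simulation's response matrix
    unfolded_swapped: same pseudodata unfolded with the other simulation's response matrix
    stat_err:         statistical uncertainty of the unfolded bins
    truth_other:      particle-level distribution of the other simulation
    """
    ratio = unfolded_nominal / truth_other                   # deviation from unity = sensitivity
    model_dep = np.abs(unfolded_swapped - unfolded_nominal)  # unfolding model dependence
    total_unc = np.hypot(stat_err, model_dep) / truth_other  # added in quadrature
    return ratio, total_unc
```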

Figure 3 shows that some of the HFS input features may be able to distinguish the two models directly. The right side of Fig. 6 shows the results of running the unfolding procedure using the HFS \(\eta \) distribution for model discrimination; the model dependence is significantly larger than for the neural network output. The shape is distorted, with deviations of up to 20%, when the response matrix from the other simulation is used in the unfolding. Since the modeling uncertainty is comparable to or larger than the size of the effect we are trying to probe, such observables are much less useful than the neural network output for the inference task.

We perform a quantitative evaluation of the model discrimination power for this example using a \(\chi ^2\) computed from the difference between the unfolded distribution and the particle-level distribution for a given model. The uncertainties in the \(\chi ^2\) come from a combination of the unfolding covariance matrix and a covariance matrix for the model-dependence uncertainty from the unfolding response matrix. The distribution of the \(\chi ^2\) from a set of toy Monte Carlo experiments with 2000 events per experiment is close to a \(\chi ^2\) PDF with 12 degrees of freedom when the model used for the comparison is the same as the model used to generate the toy samples (Djangoh), which validates the unfolding statistical uncertainty in the covariance matrix and the \(\chi ^2\) calculation. We use this distribution to define a \(\chi ^2\) threshold for the critical region, corresponding to a 1% frequency for the correct model hypothesis to have a \(\chi ^2\) greater than the threshold. When the toy samples are drawn from the alternative model (Rapgap), we find that the frequency for the \(\chi ^2\) to exceed the threshold is 98.6% for the designer neural network observable, but only 63.0% for the HFS \(\eta \) observable. This shows that the designer neural network observable has superior discrimination power.
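A minimal sketch of this test is shown below, assuming the covariance contributions are available as numpy arrays and are combined by simple addition; the function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def model_chi2(unfolded, truth, cov_unfolding, cov_model_dependence):
    """chi^2 between an unfolded toy distribution and a model's particle-level
    distribution, with the two covariance contributions summed."""
    diff = unfolded - truth
    cov = cov_unfolding + cov_model_dependence
    return float(diff @ np.linalg.solve(cov, diff))

# Critical value for a 1% false-rejection rate of the correct model, using the
# chi^2 distribution with 12 degrees of freedom quoted above.
threshold = stats.chi2.ppf(0.99, df=12)
```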

5 Conclusions and outlook

Unfolded differential cross section measurements are a standard approach to making data available for downstream inference tasks. While some measurements can be used for a variety of tasks, often there is a single goal that motivates the result. In these cases, we advocate designing observables that are tailored to the physics goal using machine learning. The output of a neural network trained specifically for the downstream task is an observable, and its differential cross section likely contains more information than classical observables. We have proposed a new loss function for training the network so that the resulting observable can be well measured. The neural network observable is thus trained using a loss function composed of two parts: one part that regresses the inputs onto a parameter of interest and a second part that penalizes the network for producing different answers at particle level and detector level. A tunable, problem-specific hyperparameter determines the tradeoff between these two goals. We have demonstrated this approach with both a toy example and a physics example. For the deep inelastic scattering example, the new approach is shown to be much more sensitive than classical observables while also having a reduced dependence on the starting simulation used in the unfolding. We anticipate that our new approach could be useful for a variety of scientific goals, including measurements of fundamental parameters like the top quark mass and tuning Monte Carlo event generators.

Fig. 7 Results of toy regression example as a function of the \(\lambda \) hyperparameter, using Eq. A.1 for the loss function in the training

Fig. 8 Results of testing the model dependence of unfolding the neural network output distribution by varying the response matrix used in the unfolding. The top row normalizes the unfolded distribution to the true distribution from the same simulation, testing the unfolding closure (points) and unfolding model dependence (red histogram). The bottom row normalizes the unfolded distribution to the true distribution from the other simulation, showing the ability to distinguish the models

Fig. 9 Results of testing the model dependence of unfolding the HFS \(\eta \) distribution by varying the response matrix used in the unfolding. The top row normalizes the unfolded distribution to the true distribution from the same simulation, testing the unfolding closure (points) and unfolding model dependence (red histogram). The bottom row normalizes the unfolded distribution to the true distribution from the other simulation, showing the ability to distinguish the models

There are a number of ways this approach could be extended in the future. We require that the observable have the same definition at particle level and detector level, while additional information at detector level, such as resolutions, may be useful to improve precision. A complementary strategy would be to use all the available information to unfold the full phase space [30]. Such techniques may improve the precision by integrating all of the relevant information at detector level, but they may compromise sensitivity to specific parameters by being broad, and they impose no direct constraints on measurability. It would be interesting to compare our tailored approach to full phase space methods in the future.