Abstract
In this article we present a neural-network-based model to emulate matrix elements. The model improves on existing methods by exploiting the known factorisation properties of matrix elements, which allows us to control the behaviour of the emulated matrix elements when extrapolating into regions of phase space more singular than those used for training the neural network. We apply the model to leading-order jet production in e+e− collisions with up to five jets. Our results show that the model reproduces the matrix elements with errors below the one-percent level on the phase space covered during fitting and testing, and extrapolates robustly to regions of phase space where the matrix elements are more singular than those seen at the fitting stage.
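To make the idea concrete, the sketch below shows one way such a factorisation-aware ansatz could be set up in Keras. It is our own illustration, not the authors' implementation: the network predicts smooth coefficients C_i(p) that multiply known singular functions D_i(p), so the emulated matrix element is |M|^2(p) ≈ Σ_i C_i(p) D_i(p), and the divergent soft/collinear behaviour is carried by the analytically known D_i rather than learned. All sizes and names (n_features, n_dipoles) are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' code): a factorisation-aware
# emulator whose network output is a set of smooth coefficients C_i(p)
# multiplying precomputed singular functions D_i(p), so that
# |M|^2(p) ~ sum_i C_i(p) * D_i(p).
from tensorflow import keras

n_features = 20   # e.g. flattened final-state four-momenta (assumption)
n_dipoles = 15    # number of singular structures in the ansatz (assumption)

# Inputs: kinematic features and the precomputed values D_i(p).
phase_space = keras.Input(shape=(n_features,), name="phase_space")
dipoles = keras.Input(shape=(n_dipoles,), name="dipoles")

# A small fully connected network predicts the smooth coefficients C_i(p).
x = keras.layers.Dense(64, activation="tanh")(phase_space)
x = keras.layers.Dense(64, activation="tanh")(x)
coeffs = keras.layers.Dense(n_dipoles, name="coefficients")(x)

# Emulated matrix element: the dot product sum_i C_i(p) * D_i(p).
me_pred = keras.layers.Dot(axes=1, name="matrix_element")([coeffs, dipoles])

model = keras.Model(inputs=[phase_space, dipoles], outputs=me_pred)
model.compile(optimizer="adam", loss="mse")
```

In practice one would also train on suitably scaled targets, since matrix elements span many orders of magnitude; the structural point is that the network learns only the smooth coefficients, while the singular behaviour is supplied analytically.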
ArXiv ePrint: 2107.06625
Supplementary Information: ESM 1 (ZIP 5504 kb)
Rights and permissions
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Cite this article
Maître, D., Truong, H. A factorisation-aware Matrix element emulator. J. High Energ. Phys. 2021, 66 (2021). https://doi.org/10.1007/JHEP11(2021)066