
Detector Monitoring with Artificial Neural Networks at the CMS Experiment at the CERN Large Hadron Collider

Original Article, published in Computing and Software for Big Science.

Abstract

Reliable data quality monitoring is a key asset in delivering collision data suitable for physics analysis in any modern large-scale high energy physics experiment. This paper focuses on the use of artificial neural networks for supervised and semi-supervised problems related to the identification of anomalies in the data collected by the CMS muon detectors. We use deep neural networks to analyze LHC collision data, represented as images organized geographically. We train a classifier capable of detecting the known anomalous behaviors with unprecedented efficiency and explore the usage of convolutional autoencoders to extend anomaly detection capabilities to unforeseen failure modes. A generalization of this strategy could pave the way to the automation of the data quality assessment process for present and future high energy physics experiments.
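The semi-supervised strategy described above can be illustrated with a minimal sketch: score each detector occupancy map by its reconstruction error and raise an alarm when the score exceeds a threshold calibrated on known-good data. In the sketch below, the mean nominal map serves as a hypothetical stand-in for the trained convolutional autoencoder, and all array shapes, hit rates, and the simulated dead-sector failure are invented for illustration; they do not reproduce the CMS muon detector geometry or the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "occupancy maps": nominal chambers fire uniformly at ~20 hits/channel.
# Shapes and rates are illustrative only.
nominal = rng.poisson(lam=20.0, size=(200, 12, 8)).astype(float)

def normalize(imgs):
    # Scale each map by its total hit count so that differences in
    # instantaneous luminosity cancel and only the *shape* matters.
    totals = imgs.sum(axis=(1, 2), keepdims=True)
    return imgs / np.clip(totals, 1.0, None)

def reconstruction_error(imgs, reference):
    # Stand-in for a trained autoencoder: "reconstruct" every map as the
    # mean nominal map and score each input by its mean squared error.
    recon = reference[None, :, :]
    return ((normalize(imgs) - recon) ** 2).mean(axis=(1, 2))

reference = normalize(nominal).mean(axis=0)

# Calibrate the alarm threshold on nominal data only (99th percentile),
# so no labeled failures are needed: this is the semi-supervised step.
scores_nominal = reconstruction_error(nominal, reference)
threshold = np.quantile(scores_nominal, 0.99)

# Simulate an unforeseen failure mode: a dead sector reads zero hits.
faulty = rng.poisson(lam=20.0, size=(1, 12, 8)).astype(float)
faulty[0, :6, :4] = 0.0

flagged = reconstruction_error(faulty, reference) > threshold
```

Because the threshold is derived from good data alone, this scheme can flag failure modes that were never seen in training, which is the motivation for preferring autoencoders over a purely supervised classifier for unforeseen anomalies.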


[Figures 1–16 appear in the full article.]



Acknowledgements

We thank the CMS collaboration for providing the data set used in this study. We are grateful to the members of the CMS Physics Performance and Dataset project and the CMS DT Detector Performance Group for useful discussions, suggestions, and support. We acknowledge the support of the CMS CERN group for providing the computing resources to train our models and of CERN OpenLab for sponsoring A.S.’s internship at CERN, as part of the CERN OpenLab Summer student program. We thank Danilo Rezende for valuable discussions and suggestions. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement no. 772369).

Author information


Correspondence to Adrian Alan Pol.


Cite this article

Pol, A.A., Cerminara, G., Germain, C. et al. Detector Monitoring with Artificial Neural Networks at the CMS Experiment at the CERN Large Hadron Collider. Comput Softw Big Sci 3, 3 (2019). https://doi.org/10.1007/s41781-018-0020-1
