
Classification of Signal Versus Background in High-Energy Physics Using Deep Neural Networks

  • M. Mythili
  • R. Thangarajan
  • N. Krishnamoorthy
Conference paper
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 35)

Abstract

High-energy physics is a fertile area for applied research in machine learning and deep learning. The Large Hadron Collider generates a humongous amount of data by colliding hadrons at very high velocities and recording the resulting events with various detectors. These event data are used extensively by machine learning algorithms to classify particles and to search for new exotic particles. Deep learning is a subfield of artificial intelligence and machine learning that uses multi-layered artificial neural networks and excels at tasks such as object detection and speech recognition. Classical techniques such as shallow neural networks are limited in their ability to model complex non-linear functions of the inputs. Deep learning models require access to large amounts of training data and substantial processing power to attain an acceptable level of accuracy. These techniques have yielded significant improvements in classification performance without manual assistance such as hand-crafted features.
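As a concrete illustration of the approach described above, the sketch below trains a multi-layer feed-forward network to separate signal from background events on HIGGS-style tabular features. It is a minimal sketch only: the 28-feature input width matches the public HIGGS benchmark, while the network depth, width, activation, optimizer, and training loop are illustrative assumptions, not the configuration used in this paper.

```python
# Minimal sketch: deep feed-forward classifier for signal-vs-background
# separation on HIGGS-like tabular data. Hyperparameters are illustrative
# assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DeepClassifier(nn.Module):
    """Multi-layer perceptron: several tanh hidden layers, one output logit."""
    def __init__(self, n_features=28, n_hidden=300, n_layers=4):
        super().__init__()
        layers, width = [], n_features
        for _ in range(n_layers):
            layers += [nn.Linear(width, n_hidden), nn.Tanh()]
            width = n_hidden
        layers.append(nn.Linear(width, 1))  # raw logit; loss applies sigmoid
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(X, y, epochs=10, batch_size=1024, lr=1e-3):
    """X: (N, n_features) float array, y: (N,) binary labels (1 = signal)."""
    model = DeepClassifier(n_features=X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    ds = TensorDataset(torch.tensor(X, dtype=torch.float32),
                       torch.tensor(y, dtype=torch.float32))
    loader = DataLoader(ds, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)  # binary cross-entropy on logits
            loss.backward()
            opt.step()
    return model
```

Given feature arrays X and binary labels y, `model = train(X, y)` returns a trained network whose sigmoid-transformed output can be thresholded or ranked to evaluate the classifier, for example via the area under the ROC curve.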

Keywords

Deep learning · Neural network · HIGGS benchmark · SUSY benchmark


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Kongu Engineering College, Erode, India
