Multivariate analysis methods in physics

Abstract

In this article, a review of multivariate methods based on statistical learning is given. Several popular multivariate methods useful in high-energy physics analysis are described, and selected examples from current particle physics research are discussed, covering both online trigger selection and offline analysis. In addition, statistical learning methods not yet applied in particle physics are presented, and some new applications are suggested.
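
As a concrete illustration of the kind of multivariate classifier surveyed in the review, the minimal sketch below builds a Fisher linear discriminant that separates a toy "signal" sample from "background" using two correlated input variables. The example is not taken from the article; the event samples, their means and covariances, and the cut value are invented purely for illustration, and NumPy is assumed as the numerical library.

import numpy as np

# Toy event samples (illustrative only): rows are events, columns are input variables.
rng = np.random.default_rng(0)
signal = rng.multivariate_normal([1.0, 1.0], [[1.0, 0.5], [0.5, 1.0]], size=5000)
background = rng.multivariate_normal([-1.0, -0.5], [[1.0, -0.3], [-0.3, 1.5]], size=5000)

# Fisher discriminant: w is proportional to S_W^{-1} (mu_S - mu_B), where S_W is the
# within-class covariance summed over the two classes.
mu_s, mu_b = signal.mean(axis=0), background.mean(axis=0)
s_w = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
w = np.linalg.solve(s_w, mu_s - mu_b)

# Project each event onto the single discriminating variable t = w . x and cut at the
# midpoint between the projected class means.
t_s, t_b = signal @ w, background @ w
cut = 0.5 * (t_s.mean() + t_b.mean())
print("signal efficiency:", (t_s > cut).mean())
print("background rejection:", (t_b <= cut).mean())

Projecting onto w compresses the correlated inputs into one discriminating quantity; the neural-network and support-vector methods covered in the review generalize this step to non-linear decision boundaries.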

Additional information

The text was submitted by the author in English.

About this article

Cite this article

Wolter, M. Multivariate analysis methods in physics. Phys. Part. Nuclei 38, 255–268 (2007). https://doi.org/10.1134/S1063779607020050

PACS numbers

  • 02.50.Sk