Nuclear Phenomenology with Neural Nets

  • John W. Clark
  • Srinivas Gazula
  • Henrik Bohr
Conference paper
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)


We propose a new method of phenomenological analysis of physical systems based on adaptive neural networks. When trained with the backpropagation algorithm, multilayered networks are capable of learning the associations between dependent and independent variables implicit in large data sets and may show reliable predictive power when tested on examples absent from the training set. The approach is illustrated through applications to several problems relating to the stability of atomic nuclei.
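The abstract's training scheme can be sketched in a few lines. The following is a toy illustration, not the authors' actual architecture or data: a small multilayer feedforward network with one hidden layer of sigmoid units is trained by backpropagation to associate (Z, N) input patterns with a stability label. The patterns, the hidden-layer width, and the learning rate are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Invented training set: (Z, N) scaled to [0, 1]; target 1 marks points
# near the "valley of stability" diagonal, 0 marks points far from it.
X = np.array([[0.1, 0.1], [0.3, 0.3], [0.5, 0.6], [0.8, 0.9],
              [0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2]])
y = np.array([[1.0], [1.0], [1.0], [1.0], [0.0], [0.0], [0.0], [0.0]])

# One hidden layer of 8 sigmoid units (an arbitrary choice for this toy).
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

eta = 0.5  # learning rate
losses = []
for epoch in range(5000):
    # Forward pass through the two weight layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward pass: gradients of the mean squared error,
    # propagated through the sigmoid derivatives.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= eta * h.T @ d_out / len(X); b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * X.T @ d_h / len(X); b1 -= eta * d_h.mean(axis=0)

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Predictive power in the sense of the abstract would then be assessed on (Z, N) pairs withheld from the training set, not on the patterns used above.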







Copyright information

© Springer-Verlag London Limited 1992

Authors and Affiliations

  • John W. Clark (1)
  • Srinivas Gazula (1)
  • Henrik Bohr (2)
  1. McDonnell Center for the Space Sciences and Department of Physics, Washington University, St. Louis, USA
  2. School of Chemical Sciences, University of Illinois, Urbana, USA
