Seismic Horizon Picking Using a Hopfield Network

  • Kou-Yuan Huang
Part of the Modern Approaches in Geophysics book series (MAGE, volume 21)


A Hopfield neural network is used to solve the problem of seismic horizon picking. The input seismogram is pre-processed to produce seismic peak data. Pre-processing steps include envelope processing, thresholding, peak detection, and compression in time. Each peak represents a single seismic wavelet, and each pre-processed data item corresponds to one neuron in the Hopfield network. The constraint conditions for detecting seismic horizons are used to construct the Lyapunov energy function, from which the connection weights between neurons are derived. From the equation of motion, the next state value of each neuron can be calculated. Each change in a neuron's value decreases the energy, and the system becomes stable when no neuron's value changes. A single horizon is extracted using the algorithm, and the extracted horizon is removed from the original seismic data before the next horizon is extracted. This process is repeated until no more horizons are extracted. Applying the technique to bright-spot data, horizons extracted using the neural network were found to match those obtained by visual inspection.
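The update-until-stable mechanism described above can be sketched in a few lines. The sketch below is illustrative only: the weight matrix and bias are toy values, not the chapter's constraint-derived weights, and binary threshold neurons stand in for whatever neuron model the chapter uses. It shows the two properties the abstract relies on: asynchronous updates never increase the Lyapunov energy, and the network is stable once no neuron changes.

```python
import numpy as np

def energy(W, b, v):
    # Lyapunov energy of a Hopfield network: E = -1/2 v^T W v - b^T v
    return -0.5 * v @ W @ v - b @ v

def hopfield_run(W, b, v, max_sweeps=100):
    """Asynchronously update binary neurons until no value changes."""
    n = len(v)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            # "Equation of motion": next state from the local field
            u = W[i] @ v + b[i]
            s = 1 if u > 0 else 0
            if s != v[i]:
                v[i] = s
                changed = True
        if not changed:  # stable state: a local minimum of the energy
            break
    return v

# Toy symmetric weights with zero diagonal (these conditions guarantee
# that each asynchronous flip cannot increase the energy).
W = np.array([[ 0.,  1., -1.],
              [ 1.,  0., -1.],
              [-1., -1.,  0.]])
b = np.array([0.1, 0.1, 0.1])
v = np.array([0, 1, 1])

e0 = energy(W, b, v.astype(float))
v = hopfield_run(W, b, v)
e1 = energy(W, b, v.astype(float))
assert e1 <= e0  # energy never increases under asynchronous updates
```

In the chapter's setting, each neuron would correspond to one detected peak, and the weights would encode horizon-continuity constraints so that the stable state switches on exactly the peaks belonging to one horizon.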





Copyright information

© Springer Science+Business Media Dordrecht 2003

Authors and Affiliations

  • Kou-Yuan Huang
  1. Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan
