Neural Computation in Medicine: Perspectives and Prospects

  • Richard Dybowski
Conference paper
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

In 1998, over 400 papers on artificial neural networks (ANNs) were published in the context of medicine. Why is there such interest in ANNs, and how do they compare with traditional statistical methods? We propose some answers to these questions and go on to consider the ‘black box’ issue. Finally, we briefly look at two directions in which ANNs are likely to develop, namely the use of Bayesian statistics and knowledge-data fusion.
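
As a purely illustrative sketch, not taken from the paper, the contrast between ANNs and traditional statistical methods can be made concrete: below, a logistic regression model and a small multilayer perceptron are fitted to the same synthetic binary-classification data and their test accuracies compared. This assumes scikit-learn; the data set and model settings are arbitrary choices for illustration.

    # Illustrative only: comparing a "traditional statistical method"
    # (logistic regression) with a small ANN (one-hidden-layer MLP).
    # Assumes scikit-learn; the synthetic data stand in for a clinical data set.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # Synthetic features and binary outcomes.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Logistic regression: a single linear layer with a sigmoid output.
    logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Multilayer perceptron: adds one hidden layer of nonlinear units.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X_train, y_train)

    print("logistic regression accuracy:", logit.score(X_test, y_test))
    print("MLP accuracy:", mlp.score(X_test, y_test))

A network with no hidden layer and a logistic output unit is in fact equivalent to logistic regression, which is one reason the two families of models are so often compared directly.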

Keywords

Artificial Neural Network, Radial Basis Function Network, Neural Computation, Adaptive Resonance Theory, Traditional Statistical Method

Copyright information

© Springer-Verlag London 2000

Authors and Affiliations

  • Richard Dybowski
  1. Medical Informatics Laboratory, Division of Medicine, King’s College London, London, UK
