
Comparative Bibliography of Ontogenic Neural Networks

Conference paper, published in the proceedings of ICANN '94 (ICANN 1994).

Abstract

One of the most powerful aspects of neural networks is their ability to adapt to problems by changing their interconnection strengths according to a predetermined learning rule. On the other hand, one of the main drawbacks of neural networks is the lack of knowledge for determining the topology of the network, that is, the number of layers and the number of neurons per layer. A special class of neural networks tries to overcome this problem by letting the network also adapt its topology to the problem automatically. These are the so-called ontogenic neural networks. Other potential advantages of ontogenic neural networks are improved generalization, implementation optimization (size and/or execution speed), and the avoidance of local minima.
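To make the idea concrete, the following is an illustrative sketch only, not an algorithm from this paper: a minimal constructive network in the spirit of Ash's dynamic node creation [2]. A one-hidden-layer regression network trains at a fixed size and appends a hidden unit whenever the training error plateaus, so the topology adapts to the problem instead of being fixed in advance. All names, hyperparameters, and the growth criterion here are illustrative assumptions.

```python
# Hedged sketch of a constructive ("ontogenic") network: grow the hidden
# layer when the training error stops improving.  Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def train_growing_net(X, y, max_hidden=8, epochs_per_stage=2000,
                      lr=0.5, tol=1e-4, max_stages=32):
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, 1))   # start with one hidden unit
    b1 = np.zeros(1)
    W2 = rng.normal(scale=0.5, size=(1, 1))
    b2 = np.zeros(1)

    def forward(X):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        return H, H @ W2 + b2                    # linear output

    prev_loss = np.inf
    for _ in range(max_stages):
        for _ in range(epochs_per_stage):        # plain gradient descent
            H, out = forward(X)
            err = out - y
            dH = (err @ W2.T) * (1.0 - H**2)     # backprop through tanh
            W2 -= lr * H.T @ err / len(X)
            b2 -= lr * err.mean(0)
            W1 -= lr * X.T @ dH / len(X)
            b1 -= lr * dH.mean(0)
        loss = float(((forward(X)[1] - y) ** 2).mean())
        if prev_loss - loss < tol:               # error has plateaued
            if W1.shape[1] >= max_hidden:
                break                            # stop growing; done
            # grow the topology: append one randomly initialized hidden unit
            W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in, 1))])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
        prev_loss = loss
    return W1.shape[1], loss

# Toy usage: XOR as regression, which a single tanh hidden unit cannot
# represent well, so the network is pushed to grow.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
n_hidden, final_loss = train_growing_net(X, y)
```

The plateau test (`prev_loss - loss < tol`) is one of the simplest possible growth criteria; the surveyed literature uses many alternatives, such as correlation-based candidate training in cascade-correlation [9].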


References

  1. E. Alpaydin. Grow-and-learn: An incremental method for category learning. In Proc. INNC '90, volume 2, pages 761–764, Dordrecht, The Netherlands, 1990. Kluwer.

  2. T. Ash. Dynamic node creation in backpropagation networks. Connection Science, 1(4):365–375, 1989.

  3. R. L. Barron. Learning networks improve computer-aided prediction and control. Computer Design, pages 65–70, August 1975.

  4. E. B. Baum and K. J. Lang. Constructing hidden units using examples and queries. In R. P. Lippmann et al., editors, NIPS 3. Morgan Kaufmann, 1991.

  5. I. Bellido and G. Fernández. Backpropagation growing networks: Towards local minima elimination. In A. Prieto, editor, Artificial Neural Networks; Proc. IWANN '91, pages 130–135, Heidelberg, Germany, 1991. Springer.

  6. B. Bonnlander and M. C. Mozer. Latticed RBF networks: An alternative to constructive methods. In NIPS 5, 1993.

  7. G. Deffuant. Neural units recruitment algorithm for generation of decision trees. In Proc. IJCNN '90 San Diego, volume I, pages 637–642, Ann Arbor, MI, 1990. IEEE Neural Networks Council; Edward Brothers.

  8. J. Diederich. Connectionist recruitment learning. In Proc. of the 8th European Conference on A.I., pages 351–356, 1988.

  9. S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. Technical Report CMU-CS-90-100, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1990.

  10. S. E. Fahlman. The recurrent cascade-correlation architecture. Technical Report CMU-CS-91-100, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, May 17, 1991.

  11. E. Fiesler. Partially connected ontogenic high order neural networks. Technical Report 92-02, IDIAP, Martigny, Switzerland, August 1992.

  12. M. Frean. The upstart algorithm: A method for constructing and training feedforward neural networks. Neural Computation, 2(2):198–209, 1990.

  13. B. Fritzke. Let it grow - self-organizing feature maps with problem dependent cell structure. In T. Kohonen et al., editors, Artificial Neural Networks (ICANN-91), volume 1, pages 403–408, Amsterdam, The Netherlands, 1991. North-Holland; Elsevier Science Publishing Company B.V.

  14. B. Fritzke. Unsupervised clustering with growing cell structures. In Proc. IJCNN '91 Seattle, volume II, pages 531–536, 1991.

  15. B. Fritzke and P. Wilke. Flexmap - a neural network with linear time and space complexity for the traveling salesman problem. In Proc. IJCNN-91, 1991.

  16. M. Golea and M. Marchand. A growth algorithm for neural network decision trees. Europhysics Letters, 12(3):205–210, 1990.

  17. M. Hagiwara. Novel back propagation algorithm for reduction of hidden units and acceleration of convergence using artificial selection. In Proc. IJCNN '90 San Diego, volume I, pages 625–630, Ann Arbor, MI, 1990. IEEE Neural Networks Council; Edward Brothers.

  18. S. J. Hanson. Meiosis networks. In D. S. Touretzky, editor, NIPS 2, pages 533–541, San Mateo, CA, 1990. Morgan Kaufmann.

  19. B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In NIPS 5, 1993.

  20. Y. Hirose, K. Yamashita, and S. Hijiya. Back-propagation algorithm which varies the number of hidden units. Neural Networks, 4:61–66, 1991.

  21. V. Honavar and L. Uhr. A network of neuron-like units that learns to perceive by generation as well as reweighting of its links. In D. Touretzky et al., editors, Proc. of the 1988 Connectionist Models Summer School, pages 472–484, San Mateo, CA, 1988. Morgan Kaufmann.

  22. V. Honavar and L. Uhr. Generative learning structures and processes for generalized connectionist networks. Connection Science, 1:139–159, 1989.

  23. A. G. Ivakhnenko. The group method of data handling - a rival of stochastic approximation. Soviet Automatic Control, 1:43–55, 1968.

  24. V. Kadirkamanathan and M. Niranjan. Application of an architecturally dynamic network for speech pattern classification. In Proc. of the Institute of Acoustics, volume 14, 1992.

  25. V. Kadirkamanathan and M. Niranjan. A function estimation approach to sequential learning with neural networks. Technical Report CUED/F-INFENG/TR.111, Cambridge University Engineering Department, Cambridge, UK, September 13, 1992.

  26. J. Klotz and E. Fiesler. Ontogenic neural networks. Expert Systems (to be submitted).

  27. A. Lansner and Ö. Ekeberg. An associative network solving the "4-bit ADDER problem". In M. Caudill and C. Butler, editors, Proc. ICNN, volume II, pages 549–556, San Diego, CA, 1987. SOS Printing.

  28. Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, NIPS 2, pages 598–605, San Mateo, CA, 1990. Morgan Kaufmann.

  29. M. Mézard and J.-P. Nadal. Learning in feedforward layered networks: The tiling algorithm. J. of Physics A, 22(12):2191–2203, 1989.

  30. J.-P. Nadal. Study of a growth algorithm for a feedforward neural network. Int. J. of Neural Systems, 1(1):55–59, 1989.

  31. D. L. Reilly, L. N. Cooper, and C. Elbaum. A neural model for category learning. Biological Cybernetics, 45:35–41, 1982.

  32. M. Ring. Sequence learning with incremental high-order neural networks. Technical Report AI 93-193, Dept. of Computer Science, University of Texas at Austin, January 1993.

  33. P. Ruján and M. Marchand. Learning by minimizing resources in neural networks. Complex Systems, 3:229–241, 1989.

  34. J. A. Sirat and J.-P. Nadal. Neural trees: A new tool for classification. Network: Computation in Neural Systems, 1:423–438, 1990.

  35. M. F. Tenorio and W.-T. Lee. Self organizing neural networks for the identification problem. In D. S. Touretzky, editor, NIPS 1, pages 57–64, San Mateo, CA, 1989. Morgan Kaufmann.

  36. H. H. Thodberg. Improving generalization of neural networks through pruning. Int. J. of Neural Systems, 1(4):317–326, 1991.

  37. W. X. Wen, H. Liu, and A. Jennings. Self-generating neural networks. In Proc. IJCNN '92 Baltimore, 1992.

  38. R. Zollner, H. J. Schmitz, F. Wünch, and U. Krey. Fast generating algorithm for a general three-layer perceptron. Neural Networks, 5(5):771–777, September–October 1992.


Copyright information

© 1994 Springer-Verlag London Limited


Cite this paper

Fiesler, E. (1994). Comparative Bibliography of Ontogenic Neural Networks. In: Marinaro, M., Morasso, P.G. (eds) ICANN ’94. ICANN 1994. Springer, London. https://doi.org/10.1007/978-1-4471-2097-1_188


  • DOI: https://doi.org/10.1007/978-1-4471-2097-1_188

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-19887-1

  • Online ISBN: 978-1-4471-2097-1

