Cite this article
Honavar, V. Book Review: Neural Network Design and the Complexity of Learning, by J. Stephen Judd. Cambridge, MA: MIT Press, 1990. Machine Learning 9, 95–98 (1992). https://doi.org/10.1023/A:1022680813848