Abstract
Artificial Neural Networks (ANNs) are computational systems loosely inspired by the structure and operation of the biological brain. They do not work in the same way as the brain, but they share its ability to adapt: rather than executing an explicitly programmed procedure, they adjust internal parameters (weights) in response to examples. This capacity to learn from experience suits them to pattern-recognition and classification problems that are difficult to capture in conventional, rule-based programs, although they are far from solving all of the world's hard computational problems.
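As a minimal, purely illustrative sketch (not drawn from the chapter itself), the fragment below shows the simplest concrete sense in which a network "learns from experience": a single perceptron-style unit, in the spirit of Rosenblatt's rule, has its weights nudged toward the correct answer each time it misclassifies a training example. The toy task (logical AND), the learning rate and the variable names are assumptions made only for this example.

```python
import numpy as np

# Illustrative sketch only (not from the chapter): one threshold unit
# trained with the perceptron rule on a toy linearly separable task.

def step(activation):
    """Hard-threshold activation: the unit fires (1) if its net input is positive."""
    return 1 if activation > 0 else 0

# Training examples: two inputs plus a constant bias input of 1.
inputs = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
targets = np.array([0, 0, 0, 1])  # logical AND of the first two inputs

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=3)  # small random initial weights
learning_rate = 0.1                      # assumed value for the demo

# "Learning from experience": present each example repeatedly and move the
# weights in proportion to the error the unit makes on it.
for epoch in range(20):
    for x, t in zip(inputs, targets):
        y = step(weights @ x)
        weights += learning_rate * (t - y) * x

print("learned weights:", weights)
print("outputs:", [step(weights @ x) for x in inputs])
```

Because the task is linearly separable, this update rule converges to weights that classify all four patterns correctly; the chapter's interest lies in the architectures and algorithms that go beyond this single-unit case.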
Copyright information
© 1995 Springer Science+Business Media New York
About this chapter
Cite this chapter
Murray, A. (1995). Neural Architectures and Algorithms. In: Murray, A.F. (ed.) Applications of Neural Networks. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-2379-3_1
DOI: https://doi.org/10.1007/978-1-4757-2379-3_1
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4419-5140-3
Online ISBN: 978-1-4757-2379-3