Abstract
Recent advances in “neural” computation models [1] will only demonstrate their true value with the introduction of parallel computer architectures designed to optimise the execution of these models. Many special-purpose neural network hardware implementations are currently under way [2–4]. While these machines may realise the potential of specific models, the problem of designing a “general-purpose” neural computer has not yet been properly addressed. Such a neural computer should provide a framework for executing neural models in much the same way that traditional computers address the number-crunching problems to which they are best suited. This framework must include a means of programming (i.e. an operating system and programming languages), and the hardware must be reconfigurable in some manner.
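As a concrete instance of the class of neural models such an architecture would execute, consider the Hopfield associative memory [8]: every neuron computes a weighted sum of all other neurons' states in parallel, which is exactly the fine-grained concurrency the abstract argues special-purpose hardware should exploit. The following is a minimal sketch only; NumPy's matrix product stands in for the parallel hardware, and all function names are illustrative, not from the paper.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; zero diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous update: every neuron's weighted sum is computed at once
    (W @ state), mimicking one fully parallel hardware step."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Store one bipolar pattern, then recover it from a corrupted probe.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
probe = pattern.copy()
probe[0] = -probe[0]        # flip one bit
print(recall(W, probe))     # -> [ 1 -1  1 -1  1 -1], the stored pattern
```

A "general-purpose" neural computer in the paper's sense would let the update rule and connectivity (here hard-coded as a dense matrix and a sign threshold) be reprogrammed, rather than fixing one model in silicon.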
References
1. Rumelhart, D.E. and McClelland, J.L., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1 & 2, MIT Press, Cambridge, Mass., 1986.
2. Ackley, D.H., Hinton, G.E., and Sejnowski, T.J., “A learning algorithm for Boltzmann machines,” Cognitive Science, vol. 9, pp. 147–169, 1985.
3. Graf, H.P., Jackel, L.D., Howard, R.E., et al., “VLSI implementation of a neural network memory with several hundreds of neurons,” in AIP Conference Proceedings 151, ed. Denker, J.S., American Institute of Physics, Snowbird, UT, April 1986.
4. Sivilotti, M.A., Emerling, M.R., and Mead, C.A., “VLSI architectures for implementation of neural networks,” in AIP Conference Proceedings 151, ed. Denker, J.S., American Institute of Physics, Snowbird, UT, April 1986.
5. Feldman, J., “Dynamic connections in neural networks,” Biological Cybernetics, vol. 46, 1982.
6. Marr, D., “A theory of cerebellar cortex,” J. Physiol., vol. 202, pp. 437–470, 1969.
7. Ballard, D.H., “Cortical connections and parallel processing: structure and function,” The Behavioral and Brain Sciences, vol. 9, pp. 67–120, 1986.
8. Hopfield, J.J., “Neural networks and physical systems with emergent collective computational abilities,” Proc. Nat. Acad. Sci., vol. 79, pp. 2554–2558, 1982.
9. Treleaven, P.C. et al., “Computer architectures for artificial intelligence,” in Lecture Notes in Computer Science, vol. 272, pp. 416–492, Springer-Verlag, 1987.
10. Seitz, C.L., “Concurrent VLSI architectures,” IEEE Trans. on Computers, vol. C-33, no. 12, pp. 1247–1264, 1984.
11. Hillis, W.D., The Connection Machine, MIT Press, 1985.
12. Fisher, A.L. et al., “Architecture of the PSC: a programmable systolic chip,” Proc. Tenth Int. Symp. on Computer Architecture, pp. 48–53, June 1983.
Copyright information
© 1989 Springer-Verlag Berlin Heidelberg
Cite this paper
Recce, M., Treleaven, P.C. (1989). Parallel Architectures for Neural Computers. In: Eckmiller, R., v.d. Malsburg, C. (eds) Neural Computers. Springer Study Edition, vol 41. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-83740-1_49
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-50892-2
Online ISBN: 978-3-642-83740-1