Abstract
The multilayer perceptron, when operated in auto-association mode, is sometimes considered an interesting candidate for data compression or for dimensionality reduction of the feature space in information processing applications. The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low-rank matrix approximation, similar in spirit to the well-known Karhunen-Loève transform. This approach thus appears to be an efficient alternative to the general error back-propagation algorithm commonly used for training multilayer perceptrons. Moreover, it also gives a clear interpretation of the role of the different parameters.
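The construction the abstract describes can be sketched numerically: for a linear auto-associator with p hidden units, the optimal encoder and decoder weights come from the top-p singular vectors of the (centered) data matrix, and by the Eckart-Young theorem no rank-p linear map can achieve a lower reconstruction error. The snippet below is a minimal illustration under assumed dimensions (n, d, p are arbitrary choices, not values from the paper); it uses NumPy rather than any code from the original work.

```python
import numpy as np

# Sketch: the optimal weights of a linear auto-associator with p hidden
# units are given by the top-p singular vectors of the centered data
# matrix, with no need for back-propagation.
rng = np.random.default_rng(0)
n, d, p = 200, 10, 3           # samples, input dimension, hidden units (assumed)

X = rng.standard_normal((n, 4)) @ rng.standard_normal((4, d))  # low-rank data
Xc = X - X.mean(axis=0)        # center, as in the Karhunen-Loève transform

# SVD of the data matrix; rows of Vt are the principal directions.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W_enc = Vt[:p].T               # d x p "input-to-hidden" weights
W_dec = Vt[:p]                 # p x d "hidden-to-output" weights

X_hat = Xc @ W_enc @ W_dec     # rank-p auto-associative reconstruction

# Eckart-Young: the squared reconstruction error equals the sum of the
# squared discarded singular values, and no rank-p map does better.
err = np.linalg.norm(Xc - X_hat, 'fro') ** 2
best = np.sum(s[p:] ** 2)
print(np.isclose(err, best))   # True
```

Note that the hidden-layer activations `Xc @ W_enc` are exactly the projections onto the first p principal components, which is the interpretation of the hidden units the paper's linear analysis provides.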
Cite this article
Bourlard, H., Kamp, Y. Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. 59, 291–294 (1988). https://doi.org/10.1007/BF00332918
Keywords
- Feature Space
- Dimensionality Reduction
- Processing Application
- Data Compression
- Multilayer Perceptrons