A New Algorithm for Learning Mahalanobis Discriminant Functions by a Neural Network
It is well known that a neural network can learn Bayesian discriminant functions. In the two-category normal-distribution case, shifting the logit transform of the network output by a constant approximates the corresponding Mahalanobis discriminant function. In previous work, we proposed an algorithm for estimating this constant, but it requires the network to be trained twice, and in one of the two trainings the teacher signals must be shifted by the mean vectors. In this paper, we propose a more efficient algorithm that estimates the constant while training the network only once.
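The relation stated above can be checked numerically. The sketch below assumes the simplest setting, two Gaussian classes with a shared covariance matrix (the paper's setting may be more general); all class parameters here are hypothetical. It shows that the logit of the Bayes posterior, shifted by the constant log(P1/P2), equals the Mahalanobis discriminant (D2² − D1²)/2, where Di is the Mahalanobis distance to class i:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class Gaussian setup with a shared covariance
# (an assumption made for this sketch).
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
P1, P2 = 0.6, 0.4  # prior probabilities

def mahalanobis_sq(x, mu):
    """Squared Mahalanobis distance of x from mean mu."""
    d = x - mu
    return d @ Sigma_inv @ d

def posterior_class1(x):
    """Bayes posterior P(class 1 | x); normalizing constants cancel."""
    g1 = np.exp(-0.5 * mahalanobis_sq(x, mu1))
    g2 = np.exp(-0.5 * mahalanobis_sq(x, mu2))
    return P1 * g1 / (P1 * g1 + P2 * g2)

def logit(p):
    return np.log(p / (1.0 - p))

x = rng.normal(size=2)
# Logit of the posterior, shifted by the constant log(P1/P2),
# equals the Mahalanobis discriminant 0.5 * (D2^2 - D1^2).
lhs = logit(posterior_class1(x)) - np.log(P1 / P2)
rhs = 0.5 * (mahalanobis_sq(x, mu2) - mahalanobis_sq(x, mu1))
assert np.isclose(lhs, rhs)
```

A trained network whose output approximates the posterior would thus yield the Mahalanobis discriminant after the same constant shift; estimating that constant from training data is what the proposed algorithm addresses. With unequal class covariances an additional log-determinant term enters the shift.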
Keywords: Neural Network, Discriminant Function, Mahalanobis Distance, Output Unit, Network Output
- 6. Ito, Y., Srinivasan, C., Izumi, H.: Bayesian Learning of Neural Networks Adapted to Changes of Prior Probabilities. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3697, pp. 253–259. Springer, Heidelberg (2005)