A New Approximation Method of the Quadratic Discriminant Function
For many statistical pattern recognition methods, the distributions of sample vectors are assumed to be normal, and the quadratic discriminant function derived from the probability density function of the multivariate normal distribution is used for classification. However, its computational cost is O(n²) for n-dimensional vectors. Moreover, if there are not enough training samples, the covariance matrix cannot be estimated accurately. When the dimensionality is large, these disadvantages markedly reduce classification performance. To avoid these problems, this paper proposes a new approximation of the quadratic discriminant function, obtained by replacing the small eigenvalues of the covariance matrix with a constant estimated by maximum likelihood. This approximation not only reduces the computational cost but also improves classification accuracy.
Keywords: Probability Density Function, Training Sample, Small Eigenvalue, Multivariate Normal Distribution, Minimum Description Length
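The approximation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `approx_qdf` and the choice of the constant as the mean of the discarded small eigenvalues (their maximum likelihood estimate under the assumption that they share a common value) are assumptions made here for concreteness.

```python
import numpy as np

def approx_qdf(x, mean, cov, k):
    """Quadratic discriminant value for x, keeping the k largest eigenvalues
    of the covariance matrix and replacing the rest by a single constant.

    The constant is taken here as the mean of the discarded eigenvalues
    (an ML estimate if they are assumed equal) -- an illustrative choice.
    """
    n = len(mean)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    if k < n:
        delta = eigvals[:n - k].mean()      # constant replacing small eigenvalues
        lam = np.concatenate([np.full(n - k, delta), eigvals[n - k:]])
    else:
        lam = eigvals                       # k = n recovers the full QDF
    d = x - mean
    proj = eigvecs.T @ d                    # coordinates in the eigenbasis
    # Mahalanobis-style term plus the log-determinant term of the QDF
    return (proj ** 2 / lam).sum() + np.log(lam).sum()
```

With k = n this reduces to the ordinary quadratic discriminant function; choosing k < n means only the k dominant eigenvectors need be stored and projected onto, which is where the computational saving comes from.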