Linear discriminant analysis (LDA) is a traditional solution to the linear dimension reduction (LDR) problem, based on maximizing the between-class scatter over the within-class scatter. This solution cannot deal with heteroscedastic data in a proper way, because of the implicit assumption that the covariance matrices of all the classes are equal. Hence, discriminatory information in the differences between the covariance matrices is not used and, as a consequence, the data can only be reduced to a single dimension in the two-class case. We propose a fast, non-iterative, eigenvector-based LDR technique for heteroscedastic two-class data, which generalizes and improves upon LDA by addressing this problem. For this purpose, we use the concept of directed distance matrices, which generalizes the between-class covariance matrix such that it captures the differences in (co)variances.
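To make the limitation concrete, a minimal sketch of classical two-class Fisher LDA is given below. It pools the class scatters into a single within-class scatter matrix, so with two classes only one discriminant direction exists; any discriminatory information in the difference between the class covariances is discarded. This sketches the baseline the paper improves upon, not the proposed heteroscedastic technique itself; the function name and variable names are illustrative.

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Classical two-class Fisher LDA direction.

    Maximizes between-class over within-class scatter; the optimum is
    w proportional to S_W^{-1} (m1 - m0). Because the class covariances are
    pooled into S_W, their differences carry no weight, and the two-class
    problem yields only a single reduced dimension.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the (unnormalized) class scatters.
    S0 = (X0 - m0).T @ (X0 - m0)
    S1 = (X1 - m1).T @ (X1 - m1)
    Sw = S0 + S1
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

On data where the class means coincide but the covariances differ, the mean difference is near zero and this direction becomes uninformative, which is exactly the heteroscedastic case the directed-distance-matrix generalization is designed to handle.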
Keywords: Linear Discriminant Analysis · Covariance Matrices · Eigenvalue Decomposition · Discriminatory Information · Fisher Criterion