On Optimizing Subclass Discriminant Analysis Using a Pre-clustering Technique
Subclass Discriminant Analysis (SDA) is a dimensionality reduction method that has proven successful for various types of class distributions. Its advantage is that, because it does not treat the class-conditional distributions as unimodal, nonlinearly separable problems can be handled as linear ones. The drawback of this strategy, however, is that estimating the number of subclasses needed to represent the distribution of each class, i.e., finding the best partition, requires verifying all possible solutions, which incurs a high computational cost. In this paper, we propose a method that reduces the computational burden of SDA-based classification by limiting the number of classes to be examined: only a few classes of the training set are selected for partitioning prior to the execution of SDA. The intra-set distance is employed as the criterion for selecting the classes to be partitioned, and k-means clustering is performed to divide them. Our experimental results on an artificial data set and two face databases demonstrate that the processing CPU time of the optimized SDA can be reduced dramatically without sacrificing classification accuracy.
Keywords: Dimensionality Reduction · Subclass Discriminant Analysis · Clustering
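The pre-clustering step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intra-set distance is taken here as the mean squared pairwise distance within a class, the number of classes to split (`n_select`) and the number of subclasses per class (`k`) are assumed parameters, and a plain Lloyd's k-means stands in for whatever clustering variant the paper uses. The relabeled subclass targets would then be passed to SDA (or, as a stand-in, LDA).

```python
import numpy as np

def intra_set_distance(X):
    """Mean squared pairwise distance within a set of samples
    (one common definition of the intra-set distance)."""
    mu = X.mean(axis=0)
    # Identity: mean pairwise squared distance equals twice the mean
    # squared distance to the centroid (up to an n/(n-1) factor).
    return 2.0 * np.mean(np.sum((X - mu) ** 2, axis=1))

def select_classes_to_split(X, y, n_select):
    """Labels of the n_select classes with the largest intra-set
    distance -- the candidates for subclass partitioning."""
    labels = np.unique(y)
    scores = [intra_set_distance(X[y == c]) for c in labels]
    order = np.argsort(scores)[::-1]
    return labels[order[:n_select]]

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm) used to split one class
    into k subclasses; returns the cluster assignment per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

def pre_cluster(X, y, n_select=1, k=2):
    """Relabel the samples of the selected classes into k subclasses
    each; the resulting labels can be fed to SDA in place of y."""
    sub_y = y.astype(object)
    for c in select_classes_to_split(X, y, n_select):
        idx = np.where(y == c)[0]
        assign = kmeans(X[idx], k)
        for i, a in zip(idx, assign):
            sub_y[i] = f"{c}.{a}"  # subclass label, e.g. "0.1"
    return sub_y
```

Because only the classes with the largest intra-set distance are partitioned, the expensive search over all possible subclass partitions is avoided; the remaining classes keep their original (unimodal) labels.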