The previous chapters showed that the number of algorithms for designing linear and non-linear classification rules is very large, and that a great variety of strategies exists for developing a classification algorithm from the design set. Numerous parametric statistical algorithms have been proposed for the standard Gaussian or exponential families of multivariate distribution densities, and differing assumptions about the covariance matrices and their estimation methods give rise to a great variety of algorithms. Apart from mixture models, nonparametric density estimation methods lead to different modifications of the Parzen window and k-NN classification rules, while piecewise-linear and MLP-based models further enlarge the set of potentially applicable algorithms. The fact that many algorithms depend on continuous parameters, such as smoothing and regularisation constants or the number of iterations, widens the range of possibilities still further. The abundance of algorithms is multiplied by feature selection, feature extraction and ANN architecture selection techniques, and the integrated approach to designing the classification rule adds yet more alternatives.
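To make the contrast between two of the nonparametric families mentioned above concrete, the following sketch classifies a query point with both a k-NN rule and a Parzen-window rule on a small hand-made two-class data set. The data, the kernel bandwidth h and the neighbourhood size k are illustrative assumptions, not values from the text; they stand in for the continuous design parameters the chapter refers to.

```python
import numpy as np

# Hypothetical toy design set: two well-separated pattern classes.
X = np.array([[-2.0, -2.0], [-2.0, -1.0], [-1.0, -2.0], [-1.0, -1.0],
              [ 1.0,  1.0], [ 1.0,  2.0], [ 2.0,  1.0], [ 2.0,  2.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def knn_classify(x, X, y, k=3):
    """k-NN rule: majority vote among the k nearest design-set vectors."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

def parzen_classify(x, X, y, h=0.5):
    """Parzen-window rule with a Gaussian kernel; h is the smoothing constant."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h * h))       # kernel weight of each design vector
    return int(w[y == 1].sum() > w[y == 0].sum())

x = np.array([1.5, 1.5])                  # query pattern near the second class
print(knn_classify(x, X, y), parzen_classify(x, X, y))
```

Both rules assign the query point to the second class here, but their decisions diverge as k or h is varied, which is exactly why each parameter setting effectively yields a different classifier.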
Keywords: Feature Selection, Classification Error, Feature Subset, Generalisation Error, Pattern Class