Model Selection

Šarūnas Raudys
Part of the Advances in Pattern Recognition book series (ACVPR)


In the previous chapters it was shown that the number of algorithms for designing linear and non-linear classification rules is very large, and that a great variety of strategies exist for developing a classification algorithm from the design set. A number of parametric statistical algorithms have been suggested for the standard Gaussian or exponential families of multivariate distribution densities. A great variety of algorithms arises from differing assumptions about the covariance matrices and their estimation methods. Apart from the mixture models, there exist nonparametric density estimation methods that lead to different modifications of the Parzen window and k-NN classification rules. Piecewise-linear and MLP-based models increase the number of potentially applicable algorithms further. The fact that many algorithms depend on continuous parameters, such as smoothing and regularisation constants and the number of iterations, widens the range of possibilities still further. The abundance of algorithms is multiplied many times over by feature selection, feature extraction and ANN architecture selection techniques. The integrated approach to designing the classification rule also increases the number of possible alternatives.
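Choosing among such an abundance of candidate rules is typically done by comparing error estimates on the design set. The following is a minimal, illustrative sketch (not from the book): leave-one-out estimates of the classification error are used to select among a few competing rules — 1-NN, 3-NN and a nearest-mean classifier — on synthetic two-class data. All names and the data-generating assumptions here are the sketch's own.

```python
import random
from statistics import mean

random.seed(0)

# Two synthetic Gaussian pattern classes in 2-D (illustrative only).
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(40)] + \
       [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(40)]

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def knn_predict(train, x, k):
    """Majority vote among the k nearest design-set neighbours of x."""
    neighbours = sorted(train, key=lambda p: dist2(p[0], x))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)

def nearest_mean_predict(train, x):
    """Assign x to the class whose sample mean vector is closest."""
    dists = {}
    for label in (0, 1):
        pts = [p for p, l in train if l == label]
        mu = (mean(px for px, _ in pts), mean(py for _, py in pts))
        dists[label] = dist2(mu, x)
    return min(dists, key=dists.get)

def loo_error(predict):
    """Leave-one-out estimate of the classification error."""
    errors = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        errors += predict(train, x) != y
    return errors / len(data)

candidates = {
    "1-NN": lambda tr, x: knn_predict(tr, x, 1),
    "3-NN": lambda tr, x: knn_predict(tr, x, 3),
    "nearest mean": nearest_mean_predict,
}
scores = {name: loo_error(f) for name, f in candidates.items()}
best = min(scores, key=scores.get)  # the selected model
```

The leave-one-out estimate stands in here for the generalisation-error estimators discussed in the chapter; the same selection loop applies unchanged when the candidate set is enlarged by varying smoothing or regularisation constants.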





Copyright information

© Springer-Verlag London Limited 2001

Authors and Affiliations

  • Šarūnas Raudys
  1. Data Analysis Department, Institute of Mathematics and Informatics, Vilnius, Lithuania
