Sifting the Margin – An Iterative Empirical Classification Scheme
- Cite this paper as:
- Vance D., Ralescu A. (2004) Sifting the Margin – An Iterative Empirical Classification Scheme. In: Zhang C., Guesgen H.W., Yeap W.K. (eds) PRICAI 2004: Trends in Artificial Intelligence. PRICAI 2004. Lecture Notes in Computer Science, vol 3157. Springer, Berlin, Heidelberg
Attribute or feature selection is an important step in designing a classifier. It often reduces to choosing between computationally simple schemes (based on a small subset of attributes) that do not search the space and more complex schemes (a large subset, or the entire set, of available attributes) that are computationally intractable. Usually a compromise is reached: a computationally tractable scheme that relies on a subset of attributes optimizing a certain criterion is chosen. The result is usually a 'good' sub-optimal solution that may still require a fair amount of computation. This paper presents an approach that does not commit itself to any particular subset of the available attributes. Instead, the classifier uses each attribute successively, as needed, to classify a given data point. If the data set is separable in the given attribute space, the algorithm classifies a given point with no errors. The resulting classifier is transparent, and the approach compares favorably with previous approaches in both accuracy and efficiency.
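The abstract only outlines the scheme at a high level. As a hedged illustration of the iterative, attribute-at-a-time idea, the following sketch assumes that each class induces an observed value range per attribute, that a point is classified as soon as exactly one class's range contains it, and that ambiguous points are sifted on to the next attribute. The helper names (`fit_ranges`, `classify`) and the range-based margin are illustrative assumptions, not the paper's actual formulation.

```python
def fit_ranges(X, y):
    """Per-attribute (min, max) range observed for each class in training data.

    Assumption: class ranges stand in for the paper's notion of a margin;
    where ranges overlap, an attribute alone cannot decide the class.
    """
    n_attrs = len(X[0])
    classes = sorted(set(y))
    ranges = []  # ranges[j][c] = (lo, hi) for attribute j, class c
    for j in range(n_attrs):
        per_class = {}
        for c in classes:
            vals = [x[j] for x, label in zip(X, y) if label == c]
            per_class[c] = (min(vals), max(vals))
        ranges.append(per_class)
    return ranges


def classify(point, ranges):
    """Use attributes successively; stop as soon as one class is unambiguous."""
    candidates = set(ranges[0].keys())
    for j, per_class in enumerate(ranges):
        matching = {c for c in candidates
                    if per_class[c][0] <= point[j] <= per_class[c][1]}
        if len(matching) == 1:   # point falls outside the overlap: decided
            return matching.pop()
        if matching:             # still ambiguous: sift on to next attribute
            candidates = matching
    return None                  # undecided after all attributes
```

On a data set separable in the given attribute space, every point eventually falls outside the overlap of the remaining candidate classes on some attribute, which matches the abstract's claim of error-free classification in the separable case.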