Abstract
A new, intuitive classification method is introduced that identifies characteristic regions in which classes are both dense and relevant for discrimination. The method produces a visualized result, giving the user insight into the data with respect to discrimination and allowing easy interpretation. In addition, it outperforms decision trees in many situations and is robust against outliers and missing values.
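The core idea of the abstract, regions where one class is both dense and discriminative, can be illustrated with a toy one-dimensional sketch. This is not the authors' actual algorithm: the function `characteristic_regions`, the histogram bin count `n_bins`, and the density threshold `density_thresh` are all illustrative assumptions.

```python
import numpy as np

def characteristic_regions(x, y, n_bins=10, density_thresh=0.1):
    """Toy region finder for one feature (illustrative only, not the
    paper's method): a histogram bin is a characteristic region for a
    class if it is dense (holds at least density_thresh of that class's
    points) and discriminative (that class supplies the majority of the
    bin's points)."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    classes = np.unique(y)
    counts = {c: np.histogram(x[y == c], bins=edges)[0] for c in classes}
    regions = {}  # class label -> list of (lo, hi) intervals
    for i in range(n_bins):
        total = sum(counts[c][i] for c in classes)
        if total == 0:
            continue  # empty bin: no evidence either way
        for c in classes:
            dense = counts[c][i] >= density_thresh * (y == c).sum()
            dominant = counts[c][i] > total / 2
            if dense and dominant:
                regions.setdefault(c, []).append((edges[i], edges[i + 1]))
    return regions

def classify(value, regions):
    """Assign the class whose characteristic regions contain the value;
    return None if the value lies in no region."""
    for c, intervals in regions.items():
        if any(lo <= value <= hi for lo, hi in intervals):
            return c
    return None
```

For two well-separated Gaussian classes, the regions recovered this way cluster around each class's mode, so points near a mode are assigned the matching label; the interval lists themselves can be plotted per feature, which hints at the kind of visualized, interpretable output the abstract describes.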
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Szepannek, G., Luebke, K., Weihs, C. (2005). Understanding Patterns with Different Subspace Classification. In: Perner, P., Imiya, A. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2005. Lecture Notes in Computer Science(), vol 3587. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11510888_12
Print ISBN: 978-3-540-26923-6
Online ISBN: 978-3-540-31891-0