Up to this point, a great variety of methods for regression and classification have been presented. Recall that, for regression, there were the Early methods, such as bin smoothers and running-line smoothers; Classical methods, such as kernel, spline, and nearest-neighbor methods; New Wave methods, such as additive models, projection pursuit, neural networks, trees, and MARS; and Alternative methods, such as ensembles, relevance vector machines, and Bayesian nonparametrics. In the classification setting, apart from treating a classification problem as a regression problem in which the function assumes only the values zero and one, the main techniques seen here are linear discriminant analysis, tree-based classifiers, support vector machines, and relevance vector machines. All of these methods are in addition to the various versions of linear models assumed to be familiar.
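As a concrete illustration of two of the Classical regression methods recalled above, the following sketch fits a Nadaraya-Watson kernel smoother and a k-nearest-neighbor smoother to a small synthetic data set and compares their mean squared errors against the true function. This is not code from the chapter; the bandwidth, neighborhood size, and test function are arbitrary choices for illustration.

```python
import math
import random

def kernel_smoother(xs, ys, x0, h=0.05):
    """Nadaraya-Watson estimate at x0: a weighted average of the responses,
    with Gaussian kernel weights of bandwidth h centered at x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def knn_smoother(xs, ys, x0, k=5):
    """k-nearest-neighbor estimate at x0: the average of the k responses
    whose design points are closest to x0."""
    nearest = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

random.seed(0)
xs = [i / 50 for i in range(51)]                                  # grid on [0, 1]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.2) for x in xs]  # noisy sine

def mse(fit):
    """Mean squared error of a fitted curve against the true sine signal."""
    return sum((fit(x) - math.sin(2 * math.pi * x)) ** 2 for x in xs) / len(xs)

mse_kernel = mse(lambda x: kernel_smoother(xs, ys, x))
mse_knn = mse(lambda x: knn_smoother(xs, ys, x))
```

Both smoothers recover the underlying signal to within the noise level on this example; which one wins depends on the bandwidth and neighborhood size, which is exactly the kind of tuning the computational comparisons in this chapter examine.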
© 2009 Springer-Verlag New York
Clarke, B., Fokoué, E., Zhang, H.H. (2009). Computational Comparisons. In: Principles and Theory for Data Mining and Machine Learning. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-98135-2_7
Print ISBN: 978-0-387-98134-5
Online ISBN: 978-0-387-98135-2
eBook Packages: Mathematics and Statistics