Abstract
In this chapter, we focus on the simplest class of methods in machine learning: linear methods. Although linear methods are relatively easy to understand, they illustrate the fundamental concepts of machine learning, and they span a nice cross section of supervised and unsupervised techniques. We study these concepts, followed by linear regression. Regularization techniques are also a crucial aspect of machine learning, and we study them here in the context of linear methods. We then see how these methods generalize through the use of nonlinear link functions.
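The linear regression and regularization ideas summarized above can be sketched in a few lines of NumPy. The following is a minimal illustration, not code from the chapter: the data, the regularization strength `lam`, and the closed-form ridge solution are all illustrative assumptions.

```python
import numpy as np

# Sketch of L2-regularized (ridge) linear regression via its closed-form
# solution: w = (X^T X + lambda * I)^{-1} X^T y.
# Synthetic data and lambda are illustrative choices, not from the chapter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])     # assumed "true" coefficients
y = X @ true_w + 0.1 * rng.normal(size=100)  # targets with small noise

lam = 1.0                               # regularization strength
# Solve (X^T X + lam * I) w = X^T y instead of inverting explicitly.
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w_hat)  # estimates close to true_w, shrunk slightly toward zero
```

Note that the penalty shrinks the estimated coefficients toward zero relative to ordinary least squares; as `lam` grows, the shrinkage becomes more pronounced.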
Notes
- 1.
The Lagrangian method is a commonly used way of integrating the regularization constraints into the optimization problem, thereby creating a single unconstrained optimization problem.
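As a sketch of this idea (the notation here is assumed for illustration, not taken from the chapter), a constrained least-squares problem such as

```latex
\min_{w} \; \| y - Xw \|_2^2 \quad \text{subject to} \quad \| w \|_2^2 \le t
```

is converted, via a Lagrange multiplier $\lambda \ge 0$ attached to the constraint, into the single penalized objective

```latex
\min_{w} \; \| y - Xw \|_2^2 + \lambda \, \| w \|_2^2
```

where larger values of $\lambda$ correspond to tighter constraints (smaller $t$), i.e., stronger regularization.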
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Joshi, A.V. (2023). Linear Methods. In: Machine Learning and Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-031-12282-8_5
DOI: https://doi.org/10.1007/978-3-031-12282-8_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-12281-1
Online ISBN: 978-3-031-12282-8
eBook Packages: Computer Science, Computer Science (R0)