Abstract
In this chapter, we introduce the mathematical background needed to follow the rest of the book. We first review probability theory, including probability spaces, random variables, probability distributions, expectations, and variances. We then cover the basics of linear algebra and matrix computation, such as matrix operations and their properties, eigenvalues, and eigenvectors. Finally, we briefly review convex optimization, covering convex sets and convex functions, conditions for convexity, Lagrangian duality, and the KKT conditions.
Keywords
Probability Density Function · Convex Function · Quadratic Programming · Dual Problem · Orthogonal Matrix
Copyright information
© Springer-Verlag Berlin Heidelberg 2011