Abstract
Linear algebra is the basis of logical constructions in any science. In this chapter, we learn about inverse matrices, determinants, linear independence, vector spaces and their dimensions, eigenvalues and eigenvectors, orthonormal bases and orthogonal matrices, and the diagonalization of symmetric matrices. In this book, to grasp the essence concisely, we define ranks and determinants based on the notion of Gaussian elimination and consider linear spaces and their inner products within the range of Euclidean space and the standard inner product. By reading this chapter, readers should come to understand the reasons why these constructions work.
Notes
1. In general, any subset V that satisfies (1.2) is said to be a vector space with scalars in \(\mathbb R\).
2. In general, for vector spaces V and W, we say that f : V → W is a linear map if f(x + y) = f(x) + f(y) for \(x, y \in V\) and \(f(ax) = af(x)\) for \(a \in \mathbb{R}\), x ∈ V.
3. In general, we say that the map (⋅, ⋅) is an inner product of V if (u + u′, v) = (u, v) + (u′, v), (cu, v) = c(u, v), (u, v) = (v, u), and u ≠ 0 ⇒ (u, u) > 0 for u, u′, v ∈ V, where \(c\in {\mathbb R}\).
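As a concrete check of Note 3, the standard inner product \((u, v) = u^Tv\) on \({\mathbb R}^n\) satisfies all four axioms. A minimal NumPy sketch (the helper name `ip` and the random test vectors are ours, for illustration only):

```python
import numpy as np

def ip(u, v):
    """Standard inner product on R^n: (u, v) = u^T v."""
    return float(u @ v)

rng = np.random.default_rng(0)
u, u2, v = rng.normal(size=(3, 5))  # three arbitrary vectors in R^5
c = 1.7

# Additivity: (u + u', v) = (u, v) + (u', v)
assert np.isclose(ip(u + u2, v), ip(u, v) + ip(u2, v))
# Homogeneity: (cu, v) = c(u, v)
assert np.isclose(ip(c * u, v), c * ip(u, v))
# Symmetry: (u, v) = (v, u)
assert np.isclose(ip(u, v), ip(v, u))
# Positive definiteness: u != 0 implies (u, u) > 0
assert ip(u, u) > 0
```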
Appendix: Proof of Propositions
Proposition 2
For square matrices A and B of the same size, we have \(\det (AB)=\det (A)\det (B)\) and \(\det (A^T)=\det (A)\).
Proof
For Steps 1, 2, and 3, we multiply the following matrices from the left:

\(V_i(\alpha)\): a unit matrix in which the (i, i)th element has been replaced with α.

\(U_{i,j}\): a unit matrix in which the (i, i)th and (j, j)th elements have been replaced by zero and the (i, j)th and (j, i)th elements by one.

\(W_{i,j}(\beta)\): a unit matrix in which the (i, j)th zero (i ≠ j) has been replaced by −β.
Then, for \(B\in {\mathbb R}^{n\times n}\),

\(\det(V_i(\alpha)B)=\alpha\det(B),\quad \det(U_{i,j}B)=-\det(B),\quad \det(W_{i,j}(\beta)B)=\det(B).\)  (1.4)

Since

\(\det(V_i(\alpha))=\alpha,\quad \det(U_{i,j})=-1,\quad \det(W_{i,j}(\beta))=1\)  (1.5)

holds, if we write matrix A as the product \(E_1\cdots E_r\) of matrices of the three types, then we have

\(\det(AB)=\det(E_1)\cdots\det(E_r)\det(B)=\det(A)\det(B).\)

On the other hand, since the matrices \(V_i(\alpha)\) and \(U_{i,j}\) are symmetric and \(W_{i,j}(\beta)^T = W_{j,i}(\beta)\), equations similar to (1.4) and (1.5) hold. Hence, we have

\(\det(A^T)=\det(E_r^T\cdots E_1^T)=\det(E_r)\cdots\det(E_1)=\det(A).\)
□
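The determinant effects of the three elementary matrices, and the two claims of Proposition 2, can be checked numerically. A minimal NumPy sketch (the helper names `V`, `U`, `W` and the random test matrices are ours, not from the text):

```python
import numpy as np

def V(n, i, alpha):
    """Unit matrix with the (i, i) element replaced by alpha (scales row i)."""
    E = np.eye(n)
    E[i, i] = alpha
    return E

def U(n, i, j):
    """Unit matrix with rows i and j swapped (swaps rows i and j)."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def W(n, i, j, beta):
    """Unit matrix with the (i, j) zero replaced by -beta (i != j)."""
    E = np.eye(n)
    E[i, j] = -beta
    return E

rng = np.random.default_rng(1)
A, B = rng.normal(size=(2, 4, 4))  # two arbitrary 4x4 matrices
d = np.linalg.det

# Effect of each elementary matrix on the determinant
assert np.isclose(d(V(4, 2, 3.0) @ B), 3.0 * d(B))
assert np.isclose(d(U(4, 0, 2) @ B), -d(B))
assert np.isclose(d(W(4, 1, 3, 0.5) @ B), d(B))
# The two claims of Proposition 2
assert np.isclose(d(A @ B), d(A) * d(B))
assert np.isclose(d(A.T), d(A))
```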
Proposition 4
Let V and W be subspaces of \({\mathbb R}^n\) and \({\mathbb R}^m\) , respectively. The image and kernel of the linear map V → W w.r.t. \(A\in {\mathbb R}^{m\times n}\) are subspaces of W and V , respectively, and the sum of the dimensions is n. The dimension of the image coincides with the rank of A.
Proof
Let r and \(x_1,\ldots,x_r \in V\) be the dimension and a basis of the kernel, respectively. We add \(x_{r+1},\ldots,x_n\), linearly independent of them, so that \(x_1,\ldots,x_r,x_{r+1},\ldots,x_n\) form a basis of V. It is sufficient to show that \(Ax_{r+1},\ldots,Ax_n\) form a basis of the image.
First, since \(x_1,\ldots,x_r\) are vectors in the kernel, we have \(Ax_1=\cdots=Ax_r=0\). For an arbitrary \(x=\sum_{j=1}^n b_jx_j\) with \(b_1,\ldots,b_n\in{\mathbb R}\), the image can be expressed as \(Ax=\sum_{j=r+1}^n b_jAx_j\), which is a linear combination of \(Ax_{r+1},\ldots,Ax_n\). Then, our goal is to show that \(Ax_{r+1},\ldots,Ax_n\) are linearly independent:

\(\sum_{i=r+1}^n b_iAx_i=0 \ \Longrightarrow\ b_{r+1}=\cdots=b_n=0.\)  (1.6)
If \(A\sum _{i=r+1}^n b_{i}x_{i}=0\), then \(\sum _{i=r+1}^nb_{i}x_{i}\) is in the kernel. Therefore, there exist \(b_1,\ldots,b_r\) such that \(\sum _{i=r+1}^n b_ix_i=-\sum _{i=1}^rb_ix_i\), which means that \(\sum _{i=1}^nb_ix_i=0\). However, we assumed that \(x_1,\ldots,x_n\) are linearly independent, which means that \(b_1=\cdots=b_n=0\), and claim (1.6) is proven. □
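Proposition 4 (rank-nullity) can be verified numerically: the rank of A gives the dimension of the image, and the trailing right-singular vectors of A span the kernel. A minimal NumPy sketch (the dimensions and the construction of the rank-3 test matrix are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 4, 6, 3
# A with rank 3: product of a 4x3 and a 3x6 random factor
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))

rank = np.linalg.matrix_rank(A)   # dimension of the image of x -> Ax
_, s, Vt = np.linalg.svd(A)
kernel = Vt[rank:].T              # orthonormal basis of the kernel (columns)

assert np.allclose(A @ kernel, 0)         # kernel vectors are mapped to 0
assert rank + kernel.shape[1] == n        # rank-nullity: dim im + dim ker = n
```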
Proposition 7
For any square matrix A, we can obtain an upper-triangular matrix \(P^{-1}AP\) using an orthogonal matrix P.
Proof
We prove the proposition by induction. For n = 1, since the matrix is a scalar, the claim holds. By the induction hypothesis, for an arbitrary \(\tilde {B}\in {\mathbb R}^{(n-1)\times (n-1)}\), there exists an orthogonal matrix \(\tilde {Q}\) such that

\(\tilde{Q}^{-1}\tilde{B}\tilde{Q}= \begin{bmatrix} \tilde{\lambda}_2 & & * \\ & \ddots & \\ 0 & & \tilde{\lambda}_n \end{bmatrix},\)

where ∗ represents the (possibly) nonzero elements and \(\tilde {\lambda }_2,\ldots ,\tilde {\lambda }_{n}\) are the eigenvalues of \(\tilde {B}\).
For a nonsingular matrix \(A\in {\mathbb R}^{n\times n}\) with eigenvalues \(\lambda_1,\ldots,\lambda_n\), allowing multiplicity, let \(u_1\) be an eigenvector of eigenvalue \(\lambda_1\) and R an orthogonal matrix whose first column is \(u_1\). Then, we have \(Re_1=u_1\) and \(Au_1=\lambda_1 u_1\), where \(e_1:=[1,0,\ldots ,0]^T\in {\mathbb R}^n\). Hence, we have

\(R^{-1}ARe_1=R^{-1}Au_1=\lambda_1 R^{-1}u_1=\lambda_1 e_1,\)

and we may express

\(R^{-1}AR= \begin{bmatrix} \lambda_1 & b \\ 0 & B \end{bmatrix},\)

where \(b\in {\mathbb R}^{1\times (n-1)}\) and \(0\in {\mathbb R}^{(n-1)\times 1}\). Note that since R and A are nonsingular, so is B.
We claim that \(P=R \begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix}\) is an orthogonal matrix, where Q is an orthogonal matrix that upper-triangularizes \(B\in {\mathbb R}^{(n-1)\times (n-1)}\), which exists by the induction hypothesis. In fact, since \(Q^TQ\) is a unit matrix, so is \(P^TP= \begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix}^T R^TR \begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix}\). Note that the eigenvalues of B are the eigenvalues \(\lambda_2,\ldots,\lambda_n\) of A:

\(\det(A-\lambda I_n)=\det(R^{-1}AR-\lambda I_n)=(\lambda_1-\lambda)\det(B-\lambda I_{n-1}),\)

where \(I_n\) is a unit matrix of size n.
Finally, we claim that A is upper-triangularized by multiplying by \(P^{-1}\) and P from the left and right, respectively:

\(P^{-1}AP= \begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix}^{-1} R^{-1}AR \begin{bmatrix} 1 & 0 \\ 0 & Q \end{bmatrix} = \begin{bmatrix} \lambda_1 & bQ \\ 0 & Q^{-1}BQ \end{bmatrix},\)

which is upper-triangular because \(Q^{-1}BQ\) is. This completes the proof. □
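The induction in this proof translates directly into a recursion. Below is a minimal NumPy sketch (the function name `schur_upper` and the test matrix construction are ours; it assumes all eigenvalues of A are real, since otherwise no real orthogonal P can upper-triangularize A):

```python
import numpy as np

def schur_upper(A):
    """Orthogonal P with P^{-1} A P upper-triangular, following the
    proof's induction. Sketch assuming all eigenvalues of A are real."""
    n = A.shape[0]
    if n == 1:
        return np.eye(1)
    # u1: a (real) eigenvector of A for some eigenvalue lambda_1
    w, v = np.linalg.eig(A)
    i = int(np.argmin(np.abs(np.asarray(w).imag)))
    u1 = v[:, i].real
    u1 /= np.linalg.norm(u1)
    # R: an orthogonal matrix whose first column is (up to sign) u1
    M = np.eye(n)
    M[:, 0] = u1
    R, _ = np.linalg.qr(M)
    # B: the lower-right (n-1) x (n-1) block of R^{-1} A R
    B = (R.T @ A @ R)[1:, 1:]
    Q = schur_upper(B)            # induction step
    P = np.eye(n)
    P[1:, 1:] = Q                 # block-diagonal [1, 0; 0, Q]
    return R @ P

# Nonsymmetric test matrix with real, well-separated eigenvalues 1..5
rng = np.random.default_rng(3)
n = 5
S = rng.normal(size=(n, n))
T = np.triu(rng.normal(size=(n, n)), 1) + np.diag(np.arange(1.0, n + 1))
A = S @ T @ np.linalg.inv(S)

P = schur_upper(A)
assert np.allclose(P.T @ P, np.eye(n), atol=1e-8)            # P orthogonal
assert np.allclose(np.tril(P.T @ A @ P, -1), 0, atol=1e-6)   # upper-triangular
```

Each recursive call peels off one eigenvalue, exactly as the proof peels off \(\lambda_1\) before invoking the induction hypothesis on B.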
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Suzuki, J. (2021). Linear Algebra. In: Statistical Learning with Math and Python. Springer, Singapore. https://doi.org/10.1007/978-981-15-7877-9_1
Print ISBN: 978-981-15-7876-2
Online ISBN: 978-981-15-7877-9