
Linear Algebra


Abstract

Linear algebra provides the basis for logical constructions in any science. In this chapter, we learn about inverse matrices, determinants, linear independence, vector spaces and their dimensions, eigenvalues and eigenvectors, orthonormal bases and orthogonal matrices, and the diagonalization of symmetric matrices. To convey the essence concisely, this book defines ranks and determinants based on the notion of Gaussian elimination and considers linear spaces and their inner products within the setting of Euclidean space and the standard inner product. By reading this chapter, readers should understand not only the facts but also the reasons why they hold.


Notes

  1. In general, any subset V that satisfies (1.2) is said to be a vector space with scalars in \(\mathbb R\).

  2. In general, for vector spaces V and W, we say that f : V → W is a linear map if f(x + y) = f(x) + f(y) for x, y ∈ V and \(f(ax)=af(x)\) for \(a\in {\mathbb R}\), x ∈ V.

  3. In general, we say that the map (⋅, ⋅) is an inner product of V if (u + u′, v) = (u, v) + (u′, v), (cu, v) = c(u, v), (u, v) = (v, u), and u ≠ 0 ⇒ (u, u) > 0 for u, v ∈ V, where \(c\in {\mathbb R}\).
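As a quick numerical illustration of Note 3, the following Python sketch checks these axioms for the standard inner product \((u,v)=u^Tv\) on \({\mathbb R}^4\) with NumPy; the particular vectors and the constant c are arbitrary example choices.

    # Check the inner product axioms in Note 3 for the standard inner product
    # (u, v) = u^T v; the vectors u, u2 (= u'), v and the constant c are
    # arbitrary example values.
    import numpy as np

    rng = np.random.default_rng(0)
    u, u2, v = rng.standard_normal((3, 4))        # u, u', v in R^4
    c = 1.7

    def ip(a, b):
        return float(a @ b)                       # standard inner product

    print(np.isclose(ip(u + u2, v), ip(u, v) + ip(u2, v)))   # (u + u', v) = (u, v) + (u', v)
    print(np.isclose(ip(c * u, v), c * ip(u, v)))            # (cu, v) = c(u, v)
    print(np.isclose(ip(u, v), ip(v, u)))                    # (u, v) = (v, u)
    print(ip(u, u) > 0)                                      # u != 0  =>  (u, u) > 0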


Appendix: Proof of Propositions

Proposition 2

For square matrices A and B of the same size, we have \(\det (AB)=\det (A)\det (B)\) and \(\det (A^T)=\det (A)\).

Proof

For Steps 1, 2, and 3, we multiply from the left by the following matrices:

\(V_i(\alpha)\): a unit matrix in which the (i, i)-th element has been replaced with α.

\(U_{i,j}\): a unit matrix in which the (i, i)-th and (j, j)-th elements have been replaced by zero and the (i, j)-th and (j, i)-th elements by one.

\(W_{i,j}(\beta)\): a unit matrix in which the (i, j)-th zero (i ≠ j) has been replaced by −β.

Then, for \(B\in {\mathbb R}^{n\times n}\),

$$\displaystyle \begin{aligned} \det(V_i(\alpha)B)=\alpha \det(B) ,\ \det(U_{i,j}B)=-\det(B) ,\ \det(W_{i,j}(\beta)B)=\det(B)\ . \end{aligned} $$
(1.4)

Since

$$\displaystyle \begin{aligned} \det(V_i(\alpha))=\alpha ,\ \det(U_{i,j})=-1 ,\ \det(W_{i,j}(\beta))=1 \end{aligned} $$
(1.5)

holds, if we write the matrix A as a product \(E_1E_2\cdots E_r\) of matrices of these three types, then we have

$$\displaystyle \begin{aligned}\det(A)=\det(E_1)\ldots\det(E_r)\ .\end{aligned}$$
$$\displaystyle \begin{aligned} \begin{array}{rcl} \det(AB)& =&\displaystyle \det(E_1\cdot E_2\ldots E_rB)=\det(E_1)\det(E_2\ldots E_rB)=\ldots\\ & =&\displaystyle \det(E_1)\ldots\det(E_r)\det(B)=\det(A)\det(B). \end{array} \end{aligned} $$

On the other hand, since the matrices \(V_i(\alpha)\) and \(U_{i,j}\) are symmetric and \(W_{i,j}(\beta)^T=W_{j,i}(\beta)\), equations similar to (1.4) and (1.5) hold for the transposes. Hence, we have

$$\displaystyle \begin{aligned}\det(A^T)=\det(E_r^T\ldots E_1^T)=\det(E_r^T)\ldots\det(E_1^T)=\det(E_1)\ldots\det(E_r)=\det(A)\ .\end{aligned}$$
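As a numerical sanity check, the following sketch constructs the three types of matrices \(V_i(\alpha)\), \(U_{i,j}\), and \(W_{i,j}(\beta)\) with NumPy and verifies (1.4), (1.5), and Proposition 2 for random matrices; the size n = 4 and the constants α, β are arbitrary example values.

    # Numerical check of (1.4), (1.5), and Proposition 2; the size n and the
    # constants alpha, beta are arbitrary example values.
    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    def V(i, alpha):
        # V_i(alpha): unit matrix with the (i, i)-th element replaced by alpha
        M = np.eye(n)
        M[i, i] = alpha
        return M

    def U(i, j):
        # U_{i,j}: unit matrix with rows i and j swapped
        M = np.eye(n)
        M[[i, j]] = M[[j, i]]
        return M

    def W(i, j, beta):
        # W_{i,j}(beta): unit matrix with -beta in the (i, j)-th position (i != j)
        M = np.eye(n)
        M[i, j] = -beta
        return M

    det = np.linalg.det
    alpha, beta = 2.5, -1.3
    print(np.isclose(det(V(0, alpha) @ B), alpha * det(B)))   # (1.4)
    print(np.isclose(det(U(0, 2) @ B), -det(B)))              # (1.4)
    print(np.isclose(det(W(0, 2, beta) @ B), det(B)))         # (1.4)
    print(np.isclose(det(V(0, alpha)), alpha),
          np.isclose(det(U(0, 2)), -1),
          np.isclose(det(W(0, 2, beta)), 1))                  # (1.5)
    print(np.isclose(det(A @ B), det(A) * det(B)))            # det(AB) = det(A)det(B)
    print(np.isclose(det(A.T), det(A)))                       # det(A^T) = det(A)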

Proposition 4

Let V and W be subspaces of \({\mathbb R}^n\) and \({\mathbb R}^m\), respectively. The image and kernel of the linear map V → W given by \(A\in {\mathbb R}^{m\times n}\), i.e., x ↦ Ax, are subspaces of W and V, respectively, and the sum of their dimensions is n. The dimension of the image coincides with the rank of A.

Proof

Let r be the dimension of the kernel and \(x_1,\ldots ,x_r\in V\) a basis of it. We add \(x_{r+1},\ldots ,x_n\), linearly independent of them, so that \(x_1,\ldots ,x_r,x_{r+1},\ldots ,x_n\) form a basis of V. It is sufficient to show that \(Ax_{r+1},\ldots ,Ax_n\) form a basis of the image.

First, since \(x_1,\ldots ,x_r\) are vectors in the kernel, we have \(Ax_1=\ldots =Ax_r=0\). For an arbitrary \(x=\sum _{j=1}^n b_jx_j\) with \(b_{1},\ldots ,b_n\in {\mathbb R}\), the image can be expressed as \(Ax=\sum _{j=r+1}^nb_jAx_j\), which is a linear combination of \(Ax_{r+1},\ldots ,Ax_n\). Then, our goal is to show that these vectors are linearly independent, i.e.,

$$\displaystyle \begin{aligned} \sum_{i=r+1}^n b_{i}Ax_{i}=0 \Longrightarrow b_{r+1},\ldots,b_n=0\ . \end{aligned} $$
(1.6)

If \(A\sum _{i=r+1}^n b_{i}x_{i}=0\), then \(\sum _{i=r+1}^nb_{i}x_{i}\) is in the kernel. Therefore, there exist \(b_1,\ldots ,b_r\in {\mathbb R}\) such that \(\sum _{i=r+1}^n b_ix_i=-\sum _{i=1}^rb_ix_i\), which means that \(\sum _{i=1}^nb_ix_i=0\). However, since \(x_1,\ldots ,x_n\) are linearly independent, we have \(b_1=\ldots =b_n=0\), and (1.6) is proven. □
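The dimension count of Proposition 4 can also be checked numerically. The following sketch computes the rank and the kernel dimension of a random \(A\in {\mathbb R}^{m\times n}\) from its singular values and confirms that they sum to n; the sizes m = 5, n = 8 and the tolerance rule are arbitrary example choices.

    # Numerical check of Proposition 4: dim(image) + dim(kernel) = n and
    # dim(image) = rank(A); the sizes m, n are arbitrary example values.
    import numpy as np

    m, n = 5, 8
    rng = np.random.default_rng(1)
    A = rng.standard_normal((m, n))

    rank = np.linalg.matrix_rank(A)               # dimension of the image of x -> Ax
    s = np.linalg.svd(A, compute_uv=False)        # singular values of A
    tol = max(m, n) * np.finfo(float).eps * s[0]  # threshold for "numerically zero"
    nullity = n - int(np.sum(s > tol))            # dimension of the kernel

    print(rank, nullity, rank + nullity == n)     # e.g., 5 3 True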

Proposition 7

For any square matrix A, there exists an orthogonal matrix P such that \(P^{-1}AP\) is upper triangular.

Proof

We prove the proposition by induction on n. For n = 1, the matrix is a scalar, and the claim holds. By the induction hypothesis, for an arbitrary \(\tilde {B}\in {\mathbb R}^{(n-1)\times (n-1)}\), there exists an orthogonal matrix \(\tilde {Q}\) such that

$$\displaystyle \begin{aligned}\tilde{Q}^{-1}\tilde{B}\tilde{Q}= \left[ \begin{array}{c@{\quad }c@{\quad }c} \tilde{\lambda}_2&&*\\ &\ddots&\\ 0&&\tilde{\lambda}_n\\ \end{array} \right]\ , \end{aligned}$$

where ∗ denotes entries that are not necessarily zero and \(\tilde {\lambda }_2,\ldots ,\tilde {\lambda }_{n}\) are the eigenvalues of \(\tilde {B}\).

For a nonsingular matrix \(A\in {\mathbb R}^{n\times n}\) with eigenvalues \(\lambda_1,\ldots ,\lambda_n\), allowing multiplicity, let \(u_1\) be a unit eigenvector for the eigenvalue \(\lambda_1\) and R an orthogonal matrix whose first column is \(u_1\). Then, we have \(Re_1=u_1\) and \(Au_1=\lambda_1 u_1\), where \(e_1:=[1,0,\ldots ,0]^T\in {\mathbb R}^n\). Hence, we have

$$\displaystyle \begin{aligned}R^{-1}ARe_1=R^{-1}Au_1=\lambda_1R^{-1}u_1=\lambda_1R^{-1}Re_1=\lambda_1e_1,\end{aligned}$$

and we may express

$$\displaystyle \begin{aligned}R^{-1}AR= \left[ \begin{array}{c@{\quad }c} \lambda_1&b\\0&B \end{array} \right]\ , \end{aligned}$$

where \(b\in {\mathbb R}^{1\times (n-1)}\) and \(0\in {\mathbb R}^{(n-1)\times 1}\). Note that since R and A are nonsingular, so is B.

We claim that \(P=R\left[\begin{array}{c@{\quad}c}1&0\\0&Q\end{array}\right]\) is an orthogonal matrix, where Q is an orthogonal matrix such that \(Q^{-1}BQ\) is upper triangular, which exists for \(B\in {\mathbb R}^{(n-1)\times (n-1)}\) by the induction hypothesis. In fact, since \(Q^TQ\) and \(R^TR\) are unit matrices, so is \(P^TP=\left[\begin{array}{c@{\quad}c}1&0\\0&Q^T\end{array}\right]R^TR\left[\begin{array}{c@{\quad}c}1&0\\0&Q\end{array}\right]\). Note that the eigenvalues of B are the remaining eigenvalues \(\lambda_2,\ldots,\lambda_n\) of A:

$$\displaystyle \begin{aligned}\prod_{i=1}^n(\lambda_i-\lambda)=\det(A-\lambda I_n)=\det(R^{-1}AR-\lambda I_n)=(\lambda_1-\lambda)\det(B-\lambda I_{n-1})\ ,\end{aligned}$$

where \(I_n\) is a unit matrix of size n.

Finally, multiplying A by \(P^{-1}\) and P from the left and right, respectively, yields an upper-triangular matrix:

$$\displaystyle \begin{aligned} \begin{array}{rcl} P^{-1}AP& =&\displaystyle \left[ \begin{array}{c@{\quad }c} 1&\displaystyle 0\\0& Q^{-1} \end{array} \right] R^{-1}AR \left[ \begin{array}{c@{\quad }c} 1&\displaystyle 0\\0& Q \end{array} \right] = \left[ \begin{array}{c@{\quad }c} 1&\displaystyle 0\\0& Q^{-1} \end{array} \right] \left[ \begin{array}{c@{\quad }c} \lambda_1&\displaystyle b\\0& B \end{array} \right] \left[ \begin{array}{c@{\quad }c} 1&\displaystyle 0\\0& Q \end{array} \right]\\ & =&\displaystyle \left[ \begin{array}{c@{\quad }c} \lambda_1&\displaystyle bQ\\0& Q^{-1}BQ \end{array} \right] = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \lambda_1&\displaystyle &\displaystyle &\displaystyle *\\ & \lambda_2&\displaystyle &\displaystyle \\ & &\displaystyle \ddots&\displaystyle \\ & &\displaystyle &\displaystyle \lambda_n\\ \end{array} \right]\ , \end{array} \end{aligned} $$

which completes the proof. □
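The recursion in this proof can be mirrored directly in NumPy. The sketch below assumes that all eigenvalues of A are real, as in the argument over \({\mathbb R}\); the function name upper_triangularize and the test matrix \(A=Q_0T_0Q_0^T\) are illustrative choices, and the orthogonal matrix R with first column \(u_1\) is built here as a Householder reflection.

    # A sketch mirroring the recursion in the proof of Proposition 7.  It assumes
    # all eigenvalues of A are real, so that a real eigenvector exists at every step.
    import numpy as np

    def upper_triangularize(A):
        """Return an orthogonal P such that P^{-1} A P (= P.T @ A @ P) is upper triangular."""
        n = A.shape[0]
        if n == 1:
            return np.eye(1)
        _, eigvecs = np.linalg.eig(A)
        u1 = np.real(eigvecs[:, 0])                   # eigenvector for lambda_1 (assumed real)
        u1 = u1 / np.linalg.norm(u1)
        # R: an orthogonal matrix with R e_1 = u1 (a Householder reflection).
        v = u1 - np.eye(n)[:, 0]
        if np.linalg.norm(v) < 1e-12:
            R = np.eye(n)
        else:
            R = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
        B = (R.T @ A @ R)[1:, 1:]                     # lower-right (n-1) x (n-1) block
        Q = upper_triangularize(B)                    # induction step
        blockQ = np.eye(n)
        blockQ[1:, 1:] = Q
        return R @ blockQ                             # P = R [[1, 0], [0, Q]]

    # Test on a matrix with real eigenvalues: A = Q0 T0 Q0^T with T0 upper triangular.
    rng = np.random.default_rng(1)
    Q0, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    T0 = np.triu(rng.standard_normal((5, 5)))
    A = Q0 @ T0 @ Q0.T
    P = upper_triangularize(A)
    T = P.T @ A @ P
    print(np.allclose(P.T @ P, np.eye(5)))            # P is orthogonal
    print(np.allclose(np.tril(T, -1), 0, atol=1e-8))  # below-diagonal entries vanish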

Cite this chapter

Suzuki, J. (2021). Linear Algebra. In: Statistical Learning with Math and Python. Springer, Singapore. https://doi.org/10.1007/978-981-15-7877-9_1
