Self-Paced, Instructor-Assisted Approach to Teaching Linear Algebra

Abstract

In this paper we explain why we decided to abandon the traditional classroom instruction of Linear Algebra and switch to a self-paced, instructor-assisted course format. We also describe our experience with creating a self-paced course in NCLab, and with teaching it at a tier-1 research university for three consecutive semesters. The results of a student survey, along with student comments, are presented.


Notes

  1. Request free access to the course at http://nclab.com/courses/linear-algebra/.

  2. NCLab is an educational cloud platform available at http://nclab.com/.

  3. An overview of these courses can be found at http://nclab.com/courses/.


Acknowledgements

The author thanks the University of Nevada, Reno administration for their supportive attitude towards the novel self-paced courses developed by NCLab.

Author information

Correspondence to Pavel Solin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Tips, Tricks, and Lessons Learned

Units 1–4 (Sections 1–20) correspond to a standard first course in Linear Algebra. However, many students will be able to cover more ground, depending on their ability. Sections 21–25 discuss the Spectral Theorem, Least-Squares problems, QR factorization, SVD, large matrices (conditioning, memory issues), and linear algebra with Numpy and Scipy.

Students appreciate it when the instructor uses the first 10–15 minutes of each class to explain the importance of what they will be learning that day. After that, they can start working independently.

The course is designed to have an extremely smooth learning curve, and most students will go through it without requiring your assistance. As a consequence, class attendance will tend to drop. We experimented with various incentives aimed at increasing attendance, including extra credit for class attendance. So far, the method that seems to work best is to make class attendance mandatory unless the student completes the scheduled work in advance. We apply penalties if students break this rule.

To prevent students from falling behind, they are required to complete, by the end of each calendar week, all material scheduled for that week. We schedule one Section per class, and the deadlines are stated in the syllabus. Penalties apply if students break this rule.

Some students are not very communicative, but we found it very important to have at least one conversation with each student per class. The same holds for exceptional students who seem not to require any attention. Without exception, every student appreciates your attention. Easy conversation starters are: “Did you have any problems with the previous Section?” and “Did you make any mistakes in the last quiz? Show me.” Discuss the difficulties and/or mistakes with them.

It is important to be absolutely supportive and avoid any judgment, even if it turns out that the student is struggling with some elementary concept from precalculus or even from high school. It happened to us a number of times that students who were not doing well gained confidence and improved dramatically after we helped them “seal holes” in their knowledge of elementary concepts.

Students appreciate being given a detailed list of Levels to review before each Midterm and Final. Some students will tend to race through the course and get far ahead. In our experience, it is better to slow them down. This can be done by unlocking Sections gradually, or by informing the students clearly what the plan for the week is and telling them not to go beyond it.

We find that 80–90% of the students are honest and want to do the required work and learn. These students pose no problems. But there are a few who will try to avoid doing the work. We have been monitoring their behavior and improving the platform accordingly; this is an ongoing process. Last but not least, some students will lag and eventually drop out. The instructor dashboard will show you the students who are having problems, and you can try to help them. But not all students want to be helped, and some of them will become unresponsive and quit no matter what you do.

Have the students fill out an anonymous questionnaire after they finish Unit 1. This may help you improve your teaching.

Regarding grading: the online coursework is autograded, and you will see, in percent, how each student is doing. We usually count this as 40% of the final score. In addition, we usually give two midterms and one final exam per semester, each contributing 20% to the final score. Exams are taken in person, in class. The NCLab platform does not provide online exams, but it provides thousands of Review Exercises with solutions, from which the instructor can randomly choose to build the exams. All Review Exercises are written in LaTeX, so it is easy to extract the LaTeX source of the exercises.
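
As a concrete illustration of this weighting scheme, here is a minimal sketch in Python; the scores are hypothetical.

    # Hypothetical student scores (in percent). The weights follow the scheme
    # described above: 40% autograded coursework, 20% for each of the two
    # midterms and the final exam.
    coursework = 92.0
    midterm1, midterm2, final_exam = 85.0, 78.0, 88.0

    score = 0.4 * coursework + 0.2 * (midterm1 + midterm2 + final_exam)
    print(f"Final score: {score:.1f}%")  # Final score: 87.0%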

Last but not least, the course requires Internet access. Originally, we taught it in a regular classroom (not in a computer lab). We were worried whether every student would be able to use a laptop or tablet, but this was never a problem. Eventually, some students suggested that it would be great to move the class to a computer lab, which we did. But then we found that numerous students brought their own mobile devices anyway. Hence, it is not critically important to reserve a computer lab for this course. It is more important to be able to walk freely among the students; therefore, large classrooms with cinema-style seating should be avoided.

Appendix B: Course Syllabus

The course was designed to have an extremely mellow learning curve because, in our experience, students do not mind a slow pace. If some topic is discussed too slowly, they can move through it quickly. On the other hand, if a topic is explained insufficiently, many students might struggle with it. In this sense, we prefer to err on the side of caution.

Compared to other Linear Algebra courses, our course discusses some aspects of Linear Algebra in more detail. For instance, concepts such as linear space, subspace, norm, inner product, and orthogonal projection are explained in the context of vector spaces, polynomial spaces, and spaces of matrices. We also cover complex-valued linear spaces. Every instructor is free, however, to leave out material they do not want to include. Here is the course syllabus:

1.1 Section 1 (Vectors and Matrices I)

  • What is an n-vector, how to add and subtract vectors, and multiply them with scalars (real numbers).

  • What is the dot product of vectors.

  • How is the dot product related to angle, orthogonality, Euclidean norm (length) and distance of vectors.

  • What is an \(m \times n\) (“m by n”) matrix and how to correctly determine matrix dimensions.

  • How to add and subtract matrices, and multiply them with scalars.

  • How to use indices when accessing individual entries in vectors and matrices.

  • How to calculate matrix-vector and matrix-matrix products, and when they are defined and undefined.
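
To make these operations concrete, here is a minimal sketch in NumPy (the library the course itself introduces in Section 25); the vectors and matrices are made up for illustration.

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([2.0, 0.0, 1.0])

    dot = np.dot(u, v)                    # dot product
    norm_u = np.linalg.norm(u)            # Euclidean norm (length) of u
    angle = np.arccos(dot / (norm_u * np.linalg.norm(v)))  # angle from the dot product

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])       # a 2 x 3 matrix
    print(A @ u)      # matrix-vector product: a 2-vector
    print(A @ A.T)    # matrix-matrix product: a 2 x 2 matrix (A @ A is undefined)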

1.2 Section 2 (Vectors and Matrices II)

  • More about vectors and matrices, including zero vectors, zero matrices, identity matrices, and diagonal matrices.

  • About commutative, associative, and distributive properties of vector and matrix operations.

  • That some important vector and matrix operations are not commutative.

  • How to count arithmetic operations involved in various vector and matrix operations.

  • Which vector and matrix operations are most computationally demanding.

  • About matrix transposition, and how to work with transposed and symmetric matrices.
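
A short sketch of the (non-)commutativity facts from this Section, with made-up matrices:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])

    print(np.array_equal(A @ B, B @ A))          # False: matrix products do not commute
    print(np.array_equal((A @ B).T, B.T @ A.T))  # True: the identity (AB)^T = B^T A^T
    print(np.array_equal(A, A.T))                # False: this A is not symmetric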

1.3 Section 3 (Linear Systems)

  • Basic facts about linear systems and their solutions.

  • Basic terminology including equivalent, consistent and inconsistent systems.

  • About the existence and uniqueness of solutions to linear systems.

  • How to transform a linear system to a matrix form.

1.4 Section 4 (Elementary Row Operations)

  • How to perform three types of elementary row operations.

  • About reversibility of elementary row operations.

  • How to create the augmented matrix of a linear system.

  • How to solve linear systems using Gauss elimination.

  • That the Gauss elimination is reversible.

1.5 Section 5 (Echelon Forms)

  • What is an echelon form of a matrix and how to obtain it.

  • How to use the echelon form to infer existence and uniqueness of solution.

  • What is an under- and over-determined system.

  • How to determine pivot positions and pivot columns.

  • What are basic and free variables.

  • What is the reduced echelon form and how to obtain it.

  • About the uniqueness of the reduced echelon form.

  • How to express solution sets to linear systems in parametric vector form.
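
As an illustration of reduced echelon forms, the following sketch uses the rref() method of SymPy (a library not covered in the course; the augmented matrix is made up):

    import sympy as sp

    # Augmented matrix [A | b] of a made-up 3 x 3 linear system
    M = sp.Matrix([[1, 2, -1, 3],
                   [2, 4,  0, 8],
                   [1, 2,  1, 5]])

    R, pivot_cols = M.rref()   # reduced echelon form and the pivot columns
    print(R)                   # [[1, 2, 0, 4], [0, 0, 1, 1], [0, 0, 0, 0]]
    print(pivot_cols)          # (0, 2): x1 and x3 are basic variables, x2 is free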

1.6 Section 6 (Linear Combinations)

  • What is a linear combination of vectors.

  • How to write a linear system as a vector equation and vice versa.

  • How to write a vector equation as a matrix equation \(Ax = b\) and vice versa.

  • What is a linear span, and how to check if a vector is in the linear span of other vectors.

  • The Solvability Theorem for linear systems.

  • That the linear system \(Ax = b\) has a solution for every right-hand side if and only if the matrix A has a pivot in every row.
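
The span-membership test from this Section can be phrased via ranks, as in the following sketch with made-up vectors: b lies in the span of a1 and a2 exactly when appending b does not increase the rank.

    import numpy as np

    a1 = np.array([1.0, 0.0, 1.0])
    a2 = np.array([0.0, 1.0, 1.0])
    b  = np.array([2.0, 3.0, 5.0])

    A = np.column_stack([a1, a2])
    in_span = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))
    print(in_span)   # True: indeed b = 2*a1 + 3*a2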

1.7 Section 7 (Linear Independence)

  • What it means for vectors to be linearly independent or dependent.

  • How to determine linear independence or dependence of vectors.

  • How is linear independence related to linear span.

  • How is linear independence related to the matrix equation \(Ax = 0\).

  • About singular and nonsingular matrices.

  • About homogeneous and nonhomogeneous linear systems.
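
A minimal independence check in NumPy (made-up vectors): the vectors are independent exactly when the matrix they form has a pivot in every column, i.e., full column rank.

    import numpy as np

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([2.0, 4.0, 6.0])   # v2 = 2*v1, so the set is dependent
    v3 = np.array([0.0, 1.0, 0.0])

    V = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(V) == V.shape[1])   # False: linearly dependent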

1.8 Section 8 (Linear Transformations I)

  • Review of essential concepts including set, subset, relation, function, domain, codomain, image, and range.

  • Review of basic properties of functions such as one-to-one (injective) functions, onto (surjective) functions, bijective functions, and inverse functions.

  • Review of linearity and linear functions.

  • About linear vector transformations of the form \(T(x) = Ax\).

  • How to figure out the matrix of a linear vector transformation.

  • About rotations, scaling transformations, and shear transformations.

  • That surjectivity of the transformation \(T(x) = Ax\) is equivalent to the existence of solution to the linear system \(Ax = b\) for any right-hand side.

  • How to easily check if the transformation \(T(x) = Ax\) is onto (surjective).

  • That injectivity of the transformation \(T(x) = Ax\) is equivalent to the uniqueness of solution to the linear system \(Ax = b\).

  • How to easily check if the transformation \(T(x) = Ax\) is one-to-one (injective).

  • That the solution to the linear system \(Ax = b\) is unique if and only if the matrix A has a pivot in every column.

  • That bijectivity of the transformation \(T(x) = Ax\) is equivalent to the existence and uniqueness of solution to the linear system \(Ax = b\) for any right-hand side.

1.9 Section 9 (Linear Transformations II)

  • How to find a spanning set for the range of a linear transformation.

  • What is the kernel of a linear transformation.

  • How to find a spanning set for the kernel.

  • That the kernel determines injectivity of linear transformations.

  • About compositions of linear transformations.

  • How to obtain the matrix of a composite transformation.

  • About inverse matrices and inverse transformations.

  • How to obtain the matrix of an inverse transformation.

  • What can one do with inverse matrices.

1.10 Section 10 (Special Matrices and Decompositions)

  • What are block matrices and how to work with them.

  • How to count arithmetic operations in the Gauss elimination.

  • How to work with block diagonal, tridiagonal and diagonal matrices.

  • What are upper and lower triangular matrices.

  • What are sparse and dense matrices.

  • What are elementary matrices.

  • Gauss elimination in terms of elementary matrices.

  • Matrix inversion in terms of elementary matrices.

  • About LU decomposition of nonsingular matrices.

  • What is definiteness of symmetric matrices.

  • How to check if a symmetric matrix is positive-definite, positive-semidefinite, negative-semidefinite, negative-definite or indefinite.

  • About Cholesky decomposition of symmetric positive-definite (SPD) matrices.

1.11 Section 11 (Determinants I)

  • What is a permutation and how to calculate its sign.

  • What is a determinant, the Leibniz formula.

  • Simplified formulas for \(2\times 2\) and \(3\times 3\) determinants.

  • How to calculate determinants using Laplace (cofactor) expansion.

  • How to calculate determinants using elementary row operations.

  • How to calculate determinants of diagonal, block diagonal and triangular matrices.
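
Here is a sketch of the Laplace (cofactor) expansion from this Section, cross-checked against NumPy's built-in determinant; the matrix is made up.

    import numpy as np

    def det_laplace(A):
        """Determinant via cofactor (Laplace) expansion along the first row."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            total += (-1) ** j * A[0, j] * det_laplace(minor)
        return total

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(det_laplace(A), np.linalg.det(A))   # both are 8.0 (up to round-off)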

1.12 Section 12 (Determinants II)

  • About the determinants of elementary matrices and their inverses.

  • That matrix A is invertible if and only if \(\det (A) \not = 0\), and \(\det (A^{-1}) = 1/\det (A)\).

  • That for any two \(n \times n\) matrices A and B it holds \(\det (AB) = \det (A)\det (B)\).

  • That in general \(\det (A+B) \not = \det (A) + \det (B)\).

  • That \(\det (A) = \det (A^T)\) and what it means for column operations.

  • How to use Cramer’s rule to solve the linear system \(Ax=b\).

  • About the adjugate (adjunct) matrix and how to use it to calculate matrix inverse.

  • About the relations between \(\det (A)\), linear dependence of columns and rows in the matrix A, properties of the linear transformation \(T(x)=Ax\), and the existence and uniqueness of solution to the linear system \(Ax=b\).

  • How to use determinants to calculate area and volume.

1.13 Section 13 (Linear Spaces)

  • What is a linear space and how it differs from a standard set.

  • How to check whether or not a set is a linear space.

  • That linear spaces may contain other types of elements besides vectors.

  • That a linear span always is a linear space.

  • How to check if a set of vectors generates a given linear space.

  • How to work with linear spaces of matrices.

  • How to work with polynomial spaces.

1.14 Section 14 (Basis, Coordinates, Subspaces)

  • What is a basis, and that a linear space can have many bases.

  • How to find the coordinates of an element relative to a given basis.

  • How to determine the dimension of a linear space.

  • What is a subspace of a linear space, and how to recognize one.

  • About the union and intersection of subspaces.

  • How the change of basis in a linear space affects the coordinates of its elements.

1.15 Section 15 (Null Space, Column Space, Rank)

  • What is the null space, how to check if a vector belongs to the null space.

  • How to determine the dimension of the null space and find its basis.

  • How is the null space related to the uniqueness of solution to the linear system \(Ax=b\).

  • What is the column space, how to check if a vector belongs to the column space.

  • How to determine the dimension of the column space and find its basis.

  • How is the column space related to the existence of solution to the linear system \(Ax=b\).

  • How is the null space related to the kernel of the linear transformation \(T(x)=Ax\).

  • How is the column space related to the range of the linear transformation \(T(x)=Ax\).

  • What is the rank of a matrix, and its implications for the existence and uniqueness of solution to the linear system \(Ax = b\).

  • What does it mean for a matrix to have full rank or be rank-deficient.

  • The Rank Theorem.
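
A sketch of these concepts using SymPy (not part of the course; the matrix is made up). The Rank Theorem is visible in the output: the rank plus the dimension of the null space equals the number of columns.

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6]])     # the second row is twice the first

    print(A.rank())                # 1
    print(A.columnspace())         # a basis of the column space: one vector
    print(A.nullspace())           # a basis of the null space: two vectors
    # Rank Theorem: 1 + 2 == 3, the number of columns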

1.16 Section 16 (Eigenproblems I)

  • What is an eigenproblem.

  • About important applications of eigenproblems in various areas of science and engineering.

  • How to verify if a given vector is an eigenvector, and if a given value is an eigenvalue.

  • How to calculate eigenvalues and eigenvectors.

  • About the characteristic polynomial and characteristic equation.

  • How to handle matrices with repeated eigenvalues.

  • About algebraic and geometric multiplicity of eigenvalues.

  • How to determine algebraic and geometric multiplicity of eigenvalues.

  • How to find a basis of an eigenspace.
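
A minimal eigenproblem sketch in NumPy, with a made-up matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues 3 and 1
    print(np.poly(A))                     # characteristic polynomial s^2 - 4s + 3: [1., -4., 3.]

    v = eigvecs[:, 0]                     # an eigenvector, stored as a column
    print(np.allclose(A @ v, eigvals[0] * v))   # True: A v = lambda v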

1.17 Section 17 (Eigenproblems II)

  • The geometrical meaning of eigenvalues and eigenvectors in \(R^2\) and \(R^3\).

  • That a matrix is singular if and only if it has a zero eigenvalue.

  • That the null space of a matrix is the eigenspace corresponding to the zero eigenvalue.

  • That eigenvectors corresponding to different eigenvalues are linearly independent.

  • The Cayley–Hamilton theorem (CHT).

  • How to use the CHT to efficiently calculate matrix inverse.

  • How to use the CHT to efficiently calculate matrix powers and matrix exponential.

  • About similar matrices, diagonalizable matrices, and eigenvector basis.

  • How to use the eigenvector basis to efficiently calculate arbitrary matrix functions including matrix powers, the inverse matrix, matrix exponential, the square root of a matrix, etc.
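
The following sketch verifies the Cayley–Hamilton theorem numerically for a made-up matrix and uses it to compute the inverse, as discussed above.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    I = np.eye(2)

    # The characteristic polynomial of A is s^2 - 4s + 3, so by the CHT:
    print(np.allclose(A @ A - 4 * A + 3 * I, 0))            # True

    # Rearranging the CHT yields the inverse: A^{-1} = (4 I - A) / 3
    print(np.allclose((4 * I - A) / 3, np.linalg.inv(A)))   # True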

1.18 Section 18 (Complex Linear Systems)

  • About complex numbers and basic complex arithmetic.

  • How to solve complex linear equations with real and complex coefficients.

  • How to solve complex linear systems with real and complex matrices.

  • How to perform complex elementary row operations.

  • How to determine the rank of a complex matrix.

  • How to check if a complex matrix is nonsingular.

  • How to use Cramer’s rule for complex matrices.

  • How to check if complex vectors are linearly independent.

  • How to invert complex matrices.

  • How to transform complex linear systems into real ones.

  • About the structure of complex roots of real-valued polynomials.

  • About complex eigenvalues and eigenvectors of real matrices.

  • How to diagonalize matrices with complex eigenvalues.
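
NumPy handles complex arithmetic natively, so a complex linear system can be solved just like a real one; a minimal sketch with a made-up system:

    import numpy as np

    A = np.array([[1 + 1j, 2],
                  [0, 1 - 1j]])
    b = np.array([3 + 1j, 2])

    x = np.linalg.solve(A, b)      # complex solution: [-1j, 1+1j]
    print(np.allclose(A @ x, b))   # True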

1.19 Section 19 (Normed Spaces, Inner Product Spaces)

  • Important properties of the Euclidean norm and dot product in \(R^n\).

  • The Cauchy-Schwarz inequality and triangle inequality.

  • How are norm and inner product defined and used in general linear spaces.

  • About important norms and inner products in spaces of matrices and polynomials.

  • How to calculate norms, distances and angles of matrices and polynomials.

  • That every inner product induces a norm, but not every norm induces an inner product.

  • About the parallelogram law.

  • About the norm and inner product of complex vectors in \(C^n\).

  • About the norm and inner product in general complex linear spaces.
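
For instance, in the space of \(2 \times 2\) matrices, the Frobenius inner product and its induced norm can be computed as follows (made-up matrices):

    import numpy as np

    A = np.array([[1.0, 2.0], [0.0, 1.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])

    inner = np.sum(A * B)               # Frobenius inner product: sum of entrywise products
    norm_A = np.linalg.norm(A, 'fro')   # the norm induced by this inner product
    print(inner, norm_A)                # 2.0 and sqrt(6) ~ 2.449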

1.20 Section 20 (Orthogonality and Best Approximation)

  • About orthogonal complements and orthogonal subspaces.

  • How to find a basis in an orthogonal complement.

  • That Row(A) and Nul(A) of an \(m \times n\) matrix A are orthogonal complements in \(R^n\).

  • About orthogonal sets and orthogonal bases.

  • About orthogonal decompositions and orthogonal projections on subspaces.

  • That orthogonal projection operators are idempotent.

  • How to calculate orthogonal decomposition with and without orthogonal basis.

  • How to calculate the best approximation of an element in a subspace.

  • How to calculate the distance of an element from a subspace.

  • The Gram-Schmidt process.

  • How to orthogonalize vectors, matrices, and polynomials.

  • About the Fourier series expansion of periodic functions.

  • How to obtain Legendre polynomials.

  • How to use Legendre polynomials for best polynomial approximation of functions.
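
A minimal implementation of the classical Gram-Schmidt process for vectors (the course also applies it to matrices and polynomials; this sketch uses made-up vectors):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
        basis = []
        for v in vectors:
            w = v - sum(np.dot(v, q) * q for q in basis)  # subtract projections
            basis.append(w / np.linalg.norm(w))
        return basis

    q1, q2 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 1.0])])
    print(np.dot(q1, q2))   # ~0: the resulting vectors are orthonormal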

1.21 Section 21 (Spectral Theorem)

  • About the basis and dimension of the complex vector space \(C^n\).

  • What are conjugate-transpose matrices and how to work with them.

  • About Hermitian, orthogonal, and unitary matrices.

  • That the eigenvalues of real symmetric and Hermitian matrices are real.

  • That the eigenvectors of real symmetric and Hermitian matrices can be used to create an orthogonal basis in \(R^n\) and \(C^n\).

  • About orthogonal diagonalization of real symmetric matrices.

  • About unitary diagonalization of Hermitian matrices.

  • The Spectral Theorem for real symmetric and Hermitian matrices.

  • About the outer product of vectors.

  • How to perform spectral decomposition of real symmetric and Hermitian matrices.

  • How eigenvalues are related to definiteness of real symmetric and Hermitian matrices.

  • How to calculate eigenvalues of large matrices using Numpy.
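
These points translate directly into NumPy, where eigh is the routine for symmetric/Hermitian eigenproblems; the matrix below is made up:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 4.0]])
    lam, Q = np.linalg.eigh(A)   # real eigenvalues in ascending order: [3., 5.]

    print(np.allclose(Q @ np.diag(lam) @ Q.T, A))   # spectral decomposition A = Q D Q^T
    print(np.allclose(Q.T @ Q, np.eye(2)))          # the eigenvector basis is orthonormal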

1.22 Section 22 (QR Factorization and Least-Squares Problems)

  • How to perform the QR factorization of rectangular and square matrices.

  • What are Least-Squares problems and how to solve them.

  • Properties of the normal equation, existence and uniqueness of solution.

  • How to fit lines, polynomials, and bivariate polynomials to data.

  • How to fit implicitly given curves to data.

  • How to solve Least-Squares problems via QR factorization.
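
A sketch of line fitting via QR factorization, with made-up data:

    import numpy as np

    # Fit a line y = c0 + c1*x to data by solving the Least-Squares problem A c = y
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 1.9, 3.2, 3.8])
    A = np.column_stack([np.ones_like(x), x])   # design matrix

    Q, R = np.linalg.qr(A)            # A = QR with orthonormal columns in Q
    c = np.linalg.solve(R, Q.T @ y)   # the normal equation reduces to R c = Q^T y
    print(c)                          # intercept and slope, approximately [1.09, 0.94]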

1.23 Section 23 (Singular Value Decomposition)

  • The basic idea of SVD.

  • About the shared eigenvalues of the matrices \(A^T A\) and \(A A^T\).

  • What are singular values and how to calculate them efficiently.

  • That the number of nonzero singular values equals the rank of the matrix.

  • How to calculate right and left singular vectors, and perform SVD.

  • The difference between full and compact SVD.

  • About the Moore-Penrose pseudoinverse and its basic properties.

  • What types of problems can be solved using the M-P pseudoinverse.

  • How to calculate the M-P pseudoinverse using SVD.

  • How to calculate the SVD of complex matrices.

  • How are singular values related to the Frobenius norm.

  • What is the spectral (operator) norm of a matrix.

  • How to calculate the error caused by truncating the SVD.

  • How to use SVD for rank estimation in large data sets.

  • How SVD is used in image processing.
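
A compact SVD sketch in NumPy covering several of these points (made-up matrix):

    import numpy as np

    A = np.array([[3.0, 0.0],
                  [4.0, 5.0]])

    U, s, Vt = np.linalg.svd(A)        # A = U diag(s) Vt, singular values descending
    print(np.allclose(U @ np.diag(s) @ Vt, A))             # True
    print(np.linalg.matrix_rank(A) == np.sum(s > 1e-12))   # rank = number of nonzero singular values
    print(np.allclose(np.linalg.norm(A, 'fro'), np.linalg.norm(s)))  # Frobenius norm from s

    A_pinv = np.linalg.pinv(A)             # Moore-Penrose pseudoinverse (computed via SVD)
    print(np.allclose(A @ A_pinv @ A, A))  # a defining property of the pseudoinverse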

1.24 Section 24 (Large Linear Systems)

  • The condition number and how to calculate it using singular values.

  • The condition number of real symmetric and Hermitian matrices.

  • The role the condition number plays in the solution of linear systems.

  • Mantissa, exponent, and the representation (round-off) error.

  • Machine epsilon and finite computer arithmetic.

  • Instability of the Gauss elimination.

  • Memory issues related to storing large matrices.

  • COO, CSR and CSC representation of sparse matrices.

  • Sparsity-preserving and sparsity-breaking matrix operations.
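
Two quick illustrations: the condition number of a nearly singular matrix, and COO/CSR sparse storage with SciPy (made-up data):

    import numpy as np
    from scipy.sparse import coo_matrix

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])
    print(np.linalg.cond(A))   # ~4e4: small perturbations of b cause large changes in x

    # COO ("coordinate") format stores triplets (row, col, value); CSR suits fast solves
    rows = np.array([0, 1, 2])
    cols = np.array([0, 1, 2])
    vals = np.array([4.0, 5.0, 6.0])
    S = coo_matrix((vals, (rows, cols)), shape=(3, 3))
    print(S.tocsr().nnz)       # 3 stored nonzeros instead of 9 dense entries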

1.25 Section 25 (Linear Algebra with Python)

  • Import Numpy and Scipy.

  • Define (large) vectors and matrices.

  • Perform standard vector and matrix operations.

  • Edit matrices and vectors using the Python for-loop.

  • Extract row and column vectors from matrices.

  • Use direct and iterative matrix solvers.

  • Determine the rank of a matrix.

  • Perform LU and Cholesky factorizations.

  • Calculate determinants, eigenvalues and eigenvectors.

  • Diagonalize matrices, calculate functions of matrices.

  • Perform QR factorization and orthogonalize sets of vectors.

  • Solve Least-Squares problems.

  • Perform spectral decomposition and SVD.
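
A condensed sketch of several of these tasks with Numpy and Scipy; the matrix and right-hand side are made up:

    import numpy as np
    from scipy.linalg import lu, cholesky
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import cg

    # A symmetric positive-definite test matrix and a right-hand side
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    x_direct = np.linalg.solve(A, b)   # direct solver
    P, L, U = lu(A)                    # LU decomposition (with pivoting)
    R = cholesky(A)                    # Cholesky factor (A is SPD)

    x_iter, info = cg(csr_matrix(A), b)          # iterative solver (conjugate gradients)
    print(np.allclose(x_direct, x_iter), info)   # True 0 (info == 0 means convergence)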


Cite this article

Solin, P. Self-Paced, Instructor-Assisted Approach to Teaching Linear Algebra. Math. Comput. Sci. 15, 661–687 (2021). https://doi.org/10.1007/s11786-021-00499-z

Download citation

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11786-021-00499-z

Keywords

Mathematics Subject Classification

Navigation