What is the story of linear algebra?


You start by identifying what a “vector space” is, including when interesting subsets of a vector space are themselves vector spaces (subspaces): they have to contain the zero vector and be closed under addition and scalar multiplication.
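As a toy illustration of the closure conditions (not a proof, and the helper name `in_plane` is my own), here is a sketch that spot-checks that a plane through the origin in R^3 behaves like a subspace:

```python
# Spot-check (not a proof) that a subset behaves like a subspace:
# sample some vectors in the set and verify membership of the zero
# vector, closure under addition, and closure under scaling.

def in_plane(v):
    """Membership test for the subset {(x, y, z) : x + y + z = 0} of R^3."""
    return abs(sum(v)) < 1e-9

samples = [(1.0, -1.0, 0.0), (2.0, 3.0, -5.0), (0.0, 0.0, 0.0)]

assert in_plane((0.0, 0.0, 0.0))                       # contains the zero vector
for u in samples:
    for v in samples:
        s = tuple(a + b for a, b in zip(u, v))
        assert in_plane(s)                             # closed under addition (on these samples)
    for c in (-2.0, 0.5, 3.0):
        assert in_plane(tuple(c * a for a in u))       # closed under scaling
print("all sample checks passed")
```

A plane that misses the origin, like x + y + z = 1, fails the zero-vector check immediately, which is why it is not a subspace.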


Next, you want to classify vector spaces. It turns out dimension is the only invariant: two finite-dimensional spaces over the same field are isomorphic exactly when they have the same dimension. We define dimension as the cardinality of a basis, and a basis as any set that is linearly independent but spans the whole space (we also have to show this is well defined, i.e. that any two bases have the same cardinality). Linear independence means no vector in the set is a linear combination of the others; equivalently, the only combination of them that gives the zero vector is the one with all coefficients zero. (For just two vectors this reduces to neither being a scalar multiple of the other, i.e. not lying on the same axis.) The span condition says that every vector in the space can be written as a linear combination of the candidate basis vectors.
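The linear independence condition can be checked mechanically: vectors are independent exactly when the matrix with those vectors as rows has full rank. A minimal pure-Python sketch (the helpers `rank` and `independent` are my own names, using naive Gaussian elimination on floats):

```python
# Vectors are linearly independent iff the only solution to
# c1*v1 + ... + cn*vn = 0 is all ci = 0, which is equivalent to the
# matrix with the vectors as rows having rank equal to n.

def rank(rows):
    """Rank of a small matrix via Gaussian elimination (float tolerance)."""
    m = [list(map(float, r)) for r in rows]
    if not m:
        return 0
    n_cols = len(m[0])
    pivots = 0
    for col in range(n_cols):
        # find a row (at or below the current pivot row) with a nonzero entry
        pivot = next((r for r in range(pivots, len(m)) if abs(m[r][col]) > 1e-9), None)
        if pivot is None:
            continue
        m[pivots], m[pivot] = m[pivot], m[pivots]
        for r in range(pivots + 1, len(m)):
            f = m[r][col] / m[pivots][col]
            for c in range(col, n_cols):
                m[r][c] -= f * m[pivots][c]
        pivots += 1
    return pivots

def independent(vectors):
    return rank(vectors) == len(vectors)

print(independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # False: third = sum of first two
print(independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True: the standard basis
```

Note that each vector in the first example is pairwise non-parallel to the others, yet the set is still dependent, which is exactly why the two-vector definition does not generalize.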


Anyways, so now we’ve classified vector spaces. The next step is to enter the interesting part of linear algebra: the fact that linear maps (the structure-preserving maps of vector spaces) in fact have a vector space structure themselves, since you can add them and scale them pointwise.
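With bases fixed, linear maps are matrices and the pointwise operations become entrywise operations. A small sketch of the key identity, that the pointwise sum of two linear maps is again a linear map acting as expected (helper names are my own):

```python
# Linear maps add pointwise: (A + B)(v) = A(v) + B(v). With bases
# fixed, maps are matrices and addition of maps is entrywise addition.

def apply(M, v):
    """Apply the matrix M (list of rows) to the vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def add_maps(A, B):
    """Entrywise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [0, 1]]
B = [[0, 1], [3, 0]]
v = [1.0, 2.0]

lhs = apply(add_maps(A, B), v)                             # (A + B)(v)
rhs = [a + b for a, b in zip(apply(A, v), apply(B, v))]    # A(v) + B(v)
print(lhs == rhs)  # True
```

Together with scalar multiplication (scaling every entry), this is what makes the space of linear maps from V to W a vector space in its own right.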


Next is eigenvectors, which pretty much means breaking the “behavior” of a linear map down into its simplest components (the eigenvectors). An eigenvector is a vector that the map only scales: Av = λv, where the scalar λ is its eigenvalue. This makes eigenvectors quite special for computation, because if you can take the basis vectors to be eigenvectors, the matrix of the map becomes diagonal: it just scales along each principal axis. So for repeated matrix multiplication (such as arbitrary powers), eigenvectors come in quite handy, since A^n = P D^n P^(-1), and powering a diagonal matrix D is trivial. Another thing is that the product of the eigenvalues (counted with multiplicity) is the determinant of the linear transformation.
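The powers trick can be sketched concretely. For A = [[2, 1], [1, 2]] the eigenvalues are 1 and 3 with eigenvectors (1, -1) and (1, 1), so A^5 computed via the diagonalization matches brute-force repeated multiplication (the matrices P, D and the helper `matmul` are my own setup for this example):

```python
# A^5 two ways: via diagonalization A^5 = P D^5 P^(-1), and by
# brute-force repeated multiplication. P's columns are eigenvectors.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [1, 2]]            # eigenvalues 1 and 3; det(A) = 3 = 1 * 3
P = [[1, 1], [-1, 1]]           # columns are the eigenvectors (1,-1) and (1,1)
P_inv = [[0.5, -0.5], [0.5, 0.5]]
D5 = [[1**5, 0], [0, 3**5]]     # powering a diagonal matrix is elementwise

via_eigen = matmul(matmul(P, D5), P_inv)   # P D^5 P^(-1)

direct = A                                  # A^5 by repeated multiplication
for _ in range(4):
    direct = matmul(direct, A)

print(via_eigen)  # [[122.0, 121.0], [121.0, 122.0]]
print(direct)     # [[122, 121], [121, 122]]
```

The eigen route needs one change of basis and five scalar powers instead of four full matrix products, and the gap grows quickly with the exponent and the matrix size.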


Next is inner product spaces. Basically we want to be able to measure distances and angles, so we introduce a (0,2)-tensor called the inner product, with a few restrictions: it’s symmetric (a·b is b·a, at least in the real case), linear in each argument, and positive-definite (v·v is nonnegative, and zero only for the zero vector). Inner product spaces are their own category, with their orthonormal bases (built via Gram-Schmidt) and their own special operators. I honestly don’t know it that well.
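The “distances and angles” point can be made concrete with the standard dot product on R^2: lengths come from v·v and angles from the cosine formula. A minimal sketch (the vectors chosen here are my own example):

```python
import math

# The standard inner product on R^n: lengths are sqrt(v.v) and the
# angle between u and v satisfies cos(theta) = u.v / (|u| |v|).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [3.0, 0.0], [1.0, 1.0]
length_u = math.sqrt(dot(u, u))
cos_theta = dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))
angle_deg = math.degrees(math.acos(cos_theta))

print(length_u)   # 3.0
print(angle_deg)  # 45 degrees, up to float rounding
```

A different choice of inner product on the same space would assign different lengths and angles, which is exactly the sense in which the geometry is extra structure on top of the vector space.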