The goal of our study is linear algebra. Why do we care about linear algebra? We want to study the behavior of linear systems in isolation and understand them very well. This is because linear systems represent a lot of things, so understanding them well will allow us to understand all the things they represent. The main representation we care about is our universe, or more specifically things in our universe, which requires an understanding of linear algebra because tangent spaces are vector spaces.


So our first goal is to play around with what a vector space is in the first place. We come up with fun rules for specific cases, for example that a subset of a vector space is itself a vector space (a subspace) exactly when it contains the zero vector and is closed under addition and scalar multiplication.
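To make the subspace criteria concrete, here is a minimal Python sketch checking them for the line y = x inside R^2. The helper names `in_line`, `add`, and `scale` are made up for illustration:

```python
# Sketch: checking the three subspace criteria for the line y = x in R^2.
# `in_line` is the membership predicate for the candidate subset.

def in_line(v):
    x, y = v
    return x == y

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

# 1. contains the zero vector
assert in_line((0, 0))
# 2. closed under addition
assert in_line(add((1, 1), (2, 2)))
# 3. closed under scalar multiplication
assert in_line(scale(3, (2, 2)))
```

The same three checks would fail for, say, the line y = x + 1, which misses the zero vector.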


Also, a direct sum is a statement about the subspaces you choose to add together, essentially saying that they’re independent: every vector in the sum decomposes uniquely into one piece from each subspace. A plain sum of subspaces, by contrast, just collects every vector you can obtain by adding one vector from each subspace, with no uniqueness guarantee.
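A tiny sketch of the uniqueness property, assuming the decomposition R^2 = U ⊕ W with U the x-axis and W the y-axis (the function name `decompose` is hypothetical):

```python
# Sketch: direct sum R^2 = U (x-axis) ⊕ W (y-axis).
# Every vector splits uniquely into a U-part plus a W-part.

def decompose(v):
    u = (v[0], 0)   # the unique U-component
    w = (0, v[1])   # the unique W-component
    return u, w

u, w = decompose((3, 5))
assert u == (3, 0) and w == (0, 5)
# the pieces really do add back up to the original vector
assert (u[0] + w[0], u[1] + w[1]) == (3, 5)
```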


Next we play around with how to classify vector spaces. And the cool thing is that vector spaces are entirely classified by their dimension. Which means an invertible linear map between A and B exists iff they have the same dimension. But what is a dimension anyways? Well, it’s the cardinality of a basis.
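As a sketch of "same dimension means an invertible map exists", here is a hand-built invertible linear map between two 2-dimensional spaces: polynomials a + b·x (stored as coefficient pairs) and R^2. The names `T` and `T_inv` are made up for this example:

```python
# Sketch: an invertible linear map between two 2-dimensional spaces.
# Polynomials a + b*x are stored as coefficient pairs (a, b).

def T(p):
    a, b = p
    return (a + b, a - b)       # linear: respects + and scalar *

def T_inv(v):
    s, d = v
    return ((s + d) / 2, (s - d) / 2)

# round-tripping recovers the original polynomial's coefficients
assert T_inv(T((2, 5))) == (2.0, 5.0)
```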


Sidenote: quite interesting that again a space is fully classified by a cardinality, except this time the cardinality of its basis (compared to sets, where the classification happens directly by cardinality).


So what’s a basis anyways? Well, to talk about a basis we need to talk about linear independence and span. It’s because we want the basis vectors to be MECE, which means mutually exclusive, but collectively exhaustive. In other words, we want our basis vectors to be linearly independent, but to span the whole vector space. If you find a collection of vectors that does that, it’s called a basis of that vector space. It remains to be shown that dimension is well defined, i.e., that any choice of basis yields the same number of vectors, which is not immediately obvious.


So what’s linear independence? When are two vectors linearly dependent vs linearly independent? Well, if one vector is a scalar multiple of the other, then they’re linearly dependent. So two vectors have to lie on different lines through the origin to be linearly independent. (More generally, a collection is dependent if some vector in it is a linear combination of the others.)
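In R^2 the "different lines" test is exactly the nonzero-determinant test; a minimal sketch (the function name is an assumption):

```python
# Sketch: two vectors in R^2 are independent iff the determinant
# | v0 w0 |
# | v1 w1 |
# is nonzero, i.e. neither is a scalar multiple of the other.

def independent_2d(v, w):
    return v[0] * w[1] - v[1] * w[0] != 0

assert independent_2d((1, 0), (0, 1))
assert not independent_2d((1, 2), (2, 4))   # (2, 4) = 2 * (1, 2)
```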


And so what’s the span of two vectors? Well, the span is the set of all linear combinations of the two vectors, which is itself a vector space. That space could be the whole space, or it could be some subspace. We say a set of vectors (such as a basis) spans the vector space if its span is the whole space. You can imagine needing vectors that point in each major direction so that you don’t get stuck in a plane or line subspace.
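A sketch of checking span membership in R^2 by solving a·v + b·w = target with Cramer’s rule (the name `in_span` and the setup are mine, not from the notes):

```python
# Sketch: is `target` a linear combination a*v + b*w of v and w?
# Solved with Cramer's rule; returns the coefficients, or None when
# v and w are dependent (determinant zero), in which case the span
# is only a line and this simple solver gives up.

def in_span(v, w, target):
    det = v[0] * w[1] - v[1] * w[0]
    if det == 0:
        return None
    a = (target[0] * w[1] - target[1] * w[0]) / det
    b = (v[0] * target[1] - v[1] * target[0]) / det
    return a, b

# (3, 2) = 1*(1, 0) + 2*(1, 1)
assert in_span((1, 0), (1, 1), (3, 2)) == (1.0, 2.0)
```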


Now we know how to classify vector spaces; the next step is to look at the fascinating part of linear algebra: the structure-preserving maps themselves form a vector space, and have their own theory of linear algebra.


Cool, so we’ve done that. Next we want a new language to talk about linear maps, which mirrors our language for regular maps. This is because linear maps are similar to continuous maps: they’re not guaranteed to be invertible, i.e., they’re not as strong as homeomorphisms, whose analogue in linear algebra is invertible linear maps. So we study when specific linear maps have inverses and when they don’t.


So we have the range of a linear map, which is exactly the image. Then we have the nullspace (kernel), which is the preimage of 0 in the target. If a linear map has a nonzero null space, you’re in deep shit about finding an inverse, because a lot of information is being lost with all those points being sent to 0.
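A concrete instance: the projection onto the x-axis has the whole y-axis as its null space, so distinct inputs collide and no inverse can exist. A minimal sketch (the name `P` is mine):

```python
# Sketch: the projection (x, y) -> (x, 0) has the y-axis as its
# null space, so it destroys information and cannot be inverted.

def P(v):
    return (v[0], 0)

# every vector on the y-axis is sent to 0 -- P is not injective
assert P((0, 1)) == (0, 0)
assert P((0, 7)) == (0, 0)
assert P((3, 1)) == P((3, 9))   # distinct inputs, same output
```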


Next we see how we can represent a linear transformation as a matrix. Basically it goes like this: turn your linear transformation into a (1,1)-tensor by inserting it into a covector-vector pairing:
w(v) becomes w(F(v))


Then you get the (1,1)-tensor components by applying the tensor to the basis covectors and basis vectors. Those components, arranged in the usual rows-and-columns way, are defined to be the matrix. So essentially the behavior of the linear map is entirely determined by how it acts on the basis vectors (duh).
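In coordinates this means you can read the matrix straight off the map’s action on the basis: column j is F(e_j). A minimal sketch with a made-up map F on R^2:

```python
# Sketch: recovering the matrix of a linear map F on R^2 from its
# action on the standard basis; column j of the matrix is F(e_j).

def F(v):
    x, y = v
    return (2 * x + y, x - y)

e1, e2 = (1, 0), (0, 1)
col1, col2 = F(e1), F(e2)
matrix = [[col1[0], col2[0]],
          [col1[1], col2[1]]]
assert matrix == [[2, 1], [1, -1]]
```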


Matrix multiplication is composing the two underlying linear maps one after the other.
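A quick sanity check of this, with throwaway helper names (`matmul2`, `apply`): multiplying the matrices of two maps gives the matrix of their composition.

```python
# Sketch: (A @ B) applied to v equals A applied to (B applied to v),
# i.e. matrix multiplication is composition of the underlying maps.

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

A = [[1, 2], [0, 1]]   # arbitrary example matrices
B = [[0, 1], [1, 0]]
v = (3, 4)
assert apply(matmul2(A, B), v) == apply(A, apply(B, v))
```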


Next we have eigenvectors and eigenvalues, which are pretty much an extension of the theory of linear maps. Basically we want to more succinctly capture the behavior of linear maps, and it turns out we can do that by looking at the map’s eigenvectors and eigenvalues.


What is an eigenvector and eigenvalue? Basically, for some linear transformation T, find vectors v and scalars a such that Tv = av holds. In other words, find vectors that are only scaled by the transformation, never knocked off their own line.
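A minimal check of Tv = av on a map whose eigenvectors are obvious (the map `T` here is a made-up example that scales the axes by 2 and 3):

```python
# Sketch: for T(x, y) = (2x, 3y), the standard basis vectors are
# eigenvectors, with eigenvalues 2 and 3 respectively.

def T(v):
    return (2 * v[0], 3 * v[1])

v, a = (1, 0), 2
assert T(v) == (a * v[0], a * v[1])   # T v = 2 v

w, b = (0, 1), 3
assert T(w) == (b * w[0], b * w[1])   # T w = 3 w
```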


Basically, matrices are basis-dependent: under a change of basis, they change. But matrices A and B representing the same map under different bases should still have something in common, right? And the answer is yes: they have the same eigenvalues, and their eigenvectors correspond to each other under the change of basis. And if you choose your basis vectors to be the eigenvectors (when enough of them exist), you get the most compact, diagonal matrix representation of the linear transformation.
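A small sketch of this invariance: similar matrices B = P⁻¹AP share trace and determinant, and hence the same eigenvalues. All matrix choices below are arbitrary examples, and the example P happens to diagonalize A:

```python
# Sketch: A and B = P^-1 A P represent the same linear map in two
# different bases; their trace and determinant (and so their
# eigenvalues) agree.

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]
P = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]          # inverse of P, checked by hand
B = matmul2(P_inv, matmul2(A, P))  # same map, new basis

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert trace(A) == trace(B) and det(A) == det(B)
```

In this particular example the new basis consists of eigenvectors of A, so B comes out diagonal, which is the compact representation described above.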


It feels almost trivial in hindsight to say that two differing matrices share the same eigenvalues and (abstract) eigenvectors if they represent the same linear transformation, because eigenvectors are defined on the transformation itself, independent of any matrix anyways.


It’s probably more accurate to call eigenvectors by the name “characteristic vectors”.


The next step is to introduce length into our analysis of vector spaces. We do this by introducing a norm (the book for some reason introduces an inner product instead, which is more machinery than needed, because an inner product introduces a notion of length AND angle). So we have side effects, which isn’t good.
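To see the "side effect": from an inner product alone you get both a norm and an angle. A minimal sketch in R^2 with the standard dot product (helper names are mine):

```python
# Sketch: an inner product induces a norm, ||v|| = sqrt(<v, v>),
# and also an angle via cos(theta) = <v, w> / (||v|| ||w||).
import math

def inner(v, w):
    return v[0] * w[0] + v[1] * w[1]

def norm(v):
    return math.sqrt(inner(v, v))

v, w = (3, 4), (4, 3)
assert norm(v) == 5.0                       # the length side effect
cos_angle = inner(v, w) / (norm(v) * norm(w))
assert abs(cos_angle - 24 / 25) < 1e-12     # the angle side effect
```

A bare norm gives you only the first of these; that is the author's point about the inner product smuggling in extra structure.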