Point of post: In this post I cover the material in sections 31 and 32 of Halmos.
We finally begin studying arguably the most important aspect of linear algebra: linear transformations. These are the structure preserving maps for vector spaces. They correspond naturally to coordinate changes, linear shifts, and contractions/dilations. Moreover, they are theoretically interesting because, as it turns out, many maps which aren't usually thought of as linear transformations are. For example, the differential operator on the space $C^{\infty}(\mathbb{R})$.
If $U$ and $V$ are $F$-spaces, we call a mapping $T:U\to V$ a linear transformation/linear homomorphism if

$T(\alpha x+\beta y)=\alpha T(x)+\beta T(y)$

for all $x,y\in U$ and $\alpha,\beta\in F$. If $U=V$ we call $T$ a linear transformation on $U$, in which case $T$ is usually called a linear operator or endomorphism.
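To make the definition concrete, here is a small numerical sanity check (my own illustration, not from Halmos): a map given by multiplication by a fixed matrix satisfies the defining identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map T: R^3 -> R^2 given by multiplication by a fixed matrix A.
A = rng.standard_normal((2, 3))
T = lambda v: A @ v

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 2.0, -0.5

# Check T(alpha*x + beta*y) == alpha*T(x) + beta*T(y), up to rounding error.
lhs = T(alpha * x + beta * y)
rhs = alpha * T(x) + beta * T(y)
assert np.allclose(lhs, rhs)
```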
As stated earlier, examples of linear transformations abound. We give three examples:
Ex.(1): Let $V$ be an $F$-space; then, if we consider $F$ as a vector space over itself, every linear functional $\varphi:V\to F$ is a linear transformation from $V$ to $F$.
Ex.(2): If $V$ is a vector space then the maps $\mathbf{0}:V\to V:x\mapsto 0$ and $\operatorname{id}:V\to V:x\mapsto x$ are both linear transformations.
Remark: In this context it is customary to denote $\operatorname{id}$ by $\mathbf{1}$; the reasons for this will be clear soon enough.
Ex.(3): If $C^{\infty}(\mathbb{R})$ denotes the space of all smooth functions from $\mathbb{R}$ to $\mathbb{R}$ then the differential operator $D:f\mapsto f'$ is a linear operator. More specifically, if $\mathcal{P}_n$ is the space of all polynomials of degree at most $n$ with complex coefficients then

$D:\mathcal{P}_n\to\mathcal{P}_n:a_0+a_1x+\cdots+a_nx^n\mapsto a_1+2a_2x+\cdots+na_nx^{n-1}$

is a linear operator.
Remark: Note that the last map was defined algebraically, in a manner independent of the derivative. Thus, we can make sense of the “differential operator” on arbitrary polynomial rings.
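As a sketch of this algebraic point of view (the coefficient-list representation here is my own choice, not Halmos's), we can implement $D$ purely by shuffling coefficients, with no limits taken anywhere:

```python
# The differential operator on P_n, with a polynomial represented by its
# coefficient list [a_0, a_1, ..., a_n]; defined entirely algebraically.
def D(coeffs):
    # a_0 + a_1 x + ... + a_n x^n  |->  a_1 + 2 a_2 x + ... + n a_n x^(n-1),
    # padded with a trailing 0 so the result stays in the same space P_n.
    return [k * a for k, a in enumerate(coeffs)][1:] + [0]

p = [5, 3, 0, 2]              # 5 + 3x + 2x^3
print(D(p))                   # prints [3, 0, 6, 0], i.e. 3 + 6x^2

# D is linear: D(p + q) == D(p) + D(q) coefficientwise.
q = [1, 0, 4, 0]
assert D([a + b for a, b in zip(p, q)]) == [a + b for a, b in zip(D(p), D(q))]
```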
Some aspects of linear transformations become apparent as soon as the definitions are laid out. For example, it's evident that for any linear transformation $T$ we have $T(0)=0$. Slightly less obvious is the following theorem:
Theorem: Let $T$ be a linear transformation from $U$ to $V$ which is bijective. Then, $T^{-1}:V\to U$ is a linear transformation.
Proof: We merely note that if $x,y\in V$ then $x=T(x')$ and $y=T(y')$ for some $x',y'\in U$ and thus

$T^{-1}(\alpha x+\beta y)=T^{-1}(\alpha T(x')+\beta T(y'))=T^{-1}(T(\alpha x'+\beta y'))=\alpha x'+\beta y'=\alpha T^{-1}(x)+\beta T^{-1}(y)$

and since $x,y$ and $\alpha,\beta$ were arbitrary the conclusion follows. $\blacksquare$
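Numerically, the theorem can be checked for a bijective linear map on $\mathbb{R}^3$ (a sketch with an invertible matrix chosen by me for illustration, assuming NumPy):

```python
import numpy as np

# A bijective linear map on R^3: multiplication by an invertible matrix A
# (its determinant is 5, so the inverse exists).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
A_inv = np.linalg.inv(A)      # the inverse map T^{-1}

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.0, 3.0, 1.0])
alpha, beta = 1.5, -2.0

# T^{-1}(alpha*x + beta*y) == alpha*T^{-1}(x) + beta*T^{-1}(y), up to rounding.
assert np.allclose(A_inv @ (alpha * x + beta * y),
                   alpha * (A_inv @ x) + beta * (A_inv @ y))
```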
It’s also fairly clear that we may add linear transformations and multiply them by scalars in the obvious way. Namely, if $T,T':U\to V$ are linear transformations and $\alpha,\beta\in F$ then we can define $\alpha T+\beta T'$ to be the map

$\alpha T+\beta T':U\to V:x\mapsto \alpha T(x)+\beta T'(x)$
Moreover, with this definition of addition it’s clear that the zero function acts as an additive identity and the map $-T$ given by $(-T)(x)=-T(x)$ is an additive inverse. Thus, it’s not a quantum leap to realize that, with the operations of addition and scalar multiplication given above, the set of all linear transformations from $U$ to $V$, denoted $\mathcal{L}(U,V)$, is a vector space. If we only consider endomorphisms on $U$ we shorten $\mathcal{L}(U,U)$ to $\mathcal{L}(U)$.
So, the obvious question is: what is the dimension of $\mathcal{L}(U,V)$? The answer turns out to be $\dim U\dim V$, although it may not be immediately clear why this is so. But first, we prove a small lemma:
Lemma: Let $U$ and $V$ be finite dimensional $F$-spaces. Then, if $\{x_1,\cdots,x_n\}$ is a basis for $U$ then for any vectors $y_1,\cdots,y_n\in V$ there is a unique linear transformation $T:U\to V$ such that $T(x_k)=y_k$ for each $k=1,\cdots,n$.
Proof: Let $T:U\to V$ be the map given by

$T(\alpha_1x_1+\cdots+\alpha_nx_n)=\alpha_1y_1+\cdots+\alpha_ny_n$

Clearly then we have that $T$ is a linear map and $T(x_k)=y_k$. Moreover, if $T':U\to V$ is also a linear transformation such that $T'(x_k)=y_k$ we see then that

$T'(\alpha_1x_1+\cdots+\alpha_nx_n)=\alpha_1T'(x_1)+\cdots+\alpha_nT'(x_n)=\alpha_1y_1+\cdots+\alpha_ny_n=T(\alpha_1x_1+\cdots+\alpha_nx_n)$

and since every element of $U$ may be written in the form $\alpha_1x_1+\cdots+\alpha_nx_n$ it follows that $T(x)=T'(x)$ for all $x\in U$, from where the conclusion follows. $\blacksquare$
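For $U=\mathbb{R}^3$, $V=\mathbb{R}^2$ with the standard bases, the lemma amounts to the familiar fact that a matrix is determined by its columns; the images $y_1,y_2,y_3$ below are arbitrary choices of mine for illustration:

```python
import numpy as np

# Prescribe the images of the standard basis e_1, e_2, e_3 of R^3; the matrix
# of the unique linear map T with T(e_k) = y_k has the y_k as its columns.
y1, y2, y3 = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])
A = np.column_stack([y1, y2, y3])

e1, e2, e3 = np.eye(3)        # the rows of the identity are the standard basis
assert np.allclose(A @ e1, y1) and np.allclose(A @ e2, y2) and np.allclose(A @ e3, y3)

# On a general vector a1*e1 + a2*e2 + a3*e3 it must act as a1*y1 + a2*y2 + a3*y3:
a = np.array([2.0, -1.0, 3.0])
assert np.allclose(A @ a, 2 * y1 - 1 * y2 + 3 * y3)
```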
Theorem: Let $U,V$ be $F$-spaces of dimension $n$ and $m$ respectively; then the dimension of $\mathcal{L}(U,V)$ is $nm$.
Proof: Let $\{x_1,\cdots,x_n\}$ be a basis for $U$ and $\{y_1,\cdots,y_m\}$ a basis for $V$. Then, for each $p\in\{1,\cdots,n\}$ and $q\in\{1,\cdots,m\}$ the above lemma implies that there exists a unique $E_{p,q}\in\mathcal{L}(U,V)$ such that $E_{p,q}(x_k)=\delta_{k,p}\,y_q$ (so $E_{p,q}$ sends $x_p$ to $y_q$ and every other basis vector to $0$). We claim that $\left\{E_{p,q}:1\leqslant p\leqslant n,\ 1\leqslant q\leqslant m\right\}$ forms a basis for $\mathcal{L}(U,V)$. Firstly, to see that the set is linearly independent we suppose that $\alpha_{p,q}\in F$ are such that

$\displaystyle \sum_{p=1}^{n}\sum_{q=1}^{m}\alpha_{p,q}E_{p,q}=\mathbf{0}$

then, for a fixed $k$ we see that evaluating the above at $x_k$ implies that

$\displaystyle \sum_{q=1}^{m}\alpha_{k,q}y_q=0$

and since $\{y_1,\cdots,y_m\}$ is linearly independent it follows that $\alpha_{k,1}=\cdots=\alpha_{k,m}=0$. But, since $k$ was arbitrary it follows that $\alpha_{p,q}=0$ for all $p,q$. Now, to prove that the set spans $\mathcal{L}(U,V)$ let $T\in\mathcal{L}(U,V)$. Then, for each $k$ we have that

$\displaystyle T(x_k)=\sum_{q=1}^{m}\beta_{k,q}y_q$

for some scalars $\beta_{k,q}\in F$. We claim that

$\displaystyle T=\sum_{p=1}^{n}\sum_{q=1}^{m}\beta_{p,q}E_{p,q}$

To see this it suffices to show that the right hand side agrees with the left hand side on $x_1,\cdots,x_n$. To do this though we merely note that for each $k$

$\displaystyle \sum_{p=1}^{n}\sum_{q=1}^{m}\beta_{p,q}E_{p,q}(x_k)=\sum_{q=1}^{m}\beta_{k,q}y_q=T(x_k)$

from where it follows that $\left\{E_{p,q}\right\}$ is a basis. Noting that $\#\left\{E_{p,q}\right\}=nm$ finishes the argument. $\blacksquare$
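In coordinates the proof is quite transparent: for $U=\mathbb{R}^n$, $V=\mathbb{R}^m$ with standard bases, the maps $E_{p,q}$ are the $m\times n$ "matrix units", and the expansion of $T$ is just reading off its entries. A sketch (assuming NumPy; the indexing convention is mine):

```python
import numpy as np

# The E_{p,q} for U = R^n, V = R^m: m x n matrices with a single 1 in
# row q, column p, i.e. the map sending e_p to f_q and all other e_k to 0.
n, m = 3, 2
basis = []
for p in range(n):
    for q in range(m):
        E = np.zeros((m, n))
        E[q, p] = 1.0
        basis.append(E)

assert len(basis) == n * m                      # nm basis elements

# Any T in L(U, V) decomposes with coefficients beta_{p,q} = entry (q, p) of T:
T = np.arange(6, dtype=float).reshape(m, n)
recon = sum(T[q, p] * basis[m * p + q] for p in range(n) for q in range(m))
assert np.allclose(recon, T)

# Flattened to vectors, the nm matrix units are linearly independent:
M = np.column_stack([E.ravel() for E in basis])
assert np.linalg.matrix_rank(M) == n * m
```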
Corollary: If $V$ is a finite dimensional $F$-space then $\dim\mathcal{L}(V)=\left(\dim V\right)^2$.
1. Halmos, Paul R. Finite-Dimensional Vector Spaces. New York: Springer-Verlag, 1974. Print.