Abstract Nonsense

Crushing one theorem at a time

The Endomorphism Algebra

Point of post: In this post we discuss the equivalent of sections 34 and 35 of Halmos’s book.


In our last post we discussed the concept of linear transformations from an F-space \mathscr{V} to itself and how we can form the vector space \text{End}\left(\mathscr{V}\right) of all such linear transformations. It turns out, though, that this set of all linear transformations can be endowed with much more structure than that of a vector space. To motivate how special this extra structure is, consider the following question: given a vector space \mathscr{V}, what does v_1\times v_2 mean for vectors v_1,v_2\in\mathscr{V}? In general it’s meaningless, because a vector space need not come with a “multiplication”. Some vector spaces, however, do have a canonical way to define a multiplication that “makes sense”; such vector spaces are invariably called associative algebras. In particular, there is a canonical way to turn \text{End}\left(\mathscr{V}\right) into an associative algebra: namely, there is a natural way to define the product of two endomorphisms.


Let \mathscr{V} be an F-space. Then, \mathscr{V} is called an associative algebra if there exists a map

\mu:\mathscr{V}\times\mathscr{V}\to\mathscr{V}

such that for all x,y,z\in\mathscr{V}

1.                                                                 \mu\left(x,\mu(y,z)\right)=\mu\left(\mu(x,y),z\right)

2.                                                                 \mu\left(x,y+z\right)=\mu(x,y)+\mu(x,z)

3.                                                                 \mu\left(x+y,z\right)=\mu(x,z)+\mu(y,z)

4.                                                                 \alpha\mu(x,y)=\mu(\alpha x,y)=\mu(x,\alpha y)

for all \alpha\in F. If \mathscr{V} also possesses a non-zero element, denoted \mathbf{1}, such that

5.                                                                \mu\left(x,\mathbf{1}\right)=\mu\left(\mathbf{1},x\right)=x

for all x\in\mathscr{V}, then \mathscr{V} is called an associative algebra with identity, or a unital algebra. In practice one doesn’t explicitly write out \mu but denotes \mu(x,y) by the concatenation xy.
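To make the axioms concrete, here is a minimal sketch checking axioms one through five numerically for \mathbb{R}^2 equipped with complex multiplication, a standard two-dimensional associative unital algebra over \mathbb{R}. The names `mu`, `add`, and `smul` are our own, not notation from the post:

```python
# R^2 with complex multiplication: (a,b)(c,d) = (ac - bd, ad + bc).
# This makes R^2 an associative unital algebra over F = R.

def add(x, y):            # vector addition in R^2
    return (x[0] + y[0], x[1] + y[1])

def smul(a, x):           # scalar multiplication
    return (a * x[0], a * x[1])

def mu(x, y):             # the multiplication map mu: V x V -> V
    return (x[0] * y[0] - x[1] * y[1], x[0] * y[1] + x[1] * y[0])

one = (1, 0)              # the identity element 1

x, y, z, a = (1, 2), (3, -1), (0, 5), 4

assert mu(x, mu(y, z)) == mu(mu(x, y), z)                           # axiom 1
assert mu(x, add(y, z)) == add(mu(x, y), mu(x, z))                  # axiom 2
assert mu(add(x, y), z) == add(mu(x, z), mu(y, z))                  # axiom 3
assert smul(a, mu(x, y)) == mu(smul(a, x), y) == mu(x, smul(a, y))  # axiom 4
assert mu(x, one) == mu(one, x) == x                                # axiom 5
```

Of course, a check at a few sample elements is not a proof; it merely illustrates what each axiom asserts.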

We now proceed to give some examples:

Ex(1): Consider C[\mathbb{R},\mathbb{R}], the space of all continuous functions from \mathbb{R} to itself. With the usual (pointwise) definitions of addition and multiplication this is an associative algebra (and, when restricted to a compact domain and given the sup norm, a Banach algebra) with identity the constant function \mathbf{1}(x)=1.
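A quick sanity check of the pointwise operations in Ex(1); the helper `pointwise_mul` and the sample functions are illustrative choices of ours:

```python
# Pointwise product of functions, with the constant function 1 as identity.

def pointwise_mul(f, g):
    return lambda x: f(x) * g(x)

one = lambda x: 1.0       # the multiplicative identity

f = lambda x: x * x       # sample continuous functions
g = lambda x: 2 * x + 1

h = pointwise_mul(f, g)
for t in (-1.0, 0.0, 2.5):
    assert h(t) == f(t) * g(t)              # (fg)(x) = f(x)g(x)
    assert pointwise_mul(f, one)(t) == f(t) # f * 1 = f
```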

Ex(2): \mathbb{R}^3 with \mu(x,y)=x\times y, where \times is the usual cross product, is a non-associative algebra: axioms 2 through 4 hold but axiom 1 fails.
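The failure of associativity in Ex(2) is easy to exhibit on the standard basis vectors; `cross` below is a hand-rolled cross product:

```python
# The cross product on R^3 is bilinear but not associative.

def cross(x, y):
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

e1, e2 = (1, 0, 0), (0, 1, 0)

lhs = cross(cross(e1, e1), e2)   # (e1 x e1) x e2 = 0 x e2 = 0
rhs = cross(e1, cross(e1, e2))   # e1 x (e1 x e2) = e1 x e3 = -e2

assert lhs == (0, 0, 0)
assert rhs == (0, -1, 0)
assert lhs != rhs                # axiom 1 fails
```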

Ex(3): The set \text{Mat}_n\left(F\right) of all n\times n matrices over a field F, with the usual definitions of matrix multiplication and addition, is an associative unital algebra with identity I.

Remark: Note that the last example is an associative unital algebra that isn’t commutative, in the sense that, in general, for two matrices it’s not true that MN=NM. It also shows that, in general, elements of an associative unital algebra need not have inverses in the multiplicative sense.
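Both claims in the remark can be witnessed with 2\times 2 matrices; the helper `matmul` is our own minimal matrix product:

```python
# Mat_2(R) is unital and associative but not commutative, and a nonzero
# matrix (e.g. one with determinant 0) can fail to have an inverse.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[1, 1], [0, 1]]
N = [[1, 0], [1, 1]]

assert matmul(M, N) == [[2, 1], [1, 1]]
assert matmul(N, M) == [[1, 1], [1, 2]]
assert matmul(M, N) != matmul(N, M)      # MN != NM in general

P = [[1, 0], [0, 0]]                     # det P = 0, so P has no inverse
assert P[0][0] * P[1][1] - P[0][1] * P[1][0] == 0
```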

While there are many theorems about general associative unital algebras, we shall need only one, and it’s merely an observation. Namely, observe that for any x\in\mathscr{V} we have x\bold{0}=x(\bold{0}+\bold{0})=x\bold{0}+x\bold{0}, and so upon subtracting x\bold{0} from both sides, x\bold{0}=\mathbf{0}. Similarly, \mathbf{0}x=\mathbf{0}.


Given an associative algebra with identity \mathscr{A} over F, there is a canonical way to define polynomials on \mathscr{A}. Namely, we may start by inductively defining v^n for v\in\mathscr{A} by the relations v^0=\mathbf{1} and v^{n+1}=vv^{n}. Note that by associativity this definition is unambiguous. From there we can extend a polynomial \displaystyle p(x)=\sum_{j=0}^{n}a_j x^j with coefficients in F to a polynomial on \mathscr{A} by

\displaystyle p(v)=\sum_{j=0}^{n}a_jv^j=a_0\mathbf{1}+a_1v+\cdots+a_nv^n

for a_0,\cdots,a_n\in F.
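A sketch of this evaluation when \mathscr{A} is a matrix algebra; the function name `poly_at` and the sample polynomial are our own choices, and the final check is the Cayley–Hamilton identity for a diagonal matrix:

```python
# Evaluate p(v) = a_0*1 + a_1*v + ... + a_n*v^n for a square matrix v,
# with v^0 taken to be the identity matrix.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at(coeffs, v):
    """Evaluate sum_j coeffs[j] * v^j; coeffs[0] is the constant term."""
    n = len(v)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # v^0 = I
    result = [[0] * n for _ in range(n)]
    for a in coeffs:
        result = [[result[i][j] + a * power[i][j] for j in range(n)]
                  for i in range(n)]
        power = matmul(power, v)                                        # v^{j+1} = v^j v
    return result

# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3) annihilates diag(2, 3),
# since p is the characteristic polynomial of that matrix.
A = [[2, 0], [0, 3]]
assert poly_at([6, -5, 1], A) == [[0, 0], [0, 0]]
```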

Endomorphism Algebra

The rest of this post will be devoted to showing how one can canonically impose a multiplication on \text{End}\left(\mathscr{V}\right) which turns it into an associative unital algebra. The surprising thing is what this multiplication turns out to be: function composition. In essence we will show that we can consider

\circ:\text{End}\left(\mathscr{V}\right)\times\text{End}\left(\mathscr{V}\right)\to\text{End}\left(\mathscr{V}\right):\left(T,T'\right)\mapsto T\circ T'

as a multiplication map, with identity \text{id}_\mathscr{V}\overset{\text{def.}}{=}\mathbf{1}. But this is just grunt work, so let’s get to it:

Theorem: \text{End}\left(\mathscr{V}\right) with the usual addition and function composition is an associative unital algebra with identity \text{id}_\mathscr{V}\overset{\text{def.}}{=}\mathbf{1}.

Proof: We have already established that \text{End}\left(\mathscr{V}\right) is an F-space, and so it suffices to check axioms one through five as listed in the algebra section. We do this axiom by axiom.

1. This follows since function composition, in general, is associative.

2. This follows since

\begin{aligned}\left(T_1\left(T_2+T_3\right)\right)(x)&= T_1\left(\left(T_2+T_3\right)(x)\right)\\ &=T_1\left(T_2(x)+T_3(x)\right)\\ &=T_1(T_2(x))+T_1(T_3(x))\end{aligned}

for all x\in\mathscr{V}, and thus T_1\left(T_2+T_3\right)=T_1T_2+T_1T_3.

3. This is done using the exact same method.

4. This follows since

\begin{aligned}\left(\alpha\left(T_1T_2\right)\right)(x) &=\alpha T_1(T_2(x))\\ &=\left(\alpha T_1\right)\left(T_2(x)\right)\\ &=T_1\left(\alpha T_2(x)\right)\\ &=T_1\left(\left(\alpha T_2\right)(x)\right)\end{aligned}

for all x\in\mathscr{V}, where the third equality uses the linearity of T_1; thus \alpha\left(T_1T_2\right)=\left(\alpha T_1\right)T_2=T_1\left(\alpha T_2\right).

5. This follows directly from the definition of the identity function.

Thus, since all five axioms are satisfied the conclusion follows. \blacksquare
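The key computation in the proof (axiom 2, which needs the linearity of T_1) can be mirrored directly in code, representing endomorphisms of \mathbb{R}^2 as Python functions; the sample linear maps below are arbitrary choices of ours:

```python
# Addition and composition of endomorphisms of R^2, as in the proof.

def add_maps(S, T):
    return lambda v: tuple(s + t for s, t in zip(S(v), T(v)))

def compose(S, T):
    return lambda v: S(T(v))     # "multiplication" is composition

T1 = lambda v: (v[0] + v[1], v[0])   # sample linear maps
T2 = lambda v: (2 * v[0], -v[1])
T3 = lambda v: (v[1], 3 * v[0])

lhs = compose(T1, add_maps(T2, T3))                    # T1(T2 + T3)
rhs = add_maps(compose(T1, T2), compose(T1, T3))       # T1*T2 + T1*T3

for v in [(1, 0), (0, 1), (2, -3)]:
    assert lhs(v) == rhs(v)      # axiom 2, checked at sample vectors
```

The check succeeds precisely because T_1 is linear; with a non-linear T_1 the two sides would generally differ.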

Note that while \text{End}\left(\mathscr{V}\right) is an associative unital algebra, it is far, far from being as nice as a field. Explicitly, the multiplication (for spaces of dimension greater than one) admits zero divisors, as can be seen by recalling the maps T_{i,j} (which send the basis vector x_i to x_j and all other basis vectors to \bold{0}) and noting that T_{1,2}T_{1,2}=\bold{0} even though T_{1,2}\neq\bold{0}. Moreover, the multiplication isn’t even commutative (for spaces of dimension greater than one), as can be seen by noting that \left(T_{1,2}T_{1,1}\right)(x_1)=x_2 but \left(T_{1,1}T_{1,2}\right)(x_1)=\bold{0}.
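These two failures can be checked concretely by writing T_{1,2} and T_{1,1} as matrices acting on column vectors (our encoding; the product below is composition, applied right-to-left):

```python
# T_{i,j} sends basis vector x_i to x_j and other basis vectors to 0.
# As 2x2 matrices acting on columns: T12 = [[0,0],[1,0]], T11 = [[1,0],[0,0]].

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T12 = [[0, 0], [1, 0]]
T11 = [[1, 0], [0, 0]]
Z   = [[0, 0], [0, 0]]

assert matmul(T12, T12) == Z                   # a nonzero zero divisor
assert matmul(T12, T11) != matmul(T11, T12)    # not commutative
assert matmul(T11, T12) == Z                   # (T11 T12)(x1) = 0
```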


1. Golan, Jonathan S. The Linear Algebra a Beginning Graduate Student Ought to Know. Dordrecht: Springer, 2007. Print.

2. Halmos, Paul R. Finite-Dimensional Vector Spaces. New York: Springer-Verlag, 1974. Print.

3. Simmons, George Finlay. Introduction to Topology and Modern Analysis. Malabar, FL: Krieger Pub., 2003. Print.


November 22, 2010 - Posted by | Algebra, Halmos, Linear Algebra


