## Invertible Linear Transformations (Pt. I)

**Point of post:** In this post we discuss the equivalent of Halmos's section 36, but in a more general setting.

*Motivation*

Recall that in our last post we discussed how to turn $\text{End}(V)$ into an associative unital algebra by defining a 'multiplication map'

$$\text{End}(V)\times\text{End}(V)\to\text{End}(V):(S,T)\mapsto S\circ T$$

We saw, though, that this multiplication has some downsides (it has zero divisors and isn't, in general, commutative). There is, however, a nice property of this algebra which isn't enjoyed by all algebras. In particular, it is not 'uncommon' for elements of $\text{End}(V)$ to have multiplicative inverses, in the usual sense. Now, at first glance this doesn't seem that 'great' a property; after all, not *all* of the elements of $\text{End}(V)$ have multiplicative inverses, just *some* do. One may expect that, in general, this is a common occurrence among algebras, and even more common among associative unital algebras. In fact, this isn't the case. To see how badly the existence of multiplicative inverses can fail, consider for example the polynomial ring $F[x]$. It's fairly plain to see that $F[x]$ is, in fact, an associative unital algebra over $F$ with the usual polynomial addition and multiplication. That said, a little thought shows that $p(x)\in F[x]$ has a multiplicative inverse *only if* $p(x)$ is a nonzero constant (i.e. $\deg p=0$).
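The degree argument behind the polynomial example can be spot-checked concretely: over a field, degrees add under multiplication, so a product with a nonconstant factor can never be the constant $1$. A minimal sketch in Python, with polynomials as coefficient lists (`poly_mul` is my own illustrative helper, not anything from the post):

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists; p[i] is the x^i coefficient
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Over a field, deg(pq) = deg p + deg q, so p*q can only be the constant 1
# when both factors have degree 0, i.e. are nonzero constants.
p = [1, 1]                     # the polynomial 1 + x
print(poly_mul(p, [2]))        # [2, 2] -- still degree 1, never the unit [1]
print(poly_mul([2], [0.5]))    # [1.0]  -- a nonzero constant IS invertible
```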

Thus, this post will explore this 'nice' quality of the multiplication map on $\text{End}(V)$.

*Invertible Elements In Associative Unital Algebras*

Let $\mathcal{A}$ be an associative unital algebra with identity element $e$. We say that $a\in\mathcal{A}$ is *invertible* if there exists some $b\in\mathcal{A}$ for which

$$ab=ba=e$$

in which case we call $b$ a (soon to be *the*) inverse of $a$. We denote the set of all invertible elements of $\mathcal{A}$ by $\mathcal{A}^\times$. Some theorems come immediately from this definition, first and foremost:

**Theorem:** *Let $\mathcal{A}$ be an associative unital algebra with identity $e$ and let $a\in\mathcal{A}$. Then, if $b$ and $c$ are both inverses of $a$, then $b=c$.*

**Proof:** We merely note that by definition $ab=ba=e$ and $ac=ca=e$, and so $b=be=b(ac)$; by associativity $b(ac)=(ba)c$, but by assumption $ba=e$, so this implies that $b=(ba)c=ec=c$, from where the conclusion follows.

*Remark:* Now that we know that for $a\in\mathcal{A}^\times$ the inverse of $a$ is unique, we may unambiguously denote it by $a^{-1}$.

The next logical thing to ask is: does $a,b\in\mathcal{A}^\times$ imply that $ab$ is invertible? What about $\alpha a$ for $\alpha\in F-\{0\}$? And $a^{-1}$? We take care of these three things in the next theorem:

**Theorem:** *Let $\mathcal{A}$ be an associative unital $F$-algebra with identity $e$. Then, if $a,b\in\mathcal{A}^\times$ then $\alpha a,\;ab,\;a^{-1}\in\mathcal{A}^\times$, where $\alpha\in F-\{0\}$.*

**Proof:** To prove that $\alpha a\in\mathcal{A}^\times$ it suffices to find an inverse for it. To do this we note that since $\alpha\neq 0$, $\alpha$ has an inverse (in the sense of the field operations of $F$) given by $\alpha^{-1}$. We merely note then that

$$(\alpha a)\left(\alpha^{-1}a^{-1}\right)=\left(\alpha\alpha^{-1}\right)\left(aa^{-1}\right)=e\quad\text{and}\quad\left(\alpha^{-1}a^{-1}\right)(\alpha a)=\left(\alpha^{-1}\alpha\right)\left(a^{-1}a\right)=e$$

To prove that $ab\in\mathcal{A}^\times$ we notice that by definition

$$(ab)\left(b^{-1}a^{-1}\right)=a\left(bb^{-1}\right)a^{-1}=aea^{-1}=aa^{-1}=e$$

and so $(ab)\left(b^{-1}a^{-1}\right)=e$ and, by a symmetric computation, $\left(b^{-1}a^{-1}\right)(ab)=e$. Lastly, to prove that $a^{-1}\in\mathcal{A}^\times$ we note that

$$a^{-1}a=aa^{-1}=e$$

from where the conclusion follows.
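The algebra of $2\times 2$ real matrices is a concrete associative unital algebra, so the inverse formulas the proof produces, $(ab)^{-1}=b^{-1}a^{-1}$ (note the reversed order) and $(\alpha a)^{-1}=\alpha^{-1}a^{-1}$, can be spot-checked numerically. A minimal sketch with hand-rolled helpers (the helper names are my own):

```python
def mat_mul(A, B):
    # product of 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "not invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

def scale(alpha, A):
    # scalar multiple of a matrix
    return [[alpha * x for x in row] for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]

# (AB)^{-1} = B^{-1} A^{-1}  -- note the reversed order
print(mat_inv(mat_mul(A, B)))           # [[3.5, -1.5], [-2.0, 1.0]]
print(mat_mul(mat_inv(B), mat_inv(A)))  # [[3.5, -1.5], [-2.0, 1.0]]

# (alpha A)^{-1} = alpha^{-1} A^{-1}
print(mat_inv(scale(2.0, A)))           # [[-1.0, 0.5], [0.75, -0.25]]
print(scale(0.5, mat_inv(A)))           # [[-1.0, 0.5], [0.75, -0.25]]
```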

Of course, one may wonder whether $a,b\in\mathcal{A}^\times$ implies that $a+b\in\mathcal{A}^\times$, so that $\mathcal{A}^\times$ becomes a contender to be a linear subspace (and in fact, a subalgebra) of $\mathcal{A}$. The answer is unfortunately no. Note that $0$ is not invertible since $0a=0\neq e$ for all $a\in\mathcal{A}$. But, if $a\in\mathcal{A}^\times$ the above implies that $-a=(-1)a\in\mathcal{A}^\times$, yet by what was just said we know that $a+(-a)=0\notin\mathcal{A}^\times$.
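This failure of closure under addition is easy to witness concretely in the $2\times 2$ matrix algebra: the identity matrix and its negative are both invertible, but their sum is the zero matrix. A small numeric sketch (helper name is mine, chosen for illustration):

```python
def det2(A):
    # determinant of a 2x2 matrix: nonzero exactly when A is invertible
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

I = [[1, 0], [0, 1]]
neg_I = [[-1, 0], [0, -1]]   # (-1)*I, invertible (it is its own inverse)
S = [[I[i][j] + neg_I[i][j] for j in range(2)] for i in range(2)]

print(det2(I), det2(neg_I))  # 1 1 -- both summands are invertible
print(S, det2(S))            # [[0, 0], [0, 0]] 0 -- but the sum is not
```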

*Invertible Linear Homomorphisms*

Recalling that if $V$ is an $n$-dimensional $F$-space then $\text{End}(V)$ is an associative unital algebra with multiplication given by function composition and identity element $\text{id}_V$, we may apply the discussion in the previous section to $\text{End}(V)$. Recalling though the definition of the multiplication and identity in $\text{End}(V)$, we see that $T\in\text{End}(V)$ is invertible if and only if there exists some $S\in\text{End}(V)$ such that

$$T\circ S=S\circ T=\text{id}_V$$

Thus, $T$ is invertible if and only if $T$ has a set-theoretic inverse which is also a linear transformation. Recall though that we proved in an earlier post that if a linear transformation $T$ possesses a set-theoretic inverse, denoted $T^{-1}$, then $T^{-1}$ is automatically a linear transformation. Thus, we are left with the following satisfying theorem:

**Theorem:** *Let $V$ be an $n$-dimensional $F$-space. Then $T\in\text{End}(V)$ is invertible if and only if $T$ possesses a set-theoretic inverse. Moreover, if $T$ is invertible, its inverse is the set-theoretic inverse $T^{-1}$.*

In light of the above theorem it's evident that the study of when a linear transformation possesses a set-theoretic inverse is crucial to the study of when $T$ is invertible. So, before we start this study we introduce some terminology. If $T\in\text{End}(V)$ is injective we say that $T$ is a *monomorphism*. If $T$ is surjective we call it an *epimorphism*. Finally, if $T$ is bijective we call $T$ an *isomorphism*. Since the concept comes up a lot we denote the set of all isomorphisms on $V$ by $\text{GL}(V)$, and thus, to point out the obvious, with this definition $\text{GL}(V)=\left(\text{End}(V)\right)^\times$. Lastly, for $T\in\text{End}(V)$ we denote $\ker T=\{v\in V:T(v)=0\}$.

*Remark:* It is also common to denote what we called $\text{GL}(V)$ by $\text{Aut}(V)$. But this invites confusion when dealing with $\text{Aut}(G)$ for a group $G$.

We note from our initial section that $\text{GL}(V)$ is closed under inversion, multiplication, and scalar multiplication. In fact, realizing that $\text{id}_V\in\text{GL}(V)$ pretty much proves that $\text{GL}(V)$ is a group under multiplication. So, we begin our characterization of these maps:

**Theorem:** *Let $T\in\text{End}(V)$. Then $T$ is a monomorphism if and only if $\ker T=\{0\}$.*

**Proof:** First suppose that $T$ is a monomorphism. Then, since $T(0)=0$, injectivity tells us that $0$ is the *only* vector mapped to $0$, and so $\ker T=\{0\}$.

Conversely, suppose that $\ker T=\{0\}$. Then we see that $T(x)=T(y)$ implies $T(x)-T(y)=0$, and since $T$ is a linear transformation we may conclude that $T(x-y)=0$, and thus $x-y\in\ker T=\{0\}$, or $x=y$.

The conclusion follows.
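The content of the theorem is easy to see numerically: a singular matrix has a nonzero kernel vector, and adding that vector to any input produces a second, distinct input with the same image. A small sketch (the matrix and helper name are my own, chosen for illustration):

```python
def apply2(A, v):
    # apply a 2x2 matrix to a vector
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[1, 2], [2, 4]]      # singular: the second row is twice the first
k = [2, -1]               # a nonzero kernel vector: A*k = 0

print(apply2(A, k))                # [0, 0] -- so ker A != {0}
x = [5, 7]
y = [x[0] + k[0], x[1] + k[1]]     # x + k, a different vector
print(apply2(A, x), apply2(A, y))  # same image twice: A is not injective
```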

*Remark:* Note that we did not use the 'full extent' of the linearity of $T$, in the sense that we never used the fact that $T(\alpha x)=\alpha T(x)$. This is because the above theorem holds for homomorphisms between groups, and every vector space is an abelian group.

**References:**

1. Halmos, Paul R. *Finite-Dimensional Vector Spaces*. New York: Springer-Verlag, 1974. Print.
