# Abstract Nonsense

## Invertible Linear Transformations (Pt. I)

Point of post: In this post we cover the material of Halmos’s section 36, but in a more general setting.

### Motivation

Recall that in our last post we discussed how to turn $\text{End}\left(\mathscr{V}\right)$ into an associative unital algebra by defining a ‘multiplication map’

$\mu:\text{End}\left(\mathscr{V}\right)\times\text{End}\left(\mathscr{V}\right)\to\text{End}\left(\mathscr{V}\right):\left(T,T'\right)\mapsto T\circ T'$

We saw, though, that this multiplication has some downsides: it has zero divisors and is not, in general, commutative. There is, however, a nice property of this algebra which isn’t enjoyed by all algebras: it is not uncommon for elements of $\text{End}\left(\mathscr{V}\right)$ to have multiplicative inverses, in the usual sense. At first glance this doesn’t seem like much of a property; after all, not all elements of $\text{End}\left(\mathscr{V}\right)$ have multiplicative inverses, just some. One might expect this to be a common occurrence among algebras, and even more common among associative unital algebras, but in fact it isn’t. To see how badly the existence of multiplicative inverses can fail, consider the polynomial ring $\mathbb{R}[x]$. It is plain that $\mathbb{R}[x]$ is an associative unital algebra over $\mathbb{R}$ under the usual polynomial addition and multiplication, yet a little thought (comparing degrees: $\deg(pq)=\deg p+\deg q$) shows that $p(x)\in\mathbb{R}[x]$ has a multiplicative inverse if and only if $p(x)$ is a nonzero constant.
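The degree argument can be made concrete with a small Python sketch (representing polynomials as coefficient lists, an illustrative convention not taken from the post): since $\deg(pq)=\deg p+\deg q$, any inverse of $p$ forces $\deg p=0$.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of x)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def degree(p):
    """Degree of a nonzero polynomial: largest index with a nonzero coefficient."""
    return max(i for i, c in enumerate(p) if c != 0)

one = [1.0]  # the multiplicative identity 1 of R[x]

# A nonzero constant is invertible: 2 * (1/2) = 1.
assert poly_mul([2.0], [0.5]) == one

# x has no inverse: deg(x * q) = 1 + deg(q) >= 1, so x*q can never equal 1.
x = [0.0, 1.0]
q = [3.0, -1.0, 2.0]  # an arbitrary nonzero polynomial
assert degree(poly_mul(x, q)) == 1 + degree(q)
```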

Thus, this post will explore this ‘nice’ quality of the multiplication map on $\text{End}\left(\mathscr{V}\right)$.

### Invertible Elements In Associative Unital Algebras

Let $\mathscr{A}$ be an associative unital algebra with identity element $\mathbf{1}$. We say that $x\in\mathscr{A}$ is invertible if there exists some $y\in\mathscr{A}$ for which

$xy=yx=\mathbf{1}$

in which case we call $y$ a (soon to be the) inverse of $x$.  We denote the set of all invertible elements of $\mathscr{A}$ by $\mathscr{A}^{\times}$. Some theorems come immediately from this definition, first and foremost:

Theorem: Let $\mathscr{A}$ be an associative unital algebra with identity $\mathbf{1}$ and let $x\in\mathscr{A}^{\times}$. If $y$ and $z$ are both inverses of $x$, then $y=z$.

Proof: We merely note that by definition $xz=\mathbf{1}$, and so $y(xz)=y$; by associativity $(yx)z=y$. Since $yx=\mathbf{1}$, this gives $z=\mathbf{1}z=y$, from which the conclusion follows. $\blacksquare$

Remark: Now that we know that for $x\in\mathscr{A}^{\times}$ the inverse of $x$ is unique we may unambiguously denote it by $x^{-1}$.

The next logical question is whether $x,y\in\mathscr{A}^{\times}$ implies that $xy$ is invertible. What about $\alpha x$ for $\alpha\in F-\{0\}$? And $x^{-1}$? We take care of these three things in the next theorem:

Theorem: Let $\mathscr{A}$ be an associative unital $F$-algebra with identity $\mathbf{1}$. If $x,y\in\mathscr{A}^{\times}$ and $\alpha\in F-\{0\}$, then $\alpha x,\;x^{-1},\;xy\in\mathscr{A}^{\times}$.

Proof: To prove that $\alpha x\in\mathscr{A}^{\times}$ it suffices to find an inverse for it. To do this we note that since $\alpha\in F-\{0\}$ that $\alpha$ has an inverse (in the sense of the field operations of $F$) given by $\alpha^{-1}$. We merely note then that

$(\alpha x)(\alpha^{-1}x^{-1})=(\alpha\alpha^{-1})(xx^{-1})=1\mathbf{1}=\mathbf{1}$

and, by the same computation, $(\alpha^{-1}x^{-1})(\alpha x)=\mathbf{1}$, so that $(\alpha x)^{-1}=\alpha^{-1}x^{-1}$.

To prove that $x^{-1}\in\mathscr{A}^{\times}$ we notice that by definition

$xx^{-1}=x^{-1}x=\mathbf{1}$

and so $x^{-1}\in\mathscr{A}^{\times}$ and $\left(x^{-1}\right)^{-1}=x$. Lastly, to prove that $xy\in\mathscr{A}^{\times}$ we note that

$(xy)\left(y^{-1}x^{-1}\right)=x\left(yy^{-1}\right)x^{-1}=x\mathbf{1}x^{-1}=xx^{-1}=\mathbf{1}$

and symmetrically $\left(y^{-1}x^{-1}\right)(xy)=\mathbf{1}$, so that $(xy)^{-1}=y^{-1}x^{-1}$,

from where the conclusion follows. $\blacksquare$
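As a numerical sanity check (a sketch using invertible $2\times 2$ real matrices as a concrete instance of an associative unital algebra; the particular matrices are illustrative choices), each part of the theorem can be verified with NumPy:

```python
import numpy as np

x = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible: det = 1
y = np.array([[1.0, 3.0], [0.0, 1.0]])   # invertible: det = 1
alpha = 5.0

# (alpha x)^{-1} = alpha^{-1} x^{-1}
assert np.allclose(np.linalg.inv(alpha * x), np.linalg.inv(x) / alpha)

# (x^{-1})^{-1} = x
assert np.allclose(np.linalg.inv(np.linalg.inv(x)), x)

# (xy)^{-1} = y^{-1} x^{-1}  (note the reversed order)
assert np.allclose(np.linalg.inv(x @ y), np.linalg.inv(y) @ np.linalg.inv(x))
```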

Of course, one may wonder whether $x,y\in\mathscr{A}^{\times}$ implies that $x+y\in\mathscr{A}^{\times}$, so that $\mathscr{A}^{\times}$ becomes a contender to be a linear subspace (and, in fact, a subalgebra) of $\mathscr{A}$. The answer is unfortunately no. Note that $\mathbf{0}$ is not invertible, since $\mathbf{0}x=\mathbf{0}\ne\mathbf{1}$ for all $x\in\mathscr{A}$. But if $x\in\mathscr{A}^{\times}$, the above implies that $-x=(-1)x\in\mathscr{A}^{\times}$, yet by what was just said $x+(-x)=\mathbf{0}\notin\mathscr{A}^{\times}$.
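The identity matrix and its negative give the simplest concrete instance of this failure (again a NumPy sketch with matrices standing in for the algebra): both summands are invertible, but their sum is the zero matrix, which is not.

```python
import numpy as np

I = np.eye(2)
neg_I = -np.eye(2)

# Both summands are invertible (each is its own inverse).
assert np.allclose(I @ I, np.eye(2))
assert np.allclose(neg_I @ neg_I, np.eye(2))

# Their sum is the zero matrix, which has no inverse: 0 @ A = 0 != I for all A.
s = I + neg_I
assert np.allclose(s, np.zeros((2, 2)))
assert abs(np.linalg.det(s)) < 1e-12   # singular, hence not invertible
```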

### Invertible Linear Homomorphisms

Recalling that if $\mathscr{V}$ is an $n$-dimensional $F$-space then $\text{End}\left(\mathscr{V}\right)$ is an associative unital algebra, with multiplication given by function composition and identity element $\text{id}_{\mathscr{V}}$, we may apply the discussion of the previous section to $\text{End}\left(\mathscr{V}\right)$. Unwinding the definitions of the multiplication and identity in $\text{End}\left(\mathscr{V}\right)$, we see that $T\in\text{End}\left(\mathscr{V}\right)$ is invertible if and only if there exists some $S\in\text{End}\left(\mathscr{V}\right)$ such that

$T\circ S=S\circ T=\text{id}_{\mathscr{V}}$

Thus, $T$ is invertible if and only if $T$ has a set-theoretic inverse which is also a linear transformation. Recall, though, that we proved in an earlier post that if a linear transformation possesses a set-theoretic inverse $T^{-1}$, then $T^{-1}$ is itself a linear transformation. Thus, we are left with the following satisfying theorem:

Theorem: Let $\mathscr{V}$ be an $n$-dimensional $F$-space. Then $T\in\text{End}\left(\mathscr{V}\right)$ is invertible if and only if $T$ possesses a set-theoretic inverse. Moreover, if $T$ is invertible, its inverse is the set-theoretic inverse $T^{-1}$.

In light of the above theorem, it is evident that the study of when a linear transformation $T$ possesses a set-theoretic inverse is crucial to the study of when $T$ is invertible. So, before we start this study we introduce some terminology. If $T\in\text{End}\left(\mathscr{V}\right)$ is injective we say that $T$ is a monomorphism; if $T$ is surjective we call it an epimorphism; and if $T$ is bijective we call $T$ an isomorphism. Since the concept comes up a lot, we denote the set of all isomorphisms on $\mathscr{V}$ by $\text{GL}\left(\mathscr{V}\right)$; to point out the obvious, with this definition $\left[\text{End}\left(\mathscr{V}\right)\right]^{\times}=\text{GL}\left(\mathscr{V}\right)$. Lastly, for $T\in\text{End}\left(\mathscr{V}\right)$ we define $\ker T=\left\{v\in\mathscr{V}:T(v)=\mathbf{0}\right\}$.

Remark: It is also common to denote what we called $\text{GL}\left(\mathscr{V}\right)$ by $\text{Aut}\left(\mathscr{V}\right)$, but this invites confusion with $\text{Aut}\left(G\right)$ for a group $G$.

We note from our initial section that $\text{GL}\left(\mathscr{V}\right)$ is closed under inversion, multiplication, and nonzero scalar multiplication. In fact, observing that $\text{id}_{\mathscr{V}}\in\text{GL}\left(\mathscr{V}\right)$ essentially completes the proof that $\text{GL}\left(\mathscr{V}\right)$ is a group under multiplication. So, we begin our characterization of these maps.
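For intuition, the group properties just listed can be checked numerically (a NumPy sketch with invertible $2\times 2$ matrices standing in for $\text{GL}\left(\mathscr{V}\right)$; the particular matrices are illustrative choices):

```python
import numpy as np

a = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([[3.0, 0.0], [1.0, 1.0]])
I = np.eye(2)

# Closure: the product of invertible maps is invertible (nonzero determinant).
assert np.linalg.det(a @ b) != 0

# Identity: id_V is invertible and acts trivially.
assert np.allclose(I @ a, a) and np.allclose(a @ I, a)

# Inverses: a^{-1} exists and composes with a to the identity.
assert np.allclose(a @ np.linalg.inv(a), I)

# Associativity is inherited from composition of functions.
c = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose((a @ b) @ c, a @ (b @ c))
```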

Theorem: Let $T\in\text{End}\left(\mathscr{V}\right)$. Then, $T$ is a monomorphism if and only if $\ker T=\{\mathbf{0}\}$.

Proof: First suppose that $T$ is a monomorphism. Since $T(\mathbf{0})=\mathbf{0}$ and $T$ is injective, no other vector can map to $\mathbf{0}$, and so $\ker T=T^{-1}(\{\mathbf{0}\})=\{\mathbf{0}\}$.

Conversely, suppose that $\ker T=\{\mathbf{0}\}$. Then $T(x)=T(y)$ implies $T(x)-T(y)=\mathbf{0}$, and since $T$ is a linear transformation we may conclude that $T(x-y)=\mathbf{0}$; thus $x-y=\mathbf{0}$, or $x=y$.

The conclusion follows. $\blacksquare$

Remark: Note that we did not use the ‘full extent’ of the linearity of $T$, in the sense that we never used the fact that $T(\alpha x)=\alpha T(x)$. This is because the above theorem holds for homomorphisms between groups, and every vector space is an abelian group.
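The kernel criterion is easy to see numerically (a NumPy sketch; the matrices below are illustrative choices): a matrix with a nontrivial kernel cannot be injective, since adding a kernel vector to the input leaves the image unchanged.

```python
import numpy as np

# T has nontrivial kernel: the second row is twice the first.
T = np.array([[1.0, 2.0], [2.0, 4.0]])
k = np.array([2.0, -1.0])          # T @ k = 0, so k is a nonzero kernel vector
assert np.allclose(T @ k, 0)

# Two distinct inputs with the same image: T is not a monomorphism.
x = np.array([1.0, 1.0])
assert np.allclose(T @ x, T @ (x + k))

# By contrast, an invertible matrix has trivial kernel: T2 @ v = 0 forces v = 0.
T2 = np.array([[1.0, 2.0], [0.0, 1.0]])
v = np.linalg.solve(T2, np.zeros(2))
assert np.allclose(v, np.zeros(2))
```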

References:

1. Halmos, Paul R. *Finite-Dimensional Vector Spaces*. New York: Springer-Verlag, 1974. Print.

November 30, 2010
