# Abstract Nonsense

## Tensor Algebra and Exterior Product (Pt. VI)

Point of Post: This is a continuation of this post.

$\text{ }$

A cool corollary of this pointed out in [6] is the following:

$\text{ }$

Theorem: Let $f:R^n\to R^m$ be an $R$-map, where $R$ is a commutative ring. If $f$ is mono, then $n\leqslant m$, and if $f$ is epi, then $m\leqslant n$.

Proof: Suppose that $f$ is epi. Then $\Lambda^m(f):\Lambda^m(R^n)\to\Lambda^m(R^m)$ is epi. But $\Lambda^m(R^m)\cong R$, and thus $\Lambda^m(R^n)\ne 0$, which by a previous result means that we can't have $m>n$; thus $m\leqslant n$. If $f$ is mono, then since these are free modules we have that $\Lambda^n(f):\Lambda^n(R^n)\to\Lambda^n(R^m)$ is mono. Since $\Lambda^n(R^n)\cong R$, this implies that $\Lambda^n(R^m)\ne 0$, and similar logic gives $n\leqslant m$. $\blacksquare$

$\text{ }$

I’m not one hundred percent sure that there isn’t any circularity in this argument (point it out if there is!), but it seems to me that combining these two implies:

$\text{ }$

Theorem: Commutative rings have the IBN property.

$\text{ }$

The standard proof of this, if you recall, is that if $R^n\cong R^m$ as $R$-modules then $(R/\mathfrak{m})\otimes_R R^n\cong (R/\mathfrak{m})\otimes_R R^m$ as $R/\mathfrak{m}$-modules (where $\mathfrak{m}$ is some maximal ideal afforded to us by Krull’s theorem), and from basic module theory this just says that $(R/\mathfrak{m})^n\cong(R/\mathfrak{m})^m$ as $R/\mathfrak{m}$-modules. This reduces us to verifying that fields have the IBN property, which is the standard “exchange lemma” proof (the Steinitz exchange lemma, to give it its name). My point in bringing this up is that to do that proof quickly, and well, one needs a working knowledge of both tensor products and maximal ideals, AND one needs to already know the proof for fields. Thus, if there is no circularity in the above argument, it is technically much simpler than the “standard” one.

$\text{ }$

### Relation to Determinants

$\text{ }$

What I’d now like to discuss is how, in particular, exterior powers of maps on f.g. free modules relate to determinants.

$\text{ }$

Recall that if $M$ is an f.g. free $R$-module with an ordered basis $\mathcal{B}=(e_1,\cdots,e_n)$, there is an association that takes an $R$-map $f:M\to M$ to an $n\times n$ matrix $[f]_\mathcal{B}\in\text{Mat}_n(R)$, defined by making the $(i,j)^{\text{th}}$ entry of $[f]_\mathcal{B}$ the unique $a_{i,j}\in R$ such that $\displaystyle f(e_j)=\sum_{i=1}^{n}a_{i,j}e_i$. The association $f\mapsto [f]_\mathcal{B}$ is actually an $R$-algebra isomorphism $\text{End}_R(M)\to\text{Mat}_n(R)$ (the proof is the same as that for the case of fields). Moreover, if we define $\phi$ to be the unique $R$-isomorphism $\phi:M\to R^n$ with $\phi(e_i)=(0,\cdots,\underbrace{1}_{i^{\text{th}}},0,\cdots,0)$, then $f=\phi^{-1}\circ [f]_\mathcal{B}\circ\phi$, where $[f]_\mathcal{B}$ acts on $R^n$ in the usual way (once again, the proof is the same as in the case of fields).
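As a quick sanity check of this column-by-column description, here is a minimal sketch with $R=\mathbb{Z}$ and the standard basis; the map $f$ below is a made-up example, not one from the post:

```python
# Sketch: compute the matrix [f]_B of an R-map f : R^2 -> R^2 (here R = Z)
# with respect to the standard ordered basis, column by column.
# The map f is a made-up Z-linear example.

def f(v):
    x, y = v
    return (2 * x + y, 3 * x - y)

basis = [(1, 0), (0, 1)]

# The j-th column of [f]_B lists the coordinates of f(e_j) in the basis.
columns = [f(e) for e in basis]
matrix = [[columns[j][i] for j in range(2)] for i in range(2)]  # columns -> rows

print(matrix)  # [[2, 1], [3, -1]]
```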

$\text{ }$

Thus, just as in the case of vector spaces, we can think of the endomorphisms $\text{End}_R(M)$ as matrices, as long as we keep track of bases. Moreover, we can define the determinant $\det$ of a matrix $A=[a_{i,j}]$ in $\text{Mat}_n(R)$ by the rule

$\text{ }$

$\displaystyle \det(A)=\sum_{\sigma\in S_n}\text{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}$
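This Leibniz formula makes sense over any commutative ring, since it uses only addition and multiplication. A minimal sketch with integer entries (the function names are made up for illustration):

```python
# Sketch: the Leibniz formula det(A) = sum over sigma of sgn(sigma) * prod a_{i,sigma(i)},
# implemented over Z. Works verbatim over any commutative ring.
from itertools import permutations

def sgn(sigma):
    # sign of a permutation via its inversion count
    inversions = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det([[1, 2], [3, 4]]))  # -2
```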

$\text{ }$

Moreover, we can still define the adjugate matrix $\text{adj}(A)$ to be the matrix whose $(i,j)^{\text{th}}$ entry is obtained by deleting the $j^{\text{th}}$ row and $i^{\text{th}}$ column of $A$, taking the determinant, and multiplying by $(-1)^{i+j}$. We see then, by the same proof as in basic linear algebra, that

$\text{ }$

$A\,\text{adj}(A)=\text{adj}(A)A=\det(A)I$

$\text{ }$

And thus, we can see that $A\in\text{GL}_n(R)$ (i.e. $\text{Mat}_n(R)^\times$) if and only if $\det(A)\in R^\times$.
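Both claims can be checked concretely over $R=\mathbb{Z}$; here is a minimal sketch (all function names are made up for illustration):

```python
# Sketch: the adjugate identity A adj(A) = det(A) I, checked over Z.

def det(A):
    # cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def adj(A):
    n = len(A)
    def minor(r, c):
        return [[A[i][j] for j in range(n) if j != c] for i in range(n) if i != r]
    # (i,j) entry: delete row j and column i, take det, multiply by (-1)^{i+j}
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 0], [1, 3]]
print(matmul(A, adj(A)))  # [[6, 0], [0, 6]], i.e. det(A) * I with det(A) = 6
```

Note that this $A$ illustrates the invertibility criterion: $\det(A)=6$ is a unit in $\mathbb{Q}$ but not in $\mathbb{Z}$, so $A$ is invertible over $\mathbb{Q}$ but not over $\mathbb{Z}$.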

$\text{ }$

What we’d like to show is that if $\mathcal{B}=(e_1,\cdots,e_n)$ is an ordered basis for $M$ and $f\in\text{End}_R(M)$ then

$\text{ }$

$\Lambda^n(f)(e_1\wedge\cdots\wedge e_n)=\det([f]_\mathcal{B})(e_1\wedge\cdots\wedge e_n)$

$\text{ }$

Indeed, suppose that $[f]_\mathcal{B}=[a_{i,j}]$ then

$\text{ }$

\begin{aligned}\Lambda^n(f)(e_1\wedge\cdots\wedge e_n) &= f(e_1)\wedge\cdots\wedge f(e_n)\\ &= \left(\sum_{i_1=1}^{n}a_{i_1,1}e_{i_1}\right)\wedge\cdots\wedge\left(\sum_{i_n=1}^{n}a_{i_n,n}e_{i_n}\right)\\ &= \sum_{i_1,\cdots,i_n=1}^n a_{i_1,1}\cdots a_{i_n,n}(e_{i_1}\wedge\cdots\wedge e_{i_n})\end{aligned}

$\text{ }$

Now, any $(i_1,\cdots,i_n)$ with a repeated index makes $e_{i_1}\wedge\cdots\wedge e_{i_n}=0$, so we may restrict the sum to tuples $(i_1,\cdots,i_n)=(\sigma(1),\cdots,\sigma(n))$ for permutations $\sigma\in S_n$. Reindexing the sum by $\sigma\mapsto\sigma^{-1}$ (which turns $\prod_i a_{\sigma(i),i}$ into $\prod_i a_{i,\sigma(i)}$ and, since $\text{sgn}(\sigma^{-1})=\text{sgn}(\sigma)$, leaves the wedge unchanged), we see that

$\text{ }$

$\displaystyle \Lambda^n(f)(e_1\wedge\cdots\wedge e_n)=\sum_{\sigma\in S_n}a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}(e_{\sigma(1)}\wedge\cdots\wedge e_{\sigma(n)})$

$\text{ }$

But, $e_{\sigma(1)}\wedge\cdots\wedge e_{\sigma(n)}=\text{sgn}(\sigma)(e_1\wedge\cdots\wedge e_n)$ and so

$\text{ }$

\begin{aligned}\Lambda^n(f)(e_1\wedge\cdots\wedge e_n) &=\sum_{\sigma\in S_n}\text{sgn}(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}(e_1\wedge\cdots\wedge e_n)\\ &=\left(\sum_{\sigma\in S_n}\text{sgn}(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}\right)(e_1\wedge\cdots\wedge e_n)\\ &=\det([f]_\mathcal{B})(e_1\wedge\cdots\wedge e_n)\end{aligned}
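For concreteness, here is the whole computation worked out for $n=2$ (a worked example, not in the original):

```latex
\begin{aligned}
\Lambda^2(f)(e_1\wedge e_2) &= f(e_1)\wedge f(e_2)\\
&= (a_{1,1}e_1+a_{2,1}e_2)\wedge(a_{1,2}e_1+a_{2,2}e_2)\\
&= a_{1,1}a_{2,2}\,(e_1\wedge e_2)+a_{2,1}a_{1,2}\,(e_2\wedge e_1)\\
&= (a_{1,1}a_{2,2}-a_{1,2}a_{2,1})(e_1\wedge e_2)\\
&= \det([f]_\mathcal{B})(e_1\wedge e_2)
\end{aligned}
```

The terms $e_1\wedge e_1$ and $e_2\wedge e_2$ vanish, and the remaining two collect into exactly the familiar $2\times 2$ determinant.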

$\text{ }$

$\text{ }$

References:

[1] Dummit, David Steven., and Richard M. Foote. Abstract Algebra. Hoboken, NJ: Wiley, 2004. Print.

[2] Rotman, Joseph J. Advanced Modern Algebra. Providence, RI: American Mathematical Society, 2010. Print.

[3] Blyth, T. S. Module Theory. Clarendon, 1990. Print.

[5] Grillet, Pierre A. Abstract Algebra. New York: Springer, 2007. Print.