Abstract Nonsense

Crushing one theorem at a time

Tensor Algebra and Exterior Algebra (Pt. VII)


Point of Post: This is a continuation of this post.


Now, since \{e_1\wedge\cdots\wedge e_n\} is a basis of \Lambda^n(M), we may conclude from this that \Lambda^n(f) is nothing but multiplication by \det([f]_\mathcal{B}) on \Lambda^n(M). But, this actually tells us something really cool. Note that if we had selected a different ordered basis \mathcal{B}' for M then we know from the above that \Lambda^n(f) is just multiplication by \det([f]_{\mathcal{B}'}). In particular, we see that


\det([f]_\mathcal{B})(e_1\wedge\cdots\wedge e_n)=\Lambda^n(f)(e_1\wedge\cdots\wedge e_n)=\det([f]_{\mathcal{B}'})(e_1\wedge\cdots\wedge e_n)


Since e_1\wedge\cdots\wedge e_n is a basis element of the free module \Lambda^n(M) (in particular, not a torsion element), this implies that \det([f]_\mathcal{B})=\det([f]_{\mathcal{B}'}). In other words, the determinant is a well-defined invariant of a linear transformation: no matter which ordered basis you pick, the determinant of f with respect to that basis is the same. Thus, we may define \det(f) for f\in\text{End}_R(M) by \det(f)=\det([f]_\mathcal{B}) for any ordered basis \mathcal{B} of M.
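For instance (a minimal worked example in the smallest interesting case), take M=R^2 with ordered basis \mathcal{B}=(e_1,e_2) and let f be given by f(e_1)=ae_1+ce_2 and f(e_2)=be_1+de_2. Then

\displaystyle \Lambda^2(f)(e_1\wedge e_2)=f(e_1)\wedge f(e_2)=(ae_1+ce_2)\wedge(be_1+de_2)=(ad-bc)(e_1\wedge e_2)

so \Lambda^2(f) really is multiplication by \det([f]_\mathcal{B})=ad-bc, as claimed.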


Note that this lends credence to the notion that the determinant is something algebraically natural. Namely, we know that \Lambda^n(M) is free of rank \displaystyle {n\choose n}=1 and thus any R-linear map \Lambda^n(M)\to\Lambda^n(M) is just multiplication by a constant. Thus, we could have defined \det(f) to be the constant by which \Lambda^n(f) acts. In particular, we see that the determinant is not just some formula some guy made up that happens to be useful. If you believe that multilinear maps are natural, then you have to concede that exterior powers are natural, and thus so is the constant by which \Lambda^n(f) is multiplication.


This formulation also allows us to give a quick proof of the multiplicativity of the determinant. Indeed:


Theorem: Let f,g\in\text{End}_R(M). Then, \det(g\circ f)=\det(g)\det(f). In particular, if A,B\in\text{Mat}_n(R) then \det(AB)=\det(A)\det(B).

Proof: For every \omega\in\Lambda^n(M) we know that \Lambda^n(g\circ f)(\omega)=\det(g\circ f)\omega and (\Lambda^n(g)\circ\Lambda^n(f))(\omega)=\Lambda^n(g)(\det(f)\omega)=\det(g)\det(f)\omega. Since \Lambda^n(g\circ f)=\Lambda^n(g)\circ\Lambda^n(f) (functoriality of \Lambda^n), comparing these two expressions gives the first conclusion. The second follows by applying the first to the endomorphisms of R^n defined by A and B. \blacksquare
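As a quick sanity check (just a numerical instance over R=\mathbb{Z}), take

\displaystyle A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad B=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad AB=\begin{pmatrix}2&1\\4&3\end{pmatrix}

for which \det(A)=-2, \det(B)=-1, and indeed \det(AB)=2=\det(A)\det(B).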


Exterior Algebra Revisited


As a final topic, I'd like to discuss the exterior algebra in more detail now that we understand its constituent homogeneous parts better.


We know that the exterior algebra carries, well, an algebra structure. The multiplication is the usual quotient ring multiplication inherited by considering \Lambda(M) as \mathcal{T}(M)/\mathfrak{e}. We see then that, by definition,


\displaystyle \begin{aligned}(v_1\otimes\cdots\otimes v_\ell+\mathfrak{e})(v'_1\otimes\cdots\otimes v'_k+\mathfrak{e}) &=(v_1\otimes\cdots\otimes v_\ell)\otimes(v'_1\otimes\cdots\otimes v'_k)+\mathfrak{e}\\ &= v_1\otimes\cdots\otimes v_\ell\otimes v'_1\otimes\cdots\otimes v'_k+\mathfrak{e}\end{aligned}


Or, in wedge notation, this says that the product of v_1\wedge\cdots\wedge v_\ell and v'_1\wedge\cdots\wedge v'_k is


v_1\wedge\cdots\wedge v_\ell\wedge v'_1\wedge\cdots\wedge v'_k


Thus, it makes sense to denote the inherited multiplication on \Lambda(M) by \wedge.


We note that, by the grading, \wedge restricts to a map \Lambda^\ell(M)\times\Lambda^k(M)\to\Lambda^{k+\ell}(M). Since the multiplication is associative we know that \omega\wedge(\zeta\wedge\eta)=(\omega\wedge\zeta)\wedge \eta, and so we may unambiguously just write \omega\wedge\zeta\wedge \eta.
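For example (just unwinding the definitions, with M free on e_1,\ldots,e_4 say), bilinearity of the multiplication gives

\displaystyle (e_1\wedge e_2+e_2\wedge e_3)\wedge e_4=e_1\wedge e_2\wedge e_4+e_2\wedge e_3\wedge e_4

so the product of an element of \Lambda^2(M) with an element of \Lambda^1(M) does land in \Lambda^3(M), exactly as the grading promises.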


Something which holds true for wedge products but not the product in the tensor algebra is the following:


Theorem: Let \omega\in\Lambda^k(M) and \eta\in\Lambda^\ell(M). Then, \omega\wedge\eta=(-1)^{k\ell}(\eta\wedge \omega).

Proof: By bilinearity of \wedge it suffices to prove this on simple wedges \omega=v_1\wedge\cdots\wedge v_k and \eta=v'_1\wedge\cdots\wedge v'_\ell. But, this is simple: to pass from \omega\wedge\eta to \eta\wedge\omega we must slide each of the \ell vectors v'_j past the k vectors v_1,\cdots,v_k, and each of these k\ell transpositions of adjacent factors contributes a sign of -1. \blacksquare
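To see the sign in action in the simplest nontrivial case, take \omega=v_1\in\Lambda^1(M) and \eta=v_2\wedge v_3\in\Lambda^2(M). Then

\displaystyle \eta\wedge\omega=v_2\wedge v_3\wedge v_1=-v_2\wedge v_1\wedge v_3=v_1\wedge v_2\wedge v_3=(-1)^{1\cdot 2}\,\omega\wedge\eta

exactly as the theorem predicts.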


In fact, this proves (for the case where 2\in R^\times) that:


Theorem: Let k be odd and \omega\in\Lambda^k(M). Then, \omega\wedge\omega=0.


This is true generally, but only has a quick proof when, as I said, 2\in R^\times.
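For completeness, the quick argument in that case uses nothing beyond the previous theorem: since k is odd, so is k^2, and hence

\displaystyle \omega\wedge\omega=(-1)^{k\cdot k}\,\omega\wedge\omega=-\omega\wedge\omega

Thus 2(\omega\wedge\omega)=0, and multiplying by 2^{-1}\in R gives \omega\wedge\omega=0.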





May 10, 2012 - Posted by | Algebra, Module Theory, Ring Theory

4 Comments

  1. Wow, thank you for this heroic effort in 7 installments to explain the exterior product. I know the biggest stumbling block for me when I began studying this stuff was wading through all of the different and inconsistent approaches, not to mention wretched combinatorial proofs of everything – which as your notes make evident is absolutely unnecessary. I especially liked your proofs of the product rule for determinants and the associativity of the wedge product. I just don’t understand why geometry authors avoid using this approach and instead so often opt for a debauch of indices. It takes more effort to understand/assimilate the “coordinate-free” approach which, however, I believe is more than offset by its elegance and power…

    Comment by Chris | May 10, 2012 | Reply

    • Chris, what I am going to discuss is not entirely a coordinate-free approach. For me the basic idea shall be that we will use coordinates when it behooves us and more advanced machinery otherwise. This is made possible, for example, by the canonical identification of \text{Mult}_k(V) (alternating k-linear maps V^k\to F) with \Lambda^k(V^\ast), where V^\ast is the dual space of V. I hope to, at each step (when it's convenient), examine both the coordinate-laden and coordinate-free points of view, for I think this is what shall capture the best understanding.

      Anyways, thank you so much for the kind comments! Be sure to stay tuned.

      Best,
      Alex

      Comment by Alex Youcis | May 11, 2012 | Reply

  2. I’m staying tuned, but fear you may be lost in the thicket of differential forms, manifolds and Stokes’ theorem :=). If there’s any quick route to a precise and broadly applicable statement/proof of Green’s theorem that doesn’t involve that journey, I have not found it, and therein lies a lot of machinery…

    Comment by Chris | June 2, 2012 | Reply

    • Dear Chris,

      Unfortunately, all of this is about to be put on the back burner. As I mentioned in a recent post, I am just about to enter an REU (summer research program) and will be blogging more about that than my standard stuff. I will pick it up in August when the REU is over. As to your question, I wholly believe that there is a proof of Green’s theorem out there that completely avoids the notion of differential forms and is relatively low-level. In fact, something pretty close to what (I assume) you want can be found in William Wade’s analysis book [pg. 488].

      Best,
      Alex

      Comment by Alex Youcis | June 10, 2012 | Reply

