Using Partial Exactness to Compute Things (Pt. II)
Point of Post: This is a continuation of this post.
This gives a wide array of corollaries, some of them being:
Corollary: Let $R$ be a PID and let $\mathfrak{a}=(a)$ and $\mathfrak{b}=(b)$ be two ideals of $R$, then

$$R/\mathfrak{a}\otimes_R R/\mathfrak{b}\cong R/(\mathfrak{a}+\mathfrak{b})=R/(\gcd(a,b))$$
So, for example, we have that $\mathbb{Z}/m\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/n\mathbb{Z}\cong\mathbb{Z}/\gcd(m,n)\mathbb{Z}$ and $F[x]/(f(x))\otimes_{F[x]}F[x]/(g(x))\cong F[x]/(\gcd(f(x),g(x)))$ for a field $F$.
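As a quick sanity check of the corollary over $\mathbb{Z}$, the following Python sketch (the function name is my own) just records that the order of $\mathbb{Z}/m\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/n\mathbb{Z}$ is $\gcd(m,n)$:

```python
from math import gcd

def cyclic_tensor(m: int, n: int) -> int:
    # Order of Z/mZ (x)_Z Z/nZ; by the corollary this is gcd(m, n),
    # since (m) + (n) = (gcd(m, n)) in the PID Z.
    return gcd(m, n)
```

For instance `cyclic_tensor(4, 6)` returns `2`, matching $\mathbb{Z}/4\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/2\mathbb{Z}$.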
While the above may seem slightly laborious, consider that if we’re smart we can often get away with computing a lot less. For example, suppose we wanted to compute $A\otimes_{\mathbb{Z}}B$ where $A$ and $B$ are finite abelian groups. If $A\cong\mathbb{Z}/p_1^{a_1}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_m^{a_m}\mathbb{Z}$ and $B\cong\mathbb{Z}/q_1^{b_1}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/q_n^{b_n}\mathbb{Z}$ then, using the commuting of tensor and direct products, we have that

$$A\otimes_{\mathbb{Z}}B\cong\bigoplus_{i,j}\left(\mathbb{Z}/p_i^{a_i}\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/q_j^{b_j}\mathbb{Z}\right)$$
So, we actually really only have to care about tensors of the form $\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/q^b\mathbb{Z}$ where $p,q$ are primes. In fact, we only have to care about the case when $p=q$. Indeed, suppose that $p\neq q$, then since $\gcd(p^a,q^b)=1$ we know that $q^b$ is invertible in $\mathbb{Z}/p^a\mathbb{Z}$ and so for any $x\in\mathbb{Z}/p^a\mathbb{Z}$ we have that we can find $y\in\mathbb{Z}/p^a\mathbb{Z}$ such that $q^b y=x$; thus we see that for $z\in\mathbb{Z}/q^b\mathbb{Z}$

$$x\otimes z=q^b y\otimes z=y\otimes q^b z=y\otimes 0=0$$
so that all simple tensors, and thus everything, in $\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/q^b\mathbb{Z}$ is zero. So, to figure out the general case we only actually have to care about $\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}$ with $b\leq a$, and now we can use partial exactness. Indeed, we have the following exact sequence

$$\mathbb{Z}\xrightarrow{\;\times p^a\;}\mathbb{Z}\longrightarrow\mathbb{Z}/p^a\mathbb{Z}\longrightarrow 0$$
where the first map is “multiply by $p^a$“. Using right exactness we may then conclude that

$$\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}\xrightarrow{\;\times p^a\otimes\mathrm{id}\;}\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}\longrightarrow\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}\longrightarrow 0$$

is exact.
We know then that $\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}$ is the cokernel of this first map, and so it suffices to find the image of this first map. But, note that for any simple tensor $n\otimes x$ in the first module we have that the image under $\times p^a\otimes\mathrm{id}$ is $p^a n\otimes x=n\otimes p^a x=n\otimes 0=0$ (since $b\leq a$ gives $p^a x=0$ in $\mathbb{Z}/p^b\mathbb{Z}$). Thus, we see that $\times p^a\otimes\mathrm{id}$ has trivial image and so the cokernel is isomorphic to $\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}$ which is isomorphic to $\mathbb{Z}/p^b\mathbb{Z}$. That was easy! Thus, we see that

$$\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}\cong\mathbb{Z}/p^{\min(a,b)}\mathbb{Z}$$
Ok, ok, I can see that this might have seemed like a lot of work, but if you really analyze it you see that everything up until the computation of $\mathbb{Z}/p^a\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/p^b\mathbb{Z}$ was conceptually easy (you could have guessed the results), and when this computation finally came the actual computation was easy (the image of the pertinent map was trivial). So overall, not that bad.
Free Presentations and their Use
This last computation segues nicely into the general technique I’d like to discuss. In particular, it gives a good example of a free presentation. What is a free presentation? Formally, a free presentation for a module $M$ is an exact sequence of left $R$-modules and $R$-maps of the form

$$F_1\longrightarrow F_0\longrightarrow M\longrightarrow 0$$

where $F_0$ and $F_1$ are free modules. Fine, but what does a free presentation really mean? Let’s look at a simple way to construct a free presentation and see if it’ll help our understanding. Suppose that we have that $M$ is generated by some subset $\{x_i\}_{i\in\mathcal{I}}$. Note then that if $R^{(\mathcal{I})}$ denotes a free module on $\mathcal{I}$ the inclusion gives way to an $R$-map $\varepsilon:R^{(\mathcal{I})}\to M$ which reduces to the identity map on the generators (e.g. $\varepsilon(e_i)=x_i$ for each standard basis vector $e_i$). Let $K$ denote the kernel of the map $\varepsilon$ and note that if $\{k_j\}_{j\in\mathcal{J}}$ denotes a set of generators for $K$ then we can (using the same idea) see that $R^{(\mathcal{J})}$ surjects onto $K$ by some map $\eta$. Thus, we see that

$$R^{(\mathcal{J})}\xrightarrow{\;\eta\;}R^{(\mathcal{I})}\xrightarrow{\;\varepsilon\;}M\longrightarrow 0$$

is a free presentation. So, what do these $R^{(\mathcal{I})}$ and $K$ represent? Well, what $R^{(\mathcal{I})}$ represents is clear enough, it’s (a free module on) a set of generators, but what about $K$? Intuitively an element $(r_i)_{i\in\mathcal{I}}$ of $R^{(\mathcal{I})}$ is in $K$ if and only if when we take the formal sum $\sum_i r_i e_i$ and pass it into an actual element $\sum_i r_i x_i$ of $M$ we get zero. In other words, an element of $K$ denotes a “relation” between the generators of $M$ and so $\{k_j\}_{j\in\mathcal{J}}$ (a generating set for $K$) roughly denotes some economical (hopefully) choice of representatives of these “relations”.
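To make this concrete, here is an example of my own choosing: the abelian group $M=\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z}$ is generated by $x_1,x_2$ subject to the relations $4x_1=0$ and $6x_2=0$, which packages into the free presentation

```latex
\mathbb{Z}^2
  \xrightarrow{\begin{pmatrix}4 & 0\\ 0 & 6\end{pmatrix}}
\mathbb{Z}^2
  \xrightarrow{\;\varepsilon\;}
\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z}
  \longrightarrow 0
```

where $\varepsilon(e_i)=x_i$ and the kernel of $\varepsilon$ is generated by $4e_1$ and $6e_2$, the columns of the matrix.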
I don’t think that could have been any more thinly-veiled. Yes, a free presentation is basically the same thing as giving the module in terms of generators and relations. So, how does giving a module in terms of generators and relations lend itself to the computation of tensor products? Well, the basic idea is that, in general, giving a module in terms of a cokernel enables us to give that module’s tensor with something else in terms of a cokernel, and in the case of the cokernel coming from a free presentation this cokernel is likely to be “nicer looking”. Ok, saying that again without using the word “cokernel” ten times: if we have a free presentation for $M$, say

$$F_1\xrightarrow{\;f\;}F_0\xrightarrow{\;\varepsilon\;}M\longrightarrow 0$$

then we know that $M$ is equal to $\operatorname{coker}(f)=F_0/\operatorname{im}(f)$. Tensoring with some module $N$ then gives us that $M\otimes_R N$ is isomorphic to $\operatorname{coker}(f\otimes\mathrm{id}_N)$. But, note that since $F_1\cong R^{(\mathcal{J})}$ for some $\mathcal{J}$ and $F_0\cong R^{(\mathcal{I})}$ for some $\mathcal{I}$ we know from previous discussion that $F_1\otimes_R N\cong N^{(\mathcal{J})}$ and $F_0\otimes_R N\cong N^{(\mathcal{I})}$, with $(r_k)\otimes n\mapsto(r_k n)$ in both cases. We see then that we have the following diagram

$$\begin{array}{ccc} F_1\otimes_R N & \xrightarrow{\;f\otimes\mathrm{id}_N\;} & F_0\otimes_R N\\ \downarrow & & \downarrow\\ N^{(\mathcal{J})} & \xrightarrow{\;g\;} & N^{(\mathcal{I})}\end{array}$$
where the vertical arrows are the obvious isomorphisms and $g$ is such that the diagram commutes. We see then that $M\otimes_R N$ is isomorphic to $\operatorname{coker}(g)=N^{(\mathcal{I})}/\operatorname{im}(g)$, which is a fairly tame beast. Namely, we see that $g$ basically acts by

$$g(ne_j)=f(e_j)n$$

where $e_j=(\delta_{j,k})_{k\in\mathcal{J}}$ and $\delta$ is the usual Kronecker delta function ($1$ at $j=k$ and zero elsewhere), and by an unusually bad abuse of notation by $f(e_j)n$ I mean “write $f(e_j)$ in its form like $(r_i)_{i\in\mathcal{I}}$ and then multiply each term by $n$ to get $(r_i n)_{i\in\mathcal{I}}$“. This idea of computing things by generators and relations will be of great use to us in the future.
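For finitely generated abelian groups this cokernel description can even be checked by brute force. The following Python sketch (names and setup are my own) takes a presentation matrix for $M$ over $\mathbb{Z}$, tensors with $N=\mathbb{Z}/n\mathbb{Z}$, and computes the order of the resulting cokernel by enumerating the image of $g$:

```python
from itertools import product

def tensor_order(rel, n):
    """Order of M (x)_Z Z/nZ, where M = Z^r / (column span of the integer
    matrix `rel`): M has r generators and the columns of `rel` as relations.

    Tensoring the free presentation Z^s -> Z^r -> M -> 0 with Z/nZ yields
    (Z/nZ)^s -> (Z/nZ)^r -> M (x) Z/nZ -> 0 with the *same* matrix, so the
    tensor product is the cokernel of that matrix acting mod n.
    """
    r, s = len(rel), len(rel[0])
    # Enumerate the image subgroup of (Z/nZ)^r under the matrix map g.
    image = {tuple(sum(rel[i][j] * v[j] for j in range(s)) % n
                   for i in range(r))
             for v in product(range(n), repeat=s)}
    return n ** r // len(image)   # |cokernel| = |(Z/nZ)^r| / |image|
```

For example, `tensor_order([[4]], 6)` returns `2`, recovering $\mathbb{Z}/4\mathbb{Z}\otimes_{\mathbb{Z}}\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/2\mathbb{Z}$ from the presentation $\mathbb{Z}\xrightarrow{\times 4}\mathbb{Z}\to\mathbb{Z}/4\mathbb{Z}\to 0$.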