Point of Post: In this post we formally introduce the notion of Riemann surfaces and discuss some important examples.
We now begin what is, in my very humble and uninformed opinion, one of the most beautiful subjects in the entirety of basic graduate mathematics–Riemann surfaces. Such a bold statement raises two immediate questions: what are Riemann surfaces, and why are they so pretty?
The first question is one which has a simple, albeit somewhat esoteric, response–a Riemann surface is merely a one-dimensional connected complex manifold. What is such an object? Well, anyone who is likely to get a lot out of these posts is probably familiar with the concept of a smooth manifold, which is merely a topological space with a well-defined notion of how to ‘do calculus’ on it. From this, it’s not hard to guess what a complex manifold is: a topological space that has a well-defined way of doing complex analysis on it. So, Riemann surfaces are nothing more than (connected!) topological spaces which locally look like open subsets of $\mathbb{C}$, with this local structure piecing together nicely enough to give a global notion of what a holomorphic mapping between two such Riemann surfaces looks like.
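To pin the local condition down, here is a standard way the definition is packaged (the chart notation $(U_\alpha,\varphi_\alpha)$ is mine, but the formulation is the usual one):

```latex
X \text{ a connected Hausdorff space, covered by charts } \varphi_\alpha : U_\alpha \xrightarrow{\ \approx\ } V_\alpha \subseteq \mathbb{C} \text{ open},
\qquad \bigcup_\alpha U_\alpha = X,
```
```latex
\text{with all transition maps } \varphi_\beta \circ \varphi_\alpha^{-1} :
\varphi_\alpha(U_\alpha \cap U_\beta) \longrightarrow \varphi_\beta(U_\alpha \cap U_\beta)
\text{ holomorphic.}
```

A map $f:X\to Y$ between two such spaces is then called holomorphic precisely when it is holomorphic in every pair of charts, which is exactly the "global notion" alluded to above.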
Now that we have a very rough idea of what a Riemann surface should be, we can at least try to explain why the theory of Riemann surfaces is so beautiful. Everyone who has taken complex analysis at an advanced undergraduate/graduate level is aware of the fact that complex analysis is much more intimately (or at least more immediately!) related to algebra and topology than real analysis is. For example, for a domain $\Omega\subseteq\mathbb{C}$ one has that the simple connectedness of $\Omega$ is equivalent to every harmonic function $u:\Omega\to\mathbb{R}$ admitting a holomorphic function $f$ such that $\operatorname{Re} f=u$, which is equivalent to every holomorphic function on $\Omega$ admitting a primitive (these allow us to define cohomology via complex analysis!).
This pervasive feeling of deep algebraic and geometric connections will continue when we discuss Riemann surfaces. We shall prove some truly deep, and truly beautiful, theorems in this vein. For example, we shall prove that, in a very precise sense, working with compact Riemann surfaces is the same thing as working with projective plane curves–in particular, we shall see that every algebraic function field (algebraic extension of $\mathbb{C}(z)$) is just the meromorphic functions on some compact Riemann surface. While I could go on and on about how interesting and amazing this subject is, I think that it would be better that I attempt to inject my paltry insight as we go along, and let you see for yourself why this subject makes me so excited.
Point of Post: In this post we discuss the notion of adjoint functors, giving the two definitions via both Hom set adjunction and counit-unit adjunction.
As is standard when talking about adjoint functors, we begin with a quote by the late Saunders Mac Lane: “The slogan is ‘Adjoint functors arise everywhere'”. Mac Lane was surely not lying because (as I hope is clear by the end of this post) some of the functors we are most well-acquainted with are left or right adjoints. What exactly are adjoint functors? While there are tons-and-tons of motivations for what these ubiquitous little demons are, there is one that stands forefront in my mind. The idea is that adjoint functors are kind of like generalized inverses–in the sense that while they are not actually invertible, they share many of the same functional properties of invertible functors. Namely, let’s assume that we have a functor $F:\mathcal{C}\to\mathcal{D}$. It is entirely unreasonable to assume that this is a literal isomorphism (i.e. that it has a two-sided functor inverse). It is slightly less unreasonable to hope that $F$ is going to be an equivalence, which means that $FG\cong\mathrm{id}_{\mathcal{D}}$ and $GF\cong\mathrm{id}_{\mathcal{C}}$ for some functor $G:\mathcal{D}\to\mathcal{C}$. That said, having an equivalence of categories is a BIG deal, and thus we shouldn’t expect the average functor on the street to be an equivalence. Moreover, in both of the cases of a functor being an isomorphism (having a literal inverse) and being an equivalence, the focus is really more on the categories. If I said to you that $F:\mathcal{C}\to\mathcal{D}$ is an isomorphism or an equivalence, probably the most important thing that jumps to mind is “$\mathcal{C}$ and $\mathcal{D}$ are isomorphic” or “$\mathcal{C}$ and $\mathcal{D}$ are equivalent”. Thus, if we are looking for generalizations of the functional properties of invertible or near-invertible functors, perhaps it behooves us to shy away from looking directly at the categories and instead look at how we want our functors to act on “elements” of the categories. Namely, let’s sit here for a second and try to think of a very desirable property that an invertible functor has.
Well, recalling our mantra that we should only care about the morphisms in a category, it seems then that a step in the discovery of this desirable property is to figure out what invertible or near-invertible functors do to morphisms. Well, let’s just mess around with the idea and see what comes up. What we know is that there is a natural isomorphism $\eta_A:A\to G(F(A))$ for each object $A$ of $\mathcal{C}$. Ok, so, morphisms, morphisms. Hmm, well what does this tell us about morphisms between $F(A)$ and other objects of $\mathcal{D}$? Ok, so we want to see what we can say about $\mathrm{Hom}_{\mathcal{D}}(F(A),B)$ for an object $B$ of $\mathcal{D}$. Hmm, well nothing immediately jumps out. But there is something we can do which is pretty nice. Since our $F$ is an equivalence, it is easy to prove that $B$ must be isomorphic to some object in the image of $F$; indeed, $B\cong F(G(B))$. Thus, we want to figure out what our equivalence enables us to say about $\mathrm{Hom}_{\mathcal{D}}(F(A),B)$. Well, since $B\cong F(G(B))$ there is no harm in replacing $B$ with $F(G(B))$, so that we are really trying to figure out what we can say about $\mathrm{Hom}_{\mathcal{D}}(F(A),F(G(B)))$. But the cool thing is that since $F$ is an equivalence we have that $F$ induces bijections on Hom sets, and thus $\mathrm{Hom}_{\mathcal{D}}(F(A),B)\cong\mathrm{Hom}_{\mathcal{D}}(F(A),F(G(B)))\cong\mathrm{Hom}_{\mathcal{C}}(A,G(B))$. Moreover, if one follows the details of the construction above, one can prove that this isomorphism is natural in both $A$ and $B$.
While this is clearly a very nice property for two functors to have, it is not at all clear that it’s the correct generalization of invertible functors. Hopefully though, the ubiquity of functors satisfying this property (as illustrated below) will convince the reader.
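To make the hom-set bijection concrete in a setting anyone can run, here is a small Python sketch (the names `curry` and `uncurry` are my own) of the classic adjunction in the category of sets between the product functor $-\times A$ and the exponential functor $\mathrm{Hom}(A,-)$: maps $X\times A\to B$ correspond bijectively to maps $X\to B^A$.

```python
def curry(f):
    """Hom(X x A, B) -> Hom(X, Hom(A, B)): fix x, then wait for a."""
    return lambda x: lambda a: f((x, a))

def uncurry(g):
    """Hom(X, Hom(A, B)) -> Hom(X x A, B): the inverse direction."""
    return lambda pair: g(pair[0])(pair[1])

# A morphism X x A -> B, with X = A = B = the integers.
f = lambda pair: pair[0] + 2 * pair[1]

g = curry(f)      # the "transpose" X -> B^A of f
h = uncurry(g)    # round-trip: recovers a map X x A -> B

print(g(3)(4))    # 3 + 2*4 = 11
print(h((3, 4)))  # the same morphism recovered: 11
```

The bijectivity of this correspondence (and its naturality in $X$ and $B$) is exactly the pattern the hom-set isomorphism above abstracts.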
Point of Post: In this post we discuss the notion of functors between categories, and give examples of such functors.
We are now going to discuss probably one of the most fundamental and influential ideas in the entirety of category theory: functors. In fact, functors were the reason Mac Lane et al. introduced the notion of categories–categories were merely the necessary background to describe functors. So, what are these magical objects, these functors? Intuitively, functors allow us to make rigorous statements such as “we solve this by turning a problem in topology into a problem in group theory”. Functors allow us to naturally carry ideas from one category (for all intents and purposes, subject of study) to another. They allow us to make mathematically precise the interconnections and interplay between the various and (artificially) disparate branches of mathematics. Most of the game-changing mathematics in the last fifty years is, in some way or another, traceable back to a functor. Of course, this is admittedly hyperbole, but it’s impossible to stress this point enough. Functors are, in a crude sense, the “morphisms” in the “category of categories”. For those in the know, it is well-known that this approach has difficulties, but it roughly gives the idea that functors are the structure-preserving “maps” between categories. While I could go on, and on, and on about how important functors are, I believe that the best way to make clear how important and pervasive functors are is to see some examples. Thus, let’s get on with it.
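As a warm-up before the examples, here is a toy "functor" sketched in Python (all names mine): the list construction sends a set $X$ to lists over $X$ and a function $f$ to "map $f$ over the list", and this assignment preserves identities and composition, which are exactly the defining properties of a functor.

```python
def list_functor(f):
    """On morphisms: send f : X -> Y to List(f) : List(X) -> List(Y)."""
    return lambda xs: [f(x) for x in xs]

double = lambda n: 2 * n
succ = lambda n: n + 1
compose = lambda g, f: (lambda x: g(f(x)))

xs = [1, 2, 3]

# Functoriality: List(g . f) = List(g) . List(f)
lhs = list_functor(compose(succ, double))(xs)
rhs = list_functor(succ)(list_functor(double)(xs))
print(lhs, rhs)  # [3, 5, 7] [3, 5, 7]

# Identities are preserved: List(id) = id
identity = lambda x: x
print(list_functor(identity)(xs))  # [1, 2, 3]
```

Nothing here is deep, but it is the same shape as the serious examples below: an assignment on objects, an assignment on morphisms, and two preservation laws.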
Point of Post: In this post we make note of a simple fact about what the direct limit of a directed system over a directed set looks like.
Most of the examples we have discussed up until this point concerning direct limits have involved directed systems not over the bare minimum preordered set–no, most of the time they are over directed sets. So what? Why does being over a directed set make anything better? Well, that is the content of this post. Namely, we shall see that if we take the direct limit of a directed system over a directed set (as opposed to a general preordered set), we know that all the elements “look nice” (in a sense soon to be made clear). Of course, this raises the question as to why, considering this new information, we wouldn’t just from the get-go restrict our attention to directed systems over directed sets. Well, put bluntly, we’d miss out on some good stuff. In other words, allowing ourselves to consider directed systems over general preordered sets (and their subsequent direct limits) allows us to bring under the umbrella of “direct limit” some ‘degenerate’ cases which are very useful. First and foremost in my mind is the coproduct of a set of modules $\{M_\alpha\}_{\alpha\in A}$, which is obtained by defining the trivial preorder on $A$. Thus, we don’t want to throw out the possibility of discussing direct limits of directed systems over preordered sets, but we’d at least like (since, as we said, ‘most times’ the preordered set is really a directed set) to know how to reap the benefits of the additional properties of directed sets when they arise.
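For orientation, the "nice form" in question can be stated as follows (writing $f_{\alpha\beta}:M_\alpha\to M_\beta$ for the transition maps and $\mu_\alpha:M_\alpha\to\varinjlim M_\alpha$ for the canonical maps; the notation is mine):

```latex
\varinjlim M_\alpha \;=\; \bigl\{\, \mu_\alpha(x_\alpha) \;:\; \alpha \in A,\ x_\alpha \in M_\alpha \,\bigr\},
```
```latex
\mu_\alpha(x_\alpha) = \mu_\beta(x_\beta)
\;\iff\;
f_{\alpha\gamma}(x_\alpha) = f_{\beta\gamma}(x_\beta) \ \text{ for some } \gamma \geq \alpha,\beta.
```

In words: over a directed set every element of the limit comes from a single coordinate, and two such elements agree exactly when they become equal somewhere further along the system, with directedness guaranteeing such a common $\gamma$ exists.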
Point of Post: In this post we discuss the notion of the direct limit of modules, giving some particular examples of direct limits.
In this post we shall discuss one of the most useful constructions in the entirety of module theory–direct limits. Intuitively, direct limits allow us to define a gluing process which makes rigorous sense of statements realizing a large module as the ‘limit’ of an increasing family of smaller ones. To be more specific, we wish to look at the case when we have a “chain” (really, we shall be discussing a more general notion, but “chain” gives the right idea) of modules $M_1,M_2,M_3,\ldots$ and a chain of morphisms

$M_1\xrightarrow{f_1}M_2\xrightarrow{f_2}M_3\xrightarrow{f_3}\cdots$

and we wish to glue the chain upwards in a way that respects the morphisms–in other words we’d like to, at least intuitively, glue them to get some object $M$ for which there are always maps $\mu_i:M_i\to M$ which respect the mappings $f_i$ in the sense that $\mu_{i+1}\circ f_i$ should be equal to $\mu_i$. Thus, in a sense we are really taking a limit of the $M_i$, allowing us to often realize certain objects as the limit of certain finitary objects. We shall end up seeing that this limit is a fairly faithful representation of the individual $M_i$, in the sense that coproducts are a good representation of the factor modules–this shouldn’t be surprising since coproducts are themselves direct limits. All in all, the intuitive idea of direct limits is that they are a two-step process consisting of gluing a set of modules together and then identifying the elements of the gluing which are “eventually equal” (in the sense of the limit).
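Writing $M_1\xrightarrow{f_1}M_2\xrightarrow{f_2}\cdots$ for the chain, $\mu_i:M_i\to\varinjlim M_i$ for the canonical maps, and $\iota_i:M_i\to\bigoplus_j M_j$ for the inclusions (notation mine), the two-step gluing process just described is usually constructed explicitly as:

```latex
\varinjlim M_i \;=\; \Bigl(\bigoplus_i M_i\Bigr) \Big/ \bigl\langle\, \iota_{i+1}(f_i(x)) - \iota_i(x) \;:\; i \in \mathbb{N},\ x \in M_i \,\bigr\rangle,
```
```latex
\text{with } \mu_i := (\text{quotient map}) \circ \iota_i, \qquad \text{so that } \mu_{i+1} \circ f_i = \mu_i \ \text{ for all } i.
```

The direct sum is the "gluing" step, and the quotient is precisely the identification of elements that become equal further along the chain.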
Point of Post: In this post we more closely examine some examples of categories.
We are going to be starting to talk a fair amount about categories, and so I thought that it would be helpful to lay out some examples, not only to give us intuition about categories, but to set down some of the notation and recurring characters. We have already defined categories (somewhat unsatisfactorily, but it will have to do) and motivated them.
Point of Post: The point of this post is to give the axiomatic definition of a category, and explain some of the nuances I’ve picked up. Also, I will give quite a few examples and show that they are indeed categories.
It comes time now to actually define what a category is. So, formally, a category $\mathcal{C}$ is a quadruple $(\mathrm{Ob}(\mathcal{C}),\mathrm{Hom},\mathrm{id},\circ)$ consisting of
a) A class $\mathrm{Ob}(\mathcal{C})$, whose members are called $\mathcal{C}$-objects.
b) A set $\mathrm{Hom}(A,B)$ for each pair of $\mathcal{C}$-objects $A$ and $B$, whose members are referred to as $\mathcal{C}$-morphisms.
Remark: It is common, instead of writing $f\in\mathrm{Hom}(A,B)$, to write “$f:A\to B$ is a morphism”.
c) For each $\mathcal{C}$-object $A$, a morphism $\mathrm{id}_A\in\mathrm{Hom}(A,A)$, called the identity on $A$.
d) A composition law which associates to any morphism $f:A\to B$ and any morphism $g:B\to C$ a morphism $g\circ f:A\to C$, called the composite of $f$ and $g$.
Remark: To me, it seems that we could rephrase this as follows: for each triple of $\mathcal{C}$-objects $A$, $B$, and $C$ there exists a function $\circ:\mathrm{Hom}(B,C)\times\mathrm{Hom}(A,B)\to\mathrm{Hom}(A,C)$.
These data are subject to the following axioms:
1) The composition is associative; namely, for morphisms $f:A\to B$, $g:B\to C$, and $h:C\to D$, the equation $h\circ(g\circ f)=(h\circ g)\circ f$ holds true.
2) If $f:A\to B$ is a morphism, then $\mathrm{id}_B\circ f=f$ and $f\circ\mathrm{id}_A=f$.
3) The sets $\mathrm{Hom}(A,B)$ are pairwise disjoint.
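The axioms above can be checked mechanically for a small finite category. Below is a Python sketch (all names mine) of the category with two objects $A$, $B$ and a single non-identity morphism $f:A\to B$, verifying associativity and the identity laws by brute force over all composable pairs and triples.

```python
# A tiny category: objects A, B; morphisms id_A, id_B, and f : A -> B.
# Each morphism is a triple (name, source, target).
id_A = ("id_A", "A", "A")
id_B = ("id_B", "B", "B")
f = ("f", "A", "B")
morphisms = [id_A, id_B, f]
identity = {"A": id_A, "B": id_B}

# The composition law: (g, h) -> g o h, defined only when src(g) == tgt(h).
comp_table = {
    (id_A, id_A): id_A, (id_B, id_B): id_B,
    (f, id_A): f, (id_B, f): f,
}

def compose(g, h):
    assert h[2] == g[1], "composable only when tgt(h) == src(g)"
    return comp_table[(g, h)]

# Axiom 1: associativity, over every composable triple a, b, c.
for a in morphisms:
    for b in morphisms:
        for c in morphisms:
            if a[2] == b[1] and b[2] == c[1]:
                assert compose(c, compose(b, a)) == compose(compose(c, b), a)

# Axiom 2: identity laws for every morphism.
for m in morphisms:
    assert compose(m, identity[m[1]]) == m  # m o id_src = m
    assert compose(identity[m[2]], m) == m  # id_tgt o m = m

print("axioms verified")
```

Axiom 3 is built into the representation: each morphism carries its source and target, so hom sets are disjoint by construction.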
Point of post: This post is to give the motivation behind category theory: why someone would want to study it, a glib (and this is on purpose) account of what category theory is, and a perfunctory look at the old set theory assumed and the new material needed to study category theory.
Disclaimer: All that is said below is the viewpoint of a non-categorist, and is easily (I repeat, easily) open for disagreement. This post is just the result of my own personal introspection, and does not claim to capture (fully or partially) any semblance of what category theory is. If you have any comments, please, for my sake, let me know in a comment!
My topology course this term has necessitated the learning of category theory. So, I will post little semi-lessons as I myself read about the subject. These aren’t supposed to be comprehensive, just my ways of putting things in my own words.
Why do Category Theory?
These are the questions all new mathematics faces: “What’s the point? Where can I use it? Is it even relevant?” Honestly, all legitimate questions, and all especially applicable to something as general, as mind-bogglingly abstruse, as category theory. So, what is the point?