Point of Post: This is a continuation of this post.
Point of Post: In this post we define the exponential and trigonometric functions and note that they are holomorphic.
Last time we proved that every function on an open subset of $\mathbb{C}$ that is locally representable by power series is necessarily holomorphic (in fact, infinitely differentiable!). That said, we didn't actually give any honest-to-god examples of such functions. Thus, in this post we will finally lay down our first few non-trivial examples of holomorphic functions. They will arise as what are, in a very precise sense we will make clear later on, the only extensions of some of our favorite real-valued functions: the exponential and trigonometric functions.
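Since the complex exponential will be defined by a power series, a quick numerical sanity check is possible. Below is a minimal sketch (not from the post itself) comparing partial sums of $\sum_{n\ge 0} z^n/n!$ against Python's `cmath.exp`:

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the power series sum_{n>=0} z^n / n!."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term z^(n+1)/(n+1)!
    return total

z = 1 + 2j
print(exp_series(z))   # agrees with cmath.exp(z) to many digits
print(cmath.exp(z))
```

Forty terms is far more than enough here since the factorial in the denominator eventually crushes any fixed power of $z$; this is the same mechanism that makes the series converge on all of $\mathbb{C}$.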
Point of Post: In this post we discuss the basic ideas behind when a complex power series converges and discuss the holomorphicity of functions representable by power series.
We have now discussed the notion of holomorphic functions but, besides some extremely trivial ones (like polynomials), we don't have any good class of examples of such functions. In this post we describe what is the most informative example of holomorphic functions: power series. Why are power series the most informative example? Well, we shall eventually see that the holomorphic functions are precisely those that are locally just power series.
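To get a concrete feel for the convergence of a complex power series, consider the geometric series $\sum_{n\ge 0} z^n$, which converges exactly on the open unit disk $|z|<1$ (to $1/(1-z)$) and diverges for $|z|>1$. A minimal numerical sketch, not from the post itself:

```python
def geom_partial(z, N):
    """Partial sum sum_{n=0}^{N-1} z^n of the geometric series."""
    total, power = 0j, 1 + 0j
    for _ in range(N):
        total += power
        power *= z
    return total

inside = 0.5j    # |z| < 1: partial sums approach 1 / (1 - z)
print(abs(geom_partial(inside, 60) - 1 / (1 - inside)))  # tiny

outside = 2j     # |z| > 1: partial sums blow up
print(abs(geom_partial(outside, 60)))                    # astronomically large
```

Note that convergence depends only on $|z|$, which is why the region of convergence of a complex power series is always a disk (plus possibly part of its boundary).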
I assume anyone reading this is fairly well-acquainted with power series and the related notions (the Weierstrass M-test, etc.), for I will gloss over some of the details.
Point of Post: In this post we define what it means for a function to be holomorphic on a domain in $\mathbb{C}$.
We are going to start discussing complex analysis in preparation for later discussion of Riemann surfaces. We start this discussion, naturally, with the notion of differentiability for functions mapping $\mathbb{C}\to\mathbb{C}$. There is a standard amount of amazement associated to functions which are differentiable in the complex sense since, as we shall see (and, as I'm sure you well know), they are MUCH nicer than any kind of real differentiable function. In particular, we shall see that any once-differentiable function is infinitely differentiable and, moreover, locally expressible as a power series. Think about how different this is from standard real differentiable functions, say even just $\mathbb{R}\to\mathbb{R}$, where we can find functions that have any number of derivatives we desire, yet whose last derivative is not even continuous, let alone differentiable. We can even find functions that are infinitely differentiable yet whose Taylor series at a point doesn't converge to the function!
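The classical real-variable pathology mentioned at the end, a smooth function whose Taylor series at a point fails to converge to it, is $f(x)=e^{-1/x^2}$ (with $f(0)=0$): every derivative of $f$ at $0$ vanishes, so its Taylor series at $0$ is identically zero, yet $f$ is not the zero function. A minimal numerical sketch (not from the post itself):

```python
import math

def f(x):
    # Infinitely differentiable on all of R; every derivative at 0 equals 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Forward-difference estimates of f'(0) shrink to 0 extremely fast,
# consistent with the Taylor series of f at 0 being identically zero...
for h in (0.5, 0.25, 0.1):
    print(h, f(h) / h)

# ...yet f is not the zero function:
print(f(1.0))  # exp(-1), roughly 0.3679
```

Nothing like this can happen for a holomorphic function: complex differentiability will force the Taylor series to converge to the function on a disk.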
Point of Post: In this post we discuss an important consequence of the inverse function theorem which relates local diffeomorphisms to global diffeomorphisms.
Point of Post: In this post we discuss the notion of smooth curves in $\mathbb{R}^n$ and the implicit function theorem.
To begin our discussion of geometry it seems prudent to discuss perhaps the simplest of all smooth geometric objects: curves. Everyone has an intuitive notion of a curve (at least in two or three space). Namely, a curve can be thought of as a length of string that is twisted this way and that, in a smooth manner. But, of course, in mathematics one must always back up intuitive notions with concrete, solid definitions. That said, in our case one quickly realizes that there is not one immediate definition of curve. Indeed, there are two canonical ways of defining a curve which, from our point of view, are ordered in terms of 'importance' (i.e. we prefer one notion over the other). To see the difference between these two notions consider probably the simplest (closed) curve one could imagine in $\mathbb{R}^2$: the unit circle $S^1$. Ask any kid off the street how one defines the unit circle and you are most likely to get the immediate answer "Oh! It's just the set of points $(x,y)$ such that $x^2+y^2=1$" (or, perhaps, the set of all $(\cos\theta,\sin\theta)$ with $\theta\in[0,2\pi)$). Or, the parabola is another perfectly good curve, which could be described as the set of $(x,y)$ such that $y=x^2$. Thus, one should start to wonder if perhaps the correct notion of a curve is the 'locus' of a single function, or of multiple functions, in Euclidean space. To be more concrete, for functions $f_1,\ldots,f_m:\mathbb{R}^n\to\mathbb{R}$ define $V(f_1,\ldots,f_m)$ to be the set $\{p\in\mathbb{R}^n: f_1(p)=\cdots=f_m(p)=0\}$ (so that $S^1=V(x^2+y^2-1)$). Perhaps then a good definition of a 'curve' is a set of the form $V(f_1,\ldots,f_m)$ for some sufficiently well-behaved functions $f_1,\ldots,f_m$. That said, there is another notion of curve which is equally as natural as the definition as the locus of a set of 'nice' functions. Namely, a curve can be thought of as a 'path', or the trace of a moving particle, or, more importantly, the function defining the path. To be precise, a curve could also be defined as a sufficiently nice mapping $\gamma:I\to\mathbb{R}^n$ for some (possibly infinite) non-empty interval $I\subseteq\mathbb{R}$. There is a large connotational difference between curves thought of as the locus of a set of functions and as a 'path'.
In particular, a 'path' has notions of how quickly one traverses the path, whether one turns around, etc., whereas the locus of a set of functions is just a set. That said, there seems to be a pretty obvious 'connection', namely passing from a 'path' to its image. Namely, it seems intuitively obvious that the locus of a set of functions and the image of some 'path' are at least the same kind of 'object' (i.e. just sets). That said, a little thought shows that they are definitively not in one-to-one correspondence. For example, consider the hyperbola $V(x^2-y^2-1)$. This is a perfectly nice 'curve'; that said, there evidently does not exist a sufficiently nice (e.g. continuous) 'path' $\gamma$ with image equal to the hyperbola, since the hyperbola is not connected while the continuous image of an interval necessarily is. That said, there is hope to find a 'path' that has image equal to part of the hyperbola. In particular, if one restricts to points with positive $x$-coordinates, then the path $\gamma(t)=(\cosh t,\sinh t)$ is a perfectly ($C^\infty$) nice 'path' with image equal to the aforementioned branch of the hyperbola. Thus, one wonders if perhaps there is some condition on a curve (or a point of a curve) that guarantees that the curve is locally equivalent to the image of a 'path'. In fact, there is a theorem to this effect, but it is perhaps a more sophisticated answer than one would expect, being in particular a stronger version of the inverse function theorem. Roughly, the theorem states that if one has a level curve, and if the 'derivative' of the defining functions is non-zero at some point, then in some neighborhood of that point the level curve is the graph of a function! Not to point out the obvious, but math is full of simple questions with startlingly complicated answers; this is perhaps one of the most profound of these examples, providing an integral link between the algebra of functions and their geometry.
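The hyperbola discussion can be checked numerically. Below is a minimal sketch (not from the post itself), assuming the hyperbola in question is $x^2-y^2=1$ and using the standard hyperbolic-function parametrization of its right branch:

```python
import math

def branch(t):
    """Parametrizes the x > 0 branch of x^2 - y^2 = 1."""
    return (math.cosh(t), math.sinh(t))

x, y = branch(1.7)
print(x * x - y * y)  # approximately 1, confirming the point lies on the curve
print(x > 0)          # always True: cosh(t) >= 1, so only one branch is covered
```

The identity $\cosh^2 t-\sinh^2 t=1$ shows every point of the image lies on the hyperbola, while $\cosh t\ge 1$ shows why this path can never reach the left branch, no matter the parameter.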
Point of Post: This is a continuation of this post.
Point of Post: In this post we give motivation for the inverse function theorem.
In this post we discuss one of the most fundamental analytic-geometric facts in all of multivariable analysis: the inverse function theorem. The theorem really has its humble roots back in single-variable analysis with an observation about regular points (points where the derivative is non-zero) of continuously differentiable functions. Namely, it is a common theorem that if $f:\mathbb{R}\to\mathbb{R}$ is continuously differentiable and $f'(a)\neq 0$ for some $a$, then there exists some neighborhood $U$ containing $a$ for which $f|_U$ is bijective, its inverse is continuously differentiable, and moreover $\left(f^{-1}\right)'(f(a))=\frac{1}{f'(a)}$. There, the theorem was easy to prove (see any basic analysis textbook for a proof). So, since we are doing multivariable analysis, an obvious question is "does this result extend to maps $\mathbb{R}^n\to\mathbb{R}^m$?" Well, the first problem in answering this question is formulating exactly what this 'theorem' would say in higher dimensions. Let's rephrase the theorem in a language a little more amenable to total derivatives. We begin with what $f'(a)\neq 0$ means. In particular (using the notation used above), since the total derivative of $f$ at $a$ is the linear map $x\mapsto f'(a)x$, we see that $f'(a)\neq 0$ if and only if this linear map is an isomorphism. Thus, it seems that the natural extension would be to consider $f:U\to\mathbb{R}^m$, with $U\subseteq\mathbb{R}^n$ open, with some distinguished point $a\in U$ such that $D_af$ is an isomorphism. In particular, we should make the concession that we would like to only consider maps (with the above notation) where $n=m$. From this, we can see that we have a visually similar condition that takes the place of $f'(a)\neq 0$ (recalling that we are only considering maps $\mathbb{R}^n\to\mathbb{R}^n$): the condition $\det J_f(a)\neq 0$. It's pretty intuitive that we should replace continuously differentiable with $C^1$ (in the multivariable sense). Lastly, we see that $\left(f^{-1}\right)'(f(a))=\frac{1}{f'(a)}$ seems naturally translatable to $D_{f(a)}f^{-1}=\left(D_af\right)^{-1}$ or, in the more common form, $J_{f^{-1}}(f(a))=J_f(a)^{-1}$. Thus, we can finally create a single-variable to multivariable dictionary for this theorem.
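The one-variable formula $(f^{-1})'(f(a))=1/f'(a)$ is easy to check numerically. Below is a minimal sketch, not from the post, using the illustrative choice $f(x)=x^3+x$ (strictly increasing with $f'$ never zero, so it is globally invertible) and bisection to compute the inverse:

```python
def f(x):
    return x**3 + x

def fprime(x):
    return 3 * x**2 + 1

def f_inv(y, lo=-10.0, hi=10.0):
    """Invert f on [lo, hi] by bisection; valid since f is strictly increasing."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a, h = 1.3, 1e-6
# Central-difference estimate of (f^{-1})'(f(a)):
num = (f_inv(f(a) + h) - f_inv(f(a) - h)) / (2 * h)
print(num, 1 / fprime(a))  # the two values agree to several digits
```

In higher dimensions the reciprocal $1/f'(a)$ becomes the matrix inverse $J_f(a)^{-1}$, which is exactly why the non-vanishing condition becomes $\det J_f(a)\neq 0$.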
Point of Post: In this post we discuss and prove the famous result known as the Banach Fixed Point Theorem or the Contraction Mapping Principle.
There is an old (anonymous) saying among mathematicians and math enthusiasts: "All of topology comes down to a fixed point theorem". Now, while this is clearly a gross exaggeration, it is definitely a moral truth. In fact, when one thinks of some of the most famous ["basic"] results in topology one is often thinking of theorems which, if not fixed point theorems themselves, have a definite fixed point feel: the Lefschetz fixed point theorem, the Brouwer fixed point theorem, the Borsuk-Ulam theorem, etc. This oversimplified truth can, in some small way, also be said of analysis. Often certain, seemingly intractable, problems in analysis are killed right away if one applies a certain fixed point theorem. Probably the most famous of the 'analytic feeling' fixed point theorems ('analytic feeling' is pretty vague, but most often has to do with metric spaces, e.g. this theorem) is the Banach fixed point theorem, which first appeared in the Ph.D. thesis of Stefan Banach (the same Banach as in Banach space, of course). Roughly, it says that a contraction mapping $f$ from a complete metric space to itself must have a unique fixed point. This is so intuitive (once you've been told it, of course) that the intuition is the proof: since the mapping is a contraction, the points $f^n(x)$ (for any starting point $x$) must be getting close together (i.e. the sequence $\left(f^n(x)\right)$ is Cauchy), and thus we know from the completeness of our space that the sequence must converge to some $x_0$. But, since $f$ is continuous (evidently, since contraction $\Rightarrow$ Lipschitz $\Rightarrow$ uniformly continuous $\Rightarrow$ continuous), we know that acting on $x_0$ by $f$ should be the same as considering $\lim_{n\to\infty}f^{n+1}(x)$ which, if there is any justice in the world, is just $x_0$. The uniqueness is clear since, if two distinct points were fixed by $f$, then acting on them by $f$ couldn't bring them any closer together.
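The proof sketch above is also an algorithm: iterate the map and watch the sequence settle down. A minimal sketch (not from the post), using the illustrative contraction $f=\cos$ on $[0,1]$ (its Lipschitz constant there is $\sin 1<1$):

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Banach iteration: follow x, f(x), f(f(x)), ... until it stabilizes."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence (is f really a contraction?)")

p = iterate_to_fixed_point(math.cos, 1.0)
print(p)  # the unique fixed point of cos near 0.739
```

The contraction constant also gives an a priori error bound: after $n$ steps the iterate is within $\frac{c^n}{1-c}\,d(x_0,f(x_0))$ of the fixed point, which is how one knows in advance how many iterations suffice.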
Point of Post: In this post we state and prove the multidimensional analogue of the mean value theorem.
Anyone who has taken a basic analysis course knows that the mean value theorem (MVT) is a very important and widely used tool. In fact, we've had several occasions to use it in our study of multidimensional analysis. So, the obvious question is "is there a multidimensional analogue?" Well, in the strictest sense, there isn't. By this I mean that if one literally transposes the usual MVT to higher dimensions one arrives at something like "Let $f:\mathbb{R}^n\to\mathbb{R}^m$ be everywhere differentiable and let $a,b\in\mathbb{R}^n$. Then, there exists some $c\in L(a,b)$ (where $L(a,b)$ is the line segment connecting $a$ and $b$) such that $f(b)-f(a)=D_cf(b-a)$." Unfortunately, this is wildly false, as the map $t\mapsto(\cos t,\sin t)$ clearly shows. So, what then is the correct formulation? We play our old trick of thinking of a map $\mathbb{R}^n\to\mathbb{R}^m$ restricted to a line as secretly a map of one variable, by first mapping $t\mapsto a+t(b-a)$ for some vectors $a,b\in\mathbb{R}^n$ and then evaluating our map there (more explicitly, something of the form $t\mapsto f(a+t(b-a))$). From there, if we could find some way of going back into $\mathbb{R}$ we'd have a map $\mathbb{R}\to\mathbb{R}$ which, being an honest to god real-valued, real-variable map, can have the one-dimensional case of the MVT applied to it. So, the question remains as to what kind of maps back into $\mathbb{R}$ we want to consider. Well, we know from the above that we're going to have to apply the chain rule to find the derivative of the composite map, and so we don't want to pick something so crazy that we are left knowing nothing new. No, we'll restrict our maps into $\mathbb{R}$ to be the simplest (in terms of derivatives), namely the linear ones: we'll consider $x\mapsto u\cdot x$, and so our map really looks like $t\mapsto u\cdot f(a+t(b-a))$ (where $u\in\mathbb{R}^m$ is a fixed vector).
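The standard counterexample to the naive multidimensional MVT is the circle map $t\mapsto(\cos t,\sin t)$ on $[0,2\pi]$: the endpoints coincide, so the left-hand side is the zero vector, yet the derivative has length $1$ everywhere, so no intermediate point can work. A minimal numerical sketch (not from the post):

```python
import math

def f(t):
    return (math.cos(t), math.sin(t))

def fprime(t):
    return (-math.sin(t), math.cos(t))

# f(2*pi) - f(0) is (0, 0)...
diff = tuple(b - a for a, b in zip(f(0), f(2 * math.pi)))
print(diff)

# ...but |f'(c)| = 1 for every c, so f(b) - f(a) = f'(c) * (b - a)
# would force 0 = 2*pi, a contradiction.
print(math.hypot(*fprime(1.0)))
```

This is exactly why the correct multidimensional statement must pass through a scalar-valued composition (the $u\cdot f(a+t(b-a))$ trick above) and ends up as an inequality rather than an equality.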