# Abstract Nonsense

## The Exponential and Trigonometric Functions (Pt. II)

Point of Post: This is a continuation of this post.

$\text{ }$

May 5, 2012

## The Exponential and Trigonometric Functions

Point of Post: In this post we define the exponential and trigonometric functions and note that they are holomorphic.

$\text{ }$

Motivation

$\text{ }$

Last time we proved that every function on an open subset of $\mathbb{C}$ that is locally representable by power series is necessarily holomorphic (in fact, infinitely differentiable!). That said, we didn't actually give any honest-to-god examples of such functions. Thus, in this post we will finally lay down our first few non-trivial examples of holomorphic functions. They will come in the form of what is, in a very precise sense we will make clear later on, the only extension of some of our favorite real-valued functions: the exponential and trigonometric functions.
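
As a quick sanity check (this snippet is my own illustration, not part of the original post), the power series $\displaystyle \sum_{n=0}^{\infty}\frac{z^n}{n!}$ that we will use to define the complex exponential can be summed numerically and compared against Python's built-in complex exponential; the 40-term cutoff is an arbitrary choice, ample for moderate $|z|$:

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the power series sum_{n>=0} z^n / n! defining exp.

    The 40-term cutoff is an arbitrary illustrative choice.
    """
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term: z^(n+1)/(n+1)!
    return total

z = 1 + 2j
# The partial sums should agree with the built-in exponential to
# machine precision for |z| this small.
error = abs(exp_series(z) - cmath.exp(z))
```

The same computation also confirms Euler's formula $e^{iz}=\cos(z)+i\sin(z)$, which will fall directly out of the series definitions of the trigonometric functions.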

$\text{ }$

May 5, 2012

## Complex Power Series

Point of Post: In this post we discuss the basic ideas behind when a complex power series converges and discuss the holomorphicity of functions representable by power series.

$\text{ }$

Motivation

$\text{ }$

We have now discussed the notion of holomorphic functions, but besides some extremely trivial ones (like polynomials) we don't have any good class of examples of such functions. In this post we describe what is the most informative example of holomorphic functions–power series. Why are power series the most informative example? Well, we shall eventually see that the holomorphic functions are precisely those that are locally just power series.
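
To make the convergence behavior concrete (a sketch of my own, not from the post), consider the geometric series $\sum z^n$, which has radius of convergence $1$ and sums to $1/(1-z)$ inside the unit disk, while its partial sums blow up outside it:

```python
def geometric_partial_sum(z, terms):
    """Partial sum of sum_{n>=0} z^n, which converges to 1/(1-z) for |z| < 1."""
    total, term = 0 + 0j, 1 + 0j
    for _ in range(terms):
        total += term
        term *= z
    return total

inside = 0.5j      # |z| = 0.5 < 1: inside the radius of convergence
outside = 2 + 0j   # |z| = 2 > 1: the terms themselves blow up, no convergence
```

Inside the disk of convergence the partial sums approach $1/(1-z)$ geometrically fast; outside, they diverge since the terms do not even tend to zero.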

$\text{ }$

I assume anyone reading this is fairly well-acquainted with power series and the related notions (the Weierstrass M-test, etc.), for I will gloss over some details.

$\text{ }$

May 3, 2012

## Complex Differentiability and Holomorphic Functions (Pt. I)

Point of Post: In this post we define what it means for a function $f:\Omega\to\mathbb{C}$ to be holomorphic on some domain $\Omega\subseteq\mathbb{C}$.

$\text{ }$

Motivation

$\text{ }$

We are going to start discussing complex analysis in preparation for later discussion of Riemann surfaces. We start this discussion, naturally, with the notion of differentiability for functions mapping $\mathbb{C}\supseteq\Omega\to\mathbb{C}$. There is a standard amount of amazement associated to functions which are differentiable in the complex sense since, as we shall see (and, as I'm sure you well know), they are MUCH nicer than any kind of real differentiable function $U\to\mathbb{R}^2$. In particular, we shall see that any once differentiable function is automatically infinitely differentiable and, moreover, locally expressible as a power series. Think about how different this is from standard real differentiable functions, say, even just $\mathbb{R}\to\mathbb{R}$, where we can find functions that have any number $N$ of derivatives we desire, yet whose $N^{\text{th}}$ derivative is not even continuous, let alone differentiable. We can even find functions that are infinitely differentiable yet whose Taylor series at a point doesn't converge to the function!
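
To make the last claim concrete (a standard example, not from the original post): define

$\displaystyle f(x)=\begin{cases} e^{-1/x^2} & x\ne 0\\ 0 & x=0\end{cases}$

One can check that $f^{(n)}(0)=0$ for every $n$, so the Taylor series of $f$ at $0$ is identically zero; it converges everywhere, but to the zero function, while $f(x)>0$ for all $x\ne 0$.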

May 1, 2012

## Local Homeo(Diffeo)morphisms to Global Homeo(Diffeo)morphisms

Point of Post: In this post we discuss an important consequence of the inverse function theorem which relates local diffeomorphisms to global diffeomorphisms.

$\text{ }$

September 22, 2011

## Curves and the Implicit Function Theorem

Point of Post: In this post we discuss the notion of smooth curves in $\mathbb{R}^n$ and the implicit function theorem.

$\text{ }$

Motivation

To begin our discussion of geometry it seems prudent to discuss perhaps the simplest of all smooth geometric objects–curves. Everyone has an intuitive notion of a curve (at least in two or three space). Namely, a curve can be thought of as a length of string that is twisted this way and that, in a smooth manner. But, of course, in mathematics one must always back up intuitive notions with concrete, solid definitions. That said, in our case one quickly realizes that there is not one immediate definition of curve. Indeed, there are two canonical ways of defining a curve which, from our point of view, are ordered in terms of ‘importance’ (i.e. we prefer one notion over the other). To see the difference between these two notions consider probably the simplest (closed) curve one could imagine in $\mathbb{R}^2$–the unit circle $\mathbb{S}^1$. Ask any kid off the street how one defines the unit circle and you are most likely to get the immediate answer “Oh! It’s just the set of points $(x,y)\in\mathbb{R}^2$ such that $x^2+y^2=1$” (or, perhaps, the set of all $z\in\mathbb{C}$ with $|z|=1$). Or, the parabola is another perfectly good curve, which could be described as the set of $(x,y)$ such that $y=x^2$. Thus, one should start to wonder if perhaps the correct notion of a curve is the ‘locus’ of one or more functions in Euclidean space. To be more concrete, for functions $f_1,\cdots,f_n:\mathbb{R}^n\to\mathbb{R}$ define $\mathbb{V}(f_1,\cdots,f_n)$ to be the set $f_1^{-1}(\{0\})\cap\cdots\cap f_n^{-1}(\{0\})$ (so that $\mathbb{S}^1=\mathbb{V}(x^2+y^2-1)$). Perhaps then a good definition of a ‘curve’ is a set of the form $\mathbb{V}(f_1,\cdots,f_n)$ for some sufficiently well-behaved functions $f_1,\cdots,f_n$. That said, there is another notion of curve which is just as natural as the definition via the locus of a set of ‘nice’ functions.
Namely, a curve can be thought of as a ‘path’, or the trace of a moving particle, or, more importantly, the function defining the path. To be precise, a curve could also be defined as a sufficiently nice mapping $\gamma:I\to\mathbb{R}^n$ for some (possibly infinite) non-empty interval $I\subseteq\mathbb{R}$. There is a large connotational difference between curves thought of as the locus of a set of functions and as a ‘path’. In particular, a ‘path’ carries notions of how quickly one traverses the path, whether one turns around, etc., whereas the locus of functions is just a set. That said, there seems to be a pretty obvious connection between ‘paths’ $\gamma:I\to\mathbb{R}^n$ and loci. Namely, it seems intuitively plausible that the locus $\mathbb{V}(f_1,\cdots,f_n)$ of a set of functions and the image $\gamma(I)$ of some ‘path’ $\gamma$ are the same kind of ‘objects’ (i.e. just sets). That said, a little thought shows that they are definitively not in one-to-one correspondence. For example, consider the hyperbola $\mathbb{V}(x^2-y^2-1)$. This is a perfectly nice ‘curve’; that said, there evidently does not exist a sufficiently nice (e.g. continuous) ‘path’ $\gamma:I\to\mathbb{R}^2$ with $\gamma(I)=\mathbb{V}(x^2-y^2-1)$, since the right hand side is not connected while the left hand side necessarily is. That said, there is hope of finding a ‘path’ whose image equals part of the hyperbola. In particular, if one restricts $\mathbb{V}(x^2-y^2-1)$ to points with positive $x$-coordinates, then the path $\gamma:\mathbb{R}\to\mathbb{R}^2$ given by $\gamma:t\mapsto (\cosh(t),\sinh(t))$ is a perfectly nice ($C^\infty$) ‘path’ with $\gamma(\mathbb{R})$ equal to the aforementioned branch of the hyperbola. Thus, one wonders if perhaps there is some condition on a curve (or a point of a curve) that guarantees that the curve is locally the image of a ‘path’.
In fact, there is a theorem to this effect, though it is perhaps a more sophisticated answer than one would expect–in particular, it is a stronger version of the inverse function theorem. Roughly, the theorem states that if one has a level curve, and if the ‘derivative’ of the defining functions is non-zero at some point, then in some neighborhood of that point the level curve is the graph of a function! Not to point out the obvious, but math is full of simple questions with startlingly complicated answers–this is perhaps one of the most profound of these examples, providing an integral link between the algebra of functions and their geometry.
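
As a quick numeric aside (an illustration of my own, not from the post), the hyperbolic parametrization mentioned above really does land on the positive-$x$ branch of $\mathbb{V}(x^2-y^2-1)$, since $\cosh^2(t)-\sinh^2(t)=1$ and $\cosh(t)\ge 1$:

```python
import math

def gamma(t):
    """The path gamma(t) = (cosh t, sinh t) parametrizing one hyperbola branch."""
    return (math.cosh(t), math.sinh(t))

def hyperbola(x, y):
    """Defining function of the locus V(x^2 - y^2 - 1)."""
    return x * x - y * y - 1

# Every point of the path lies on the locus, and its first coordinate
# cosh(t) >= 1 is always positive, so gamma covers (only) the x > 0 branch.
sample = [gamma(t) for t in (-3.0, -1.0, 0.0, 1.0, 3.0)]
```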

$\text{ }$

September 15, 2011

## The Inverse Function Theorem (Proof)

Point of Post: This is a continuation of this post.

$\text{ }$

September 8, 2011

## The Inverse Function Theorem (Preliminaries)

Point of Post: In this post we give motivation for the inverse function theorem.

$\text{ }$

Motivation

In this post we discuss one of the most fundamental analytic-geometric facts in all of multivariable analysis–the inverse function theorem. The theorem really has its humble roots back in single variable analysis, with an observation about regular points (points where the derivative is non-zero) of continuously differentiable functions. Namely, it is a common theorem that if $f:(a,b)\to\mathbb{R}$ is continuously differentiable and $f'(c)\ne 0$ for some $c\in(a,b)$, then there exists some neighborhood $U\subseteq (a,b)$ containing $c$ on which $f$ is injective, its inverse is continuously differentiable, and moreover $\displaystyle \left(f^{-1}\right)'(f(c))=\frac{1}{f'(c)}$. There, the theorem was easy to prove (see any basic analysis textbook for a proof). So, since we are doing multivariable analysis, an obvious question is “does this result extend to maps $f:\mathbb{R}^n\to\mathbb{R}^m$?” Well, the first problem in answering this question is formulating exactly what this ‘theorem’ would say in higher dimensions. Let’s rephrase the theorem in a language a little more amenable to total derivatives. We begin with what $f'(c)\ne 0$ means. In particular (using the notation above), since $D_f(c)(x)=f'(c)x$ we have that $f'(c)\ne 0$ if and only if $D_f(c)\in\text{GL}\left(\mathbb{R}\right)$. Thus, it seems the natural extension is to consider $f:U\to\mathbb{R}^m$, with $U\subseteq\mathbb{R}^n$ open, with some distinguished point $c\in U$ such that $D_f(c):\mathbb{R}^n\to\mathbb{R}^m$ is an isomorphism. In particular, since an isomorphism $\mathbb{R}^n\to\mathbb{R}^m$ can exist only when $m=n$, we should make the concession of only considering maps (with the above notation) where $m=n$. From this we see that the condition $f'(c)\ne 0$ becomes (recalling that we are now only considering maps $\mathbb{R}^n\to\mathbb{R}^n$) the condition $\det\text{Jac}_f(c)\ne 0$.
It’s pretty intuitive that we should replace ‘continuously differentiable’ with $C^1(U)$ (in the multivariable sense). Lastly, we see that $\displaystyle \left(f^{-1}\right)'(f(c))=\frac{1}{f'(c)}$ translates naturally to $D_{f^{-1}}(f(c))=D_f(c)^{-1}$ or, in the more common form, $\text{Jac}_{f^{-1}}(f(c))=\text{Jac}_f(c)^{-1}$. Thus, we can finally create a single-variable to multivariable dictionary for this theorem:

$\text{ }$

$\begin{array}{c|c}\mathbb{R}\to\mathbb{R} & \mathbb{R}^n\to\mathbb{R}^n\\ \hline & \text{ }\\ \text{continuously differentiable} & C^1(U)\\ & \\ f'(c)\ne 0 & D_f(c)\in\text{GL}\left(\mathbb{R}^n\right)\\ & \\ \displaystyle \left(f^{-1}\right)'(f(c))=\frac{1}{f'(c)} & D_{f^{-1}}(f(c))=D_f^{-1}(c)\end{array}$
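
The last row of the dictionary can be checked numerically. Below is a minimal sketch (my own, not from the post) using the map $f(x,y)=(e^x\cos y, e^x\sin y)$, whose local inverse is explicit; finite-difference Jacobians of $f$ and $f^{-1}$ should multiply out to (approximately) the identity:

```python
import math

def f(x, y):
    # A local diffeomorphism near (0.3, 0.4): the polar-style map (e^x cos y, e^x sin y).
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

def f_inv(u, v):
    # Explicit local inverse: recover x from the radius, y from the angle.
    return (0.5 * math.log(u * u + v * v), math.atan2(v, u))

def jacobian(g, p, h=1e-6):
    # Forward-difference approximation of the 2x2 Jacobian of g at p.
    x, y = p
    g10, g20 = g(x, y)
    g1x, g2x = g(x + h, y)
    g1y, g2y = g(x, y + h)
    return [[(g1x - g10) / h, (g1y - g10) / h],
            [(g2x - g20) / h, (g2y - g20) / h]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c = (0.3, 0.4)
# Jac_{f^{-1}}(f(c)) * Jac_f(c) should be (approximately) the identity matrix.
product = mat_mul(jacobian(f_inv, f(*c)), jacobian(f, c))
```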

$\text{ }$

$\text{ }$

References:

1.  Spivak, Michael. Calculus on Manifolds; a Modern Approach to Classical Theorems of Advanced Calculus. New York: W.A. Benjamin, 1965. Print.

2. Apostol, Tom M. Mathematical Analysis. Reading, MA: Addison-Wesley Pub., 1974. Print.

September 8, 2011

## Banach Fixed Point Theorem

Point of Post: In this post we discuss and prove the famous result known as the Banach Fixed Point Theorem or the Contraction Mapping Principle.

$\text{ }$

Motivation

There is an old (anonymous) saying among mathematicians and math enthusiasts: “All of topology comes down to a fixed point theorem”. Now, while this is clearly a gross exaggeration, it is definitely a moral truth. In fact, when one thinks of some of the most famous [“basic”] results in topology one is often thinking of theorems which, if not fixed point theorems themselves, have a definite fixed point feel: the Lefschetz fixed point theorem, the Brouwer fixed point theorem, the Borsuk-Ulam theorem, etc. This oversimplified truth can, in some small way, also be said of analysis. Often certain, seemingly intractable, problems in analysis are killed right away if one applies a certain fixed point theorem. Probably the most famous of the ‘analytic feeling’ fixed point theorems (‘analytic feeling’ is pretty vague, but most often has to do with metric spaces, as with this theorem) is the Banach fixed point theorem, which first appeared in the Ph.D. thesis of Stefan Banach (the same Banach as in Banach space, of course). Roughly, it says that a contraction mapping from a complete metric space to itself must have a unique fixed point. This is so intuitive (once you’ve been told it, of course) that the intuition is the proof: since the mapping is a contraction, the points $x_0,f(x_0),f(f(x_0)),\cdots$ (for any $x_0$) must be getting close together (i.e. the sequence is Cauchy), and thus we know from the completeness of our space that the sequence must converge to some $\lim f^{(n)}(x_0)=y$. But, since $f$ is continuous (evidently, since contraction $\Rightarrow$ Lipschitz $\Rightarrow$ uniformly continuous $\Rightarrow$ continuous) we know that acting on $y$ by $f$ should be the same as considering $\lim f^{(n+1)}(x_0)$ which, if there is any justice in the world, is just $y$. The uniqueness is clear since if two distinct points were fixed by $f$, then $f$ would fail to bring them strictly closer together, contradicting that $f$ is a contraction.
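
The iteration in the proof sketch is also a practical algorithm. Here is a minimal sketch (my own illustration; the function names are invented), applied to $\cos$ on $[0,1]$, which is a contraction there since $\cos([0,1])\subseteq[0,1]$ and $|\cos'|=|\sin|\le\sin(1)<1$ on that interval:

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x -> f(x) until successive iterates are within tol.

    If f is a contraction on a complete metric space containing the orbit of x0,
    the sequence x0, f(x0), f(f(x0)), ... converges to the unique fixed point.
    """
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# Converges to the unique solution of cos(x) = x, whatever the start point.
root = banach_iterate(math.cos, 0.5)
```

Note that uniqueness shows up concretely: starting the iteration anywhere in $[0,1]$ lands on the same fixed point.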

$\text{ }$

June 13, 2011

## The Mean Value Theorem for Multivariable Maps

Point of Post: In this post we state and prove the multidimensional analogue of the mean value theorem.

$\text{ }$

Motivation

Anyone who has taken a basic analysis course knows that the mean value theorem (MVT) is a very important and widely used tool. In fact, we’ve had several occasions to use it in our study of multidimensional analysis. So, the obvious question is “is there a multidimensional analogue?” Well, in the strictest sense, there isn’t. By this I mean that if one literally transposes the usual MVT to higher dimensions one arrives at something like “Let $f$ be everywhere differentiable on $U$ and let $x,y\in U$. Then, there exists some $\xi\in \overline{xy}$ (where $\overline{xy}$ is the line segment connecting $x$ and $y$) such that $f(x)-f(y)=D_f(\xi)(x-y)$.” Unfortunately, this is wildly false, as the map $f:\mathbb{R}\to\mathbb{R}^2:t\mapsto (\cos(t),\sin(t))$ clearly shows: $f(2\pi)-f(0)=0$, yet $D_f(\xi)(2\pi-0)$ is non-zero for every $\xi$ since $D_f(\xi)$ is injective. So, what then is the correct formulation? We play our old trick of thinking of a map $\mathbb{R}^n\to\mathbb{R}^m$ as secretly $\mathbb{R}\to\mathbb{R}^n\to\mathbb{R}^m$, by first mapping $t\mapsto a+tb$ for some vectors $a,b\in\mathbb{R}^n$ and then evaluating our map there (more explicitly, something of the form $t\mapsto a+tb\mapsto f(a+tb)$). From there, if we could find some way of going back into $\mathbb{R}$ we’d have a map $\mathbb{R}\to\mathbb{R}^n\to\mathbb{R}^m\to\mathbb{R}$ which, being an honest-to-god real valued function of a real variable, can have the one-dimensional case of the MVT applied to it. So, the question remains: what kind of maps $\mathbb{R}^m\to\mathbb{R}$ do we want to consider? Well, we know from the above that we’re going to have to apply the chain rule to find the derivative of the map $\mathbb{R}\to\mathbb{R}$, and so we don’t want to pick something so crazy that we are left knowing nothing new. No, we’ll restrict our maps into $\mathbb{R}$ to be the simplest possible (in terms of derivatives), namely we’ll consider $\varphi\in\text{Hom}\left(\mathbb{R}^m,\mathbb{R}\right)$, and so our map really looks like $\varphi\circ f\circ g$ (where $g(t)=a+tb$).
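
The failure of the naive statement can be seen numerically (a sketch of my own, not from the post): for the circle map, the displacement over $[0,2\pi]$ vanishes, while $D_f(\xi)(2\pi-0)=2\pi\,f'(\xi)$ has norm $2\pi$ at every $\xi$, so no mean-value point can exist:

```python
import math

def f(t):
    """The circle path t -> (cos t, sin t): C^infinity with |f'(t)| = 1 everywhere."""
    return (math.cos(t), math.sin(t))

def df(t):
    # Derivative of the circle path, a unit vector for every t.
    return (-math.sin(t), math.cos(t))

# f(2*pi) - f(0) is the zero vector...
gap = (f(2 * math.pi)[0] - f(0)[0], f(2 * math.pi)[1] - f(0)[1])
# ...but D_f(xi)(2*pi - 0) = 2*pi * f'(xi) has norm 2*pi at every xi.
norms = [2 * math.pi * math.hypot(*df(xi)) for xi in (0.0, 1.0, 2.0, 3.0)]
```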

$\text{ }$

June 11, 2011