# Abstract Nonsense

## Structure of Euclidean Space

Point of Post: In this post we discuss the basic structure of Euclidean space $\mathbb{R}^n$, in preparation for our discussion of multivariable analysis.

$\text{ }$

### Motivation

$\text{ }$

The setting for all of multivariable analysis is, of course, Euclidean space $\mathbb{R}^n$. It therefore makes sense to get our notation straight and recall some basic theorems about it before proceeding. Euclidean space is perhaps one of the richest mathematical structures encountered by people on a regular basis, and consequently our discussion here will not do it any fraction of justice.

$\text{ }$

### Properties as an Algebra/Inner Product Space

$\text{ }$

Euclidean $n$-space, $\mathbb{R}^n$, is given the product algebra structure inherited from the $\mathbb{R}$-algebra $\mathbb{R}$ itself. It is clearly $n$-dimensional as a vector space with the canonical basis $e_1,\cdots,e_n$ where $e_k$ has a $1$ in the $k^{\text{th}}$ slot and $0$ elsewhere. The canonical ordered basis for $\mathbb{R}^n$ is $(e_1,\cdots,e_n)$.
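As a quick illustrative sketch (not from the post itself), the product algebra structure and the canonical basis can be written out with plain Python tuples standing in for vectors; the function names `vec_add`, `vec_mul`, and `e` are ours, chosen only for this example.

```python
# Sketch: R^n with the product algebra structure inherited from R,
# where both addition and multiplication act componentwise.

def vec_add(x, y):
    """Vector addition, inherited componentwise from R."""
    return tuple(a + b for a, b in zip(x, y))

def vec_mul(x, y):
    """The product-algebra multiplication: also componentwise."""
    return tuple(a * b for a, b in zip(x, y))

def e(k, n):
    """Canonical basis vector e_k in R^n (1-indexed): 1 in slot k, 0 elsewhere."""
    return tuple(1.0 if j == k else 0.0 for j in range(1, n + 1))

# e_2 in R^3 has its 1 in the second slot:
assert e(2, 3) == (0.0, 1.0, 0.0)
# Multiplication in the product algebra is slot-by-slot:
assert vec_mul((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)) == (4.0, 10.0, 18.0)
```

Note that this componentwise product is what makes $\mathbb{R}^n$ an algebra rather than merely a vector space.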

$\text{ }$

One has the usual inner product, denoted simply $\langle\cdot,\cdot\rangle$ when no confusion will arise, given by

$\text{ }$

$\displaystyle \left\langle (x_1,\cdots,x_n),(y_1,\cdots,y_n)\right\rangle=\sum_{j=1}^{n}x_jy_j$

$\text{ }$

which can equivalently be thought of as declaring the canonical basis orthonormal and extending by bilinearity. From this inner product we induce the usual norm $\|\cdot\|$ on $\mathbb{R}^n$, given (as expected) by

$\text{ }$

$\displaystyle \|(x_1,\cdots,x_n)\|=\sqrt{\sum_{j=1}^{n}x_j^2}=\sqrt{\langle (x_1,\cdots,x_n),(x_1,\cdots,x_n)\rangle}$
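A minimal sketch of these two definitions (the helper names `inner` and `norm` are ours, not the post's):

```python
import math

def inner(x, y):
    """Usual inner product <x, y> = sum over j of x_j * y_j."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Induced norm ||x|| = sqrt(<x, x>)."""
    return math.sqrt(inner(x, x))

x = (3.0, 4.0)
assert inner(x, x) == 25.0   # <x, x> = 9 + 16
assert norm(x) == 5.0        # the familiar 3-4-5 right triangle
# The canonical basis vectors are orthonormal under this product:
assert inner((1.0, 0.0), (0.0, 1.0)) == 0.0
```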

$\text{ }$

From this one can prove the so-called polarization identity

$\text{ }$

$\displaystyle \left\langle (x_1,\cdots,x_n),(y_1,\cdots,y_n)\right\rangle=\frac{\left\|(x_1,\cdots,x_n)+(y_1,\cdots,y_n)\right\|^2-\left\|(x_1,\cdots,x_n)-(y_1,\cdots,y_n)\right\|^2}{4}$
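The identity can be checked numerically; the sketch below (with our own helper names `inner`, `norm`, and `polarize`) verifies it on random vectors in $\mathbb{R}^5$.

```python
import math
import random

def inner(x, y):
    """Usual inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Norm induced by the inner product."""
    return math.sqrt(inner(x, x))

def polarize(x, y):
    """Right-hand side of the polarization identity."""
    plus = tuple(a + b for a, b in zip(x, y))
    minus = tuple(a - b for a, b in zip(x, y))
    return (norm(plus) ** 2 - norm(minus) ** 2) / 4.0

random.seed(0)
for _ in range(100):
    x = tuple(random.uniform(-10, 10) for _ in range(5))
    y = tuple(random.uniform(-10, 10) for _ in range(5))
    # The identity recovers <x, y> from norms alone (up to float error):
    assert math.isclose(inner(x, y), polarize(x, y), rel_tol=1e-9, abs_tol=1e-9)
```

The point of the identity is that the norm alone determines the inner product, so no information is lost in passing from $\langle\cdot,\cdot\rangle$ to $\|\cdot\|$.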

$\text{ }$

$\text{ }$

### Topological Properties of $\mathbb{R}^n$

$\text{ }$

There are multiple equivalent ways to define a topology on $\mathbb{R}^n$. One can give it the metric topology induced by the usual norm, in other words the metric topology induced by the metric $d(x,y)=\|x-y\|$. Alternatively, one can give it the product topology obtained by thinking of $\mathbb{R}^n$ as the $n$-fold product of $\mathbb{R}$ with itself (where $\mathbb{R}$ carries either the order topology induced by the usual linear ordering or the usual metric topology; these agree). The fact that these two topologies are equivalent can be cutely wrapped up in the saying: “Inside every little square is a little circle, and inside every little circle is a little square.” With this usual topology one can prove that $\mathbb{R}^n$ has the Heine-Borel property: while in a general metric space compact subspaces are merely closed and bounded, in $\mathbb{R}^n$ the converse also holds, so a subspace is compact if and only if it is closed and bounded (this can be proven first for $\mathbb{R}$ and then extended to the general case by Tychonoff’s theorem). It is also true that $\mathbb{R}^n$ is a complete metric space.
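The “squares and circles” slogan amounts to the equivalence of the sup-norm (whose balls are squares) and the Euclidean norm (whose balls are circles); quantitatively, $\|x\|_\infty\le\|x\|_2\le\sqrt{n}\,\|x\|_\infty$. A small sketch (with our own helper names `norm2` and `norm_inf`) spot-checking these inequalities on random vectors:

```python
import math
import random

def norm2(x):
    """Euclidean norm: balls are round."""
    return math.sqrt(sum(a * a for a in x))

def norm_inf(x):
    """Sup-norm: balls are boxes (squares in R^2)."""
    return max(abs(a) for a in x)

random.seed(1)
n = 4
for _ in range(1000):
    x = tuple(random.uniform(-5, 5) for _ in range(n))
    # A box of radius r fits inside the round ball of radius r*sqrt(n),
    # and the round ball of radius r fits inside the box of radius r:
    assert norm_inf(x) <= norm2(x) + 1e-12
    assert norm2(x) <= math.sqrt(n) * norm_inf(x) + 1e-12
```

These inequalities are exactly why each norm's open balls can be nested inside the other's, so the two induced topologies coincide.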

$\text{ }$

We have as always the canonical projections $\pi_i:\mathbb{R}^n\to\mathbb{R}$ for $i=1,\cdots,n$, given by $\pi_i:(x_1,\cdots,x_n)\mapsto x_i$. Every mapping $f:X\to\mathbb{R}^n$ can, of course, be thought of as

$\text{ }$

$f:x\mapsto \left(\pi_1(f(x)),\cdots,\pi_n(f(x))\right)$

$\text{ }$

where, given an $f$, we denote $\pi_i\circ f$ (when no confusion arises) by $f_i$ and call it the $i^{\text{th}}$ coordinate function. As usual a function $f:X\to\mathbb{R}^n$ is continuous if and only if $f_i:X\to\mathbb{R}$ is continuous for each $i=1,\cdots,n$. Perhaps stronger is the fact that if one defines limits in the usual way (as per a metric space), then for $f:E\to\mathbb{R}^m$ with $E\subseteq\mathbb{R}^n$ we have

$\text{ }$

$\displaystyle \lim_{x\to y}f(x)=(a_1,\cdots,a_m)\Leftrightarrow \lim_{x\to y}f_i(x)=a_i\;\text{ for }\;i=1,\cdots,m$
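As a concrete sketch, take the hypothetical map $f:\mathbb{R}\to\mathbb{R}^2$, $f(t)=(\cos t,\sin t)$ (our example, not the post's). Its coordinate functions $f_1=\pi_1\circ f$ and $f_2=\pi_2\circ f$ converge at $t\to 0$ exactly when $f$ does, and to the matching components of $(1,0)$:

```python
import math

def pi(i):
    """Canonical projection pi_i: R^n -> R (1-indexed)."""
    return lambda x: x[i - 1]

def f(t):
    """Example map f: R -> R^2, f(t) = (cos t, sin t)."""
    return (math.cos(t), math.sin(t))

# Coordinate functions f_i = pi_i composed with f:
f1 = lambda t: pi(1)(f(t))
f2 = lambda t: pi(2)(f(t))

# Approaching t -> 0, each coordinate function tends to the matching
# component of the limit (1, 0) of f:
t = 1e-7
assert abs(f1(t) - 1.0) < 1e-6
assert abs(f2(t) - 0.0) < 1e-6
```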

$\text{ }$

$\text{ }$


May 22, 2011
