Abstract Nonsense

Crushing one theorem at a time

Halmos Sections 37 and 38: Matrices and Matrices of Linear Transformations (Pt. IV)


Point of post: This is a continuation of this post.

Remark: For some strange reason the fourth post (this one) and the fifth (the previous one) got mixed up in the order of posting. The numbering is correct: this is the fourth post in this sequence, and the one preceding it is the fifth.

Continue reading


December 19, 2010 | Fun Problems, Halmos, Linear Algebra, Uncategorized

Halmos Sections 32 and 33: Linear Transformations and Transformations as Vectors (Pt. II)


Point of post: This is a continuation of this post in an effort to answer the questions at the end of sections 32 and 33 in Halmos’s book.

Continue reading

November 22, 2010 | Fun Problems, Halmos, Linear Algebra, Uncategorized

Tensor Product


Point of post: In this post I will discuss the very basic, and simple-minded, definition of the tensor product \mathscr{V}\otimes\mathscr{W} of finite-dimensional vector spaces \mathscr{V} and \mathscr{W} and its consequences, as is outlined in Halmos (viz. reference 1).

Nota Bene: The following may seem a far cry from the typical definition of the tensor product as \mathcal{F}\left(\mathscr{U}\times\mathscr{V}\right)/\sim, where \mathcal{F}\left(\mathscr{U}\times\mathscr{V}\right) is the free vector space on \mathscr{U}\times\mathscr{V} and \sim is the usual equivalence relation. That said, the following gives a fairly large amount of theoretical bang for a fairly small complexity buck.

Motivation

In the last post we discussed how, given vector spaces \mathscr{U},\mathscr{V} over a field F, there is a canonical way to form the vector space of all bilinear forms on \mathscr{U}\boxplus\mathscr{V}, denoted \text{Bil}\left(\mathscr{U},\mathscr{V}\right). But, as is fast becoming a motif in our studies, when we are handed a vector space \mathscr{W} we turn around and study its dual space, as always denoted either \text{Hom}\left(\mathscr{W},F\right) or \mathscr{W}^*. For the case of \text{Bil}\left(\mathscr{U},\mathscr{V}\right) we make a small notational change: instead of denoting the dual space of \text{Bil}\left(\mathscr{U},\mathscr{V}\right) by \text{Hom}\left(\text{Bil}\left(\mathscr{U},\mathscr{V}\right),F\right) we denote it by \mathscr{U}\otimes\mathscr{V} and call it the tensor product of \mathscr{U} and \mathscr{V}.
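For instance, this definition makes it easy to write down elements of \mathscr{U}\otimes\mathscr{V}: each pair u\in\mathscr{U} and v\in\mathscr{V} determines a linear functional u\otimes v (an elementary tensor) which acts on a bilinear form by evaluation,

\left(u\otimes v\right)(B)=B(u,v)\quad\text{for every }B\in\text{Bil}\left(\mathscr{U},\mathscr{V}\right)

and this is exactly how Halmos produces elements of the tensor product.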

Continue reading

October 29, 2010 | Algebra, Linear Algebra

Halmos Chapter one Sections 15, 16 and 17: Dual Bases, Reflexivity, and Annihilators (Part I)


Note: For a vector space \mathcal{V} over a field F I will freely switch between the notations \mathcal{V}^{*} and \text{Hom}\left(\mathcal{V},F\right) for the dual space of \mathcal{V}, depending on which fits better.

Continue reading

October 3, 2010 | Fun Problems, Halmos, Linear Algebra

Halmos Chapter one Sections 13 and 14: Linear Functionals and Bracket Notation


1.

Problem: Consider the set \mathbb{C} of complex numbers as a vector space over \mathbb{R}. Suppose that for each \zeta=\xi_1+i\xi_2 in \mathbb{C} (where \xi_1,\xi_2\in\mathbb{R}) the function y is given by

a) y(\zeta)=\xi_1

b) y(\zeta)=\xi_2

c) y(\zeta)=\xi_1^2

d) y(\zeta)=\xi_1-i\xi_2

e) y(\zeta)=\sqrt{\xi_1^2+\xi_2^2}

In which of these cases is y a linear functional?

Continue reading

September 30, 2010 | Fun Problems, Halmos, Linear Algebra

Halmos Chapter one Sections five, six, and seven: Linear Dependence, Linear Combinations, and Bases


1.

Problem:

a) Prove that the four vectors x=(1,0,0), y=(0,1,0), z=(0,0,1), and u=(1,1,1) are linearly dependent, but any three of them are linearly independent

b) If the vectors x,y,z, and u in \displaystyle \mathcal{P}=\left\{\sum_{k=0}^{n}c_kt^k:c_k\in\mathbb{C}\text{ and }n\in\mathbb{N}\cup\{0\}\right\} are given by x(t)=1, y(t)=t, z(t)=t^2, and u(t)=1+t+t^2, prove that x,y,z, and u are dependent, but any three of them are linearly independent

Proof:

a) Clearly all four are l.d. (linearly dependent) since -x-y-z+u=0. Now, clearly x,y,z are l.i. (linearly independent) and so it remains to show that \{x,y,u\},\{y,z,u\},\{z,x,u\} are l.i. We do this only for the first case, since the others are done similarly. So, suppose that

\alpha_1x+\alpha_2y+\alpha_3u=\left(\alpha_1+\alpha_3,\alpha_2+\alpha_3,\alpha_3\right)=\bold{0}

comparison of the third coordinates tells us that \alpha_3=0, and thus \alpha_1=\alpha_2=0, from where l.i. follows.

b) Clearly they are l.d. since -x-y-z+u=\bold{0}. We can prove that any three are l.i. in much the same way as we did for the tuples (this is no coincidence: treated as formal objects, the powers of t are just glorified placeholders for the coefficients), and so we once again prove only one case. Namely, if

p(t)=\alpha_1x+\alpha_2y+\alpha_3u=(\alpha_1+\alpha_3)+(\alpha_2+\alpha_3)t+\alpha_3 t^2=0(t)

then in particular p(0)=0\implies \alpha_1+\alpha_3=0, so that

p(t)=(\alpha_2+\alpha_3)t+\alpha_3 t^2=t\left(\alpha_2+\alpha_3+\alpha_3t\right)=0(t)

Now, since p(t)=0(t) for all t\in\mathbb{C}, this holds in particular for all t\in\mathbb{R}^+, and for such t we may divide by t to get

q(t)=\alpha_2+\alpha_3+\alpha_3t=0\quad\text{ for all }t\in\mathbb{R}^+

A polynomial which vanishes on an infinite set vanishes identically, so in particular q(0)=\alpha_2+\alpha_3=0. Repeating this process once more shows that \alpha_3=0, from where it all cascades back to show that \alpha_1=\alpha_2=\alpha_3=0
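As a quick sanity check (a sympy sketch of my own, not part of Halmos), one can verify both parts at once; the coefficient rows of 1, t, t^2, and 1+t+t^2 with respect to 1,t,t^2 form exactly the same matrix as the tuples in part a), which is the "glorified placeholders" remark in action.

```python
from itertools import combinations
from sympy import Matrix

# The four vectors of part a); the coefficient rows (constant, t, t^2) of
# 1, t, t^2, and 1 + t + t^2 in part b) form exactly the same matrix.
vectors = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]

assert Matrix(vectors).rank() == 3            # rank < 4: all four are l.d.
for triple in combinations(vectors, 3):
    assert Matrix(list(triple)).rank() == 3   # full rank: any three are l.i.
print("all checks pass")
```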

2.

Problem: Prove that if \mathbb{R} is considered as a vector space over \mathbb{Q}, then a necessary and sufficient condition that the vectors 1 and \xi in \mathbb{R} be l.i. is that the real number \xi is irrational.

Proof: This is evident. Suppose that \xi\notin\mathbb{Q} but \{1,\xi\} were not l.i.; then there exist \alpha,\beta\in\mathbb{Q}, not both zero, for which \alpha+\beta\xi=0 (in fact neither can be zero, since either one being zero forces the other to be zero as well, considering \xi\ne0). But this in particular means that \displaystyle \xi=\frac{\alpha}{-\beta}\in\mathbb{Q}, which is a contradiction. Conversely, suppose that \{1,\xi\} is l.i. but \xi\in\mathbb{Q}. Then there exist \alpha,\beta\in\mathbb{Q} with \beta\ne0 such that \displaystyle \frac{\alpha}{\beta}=\xi\implies \alpha-\beta\xi=0. Since \beta\ne0 the coefficients are not both zero, and so this violates the l.i. of \{1,\xi\}

3.

Problem: Is it true that if x,y, and z are l.i. vectors, then so are x+y,y+z,z+x?

Proof: Yes. Note that if

\alpha_1(x+y)+\alpha_2(x+z)+\alpha_3(y+z)=(\alpha_1+\alpha_2)x+(\alpha_2+\alpha_3)z+(\alpha_1+\alpha_3)y=\bold{0}

then the l.i. of \{x,y,z\} tells us that the system of equations

\begin{cases}\alpha_1+\alpha_2=0\quad(1)\\\alpha_2+\alpha_3=0\quad(2)\\ \alpha_1+\alpha_3=0\quad(3)\end{cases}

must hold. But,

\alpha_2-\alpha_3=(1)-(3)=0=\alpha_2+\alpha_3\quad\text{(the latter by }(2)\text{)}

upon which cancellation gives 2\alpha_3=0, so \alpha_3=0, from where the rest follows.

Remark: This clearly generalizes, with one caveat: the cancellation above amounts to dividing by 2, so the argument requires the scalar field to have characteristic different from two (over \mathbb{Z}_2, for instance, (x+y)+(y+z)+(z+x)=\bold{0}). The determinant sketch below makes the same point.
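In matrix terms (a small sympy sketch, mine rather than Halmos's): the map sending (\alpha_1,\alpha_2,\alpha_3) to the coefficients of x,y,z has determinant -2, which is invertible exactly when 2\ne0 in the scalar field.

```python
from sympy import Matrix

# Rows: the coefficients of x, y, z in alpha_1(x+y) + alpha_2(x+z) + alpha_3(y+z)
M = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])
print(M.det())  # -2, nonzero precisely when the characteristic is not 2
```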

4.

Problem:

a) Under what conditions on the scalar \xi are the vectors (1+\xi,1-\xi) and (1-\xi,1+\xi) in \mathbb{C}^2 l.d.?

b) Under what conditions on the scalar \xi are the vectors (\xi,1,0),(1,\xi,1), and (0,1,\xi) in \mathbb{R}^3 l.d.?

c) What is the answer to b) for \mathbb{Q}^3?

Solution:

a) We first note that

\alpha_1\left(1+\xi,1-\xi\right)+\alpha_2\left(1-\xi,1+\xi\right)=\left(\alpha_1+\alpha_2+\xi(\alpha_1-\alpha_2),\alpha_1+\alpha_2+(\alpha_2-\alpha_1)\xi\right)

So, we are looking to see when this expression can equal zero nontrivially. Setting it equal to zero gives us

\begin{cases}\alpha_1+\alpha_2+(\alpha_1-\alpha_2)\xi=0\\\alpha_1+\alpha_2+(\alpha_2-\alpha_1)\xi=0\end{cases}

Adding the two equations gives 2(\alpha_1+\alpha_2)=0, so \alpha_2=-\alpha_1, and either equation then collapses to 2\alpha_1\xi=0. A nontrivial solution (one with \alpha_1\ne0) therefore exists precisely when \xi=0. Thus, they are l.d. precisely when \xi=0, which is precisely when the two vectors coincide.

b) We note first that

\alpha_1(\xi,1,0)+\alpha_2(1,\xi,1)+\alpha_3(0,1,\xi)=\left(\alpha_1\xi+\alpha_2,\alpha_1+\alpha_2\xi+\alpha_3,\alpha_2+\alpha_3\xi\right)

thus, if we set this equal to zero we get the following three equations

\begin{cases}\alpha_1\xi+\alpha_2=0\quad\quad\quad\text{ }(1)\\\alpha_1+\alpha_2\xi+\alpha_3=0\quad(2)\\\alpha_2+\alpha_3\xi=0\quad\quad\quad\text{ }(3)\end{cases}

We then note that

(1)-(3)=(\alpha_1-\alpha_3)\xi=0

So, if we assume that \xi\ne0 we arrive at \alpha_1=\alpha_3. Substituting \alpha_2=-\alpha_1\xi (from (1)) into (2) then gives 2\alpha_1-\alpha_1\xi^2=\alpha_1\left(2-\xi^2\right)=0. Thus if \xi^2\ne2 we conclude \alpha_1=0 and hence \alpha_1=\alpha_2=\alpha_3=0, but if \xi=\pm\sqrt{2} there is the nontrivial relation (\sqrt{2},1,0)-\sqrt{2}(1,\sqrt{2},1)+(0,1,\sqrt{2})=\bold{0} (and similarly for -\sqrt{2}). Checking \xi=0, we find that (0,1,0), (1,0,1), and (0,1,0) are linearly dependent, the first and third being equal. Hence the vectors are l.d. precisely when \xi=0 or \xi=\pm\sqrt{2}.

c) Since \pm\sqrt{2}\notin\mathbb{Q}, the case \xi^2=2 cannot occur for a rational scalar, and so in \mathbb{Q}^3 the vectors are l.d. precisely when \xi=0.
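Both parts can be double-checked with a short sympy sketch (my own, not Halmos's): the vectors are l.d. exactly when the determinant of the matrix having them as rows vanishes.

```python
from sympy import Matrix, symbols, expand, factor, solve

xi = symbols('xi')
A = Matrix([[1 + xi, 1 - xi], [1 - xi, 1 + xi]])  # part a), vectors as rows
B = Matrix([[xi, 1, 0], [1, xi, 1], [0, 1, xi]])  # part b), vectors as rows

print(expand(A.det()))     # 4*xi: l.d. iff xi = 0
print(factor(B.det()))     # xi*(xi**2 - 2): l.d. iff xi = 0 or xi = +-sqrt(2)
print(solve(B.det(), xi))  # the roots 0, -sqrt(2), sqrt(2) (ordering may vary)
```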

5.

Problem: Prove the following

a) If the vectors (\xi_1,\xi_2) and (\eta_1,\eta_2) in \mathbb{C}^2 are l.d., then \xi_1\eta_2=\xi_2\eta_1

b) Find a similar necessary condition for the l.d. of three vectors in \mathbb{C}^3.

c) Is there a set of three l.i. vectors in \mathbb{C}^3?

Proof:

a) We first note that

\alpha_1(\xi_1,\xi_2)+\alpha_2(\eta_1,\eta_2)=\left(\alpha_1\xi_1+\alpha_2\eta_1,\alpha_1\xi_2+\alpha_2\eta_2\right)

and thus we are afforded the equations

\begin{cases} \alpha_1\xi_1+\alpha_2\eta_1=0\\\alpha_1\xi_2+\alpha_2\eta_2=0\end{cases}

or in matrix form

\begin{bmatrix}\xi_1 & \eta_1\\ \xi_2 & \eta_2\end{bmatrix}\begin{bmatrix}\alpha_1\\\alpha_2\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}\quad (1)

Now, if A=\begin{bmatrix}\xi_1 & \eta_1\\ \xi_2 & \eta_2\end{bmatrix} were invertible then

(1)\implies \begin{bmatrix}\alpha_1 \\ \alpha_2 \end{bmatrix}=A^{-1} \begin{bmatrix} 0 \\ 0 \end{bmatrix} =\begin{bmatrix} 0 \\ 0 \end{bmatrix} \implies \alpha_1=\alpha_2=0

and thus the vectors would be l.i., contrary to assumption. It follows that A is not invertible, or equivalently

\det A=\xi_1\eta_2-\xi_2\eta_1=0

b) For three vectors we follow the same line of logic and note that the determinant of the matrix having the three vectors as rows must be zero.

c) Yes; for example, the standard basis vectors (0,0,1),(0,1,0), and (1,0,0).
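A short sympy sketch of the criterion (my own check), together with a concrete dependent pair:

```python
from sympy import Matrix, symbols

x1, x2, y1, y2 = symbols('xi1 xi2 eta1 eta2')
A = Matrix([[x1, y1], [x2, y2]])        # the two vectors as columns
print(A.det())                          # xi1*eta2 - xi2*eta1 (up to term order)
print(Matrix([[1, 2], [2, 4]]).rank())  # 1: (1,2), (2,4) are l.d., and 1*4 - 2*2 = 0
```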

6.

Problem:

a) Under what conditions on the scalars \xi,\eta are the vectors (1,\xi) and (1,\eta) l.d.?

b) Under what conditions on the scalars \xi,\eta, and \zeta are the vectors (1,\xi,\xi^2),(1,\eta,\eta^2), and (1,\zeta,\zeta^2) l.d. in \mathbb{C}^3?

c) Generalize to \mathbb{C}^n

Proof:

a) By the last problem we see that they are l.d. iff \eta=\xi

b) By the last problem we see they are l.d. iff (\eta-\xi)(\zeta-\xi)(\zeta-\eta)=0 (a Vandermonde determinant), that is, iff at least two of \xi,\eta,\zeta coincide

c) Following the same logic, the vectors \left(1,\xi_1,\cdots,\xi_1^{n-1}\right),\cdots,\left(1,\xi_n,\cdots,\xi_n^{n-1}\right) in \mathbb{C}^n are l.d. iff

\displaystyle \det\begin{bmatrix}1 & \xi_1 & \cdots & \xi_1^{n-1}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & \xi_n & \cdots & \xi_n^{n-1}\end{bmatrix}=\prod_{1\leqslant i<j\leqslant n}\left(\xi_j-\xi_i\right)=0

in other words, iff \xi_i=\xi_j for some i\ne j
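One can also let sympy confirm the Vandermonde factorization for n=3 (a sketch; the printed ordering and signs of the factors may differ):

```python
from sympy import Matrix, factor, symbols

xi, eta, zeta = symbols('xi eta zeta')
V = Matrix([[1, xi, xi**2],
            [1, eta, eta**2],
            [1, zeta, zeta**2]])
# A product of the pairwise differences, e.g. (eta - xi)*(zeta - xi)*(zeta - eta)
print(factor(V.det()))
```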

7.

Problem:

a) Find two bases in \mathbb{C}^4 such that the only vectors common to both are (0,0,1,1) and (1,1,0,0)

b) Find two bases in \mathbb{C}^4 that have no vectors in common, such that one of them contains the vectors (1,0,0,0) and (1,1,0,0) and the other contains the vectors (1,1,1,0) and (1,1,1,1)

Proof:

a) Consider \{(1,0,0,0)=x,(0,0,0,1)=y,(0,0,1,1)=z,(1,1,0,0)=w\}=B_1. To see that this set is l.i. we note that

\alpha_1 x+\alpha_2 y+\alpha_3 z+\alpha_4 w=\left(\alpha_1+\alpha_4,\alpha_4,\alpha_3,\alpha_2+\alpha_3\right)=(0,0,0,0)

clearly implies that \alpha_3=\alpha_4=0 (look at the third and second coordinates), and the fact that \alpha_1=\alpha_2=0 quickly follows. Also, if v=(\xi_1,\xi_2,\xi_3,\xi_4) then taking \alpha_1=\xi_1-\xi_2,\alpha_2=\xi_4-\xi_3,\alpha_3=\xi_3, and \alpha_4=\xi_2 we can readily see that \alpha_1x+\alpha_2y+\alpha_3z+\alpha_4w=v. Thus, B_1 is, in fact, a basis for \mathbb{C}^4.

Using the same process we can see that \left\{(0,1,0,0),(0,0,1,0),(0,0,1,1),(1,1,0,0)\right\}=B_2 forms a basis, and B_1\cap B_2=\{(1,1,0,0),(0,0,1,1)\}

b) One can check that B_1 from part a) and \left\{(1,1,1,0),(1,1,1,1),(0,1,1,1),(1,1,0,1)\right\} work; see the sketch below.
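A quick rank check of all three sets (a sympy sketch of my own; B3 here names the second basis of part b)):

```python
from sympy import Matrix

B1 = [(1, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1), (1, 1, 0, 0)]
B2 = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 1, 1), (1, 1, 0, 0)]
B3 = [(1, 1, 1, 0), (1, 1, 1, 1), (0, 1, 1, 1), (1, 1, 0, 1)]

for B in (B1, B2, B3):
    assert Matrix(B).rank() == 4  # four l.i. vectors: a basis of C^4
print(sorted(set(B1) & set(B2)))  # [(0, 0, 1, 1), (1, 1, 0, 0)]
print(set(B1) & set(B3))          # set(): no vectors in common
```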

8.

Problem:

a) Under what conditions on the scalar \xi do the vectors (1,1,1) and (1,\xi,\xi^2) form a basis of \mathbb{C}^3?

b) Under what conditions on the scalar \xi do the vectors (0,1,\xi),(\xi,0,1), and (\xi,1,1+\xi) form a basis of \mathbb{C}^3?

Proof:

a) They never do: two vectors span a subspace of dimension at most two, while \dim\mathbb{C}^3=3. It is still instructive to see this directly. Note that if \xi=1 the two vectors coincide, and the set is surely not l.i. Thus, we may assume that \xi\ne 1. We note then that if

\alpha_1(1,1,1)+\alpha_2(1,\xi,\xi^2)=\left(\alpha_1+\alpha_2,\alpha_1+\alpha_2\xi,\alpha_1+\alpha_2\xi^2\right)=(0,0,0)

then the three equations

\begin{cases} \alpha_1+\alpha_2=0\quad &(1)\\\alpha_1+\alpha_2\xi=0 &(2)\\\alpha_1+\alpha_2\xi^2=0 & (3)\end{cases}

hold. Namely, we see that

(1)-(2)=\alpha_2(1-\xi)=0\implies \alpha_2=0

and thus by (1) we see that \alpha_1=0. Thus, if \xi\ne 1 these vectors are l.i. But, suppose that \alpha_1,\alpha_2\in\mathbb{C} were such that \alpha_1(1,1,1)+\alpha_2(1,\xi,\xi^2)=(0,0,1), i.e.

\begin{cases} \alpha_1+\alpha_2=0\quad &(1)\\\alpha_1+\alpha_2\xi=0 &(2)\\\alpha_1+\alpha_2\xi^2=1 & (3)\end{cases}

Exactly as before, (1)-(2) forces \alpha_2=0 and then \alpha_1=0, which contradicts (3). So (0,0,1) is never in the span of the two vectors, and thus they can never be a basis.

b) Never. The third vector is the sum of the first two,

(\xi,1,1+\xi)=(0,1,\xi)+(\xi,0,1)

so the three vectors are l.d. for every \xi and hence never form a basis of \mathbb{C}^3.
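For part b), sympy confirms that the determinant vanishes identically (a sketch of mine):

```python
from sympy import Matrix, symbols

xi = symbols('xi')
A = Matrix([[0, 1, xi],
            [xi, 0, 1],
            [xi, 1, 1 + xi]])  # row 3 = row 1 + row 2
print(A.det())                 # 0, identically in xi
```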

9.

Problem: If \mathcal{X} is the set consisting of the six vectors \{(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,1,0),(0,1,0,1),(0,0,1,1)\}, find two different maximal independent subsets of \mathcal{X}.

Proof: One can check (tediously, but routinely) that the six vectors together span all of \mathbb{C}^4, so a maximal independent subset of \mathcal{X} must contain four vectors. For instance, \{(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,1,0)\} and \{(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,0,1)\} are both l.i., and since \dim\mathbb{C}^4=4 any l.i. set of four vectors is automatically maximal; these are thus two such sets. The sketch below verifies the ranks.
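A sketch verifying these claims (sympy, my own check):

```python
from sympy import Matrix

X = [(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1),
     (0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)]
print(Matrix(X).rank())  # 4: the six vectors span C^4

S1 = [X[0], X[1], X[2], X[3]]
S2 = [X[0], X[1], X[2], X[4]]
print(Matrix(S1).rank(), Matrix(S2).rank())  # 4 4: both subsets are l.i.
```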

10.

Problem: Let \mathcal{V} be a vector space. Prove that \mathcal{V} has a basis.

Proof: We can prove something even stronger: given a set of l.i. vectors I\subseteq\mathcal{V}, there is a basis \mathfrak{B} of \mathcal{V} such that I\subseteq\mathfrak{B}. To do this, let

\mathfrak{P}=\left\{L\subseteq \mathcal{V}:L\text{ is l.i. and }I\subseteq L\right\}

To prove this we first note that \left(\mathfrak{P},\subseteq\right) is a partially ordered set. Also, given some chain \mathfrak{C}\subseteq\mathfrak{P} we claim that \displaystyle U=\bigcup_{C\in\mathfrak{C}}C is an upper bound for \mathfrak{C} in \mathfrak{P}. To see that U\in\mathfrak{P} we let \{v_1,\cdots,v_m\} be an arbitrary finite subset of U (remember we're dealing with the arbitrary notion of l.i., under which a set is l.i. precisely when each of its finite subsets is). Then, by definition there exist C_1,\cdots,C_m\in\mathfrak{C} such that v_1\in C_1,\cdots, v_m\in C_m. Now, it clearly follows (from \mathfrak{C} being a chain) that

\{v_1,\cdots,v_m\}\subseteq C_1\cup\cdots\cup C_m=C_{m_0}\quad\text{for some }m_0\in\{1,\cdots,m\}

but that means that \{v_1,\cdots,v_m\} is contained within an element of \mathfrak{P}; namely, it is a subset of a set of l.i. vectors, and thus l.i. Thus, U\in\mathfrak{P}. So, invoking Zorn's lemma we see that \mathfrak{P} admits a maximal element \mathfrak{M}. We claim that \text{Span }\mathfrak{M}=\mathcal{V}. To see this, suppose not. Then, there exists some v\in\mathcal{V}-\text{Span }\mathfrak{M}. Now, let \{v_1,\cdots,v_n\} be a finite subset of \mathfrak{M}\cup\{v\}. Clearly if v\ne v_i,\text{ }i=1,\cdots,n then it is an l.i. set, and if v=v_{i_0} for some i_0, then we see that

\alpha_1 v_1+\cdots+\alpha_{i_0}v_{i_0}+\cdots+\alpha_n v_n=0\implies \alpha_{i_0}=0

since otherwise

\displaystyle v_{i_0}=v=\frac{-\alpha_1}{\alpha_{i_0}}v_1+\cdots+\frac{-\alpha_{i_0-1}}{\alpha_{i_0}}v_{i_0-1}+\frac{-\alpha_{i_0+1}}{\alpha_{i_0}}v_{i_0+1}+\cdots+\frac{-\alpha_n}{\alpha_{i_0}}v_n

which contradicts that v\notin\text{Span }\mathfrak{M}. But, \alpha_{i_0}=0 clearly implies (by the l.i. of \mathfrak{M}) that \alpha_{i}=0,\text{ }i=1,\cdots,n. Thus, \mathfrak{M}\cup\{v\} is l.i. and so belongs to \mathfrak{P}. But this contradicts the maximality of \mathfrak{M}. It follows that no such v exists, in other words

\mathcal{V}-\text{Span }\mathfrak{M}=\varnothing\implies \mathcal{V}=\text{Span }\mathfrak{M}

(since \text{Span }\mathfrak{M}\subseteq\mathcal{V} always). So, taking I=\varnothing we see that \mathcal{V} must admit a basis, namely \mathfrak{M}. \blacksquare

September 24, 2010 | Fun Problems, Halmos, Linear Algebra

Halmos Sections 2, 3, and 4


1.

Problem:

Prove that if \mathcal{V} is a vector space over the field \mathfrak{F}, then for any x,y\in\mathcal{V} and \alpha\in\mathfrak{F} the following are true:

a) 0+x=x

b) -0=0

c) \alpha0=0

d) 0x=0

e) If \alpha x=0 then either \alpha=0 or x=0

f) -x=(-1)x

g) y+(x-y)=x

Proof:

a) This follows from x+0=x and the commutativity of +

b) We merely note that 0+0=0 and so 0=-0

c) We merely note that \alpha0=\alpha(0+0)=\alpha0+\alpha0 and thus by cancellation \alpha0=0

d) We see that 0x=(0+0)x=0x+0x\implies 0x=0

e) This is identical to the similar problem in the last post.

f) We merely note that x+(-1)x=(1+(-1))x=0x=0, and thus, by the uniqueness of additive inverses, (-1)x=-x.

g) By commutativity and associativity, y+(x-y)=y+\left(x+(-y)\right)=x+\left(y+(-y)\right)=x+0=x.

2.

Problem: If p is a prime then \mathbb{Z}_p^n is a vector space over \mathbb{Z}_p. How many vectors are there in this vector space?

Proof: This is equivalent to asking how many functions there are from \{1,\cdots,n\} to \{1,\cdots,p\}, which is p^n (see the enumeration sketch below).
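A tiny enumeration sketch (mine, not Halmos's), here with p=3 and n=2:

```python
from itertools import product

p, n = 3, 2
vectors = list(product(range(p), repeat=n))  # all n-tuples with entries in Z_p
print(len(vectors), p**n)                    # 9 9
```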

3.

Problem: Let \mathcal{V} be the set of all ordered pairs of real numbers. If x=(\xi_1,\xi_2) and y=(\eta_1,\eta_2) are elements of \mathcal{V}, write x+y=(\xi_1+\eta_1,\xi_2+\eta_2), \alpha x=(\alpha\xi_1,0), 0=(0,0), and -x=(-\xi_1,-\xi_2). Is \mathcal{V}, together with these operations, a vector space?

Proof: It is not. Notice that (0,1)\ne0 and 1\ne 0, yet 1(0,1)=(1\cdot0,0)=(0,0)=0, which contradicts e) of problem 1.

4.

Problem: Sometimes a subset of a vector space is itself a vector space. Consider, for example, the vector space \mathbb{C}^3 and the subsets \mathcal{V} of \mathbb{C}^3 consisting of those vectors (\xi_1,\xi_2,\xi_3) such that

a) \xi_1 is real

b) \xi_1=0

c) Either \xi_1=0 or \xi_2=0

d) \xi_1=-\xi_2

e) \xi_1+\xi_2=1

Proof:

a) This clearly isn’t (remembering that we’re considering \mathbb{C}^3 as being a vector space over \mathbb{C}) since (1,0,0)\in\mathcal{V} but i(1,0,0)=(i,0,0)\notin\mathcal{V}

b) It suffices to show that (0,0,0)\in\mathcal{V}, x,y\in\mathcal{V}\implies x+y\in\mathcal{V}, and \alpha\in\mathbb{C},x\in\mathcal{V}\implies \alpha x\in\mathcal{V} since all the attributes of a vector space (concerning the addition and scalar multiplication) are inherited. But, all three are glaringly obvious. So yes, this is a subspace.

c) No, note that (1,0,0),(0,1,0)\in\mathcal{V} but (1,0,0)+(0,1,0)=(1,1,0)\notin\mathcal{V}

d) Clearly 0\in\mathcal{V}. Also, if x\in\mathcal{V} we have that \alpha x\in\mathcal{V} since \alpha\xi_1+\alpha\xi_2=\alpha(\xi_1+\xi_2)=0. Lastly, if x=(\xi_1,\xi_2,\xi_3),y=(\eta_1,\eta_2,\eta_3)\in\mathcal{V} we see that x+y\in\mathcal{V} since (\xi_1+\eta_1)+(\xi_2+\eta_2)=(\xi_1+\xi_2)+(\eta_1+\eta_2)=0+0=0.

e) No, consider that (1,0,0),(0,1,0)\in\mathcal{V} but (1,0,0)+(0,1,0)=(1,1,0)\notin\mathcal{V}

5.

Problem: Consider the vector space \mathcal{P} (the set of all polynomials with complex coefficients) and the subsets \mathcal{V} consisting of those vectors p(x) for which

a) \deg(p(x))=3

b) 2p(0)=p(1)

c) p(x)\geqslant0,\text{  }0\leqslant x\leqslant1

d) p(x)=p(1-x),\quad\forall x\in\mathbb{C}

Which of them are vector spaces?

Proof:

a) This is not since the zero function isn’t in it.

b) This is: the condition 2p(0)=p(1) is linear in p, so the zero polynomial satisfies it, and sums and scalar multiples of polynomials satisfying it satisfy it as well.

c) This isn’t since x\in\mathcal{V} but -x\notin\mathcal{V}

d) This is, for the same reason: the condition p(x)=p(1-x) is linear in p, so \mathcal{V} contains the zero polynomial and is closed under addition and scalar multiplication.

September 22, 2010 | Fun Problems, Halmos, Munkres, Topology, Uncategorized

Halmos Chapter One, Section 1: Fields


1.

Problem: Almost all the laws of elementary arithmetic are consequences of the axioms defining a field. Prove, in particular, that if \mathfrak{F} is a field, and if \alpha,\beta, and \gamma belong to \mathfrak{F}, then the following relations hold.

a) 0+\alpha=\alpha

b) If \alpha+\beta=\alpha+\gamma then \beta=\gamma

c) \alpha+\left(\beta-\alpha\right)=\beta

d) \alpha0=0\alpha=0

e) (-1)\alpha=-\alpha

f) (-\alpha)(-\beta)=\alpha\beta

g) If \alpha\beta=0 then either \alpha=0 or \beta=0

Proof:

a) By axiom 3 (A3) we know that \alpha+0=\alpha and by the commutativity described in A1 we conclude that 0+\alpha=\alpha+0=\alpha

b) We see that if \alpha+\beta=\alpha+\gamma then \left(\alpha+\beta\right)+-\alpha=\left(\alpha+\gamma\right)+-\alpha, which by associativity and commutativity says that \beta+(\alpha+-\alpha)=\gamma+(\alpha+-\alpha), which then implies that \beta=\beta+0=\gamma+0=\gamma.

c) We use associativity and commutativity to rewrite \alpha+\left(\beta-\alpha\right) as \beta+(\alpha+-\alpha)=\beta+0=\beta

d) By commutativity of the multiplication it suffices to note that \alpha0=\alpha(0+0)=\alpha0+\alpha0 and thus \alpha0+-\alpha0=\left(\alpha0+\alpha0\right)+-\alpha0 and  by associativity we arrive at 0=\alpha0.

e) We merely note that \alpha+(-1)\alpha=(1+-1)\alpha=0 and thus -\alpha=(-1)\alpha.

f) We use e) to say that (-\alpha)(-\beta)=(-1)\alpha(-1)\beta=(-1)(-1)\alpha\beta. Then, we notice that (-1)(-1)+(-1)=(-1)(-1+1)=0 from where it follows that -(-1)(-1)=-1 and thus (-1)(-1)=1 and the conclusion follows.

g) Suppose that \alpha,\beta\ne0. Then, since \alpha\ne0, we see that \alpha\beta=0\implies \beta=\alpha^{-1}0=0, which contradicts our choice of \beta

2.

Problem:

a) Is the set of all positive integers a field?

b) What about the set of all integers?

c) Can the answers to both these question be changed by re-defining addition or multiplication (or both)?

Proof:

a) No; for one thing there is no additive identity (0 is not a positive integer).

b) No; \mathbb{Z} does have a multiplicative identity, but the element 2 has no multiplicative inverse.

c) Yes. But before we justify this, let us first prove a (useful) lemma.

Lemma: Let \left(\mathfrak{F},+,\cdot\right) be a field with \text{card }\mathfrak{F}=\mathfrak{n}. Then, given any set F with \text{card }F=\mathfrak{n} there are operations \oplus,\odot:F\times F\to F for which \left(F,\oplus,\odot\right) is a field.

Proof: By virtue of their equal cardinalities there exists some bijection \theta:F\to\mathfrak{F}. Then, for \alpha,\beta\in F define

\alpha\oplus\beta=\theta^{-1}\left(\theta(\alpha)+\theta(\beta)\right)

and

\alpha\odot\beta=\theta^{-1}\left(\theta(\alpha)\cdot\theta\left(\beta\right)\right)

We prove that with these operations \left(F,\oplus,\odot\right) is a field. We first note that \oplus,\odot:F\times F\to F and so they are legitimate binary operations. We now begin to show that all the field axioms are satisfied

1) Addition is commutative- This is clear since

\alpha\oplus\beta=\theta^{-1}\left(\theta(\alpha)+\theta(\beta)\right)=\theta^{-1}\left(\theta(\beta)+\theta(\alpha)\right)=\beta\oplus\alpha

2) Addition is associative- This is also clear since

\alpha\oplus\left(\beta\oplus\gamma\right)=\theta^{-1}\left(\theta(\alpha)+\theta\left(\beta\oplus\gamma\right)\right)=\theta^{-1}\left(\theta(\alpha)+\theta\left(\theta^{-1}\left(\theta(\beta)+\theta(\gamma)\right)\right)\right)

which is equal to

\theta^{-1}\left(\theta(\alpha)+\left(\theta(\beta)+\theta(\gamma)\right)\right)=\theta^{-1}\left(\left(\theta(\alpha)+\theta(\beta)\right)+\theta(\gamma)\right)

which finally is equal to

\theta^{-1}\left(\theta\left(\theta^{-1}\left(\theta(\alpha)+\theta(\beta)\right)\right)+\theta(\gamma)\right)=\theta^{-1}\left(\theta\left(\alpha\oplus\beta\right)+\theta(\gamma)\right)=\left(\alpha\oplus\beta\right)\oplus\gamma

3) There exists a zero element- Let 0 be the zero element of \mathfrak{F} then \theta^{-1}(0) is clearly the zero element of F. To see this we note that

\alpha\oplus\theta^{-1}(0)=\theta^{-1}\left(\theta(\alpha)+\theta\left(\theta^{-1}\left(0\right)\right)\right)=\theta^{-1}\left(\theta(\alpha)+0\right)=\theta^{-1}\left(\theta\left(\alpha\right)\right)=\alpha

for every \alpha\in F.

4) Existence of inverse element- If \alpha\in F we note that

\alpha\oplus\theta^{-1}\left(-\theta(\alpha)\right)=\theta^{-1}\left(\theta(\alpha)+\theta\left(\theta^{-1}\left(-\theta(\alpha)\right)\right)\right)

which equals

\theta^{-1}\left(\theta(\alpha)+-\theta(\alpha)\right)=\theta^{-1}(0)

which is \theta^{-1}(0), the zero element of F

5)-8) are the analogous axioms for multiplication, whose verifications are (for the most part) exactly the same as the above.

9) Distributivity- We note that

\alpha\odot\left(\beta\oplus\gamma\right)=\theta^{-1}\left(\theta(\alpha)\cdot\theta\left(\beta\oplus\gamma\right)\right)

which equals

\theta^{-1}\left(\theta(\alpha)\cdot\theta\left(\theta^{-1}\left(\theta(\beta)+\theta(\gamma)\right)\right)\right)=\theta^{-1}\left(\theta(\alpha)\cdot\left(\theta(\beta)+\theta(\gamma)\right)\right)

from where the rest is obvious.

This completes the lemma \blacksquare

Now, we may answer the question. Since \mathbb{Q} is a field and \text{card }\mathbb{Q}=\text{card }\mathbb{N}=\text{card }\mathbb{Z}, the above lemma implies there exist additions and multiplications on \mathbb{N} and \mathbb{Z} which make them into fields. A computational sketch follows.
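Here is a minimal sketch of the lemma's transport of structure (my own illustration, not Halmos's), taking \mathbb{Z}_5 as the model field and a hypothetical five-element set F with bijection \theta:

```python
# A sketch only: Z_5 is the model field; F and theta are made up for illustration.
F = ['a', 'b', 'c', 'd', 'e']
theta = {x: i for i, x in enumerate(F)}       # a bijection F -> Z_5
theta_inv = {i: x for x, i in theta.items()}

def oplus(x, y):
    # x (+) y = theta^{-1}(theta(x) + theta(y))
    return theta_inv[(theta[x] + theta[y]) % 5]

def odot(x, y):
    # x (.) y = theta^{-1}(theta(x) * theta(y))
    return theta_inv[(theta[x] * theta[y]) % 5]

print(oplus('b', 'e'))  # 'a': 1 + 4 = 0 mod 5, so 'e' is the additive inverse of 'b'
print(odot('d', 'e'))   # 'c': 3 * 4 = 12 = 2 mod 5
```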

3.

Problem: Let m\in\mathbb{N}-\{1\} and let \mathbb{Z}_m denote the integers \text{mod }m.

a) Prove this is a field precisely when m is prime

b) What is -1 in \mathbb{Z}_5?

c) What is \tfrac{1}{3} in \mathbb{Z}_7?

Proof:

a) We appeal to the well-known fact that ax\equiv1\text{ mod }m is solvable precisely when (a,m)=1. From there we may immediately disqualify non-primes, since the number of multiplicatively invertible elements of \mathbb{Z}_m is \varphi(m), and \varphi(m)<m-1 when m is not prime. When m is prime, the key point is that every non-zero element of \mathbb{Z}_m has a multiplicative inverse. The actual work of showing the remaining axioms hold is busy work, and I've done it before.

b) It's clearly 4, since 1+4=5=0

c) It’s 5. To see this we note that 5\cdot3=15=1
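A one-line check of both answers (pow(a, -1, m) computes a modular inverse in Python 3.8+):

```python
print((-1) % 5)       # 4, so -1 = 4 in Z_5
print(pow(3, -1, 7))  # 5, so 1/3 = 5 in Z_7
print((3 * 5) % 7)    # 1, confirming it
```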

4.

Problem : Let \mathfrak{F} be a field and define

c:\mathbb{N}\to\mathfrak{F}:n\mapsto\underbrace{1+\cdots+1}_{n\text{ times}}

Show that either there is no n such that c(n)=0, or, if there is one, that the smallest such n is prime

Proof: Assume that c^{-1}\left(\{0\}\right)\ne\varnothing and p=\min c^{-1}\left(\{0\}\right). Now, suppose that p=ab where 1<a,b<p. We see then that

c(a)c(b)=(\underbrace{1+\cdots+1}_{a\text{ times}})c(b)=\underbrace{c(b)+\cdots+c(b)}_{a\text{ times}}

which upon expansion equals

\underbrace{(\underbrace{1+\cdots+1}_{b\text{ times}})+\cdots+(\underbrace{1+\cdots+1}_{b\text{ times}})}_{a\text{ times}}

which by associativity and grouping is equal to

\underbrace{1+\cdots+1}_{ab\text{ times}}=\underbrace{1+\cdots+1}_{p\text{ times}}=0

which, upon chaining these equalities together, yields

c(a)c(b)=0

but since \mathfrak{F} is a field it follows that c(a)=0 or c(b)=0, either way the minimality of p is violated.
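As an aside (a sketch of my own illustrating why the argument needs a field): in \mathbb{Z}_6, which is not a field, the smallest n with c(n)=0 is the composite 6=2\cdot3, and the factorization above produces genuine zero divisors.

```python
m = 6
c = lambda n: (n * 1) % m  # 1 + ... + 1, n times, in Z_6
print(min(n for n in range(1, 10) if c(n) == 0))  # 6, which is not prime
print((c(2) * c(3)) % m)                          # 0: c(2) and c(3) are zero divisors
```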

5.

Problem: Let \mathbb{Q}(\sqrt{2})=\left\{a+b\sqrt{2}:a,b\in\mathbb{Q}\right\}

a) Is \mathbb{Q}(\sqrt{2}) a field?

b) What if a,b are required to be integers?

Proof:

a) Yes. This is a classic yet tedious exercise; the key point is that for (a,b)\ne(0,0) we have \displaystyle \left(a+b\sqrt{2}\right)^{-1}=\frac{a-b\sqrt{2}}{a^2-2b^2}\in\mathbb{Q}(\sqrt{2}), the denominator being nonzero by the irrationality of \sqrt{2}. I will not do the rest here.

b) No. For example, consider 1+3\sqrt{2}. Then, we have that

\displaystyle \left(1+3\sqrt{2}\right)^{-1}=\frac{1-3\sqrt{2}}{1-18}=\frac{-1}{17}+\frac{3}{17}\sqrt{2}\notin\mathbb{Z}(\sqrt{2})
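A quick sympy check of this computation (a sketch, not part of the solution; the exact printed form may differ):

```python
from sympy import sqrt, expand, radsimp

x = 1 + 3*sqrt(2)
inv = radsimp(1 / x)    # rationalize the denominator
print(inv)              # equivalent to -1/17 + (3/17)*sqrt(2)
print(expand(x * inv))  # 1
```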

6.

Problem:

a) Does the set of all polynomials with integer coefficients (\mathbb{Z}[x]) form a field?

b) What about \mathbb{R}[x]?

Proof:

a) No: the polynomial x has no multiplicative inverse, since \deg(pq)=\deg p+\deg q forces the invertible elements of \mathbb{Z}[x] to be the constants \pm1.

b) No, for the same reason: the units of \mathbb{R}[x] are the nonzero constants, so x is again not invertible.

7.

Problem:

Let \mathfrak{F} be the set of all ordered pairs (a,b) of real numbers

a) Is \mathfrak{F} a field if addition and multiplication are done coordinatewise?

b) What if addition and multiplication are done as one adds and multiplies complex numbers?

Proof:

a) No. Consider that (0,1) is not the additive identity (which is (0,0)), yet it has no multiplicative inverse: (0,1)(a,b)=(0,b) can never equal the unit (1,1).

b) Yes; with these operations \mathfrak{F} is just a field isomorphic to \mathbb{C}. A sketch follows.
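A small sketch of my own contrasting the two multiplications:

```python
# Coordinatewise multiplication versus complex-style multiplication on pairs.
mul_cw = lambda u, v: (u[0] * v[0], u[1] * v[1])
mul_c = lambda u, v: (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])

print(mul_cw((0, 1), (1, 0)))  # (0, 0): a zero divisor, so part a) is not a field
print(mul_c((0, 1), (0, 1)))   # (-1, 0): i * i = -1, exactly as in C
```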

September 21, 2010 | Fun Problems, Halmos, Munkres, Topology