Abstract Nonsense

Crushing one theorem at a time

Halmos, Chapter One, Sections Eight and Nine: Dimension and Isomorphism


1.

Problem:

a) What is the dimension of \mathbb{C} considered as a real vector space?

b) Every complex vector space \mathcal{V} is intimately associated with a real vector space \mathcal{V}^{\mathbb{R}}: the space obtained from \mathcal{V} by refusing to multiply vectors of \mathcal{V} by anything but real scalars. If \dim \mathcal{V}=n, what is \dim\mathcal{V}^{\mathbb{R}}?

Proof:

a) Clearly 1 and i are l.i. in \mathbb{C}, and given any a+bi\in\mathbb{C} we have that a+bi=a\cdot 1+b\cdot i; thus \{1,i\} is a basis for \mathbb{C}. So, \dim_{\mathbb{R}}\mathbb{C}=2.

Remark: The subscript \mathbb{R} is used to indicate that \mathbb{C} is to be taken over \mathbb{R}. Note that if we consider \mathbb{C} as a vector space over \mathbb{C} then \{1\} is a basis and so \dim_{\mathbb{C}}\left(\mathbb{C}\right)=1. In fact, this clearly generalizes: if F is a field then \dim_{F}\left(F\right)=1, since \{1_F\} is a basis.
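As a quick numerical sanity check (not in Halmos; a minimal sketch of mine assuming numpy, with realify a helper name I made up), we can identify a+bi with the real pair (a,b) and verify both the independence and the spanning claims:

import numpy as np

def realify(z: complex):
    # Identify a + bi in C with the real pair (a, b).
    return np.array([z.real, z.imag])

# The images of 1 and i in R^2 are linearly independent, so dim_R(C) >= 2.
basis_images = np.column_stack([realify(1 + 0j), realify(1j)])
assert np.linalg.matrix_rank(basis_images) == 2

# Any a + bi is the real combination a*1 + b*i, so {1, i} also spans.
z = 3.0 - 2.0j
a, b = realify(z)
assert np.isclose(a * 1 + b * 1j, z)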

b) Using the above notation, the question asks: given \dim_{\mathbb{C}}\left(\mathcal{V}\right)=n, what is \dim_{\mathbb{R}}\left(\mathcal{V}^{\mathbb{R}}\right)? We intend to prove that

2\dim_{\mathbb{C}}\left(\mathcal{V}\right)=\dim_{\mathbb{R}}\left(\mathcal{V}^{\mathbb{R}}\right)

To see this it suffices to produce a basis of cardinality 2n for \mathcal{V}^{\mathbb{R}}. So, by assumption there exists a basis \{v_1,\cdots,v_n\} for \mathcal{V}; consider then (remembering that if v\in\mathcal{V} then iv\in\mathcal{V}) the set \left\{v_1,iv_1,\cdots,v_n,iv_n\right\}. We claim this is a basis for \mathcal{V}^{\mathbb{R}}. To see this, first suppose that

\alpha_1v_1+\beta_1(iv_1)+\cdots+\alpha_n v_n+\beta_n(iv_n)=\bold{0}\quad (1)

for \alpha_1,\cdots,\alpha_n,\beta_1,\cdots,\beta_n\in\mathbb{R}. Now, if we think again of \mathcal{V} we see that (1) says

\left(\alpha_1+\beta_1 i\right)v_1+\cdots+\left(\alpha_n+\beta_n i\right)v_n=\bold{0}

so then by the l.i. of \{v_1,\cdots,v_n\} in \mathcal{V} we see that this implies that

\alpha_1+\beta_1 i=\cdots=\alpha_n+\beta_n i=0

which is true iff \alpha_1=\beta_1=\cdots=\alpha_n=\beta_n=0, from where the l.i. of \{v_1,iv_1,\cdots,v_n,iv_n\} in \mathcal{V}^{\mathbb{R}} follows. Now, to see that

\text{Span }\{v_1,iv_1,\cdots,v_n,iv_n\}=\mathcal{V}^{\mathbb{R}}

note that for any v\in\mathcal{V} there exist \alpha_1+\beta_1 i,\cdots,\alpha_n+\beta_n i\in\mathbb{C} such that

(\alpha_1+\beta_1 i)v_1+\cdots+(\alpha_n+\beta_n i)v_n=v

which implies that

\alpha_1 v_1+\beta_1 (iv_1)+\cdots+\alpha_n v_n+\beta_n (iv_n)=v

and so v is a linear combination of \{v_1,iv_1,\cdots,v_n,iv_n\} with coefficients in \mathbb{R}. The conclusion follows.
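To see the argument in action, here is a hedged numerical sketch for the concrete case \mathcal{V}=\mathbb{C}^n with a randomly chosen complex basis (numpy assumed; realify is my own helper sending a complex vector to its real and imaginary parts in \mathbb{R}^{2n}): the 2n vectors \{v_1,iv_1,\cdots,v_n,iv_n\} have real rank 2n, i.e. they form a basis of \mathcal{V}^{\mathbb{R}}.

import numpy as np

rng = np.random.default_rng(0)
n = 3

# Columns of B form a (generic, hence invertible) complex basis of C^n.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.linalg.matrix_rank(B) == n

def realify(v):
    # Send a complex vector v to (Re v, Im v) in R^{2n}.
    return np.concatenate([v.real, v.imag])

# Collect the realifications of v_1, i v_1, ..., v_n, i v_n.
candidates = []
for j in range(n):
    v = B[:, j]
    candidates.append(realify(v))
    candidates.append(realify(1j * v))

M = np.column_stack(candidates)           # a 2n x 2n real matrix
assert np.linalg.matrix_rank(M) == 2 * n  # so the 2n vectors form an R-basis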

Remark: If one is reading this, there may be a feeling that some kind of mathematical legerdemain is going on here. I assure you there is not. Let me take time to clarify this. Remember that \mathcal{V} being a vector space over \mathbb{C} means there is a scalar multiplication such that

(\alpha\beta)v=\alpha(\beta v)\quad *

Or, if instead of writing the function by concatenation we think of it as the function

\cdot_2:\mathbb{C}\times\mathcal{V}\to\mathcal{V}

(the subscript of 2 will make sense later) so that \alpha v=\cdot_2(\alpha,v). Then, * says that

\cdot_2(\alpha\beta,v)=\cdot_2\left(\alpha,\cdot_2\left(\beta,v\right)\right)

Which in words says that “If we multiply \alpha and \beta (using the FIELD multiplication of \mathbb{C}) and then multiply v by the result (using SCALAR multiplication) we get the same element of \mathcal{V} that we would get if we first multiply v by \beta (using SCALAR multiplication) and then multiply the result (which lives in \mathcal{V}) by \alpha (using SCALAR multiplication)”.

Now, when we consider \mathcal{V}^{\mathbb{R}}, all we’ve really done is consider the same set of vectors \mathcal{V}, except now we’ve created a new function

\cdot_1:\mathbb{R}\times\mathcal{V}\to\mathcal{V}

such that

\cdot_1(\alpha,v)=\cdot_2(\alpha,v)

except that \alpha may ONLY be real. In other words \cdot_1 (the scalar multiplication of \mathcal{V}^{\mathbb{R}}) is just the restriction of the scalar multiplication on \mathcal{V} to \mathbb{R}\times\mathcal{V}. Put symbolically

\cdot_1=\left(\cdot_2\right)_{\mid \mathbb{R}\times\mathcal{V}}

So now, we can apply this less deceptive way of writing things to redo the l.i. part of the above proof (you can apply the same logic to the spanning part). We first rewrite the claimed basis of \mathcal{V}^{\mathbb{R}} as

\left\{v_1,\cdot_2(i,v_1),\cdots,v_n,\cdot_2(i,v_n)\right\}

where it now makes more sense, since each \cdot_2(i,v_j)\in\mathcal{V} and is thus a legitimate choice of vector. Now, to prove l.i. we suppose there are \alpha_1,\beta_1,\cdots,\alpha_n,\beta_n\in\mathbb{R} such that

\cdot_1(\alpha_1,v_1)+\cdot_1\left(\beta_1,\cdot_2\left(i,v_1\right)\right)+\cdots+\cdot_1\left(\alpha_n,v_n\right)+\cdot_1\left(\beta_n,\cdot_2\left(i,v_n\right)\right)=\bold{0}\text{ }(1)

But, for each instance of \cdot_1(\text{ },\text{ }) we note that the first slot is taken up by a real number (either \alpha_j or \beta_j) and the second slot by a vector (either v_j or \cdot_2(i,v_j)), and so by the above discussion \cdot_1 gives the same value as \cdot_2; thus (1) may be rewritten as

\cdot_2(\alpha_1,v_1)+\cdot_2\left(\beta_1,\cdot_2\left(i,v_1\right)\right)+\cdots+\cdot_2\left(\alpha_n,v_n\right)+\cdot_2\left(\beta_n,\cdot_2\left(i,v_n\right)\right)=\bold{0}\text{ }(2)

but, remembering that the domain of \cdot_2 is \mathbb{C}\times\mathcal{V}, we may apply our previous discussion (namely *) to conclude

\cdot_2\left(\beta_j,\cdot_2(i,v_j)\right)=\cdot_2\left(\beta_j i,v_j\right)

and so (using more axioms for the scalar multiplication)

\cdot_2\left(\alpha_j,v_j\right)+\cdot_2\left(\beta_j,\cdot_2\left(i,v_j\right)\right)=\cdot_2\left(\alpha_j,v_j\right)+\cdot_2\left(\beta_j i,v_j\right)=\cdot_2\left(\alpha_j+\beta_j i, v_j\right)

and so (2) may be rewritten as

\cdot_2\left(\alpha_1+\beta_1 i,v_1\right)+\cdots+\cdot_2\left(\alpha_n+\beta_n i,v_n\right)=\bold{0}

but the l.i. of \{v_1,\cdots,v_n\} in \mathcal{V} says that this is true if and only if

\alpha_1+\beta_1 i=\cdots=\alpha_n+\beta_n i=0

but by the definition of the complex numbers this is true if and only if

\alpha_1=\beta_1=\cdots=\alpha_n=\beta_n=0

And thus, remembering that this is precisely what we needed to deduce from (1), we’re done.

As you can see, the actual proof is easy, but subtleties abound in this problem. I hope I’ve clarified any misunderstandings in the above proof.
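To make the remark concrete, here is a minimal sketch of mine (numpy assumed, with the concrete choice \mathcal{V}=\mathbb{C}^2; mult1 and mult2 are hypothetical names standing for \cdot_1 and \cdot_2) showing that \cdot_1 really is just \cdot_2 with its first argument restricted to real scalars, and spot-checking the key step \cdot_2\left(\beta_j,\cdot_2(i,v_j)\right)=\cdot_2\left(\beta_j i,v_j\right):

import numpy as np

def mult2(scalar, v):
    # Scalar multiplication of the complex vector space V = C^2.
    return scalar * v

def mult1(scalar, v):
    # Scalar multiplication of V^R: mult2 restricted to real scalars.
    return mult2(complex(scalar), v)

v = np.array([1 + 2j, -3j])
beta = 2.5

# The step used to pass from (2) onward: mult2(beta, mult2(i, v)) = mult2(beta * i, v).
assert np.allclose(mult1(beta, mult2(1j, v)), mult2(beta * 1j, v))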

2.

Problem: Is the set of all real numbers a finite dimensional vector space over \mathbb{Q}?

Proof: No, it’s not. Suppose there existed \{x_1,\cdots,x_n\}\subseteq\mathbb{R} which is a basis for \mathbb{R} over \mathbb{Q}. Then, define

\varphi:\mathbb{R}\to\mathbb{Q}^n:x\mapsto (r_1,\cdots,r_n)

where r_1,\cdots,r_n are the unique rational numbers such that

\displaystyle x=\sum_{i\leqslant n}r_i x_i

Now, to see that \varphi is injective suppose that

\varphi(x)=(r_1,\cdots,r_n)=(s_1,\cdots,s_n)=\varphi(y)

then,

\displaystyle x-y=\sum_{i\leqslant n}r_i x_i-\sum_{i\leqslant n}s_i x_i=\sum_{i\leqslant n}(r_i-s_i)x_i=0

from where injectivity immediately follows. Surjectivity is also immediate: given any (r_1,\cdots,r_n)\in\mathbb{Q}^n, the real number \sum_{i\leqslant n}r_i x_i is mapped to it. (Note that it is the spanning property

\mathbb{R}=\text{Span }\{x_1,\cdots,x_n\}

that guarantees \varphi is defined on all of \mathbb{R} in the first place.)

Thus, it follows that \mathbb{Q}^n\cong\mathbb{R}, and in particular there is a bijection between the two sets. This is a contradiction, though, since \mathbb{Q}^n is countable (it is in bijection with \mathbb{N}) while \mathbb{R} is not.

Remark: In fact, it is fairly easy to show that, given a vector space \mathcal{V} with \dim_{F}\left(\mathcal{V}\right)=n, the map

\eta:\mathcal{V}\to F^n:v\mapsto (\alpha_1,\cdots,\alpha_n)

(same mapping as above) is a linear isomorphism. In other words, it’s bijective and

\eta\left(\alpha x+\beta y\right)=\alpha\eta(x)+\beta\eta(y)
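For concreteness, here is a hedged sketch of the coordinate map \eta for the specific case \mathcal{V}=\mathbb{R}^3 over F=\mathbb{R} (numpy assumed; the basis B and the name eta are my own choices): \eta(v) is obtained by solving B\eta(v)=v, and the asserts spot-check bijectivity (B invertible) and linearity.

import numpy as np

# Columns of B: an (invertible) basis of R^3, so eta is bijective.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(B) == 3

def eta(v):
    # Coordinates of v with respect to the basis given by the columns of B.
    return np.linalg.solve(B, v)

x, y = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.0, 4.0])
a, b = 2.0, -3.0
assert np.allclose(eta(a * x + b * y), a * eta(x) + b * eta(y))  # linearity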

3.

Problem: How many vectors are there in an n-dimensional vector space over the field \mathbb{Z}_p?

Proof: Clearly, by what was said in the last problem, such a space is isomorphic to \mathbb{Z}_p^n; since there are p choices for each of the n coordinates, the answer is p^n.
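A brute-force check of this count for small p and n (a sketch of mine, not part of the solution): enumerating the n-tuples over \mathbb{Z}_p directly gives p^n elements.

from itertools import product

def count_vectors(p, n):
    # Enumerate the n-tuples over Z_p explicitly and count them.
    return sum(1 for _ in product(range(p), repeat=n))

for p, n in [(2, 3), (3, 2), (5, 4)]:
    assert count_vectors(p, n) == p ** n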

4.

Problem: Discuss the following assertion: if two rational vector spaces have the same cardinal number then they are isomorphic.

Proof: This is completely ridiculous. Note that

\text{card }\mathbb{Q}^m=\aleph_0=\text{card }\mathbb{Q}^n,\text{ }n,m\in\mathbb{N}

yet

\mathbb{Q}^m\not\cong\mathbb{Q}^n,\text{ }n\ne m

since

\dim_{\mathbb{Q}}\left(\mathbb{Q}^m\right)=m\ne n=\dim_{\mathbb{Q}}\left(\mathbb{Q}^n\right)

and since dimension is invariant under linear isomorphism the conclusion follows.
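As a small illustration of the last point (my own sketch, using the coordinate identification from Problem 2 and numpy): any linear map \mathbb{Q}^m\to\mathbb{Q}^n is given by an n\times m matrix, whose rank is at most \min(m,n)<m when m>n, so it cannot be injective, let alone an isomorphism.

import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 2

# An arbitrary n x m integer matrix stands in for a Q-linear map Q^m -> Q^n.
A = rng.integers(-5, 6, size=(n, m)).astype(float)
assert np.linalg.matrix_rank(A) <= min(m, n) < m  # rank too small to be injective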
