Abstract Nonsense

Crushing one theorem at a time

Halmos Sections 21 and 22: Quotient Spaces and Dimension of a Quotient Space


Point of post: In this post I will do the problems from sections 21 and 22 in Halmos. There are some quite interesting ones, with a definite algebraic slant.

1.

Problem: Consider the quotient spaces obtained by reducing the space \mathbb{C}[x] modulo various subspaces. If \mathscr{M}=\mathscr{P}_n, the subspace of all polynomials of degree at most n, is \mathbb{C}[x]/\mathscr{P}_n finite-dimensional? What if \mathscr{M} is the set of all even polynomials? What if \mathscr{M} consists of all polynomials divisible by x^n?

Proof: When \mathscr{M}=\mathscr{P}_n then \mathbb{C}[x]/\mathscr{M} is not finite-dimensional. To see this, fix m_0 and consider x^{n+1}+\mathscr{M},\cdots,x^{n+m_0+1}+\mathscr{M}. Suppose that

\displaystyle \sum_{j=1}^{m_0+1}\alpha_j\left(x^{n+j}+\mathscr{M}\right)=\mathscr{M}

but, by definition this implies that

\displaystyle \sum_{j=1}^{m_0+1}\alpha_jx^{n+j}+\mathscr{M}=\mathscr{M}

But, this is the same as saying that

\displaystyle \sum_{j=1}^{m_0+1}\alpha_jx^{n+j}\in\mathscr{M}

which is clearly impossible unless \alpha_1=\cdots=\alpha_{m_0+1}=0, since a nonzero polynomial all of whose terms have degree greater than n cannot lie in \mathscr{P}_n. Thus, \left\{x^{n+1}+\mathscr{M},\cdots,x^{n+m_0+1}+\mathscr{M}\right\} is a set of m_0+1 linearly independent vectors in \mathbb{C}[x]/\mathscr{M} and so it can't be m_0-dimensional. Since m_0 was arbitrary, it follows that \dim_\mathbb{C}\mathbb{C}[x]/\mathscr{M} cannot equal m_0 for any m_0\in\mathbb{N}, and thus \mathbb{C}[x]/\mathscr{M} is not finite-dimensional.
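
As a quick aside of my own (not part of Halmos's problem): a coset p+\mathscr{M} here is determined by the coefficients of p in degrees above n, and under that identification the cosets x^{n+1}+\mathscr{M},\cdots,x^{n+m}+\mathscr{M} become distinct standard coordinate vectors. Here is a minimal Python/sympy sketch of that bookkeeping for small n and m; the helper coset_coords is an ad hoc name of mine.

import sympy as sp

x = sp.symbols('x')

def coset_coords(p, n, top_degree):
    # the coefficients of p in degrees n+1, ..., top_degree determine the coset p + P_n
    p = sp.expand(p)
    return [p.coeff(x, d) for d in range(n + 1, top_degree + 1)]

n, m = 3, 5                                      # small sample values
rows = [coset_coords(x**(n + j), n, n + m) for j in range(1, m + 1)]
print(sp.Matrix(rows).rank() == m)               # True: the m cosets are linearly independent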

Now, if \mathscr{M} is the set of all even polynomials, then since \mathbb{C}[x]=\mathscr{M}\oplus\mathscr{N}, where \mathscr{N} is the set of all odd polynomials, we know that \mathbb{C}[x]/\mathscr{M}\cong\mathscr{N}; and since \mathscr{N} isn't finite-dimensional (consider x,x^3,x^5,\cdots) we may conclude that \mathbb{C}[x]/\mathscr{M} is not finite-dimensional either.
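
The decomposition used here is the usual even/odd splitting p(x)=\frac{p(x)+p(-x)}{2}+\frac{p(x)-p(-x)}{2}, which is unique since a polynomial that is both even and odd must be zero. A tiny sympy computation of the splitting, again just an illustrative aside of mine:

import sympy as sp

x = sp.symbols('x')
p = 4*x**5 + x**4 - 3*x**2 + 2*x + 7

even = sp.expand((p + p.subs(x, -x)) / 2)        # the even part of p
odd = sp.expand((p - p.subs(x, -x)) / 2)         # the odd part of p
print(even)                                      # x**4 - 3*x**2 + 7
print(odd)                                       # 4*x**5 + 2*x
print(sp.expand(even + odd - p) == 0)            # True: p = even + odd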

Lastly, we claim that if \mathscr{M} is the set of all polynomials divisible by x^n, then \left\{1+\mathscr{M},\cdots,x^{n-1}+\mathscr{M}\right\} is a basis for \mathbb{C}[x]/\mathscr{M}. To see this first suppose that

\displaystyle \sum_{j=0}^{n-1}\alpha_j \left(x^j+\mathscr{M}\right)=\sum_{j=0}^{n-1}\alpha_j x^j+\mathscr{M}=\mathscr{M}

then, by definition

\displaystyle \sum_{j=0}^{n-1}\alpha_j x^j\in\mathscr{M}

which is evidently impossible unless \alpha_0=\cdots=\alpha_{n-1}=0, since a nonzero polynomial of degree less than n cannot be divisible by x^n. Next, let p(x)+\mathscr{M}\in\mathbb{C}[x]/\mathscr{M} be arbitrary. We know that \displaystyle p(x)=\sum_{j=0}^{m}\alpha_j x^j for some m\in\mathbb{N}. If m\leqslant n-1 the result's clear, and if not we see that

\displaystyle \begin{aligned}p(x)+\mathscr{M} &= \sum_{j=0}^{m}\alpha_j x^j+\mathscr{M}\\ &= \left(\sum_{j=0}^{n-1}\alpha_j x^j+\sum_{j=n}^{m}\alpha_j x^j\right)+\mathscr{M}\\ &= \bigg(\sum_{j=0}^{n-1}\alpha_j x^j+\mathscr{M}\bigg)+\bigg(\overbrace{\sum_{j=n}^{m}\alpha_j {x^j}}^{\text{in }\mathscr{M}}+\mathscr{M}\bigg)\\ &=\sum_{j=0}^{n-1}\alpha_j x^j+\mathscr{M}\end{aligned}

from which it follows that \text{span }\left\{1+\mathscr{M},\cdots,x^{n-1}+\mathscr{M}\right\}=\mathbb{C}[x]/\mathscr{M}. Thus, \dim_{\mathbb{C}}\mathbb{C}[x]/\mathscr{M}=n.
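
Concretely, passing to \mathbb{C}[x]/\mathscr{M} here amounts to replacing a polynomial by its remainder on division by x^n, which is exactly what the computation above does. A short, purely illustrative Python/sympy check of mine:

import sympy as sp

x = sp.symbols('x')
n = 4
p = 7*x**6 - 2*x**4 + 3*x**3 + x + 5

r = sp.rem(p, x**n, x)                 # the degree < n representative of p + M
print(r)                               # 3*x**3 + x + 5
print(sp.degree(r, x) < n)             # True
print(sp.rem(p - r, x**n, x) == 0)     # True: p - r is divisible by x^n, i.e. p - r is in M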

2.

Problem: If \mathscr{S} and \mathscr{J} are arbitrary subsets of a vector space (not necessarily cosets of a subspace), there is nothing to stop us from defining \mathscr{S}+\mathscr{J} just as addition was defined for cosets, and similarly, we may define \alpha\mathscr{S}. If the class of all subsets of a vector space is endowed with these “linear operations,” which of the axioms of a vector space are satisfied?

Proof: The following uses the definition of a vector space as given on page three of Halmos's book:

A)

1. This axiom (commutativity of addition) holds, since

\mathscr{S}+\mathscr{J}=\left\{s+j:s\in\mathscr{S}\text{ and }j\in\mathscr{J}\right\}=\left\{j+s:j\in\mathscr{J}\text{ and }s\in\mathscr{S}\right\}=\mathscr{J}+\mathscr{S}

2. This is also true since

\begin{aligned}\mathscr{S}+\left(\mathscr{J}+\mathscr{Q}\right) &=\left\{s+k:s\in\mathscr{S}\text{ and }k\in\mathscr{J}+\mathscr{Q}\right\}\\ &= \left\{s+(j+q):s\in\mathscr{S}\text{ and }j\in\mathscr{J}\text{ and }q\in\mathscr{Q}\right\}\\ &= \left\{(s+j)+q:s\in\mathscr{S}\text{ and }j\in\mathscr{J}\text{ and }q\in\mathscr{Q}\right\}\\ &=\left\{k+q:k\in\mathscr{S}+\mathscr{J}\text{ and }q\in\mathscr{Q}\right\}\\ &= \left(\mathscr{S}+\mathscr{J}\right)+\mathscr{Q}\end{aligned}

3. This is true, with \{\bold{0}\} serving as the zero element, since \{\bold{0}\}+\mathscr{S}=\mathscr{S}+\{\bold{0}\}=\left\{\bold{0}+s:s\in\mathscr{S}\right\}=\left\{s:s\in\mathscr{S}\right\}=\mathscr{S}

4. This is not true. Let \mathbb{R} be a vector space over itself and consider the subset \mathbb{Z}. Since 0\in\mathbb{Z}, we have \mathscr{J}\subseteq\mathbb{Z}+\mathscr{J} for every subset \mathscr{J}. So if \mathbb{Z}+\mathscr{J}=\{\bold{0}\} then \mathscr{J}\subseteq\{\bold{0}\}, forcing \mathscr{J}=\{\bold{0}\}; but then \mathbb{Z}+\mathscr{J}=\mathbb{Z}\ne\{\bold{0}\}, a contradiction. Thus \mathbb{Z} has no additive inverse.

B)

1. This is true since \alpha\left(\beta\mathscr{J}\right)=\alpha\left\{\beta j:j\in\mathscr{J}\right\}=\left\{\alpha(\beta j):j\in\mathscr{J}\right\}=\left\{(\alpha\beta)j:j\in\mathscr{J}\right\}=(\alpha\beta)\mathscr{J}

2. This is also true since 1\mathscr{J}=\left\{1j:j\in\mathscr{J}\right\}=\left\{j:j\in\mathscr{J}\right\}=\mathscr{J}

3. This is true since

\begin{aligned}\alpha\left(\mathscr{S}+\mathscr{J}\right) &=\alpha\left\{s+j:s\in\mathscr{S}\text{ and }j\in\mathscr{J}\right\}\\ &=\left\{\alpha(s+j):s\in\mathscr{S}\text{ and }j\in\mathscr{J}\right\}\\ &=\left\{\alpha s+\alpha j:s\in\mathscr{S}\text{ and }j\in\mathscr{J}\right\}\\ &=\left\{k+\ell:k\in\alpha\mathscr{S}\text{ and }\ell\in\alpha\mathscr{J}\right\}\\ &=\alpha\mathscr{S}+\alpha\mathscr{J}\end{aligned}

4. This one fails in general. We always have (\alpha+\beta)\mathscr{S}\subseteq\alpha\mathscr{S}+\beta\mathscr{S}, since (\alpha+\beta)s=\alpha s+\beta s, but the reverse inclusion can fail because \alpha\mathscr{S}+\beta\mathscr{S} allows different elements of \mathscr{S} in the two summands. For example, in \mathbb{R} take \mathscr{S}=\{0,1\} and \alpha=\beta=1; then

(\alpha+\beta)\mathscr{S}=\{0,2\}\quad\text{while}\quad\alpha\mathscr{S}+\beta\mathscr{S}=\{0,1,2\}
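
These checks are easy to play with on a computer. Below is a small Python sketch of my own (the helpers add and scale are ad hoc names) implementing the subset operations for finite subsets of \mathbb{R} and spot-checking a few of the axioms above, including the failure of (\alpha+\beta)\mathscr{S}=\alpha\mathscr{S}+\beta\mathscr{S}:

from fractions import Fraction

def add(S, T):
    # S + T = { s + t : s in S, t in T }
    return frozenset(s + t for s in S for t in T)

def scale(a, S):
    # a S = { a*s : s in S }
    return frozenset(a * s for s in S)

F = Fraction
S = frozenset({F(0), F(1)})
T = frozenset({F(2)})
Q = frozenset({F(-1), F(3)})

print(add(S, T) == add(T, S))                        # A1, commutativity: True
print(add(S, add(T, Q)) == add(add(S, T), Q))        # A2, associativity: True
print(add(frozenset({F(0)}), S) == S)                # A3, {0} acts as zero: True
print(scale(F(2), S) == add(S, S))                   # B4 with alpha = beta = 1: False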

Problem:

a) Suppose that \mathscr{M} is a subspace of a vector space \mathscr{V}. Two vectors x and y of \mathscr{V} are congruent modulo \mathscr{M}, denoted x\equiv y\text{ mod }\mathscr{M}, if x-y\in\mathscr{M}. Prove that \equiv\text{ mod }\mathscr{M} is an equivalence relation on \mathscr{V}.

b) If \alpha_1,\alpha_2\in F and if x_1,y_1,x_2,y_2\in\mathscr{V} are such that x_1\equiv y_1\text{ mod }\mathscr{M} and x_2\equiv y_2\text{ mod }\mathscr{M} prove that \alpha_1x_1+\alpha_2x_2\equiv \alpha_1y_1+\alpha_2y_2\text{ mod }\mathscr{M}.

c) Prove that the equivalence classes of \equiv\text{ mod }\mathscr{M} are precisely the cosets of \mathscr{M}.

Proof:

a) Clearly x\equiv x\text{ mod }\mathscr{M} since x-x=\bold{0}\in\mathscr{M}. Also, if x\equiv y\text{ mod }\mathscr{M} then x-y\in\mathscr{M} and thus -1(x-y)=y-x\in\mathscr{M}, so that y\equiv x\text{ mod }\mathscr{M}. Lastly, if x\equiv y\text{ mod }\mathscr{M} and y\equiv z\text{ mod }\mathscr{M} then x-y\in\mathscr{M} and y-z\in\mathscr{M}, and thus (x-y)+(y-z)=x-z\in\mathscr{M}, so that x\equiv z\text{ mod }\mathscr{M}.

b) Clearly this is true since x_1-y_1\in\mathscr{M} and x_2-y_2\in\mathscr{M} implies that \alpha_1(x_1-y_1)+\alpha_2(x_2-y_2)=(\alpha_1x_1+\alpha_2x_2)-(\alpha_1y_1+\alpha_2y_2)\in\mathscr{M}, so that \alpha_1x_1+\alpha_2x_2\equiv\alpha_1y_1+\alpha_2y_2\text{ mod }\mathscr{M}.

c) Let [x]=\left\{y\in\mathscr{V}:x\equiv y\text{ mod }\mathscr{M}\right\}; we claim that [x]=x+\mathscr{M}. To see this, let y\in[x]; then x-y\in\mathscr{M}, so that x-y=m for some m\in\mathscr{M}. Thus, y=x-m=x+(-m), and since -m\in\mathscr{M} we have y\in x+\mathscr{M}. Conversely, if y\in x+\mathscr{M} then y=x+m for some m\in\mathscr{M}, and so x-y=-m\in\mathscr{M}, so that x\equiv y\text{ mod }\mathscr{M} and thus y\in[x].
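
As a toy illustration of my own: in the vector space \mathbb{F}_2^2 with the subspace \mathscr{M}=\{(0,0),(1,0)\}, a few lines of Python confirm that the equivalence classes of \equiv\text{ mod }\mathscr{M} coincide with the cosets x+\mathscr{M}:

from itertools import product

V = list(product((0, 1), repeat=2))              # the vector space F_2^2
M = {(0, 0), (1, 0)}                             # a subspace of V

def sub(u, v):
    return tuple((a - b) % 2 for a, b in zip(u, v))

def coset(u):
    # u + M
    return frozenset(tuple((a + b) % 2 for a, b in zip(u, m)) for m in M)

def eq_class(u):
    # { v in V : u = v mod M }
    return frozenset(v for v in V if sub(u, v) in M)

print(all(eq_class(u) == coset(u) for u in V))   # True: equivalence classes are cosets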

3.

Problem:

a) Suppose that \mathscr{M} is a subspace of a vector space \mathscr{V}. Prove that \text{Hom}\left(\mathscr{V}/\mathscr{M},F\right)\cong\text{Ann }\mathscr{M} without appealing to dimensionality.

b) Show that \text{Hom}\left(\mathscr{V},F\right)/\left(\text{Ann }\mathscr{M}\right)\cong\text{Hom}\left(\mathscr{M},F\right) without appealing to dimensionality.

Proof:

a) Let \{x_1,\cdots,x_m\} be a basis for \mathscr{M} and extend it to a basis \{x_1,\cdots,x_m,y_1,\cdots,y_k\} for \mathscr{V}. Then, we know that \{x_1,\cdots,x_m,y_1,\cdots,y_k\} lifts to the dual basis \{\varphi_{x_1},\cdots,\varphi_{x_m},\varphi_{y_1},\cdots,\varphi_{y_k}\} for \text{Hom}\left(\mathscr{V},F\right), with, in particular, \{\varphi_{y_1},\cdots,\varphi_{y_k}\} being a basis for \text{Ann }\mathscr{M}.

But, with equal verity we may start with \{x_1,\cdots,x_m,y_1,\cdots,y_k\} and produce the basis \left\{y_1+\mathscr{M},\cdots,y_k+\mathscr{M}\right\} for \mathscr{V}/\mathscr{M}. And from there we may lift to the basis \left\{\varphi_{y_1+\mathscr{M}},\cdots,\varphi_{y_k+\mathscr{M}}\right\} for \text{Hom}\left(\mathscr{V}/\mathscr{M},F\right).

From here the course of action is clear, namely let

f:\{\varphi_{y_1},\cdots,\varphi_{y_k}\}\to\left\{\varphi_{y_1+\mathscr{M}},\cdots,\varphi_{y_k+\mathscr{M}}\right\}:\varphi_{y_\ell}\mapsto\varphi_{y_\ell+\mathscr{M}}

and extend it to the map

f^{*}:\text{Ann }\mathscr{M}\to\text{Hom}\left(\mathscr{V}/\mathscr{M},F\right)

by linearity. Clearly, by construction, f^{*} is linear. To see that it is injective, suppose that

\displaystyle f^{*}\left(\sum_{j=1}^{k}\alpha_j \varphi_{y_j}\right)=\sum_{j=1}^{k}\alpha_j \varphi_{y_j+\mathscr{M}}=\bold{0}

then, since \left\{\varphi_{y_1+\mathscr{M}},\cdots,\varphi_{y_k+\mathscr{M}}\right\} is linearly independent, we see that \alpha_1=\cdots=\alpha_k=0, and thus \sum_{j=1}^{k}\alpha_j\varphi_{y_j}=\bold{0}. Also, f^{*} is surjective since if \displaystyle \sum_{j=1}^{k}\alpha_j\varphi_{y_j+\mathscr{M}} is in \text{Hom}\left(\mathscr{V}/\mathscr{M},F\right) we have that \displaystyle \sum_{j=1}^{k}\alpha_j \varphi_{y_j}\in\text{Ann }\mathscr{M} and \displaystyle f^{*}\left(\sum_{j=1}^{k}\alpha_j \varphi_{y_j}\right)=\sum_{j=1}^{k}\alpha_j\varphi_{y_j+\mathscr{M}}. Thus f^{*} is an isomorphism and \text{Hom}\left(\mathscr{V}/\mathscr{M},F\right)\cong\text{Ann }\mathscr{M}.
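
As a numerical illustration of part a) (a sketch of my own, not Halmos's argument): take \mathscr{V}=\mathbb{Q}^4, let \mathscr{M} be spanned by the rows of a 2\times 4 matrix B, and identify a functional with its column of coefficients; then membership in \text{Ann }\mathscr{M} is exactly the condition B\varphi=0, and \dim\text{Ann }\mathscr{M}=\dim\mathscr{V}-\dim\mathscr{M}=\dim\mathscr{V}/\mathscr{M}:

import sympy as sp

# rows of B form a basis {x_1, x_2} of M inside V = Q^4
B = sp.Matrix([[1, 0, 2, 0],
               [0, 1, 1, 1]])

# a functional phi (as a column of coefficients) kills M exactly when B*phi = 0,
# so Ann M is the nullspace of B
ann_basis = B.nullspace()
print(len(ann_basis))                                        # 2 = dim V - dim M = dim(V/M)
print(all((B * phi) == sp.zeros(2, 1) for phi in ann_basis)) # True: each one annihilates M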

b) Consider the mapping

f:\text{Hom}\left(\mathscr{V},F\right)/\left(\text{Ann }\mathscr{M}\right)\to\text{Hom}\left(\mathscr{M},F\right):\varphi+\text{Ann }\mathscr{M}\mapsto \varphi_{\mid\mathscr{M}}

This is well defined: if \varphi_1+\text{Ann }\mathscr{M}=\varphi_2+\text{Ann }\mathscr{M} then \varphi_1-\varphi_2\in\text{Ann }\mathscr{M}, so \varphi_1 and \varphi_2 agree on \mathscr{M} and hence \left(\varphi_1\right)_{\mid\mathscr{M}}=\left(\varphi_2\right)_{\mid\mathscr{M}}. Now, linearity is clear since

\begin{aligned} f\left(\left(\varphi_1+\text{Ann }\mathscr{M}\right)+\left(\varphi_2+\text{Ann }\mathscr{M}\right)\right) &= f\left((\varphi_1+\varphi_2)+\text{Ann }\mathscr{M}\right)\\ &=\left(\varphi_1+\varphi_2\right)_{\mid\mathscr{M}}\\ &= \left(\varphi_1\right)_{\mid\mathscr{M}}+\left(\varphi_2\right)_{\mid\mathscr{M}}\\ &= f\left(\varphi_1+\text{Ann }\mathscr{M}\right)+f\left(\varphi_2+\text{Ann }\mathscr{M}\right)\end{aligned}

and

f\left(\alpha\left(\varphi+\text{Ann }\mathscr{M}\right)\right)=f\left((\alpha\varphi)+\text{Ann }\mathscr{M}\right)=\left(\alpha\varphi\right)_{\mid\mathscr{M}}=\alpha\left(\varphi\right)_{\mid\mathscr{M}}=\alpha f\left(\varphi+\text{Ann }\mathscr{M}\right)

Now, to prove injectivity suppose that \varphi_1+\text{Ann }\mathscr{M}\ne\varphi_2+\text{Ann }\mathscr{M}. If we had \varphi_1(m)=\varphi_2(m) for every m\in\mathscr{M}, then \varphi_1-\varphi_2\in\text{Ann }\mathscr{M}, which is exactly to say that \varphi_1+\text{Ann }\mathscr{M}=\varphi_2+\text{Ann }\mathscr{M}, contradicting our assumption. Thus there exists some m_0\in\mathscr{M} with \varphi_1(m_0)\ne\varphi_2(m_0), and so (\varphi_1)_{\mid\mathscr{M}}\ne(\varphi_2)_{\mid\mathscr{M}}.

Lastly, to prove surjectivity let \mathscr{N}\subseteq\mathscr{V} be a subspace such that \mathscr{V}=\mathscr{M}\oplus\mathscr{N}. Then, we may extend any \varphi\in\text{Hom}\left(\mathscr{M},F\right) to a linear map \varphi^*:\mathscr{V}\to F by defining \varphi^*(x)=\varphi(m), where x=m+n is the unique representation of x as the sum of an element m\in\mathscr{M} and an element n\in\mathscr{N}. Clearly then \varphi^*+\text{Ann }\mathscr{M}\in\text{Hom}\left(\mathscr{V},F\right)/\left(\text{Ann }\mathscr{M}\right) and \varphi^*+\text{Ann }\mathscr{M}\overset{f}{\longmapsto}\varphi. Thus, f is an isomorphism.
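
The surjectivity step can be made concrete with a small sketch of my own (the names M_basis, N_basis, and phi_star are ad hoc): in \mathbb{Q}^4, extend a functional given on a basis of \mathscr{M} by writing each vector in a combined basis of \mathscr{M}\oplus\mathscr{N} and discarding the \mathscr{N} part; restricting back to \mathscr{M} recovers the original values:

import sympy as sp

M_basis = [sp.Matrix([1, 0, 2, 0]), sp.Matrix([0, 1, 1, 1])]   # a basis of M in Q^4
N_basis = [sp.Matrix([0, 0, 1, 0]), sp.Matrix([0, 0, 0, 1])]   # a basis of a complement N

phi_on_M = {0: sp.Rational(3), 1: sp.Rational(-1)}             # phi, given by its values on M's basis

P = sp.Matrix.hstack(*(M_basis + N_basis))                     # columns: the combined basis of M (+) N

def phi_star(v):
    # write v = m + n in the combined basis and return phi(m), ignoring the N part
    c = P.solve(v)
    return sum(phi_on_M[i] * c[i] for i in range(len(M_basis)))

print([phi_star(v) for v in M_basis])                          # [3, -1]: restricting back recovers phi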


October 18, 2010 - Posted by | Fun Problems, Halmos, Linear Algebra

7 Comments

  1. I would like to thank you for posting these nice proofs.
    Do you have also a proof for problem 2 section 22? The number 2 from this post is actually problem 3 from the same section 22.
    Thank you in advance! Any help will be appreciated.

    Comment by Kira | November 19, 2010

    • There you go. This is the second time I’ve accidentally skipped writing up a solution. If you find any more let me know!

      Comment by drexel28 | November 19, 2010

  2. Please help me with problem 2 from section 22. Thank you very much!

    Comment by Kira | November 19, 2010

  3. I would like to thank you for posting these nice proofs.
    Do you have also a proof for problem 2 section 22? The number 2 from this post is actually problem 3 from the same section 22.
    Thank you in advance! Any help will be appreciated.

    Comment by Kira | November 19, 2010

    • I’m not sure what you mean. I just did the problem I missed, above.

      Comment by drexel28 | November 21, 2010

      • Thank you! I really appreciate your great work!

        Comment by Kira | November 21, 2010

      • You’re quite welcome!

        Comment by drexel28 | November 22, 2010

