Abstract Nonsense

Crushing one theorem at a time

Halmos Chapter One, Sections 15, 16, and 17: Dual Bases, Reflexivity, and Annihilators (Part I)


Note: For a vector space \mathcal{V} over a field F I will freely switch between the notations \mathcal{V}^{*} and \text{Hom}\left(\mathcal{V},F\right) for the dual space of \mathcal{V}, depending on which fits better.

1.

Problem: Define a non-zero functional \varphi:\mathbb{C}^3\to\mathbb{C} such that if x_1=(1,1,1) and x_2=(1,1,-1), then [x_1,\varphi]=[x_2,\varphi]=0.

Proof: We note that x_1=e_1+e_2+e_3 and x_2=e_1+e_2-e_3 where e_j=(\delta_{1,j},\delta_{2,j},\delta_{3,j}) (here \delta_{ij} is the Kronecker delta symbol, i.e. \{e_1,e_2,e_3\} is the standard basis for \mathbb{C}^3 over \mathbb{C}). Then, it suffices to define

[e_1,\varphi]=-1,[e_2,\varphi]=1\text{ and }[e_3,\varphi]=0

since there is precisely one linear functional \varphi taking on the above values. So if \varphi is that functional we see that

\varphi(x_1)=\varphi\left(e_1+e_2+e_3\right)=\varphi(e_1)+\varphi(e_2)+\varphi(e_3)=-1+1+0=0

and

\varphi(x_2)=\varphi\left(e_1+e_2-e_3\right)=\varphi(e_1)+\varphi(e_2)-\varphi(e_3)=-1+1-0=0

as desired.
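
As a quick sanity check (not from Halmos, just a numerical sketch with names of my choosing), the functional defined above is \varphi(\zeta_1,\zeta_2,\zeta_3)=-\zeta_1+\zeta_2, and a few lines of Python/numpy confirm that it kills both x_1 and x_2:

import numpy as np
phi = np.array([-1, 1, 0])   # coefficient vector of phi(z1, z2, z3) = -z1 + z2 + 0*z3
x1 = np.array([1, 1, 1])
x2 = np.array([1, 1, -1])
print(phi @ x1, phi @ x2)    # prints "0 0", i.e. [x_1, phi] = [x_2, phi] = 0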

2.

Problem: The vectors x_1=(1,1,1), x_2=(1,1,-1), and x_3=(1,-1,-1) form a basis for \mathbb{C}^3 over \mathbb{C}. If \{\varphi_1,\varphi_2,\varphi_3\}\subseteq\text{Hom}\left(\mathbb{C}^3,\mathbb{C}\right) is the associated dual basis and if x=(0,1,0) find [x,\varphi_1], [x,\varphi_2], and [x,\varphi_3].

Proof: Recall that for a basis \{x_1,\cdots,x_n\} we define the dual basis \{\varphi_1,\cdots,\varphi_n\} to be the unique functionals \varphi_1,\cdots,\varphi_n in the dual space such that [x_i,\varphi_j]=\delta_{ij}. From this we can easily see that if v=\alpha_1x_1+\cdots+\alpha_n x_n then \varphi_j(v)=\alpha_j. Thus, noticing that

x=0x_1+\frac{1}{2}x_2-\frac{1}{2}x_3

we conclude that

[x,\varphi_1]=0, [x,\varphi_2]=\frac{1}{2},\text{ and }[x,\varphi_3]=-\frac{1}{2}
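
Since [x,\varphi_j] is just the j-th coordinate of x in the basis \{x_1,x_2,x_3\}, the same numbers can be found mechanically by solving a 3\times 3 linear system. A minimal numpy sketch (the variable names are mine):

import numpy as np
B = np.array([[1, 1, 1], [1, 1, -1], [1, -1, -1]], dtype=float).T  # columns are x_1, x_2, x_3
x = np.array([0.0, 1.0, 0.0])
print(np.linalg.solve(B, x))   # [ 0.   0.5 -0.5], matching [x,phi_1], [x,phi_2], [x,phi_3]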

3.

Problem: Prove that if \varphi\in\mathcal{V}^{*} where \mathcal{V} is an n-dimensional vector space over F, then the set of all vectors v\in\mathcal{V} for which [v,\varphi]=0 is a subspace of \mathcal{V}; what is the dimension of that subspace?

Proof: For the sake of notation convenience, define \varphi^{-1}(\{0\})=\ker\varphi. Then, if x,y\in\ker\varphi and \alpha,\beta\in F we see that

\varphi(\alpha x+\beta y)=\alpha\varphi(x)+\beta\varphi(y)=\alpha\cdot 0+\beta\cdot 0=0

and thus \alpha x+\beta y\in\ker\varphi from where the fact that \ker\varphi is a subspace follows.

Now, for the second part we claim something stronger. Namely, either \varphi=\bold{0} or for any x_0\notin\ker\varphi we have that

\mathcal{V}=\text{Span }\{x_0\}\oplus\ker\varphi

where the symbol \oplus merely means that

\mathcal{V}=\text{Span }\{x_0\}+\ker\varphi\quad (1)\quad\text{ and }\quad\text{Span }\{x_0\}\cap\ker\varphi=\{\bold{0}\}\quad (2)

So, to prove this we first prove (2). This follows since

v\in\text{Span }\{x_0\}\cap\ker\varphi\implies v=\alpha x_0\text{ and }\varphi(v)=0\implies 0=\alpha\varphi(x_0)

But, \varphi(x_0)\ne 0 so that the above implies \alpha=0 and thus v=\alpha x_0=\bold{0}. To prove (1) we merely note that for any x\in\mathcal{V}

\displaystyle x=\underbrace{\frac{\varphi(x)}{\varphi(x_0)}x_0}_{(*)}+\underbrace{\left(x-\frac{\varphi(x)}{\varphi(x_0)}x_0\right)}_{(**)}

but evidently (*) is of the form \alpha x_0 and a quick computation shows that \varphi\left((**)\right)=\varphi(x)-\frac{\varphi(x)}{\varphi(x_0)}\varphi(x_0)=0, so that (**)\in\ker\varphi, from where our proposition follows. But, from a previous problem in a past post we know that if \mathcal{U},\mathcal{W} are any two subspaces of \mathcal{V} then

\dim_F\left(\mathcal{W}+\mathcal{U}\right)=\dim_F\left(\mathcal{W}\right)+\dim_F\left(\mathcal{U}\right)-\dim_F\left(\mathcal{W}\cap\mathcal{U}\right)\quad \text{Eq. 1}

But,

\dim_F\left(\text{Span }\{x_0\}+\ker\varphi\right)=\dim_F\left(\mathcal{V}\right)=n

and

\dim_F\left(\text{Span }\{x_0\}\right)=1

and

\dim_F\left(\text{Span }\{x_0\}\cap\ker\varphi\right)=\dim_F\{\bold{0}\}=0

And so, substituting these into Eq. 1 gives

n=1+\dim_F\left(\ker\varphi\right)-0\implies \dim_F\left(\ker\varphi\right)=n-1

Thus, to wrap this all up we remember that all of the above was based on the assumption that \varphi\ne\bold{0}; if \varphi=\bold{0} then \ker\varphi=\mathcal{V}. So, we may finally conclude that

\dim_F\left(\ker\varphi\right)=\begin{cases}n-1 & \mbox{if} \quad \varphi\ne\bold{0} \\ n & \mbox{if} \quad \varphi=\bold{0}\end{cases}
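
For a concrete instance (my own example, not Halmos’), take \varphi(\zeta_1,\zeta_2,\zeta_3)=\zeta_1+2\zeta_2-\zeta_3 on \mathbb{C}^3; the dimension count can be checked numerically via rank–nullity:

import numpy as np
A = np.array([[1, 2, -1]])            # phi written as a 1x3 matrix
print(3 - np.linalg.matrix_rank(A))   # 2 = n - 1, the dimension of ker(phi), since phi != 0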

4.

Problem: If \varphi(x)=\zeta_1+\zeta_2+\zeta_3 whenever x=(\zeta_1,\zeta_2,\zeta_3)\in\mathbb{C}^3, then \varphi is a linear functional on \mathbb{C}^3; find a basis for the subspace \ker\varphi.

Proof: Since \dim_{\mathbb{C}}\left(\mathbb{C}^3\right)=3 and \varphi\ne\bold{0}, problem 3 gives \dim_{\mathbb{C}}\left(\ker\varphi\right)=2 (this wasn’t necessary to know beforehand, but it tells us how many basis vectors we’re looking for). So,

\zeta_1+\zeta_2+\zeta_3=0\text{ }\Leftrightarrow\text{ }\zeta_1=-(\zeta_2+\zeta_3)

So the general form of an element of \ker\varphi is

(-(\zeta_2+\zeta_3),\zeta_2,\zeta_3)

So, if we let \zeta_2=1,\zeta_3=0 then we get

x_1=(-1,1,0)

and if we let \zeta_2=0,\zeta_3=1 we get

x_2=(-1,0,1)

We claim that \{x_1,x_2\} is a basis for \ker\varphi. Clearly \{x_1,x_2\} is a l.i. set and thus it remains to prove that \text{Span }\{x_1,x_2\}=\ker\varphi. Clearly, \text{Span }\{x_1,x_2\}\subseteq\ker\varphi and so we must only prove the reverse inclusion. So, let (\zeta_1,\zeta_2,\zeta_3)\in\ker\varphi, then as said before we must have that

(\zeta_1,\zeta_2,\zeta_3)=(-(\zeta_2+\zeta_3),\zeta_2,\zeta_3)=\zeta_2x_1+\zeta_3x_2

and so (\zeta_1,\zeta_2,\zeta_3)\in\text{Span }\{x_1,x_2\} as required.
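
As a sanity check (a sketch using the vectors just found):

import numpy as np
phi = np.array([1, 1, 1])
x1, x2 = np.array([-1, 1, 0]), np.array([-1, 0, 1])
print(phi @ x1, phi @ x2)                           # 0 0, so both lie in ker(phi)
print(np.linalg.matrix_rank(np.vstack([x1, x2])))   # 2, so {x_1, x_2} is linearly independent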

5.

Problem: Prove that if m<n and if \varphi_1,\cdots,\varphi_m\in\text{Hom}\left(\mathcal{V},F\right), where \mathcal{V} is an n-dimensional vector space over F, then there exists a non-zero vector v in \mathcal{V} such that [v,\varphi_j]=0,\text{ }j=1,\cdots,m. What does this say about solutions of linear equations?

Proof: Note that \text{Span }\{\varphi_1,\cdots,\varphi_m\}=\mathcal{W} is a subspace of \text{Hom}\left(\mathcal{V},F\right) of dimension at most m. It follows that

\dim_F\left(\text{Ann }\mathcal{W}\right)=n-\dim_F\mathcal{W}\geqslant n-m>0

where \text{Ann }\mathcal{W}\subseteq\mathcal{V}^{**} denotes the annihilator of \mathcal{W}. Namely, since \dim_F\text{ Ann }\mathcal{W}>0 we have that \text{Ann }\mathcal{W} is non-trivial and so there is some \psi\in\text{Ann }\mathcal{W}-\{\bold{0}\}, i.e. a non-zero \psi\in\mathcal{V}^{**} such that \psi\left(\mathcal{W}\right)=\{0\}. But, as was proven in an earlier post we know that the map

F:\mathcal{V}\to\mathcal{V}^{**}:x_0\mapsto\left(\varphi\mapsto[x_0,\varphi]\right)

is an isomorphism. Namely, there exists some x_0\in\mathcal{V}-\{\bold{0}\} such that x_0\overset{F}{\longmapsto}\psi (we know that x_0\ne \bold{0} since \psi\ne\bold{0}, F(\bold{0})=\bold{0}, and F is injective). Thus,

\psi\left(\varphi_j\right)=\varphi_j(x_0)=0,\text{ }j=1,\cdots,m

and so

\displaystyle x_0\in\bigcap_{j=1}^{m}\ker\varphi_j-\{\bold{0}\}

as desired.

We note in particular that if

\begin{cases}a_{11}x_{1}+\cdots+a_{1n}x_n=0\\ a_{21}x_1+\cdots+a_{2n}x_n=0\\ \vdots \\ a_{m1}x_1+\cdots+a_{mn}x_n=0\end{cases}

is interpreted as the search for a common zero of the m associated linear functionals x\mapsto a_{j1}x_1+\cdots+a_{jn}x_n on F^n, then the above proof shows that a homogeneous system of m linear equations in n unknowns with m<n always has a non-zero solution.
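
For a concrete (made-up) instance, such a common zero can be read off from the singular value decomposition of the coefficient matrix; here m=2 functionals on \mathbb{C}^3:

import numpy as np
A = np.array([[1.0, 2.0, 3.0],    # a hypothetical 2x3 homogeneous system, m = 2 < n = 3
              [0.0, 1.0, -1.0]])
_, _, Vt = np.linalg.svd(A)
x0 = Vt[-1]                       # right-singular vector for the zero singular value
print(np.allclose(A @ x0, 0))     # True: a non-zero common zero of the two functionals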

 

6.

Problem: Suppose that m<n and that \varphi_1,\cdots,\varphi_m\in\text{Hom}\left(\mathcal{V},F\right) where \dim_F\mathcal{V}=n. Under what conditions on the scalars \alpha_1,\cdots,\alpha_m is it true that there exists a vector x_0 such that \varphi_j(x_0)=\alpha_j for j=1,\cdots,m? What does this say about the solutions of linear equations?

 

 

 

7.

Problem: If \mathcal{V} is an n-dimensional vector space over the finite field \mathbb{F}_{p^m} and if 0\leqslant k\leqslant n, then the number of k-dimensional subspaces of \mathcal{V} is the same as the number of (n-k)-dimensional subspaces.

Proof: I want to prove something much stronger; namely, I’ll count exactly how many subspaces \mathcal{V} has of each dimension. But first, a technical lemma:

Lemma: Let \mathcal{V} be an n-dimensional vector space over the finite field \mathbb{F}_{p^m}. Then, the number of distinct ordered bases for \mathcal{V} is:

\displaystyle \prod_{j=1}^{n}\left(p^{mn}-p^{m(j-1)}\right)

Proof: We can construct every ordered basis for \mathcal{V} in the following fashion. We fix x_1\in\mathcal{V}-\{\bold{0}\} to be the first element of our basis. We know though that \mathcal{V}\cong \left(\mathbb{F}_{p^m}\right)^n, and so \text{card }\mathcal{V}=p^{mn}, so there are precisely p^{mn}-1 choices for our initial vector x_1. Next, choose x_2\in\mathcal{V}-\text{Span }\{x_1\}. But, since \dim_{\mathbb{F}_{p^m}}\text{Span }\{x_1\}=1 we have that \text{Span }\{x_1\}\cong \mathbb{F}_{p^m} and so \text{card }\text{Span }\{x_1\}=p^m, and thus \text{card }\left(\mathcal{V}-\text{Span }\{x_1\}\right)=p^{mn}-p^{m}. In general, having chosen \{x_1,\cdots,x_{j-1}\} we may choose x_{j}\in\mathcal{V}-\text{Span }\{x_1,\cdots,x_{j-1}\}, and the same argument shows there are precisely p^{mn}-p^{m(j-1)} choices at the jth step. The process terminates with the choice of the nth vector, so the total number of distinct ordered bases is the product of the numbers of choices at the n steps, from where the conclusion follows. \blacksquare
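
For a tiny sanity check of the lemma (my own brute force, with p=2, m=1, n=2, so \mathcal{V}=\left(\mathbb{F}_2\right)^2): the formula predicts \left(2^2-1\right)\left(2^2-2\right)=6 ordered bases.

from itertools import product
p, n = 2, 2
vectors = list(product(range(p), repeat=n))   # all of (F_2)^2
zero = (0,) * n
# over F_2 the only scalar multiples of u are 0 and u, so two nonzero vectors are independent iff distinct
bases = [(u, v) for u in vectors for v in vectors
         if u != zero and v != zero and u != v]
print(len(bases))                             # 6, matching (p^n - 1)(p^n - p)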

Now, let \mathcal{V} and \mathbb{F}_{p^m} be as before. We will prove that for each k=1,\cdots,n the number of subspaces of dimension k is

S(k)=\displaystyle \left(\prod_{j=1}^{k}\left(p^{mn}-p^{m(j-1)}\right)\right)\left(\prod_{j=1}^{k}\left(p^{mk}-p^{m(j-1)}\right)\right)^{-1}\quad (1)

To do this we note that any k-dimensional subspace of \mathcal{V} is spanned by k linearly independent vectors. Arguing exactly as in the lemma, the numerator of (1) is the number of ways of picking an ordered list of k linearly independent vectors from the n-dimensional space \mathcal{V}. Also, two such lists span the same subspace iff each is a basis for the subspace spanned by the other. Thus, to correct for this over-counting we divide by the number of ordered bases of a k-dimensional space, which is exactly the denominator of (1), as the lemma’s technique shows. It follows that the number of k-dimensional subspaces of \mathcal{V} is indeed S(k).

Then, a simple (and I use this word lightly) calculation shows that S(k)\cdot S(n-k)^{-1}=1, i.e. S(k)=S(n-k), which is exactly the claim. Think about writing the two products out, cancelling, and regrouping.
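
Here is a small numerical check of that symmetry (a sketch; I take p=2, m=1, n=4, so S(k) is the number of k-dimensional subspaces of \left(\mathbb{F}_2\right)^4):

def S(k, p=2, m=1, n=4):
    num, den = 1, 1
    for j in range(1, k + 1):
        num *= p**(m*n) - p**(m*(j - 1))
        den *= p**(m*k) - p**(m*(j - 1))
    return num // den                  # exact: the denominator always divides the numerator
print([S(k) for k in range(5)])        # [1, 15, 35, 15, 1] -- symmetric, so S(k) = S(n-k) here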

8.

Problem:

a) Prove that if \mathcal{S} is any subset of a finite-dimensional vector space, then \text{Ann }\left(\text{Ann }\mathcal{S}\right) coincides with the subspace spanned by \mathcal{S}

b) If \mathcal{S} and \mathcal{J} are subsets of a vector space then

\mathcal{S}\subseteq\mathcal{J}\implies\text{Ann }\mathcal{J}\subseteq\text{Ann }\mathcal{S}

c) Prove that if \mathcal{M},\mathcal{N} are subspaces of a finite-dimensional vector space \mathcal{V}, then

\text{Ann }\left(\mathcal{M}\cap\mathcal{N}\right)=\text{Ann }\mathcal{M}+\text{Ann }\mathcal{N}

and

\text{Ann }\left(\mathcal{M}+\mathcal{N}\right)=\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N}

d) Is the conclusion of c) true if \mathcal{V} is not finite dimensional?

Proof:

We prove b) first so that we may do a) in a different sort of way.

b) Let \varphi \in\text{Ann }\mathcal{J}. Then \varphi\left(\mathcal{S}\right)\subseteq\varphi\left(\mathcal{J}\right)=\{0\} and so \varphi\in\text{Ann }\mathcal{S}. Thus,

\mathcal{S}\subseteq\mathcal{J}\implies \text{Ann }\mathcal{J}\subseteq\text{Ann }\mathcal{S}

a) We first remember that since the canonical identification

F:\mathcal{V}\to\mathcal{V}^{**}:x_0\mapsto\left(\varphi\mapsto[x_0,\varphi]\right)

is an isomorphism, one customarily thinks of x_0 and F(x_0) as being “the same”. Thus, the question is really asking us to prove that

\text{Span } \mathcal{S}=F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)

To see this we’ll prove that

F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)

is the smallest subspace of \mathcal{V} containing \mathcal{S} from where the equation will follow (since \text{Span }\mathcal{S} is the unique such subspace). Thus, to see that

F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)

is a subspace we must merely note that \text{Ann }\mathcal{S} is a subspace (since the annihilator of any set is a subspace of \text{Hom}\left(\mathcal{V},F\right)) and similarly \text{Ann }\left(\text{Ann }\mathcal{S}\right) is a subspace of \mathcal{V}^{**}. But, F:\mathcal{V}\to\mathcal{V}^{**} is a linear isomorphism, and thus so is F^{-1}:\mathcal{V}^{**}\to\mathcal{V} and thus it clearly follows (since being a subspace is an invariant property under linear isomorphisms) that

F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)

is a subspace of \mathcal{V}. Furthermore, let x_0\in\mathcal{S}. Then for any \varphi\in\text{Ann }\mathcal{S} we have that \varphi(x_0)=0, and thus F(x_0)(\varphi)=[x_0,\varphi]=0 for every \varphi\in\text{Ann }\mathcal{S}; but this says precisely that

F\left(x_0\right)\in\text{Ann }\left(\text{Ann }\mathcal{S}\right)

and so

F\left(\mathcal{S}\right)\subseteq\text{Ann }\left(\text{Ann }\mathcal{S}\right)\text{ }\Longleftrightarrow \mathcal{S}\subseteq F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)

Lastly, let \mathcal{W} be any subspace of \mathcal{V} such that

\mathcal{S}\subseteq\mathcal{W}

Then, by b)

\text{Ann }\mathcal{S}\supseteq\text{Ann }\mathcal{W}

and so by b) again

\text{Ann }\left(\text{Ann }\mathcal{S}\right)\subseteq\text{Ann }\left(\text{Ann }\mathcal{W}\right)

and so

F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)\subseteq F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{W}\right)\right)=\mathcal{W}

where the last equality holds because \mathcal{W} is a subspace: under the identification F, \text{Ann }\left(\text{Ann }\mathcal{W}\right)=F\left(\mathcal{W}\right) (this is the result for subspaces from this section of Halmos).

Thus, F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right) is the smallest subspace of \mathcal{V} containing \mathcal{S}, namely

F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right)=\text{Span }\mathcal{S}
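
Alternatively, once the inclusion \mathcal{S}\subseteq F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right) is in hand, a dimension count finishes the job: since \text{Ann }\mathcal{S}=\text{Ann }\left(\text{Span }\mathcal{S}\right) and \dim_F\text{Ann }\mathcal{M}=n-\dim_F\mathcal{M} for any subspace \mathcal{M}, we get

\dim_F\text{Ann }\left(\text{Ann }\mathcal{S}\right)=n-\dim_F\text{Ann }\mathcal{S}=n-\left(n-\dim_F\text{Span }\mathcal{S}\right)=\dim_F\text{Span }\mathcal{S}

so F^{-1}\left(\text{Ann }\left(\text{Ann }\mathcal{S}\right)\right) is a subspace containing \text{Span }\mathcal{S} and of the same dimension, hence equal to it.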

c) The fact that \text{Ann }\left(\mathcal{M}+\mathcal{N}\right)=\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N} is easy:

\varphi\in\text{Ann }\left(\mathcal{M}+\mathcal{N}\right)\implies \varphi\left(\mathcal{M}\right),\varphi\left(\mathcal{N}\right)\subseteq\varphi\left(\mathcal{M}+\mathcal{N}\right)=\{0\}

but this clearly implies that

\varphi\left(\mathcal{M}\right)=\varphi\left(\mathcal{N}\right)=\{0\}\implies \varphi\in\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N}

Conversely, if \varphi\in\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N} then for every v=m+n\in \mathcal{M}+\mathcal{N} we have that

\varphi(v)=\varphi(m+n)=\varphi(m)+\varphi(n)=0+0=0\implies \varphi\left(\mathcal{M}+\mathcal{N}\right)=\{0\}

But of course, this is equivalent to saying that \varphi\in\text{Ann }\left(\mathcal{M}+\mathcal{N}\right). Thus,

\text{Ann }\left(\mathcal{M}+\mathcal{N}\right)=\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N}

To prove the other identity, we first note that \text{Ann }\mathcal{M}+\text{Ann }\mathcal{N}\subseteq\text{Ann }\left(\mathcal{M}\cap\mathcal{N}\right). To see this, let \varphi+\psi\in\text{Ann }\mathcal{M}+\text{Ann }\mathcal{N} with \varphi\in\text{Ann }\mathcal{M} and \psi\in\text{Ann }\mathcal{N}. Then, if v\in\mathcal{M}\cap\mathcal{N} we have \varphi(v)=0=\psi(v) and so (\varphi+\psi)(v)=0, and thus \varphi+\psi\in\text{Ann }\left(\mathcal{M}\cap\mathcal{N}\right). Conversely, let \varphi\in\text{Ann }\left(\mathcal{M}\cap\mathcal{N}\right). Decompose the ambient space \mathcal{V} into the direct sum

\mathcal{V}=\mathcal{M}'\oplus\left(\mathcal{M}\cap\mathcal{N}\right)\oplus\mathcal{N}'\oplus\mathcal{U}

where \mathcal{M}' is a complement of \mathcal{M}\cap\mathcal{N} in \mathcal{M}, \mathcal{N}' is a complement of \mathcal{M}\cap\mathcal{N} in \mathcal{N}, and \mathcal{U} is a complement of \mathcal{M}+\mathcal{N} in \mathcal{V}. Define \psi to be the unique linear functional on \mathcal{V} for which \psi_{\mid \mathcal{M}'}=\varphi, \psi_{\mid \mathcal{M}\cap\mathcal{N}}=\bold{0}, \psi_{\mid\mathcal{N}'}=\bold{0} and \psi_{\mid \mathcal{U}}=\varphi. Next, define \eta to be the unique linear functional such that \eta_{\mid\mathcal{M}'}=\eta_{\mid \mathcal{U}}=\eta_{\mid \mathcal{M}\cap\mathcal{N}}=\bold{0} and \eta_{\mid\mathcal{N}'}=\varphi. Evidently \psi\in\text{Ann }\mathcal{N} (it vanishes on \mathcal{N}=\left(\mathcal{M}\cap\mathcal{N}\right)\oplus\mathcal{N}'), \eta\in\text{Ann }\mathcal{M}, and \varphi=\psi+\eta (check each of the four summands separately, using \varphi\left(\mathcal{M}\cap\mathcal{N}\right)=\{0\} on the second), from where it follows that \varphi\in\text{Ann }\mathcal{M}+\text{Ann }\mathcal{N} and so the problem follows.
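
A concrete illustration (my own, not from Halmos): in F^3 with standard basis \{e_1,e_2,e_3\} and dual basis \{\varphi_1,\varphi_2,\varphi_3\}, take \mathcal{M}=\text{Span }\{e_1,e_2\} and \mathcal{N}=\text{Span }\{e_2,e_3\}. Then

\text{Ann }\mathcal{M}=\text{Span }\{\varphi_3\},\quad \text{Ann }\mathcal{N}=\text{Span }\{\varphi_1\},\quad \mathcal{M}\cap\mathcal{N}=\text{Span }\{e_2\}

and indeed \text{Ann }\left(\mathcal{M}\cap\mathcal{N}\right)=\text{Span }\{\varphi_1,\varphi_3\}=\text{Ann }\mathcal{M}+\text{Ann }\mathcal{N}, while \text{Ann }\left(\mathcal{M}+\mathcal{N}\right)=\text{Ann }F^3=\{\bold{0}\}=\text{Ann }\mathcal{M}\cap\text{Ann }\mathcal{N}.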

d) Yes, the conclusions of c) still hold: the proofs above never used finite-dimensionality (the direct sum decomposition only requires that every subspace have a complement, which holds in any vector space).


October 3, 2010 - Posted by | Fun Problems, Halmos, Linear Algebra

3 Comments »

  1. Hello! Can you post problem 6 because you skipped it? Thank you very much. Problem 6 from this post is actually 7, problem 7 is 8. Thanks.

    Comment by Raluca E. Toscano | November 22, 2010 | Reply

    • Truth be told, I’m not entirely sure of the answer right now. I’ll have to think about it. Be sure to let me know if anything strikes you!

      On an upside, I added proofs for those problems I deferred to a “later post” but in all reality forgot about.

      Comment by drexel28 | November 22, 2010 | Reply

  2. Can you actually post problem 6? Thank you.

    Comment by Kira | November 22, 2010 | Reply

