Abstract Nonsense

Crushing one theorem at a time

Halmos Sections 18, 19, and 20: Direct Sums, Dimension of Direct Sums, and Duals of Direct Sums


Point of Post: In this post we do the 18th, 19th, and 20th sections in Halmos. In them we discuss necessary and sufficient conditions for a space to be the direct sum of subspaces, etc. The last problem gives a particularly interesting way of viewing what it means for a space to be a direct sum of subspaces.

1. Problem: Suppose that x,y,u and v are vectors in \mathbb{C}^4; let \mathscr{M} and \mathscr{N} be the subspaces of \mathbb{C}^4 spanned by \{x,y\} and \{u,v\} respectively. In which of the following cases is it true that \mathbb{C}^4=\mathscr{M}\oplus\mathscr{N}?

a) x=(1,1,0,0), y=(1,0,1,0), u=(0,1,0,1), and v=(0,0,1,1)

b) x=(-1,1,1,0), y=(0,1,-1,1), u=(1,0,0,0), and v=(0,0,0,1)

c) x=(1,0,0,1), y=(0,1,1,0), u=(1,0,1,0), and v=(0,1,0,1)

Proof: We do this by considering the following lemma.

Sufficiency Lemma: Let \mathscr{U},\mathscr{V} be subspaces of \mathscr{W} (all finite dimensional, as always), and let \{x_1,\cdots,x_m\} and \{y_1,\cdots,y_k\} be bases of \mathscr{U} and \mathscr{V} respectively. Then, \mathscr{W}=\mathscr{U}\oplus\mathscr{V} if and only if \{x_1,\cdots,x_m,y_1,\cdots,y_k\} is a basis for \mathscr{W}.

Proof: We have already proven the necessity of this statement, so it remains to show the sufficiency portion. Suppose that \{x_1,\cdots,x_m,y_1,\cdots,y_k\} is a basis for \mathscr{W}. First, suppose that w\in\mathscr{U}\cap\mathscr{V}; then

\displaystyle \sum_{j=1}^{m}\alpha_j x_j=w=\sum_{j=1}^{k}\beta_j y_j

and so upon subtraction

\displaystyle \sum_{j=1}^{m}\alpha_j x_j+\sum_{j=1}^{k}(-\beta_j) y_j=\bold{0}

and so by the linear independence of \{x_1,\cdots,x_m,y_1,\cdots,y_k\} we may conclude that

\alpha_1=\cdots=\alpha_m=\beta_1=\cdots=\beta_k=0

and thus it follows that w=\bold{0}. Also, to prove that \mathscr{W}=\mathscr{U}+\mathscr{V} we notice that given w\in\mathscr{W} there exist \alpha_1,\cdots,\alpha_m,\beta_1,\cdots,\beta_k such that

\displaystyle w=\overbrace{\sum_{j=1}^{m}\alpha_j x_j}^{\text{in }\mathscr{U}}+\overbrace{\sum_{j=1}^{k}\beta_j y_j}^{\text{in }\mathscr{V}}

from where the conclusion is obvious. \blacksquare

And, another lemma to make at least some of these easier.

Lemma: Let \{x_1,\cdots,x_n\}\subseteq\mathbb{C}^n. Define

\displaystyle A=\left[\begin{array}{c|c|c} & & \\ x_1 & \cdots & x_n \\ & & \end{array}\right]

(i.e. take the columns of the matrix to be the vectors). Then, \det A\ne 0 implies that \{x_1,\cdots,x_n\} is a basis.

Proof: Suppose that

\alpha_1 x_1+\cdots+\alpha_n x_n=\bold{0}

then we may rephrase this as

A\begin{bmatrix}\alpha_1\\ \vdots \\ \alpha_n \end{bmatrix}=\begin{bmatrix}0 \\ \vdots \\ 0\end{bmatrix}

And so if \det A\ne 0 we know that A is invertible and so

\begin{bmatrix}\alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}=A^{-1}\begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}=\begin{bmatrix} 0 \\ \vdots \\ 0\end{bmatrix}

from where it follows that the set of vectors is linearly independent. But this is a set of n linearly independent vectors in an n-dimensional space, and so it must be a basis. \blacksquare

Remark: This is actually an if and only if, which I will eventually prove in both directions, but this direction suffices for now.

a) We note that y+u-(x+v)=\bold{0} so \{x,y,u,v\} isn’t a basis and the sufficiency lemma takes care of the rest.

b) We note that if

\displaystyle A=\left[\begin{array}{c|c|c|c} & & & \\ x & y & u & v\\ & & & \end{array}\right]

a simple calculation tells us that \det A=-2 and so by the second lemma \{x,y,u,v\} is a basis of \mathbb{C}^4. The rest follows from the sufficiency lemma.

c) Here, with A defined as in b), a direct computation gives \det A=0; indeed x+y-(u+v)=\bold{0} (both sums equal (1,1,1,1)), so \{x,y,u,v\} isn’t a basis and, just as in a), \mathbb{C}^4\ne\mathscr{M}\oplus\mathscr{N}.
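Remark: For the skeptical, these determinant computations are easy to sanity-check numerically. The following is a minimal Python/numpy sketch (purely illustrative, not part of Halmos) that builds each matrix column by column and prints its determinant; a nonzero value certifies, by the second lemma, that \{x,y,u,v\} is a basis.

import numpy as np

# Columns of A are the vectors x, y, u, v from each part of Problem 1.
cases = {
    "a": [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)],
    "b": [(-1, 1, 1, 0), (0, 1, -1, 1), (1, 0, 0, 0), (0, 0, 0, 1)],
    "c": [(1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), (0, 1, 0, 1)],
}

for name, vectors in cases.items():
    A = np.column_stack(vectors)              # the vectors become the columns of A
    print(name, round(np.linalg.det(A), 6))   # expect roughly 0, -2, 0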

2.

Problem: If \mathscr{M} is the subspace of those vectors (x_1,\cdots,x_n,x_{n+1},\cdots,x_{2n})\in\mathbb{C}^{2n} for which x_1=\cdots=x_n=0, and if \mathscr{N} is the subspace of all those vectors for which x_j=x_{n+j},\text{ }j=1,\cdots,n, then \mathbb{C}^{2n}=\mathscr{M}\oplus\mathscr{N}.

Proof: Clearly if \{e_1,\cdots,e_{2n}\} are the standard basis elements for \mathbb{C}^{2n} then \{e_{n+1},\cdots,e_{2n}\} is a basis for \mathscr{M}. Similarly \{e_1+e_{n+1},\cdots,e_n+e_{2n}\} is a basis for \mathscr{N}. Thus, it suffices to prove that \{e_{n+1},\cdots,e_{2n},e_1+e_{n+1},\cdots,e_n+e_{2n}\} is a basis for \mathbb{C}^{2n}. But consider the matrix

A=\displaystyle \left[\begin{array}{c|c|c|c|c|c} & & & & & \\ e_{n+1} & \cdots & e_{2n} & e_1+e_{n+1} & \cdots & e_n+e_{2n} \\ & & & & & \end{array}\right]

stare at it long enough and you’ll notice that

\displaystyle A= \left[\begin{array}{c|c} 0 & I_n\\ \hline I_n & I_n\end{array} \right]

But, noticing that block matrix multiplication works the same as regular multiplication we can see that the above matrix “acts” like

\displaystyle \begin{bmatrix} 0 & 1\\ 1 & 1 \end{bmatrix}

in terms of multiplication. But, a quick check gives that the inverse of the above matrix is

\displaystyle \begin{bmatrix}-1 & 1\\ 1 & 0\end{bmatrix}

and so one may hypothesize that

A^{-1}=\left[\begin{array}{c|c}-I_n & I_n\\ \hline I_n & 0\end{array}\right]

And, in fact, it is easily verifiable that

\left[\begin{array}{c|c}-I_n & I_n\\ \hline I_n & 0\end{array}\right]\left[\begin{array}{c|c} 0 & I_n\\ \hline I_n & I_n\end{array} \right]=\left[\begin{array}{c|c} 0 & I_n\\ \hline I_n & I_n\end{array} \right]\left[\begin{array}{c|c}-I_n & I_n\\ \hline I_n & 0\end{array}\right]=\left[\begin{array}{c|c} I_n & 0 \\\hline 0 & I_n \end{array}\right]=I_{2n}

from where it follows that A is invertible, equivalently \det A\ne 0, and so by our second lemma and the sufficiency lemma we’re done.

Remark: The intuitive “acts the same” quality that led us to our solution really is more than just intuition. Really, what we’ve noticed is that in general if R is a ring with additive and multiplicative identities 0_R and 1_R respectively, then the matrix

A=\begin{bmatrix} 0_R & 1_R \\ 1_R & 1_R\end{bmatrix}

is invertible with inverse

A^{-1}=\begin{bmatrix}-1_R & 1_R \\ 1_R & 0_R\end{bmatrix}

But, I_n and 0 (where 0 is the n\times n zero matrix) are the multiplicative and additive identities of the ring of all n\times n matrices over \mathbb{C}.
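Remark: If you’d rather not multiply the blocks out by hand, here is a small numpy sketch (again just illustrative; np.block and np.eye do the block assembly) that verifies the claimed inverse and the non-vanishing determinant for a sample value of n.

import numpy as np

n = 3                                    # any n works; 3 is just for illustration
I, Z = np.eye(n), np.zeros((n, n))

A     = np.block([[Z, I], [I, I]])       # columns are e_{n+1},...,e_{2n}, e_1+e_{n+1},...,e_n+e_{2n}
A_inv = np.block([[-I, I], [I, Z]])      # the hypothesized block inverse

print(np.allclose(A @ A_inv, np.eye(2 * n)))    # True
print(np.allclose(A_inv @ A, np.eye(2 * n)))    # True
print(not np.isclose(np.linalg.det(A), 0))      # True, so the columns form a basis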

3.

Problem: Construct three subspaces \mathscr{M},\mathscr{N}_1, and \mathscr{N}_2 of a vector space \mathscr{V} so that \mathscr{M}\oplus\mathscr{N}_1=\mathscr{M}\oplus\mathscr{N}_2=\mathscr{V} but \mathscr{N}_1\ne\mathscr{N}_2.

Proof: We need look no further than our last example. Let \mathscr{V}=\mathbb{C}^4, \mathscr{M}=\text{span }\{e_1,e_2\}, \mathscr{N}_1=\text{span }\{e_3,e_4\} and \mathscr{N}_2=\text{span }\{e_1+e_3,e_2+e_4\}. Noticing that \{e_1,e_2,e_3,e_4\} and \{e_1,e_2,e_1+e_3,e_2+e_4\} are both bases for \mathbb{C}^4 we can see that

\mathscr{M}\oplus\mathscr{N}_1=\mathbb{C}^4=\mathscr{M}\oplus\mathscr{N}_2

and \mathscr{N}_1\ne\mathscr{N}_2 as required.
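Remark: A quick numerical double-check of the two basis claims, in the same spirit as before (a rough numpy sketch, not part of the problem):

import numpy as np

e = np.eye(4)     # e[0], ..., e[3] play the roles of e_1, ..., e_4

B1 = np.column_stack([e[0], e[1], e[2], e[3]])               # basis coming from M and N_1
B2 = np.column_stack([e[0], e[1], e[0] + e[2], e[1] + e[3]]) # basis coming from M and N_2

print(np.linalg.det(B1), np.linalg.det(B2))   # both (approximately) 1, so both sets are bases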

4.

Problem:

a) If \mathscr{U},\mathscr{V} and \mathscr{W} are vector spaces, what is the relation between \left(\mathscr{U}\oplus\mathscr{V}\right)\oplus\mathscr{W} and \mathscr{U}\oplus\left(\mathscr{V}\oplus\mathscr{W}\right)?

b) In what sense is the formation of direct sums commutative?

Proof: If we look at the direct sum as being the internal sum, the obvious answer to both is that they are the same thing. If we view it as the external direct sum then the answer to a) and b) is that they are isomorphic by the canonical isomorphisms

T_1:\left(\mathscr{U}\oplus\mathscr{V}\right)\oplus\mathscr{W}\to\mathscr{U}\oplus\left(\mathscr{V}\oplus\mathscr{W}\right):\left((u,v),w\right)\mapsto \left(u,(v,w)\right)

and

T_2:\mathscr{U}\oplus\mathscr{V}\to\mathscr{V}\oplus\mathscr{U}:(u,v)\mapsto(v,u)

which are undoubtedly isomorphisms.

5.

Problem:

a) Three subspaces \mathscr{L}, \mathscr{M}, and \mathscr{N} of a vector space \mathscr{V} are called independent if each one is disjoint from the sum of the other two (here “disjoint” means their intersection is trivial). Prove that a necessary and sufficient condition for

\mathscr{V}=\mathscr{L}\oplus\left(\mathscr{M}\oplus\mathscr{N}\right)

is that \mathscr{L},\mathscr{M} and \mathscr{N} are independent and that \mathscr{V}=\mathscr{L}+\mathscr{M}+\mathscr{N}.

b) Suppose that x,y and z are elements of a vector space \mathscr{V} and that \mathscr{L},\mathscr{M}, and \mathscr{N} are the subspaces spanned by x,y and z respectively. Prove that the vectors x,y and z are linearly independent if and only if the subspaces \mathscr{L},\mathscr{M}, and \mathscr{N} are independent.

c) Prove that three finite-dimensional subspaces are independent if and only if the sum of their dimensions is equal to the dimension of their sum.

d) Generalize the results a)-c) from three subspaces to any finite number.

Proof: Instead of doing all the results and then d), let’s just generalize now. Let \mathscr{S}_1,\cdots,\mathscr{S}_n be subspaces of \mathscr{V} and call them independent if \displaystyle \mathscr{S}_{k}\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j=\{\bold{0}\} for each k\in\{1,\cdots,n\}.

With this we do a)-c) in full generality:

a)

Restated Problem: Let \mathscr{S}_1,\cdots,\mathscr{S}_n be n subspaces of \mathscr{V}. Then,

\mathscr{V}=\mathscr{S}_1\oplus\left(\mathscr{S}_2\oplus\left(\cdots\left(\mathscr{S}_{n-1}\oplus\mathscr{S}_n\right)\cdots\right)\right)

if and only if \mathscr{S}_1,\cdots,\mathscr{S}_n are independent and \displaystyle \sum_{j=1}^{n}\mathscr{S}_j=\mathscr{V}.

Proof: We first prove the following lemma:

Lemma: Let \mathscr{S}_1,\cdots,\mathscr{S}_n,\text{ }n\geqslant 3 be subspaces of \mathscr{V}. Then, \mathscr{S}_1,\cdots,\mathscr{S}_n are independent if and only if every element of \displaystyle \sum_{j=1}^{n}\mathscr{S}_j may be written uniquely as \displaystyle \sum_{j=1}^{n}x_j with x_j\in\mathscr{S}_j.

Proof: First suppose that \mathscr{S}_1,\cdots,\mathscr{S}_n are independent and let \displaystyle x\in\sum_{j=1}^{n}\mathscr{S}_j. Suppose then that \displaystyle \sum_{j=1}^{n}x_j=x=\sum_{j=1}^{n}x'_j where x_j,x'_j\in\mathscr{S}_j,\text{ }j=1,\cdots,n. Then, for any k\in\{1,\cdots,n\} we can see that

\displaystyle x_k-x'_k=\sum_{j\in\{1,\cdots,n\}-\{k\}}(x'_j-x_j)

so that \displaystyle x_k-x'_k\in \mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j, from where it follows by assumption that x_k-x'_k=\bold{0} and so x_k=x'_k.

Conversely, suppose that every \displaystyle x\in\sum_{j=1}^{n}\mathscr{S}_j can be written uniquely as \displaystyle \sum_{j=1}^{n}x_j with x_j\in\mathscr{S}_j. Then, if \displaystyle x\in\mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j we have that x=x_k for some x_k\in\mathscr{S}_k (namely x itself) and \displaystyle x=\sum_{j\in\{1,\cdots,n\}-\{k\}}x_j with each x_j\in\mathscr{S}_j, and so upon subtraction we get that

\displaystyle\bold{0}=-x_k+\sum_{j\in\{1,\cdots,n\}-\{k\}}x_j

but, since \bold{0}\in\mathscr{S}_j,\text{ }j=1,\cdots,n and since

\bold{0}=\underbrace{\bold{0}+\cdots+\bold{0}}_{n\text{ times}}

it follows by the assumption of unique representation that -x_k=\bold{0} and x_j=\bold{0} for each j\ne k, from where it follows that x=x_k=\bold{0}. \blacksquare

Then, with the assumptions on the subspaces described in the original problem we can see the above lemma says that \mathscr{S}_1,\cdots,\mathscr{S}_n are independent if and only if every element of \mathscr{V} may be written as a unique sum \displaystyle \sum_{j=1}^{n}x_j with x_j\in\mathscr{S}_j,\text{ }j=1,\cdots,n, but this is equivalent to saying that \mathscr{V} is the direct sum of the subspaces.
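Remark: It’s worth pausing on why the definition asks that each \mathscr{S}_k be disjoint from the sum of all the others, rather than merely pairwise disjoint. Here is a quick illustrative example (not from the problem set): in \mathbb{C}^2 take \mathscr{S}_1=\text{span }\{e_1\}, \mathscr{S}_2=\text{span }\{e_2\}, and \mathscr{S}_3=\text{span }\{e_1+e_2\}. Any two of these intersect trivially, yet

\displaystyle e_1+e_2\in\mathscr{S}_3\cap\left(\mathscr{S}_1+\mathscr{S}_2\right)

so the three subspaces are not independent; correspondingly

\bold{0}=\bold{0}+\bold{0}+\bold{0}=e_1+e_2+\left(-(e_1+e_2)\right)

gives two distinct representations of \bold{0} with one summand from each subspace, so the unique-representation property of the lemma fails.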

b)

Restated problem: A set of vectors \{x_1,\cdots,x_n\} is linearly independent if and only if the subspaces \text{span }\{x_1\},\cdots,\text{span }\{x_n\} are independent.

Proof: Suppose that \{x_1,\cdots,x_n\} is a linearly independent set, write \mathscr{S}_j=\text{span }\{x_j\}, and suppose that

\displaystyle x\in\mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j

Then, that says that

\displaystyle x=\sum_{j\in\{1,\cdots,n\}-\{k\}}y_j,\quad y_j\in\mathscr{S}_j

But, x=\alpha_k x_k and each y_j=\alpha_j x_j for some scalars, and so this may be rewritten as

\displaystyle \bold{0}=-\alpha_kx_k+\sum_{j\in\{1,\cdots,n\}-\{k\}}\alpha_j x_j

from where it follows by linear independence that \alpha_1=\cdots=\alpha_k=\cdots=\alpha_n=0 and so x=\bold{0}.

 

Conversely, suppose that \mathscr{S}_1,\cdots,\mathscr{S}_n are independent. Then,

\displaystyle \bold{0}=\sum_{j=1}^{n}\alpha_j x_j\implies \alpha_kx_k=\sum_{j\in\{1,\cdots,n\}-\{k\}}(-\alpha_j) x_j

but this says that \displaystyle \alpha_k x_k\in\mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j and so \alpha_kx_k=\bold{0}. From this (assuming x_k\ne \bold{0}, which we may do since otherwise we’re done) we may conclude that \alpha_k=0. But we may do this for each k=1,\cdots,n, so that \alpha_1=\cdots=\alpha_n=0, from where linear independence follows.

c)

Restated problem: Let \mathscr{S}_1,\cdots,\mathscr{S}_n be subspaces of \mathscr{V}. Then, they are independent if and only if the sum of their dimensions is the dimension of their sum.

Proof: First assume that the subspaces are not independent, i.e. that \displaystyle \mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j is non-trivial for some k\in\{1,\cdots,n\}. Then, we see that

\displaystyle \dim_F\left(\sum_{j=1}^{n}\mathscr{S}_j\right) =\dim_F\left(\mathscr{S}_k\right)+\dim_F\left(\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j\right)-\dim_F \underbrace{\left(\mathscr{S}_k\cap\sum_{j\in\{1,\cdots,n\}-\{k\}}\mathscr{S}_j\right)}_{*}

but, we assumed that * was non-trivial, and so its dimension is at least one. So, combining this with the fact that in general

\displaystyle \dim_F \left(\sum_{i}V_i\right)\leqslant\sum_{i}\dim_F V_i

we may conclude that

\displaystyle \dim_F\left(\sum_{j=1}^{n}\mathscr{S}_j\right)<\sum_{j=1}^{n}\dim_F\mathscr{S}_j

 

Conversely, suppose that \mathscr{S}_1,\cdots,\mathscr{S}_n are independent, then:

\displaystyle \dim_F\left(\sum_{j=1}^{n}\mathscr{S}_j\right) =\dim_F\mathscr{S}_1+\dim_F\left(\sum_{j=2}^{n}\mathscr{S}_j\right)-\dim_F\left(\mathscr{S}_1\cap\sum_{j=2}^{n}\mathscr{S}_j\right)

but by assumption the intersection in the last term is trivial, so the last term is zero. Continuing in this way (noting that \mathscr{S}_2,\cdots,\mathscr{S}_n are themselves independent, so the same argument applies at each step) successively leads to \displaystyle \dim_F\left(\sum_{j=1}^{n}\mathscr{S}_j\right)=\sum_{j=1}^{n}\dim_F\mathscr{S}_j, as desired.
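Remark: Part c) also gives a practical, computable test for independence: take spanning sets for the subspaces and compare the rank of everything stacked together with the sum of the individual ranks. A rough numpy sketch (np.linalg.matrix_rank playing the role of the dimension of a span), run on the two-dimensional example from earlier:

import numpy as np

def independent(subspace_spanning_sets):
    # Independence test from part c): dim(S_1 + ... + S_n) == dim S_1 + ... + dim S_n.
    dims = [np.linalg.matrix_rank(np.column_stack(S)) for S in subspace_spanning_sets]
    total = np.linalg.matrix_rank(np.column_stack([v for S in subspace_spanning_sets for v in S]))
    return total == sum(dims)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

print(independent([[e1], [e2]]))             # True:  span{e1}, span{e2} are independent
print(independent([[e1], [e2], [e1 + e2]]))  # False: pairwise disjoint but not independent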
