Abstract Nonsense

Crushing one theorem at a time

Halmos Chapter One, Sections Five, Six, and Seven: Linear Dependence, Linear Combinations, and Bases



a) Prove that the four vectors x=(1,0,0),y=(0,1,0),z=(0,0,1), and u=(1,1,1) are linearly dependent, but any three are linearly independent.

b) If the vectors x,y,z, and u in \displaystyle \mathcal{P}=\left\{\sum_{k=0}^{n}c_kt^k:c_k\in\mathbb{C}\text{ and }n\in\mathbb{N}\cup\{0\}\right\} are given by x(t)=1, y(t)=t, z(t)=t^2, and u(t)=1+t+t^2, prove that x,y,z, and u are linearly dependent, but any three of them are linearly independent.


a) Clearly all four are l.d. (linearly dependent) since -x-y-z+u=0. Now, clearly x,y,z are l.i. (linearly independent) and so it remains to show that \{x,y,u\},\{y,z,u\},\{z,x,u\} are l.i. We do this only for the first case, since the others are done similarly. So, suppose that

\alpha_1 x+\alpha_2 y+\alpha_3 u=\left(\alpha_1+\alpha_3,\ \alpha_2+\alpha_3,\ \alpha_3\right)=(0,0,0)

comparison of the third coordinates tells us that \alpha_3=0, and thus \alpha_1=\alpha_2=0, from where l.i. follows.

b) Clearly they are l.d. since -x-y-z+u=\mathbf{0}. We can prove that any three are l.i. in much the same way as for the tuples (clearly this is no coincidence; the variables, when treated as formal objects, are glorified placeholders for the coefficients) and so we once again prove one case. Namely, if

p(t)=\alpha_1x+\alpha_2y+\alpha_3u=\alpha_1+\alpha_3+(\alpha_2+\alpha_3)t+\alpha_3 t^2=0(t)

then in particular p(0)=0\implies \alpha_1+\alpha_3=0 as well as that

p(t)=(\alpha_2+\alpha_3)t+\alpha_3 t^2=t\left(\alpha_2+\alpha_3+\alpha_3t\right)=0(t)

Now, noting that p(t)=0(t) for all t\in\mathbb{C} we in particular may note that it’s true for all t\in\mathbb{R}^+. And, for such t we see that

p(t)=t(\alpha_2+\alpha_3+\alpha_3t)=0\implies \alpha_2+\alpha_3+\alpha_3t=0

and so letting t\to 0^{+} gives \alpha_2+\alpha_3=0. But then \alpha_3 t=0 for every t\in\mathbb{R}^+, so that \alpha_3=0, from where it all cascades back to show that \alpha_1=\alpha_2=\alpha_3=0
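As a sanity check (a quick sympy sketch; this is not part of the original argument), the rank computations behind a) can be done mechanically with exact arithmetic:

```python
# Sketch: verify problem 1(a) -- all four vectors are dependent (rank 3 < 4),
# while every 3-element subset is independent (rank 3).
from itertools import combinations
from sympy import Matrix

x, y, z, u = (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)
vectors = [x, y, z, u]

rank_all = Matrix(vectors).rank()  # rank of the 4x3 matrix of all four vectors
triple_ranks = [Matrix(list(t)).rank() for t in combinations(vectors, 3)]

print(rank_all, triple_ranks)
```

The same check covers part b), since the coefficient vectors of 1, t, t^2, and 1+t+t^2 are exactly these tuples.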


Problem: Prove that if \mathbb{R} is considered as a vector space over \mathbb{Q}, then a necessary and sufficient condition that the vectors 1 and \xi in \mathbb{R} be l.i. is that the real number \xi is irrational.

Proof: This is evident. Suppose that \xi\notin\mathbb{Q} but \{1,\xi\} were not l.i. Then there exist \alpha,\beta\in\mathbb{Q}, not both zero, for which \alpha+\beta\xi=0 (in fact neither can be zero, since either one vanishing forces the other to vanish, considering \xi\ne0), but this in particular means that \displaystyle \xi=\frac{\alpha}{-\beta}\in\mathbb{Q}, which is a contradiction. Conversely, suppose that \{1,\xi\} is l.i. but \xi\in\mathbb{Q} (note \xi\ne0, since the zero vector would make the set l.d.). Then, there exist \alpha,\beta\in\mathbb{Q} such that \displaystyle \frac{\alpha}{\beta}=\xi\implies \alpha-\beta\xi=0. Clearly, \alpha,\beta\ne 0 and so this violates the l.i. of \{1,\xi\}.


Problem: Is it true that if x,y, and z are l.i. vectors, then so are x+y,y+z,z+x?

Proof: Yes. Note that if

\alpha_1(x+y)+\alpha_2(y+z)+\alpha_3(z+x)=(\alpha_1+\alpha_3)x+(\alpha_1+\alpha_2)y+(\alpha_2+\alpha_3)z=0

then the l.i. of \{x,y,z\} tells us that the system of equations

\begin{cases}\alpha_1+\alpha_2=0\quad(1)\\\alpha_2+\alpha_3=0\quad(2)\\ \alpha_1+\alpha_3=0\quad(3)\end{cases}



holds. Subtracting (3) from (1) gives \alpha_2=\alpha_3, upon which (2) gives 2\alpha_3=0, and thus \alpha_3=0 (note that this step uses that the scalar field has characteristic zero), from where the rest follows.

Remark: This clearly generalizes, but it’s too late. I’ll come back later and think about it.
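As a cross-check (a sympy sketch, not part of the original argument): \{x+y,y+z,z+x\} is l.i. exactly when the coefficient matrix below is invertible, and its determinant is 2; nonzero in characteristic zero, but zero in characteristic 2, which is exactly the counterexample raised in the comments.

```python
# Sketch: the coefficient matrix carrying (x, y, z) to (x+y, y+z, z+x).
from sympy import Matrix

C = Matrix([[1, 1, 0],
            [0, 1, 1],
            [1, 0, 1]])
d = C.det()     # 2: invertible over a characteristic-zero field
d_mod2 = d % 2  # 0: singular over a field of characteristic 2
print(d, d_mod2)
```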



a) Under what conditions on the scalar \xi are the vectors (1+\xi,1-\xi) and (1-\xi,1+\xi) in \mathbb{C}^2 l.d.?

b) Under what conditions on the scalar \xi are the vectors (\xi,1,0),(1,\xi,1), and (0,1,\xi) in \mathbb{R}^3 l.d.?

c) What is the answer to b) for \mathbb{Q}^3?


a) We first note that

\alpha_1(1+\xi,1-\xi)+\alpha_2(1-\xi,1+\xi)=\left(\alpha_1(1+\xi)+\alpha_2(1-\xi),\ \alpha_1(1-\xi)+\alpha_2(1+\xi)\right)

So, we are looking to see when this expression equals zero. Clearly, setting this equal to zero gives us

\begin{cases}\alpha_1(1+\xi)+\alpha_2(1-\xi)=0\quad(1)\\ \alpha_1(1-\xi)+\alpha_2(1+\xi)=0\quad(2)\end{cases}

Now, adding (1) and (2) gives 2(\alpha_1+\alpha_2)=0, so that \alpha_2=-\alpha_1, and substituting this back into (1) gives 2\alpha_1\xi=0. For a nontrivial solution we need \alpha_1\ne0, and so \xi=0. Thus, they are l.d. precisely when \xi=0, i.e. precisely when they coincide.
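A quick symbolic cross-check (sympy; not part of the original solution): the pair is dependent exactly when the determinant below vanishes.

```python
# Sketch: det of the matrix with rows (1+s, 1-s) and (1-s, 1+s); s stands for xi.
from sympy import Matrix, symbols, solve

s = symbols('s')
A = Matrix([[1 + s, 1 - s],
            [1 - s, 1 + s]])
d = A.det().expand()  # (1+s)^2 - (1-s)^2 = 4s
roots = solve(d, s)   # dependent exactly when s = 0
print(d, roots)
```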

b) We note first that

\alpha_1(\xi,1,0)+\alpha_2(1,\xi,1)+\alpha_3(0,1,\xi)=\left(\alpha_1\xi+\alpha_2,\ \alpha_1+\alpha_2\xi+\alpha_3,\ \alpha_2+\alpha_3\xi\right)

thus, if we set this equal to zero we get the following three equations

\begin{cases}\alpha_1\xi+\alpha_2=0\quad\quad\quad\text{ }(1)\\\alpha_1+\alpha_2\xi+\alpha_3=0\quad(2)\\\alpha_2+\alpha_3\xi=0\quad\quad\quad\text{ }(3)\end{cases}

We then note that

(1)-(3)=\left(\alpha_1-\alpha_3\right)\xi=0

So, if we assume that \xi\ne0 we arrive at \alpha_1=\alpha_3. Then (1) gives \alpha_2=-\alpha_1\xi, and substituting both of these into (2) yields (2-\xi^2)\alpha_1=0, so that if moreover \xi^2\ne2 we get \alpha_1=\alpha_2=\alpha_3=0. Thus, the only possibilities are \xi=0 and \xi=\pm\sqrt{2}. Checking these, we find that (0,1,0),(1,0,1), and (0,1,0) are linearly dependent, and that for \xi=\pm\sqrt{2} we have (\xi,1,0)-\xi(1,\xi,1)+(0,1,\xi)=(0,2-\xi^2,0)=\mathbf{0}.

c) Since \pm\sqrt{2}\notin\mathbb{Q}, in \mathbb{Q}^3 the vectors are l.d. precisely when \xi=0.
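The values found in b) can be double-checked symbolically (a sympy sketch, not part of the original solution):

```python
# Sketch: the three vectors are dependent iff this determinant vanishes.
from sympy import Matrix, symbols, solve, sqrt

s = symbols('s')  # s stands for xi
A = Matrix([[s, 1, 0],
            [1, s, 1],
            [0, 1, s]])
d = A.det().expand()  # s^3 - 2s
roots = solve(d, s)   # 0, sqrt(2), -sqrt(2); over Q only 0 survives
print(d, roots)
```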


Problem: Prove the following

a) The vectors (\xi_1,\xi_2) and (\eta_1,\eta_2) in \mathbb{C}^2 are l.d. implies \xi_1\eta_2=\xi_2\eta_1

b) Find a similar necessary condition for the l.d. of three vectors in \mathbb{C}^3.

c) Is there a set of three l.i. vectors in \mathbb{C}^3?


a) We first note that if the vectors are l.d., then for some \alpha_1,\alpha_2, not both zero,

\alpha_1(\xi_1,\xi_2)+\alpha_2(\eta_1,\eta_2)=(0,0)

and thus we are afforded the equations

\begin{cases} \alpha_1\xi_1+\alpha_2\eta_1=0\\\alpha_1\xi_2+\alpha_2\eta_2=0\end{cases}

or in matrix form

\begin{bmatrix}\xi_1 & \eta_1\\ \xi_2 & \eta_2\end{bmatrix}\begin{bmatrix}\alpha_1\\\alpha_2\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}\quad (1)

Now, if A=\begin{bmatrix}\xi_1 & \eta_1\\ \xi_2 & \eta_2\end{bmatrix} were invertible then

(1)\implies \begin{bmatrix}\alpha_1 \\ \alpha_2 \end{bmatrix}=A^{-1} \begin{bmatrix} 0 \\ 0 \end{bmatrix} =\begin{bmatrix} 0 \\ 0 \end{bmatrix} \implies \alpha_1=\alpha_2=0

and thus the vectors are l.i. It follows that A is not invertible, or equivalently

\det A=\xi_1\eta_2-\xi_2\eta_1=0
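A numeric illustration of part a) (a sketch; the specific vectors are just examples I made up):

```python
# Sketch: a dependent pair has vanishing determinant, an independent pair does not.
# (Rows instead of columns is harmless here, since det(A^T) = det(A).)
from sympy import Matrix

dependent_pair = [(2, 5), (6, 15)]      # second vector = 3 * first
independent_pair = [(2, 5), (1, 3)]

d_dep = Matrix(dependent_pair).det()    # xi1*eta2 - xi2*eta1 = 2*15 - 5*6 = 0
d_ind = Matrix(independent_pair).det()  # 2*3 - 5*1 = 1
print(d_dep, d_ind)
```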

b) For three vectors we follow the same line of logic and note that the determinant of the matrix formed by taking the three vectors as rows must be zero.

c) Yes; for example, (0,0,1),(0,1,0), and (1,0,0).


Problem: Prove the following

a) Under what conditions on the scalars \xi,\eta are the vectors (1,\xi),(1,\eta) l.d.?

b) Under what conditions on the scalars  \xi,\eta, and \zeta are the vectors (1,\xi,\xi^2),(1,\eta,\eta^2) and (1,\zeta,\zeta^2) l.d. in \mathbb{C}^3?

c) Generalize to \mathbb{C}^n


a) By the last problem we see that they are l.d. iff \eta=\xi

b) By the last problem we see they are l.d. iff -(\xi-\zeta)(\xi-\eta)(\eta-\zeta)=0

c) It is clear, following the same logic, that \left(1,\xi_1,\cdots,\xi_1^{n-1}\right),\cdots,\left(1,\xi_n,\cdots,\xi_n^{n-1}\right) are l.d. iff

\displaystyle \det\begin{bmatrix}1 & \xi_1 & \cdots & \xi_1^{n-1}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & \xi_n & \cdots & \xi_n^{n-1}\end{bmatrix}=\prod_{1\leqslant i<j\leqslant n}\left(\xi_j-\xi_i\right)=0

in other words, iff \xi_i=\xi_j for some i\ne j
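The Vandermonde identity used above can be checked symbolically for n=3 (a sympy sketch, not part of the original solution):

```python
# Sketch: for n = 3, det of the Vandermonde matrix equals prod_{i<j} (x_j - x_i).
from math import prod
from sympy import Matrix, symbols, expand

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]
V = Matrix([[1, x, x**2] for x in xs])
d = V.det()
formula = prod((xs[j] - xs[i]) for i in range(3) for j in range(i + 1, 3))
print(expand(d - formula))  # 0
```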


Problem: Prove the following

a) Find two bases in \mathbb{C}^4 such that the only vectors common to both are (0,0,1,1) and (1,1,0,0)

b) Find two bases in \mathbb{C}^4 that have no vectors in common, so that one of them contains the vectors (1,0,0,0) and (1,1,0,0) and the other one contains the vectors (1,1,1,0) and (1,1,1,1)


a) Consider \{(1,0,0,0)=x,(0,0,0,1)=y,(0,0,1,1)=z,(1,1,0,0)=w\}=B_1. To see that this set is l.i. we note that

\alpha_1 x+\alpha_2 y+\alpha_3 z+\alpha_4 w=\left(\alpha_1+\alpha_4,\alpha_4,\alpha_3,\alpha_2+\alpha_3\right)=(0,0,0,0)

clearly implies (from the second and third coordinates) that \alpha_3=\alpha_4=0, and the fact that \alpha_1=\alpha_2=0 quickly follows. Also, if v=(\xi_1,\xi_2,\xi_3,\xi_4) then taking \alpha_1=\xi_1-\xi_2,\alpha_2=\xi_4-\xi_3,\alpha_3=\xi_3, and \alpha_4=\xi_2 we can readily see that \alpha_1x+\alpha_2y+\alpha_3z+\alpha_4w=v. Thus, B_1 is, in fact, a basis for \mathbb{C}^4.

Using the same process we can see that \left\{(0,1,0,0),(0,0,1,0),(0,0,1,1),(1,1,0,0)\right\}=B_2 forms a basis, and B_1\cap B_2=\{(1,1,0,0),(0,0,1,1)\}

b) One can check that B_1 from part a) and \left\{(1,1,1,0),(1,1,1,1),(0,1,1,1),(1,1,0,1)\right\} work.
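All three claimed bases can be verified mechanically (a sympy sketch, not part of the original solution):

```python
# Sketch: each candidate basis has rank 4, B1 and B2 share exactly the two
# required vectors, and B1 and B3 share none.
from sympy import Matrix

B1 = [(1,0,0,0), (0,0,0,1), (0,0,1,1), (1,1,0,0)]
B2 = [(0,1,0,0), (0,0,1,0), (0,0,1,1), (1,1,0,0)]
B3 = [(1,1,1,0), (1,1,1,1), (0,1,1,1), (1,1,0,1)]  # the basis from part b)

ranks = [Matrix(B).rank() for B in (B1, B2, B3)]
common_a = set(B1) & set(B2)
common_b = set(B1) & set(B3)
print(ranks, sorted(common_a), common_b)
```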



a) Under what conditions on the scalar \xi do the vectors (1,1,1) and (1,\xi,\xi^2) form a basis of \mathbb{C}^3?

b) Under what conditions on the scalar \xi do the vectors (0,1,\xi),(\xi,0,1) and (\xi,1,1+\xi) form a basis of \mathbb{C}^3?


a) Note that if \xi=1 this set of vectors is surely not l.i. Thus, we may assume that \xi\ne 1. We note then that if

\alpha_1(1,1,1)+\alpha_2(1,\xi,\xi^2)=\left(\alpha_1+\alpha_2,\ \alpha_1+\alpha_2\xi,\ \alpha_1+\alpha_2\xi^2\right)=(0,0,0)

then the three equations

\begin{cases} \alpha_1+\alpha_2=0\quad &(1)\\\alpha_1+\alpha_2\xi=0 &(2)\\\alpha_1+\alpha_2\xi^2=0 & (3)\end{cases}

hold. Namely, we see that

(1)-(2)=\alpha_2(1-\xi)=0\implies \alpha_2=0

and thus by (1) we see that \alpha_1=0. Thus, if \xi\ne 1 these vectors are l.i. But, to be a basis they must also span \mathbb{C}^3; in particular, there would have to exist \alpha_1,\alpha_2\in\mathbb{C} with \alpha_1(1,1,1)+\alpha_2(1,\xi,\xi^2)=(0,0,1), i.e. such that

\begin{cases} \alpha_1+\alpha_2=0\quad &(1)\\\alpha_1+\alpha_2\xi=0 &(2)\\\alpha_1+\alpha_2\xi^2=1 & (3)\end{cases}

We see that (1)\implies \alpha_1=-\alpha_2 and so insertion of this into (3) shows that \displaystyle \alpha_2=\frac{1}{\xi^2-1} (if \xi=-1, then (1) and (3) already contradict one another directly), and insertion of this into (2) gives \displaystyle \alpha_1=\frac{-\xi}{\xi^2-1}. But, inserting these into (1) gives

\displaystyle 0=\alpha_1+\alpha_2=\frac{-\xi}{\xi^2-1}+\frac{1}{\xi^2-1}=\frac{1-\xi}{\xi^2-1}\ne0

which is a contradiction. Thus, these vectors can never be a basis of \mathbb{C}^3 (as must be the case, since two vectors span at most a two-dimensional subspace).

b) Note that (\xi,1,1+\xi)=(0,1,\xi)+(\xi,0,1), so the three vectors are l.d. for every \xi and thus never form a basis of \mathbb{C}^3.
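Part b) can also be checked symbolically (a sympy sketch, not part of the original solution): the determinant below is identically zero, so the three vectors are l.d. for every value of the scalar.

```python
# Sketch: det of the matrix with rows (0,1,s), (s,0,1), (s,1,1+s); s stands for xi.
# The third row is the sum of the first two, so the determinant vanishes identically.
from sympy import Matrix, symbols, expand

s = symbols('s')
A = Matrix([[0, 1, s],
            [s, 0, 1],
            [s, 1, 1 + s]])
d = expand(A.det())
print(d)  # 0
```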


Problem: If \mathcal{X} is the set consisting of the six vectors \{(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,1,0),(0,1,0,1),(0,0,1,1)\} find two different maximal independent subsets of \mathcal{X}.

Proof: It is tedious, but one can check that \{(1,1,0,0),(1,0,1,0),(1,0,0,1),(0,1,1,0)\} and \{(1,1,0,0),(0,1,1,0),(0,0,1,1),(1,0,1,0)\} are two such sets. (Note that each has four elements: the six vectors span \mathbb{C}^4, since for example \frac{1}{2}\left[(1,1,0,0)+(1,0,1,0)-(0,1,1,0)\right]=(1,0,0,0), and so a maximal independent subset of \mathcal{X} is a basis of \mathbb{C}^4.)
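Since the check really is tedious by hand, here is a brute-force sketch (sympy plus itertools; not part of the original solution) that enumerates the independent subsets:

```python
# Sketch: find all maximal independent subsets of the six vectors by brute force.
from itertools import combinations
from sympy import Matrix

X = [(1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1)]

def independent(vs):
    # A list of vectors is independent iff its rank equals its size.
    return Matrix(list(vs)).rank() == len(vs)

# Any independent 4-subset is maximal, since no 5 vectors in C^4 are independent.
maximal = [set(c) for c in combinations(X, 4) if independent(c)]
size5_independent = any(independent(c) for c in combinations(X, 5))
print(len(maximal), size5_independent)
```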


Problem: Let \mathcal{V} be a vector space. Prove that \mathcal{V} has a basis.

Proof: We can prove something even stronger. We can prove that given a set of l.i. vectors I\subseteq\mathcal{V} that there is a basis \mathfrak{B} of \mathcal{V} such that I\subseteq\mathfrak{B}. To do this, let

\mathfrak{P}=\left\{L\subseteq \mathcal{V}:L\text{ is l.i. and }I\subseteq L\right\}

To prove this we first note that \left(\mathfrak{P},\subseteq\right) is a partially ordered set. Also, given some chain \mathfrak{C}\subseteq\mathfrak{P} we can easily see that \displaystyle U=\bigcup_{C\in\mathfrak{C}}C is an upper bound. To see that U\in\mathfrak{P} we let \{v_1,\cdots,v_m\}\subseteq U (remember we’re dealing with the arbitrary notion of l.i., not necessarily the finite one). Then, by definition there exist C_1,\cdots,C_m\in\mathfrak{C} such that v_1\in C_1,\cdots,v_m\in C_m. Now, it clearly follows (from \mathfrak{C} being a chain) that

\{v_1,\cdots,v_m\}\subseteq C_1\cup\cdots\cup C_m=C_{m_0}\quad\text{for some }m_0\in\{1,\cdots,m\}

but, that means that \{v_1,\cdots,v_m\} is contained within an element of \mathfrak{P}; namely, it is a subset of a set of l.i. vectors, and thus l.i. Thus, U\in\mathfrak{P}. So, invoking Zorn’s lemma we see that \mathfrak{P} admits a maximal element \mathfrak{M}. We claim that \text{Span }\mathfrak{M}=\mathcal{V}. To see this, suppose not. Then, there exists some v\in\mathcal{V}-\text{Span }\mathfrak{M}. Now, let \{v_1,\cdots,v_n\} be a finite subset of \mathfrak{M}\cup\{v\}. Clearly if v\ne v_i,\text{ }i=1,\cdots,n then it is a l.i. set, and if v=v_{i_0} for some i_0, then we see that

\alpha_1 v_1+\cdots+\alpha_{i_0}v_{i_0}+\cdots+\alpha_n v_n=0\implies \alpha_{i_0}=0

since otherwise

\displaystyle v_{i_0}=v=\frac{-\alpha_1}{\alpha_{i_0}}v_1+\cdots+\frac{-\alpha_{i_0-1}}{\alpha_{i_0}}v_{i_0-1}+\frac{-\alpha_{i_0+1}}{\alpha_{i_0}}v_{i_0+1}+\cdots+\frac{-\alpha_n}{\alpha_{i_0}}v_n

which contradicts that v\notin\text{Span }\mathfrak{M}. But, \alpha_{i_0}=0 clearly implies (by the l.i. of \mathfrak{M}) that \alpha_{i}=0,\text{ }i=1,\cdots,n. Thus, \mathfrak{M}\cup\{v\} is l.i. and so belongs to \mathfrak{P}. But, this contradicts the maximality of \mathfrak{M}. It follows that no such v exists; in other words

\mathcal{V}-\text{Span }\mathfrak{M}=\varnothing\implies \mathcal{V}=\text{Span }\mathfrak{M}

(since \text{Span }\mathfrak{M}\subseteq\mathcal{V}). So, taking I=\varnothing (which is vacuously l.i.) we see that \mathcal{V} must admit a basis. \blacksquare


September 24, 2010 - Posted by | Fun Problems, Halmos, Linear Algebra | , , , , ,


  1. Old post, I know, but I’m working through some of the early exercises in Halmos (slowly) and just came across your site. I’m happy to see it, and it’s in my bookmarks.

    I think you’re mistaken on one of these, however. For #3 — “If it’s true that {x,y,z} are l.i., then so are {x+y,y+z,z+x}.” (From your note at the end, it seems like you were a bit unsure of the proof.) I am almost certain this is false. As a counterexample, think of the vector space consisting of 3-tuples over the most trivial field (containing 0 and 1 only, with 1+1=0, equivalent to the Z_2 field Halmos defines in the first batch of exercises). Let x=(1,0,0), y=(0,1,0), z=(0,0,1) — clearly l.i. We then have:


    The sum (x+y)+(y+z)+(z+x) then equals (0,0,0). The mistake in the proof comes in juggling the equalities — cancellation doesn’t give alpha_3=0, it gives alpha_3=-alpha_2=-alpha_3.

    Hope this helps. I spent most of a page trying to prove the statement true before thinking about this example. I look forward to checking some of your later proofs as I work forward, so the postings are very much appreciated.

    Comment by jh | March 10, 2011 | Reply

    • Dear Jackie,

      Thank you for pointing that out. I must admit that I was anxious to get to some of the later stuff and to be quite frank Halmos wasn’t my main goal and so I kind of rushed, but I have a compulsion to do problems.

      Now that I have padded my ego after making a stupid mistake I agree with what you said 100%…good job. 🙂


      Comment by drexel28 | March 10, 2011 | Reply

  2. Hello, Another comment.

    I think your solution to 4(b) is incorrect. (1) – (3) = (a_1 – a_3)x. The answer should be (I believe) x = 0, +sqrt(2), -sqrt(2). That would mean part (c) needs fixed too.

    Comment by tyler | March 27, 2012 | Reply
