
Interesting Linear Algebra and Matrix Analysis Problems


Point of post: In this post I will do some neat problems from basic linear algebra and matrix analysis. Feel free to comment on or critique any of them.

NOTE: It would be much appreciated if anyone could suggest some more linear algebra problems for me to do.

1.

Problem: Let \mathscr{V} be an n-dimensional F-space, denote its scalar multiplication by \ast_1, and let K be a subfield of F. Then:

a) F is a vector space over K with the usual addition and multiplication.

b) If \dim_K F=m, then \mathscr{V} is a vector space over K with scalar multiplication \ast_2=\left(\ast_1\right)_{\mid K\times\mathscr{V}}, and \dim_K\mathscr{V}=mn.

Proof:

a) This is immediate from the field axioms.

b) The fact that \mathscr{V} forms a vector space over K is clear. Now, to prove the claim about the dimension, let \{\beta_1,\cdots,\beta_m\} be a basis for F over K and \{x_1,\cdots,x_n\} a basis for \mathscr{V} over F. We claim that \{\beta_1x_1,\cdots,\beta_m x_1,\cdots,\beta_1x_n,\cdots,\beta_mx_n\} is a basis for \mathscr{V} over K. To see this, first suppose that \alpha_{1,1},\cdots,\alpha_{m,n}\in K are such that

\displaystyle \sum_{j=1}^{n}\sum_{i=1}^{m}\alpha_{i,j}\beta_ix_j=\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\alpha_{i,j}\beta_i\right)x_j=\bold{0}

Then, by the linear independence of \{x_1,\cdots,x_n\} over F we see that \displaystyle \sum_{i=1}^{m}\alpha_{i,1}\beta_i=\cdots=\sum_{i=1}^{m}\alpha_{i,n}\beta_i=0. But, since \{\beta_1,\cdots,\beta_m\} are linearly independent over K, each of these n sums gives \alpha_{1,j}=\cdots=\alpha_{m,j}=0,\text{ }j=1,\cdots,n, from where the linear independence of \{\beta_1x_1,\cdots,\beta_mx_n\} follows.

To see that \text{span}_K\text{ }\{\beta_1x_1,\cdots,\beta_mx_1,\cdots,\beta_1x_n,\cdots,\beta_mx_n\}=\mathscr{V} we may merely notice that since \text{span}_{F}\text{ }\{x_1,\cdots,x_n\}=\mathscr{V}, for any x\in\mathscr{V} there exist \gamma_1,\cdots,\gamma_n\in F such that \displaystyle x=\sum_{j=1}^{n}\gamma_j x_j. But, since \text{span}_K\text{ }\{\beta_1,\cdots,\beta_m\}=F we know that \displaystyle \gamma_j=\sum_{i=1}^{m}\alpha_{i,j}\beta_i,\text{ }j=1,\cdots,n where \alpha_{i,j}\in K,\text{ }i=1,\cdots,m,\text{ }j=1,\cdots,n. So, upon substitution and expansion one arrives at

\displaystyle x=\sum_{i\in[m]\text{ and }j\in[n]}\alpha_{i,j}\beta_ix_j

where [m]=\{1,\cdots,m\} and [n]=\{1,\cdots,n\}. Thus x\in\text{span}_K\text{ }\{\beta_ix_j\}, and so the claimed set is a basis and \dim_K\mathscr{V}=mn.
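For a concrete instance of b): take F=\mathbb{C} and K=\mathbb{R}, so that m=\dim_{\mathbb{R}}\mathbb{C}=2 with basis \{1,i\}. The theorem then says that any n-dimensional \mathbb{C}-space \mathscr{V} with basis \{x_1,\cdots,x_n\} is a 2n-dimensional \mathbb{R}-space with basis

\displaystyle \{x_1,ix_1,\cdots,x_n,ix_n\}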

2.

Problem: Let S\subseteq\mathbb{R}[x], regarded as a subset of the \mathbb{R}-space of all functions \mathbb{R}\to\mathbb{R}. Show that e^x\notin\text{span }S.

Proof: Since every element of \text{span }S is a polynomial, this is equivalent to showing that e^x is not a polynomial. But, this is clear since if p(x) is any polynomial then D^{\deg p+1}p=\bold{0}, whereas D^{\deg p+1}e^x=e^x\ne\bold{0} (where D is the differentiation operator).
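For instance, taking p(x)=1+3x^2 (so \deg p=2) in the argument above:

\displaystyle D^3p=\bold{0}\quad\text{while}\quad D^3e^x=e^x\ne\bold{0}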

3.

Problem: Let F be a field. Show that if one considers F as a vector space over itself, then there do not exist proper subspaces \mathscr{W}_1 and \mathscr{W}_2 such that F=\mathscr{W}_1\oplus \mathscr{W}_2.

Proof: The obvious way is to note that if F=\mathscr{W}_1\oplus \mathscr{W}_2 then 1=\dim_F F=\dim_F \mathscr{W}_1+\dim_F \mathscr{W}_2, and clearly this implies that, up to relabeling, \dim_F \mathscr{W}_1=1 and \dim_F \mathscr{W}_2=0. Thus, since \mathscr{W}_1 is a 1-dimensional subspace of F (which is itself 1-dimensional) we may conclude that \mathscr{W}_1=F. Similarly, since \dim_F \mathscr{W}_2=0 we may conclude that \mathscr{W}_2=\{\bold{0}\}. In particular, \mathscr{W}_1 is not proper, so no such decomposition exists.
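Note that this is special to the one-dimensional situation: already \mathbb{R}^2 decomposes into two proper subspaces, e.g.

\displaystyle \mathbb{R}^2=\text{span}\{(1,0)\}\oplus\text{span}\{(0,1)\}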

4.

Problem: Show that if \mathbb{R}^n is endowed with the usual topology and \mathscr{S} is a subspace of \mathbb{R}^n (with the usual operations) then \mathscr{S} is closed.
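A sketch of one standard argument: let P:\mathbb{R}^n\to\mathbb{R}^n denote the orthogonal projection onto the orthogonal complement \mathscr{S}^{\perp}. Then P is linear, hence continuous, and

\displaystyle \mathscr{S}=\ker P=P^{-1}\left(\{\bold{0}\}\right)

is the preimage of a closed set under a continuous map, and is therefore closed.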

5.

Problem: Let f:\mathbb{R}^n\to\mathbb{R} be additive (f(x+y)=f(x)+f(y)) and continuous. Prove that f is a linear functional.

Proof: First notice that it follows by induction that

f\left((mx_1,\cdots,mx_n)\right)=m f\left((x_1,\cdots,x_n)\right)

for all m\in\mathbb{N}. Furthermore, if m\in\mathbb{N} we see that

f\left((x_1,\cdots,x_n)\right)=f\left(\left(\frac{1}{m}mx_1,\cdots,\frac{1}{m}mx_n\right)\right)=m f\left(\left(\frac{1}{m}x_1,\cdots,\frac{1}{m}x_n\right)\right)

Thus,

\frac{1}{m} f\left((x_1,\cdots,x_n)\right)=f\left(\left(\frac{1}{m}x_1,\cdots,\frac{1}{m}x_n\right)\right)

So, by combining these two, and noting that additivity also gives f(\bold{0})=\bold{0} and f(-x)=-f(x) (so that the following extends to negative rationals), we see that if \frac{p}{q}\in\mathbb{Q} then

f\left(\left(\frac{p}{q}x_1,\cdots,\frac{p}{q}x_n\right)\right)=\frac{p}{q}f\left((x_1,\cdots,x_n)\right)

Thus, if  \left(q_1,\cdots,q_n\right)\in\mathbb{Q}^n  we see that

\begin{aligned}f\left((q_1,\cdots,q_n)\right) &=f\left((q_1,\cdots,0)+\cdots+(0,\cdots,q_n)\right)\\ &=f((q_1,\cdots,0))+\cdots+f((0,\cdots,q_n))\\ &=q_1f((1,\cdots,0))+\cdots+q_nf((0,\cdots,1))\end{aligned}

Thus, if

g\left((x_1,\cdots,x_n)\right)=x_1f\left((1,\cdots,0)\right)+\cdots+x_nf\left((0,\cdots,1)\right)

we see that f_{\mid\mathbb{Q}^n}=g_{\mid\mathbb{Q}^n}. But, this means that f and g are continuous maps which agree on \mathbb{Q}^n, a dense subset of the domain, and so they must agree on all of the domain; that is, f=g. But, g is evidently a linear functional and so the conclusion follows.
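Remark: the continuity hypothesis cannot be dropped. It is a standard fact that, using a Hamel basis of \mathbb{R} over \mathbb{Q}, one can construct additive maps \mathbb{R}\to\mathbb{R} that are not linear; additivity alone only yields the \mathbb{Q}-linearity established in the first half of the argument.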

6.

Problem: Let A\in\text{Mat}_{n\times n} be both normal and nilpotent. Then A=[0]

Proof: Since A is normal, the spectral theorem gives us that

\displaystyle A=U\text{ diag}(\lambda_1,\cdots,\lambda_n)U^{*}

for some unitary matrix U. Thus,

A^*=U\text{ diag}(\overline{\lambda_1},\cdots,\overline{\lambda_n})U^{*}\text{ }(1)

Thus,

AA^{*}=U\text{ diag }\left(|\lambda_1|^2,\cdots,|\lambda_n|^2\right)U^{*}

and so

\left(AA^{*}\right)^k=U\text{diag }(|\lambda_1|^{2k},\cdots,|\lambda_n|^{2k})U^{*}

But, since A and A^{*} commute (by normality) and A^k=[0] for some k (by nilpotence), we see that

\left(AA^{*}\right)^k=\left(A^k\right)\left(A^*\right)^k=[0]

so, upon comparison we find that

\text{diag}\left(|\lambda_1|^{2k},\cdots,|\lambda_n|^{2k}\right)=[0]

and so \lambda_1=\cdots=\lambda_n=0. Thus, returning to the spectral decomposition of A, we find that A=[0].
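As a quick numerical sanity check (a sketch in Python with NumPy; the particular matrix is my own choice of example), the standard 2\times 2 nilpotent Jordan block fails to be normal, consistent with the theorem:

import numpy as np

# The nonzero nilpotent block J cannot be normal, by the result above.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(J @ J, 0))          # True: J^2 = 0, so J is nilpotent
print(np.allclose(J @ J.T, J.T @ J))  # False: J J* != J* J, so J is not normal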

7.

Problem: Let A\in\text{Mat}_{n\times n} be such that A^k=[0] for some k>n. Show that A^r=[0] for some r\leqslant n.

Proof: Note that if \lambda\in\sigma\left(A\right) (where \sigma is the spectrum of a matrix) with associated eigenvector x, then

\bold{0}=A^kx=A^{k-1}\lambda x=\cdots=\lambda^kx

and since x\ne\bold{0} we may conclude that \lambda^k=0\implies \lambda=0. Thus, all the eigenvalues of A are 0. But, then, writing p for the number of Jordan blocks (so as not to clash with the exponent k), we know that

A=S\begin{bmatrix} J_{n_1}(0) & \cdots & 0\\ \vdots & \ddots & \vdots \\ 0 & \cdots & J_{n_p}(0)\end{bmatrix}S^{-1}

where

J_{n_\ell}(0)=\underbrace{\begin{bmatrix}0 & 1 &\cdots & 0\\ \vdots & \ddots & \ddots &\vdots\\ 0 & \cdots &\ddots&1\\ 0 &\cdots &0 &0\end{bmatrix}}_{n_{\ell}\times n_{\ell}}

is the Jordan block associated with the eigenvalue 0. But, since n_1+\cdots+n_p=n we evidently have that \displaystyle \max_{1\leqslant j\leqslant p}n_j\leqslant n. But, evidently if \displaystyle N=\max_{1\leqslant j\leqslant p}n_j we have that

J_{n_j}(0)^N=[0],\text{ }j=1,\cdots,p

and so

A^N=S\begin{bmatrix} J_{n_1}(0)^N & \cdots & 0\\ \vdots & \ddots & \vdots \\ 0 & \cdots & J_{n_p}(0)^N\end{bmatrix}S^{-1}=S[0]S^{-1}=[0]

and since N\leqslant n, taking r=N proves the claim.
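As a numerical sanity check (a sketch in Python with NumPy; the size n=5 and the random seed are my own choices), a random strictly upper triangular matrix is nilpotent with index at most n:

import numpy as np

rng = np.random.default_rng(0)
n = 5
# Strictly upper triangular => all eigenvalues are 0 => nilpotent.
A = np.triu(rng.standard_normal((n, n)), k=1)
print(np.allclose(np.linalg.matrix_power(A, n - 1), 0))  # generically False
print(np.allclose(np.linalg.matrix_power(A, n), 0))      # True: A^n = [0]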

8.

Problem: What are the possible Jordan canonical forms for A\in\text{Mat}_{n\times n} given that A^m=I,\text{ }m\geqslant 2?

Proof: We know that

A=SJS^{-1}

for some Jordan matrix J. Thus,

I=A^{m}=SJ^{m}S^{-1}\implies J^m=I

But, notice that if

\displaystyle J=\begin{bmatrix} J_{n_1}(\lambda_1) & \cdots &0\\ \vdots & \ddots &\vdots\\ 0 &\cdots &J_{n_k}\left(\lambda_k\right)\end{bmatrix}

that

J^m=\begin{bmatrix} \left[J_{n_1}(\lambda_1)\right]^m & \cdots & 0\\ \vdots & \ddots &\vdots\\ 0 & \cdots & \left[J_{n_k}(\lambda_k)\right]^m\end{bmatrix}

And thus, for J^m=I it’s evident that \left[J_{n_\ell}\left(\lambda_\ell\right)\right]^m=I,\text{ }\ell=1,\cdots,k. But, if n_{\ell}>1 it easily follows that \left[J_{n_{\ell}}(\lambda_{\ell})\right]^m\ne I (indeed, the (1,2) entry of \left[J_{n_{\ell}}(\lambda_{\ell})\right]^m is m\lambda_{\ell}^{m-1}, which is nonzero since \lambda_{\ell}^m=1 forces \lambda_{\ell}\ne 0). Thus, it follows that n_{\ell}=1,\text{ }\ell=1,\cdots,k so that J_{n_{\ell}}(\lambda_\ell)=\lambda_\ell. Thus, we may conclude that

A=S\begin{bmatrix}\lambda_1 & \cdots &0\\ \vdots &\ddots & \vdots\\ 0 &\cdots &\lambda_{k}\end{bmatrix}S^{-1}

where \lambda_1,\cdots,\lambda_k are the eigenvalues of A (listed with multiplicity), and thus m-th roots of unity. In other words, the possible Jordan canonical forms are exactly the diagonal matrices whose diagonal entries are m-th roots of unity.
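As a numerical sanity check (a sketch in Python with NumPy; the cyclic permutation matrix is my own choice of example), a matrix satisfying A^m=I indeed has only m-th roots of unity as eigenvalues:

import numpy as np

m = 5
# Cyclic permutation matrix: cyclically shifts the standard basis, so A^m = I.
A = np.roll(np.eye(m), 1, axis=0)
print(np.allclose(np.linalg.matrix_power(A, m), np.eye(m)))  # True: A^m = I
lam = np.linalg.eigvals(A)
print(np.allclose(lam**m, 1.0))  # True: each eigenvalue satisfies lam^m = 1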



October 18, 2010 · Posted in Fun Problems, Linear Algebra
