Abstract Nonsense

Crushing one theorem at a time

Halmos Sections 37 and 38: Matrices and Matrices of Linear Transformations (Pt. III)

Point of post: This is a continuation of this post.


Problem: Prove that if A and B are the complex matrices

\begin{pmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\end{pmatrix}


\begin{pmatrix}i & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -i & 0\\ 0 & 0 & 0 & 1\end{pmatrix}

respectively, and if C=AB-iBA then C^3+C^2+C=[0].

Proof: This is just plain old computation; we leave it to the reader.
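For the skeptical reader, the computation is easily checked numerically; a quick sketch using numpy (in fact one finds that C itself is already the zero matrix):

```python
import numpy as np

# The two given complex matrices
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=complex)
B = np.diag([1j, -1, -1j, 1])

# C = AB - iBA
C = A @ B - 1j * (B @ A)

# C^3 + C^2 + C is the zero matrix (indeed C itself vanishes)
print(np.allclose(np.linalg.matrix_power(C, 3) + C @ C + C, 0))  # True
```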



Problem: If A,B\in\text{End}\left(\mathscr{V}\right) for some n-dimensional F-space \mathscr{V}, and if AB=\bold{0}, does it follow that BA=\bold{0}?

Proof: No, this is definitely not true. Take A,B to be the unique members of \text{End}\left(\mathbb{R}^2\right) such that

\left\{\begin{array}{c}A((1,0))=(0,1)\\ A((0,1))=(0,1)\end{array}\right\}\text{ and }\left\{\begin{array}{c}B((1,0))=(1,0)\\ B((0,1))=(0,0)\end{array}\right\}

We see then that

\left\{\begin{array}{c}(BA)((1,0))=B((0,1))=(0,0)\\ (BA)((0,1))=B((0,1))=(0,0)\end{array}\right\}

so that BA=\bold{0}. That said, (AB)((1,0))=A((1,0))=(0,1) so that AB\ne \bold{0}. (Interchanging the names of A and B then yields a pair with AB=\bold{0} but BA\ne\bold{0}, answering the question as stated.)
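In matrix form (columns are the images of the standard basis vectors), the counterexample can be checked directly; a small numpy sketch:

```python
import numpy as np

# Columns are the images of (1,0) and (0,1) under the maps A and B above
A = np.array([[0, 0],
              [1, 1]])   # A(1,0) = (0,1),  A(0,1) = (0,1)
B = np.array([[1, 0],
              [0, 0]])   # B(1,0) = (1,0),  B(0,1) = (0,0)

print(B @ A)  # the zero matrix: BA = 0
print(A @ B)  # nonzero:         AB != 0
```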



Problem: What happens to the matrix of a linear transformation T on an n-dimensional F-space \mathscr{V} with respect to the ordered basis \mathcal{B}=(x_1,\cdots,x_n) when the matrix is computed with respect to \pi\mathcal{B}=(x_{\pi(1)},\cdots,x_{\pi(n)}) for \pi\in S_n?

Proof: Both the rows and the columns are permuted by \pi: the (i,j) entry of \left[T\right]_{\pi\mathcal{B}} is the coefficient of x_{\pi(i)} in T(x_{\pi(j)}), so if \left[T\right]_{\mathcal{B}}=[\alpha_{i,j}] then \left[T\right]_{\pi\mathcal{B}}=[\alpha_{\pi(i),\pi(j)}]. In column form (coordinates taken with respect to \pi\mathcal{B}),

\left[T\right]_{\pi\mathcal{B}}=\left(\begin{array}{c|c|c} & & \\ \left[T(x_{\pi(1)})\right]_{\pi\mathcal{B}} & \cdots & \left[T(x_{\pi(n)})\right]_{\pi\mathcal{B}}\\ & & \end{array}\right)
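Equivalently, if P_\pi denotes the permutation matrix whose j-th column is e_{\pi(j)}, then \left[T\right]_{\pi\mathcal{B}}=P_\pi^{\top}\left[T\right]_{\mathcal{B}}P_\pi. A quick numerical sanity check (matrix and permutation chosen arbitrarily):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.integers(-5, 5, size=(n, n)).astype(float)  # [T]_B, an arbitrary matrix
perm = np.array([2, 0, 3, 1])                       # pi in one-line notation (0-indexed)

# Matrix of T with respect to the permuted basis: entry (i,j) = M[pi(i), pi(j)]
M_perm = M[np.ix_(perm, perm)]

# Equivalently, conjugation by the permutation matrix P with columns e_{pi(j)}
P = np.eye(n)[:, perm]
print(np.allclose(M_perm, P.T @ M @ P))  # True
```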




Problem: a) Suppose that \mathscr{V} is an n-dimensional F-space with basis \{x_1,\cdots,x_n\}. Suppose that \alpha_1,\cdots,\alpha_n\in F are distinct. If A\in\text{End}\left(\mathscr{V}\right) is such that A(x_j)=\alpha_j x_j,\text{ }j\in[n] and B\in\text{End}\left(\mathscr{V}\right) is such that AB=BA, prove that there exist scalars \beta_1,\cdots,\beta_n such that B(x_j)=\beta_j x_j.

b) Prove that if B\in\text{End}\left(\mathscr{V}\right) and BA=AB for every A\in\text{End}\left(\mathscr{V}\right) then B=\beta\mathbf{1} for some \beta\in F.


Proof: a) We note that for each x_i there exist scalars \beta_{i,j},\text{ }j\in[n] such that

\displaystyle B(x_i)=\sum_{j=1}^{n}\beta_{i,j} x_j


Thus,

\displaystyle A\left(B(x_i)\right)=\sum_{j=1}^{n}\alpha_j \beta_{i,j}x_j

But, this must be equal to

\displaystyle B(A(x_i))=B(\alpha_i x_i)=\sum_{j=1}^{n}\alpha_i \beta_{i,j}x_j

or, upon subtraction

\displaystyle \sum_{j=1}^{n}\beta_{i,j}(\alpha_j -\alpha_i)x_j=\bold{0}

But, since \{x_1,\cdots,x_n\} is linearly independent this implies that \beta_{i,j}(\alpha_j-\alpha_i)=0 for j\in[n]. But, since \alpha_j\ne \alpha_i for j\ne i we may conclude that \beta_{i,j}=0 for j\ne i. Therefore, looking back we see that this implies that
\displaystyle B(x_i)=\beta_{i,i}x_i
Since i\in[n] was arbitrary the conclusion follows.
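The key identity in the argument — that for diagonal A the commutator has entries (AB-BA)_{i,j}=(\alpha_i-\alpha_j)\beta_{i,j}, forcing \beta_{i,j}=0 off the diagonal when AB=BA — is easy to confirm numerically; a sketch with arbitrarily chosen data:

```python
import numpy as np

rng = np.random.default_rng(1)
alphas = np.array([1.0, 2.0, 5.0, -3.0])  # distinct scalars
A = np.diag(alphas)
B = rng.standard_normal((4, 4))           # an arbitrary B

# (AB - BA)_{ij} = (alpha_i - alpha_j) * beta_{ij}, so AB = BA forces
# beta_{ij} = 0 whenever i != j, i.e. B is diagonal in the eigenbasis of A
comm = A @ B - B @ A
print(np.allclose(comm, (alphas[:, None] - alphas[None, :]) * B))  # True
```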

b) See here.



Problem: If \{x_1,\cdots,x_k\} and \{y_1,\cdots,y_k\} are linearly independent sets of vectors in an n-dimensional F-space \mathscr{V}, then there exists some T\in\text{GL}\left(\mathscr{V}\right) such that T(x_j)=y_j for j=1,\cdots,k.

Proof: Extend \{x_1,\cdots,x_k\} to a basis \{x_1,\cdots,x_k,x_{k+1},\cdots,x_n\} and extend \{y_1,\cdots,y_k\} to a basis \{y_1,\cdots,y_k,y_{k+1},\cdots,y_n\}. Then, let T be the unique element of \text{End}\left(\mathscr{V}\right) such that T(x_\ell)=y_{\ell} for \ell\in[n]. Since T carries a basis onto a basis, our previous characterization of isomorphisms lets us conclude that T\in\text{GL}\left(\mathscr{V}\right).
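The basis extension in the proof can be carried out mechanically, e.g. by greedily appending standard basis vectors and keeping only those that raise the rank; a rough numpy sketch over \mathbb{R}^3, with data chosen purely for illustration:

```python
import numpy as np

def extend_to_basis(X):
    """Extend the linearly independent columns of X (n x k) to an n x n basis
    by greedily appending standard basis vectors that increase the rank."""
    n, _ = X.shape
    cols = list(X.T)
    for e in np.eye(n):
        if np.linalg.matrix_rank(np.column_stack(cols + [e])) > len(cols):
            cols.append(e)
    return np.column_stack(cols)

# x_1, x_2 and y_1, y_2: two linearly independent pairs in R^3
X = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
Y = np.array([[2.0, 0.0], [0.0, 1.0], [0.0, 1.0]])

Xb, Yb = extend_to_basis(X), extend_to_basis(Y)
T = Yb @ np.linalg.inv(Xb)       # T maps the extended x-basis onto the y-basis

print(np.allclose(T @ X, Y))     # True: T(x_j) = y_j
print(np.linalg.det(T) != 0)     # True: T is invertible
```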



Problem: If a matrix A=[\alpha_{i,j}]\in\text{Mat}_n\left(F\right) is such that \alpha_{i,i}=0,\text{ }i\in[n] then there exists B=[\beta_{i,j}],C=[\gamma_{i,j}]\in\text{Mat}_n\left(F\right) such that A=BC-CB.

Proof: Suppose that F contains at least n distinct elements (as is certainly the case over \mathbb{R} or \mathbb{C}, where Halmos works). Choosing distinct scalars \beta_1,\cdots,\beta_n\in F we may define \beta_{i,j}=\beta_i\delta_{i,j} (where \delta_{i,j} is [as usual] the Kronecker delta symbol) and

\displaystyle \gamma_{i,j}=\begin{cases}\frac{\alpha_{i,j}}{\beta_i-\beta_j} & \mbox{if}\quad i\ne j\\ 0 & \mbox{if}\quad i=j\end{cases}

We note then that the general term of BC is

\displaystyle \sum_{r=1}^{n}\beta_i\delta_{i,r}\gamma_{r,j}=\beta_{i}\gamma_{i,j}=\begin{cases}\beta_i\frac{\alpha_{i,j}}{\beta_i-\beta_j} & \mbox{if}\quad i\ne j\\ 0 & \mbox{if}\quad i=j\end{cases}

and the general term of CB is

\displaystyle \sum_{r=1}^{n}\gamma_{i,r}\beta_r \delta_{r,j}=\gamma_{i,j}\beta_j=\begin{cases}\beta_j \frac{\alpha_{i,j}}{\beta_i-\beta_j} & \mbox{if}\quad i\ne j\\ 0 & \mbox{if} \quad i=j\end{cases}

thus, the general term of BC-CB is

\begin{cases}\alpha_{i,j} & \mbox{if} \quad i\ne j\\ 0 & \mbox{if}\quad i=j\end{cases}=\alpha_{i,j}

(the case i=j agreeing precisely because \alpha_{i,i}=0), from where the conclusion follows.
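The construction can be checked end to end; a numpy sketch with a randomly chosen zero-diagonal A:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
np.fill_diagonal(A, 0.0)                   # alpha_{ii} = 0 as hypothesized

betas = np.arange(1, n + 1, dtype=float)   # any n distinct scalars
B = np.diag(betas)

# gamma_{ij} = alpha_{ij} / (beta_i - beta_j) off the diagonal, 0 on it
diff = betas[:, None] - betas[None, :]
np.fill_diagonal(diff, 1.0)                # dummy value to avoid division by zero
C = A / diff
np.fill_diagonal(C, 0.0)

print(np.allclose(B @ C - C @ B, A))       # True: A = BC - CB
```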



1. Halmos, Paul R. Finite-Dimensional Vector Spaces. New York: Springer-Verlag, 1974. Print.


December 19, 2010 - Posted by | Fun Problems, Halmos, Linear Algebra

