## Halmos Sections 37 and 38: Matrices and Matrices of Linear Transformations (Pt. IV)

**Point of post:** This is a continuation of this post.

*Remark:* For some strange reason the fourth post (this one) and the fifth (the previous one) got mixed up in the order of posting. The numbering is correct: this is the fourth post in this sequence, and the one preceding it is the fifth.

## Halmos Sections 32 and 33: Linear Transformations and Transformations as Vectors (Pt. II)

**Point of post:** This is a continuation of this post in an effort to answer the questions at the end of sections 32 and 33 in Halmos’s book.

## Tensor Product

**Point of post:** In this post I will discuss the very basic, and simple-minded, definition of the *tensor product* of finite-dimensional vector spaces and its consequences, as outlined in Halmos (viz. reference 1).

*Nota Bene:* The following may seem a far cry from the typical definition of the tensor product as a quotient of the free vector space by the usual equivalence relation. That said, the following gives a fairly large amount of theoretical bang for a fairly small complexity buck.

**Motivation**

In the last post we discussed how, given vector spaces $U$ and $V$ over a field, there is a canonical way to form the vector space of all bilinear forms on their direct sum. But, as is fast becoming a motif in our studies, we begin with a vector space and then study its dual space. For the case of the space of bilinear forms we make a small notational change: instead of using the usual dual-space notation, we denote the dual of the space of bilinear forms on $U \oplus V$ by $U \otimes V$ and call it the tensor product of $U$ and $V$.

## Halmos Chapter one Sections 15, 16 and 17: Dual Bases, Reflexivity, and Annihilators (Part I)

**Note:** For a vector space over a field I will freely switch between the two notations for the dual space, depending on which fits better.

## Halmos Chapter one Sections 13 and 14: Linear Functionals and Bracket Notation

1.

**Problem:** Consider the set of complex numbers as a vector space over . Suppose that for each in (where ) the function is given by

**a)**

**b)**

**c)**

**d)**

**e)**

In which cases are these linear functionals?

## Halmos Chapter one Sections 5, 6, and 7: Linear Dependence, Linear Combinations, and Bases

1.

**Problem:**

**a)** Prove that the four vectors $x = (1, 0, 0)$, $y = (0, 1, 0)$, $z = (0, 0, 1)$, and $u = (1, 1, 1)$ in $\mathbb{C}^3$ are linearly dependent, but any three of them are linearly independent.

**b)** If the vectors $x$, $y$, $z$, and $u$ in $\mathcal{P}$ are given by $x(t) = 1$, $y(t) = t$, $z(t) = t^2$, and $u(t) = 1 + t + t^2$, prove that $x$, $y$, $z$, and $u$ are linearly dependent, but any three of them are linearly independent.

**Proof:**

**a)** Clearly all four are l.d. (linearly dependent) since $x + y + z - u = 0$. Now, clearly $x, y, z$ are l.i. (linearly independent), and so it remains to show that any three involving $u$ are l.i. We do this only for the first case, $\{x, y, u\}$, since the others are done similarly. So, suppose that

$$\alpha x + \beta y + \gamma u = (\alpha + \gamma, \beta + \gamma, \gamma) = (0, 0, 0);$$

comparison of the third coordinates tells us that $\gamma = 0$, and thus $\alpha = \beta = 0$, from which l.i. follows.
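The dependence and the three-at-a-time independence can be spot-checked numerically; a minimal sketch, assuming the vectors are Halmos's $x = (1,0,0)$, $y = (0,1,0)$, $z = (0,0,1)$, $u = (1,1,1)$, and using the fact that three vectors in $\mathbb{C}^3$ are independent iff the determinant with those vectors as rows is nonzero:

```python
from itertools import combinations

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row;
    # the rows of m are the three vectors being tested.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

x, y, z, u = (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)

# All four are dependent: u = x + y + z.
assert all(u[k] == x[k] + y[k] + z[k] for k in range(3))

# Any three are independent: every triple has nonzero determinant.
for triple in combinations((x, y, z, u), 3):
    assert det3(triple) != 0
```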

**b)** Clearly they are l.d. since $u = x + y + z$. We can prove that any three are l.i. in much the same way as we did for the tuples (clearly this is no coincidence; the variables, when treated as formal objects, are glorified placeholders for the coefficients), and so we once again prove just one case. Namely, if

then in particular as well as that

Now, noting that for all we in particular may note that it’s true for all . And, for such we see that

and in particular, . Finally, repeating this process again shows that from where it all cascades back to show that

2.

**Problem:** Prove that if $\mathbb{R}$ is considered as a vector space over $\mathbb{Q}$, then a necessary and sufficient condition that the vectors $1$ and $\xi$ in $\mathbb{R}$ be l.i. is that the real number $\xi$ is irrational.

**Proof:** This is evident. Suppose that $\xi$ is irrational but $\{1, \xi\}$ is not l.i.; then there exist nonzero rationals $\alpha, \beta$ for which $\alpha \cdot 1 + \beta \xi = 0$ (neither can be zero, since one being zero implies the other must be zero, considering that $1 \neq 0$ and $\xi \neq 0$). But this in particular means that $\xi = -\alpha/\beta$ is rational, which is a contradiction. Conversely, suppose that $\{1, \xi\}$ is l.i. but $\xi$ is rational. Then, there exist integers $p, q$ with $q \neq 0$ such that $\xi = p/q$. Clearly $p \cdot 1 + (-q)\xi = 0$, and so this violates the l.i. of $\{1, \xi\}$.

3.

**Problem:** Is it true that if $x$, $y$, and $z$ are l.i. vectors, then so are $x + y$, $y + z$, and $z + x$?

**Proof:** Yes. Note that if

$$\alpha(x + y) + \beta(y + z) + \gamma(z + x) = (\alpha + \gamma)x + (\alpha + \beta)y + (\beta + \gamma)z = 0,$$

then the l.i. of $x, y, z$ tells us that the system of equations

$$\alpha + \gamma = \alpha + \beta = \beta + \gamma = 0$$

holds. But,

$$2\alpha = (\alpha + \gamma) + (\alpha + \beta) - (\beta + \gamma) = 0,$$

upon which cancellation gives $\alpha = 0$, from which the rest follows.

*Remark:* This clearly generalizes, but it’s too late. I’ll come back later and think about it.
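As a follow-up to the remark: the natural generalization to $n$ vectors and their cyclic sums $x_1 + x_2, x_2 + x_3, \ldots, x_n + x_1$ can be probed by computing the determinant of the coefficient matrix. A sketch (my own experiment, not from Halmos) suggesting that, over $\mathbb{Q}$, the construction preserves independence exactly when $n$ is odd:

```python
from fractions import Fraction

def det(m):
    # Exact determinant over Q via Gaussian elimination.
    m = [[Fraction(v) for v in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        d *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return sign * d

def cyclic_sum_matrix(n):
    # Row i holds the coefficients of x_i + x_{i+1 (mod n)} in terms of x_0, ..., x_{n-1}.
    return [[1 if j in (i, (i + 1) % n) else 0 for j in range(n)] for i in range(n)]

assert det(cyclic_sum_matrix(3)) == 2   # x+y, y+z, z+x: independent
assert det(cyclic_sum_matrix(4)) == 0   # with four vectors the cyclic sums are dependent
assert det(cyclic_sum_matrix(5)) == 2   # odd n works again
```

(The eigenvalues of this circulant matrix are $1 + \omega$ over the $n$-th roots of unity $\omega$, whose product is $1 - (-1)^n$, which explains the odd/even split.)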

4.

**Problem:**

**a) **Under what conditions on the scalar are the vectors and in l.d.?

**b) **Under what conditions on the scalar are the vectors and in l.d.?

**c) **What is the answer to b) for

**Solution:**

**a) **We first note that

So, we are looking to see when this expression equals zero. Clearly, setting this equal to zero gives us

Now, a quick check would show that otherwise they would have to be zero. So, we may compare these two equations and arrive at . Thus, they are l.d. precisely when they coincide.

**b) **We note first that

thus, if we set this equal to zero we get the following three equations

We then note that

So, if we assume that we arrive at . Thus and so says that and so it follows that . Thus, the only possibility is that . Checking this we find that and are linearly dependent.

**c) **The above implies that the same conclusion must be drawn.

5.

**Problem: **Prove the following

**a) **The vectors and in are l.d. implies

**b)** Find a similar necessary condition for the l.d. of three vectors in .

**c) **Is there a set of three l.i. vectors in ?

**Proof:**

**a) **We first note that

and thus we are afforded the equations

or in matrix form

Now, if were invertible then

and thus the vectors are l.i. It follows that is not invertible, or equivalently

**b)** For three vectors we follow the same line of logic and note that the determinant formed by having the three vectors as rows must be zero.

**c)** Yes, what about and ?
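The two-vector criterion from part (a) is easy to test mechanically; a sketch assuming the generic setup of two vectors $(x_1, x_2)$ and $(y_1, y_2)$ in $\mathbb{C}^2$, which are dependent exactly when $x_1 y_2 - x_2 y_1 = 0$:

```python
def det2(x, y):
    # 2x2 determinant with the two vectors as rows; it vanishes exactly
    # when the vectors are linearly dependent.
    return x[0] * y[1] - x[1] * y[0]

# A dependent pair in C^2: (i, -1) = i * (1, i), so the determinant is zero.
assert det2((1, 1j), (1j, -1)) == 0

# An independent pair has nonzero determinant.
assert det2((1, 0), (1j, 1)) == 1
```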

6.

**Problem:** Prove the following

**a)** Under what conditions on the scalars are the vectors l.d.?

**b) **Under what conditions on the scalars and are the vectors and l.d. in ?

**c) **Generalize to

**Proof:**

**a) **By the last problem we see that they are l.d. iff

**b) **By the last problem we see they are l.d. iff

**c) **It is clear following the same logic that are l.d. iff

in other words, iff

7.

**Problem: **Prove the following

**a) **Find two bases in such that the only vectors common to both are and

**b)** Find two bases in that have no vectors in common, so that one of them contains the vectors and and the other one contains the vectors and

**Proof:**

**a) **Consider . To see that this set is l.i. we note that

clearly implies that and the fact that quickly follows. Also, if then taking and we can readily see that . Thus, is, in fact, a basis for .

Using the same process we can see that forms a basis, and

**b) **One can check that from last time and work.

8.

**Problem:**

**a) **Under what conditions on the scalar do the vectors and form a basis of ?

**b) **Under what conditions on the scalar do the vectors and form a basis of

**Proof:**

**a)** Note that if , this set of vectors is surely not l.i. Thus, we may assume that . We note then that if

then the three equations

hold. Namely, we see that

and thus by we see that . Thus, if , these vectors are l.i. But, suppose that were such that

We see that and so insertion of this into shows that , and insertion of this into gives that . But, inserting these into gives

which is a contradiction. Thus, these vectors can never be a basis.

9.

**Problem:** If is the set consisting of the six vectors , find two different maximal independent subsets of .

**Proof:** It is tedious, but one can prove that and are two such sets.

10.

**Problem:** Let be a vector space. Prove that has a basis.

**Proof:** We can prove something even stronger: given a set of l.i. vectors, there is a basis of such that . To do this, let

To prove this we first note that is a partially ordered set. Also, given some chain we can easily see that is an upper bound. To see that we let (remember we’re dealing with the arbitrary notion of l.i., not necessarily the finite one). Then, by definition there exists such that . Now, it clearly follows (from being a chain) that

but, that means that is contained within an element of , namely it is a subset of a set of l.i. vectors, and thus l.i. Thus, . So, invoking Zorn’s lemma we see that admits a maximal element . We claim that . To see this, suppose not. Then, there exists some . Now, let be a finite subset of . Clearly if then it is a l.i. set, and if for some , then we see that

since otherwise

which contradicts that . But, clearly implies (by the l.i. of ) that . Thus, is l.i. and so is contained in . But, this contradicts the maximality of . It follows that no such exists, in other words

(since ). So, taking we see that must admit a basis.
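The Zorn's-lemma argument is nonconstructive, but in finite dimensions the same maximality idea becomes an algorithm: keep adjoining vectors outside the current span until none remain. A sketch over $\mathbb{Q}^3$, with a hypothetical starting independent set:

```python
from fractions import Fraction

def rank(rows):
    # Exact rank over Q via Gaussian elimination.
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def extend_to_basis(indep, n):
    # Greedily append standard basis vectors that enlarge the span,
    # mirroring the maximality argument in the finite-dimensional case.
    basis = list(indep)
    for k in range(n):
        e_k = [1 if j == k else 0 for j in range(n)]
        if rank(basis + [e_k]) > rank(basis):
            basis.append(e_k)
    return basis

b = extend_to_basis([[1, 1, 0], [0, 1, 1]], 3)
assert len(b) == 3 and rank(b) == 3   # the two starting vectors extend to a basis of Q^3
```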

## Halmos Sections 2, 3, and 4

1.

**Problem:**

Prove that if is a vector space over the field , then for any and the following are true:

a)

b)

c)

d)

e) If then either or

f)

g)

**Proof:**

a) This follows from and the commutativity of

b) We merely note that and so

c) We merely note that and thus by cancellation

d) We see that

e) This is identical to the similar problem in the last post.

2.

**Problem:** If $p$ is a prime, then $\mathbb{Z}_p^n$ is a vector space over $\mathbb{Z}_p$. How many vectors are there in this vector space?

**Proof:** This is equivalent to asking how many functions there are from an $n$-element set to $\mathbb{Z}_p$, which is $p^n$.
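The counting argument can be confirmed by brute force; a sketch with the illustrative choice $p = 5$, $n = 3$:

```python
from itertools import product

# A vector in Z_p^n is an n-tuple of elements of Z_p -- equivalently, a function
# from an n-element set into Z_p -- so there are p^n of them.
p, n = 5, 3
vectors = list(product(range(p), repeat=n))
assert len(vectors) == p ** n   # 125 vectors in Z_5^3
```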

3.

**Problem:** Let be the set of all ordered pairs of real numbers. If and are elements of , write , , and . Is it a vector space with respect to these definitions?

**Proof:** It is not. Notice that and yet , which contradicts e) in problem one.

**4.**

**Problem:** Sometimes a subset of a vector space is itself a vector space. Consider, for example, the vector space $\mathbb{C}^3$ and the subsets of $\mathbb{C}^3$ consisting of those vectors $(\xi_1, \xi_2, \xi_3)$ such that

a) $\xi_1$ is real

b) $\xi_1 = 0$

c) either $\xi_1 = 0$ or $\xi_2 = 0$

d) $\xi_1 + \xi_2 = 0$

e) $\xi_1 + \xi_2 = 1$

**Proof:**

a) This clearly isn’t (remembering that we’re considering $\mathbb{C}^3$ as a vector space over $\mathbb{C}$) since $(1, 0, 0)$ is in the set but $i(1, 0, 0) = (i, 0, 0)$ is not.

b) It suffices to show that the set contains the zero vector, is closed under addition, and is closed under scalar multiplication, since all the attributes of a vector space (concerning the addition and scalar multiplication) are inherited. But, all three are glaringly obvious. So yes, this is a subspace.

c) No; note that $(1, 0, 0)$ and $(0, 1, 0)$ are both in the set, but $(1, 0, 0) + (0, 1, 0) = (1, 1, 0)$ is not.

d) Clearly $(0, 0, 0)$ is in the set. Also, if $x = (\xi_1, \xi_2, \xi_3)$ and $y = (\eta_1, \eta_2, \eta_3)$ are in the set we have that $x + y$ is, since $(\xi_1 + \eta_1) + (\xi_2 + \eta_2) = (\xi_1 + \xi_2) + (\eta_1 + \eta_2) = 0$. Lastly, if $\alpha \in \mathbb{C}$ we see that $\alpha x$ is in the set, since $\alpha\xi_1 + \alpha\xi_2 = \alpha(\xi_1 + \xi_2) = 0$.

e) No; consider that $(1, 0, 0)$ and $(0, 1, 0)$ are both in the set, but their sum $(1, 1, 0)$ has $\xi_1 + \xi_2 = 2 \neq 1$.
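The closure tests that separate the subspaces from the non-subspaces are easy to mechanize. A sketch, with the caveat that the membership predicates below encode my reading of conditions (c) and (d):

```python
# A subspace must be closed under addition; condition (c) ("xi_1 = 0 or xi_2 = 0")
# fails this, while condition (d) ("xi_1 + xi_2 = 0") survives it.
def in_c(v):
    return v[0] == 0 or v[1] == 0

def in_d(v):
    return v[0] + v[1] == 0

def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

x, y = (1, 0, 0), (0, 1, 0)
assert in_c(x) and in_c(y) and not in_c(add(x, y))   # (c) is not closed under +

u, w = (1, -1, 0), (2j, -2j, 5)
assert in_d(u) and in_d(w) and in_d(add(u, w))       # (d) is closed for these samples
```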

5.

**Problem:** Consider the vector space $\mathcal{P}$ (the set of all polynomials with complex coefficients) and the subsets consisting of those vectors $x$ for which

a) $x$ has degree $3$

b) $2x(0) = x(1)$

c) $x(t) \geq 0$ whenever $0 \leq t \leq 1$

d) $x(t) = x(1 - t)$ for all $t$

Which of them are vector spaces?

**Proof:**

a) This is not since the zero function isn’t in it.

b) This is.

c) This isn’t, since $x(t) = t$ is in the set but $(-1)x$ is not.

d) This is; the condition is plainly preserved by sums and scalar multiples.

## Halmos Chapter One, Section 1: Fields

1.

**Problem:** Almost all the laws of elementary arithmetic are consequences of the axioms defining a field. Prove, in particular, that if $F$ is a field, and if $\alpha$, $\beta$, and $\gamma$ belong to $F$, then the following relations hold.

a) $0 + \alpha = \alpha$

b) If $\alpha + \beta = \alpha + \gamma$, then $\beta = \gamma$

c) $\alpha + (\beta - \gamma) = (\alpha + \beta) - \gamma$

d) $\alpha \cdot 0 = 0 \cdot \alpha = 0$

e) $(-1)\alpha = -\alpha$

f) $(-\alpha)(-\beta) = \alpha\beta$

g) If $\alpha\beta = 0$, then either $\alpha = 0$ or $\beta = 0$

**Proof:**

a) By axiom 3 (A3) we know that $\alpha + 0 = \alpha$, and by the commutativity described in A1 we conclude that $0 + \alpha = \alpha$.

b) We see that if $\alpha + \beta = \alpha + \gamma$ then $-\alpha + (\alpha + \beta) = -\alpha + (\alpha + \gamma)$, which by associativity and commutativity says that $(-\alpha + \alpha) + \beta = (-\alpha + \alpha) + \gamma$, which then implies that $\beta = \gamma$.

c) We use associativity and commutativity to rewrite our equations as

d) By commutativity of the multiplication it suffices to note that $0 + 0 = 0$, and thus $\alpha \cdot 0 = \alpha(0 + 0) = \alpha \cdot 0 + \alpha \cdot 0$, and by cancellation (using b)) we arrive at $\alpha \cdot 0 = 0$.

e) We merely note that $\alpha + (-1)\alpha = (1 + (-1))\alpha = 0 \cdot \alpha = 0$, and thus $(-1)\alpha = -\alpha$.

f) We use e) to say that $(-\alpha)(-\beta) = (-1)(-1)\alpha\beta$. Then, we notice that $(-1)(-1) + (-1) = (-1)((-1) + 1) = 0$, from which it follows that $(-1)(-1) = -(-1) = 1$, and thus $(-\alpha)(-\beta) = \alpha\beta$, and the conclusion follows.

g) Suppose that $\alpha\beta = 0$ but $\alpha \neq 0$ and $\beta \neq 0$; then since $\alpha^{-1}$ exists we see that $\beta = \alpha^{-1}(\alpha\beta) = \alpha^{-1} \cdot 0 = 0$, which contradicts our choice of $\beta$.

2.

**Problem:**

a) Is the set of all positive integers a field?

b) What about the set of all integers?

c) Can the answers to both these question be changed by re-defining addition or multiplication (or both)?

**Proof:**

a) No; we merely note that there is no additive identity among the positive integers.

b) No; there is no multiplicative inverse for $2$, say.

c) Yes. But before we justify this, let us first prove a (useful) lemma.

**Lemma:** Let $F$ be a field with $|F| = \kappa$. Then, given any set $S$ with $|S| = \kappa$, there are operations for which $S$ is a field.

**Proof:** By virtue of their equal cardinalities there exists some bijection $f: F \to S$. Then, for $s, t \in S$ define

$$s \oplus t = f\big(f^{-1}(s) + f^{-1}(t)\big)$$

and

$$s \otimes t = f\big(f^{-1}(s) \cdot f^{-1}(t)\big)$$

We prove that $S$ with these operations is a field. We first note that $\oplus$ and $\otimes$ take values in $S$, and so they are legitimate binary operations. We now begin to show that all the field axioms are satisfied.

1) Addition is commutative. This is clear since

$$s \oplus t = f\big(f^{-1}(s) + f^{-1}(t)\big) = f\big(f^{-1}(t) + f^{-1}(s)\big) = t \oplus s$$

2) Addition is associative. This is also clear since

$$(s \oplus t) \oplus u = f\Big(f^{-1}\big(f\big(f^{-1}(s) + f^{-1}(t)\big)\big) + f^{-1}(u)\Big)$$

which is equal to

$$f\big(\big(f^{-1}(s) + f^{-1}(t)\big) + f^{-1}(u)\big) = f\big(f^{-1}(s) + \big(f^{-1}(t) + f^{-1}(u)\big)\big)$$

which finally is equal to

$$s \oplus (t \oplus u)$$

3) There exists a zero element. Let $0$ be the zero element of $F$; then $f(0)$ is clearly the zero element of $S$. To see this we note that

$$s \oplus f(0) = f\big(f^{-1}(s) + 0\big) = f\big(f^{-1}(s)\big) = s$$

for every $s \in S$.

4) Existence of inverse elements. If $s \in S$ we note that

$$s \oplus f\big({-f^{-1}(s)}\big) = f\big(f^{-1}(s) + \big({-f^{-1}(s)}\big)\big)$$

which equals

$$f(0)$$

which is the additive identity element of $S$.

5-8 are the analogous axioms for multiplication, which are (for the most part) the exact same as the above.

9) Distributivity. We note that

$$s \otimes (t \oplus u) = f\big(f^{-1}(s)\big(f^{-1}(t) + f^{-1}(u)\big)\big)$$

which equals

$$f\big(f^{-1}(s)f^{-1}(t) + f^{-1}(s)f^{-1}(u)\big) = (s \otimes t) \oplus (s \otimes u)$$

from which the rest is obvious.

This completes the proof of the lemma.

Now, we may answer the question. Since $\mathbb{Q}$ is a field of the same cardinality as both the positive integers and the integers, the above lemma implies there exist additions and multiplications on them which make them into fields.
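The lemma's transport-of-structure construction is concrete enough to execute. A sketch carrying the field $\mathbb{Z}_5$ onto an arbitrary (hypothetical) five-element set and spot-checking a few axioms:

```python
# Transport the field structure of Z_5 onto a five-element set S via a
# bijection f, as in the lemma; S and f are illustrative choices.
S = ['a', 'b', 'c', 'd', 'e']
f = dict(zip(range(5), S))           # bijection Z_5 -> S
f_inv = {s: x for x, s in f.items()}

def add(s, t):
    # s (+) t = f(f^{-1}(s) + f^{-1}(t))
    return f[(f_inv[s] + f_inv[t]) % 5]

def mul(s, t):
    # s (x) t = f(f^{-1}(s) * f^{-1}(t))
    return f[(f_inv[s] * f_inv[t]) % 5]

zero, one = f[0], f[1]
assert all(add(s, zero) == s for s in S)                               # additive identity
assert all(mul(s, one) == s for s in S)                                # multiplicative identity
assert all(any(mul(s, t) == one for t in S) for s in S if s != zero)   # inverses
assert all(add(s, t) == add(t, s) for s in S for t in S)               # commutativity
```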

3.

**Problem:** Let $m \geq 2$ be an integer and let $\mathbb{Z}_m$ denote the integers $\{0, 1, \ldots, m-1\}$ with addition and multiplication taken modulo $m$.

a) Prove this is a field precisely when is prime

b) What is in ?

c) What is in ?

**Proof:**

a) We appeal to the well-known fact that $ax \equiv 1 \pmod{m}$ is solvable precisely when $\gcd(a, m) = 1$. From there we may immediately disqualify non-primes, since the number of multiplicatively invertible elements of $\mathbb{Z}_m$ is $\varphi(m)$, and $\varphi(m) < m - 1$ when $m$ is not a prime. When $m$ is a prime the only thing worth noting is that every non-zero element of $\mathbb{Z}_m$ has a multiplicative inverse. The actual work of showing the axioms hold is busy work, and I’ve done it before.

b) It’s clearly , since

c) It’s . To see this we note that
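The solvability fact cited in part (a) gives a quick computational test; a sketch contrasting a prime and a composite modulus:

```python
from math import gcd

def units(m):
    # The multiplicatively invertible elements of Z_m: exactly those a with
    # gcd(a, m) = 1, since ax = 1 (mod m) is solvable iff a and m are coprime.
    return [a for a in range(1, m) if gcd(a, m) == 1]

# Z_7 is a field: all six nonzero elements are units.
assert len(units(7)) == 6
# Z_6 is not: only 1 and 5 are units, so 2, 3, 4 have no inverses.
assert units(6) == [1, 5]
```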

4.

**Problem:** Let $F$ be a field and for each positive integer $n$ define

$$n \cdot 1 = \underbrace{1 + \cdots + 1}_{n\text{ times}};$$

show that either there is no $n$ such that $n \cdot 1 = 0$, or, if there is, that the smallest such $n$ is prime.

**Proof:** Assume that $n \cdot 1 = 0$ for some $n$, and let $m$ be the smallest such positive integer. Now, suppose that $m = ab$ where $1 < a, b < m$. We see then that

$$0 = m \cdot 1 = \underbrace{1 + \cdots + 1}_{ab\text{ times}}$$

which upon expansion equals

$$\underbrace{(1 + \cdots + 1)}_{a\text{ times}} + \cdots + \underbrace{(1 + \cdots + 1)}_{a\text{ times}} \qquad (b\text{ summands})$$

which by associativity and grouping (i.e. distributivity) is equal to

$$(a \cdot 1)(b \cdot 1)$$

which by concatenation of the equations yields

$$(a \cdot 1)(b \cdot 1) = 0,$$

but since $F$ is a field it follows that $a \cdot 1 = 0$ or $b \cdot 1 = 0$; either way the minimality of $m$ is violated.
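The contrapositive is easy to observe computationally: in $\mathbb{Z}_m$ the smallest $n$ with $n \cdot 1 = 0$ is $m$ itself, and when $m$ is composite, $\mathbb{Z}_m$ indeed fails to be a field. A sketch:

```python
def characteristic(m):
    # Smallest n >= 1 with 1 + 1 + ... + 1 (n summands) equal to 0 in Z_m.
    total, n = 1 % m, 1
    while total != 0:
        total = (total + 1) % m
        n += 1
    return n

assert characteristic(7) == 7   # Z_7 is a field, and its characteristic is prime
assert characteristic(6) == 6   # 6 is composite; consistently, Z_6 is not a field
```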

5.

**Problem:** Let $F = \{\alpha + \beta\sqrt{2} : \alpha, \beta \in \mathbb{Q}\} \subseteq \mathbb{R}$.

a) Is a field?

b) What if $\alpha$ and $\beta$ are required to be integers?

**Proof:**

a) This is a classic yet tedious exercise; I will not do it here.

b) No. For example, consider $2 = 2 + 0\sqrt{2}$. Then, its multiplicative inverse would be $\tfrac{1}{2}$, which cannot be written as $\alpha + \beta\sqrt{2}$ with $\alpha, \beta$ integers.
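The contrast between rational and integer coefficients comes down to the rationalization identity $\frac{1}{\alpha + \beta\sqrt{2}} = \frac{\alpha - \beta\sqrt{2}}{\alpha^2 - 2\beta^2}$; a sketch with exact rational arithmetic:

```python
from fractions import Fraction as Q

def inverse(a, b):
    # Inverse of a + b*sqrt(2): multiply by the conjugate, so
    # 1/(a + b*sqrt(2)) = (a - b*sqrt(2)) / (a^2 - 2*b^2).
    n = a * a - 2 * b * b   # nonzero for (a, b) != (0, 0) since sqrt(2) is irrational
    return (a / n, -b / n)

a, b = Q(1), Q(3)           # the element 1 + 3*sqrt(2)
c, d = inverse(a, b)
# (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2 should equal 1.
assert a * c + 2 * b * d == 1 and a * d + b * c == 0

# With integer coefficients the inverse generally escapes the set:
c, d = inverse(Q(2), Q(0))  # the inverse of 2 is 1/2 + 0*sqrt(2)
assert c == Q(1, 2)
```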

6.

**Problem: **

a) Does the set of all polynomials with integer coefficients form a field?

b) What about ?

**Proof:**

a) No.

b) No. I’ll let you figure these out (it’s really easy).

7.

**Problem:**

Let be the set of all ordered pairs of real numbers

a) Is a field if addition and multiplication are done coordinatewise?

b) If addition and multiplication are done as one multiplies complex numbers?

**Proof:**

a) No. Consider that $(1, 0)$ is not the additive identity, but it has no multiplicative inverse.

b) Yes; this is just a field isomorphic to $\mathbb{C}$.
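Part (b)'s pairs-as-complex-numbers multiplication can be implemented directly; a sketch checking the familiar $i^2 = -1$ and a multiplicative inverse:

```python
from fractions import Fraction as Q

def mul(x, y):
    # Multiply ordered pairs the way one multiplies complex numbers:
    # (a, b)(c, d) = (ac - bd, ad + bc).
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

def inv(x):
    # Multiplicative inverse of (a, b) != (0, 0): the conjugate over the norm.
    a, b = x
    n = a * a + b * b
    return (a / n, -b / n)

assert mul((0, 1), (0, 1)) == (-1, 0)   # "i" squared is -1
z = (Q(3), Q(4))
assert mul(z, inv(z)) == (1, 0)         # every nonzero pair is invertible
# Contrast with part (a): under coordinatewise multiplication, (1, 0) is nonzero
# but (1, 0) * (c, d) = (c, 0) can never equal the identity (1, 1).
```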