Abstract Nonsense

Crushing one theorem at a time

Halmos, Chapter One, Sections 13 and 14: Linear Functionals and Brackets


1.

Problem: Consider the set \mathbb{C} of complex numbers as a vector space over \mathbb{R}. Suppose that for each \zeta=\xi_1+i\xi_2 in \mathbb{C} (where \xi_1,\xi_2\in\mathbb{R}) the function y is given by

a) y(\zeta)=\xi_1

b) y(\zeta)=\xi_2

c) y(\zeta)=\xi_1^2

d) y(\zeta)=\xi_1-i\xi_2

e) y(\zeta)=\sqrt{\xi_1^2+\xi_2^2}

In which cases are these linear functionals?

Proof:

a) Clearly y:\mathbb{C}\to\mathbb{R}, so it is indeed a map from the vector space to its underlying field, and so it's a functional. To see it's linear we merely notice that if \zeta=\xi_1+i\xi_2 and \zeta'=\xi'_1+i\xi'_2 then

\alpha\zeta+\beta\zeta'=(\alpha\xi_1+\beta\xi'_1)+i(\alpha\xi_2+\beta\xi'_2)

and so

y\left(\alpha\zeta+\beta\zeta'\right)=\alpha\xi_1+\beta\xi'_1=\alpha y(\zeta)+\beta y(\zeta')

so that it is indeed linear.

b) Similarly, we can see that y:\mathbb{C}\to\mathbb{R}, so it is indeed a functional, and if \zeta,\zeta' are as defined above we get again that

\alpha\zeta+\beta\zeta'=(\alpha\xi_1+\beta\xi'_1)+i(\alpha\xi_2+\beta\xi'_2)

and so

y\left(\alpha\zeta+\beta\zeta'\right)=\alpha\xi_2+\beta\xi'_2=\alpha y(\zeta)+\beta y(\zeta')

so it is linear.

c) This is not linear; notice that y(2\cdot 2)=y(4)=16\ne 8=2\cdot y(2)

d) This is not a functional, since y maps \mathbb{C} onto \mathbb{C} and so its values are not all real. More simply, notice that y(i)=-i\notin\mathbb{R}

e) This is not linear. Notice that

y(2+4i)=\sqrt{2^2+4^2}=\sqrt{20}\ne 2+4=\sqrt{2^2}+\sqrt{4^2}=y(2)+y(4i)

and so y is not linear.
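Since all five maps are easy to write down, here is a small Python sketch (my own addition, not anything from Halmos) that spot-checks the conclusions numerically: it tests y(\alpha\zeta+\beta\zeta')=\alpha y(\zeta)+\beta y(\zeta') for random real scalars and random complex vectors. The labels and function names are mine.

```python
import random

# the five candidate maps from problem 1, with C viewed as a vector space over R
maps = {
    "(a) Re":        lambda z: z.real,
    "(b) Im":        lambda z: z.imag,
    "(c) Re^2":      lambda z: z.real ** 2,
    "(d) conjugate": lambda z: z.real - 1j * z.imag,
    "(e) modulus":   lambda z: abs(z),
}

def looks_linear(y, trials=200, tol=1e-9):
    """Check y(a*z + b*w) == a*y(z) + b*y(w) for random REAL scalars a, b."""
    for _ in range(trials):
        a, b = random.uniform(-5, 5), random.uniform(-5, 5)      # real scalars only
        z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
        w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
        if abs(y(a * z + b * w) - (a * y(z) + b * y(w))) > tol:
            return False
    return True

for name, y in maps.items():
    # note: (d) passes this real-linearity test, but it is still not a
    # functional, since its values are not all real.
    print(name, "passes the linearity test:", looks_linear(y))
```

Only (a) and (b) should pass as functionals; (d) is real-linear as a map \mathbb{C}\to\mathbb{C} but is ruled out because its values are not real.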

2.

Problem: Suppose that for each \zeta=(\xi_1,\xi_2,\xi_3)\in\mathbb{C}^3 the function y is defined by

a) y(\zeta)=\xi_1+\xi_2

b) y(\zeta)=\xi_1-\xi_3^2

c) y(\zeta)=\xi_1+1

d) y(\zeta)=\xi_1-2\xi_2+3\xi_3

Which of these are linear functionals on \mathbb{C}^3?

Proof: We assume throughout that the underlying field is \mathbb{C}.

a) This is a functional since clearly y:\mathbb{C}^3\to\mathbb{C}. Now, suppose that \zeta=(\xi_1,\xi_2,\xi_3) and \zeta'=\left(\xi'_1,\xi'_2,\xi'_3\right) then clearly

\alpha\zeta+\beta\zeta'=\left(\alpha\xi_1+\beta\xi'_1,\alpha\xi_2+\beta\xi'_2,\alpha\xi_3+\beta\xi'_3\right)

and so

y\left(\alpha\zeta+\beta\zeta'\right)=\alpha \xi_1+\beta\xi'_1+\alpha\xi_2+\beta\xi'_2=\alpha(\xi_1+\xi_2)+\beta(\xi'_1+\xi'_2)=\alpha y(\zeta)+\beta y(\zeta')

so that y is clearly linear.

b) This is not linear, since y(2(0,0,2))=y((0,0,4))=-(4^2)=-16, whereas 2y((0,0,2))=2\cdot\left(-(2^2)\right)=-8

c) This is not linear since y((0,0,0))=1\ne 0

d) Clearly this is a functional since y:\mathbb{C}^3\to\mathbb{C}. Furthermore, let \zeta,\zeta' be as in a) then we see once again that

\alpha\zeta+\beta\zeta'=\left(\alpha\xi_1+\beta\xi'_1,\alpha\xi_2+\beta\xi'_2,\alpha\xi_3+\beta\xi'_3\right)

and so

y\left(\alpha\zeta+\beta\zeta'\right)=(\alpha\xi_1+\beta\xi'_1)-2(\alpha\xi_2+\beta\xi'_2)+3(\alpha\xi_3+\beta\xi'_3)

which upon expansion and regrouping is

\alpha(\xi_1-2\xi_2+3\xi_3)+\beta(\xi'_1-2\xi'_2+3\xi'_3)=\alpha y(\zeta)+\beta y(\zeta')

and so y is linear.
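The same kind of numerical spot-check works here, this time with complex scalars; again the code and its labels are my own illustration, not part of the text.

```python
import random

def rand_c():
    return complex(random.uniform(-3, 3), random.uniform(-3, 3))

# the four candidate maps from problem 2 on C^3
maps = {
    "(a) x1 + x2":        lambda v: v[0] + v[1],
    "(b) x1 - x3^2":      lambda v: v[0] - v[2] ** 2,
    "(c) x1 + 1":         lambda v: v[0] + 1,
    "(d) x1 - 2x2 + 3x3": lambda v: v[0] - 2 * v[1] + 3 * v[2],
}

def looks_linear(y, trials=200, tol=1e-8):
    """Check y(a*u + b*w) == a*y(u) + b*y(w) for random COMPLEX scalars a, b."""
    for _ in range(trials):
        a, b = rand_c(), rand_c()
        u, w = [rand_c() for _ in range(3)], [rand_c() for _ in range(3)]
        comb = [a * u[k] + b * w[k] for k in range(3)]
        if abs(y(comb) - (a * y(u) + b * y(w))) > tol:
            return False
    return True

for name, y in maps.items():
    print(name, "passes the linearity test:", looks_linear(y))   # expect (a) and (d) only
```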

3.

Problem: Suppose that for each p\in\mathbb{R}[x] the function y is defined by

a) \displaystyle y(p)=\int_{-1}^2 p(x)dx

b) \displaystyle y(p)=\int_0^2 p(x)^2dx

c) \displaystyle y(p)=\int_0^1 x^2p(x)dx

d) \displaystyle y(p)=\int_0^1 p\left(x^2\right)dx

e) y(p)=p'

f) y(p)=p''(1)

Which of these are linear functionals?

Proof:

a) Clearly y:\mathbb{R}[x]\to\mathbb{R} and so it’s a functional. Furthermore, using elementary properties of integrals we see that

\displaystyle y\left(\alpha p+\beta q\right)=\int_{-1}^{2}\left(\alpha p(x)+\beta q(x)\right)dx=\alpha \int_{-1}^{2} p(x) dx+\beta \int_{-1}^{2} q(x)dx

but clearly this is equal to \alpha y(p)+\beta y(q)

b) This is not linear since

\displaystyle y(2x)=\int_{0}^{2}(2x)^2 dx=4\int_{0}^{2}x^2 dx \ne 2\int_{0}^{2}x^2 dx=2 y(x)

c) This is clearly a functional since y:\mathbb{R}[x]\to\mathbb{R} and

\displaystyle y(\alpha p+\beta q)=\int_0^1 x^2(\alpha p(x)+\beta q(x))dx=\alpha \int_0^1 x^2p(x) dx+\beta \int_0^1 x^2q(x)dx

but this is clearly equal to \alpha y(p)+\beta y(q)

d) This is clearly a functional since y:\mathbb{R}[x]\to\mathbb{R} and we see that

\displaystyle y\left(\alpha p+\beta q\right)=\int_0^1\left(\alpha p\left(x^2\right)+\beta q\left(x^2\right)\right)dx=\alpha\int_0^1 p\left(x^2\right) dx+\beta\int_0^1 q\left(x^2\right) dx

but clearly this is equal to \alpha y(p)+\beta y(q)

e) This is not a functional since y\left(x^2\right)=2x\notin\mathbb{R}

f) This is clearly a linear functional since y:\mathbb{R}[x]\to\mathbb{R}. Also,

y\left(\alpha p+\beta q\right)=\left(\alpha p(x)+\beta q(x)\right)''\mid_{x=1}=\left(\alpha p''(x)+\beta q''(x)\right)\mid_{x=1}=\alpha p''(1)+\beta q''(1)

but this is evidently equal to \alpha y(p)+\beta y(q)
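For the polynomial examples, SymPy can carry out the integrals and derivatives symbolically, so here is a rough sketch (my own) that checks y(\alpha p+\beta q)=\alpha y(p)+\beta y(q) on one concrete pair of polynomials; the sample polynomials p, q and the variable names are chosen by me.

```python
import sympy as sp

x, a, b = sp.symbols('x a b', real=True)
p = 1 + 2*x + 3*x**2      # sample polynomials, chosen arbitrarily
q = 5 - x**3

# the candidate functionals from problem 3; (e) is omitted because y(p) = p'
# is a polynomial rather than a scalar, so it is not a functional at all
candidates = {
    "(a)": lambda r: sp.integrate(r, (x, -1, 2)),
    "(b)": lambda r: sp.integrate(r**2, (x, 0, 2)),
    "(c)": lambda r: sp.integrate(x**2 * r, (x, 0, 1)),
    "(d)": lambda r: sp.integrate(r.subs(x, x**2), (x, 0, 1)),
    "(f)": lambda r: sp.diff(r, x, 2).subs(x, 1),
}

for name, y in candidates.items():
    lhs = sp.expand(y(a*p + b*q))        # y(a*p + b*q)
    rhs = sp.expand(a*y(p) + b*y(q))     # a*y(p) + b*y(q)
    print(name, "linear on this sample:", sp.simplify(lhs - rhs) == 0)
```

Only (b) should come out False here; of course a single sample can only falsify linearity, not prove it, which is what the computations above are for.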

4.

Problem: If \left\{\alpha_n\right\}_{n\in\mathbb{N}} is an arbitrary sequence of complex numbers, and if p\in\mathbb{C}[x] where \displaystyle p(z)=\sum_{j=0}^{n}\xi_j z^j, write \displaystyle y(p)=\sum_{j=0}^{n} \xi_j \alpha_j. Prove that y\in\mathbb{C}[x]^* (where the asterisk indicates the dual space) and that every element of \mathbb{C}[x]^* can be obtained in this manner by a suitable choice of the \alpha_j's.

Proof: Clearly y:\mathbb{C}[x]\to\mathbb{C} and we see that if \displaystyle p(z)=\sum_{j=0}^{n}\xi_j z^j and \displaystyle q(z)=\sum_{j=0}^{m}\gamma_j z^j where we may assume WLOG that n\leqslant m then

\displaystyle \alpha p(z)+\beta q(z)=\sum_{j=0}^{m}\left(\alpha \xi_j+\beta \gamma_j\right)z^j

where we take \xi_j=0,\text{ }j>n. Thus,

\displaystyle y\left(\alpha p(z)+\beta q(z)\right)=\sum_{j=0}^{m}\left(\alpha \xi_j+\beta \gamma_j\right)\alpha_j=\alpha \sum_{j=0}^{m}\xi_j \alpha_j+\beta\sum_{j=0}^{m}\gamma_j \alpha_j

but by how we defined the \xi_j we may rewrite this as

\displaystyle \alpha\sum_{j=0}^{n}\xi_j \alpha_j+\beta\sum_{j=0}^{m}\gamma_j \alpha_j=\alpha y(p)+\beta y(q)

from where it follows that y\in\mathbb{C}[x]^*.

The second part follows as an immediate corollary of the following technical lemma.

Lemma: Let \mathcal{V} be a vector space over the field F and let \mathfrak{B} be a basis for \mathcal{V}. Then, any \varphi \in\mathcal{V}^* is completely determined by its values on \mathfrak{B}, in the sense that if \eta\in\mathcal{V}^{*} is such that \varphi(x)=\eta(x),\text{ }\forall x\in\mathfrak{B} then \varphi=\eta

Proof: Let x\in\mathcal{V}. Then, since \mathfrak{B} is a basis, there exists a unique representation

\displaystyle x=\sum_{j=1}^{n}\alpha_j x_j,\text{ }\alpha_j\in F,\text{ and }x_j\in\mathfrak{B}

thus

\displaystyle \varphi(x)=\varphi\left(\sum_{j=1}^{n}\alpha_j x_j\right)=\sum_{j=1}^{n}\alpha_j \varphi\left(x_j\right)

but by assumption \varphi(x_j)=\eta(x_j),\text{ }j=1,\cdots,n so that the above says

\displaystyle \varphi(x)=\sum_{j=1}^{n}\alpha_j\varphi(x_j)=\sum_{j=1}^{n}\alpha_j\eta(x_j)=\eta\left(\sum_{j=1}^{n}\alpha_j x_j\right)=\eta(x)

\blacksquare.

Well, actually the result isn't really a corollary, but if \varphi,\mathcal{V},\mathfrak{B}=\left\{x_\beta\right\}_{\beta\in\mathcal{B}} are defined as above then we can see that

\displaystyle \varphi(x)=\varphi\left(\sum_{\beta\in\mathcal{B}}\alpha_\beta x_\beta\right)=\sum_{\beta\in\mathcal{B}}\alpha_\beta\varphi(x_\beta)

where it's understood (by the unique representation of a vector in \mathcal{V} as a finite linear combination of elements of \mathfrak{B}) that \alpha_\beta=0 for all but finitely many \beta\in\mathcal{B}. In other words, for our example the sequence \left\{\alpha_n\right\}_{n\in\mathbb{N}\cup\{0\}} may be taken to be \alpha_n=\varphi\left(x^n\right).
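To make the correspondence concrete, here is a short SymPy illustration (my own, and only for polynomials up to a fixed degree): starting from a sample functional \varphi, namely evaluation at z=2, it builds the sequence \alpha_n=\varphi\left(x^n\right) and checks that the functional y defined from that sequence agrees with \varphi.

```python
import sympy as sp

z = sp.symbols('z')

# a sample functional phi on C[x]: evaluation at z = 2 (my choice for the demo)
phi = lambda p: p.subs(z, 2)

N = 6                                        # only handle degree <= N in this sketch
alpha = [phi(z**n) for n in range(N + 1)]    # alpha_n = phi(x^n), as in the post

def y(p):
    """Apply the functional built from the sequence alpha to the coefficients of p."""
    coeffs = sp.Poly(p, z).all_coeffs()[::-1]    # xi_0, xi_1, ..., xi_n
    return sum(xi * alpha[j] for j, xi in enumerate(coeffs))

p = 3 + sp.I*z - 4*z**2 + z**5
print(sp.simplify(y(p) - phi(p)) == 0)       # True: the two functionals agree on p
```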

5.

Problem: If \varphi\in\mathcal{V}^{*}-\{\bold{0}\} for a vector space \mathcal{V} over a field F, is it true that \varphi is surjective?

Proof: Yes. Since \varphi\ne\bold{0} there is some x_0\in \mathcal{V} such that \varphi(x_0)=\alpha_0\ne 0. Then, for any \alpha\in F we have that \varphi\left(\alpha\alpha_0^{-1}x_0\right)=\alpha\alpha_0^{-1}\varphi(x_0)=\alpha.

6.

Problem: Let \mathscr{V} be a vector space over a field F and let \varphi,\psi\in\text{Hom}\left(\mathscr{V},F\right) be such that

\psi(z)=0\implies\varphi(z)=0

then, \varphi=\alpha\psi for some \alpha\in F

Proof:

Lemma: Let \eta\in\text{Hom}\left(\mathscr{V},F\right) and v_0\in\mathscr{V} be such that \eta(v_0)\ne 0. Then,

\displaystyle \mathscr{V}=\ker\eta\oplus\text{span }\{v_0\}

Proof: We merely note that if v\in\mathscr{V} then

\displaystyle v=\frac{\eta(v)}{\eta(v_0)}v_0+\left(v-\frac{\eta(v)}{\eta(v_0)}v_0\right)

the first term clearly being in \text{span }\{v_0\} and the second in \ker\eta. The conclusion follows by noticing that evidently \ker\eta\cap\text{span }\{v_0\}=\{\bold{0}\}. \blacksquare

So, now with this we can solve the problem. Clearly if \psi=\bold{0} this is trivial since by assumption this would imply that \varphi=\bold{0}. So, assume not, then there exists some z_0\in\mathscr{V} such that \psi(z_0)\ne 0. We claim then that

\displaystyle \varphi=\frac{\varphi(z_0)}{\psi(z_0)}\psi

To see this let v\in\mathscr{V} be arbitrary. Then, by the lemma, v=\beta z_0+k where k\in\ker\psi\subseteq\ker\varphi. Thus,

\begin{aligned}\frac{\varphi(z_0)}{\psi(z_0)}\psi(v) &=\frac{\varphi(z_0)}{\psi(z_0)}\psi\left(\beta z_0+k\right)\\ &=\frac{\varphi(z_0)}{\psi(z_0)}\beta\psi(z_0)\\ &=\beta\varphi(z_0)\\ &= \beta\varphi(z_0)+\varphi(k)\\ &= \varphi(\beta z_0+k)\\ &=\varphi(v)\end{aligned}

and so the conclusion follows.
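As a finite-dimensional sanity check of both the lemma and the identity \displaystyle \varphi=\frac{\varphi(z_0)}{\psi(z_0)}\psi, here is a small NumPy sketch; everything in it (the space \mathbb{R}^4, the choice \varphi=3\psi, which is of course the only way the hypothesis can hold nontrivially once \psi\ne\bold{0}) is my own illustrative setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# functionals on R^4 given by dot products; phi = 3*psi guarantees the
# hypothesis "psi(z) = 0 implies phi(z) = 0"
w = rng.normal(size=4)
psi = lambda v: w @ v
phi = lambda v: 3 * (w @ v)

z0 = rng.normal(size=4)
assert abs(psi(z0)) > 1e-12          # psi(z0) != 0, as in the proof

v = rng.normal(size=4)

# the lemma's decomposition v = beta*z0 + k with k in ker(psi)
beta = psi(v) / psi(z0)
k = v - beta * z0
print("psi(k) is (numerically) zero:", abs(psi(k)) < 1e-10)

# and the claimed identity phi = (phi(z0)/psi(z0)) * psi
print("identity holds at v:", abs((phi(z0) / psi(z0)) * psi(v) - phi(v)) < 1e-10)
```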


September 30, 2010 - Posted by | Fun Problems, Halmos, Linear Algebra

Comments

  1. I really appreciate your work. Do you have any solutions for problem 6 from the same sections: Linear Functionals and Brackets? Thank you in advance!

    Comment by popita | November 8, 2010 | Reply

    • Thank you. I guess I just forgot to post it. Be sure to respond to this so that I remember to write it up, it’s really late here.

      Comment by drexel28 | November 9, 2010 | Reply

      • Thank you for your prompt answer! Can you also post this problem 6 from these sections? Thanks!

        Comment by ralucatoscano | November 9, 2010

      • There you go friend.

        Comment by drexel28 | November 10, 2010

  2. I really appreciate your work! Do you have any solutions for problem 6 of Section 14: Brackets (page 22)? Thank you in advance!

    Comment by ralucatoscano | November 8, 2010 | Reply

  3. Resp. Sir/Ma’am,

    This is Deepak. May I ask how I can download all of the proofs and problems of P. R. Halmos from your web site?

    Thanking you

    Comment by Deepak Gawali | August 13, 2011 | Reply

