# Abstract Nonsense

## Alternating Forms

Point of post: In this post we discuss the concept of alternating forms, as discussed in Section 30 of Halmos.

**Motivation**

So in our last post we restricted our study of multilinear algebra to spaces of the form $\text{Mult}_n\left(\mathscr{V}\right)$ in an attempt to better approximate the multilinear forms which are near and dear to us (e.g. the determinant). Thus, we continue on this path to study alternating forms, the abstraction of the observation that if two columns (or rows) of a matrix are equal, then the determinant is zero. This is really where things start to get interesting.

**Alternating Forms**

Let $\mathscr{V}$ be a finite dimensional $F$-space and $K\in\text{Mult}_n\left(\mathscr{V}\right)$. We call $K$ alternating if

$K(x_1,\cdots,\underbrace{x}_{k^{\text{th}}},\cdots,\underbrace{x}_{\ell^{\text{th}}},\cdots,x_n)=0$

for any distinct $k,\ell\in[n]$ and any choice of the arguments. In other words, $K$ is zero whenever any two of its arguments are equal. The first result we can prove about such $n$-forms is that they are skew-symmetric. Indeed:
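The motivating example is, of course, the determinant viewed as a function of the columns of a matrix. A quick numerical sanity check (my own illustration, not from Halmos) of the defining property: repeat a column and the determinant vanishes.

```python
import numpy as np

# The determinant, as a function of the columns of a matrix, is the canonical
# alternating n-linear form on F^n.  Repeating an argument (column) should
# force the value to zero.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Columns x, x, y -- two equal arguments.
M = np.column_stack([x, x, y])
print(np.linalg.det(M))  # approximately 0 (up to floating-point round-off)
```

This is just the familiar fact that a matrix with a repeated column is singular, restated in the language of forms.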

Theorem: Let $\mathscr{V}$ be a finite dimensional $F$-space and let $K\in\text{Mult}_n\left(\mathscr{V}\right)$ be alternating. Then $K$ is skew-symmetric.

Proof: We first note that for any two distinct $k,\ell\in[n]$ we have by multilinearity that

\begin{aligned}K(x_1,\cdots,\underbrace{x+y}_{k^{\text{th}}},\cdots,\underbrace{x+y}_{\ell^{\text{th}}},\cdots,x_n)=\; & K\left(x_1,\cdots,x,\cdots,x,\cdots,x_n\right)\\ &+K\left(x_1,\cdots,x,\cdots,y,\cdots,x_n\right)\\ &+K\left(x_1,\cdots,y,\cdots,y,\cdots,x_n\right)\\ &+K\left(x_1,\cdots,y,\cdots,x,\cdots,x_n\right)\end{aligned}

But, since $K$ is alternating we may conclude that the left hand side and the first and third terms of the right hand side vanish to give us that

$\displaystyle K(x_1,\cdots,x,\cdots,y,\cdots,x_n)=-K(x_1,\cdots,y,\cdots,x,\cdots,x_n)$

from where it follows by the arbitrariness of $\ell,k$ that for any transposition $\tau\in S_n$ we have that

$\tau K=-K$

and thus if $\pi=\tau_1\cdots\tau_m$ we may recall by the first theorem in the last post that

$\pi K=(\tau_1\cdots\tau_m)K=\tau_1(\cdots(\tau_m K)\cdots)=-\tau_1(\cdots(\tau_{m-1} K)\cdots)=\cdots=(-1)^m K$

(more properly by induction), from where it follows that $\pi K=(-1)^m K=\text{sgn}(\pi)K$ and thus $K$ is skew-symmetric. $\blacksquare$
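For the determinant this theorem is the familiar column-swap rule. A small check of the transposition case (again my own illustration): swapping two columns negates the value.

```python
import numpy as np

# Skew-symmetry of the determinant in its columns: applying a transposition
# (here, swapping columns 0 and 1) negates the value; a product of m
# transpositions scales it by (-1)^m = sgn(pi).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

swapped = A[:, [1, 0, 2]]  # transpose the first two columns
print(np.isclose(np.linalg.det(swapped), -np.linalg.det(A)))  # True
```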

Now, intuitively it seems as though the converse of the above is true since, taking bilinear forms for example, we see that if $B$ is skew-symmetric then

$B(x,x)=-\underbrace{B(x,x)}_{\text{switching }x\text{s}}$

from where we would like to conclude that $B(x,x)=0$. Unfortunately though, there are fields in which $z=-z$ for some $z\ne 0$; take $\mathbb{Z}_2$ for example. In particular, this happens precisely when the characteristic of the field is equal to $2$. Thus, with this in mind we are able to make sense of the restriction in the following theorem:
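To see concretely how characteristic $2$ breaks the converse, here is a toy bilinear form over $\mathbb{Z}_2$ of my own devising (Halmos defers a genuine counterexample to the exercises): it is skew-symmetric, since $-1=1$ in $\mathbb{Z}_2$, yet fails to be alternating.

```python
# Over Z_2 = {0, 1}, consider the bilinear form B(x, y) = x[0] * y[0] (mod 2)
# on Z_2^2.  Since -1 = 1 mod 2, B(x, y) = -B(y, x), so B is skew-symmetric;
# but B((1, 0), (1, 0)) = 1 != 0, so B is NOT alternating.

def B(x, y):
    return (x[0] * y[0]) % 2

vectors = [(a, b) for a in (0, 1) for b in (0, 1)]

# Skew-symmetry: B(x, y) == -B(y, x) mod 2 for all pairs of vectors.
skew = all(B(x, y) == (-B(y, x)) % 2 for x in vectors for y in vectors)

# Failure of the alternating property: some x with B(x, x) != 0.
not_alternating = any(B(x, x) != 0 for x in vectors)

print(skew, not_alternating)  # True True
```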

Theorem: Let $\mathscr{V}$ be a finite dimensional $F$-space with $\text{char}\left(F\right)\ne 2$. Then, if $K\in\text{Mult}_n\left(\mathscr{V}\right)$ is skew-symmetric, $K$ is alternating.

Proof: We merely note that for any distinct $k,\ell\in[n]$ we have that

$K(x_1,\cdots,\underbrace{x}_{k^{\text{th}}},\cdots,\underbrace{x}_{\ell^{\text{th}}},\cdots,x_n)=(k,\ell)K(x_1,\cdots,x,\cdots,x,\cdots,x_n)=-K(x_1,\cdots,x,\cdots,x,\cdots,x_n)$

and thus adding $K(x_1,\cdots,x,\cdots,x,\cdots,x_n)$ to both sides we see that

$(1+1)K(x_1,\cdots,x,\cdots,x,\cdots,x_n)=0$

and since $\text{char}(F)\ne 2$ the element $1+1$ is non-zero, hence a unit, and we may conclude that

$K(x_1,\cdots,x,\cdots,x,\cdots,x_n)=0$

and since $\ell,k\in[n]$ and $x\in\mathscr{V}$ were arbitrary it follows that $K$ is alternating. $\blacksquare$
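Over $\mathbb{R}$ (characteristic $0$, so certainly not $2$) we can check the theorem numerically in the bilinear case; this is my own illustration. Any $B(x,y)=x^{T}Ay$ with $A^{T}=-A$ is skew-symmetric, and the theorem predicts $B(x,x)=0$.

```python
import numpy as np

# A skew-symmetric bilinear form over R: B(x, y) = x^T A y with A^T = -A.
# The theorem (char != 2) says B must be alternating, i.e. B(x, x) = 0.
rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
A = S - S.T                      # A^T = -A by construction

x = rng.standard_normal(4)
print(abs(x @ A @ x) < 1e-9)     # True: B(x, x) vanishes (up to round-off)
```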

We will see later (as one of the homework problems) an example where the above can fail for fields of characteristic two. Next is a theorem which is patently obvious, yet very important. It basically says that if $\{x_1,\cdots,x_n\}$ is a linearly dependent set of vectors in $\mathscr{V}$ and $K\in\text{Mult}_n\left(\mathscr{V}\right)$ is alternating, then $K(x_1,\cdots,x_n)=0$. This is clear, as I said, because one of the vectors in $\{x_1,\cdots,x_n\}$ is a linear combination of the others. Thus, when one uses multilinearity to expand along that vector, one gets a sum of terms, each of which has two identical entries among the arguments of $K$ (see below).

Theorem: Let $\mathscr{V}$ be an $m$-dimensional $F$-space and $\{x_1,\cdots,x_n\}\subseteq\mathscr{V}$ a linearly dependent set of vectors. Then, for any alternating $K\in\text{Mult}_n\left(\mathscr{V}\right)$ it is true that

$K\left(x_1,\cdots,x_n\right)=0$

Proof: Since $\{x_1,\cdots,x_n\}$ is linearly dependent we may assume without loss of generality that there exist scalars $\alpha_2,\cdots,\alpha_n\in F$ such that

$\displaystyle x_1=\sum_{j=2}^{n}\alpha_j x_j$

(we may assume this since one of the vectors must be a linear combination of the others, and thus with the possibility of reordering we may assume that vector is the “first” one). Thus

$\displaystyle K\left(x_1,x_2,\cdots,x_n\right)=K\left(\sum_{j=2}^{n}\alpha_j x_j,x_2,\cdots,x_n\right)=\sum_{j=2}^{n}\alpha_j K\left(x_j,x_2,\cdots,x_n\right)=\sum_{j=2}^{n}\alpha_j\cdot 0=0$

where each term vanishes because $x_j$ appears twice among the arguments.

From where the conclusion follows. $\blacksquare$
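Specialized to the determinant, the theorem says a matrix with linearly dependent columns has zero determinant; a quick check of my own:

```python
import numpy as np

# Columns x1, x2, x3 with x3 = 2*x1 + x2 are linearly dependent, so the
# determinant (an alternating 3-form in the columns) must vanish.
x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = 2 * x1 + x2                 # a linear combination of x1 and x2

M = np.column_stack([x1, x2, x3])
print(np.linalg.det(M))          # approximately 0 (up to round-off)
```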

The obvious question is: does the converse hold? Namely, if one has a non-zero $n$-linear alternating form $K$, does $K(x_1,\cdots,x_n)=0$ imply that $\{x_1,\cdots,x_n\}$ is linearly dependent? The answer is yes in the case when $n=\dim_F\mathscr{V}=m$.

Theorem: Let $\mathscr{V}$ be an $m$-dimensional $F$-space and $K\in\text{Mult}_m\left(\mathscr{V}\right)$ be non-zero and alternating. Then, if $\{x_1,\cdots,x_m\}$ are linearly independent then $K(x_1,\cdots,x_m)\ne0$.

Proof: Clearly since the dimension coincides with the number of vectors we know that $\{x_1,\cdots,x_m\}$ forms a basis for $\mathscr{V}$. So, let $y_1,\cdots,y_m\in\mathscr{V}$ be arbitrary. Then, we know that

$\displaystyle y_1=\sum_{j_1=1}^{m}\alpha_{1,j_1}x_{j_1},\quad\cdots,\quad y_m=\sum_{j_m=1}^{m}\alpha_{m,j_m}x_{j_m}\quad (1)$

so that

$\displaystyle K(y_1,\cdots,y_m)=\sum_{j_1=1}^{m}\cdots\sum_{j_m=1}^{m}\alpha_{1,j_1}\cdots\alpha_{m,j_m}K\left(x_{j_1},\cdots,x_{j_m}\right)$

now, if at any point $x_{j_k}=x_{j_\ell}$ with $k\ne\ell$ then $K(x_{j_1},\cdots,x_{j_m})=0$, and if not then $K(x_{j_1},\cdots,x_{j_m})=\pi K(x_1,\cdots,x_m)$ for some $\pi\in S_m$, and by our first theorem this implies that $K(x_{j_1},\cdots,x_{j_m})=\text{sgn}(\pi)K\left(x_1,\cdots,x_m\right)$. Thus, if we assume that $K(x_1,\cdots,x_m)=0$ then the two cases above imply that every term in the sum above is zero, and thus the sum is zero. But, since $y_1,\cdots,y_m$ were arbitrary it follows that $K(y_1,\cdots,y_m)=0$ for all $y_1,\cdots,y_m\in\mathscr{V}$, contradicting our assumption that $K\ne \mathbf{0}$. It follows that $K(x_1,\cdots,x_m)\ne 0$. $\blacksquare$

References:

1. Halmos, Paul R. *Finite-Dimensional Vector Spaces*. New York: Springer-Verlag, 1974. Print.

November 14, 2010