# Abstract Nonsense

## Matrix Rings (Pt. I)

Point of Post: In this post we introduce the notion of matrix rings and prove some elementary facts such as the classification of their ideals.

$\text{ }$

Motivation

One might think that matrix rings come up most often in the study of matrices themselves, as in the connection between matrices over fields and linear transformations. In fact, in our studies, at least in general ring theory, we will see that matrix rings serve mainly as a source of interesting properties and, in particular, of counterexamples.

$\text{ }$

Definitions and Basics

Let $R$ be an arbitrary non-zero ring. We define the ring of $n\times n$ matrices over $R$, denoted $\text{Mat}_n(R)$, to be the set of all square arrays of the form

$\text{ }$

$\begin{pmatrix}a_{1,1} & \cdots & a_{1,n}\\ \vdots & \ddots & \vdots\\ a_{n,1} & \cdots & a_{n,n}\end{pmatrix}$

$\text{ }$

denoted $(a_{i,j})$ for short, with $a_{i,j}\in R$ for all $i,j\in[n]$. We define the sum of two matrices by $(a_{i,j})+(b_{i,j})=(c_{i,j})$ where $c_{i,j}=a_{i,j}+b_{i,j}$, and their product by $(a_{i,j})(b_{i,j})=(d_{i,j})$ where $\displaystyle d_{i,j}=\sum_{r=1}^{n}a_{i,r}b_{r,j}$. The verification that $\text{Mat}_n(R)$ with these operations really is a ring is the same as in the case when $R$ is a field.
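The two operations above can be sketched in Python for the case $R=\mathbb{Z}$, representing matrices as lists of rows; the helper names `mat_add` and `mat_mul` are my own, not from the post:

```python
# Matrix ring operations over R = Z, matrices as lists of rows.

def mat_add(A, B):
    # (a_ij) + (b_ij) = (c_ij) with c_ij = a_ij + b_ij
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    # (a_ij)(b_ij) = (d_ij) with d_ij = sum_{r=1}^{n} a_ir * b_rj
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert mat_add(A, B) == [[1, 3], [4, 4]]
assert mat_mul(A, B) == [[2, 1], [4, 3]]
```

The same code works verbatim over any commutative ring whose elements support `+` and `*`; for a non-commutative $R$ the order `A[i][r] * B[r][j]` matters and is kept as in the definition.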

$\text{ }$

Obviously $\text{Mat}_1(R)$ is naturally isomorphic to $R$. For $n\geqslant 2$ we note that $\text{Mat}_n(R)$ is not commutative regardless of whether or not $R$ is, as long as $R$ contains some $r$ with $r^2\ne 0$. Indeed, let $A$ be the matrix which has $r$ in the first entry of the first row and zero in every other entry, and let $B$ be the matrix which has $r$ in the second entry of the first row and zero elsewhere; then a quick check shows that $AB$ has $r^2$ in the second entry of its first row while $BA=0$, so $AB\ne BA$.
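A concrete instance of the counterexample above, over $R=\mathbb{Z}$ with $r=2$ (a minimal sketch; `mat_mul` is an illustrative helper name):

```python
def mat_mul(A, B):
    # product in Mat_n(Z): d_ij = sum_r a_ir * b_rj
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

r = 2
A = [[r, 0], [0, 0]]  # r in the (1,1) entry, zero elsewhere
B = [[0, r], [0, 0]]  # r in the (1,2) entry, zero elsewhere

assert mat_mul(A, B) == [[0, r * r], [0, 0]]  # AB = r^2 E_{1,2} != 0
assert mat_mul(B, A) == [[0, 0], [0, 0]]      # BA = 0, so AB != BA
```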

$\text{ }$

Suppose now that $R$ is unital. Define the matrices $E_{i,j}\in\text{Mat}_n(R)$ to be such that $E_{i,j}$ has $1$ in the $(i,j)^{\text{th}}$ position and $0$ elsewhere. We call the set of matrices $\left\{E_{i,j}:i,j\in[n]\right\}$ the canonical basis for $\text{Mat}_n(R)$. The reason for that name is clear when $R$ is a field, since they form a basis for the vector space $\text{Mat}_n(R)$, and we shall see that they play a similar role for $\text{Mat}_n(R)$ when we generalize the notion of a vector space. We note that if $A\in\text{Mat}_n(R)$ then $E_{i,j}A$ is the matrix whose $i^{\text{th}}$ row is equal to the $j^{\text{th}}$ row of $A$ with zeros elsewhere, and $AE_{i,j}$ is the matrix whose $j^{\text{th}}$ column is equal to the $i^{\text{th}}$ column of $A$ with zeros elsewhere. We define the $n\times n$ identity matrix, denoted $I_n$, to be the element of $\text{Mat}_n(R)$ given by $E_{1,1}+\cdots+E_{n,n}$. By our previous comment about the left and right multiplication properties of the $E_{i,j}$‘s it’s clear that $I_nA=E_{1,1}A+\cdots+E_{n,n}A=A$ and $AI_n=A(E_{1,1}+\cdots+E_{n,n})=AE_{1,1}+\cdots+AE_{n,n}=A$. Thus, $\text{Mat}_n(R)$ is unital with identity $I_n$.
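The row and column behaviour of the $E_{i,j}$'s, and the identity $I_n=E_{1,1}+\cdots+E_{n,n}$, can be checked numerically over $\mathbb{Z}$; a small sketch with illustrative helper names (`E`, `mat_mul`) and 0-based indices:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def E(i, j, n):
    # canonical basis matrix: 1 in the (i,j) position (0-indexed), 0 elsewhere
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

n = 3
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# E_{i,j} A : its i-th row is the j-th row of A, zeros elsewhere
assert mat_mul(E(0, 2, n), A) == [[7, 8, 9], [0, 0, 0], [0, 0, 0]]

# A E_{i,j} : its j-th column is the i-th column of A, zeros elsewhere
assert mat_mul(A, E(0, 2, n)) == [[0, 0, 1], [0, 0, 4], [0, 0, 7]]

# I_n = E_{0,0} + ... + E_{n-1,n-1} acts as a two-sided identity
I = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
assert mat_mul(I, A) == A and mat_mul(A, I) == A
```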

$\text{ }$

We’d like to mention now another basic structural result concerning $\text{Mat}_n(R)$: there is no hope that $\text{Mat}_n(R)$ is a division ring when $n\geqslant 2$. Indeed, it’s trivial to check that for $n>1$ one has $E_{1,1}E_{n,n}=E_{n,n}E_{1,1}=0$. Thus, if $n>1$ then $\text{Mat}_n(R)$ always has zero divisors.
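For $n=2$ over $R=\mathbb{Z}$ this zero-divisor claim is the statement that $E_{1,1}E_{2,2}=E_{2,2}E_{1,1}=0$; a quick sketch (helper name `mat_mul` is my own):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# E_{1,1} and E_{2,2} in Mat_2(Z): both non-zero, yet both products vanish
E11 = [[1, 0], [0, 0]]
E22 = [[0, 0], [0, 1]]
ZERO = [[0, 0], [0, 0]]

assert mat_mul(E11, E22) == ZERO
assert mat_mul(E22, E11) == ZERO
```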

$\text{ }$

Diagonal Matrices, Scalar Matrices and the Natural Embeddings

$\text{ }$

A matrix of the form

$\text{ }$

$\begin{pmatrix}r_1 & 0 & \cdots & 0\\ 0 & r_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & r_n\end{pmatrix}$

$\text{ }$

is called an $n\times n$ diagonal matrix. It is often convenient to denote an $n\times n$ diagonal matrix by $\text{diag}(r_1,\cdots,r_n)$. The set of all $n\times n$ diagonal matrices over $R$ is denoted $\text{Diag}_n(R)$. A quick check shows that

$\text{ }$

$\text{diag}(r_1,\cdots,r_n)\text{diag}(s_1,\cdots,s_n)=\text{diag}(r_1s_1,\cdots,r_ns_n)$

$\text{ }$

and

$\text{ }$

$\text{diag}(r_1,\cdots,r_n)+\text{diag}(s_1,\cdots,s_n)=\text{diag}(r_1+s_1,\cdots,r_n+s_n)$

$\text{ }$

Thus, we see that $\text{Diag}_n(R)$ is a subring of $\text{Mat}_n(R)$ (a unital subring if $R$ is unital). Moreover, $f:R^n\to\text{Diag}_n(R):(r_1,\cdots,r_n)\mapsto\text{diag}(r_1,\cdots,r_n)$ is a morphism, and since it’s evidently bijective we find that $R^n\cong\text{Diag}_n(R)$ (where we recall that $R^n$ is the $n$-fold product of $R$).
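The two displayed identities, which show that $f$ respects both operations entry by entry, can be verified over $\mathbb{Z}$; a minimal sketch with illustrative helper names (`diag`, `mat_mul`, `mat_add`):

```python
def diag(*rs):
    # diag(r_1, ..., r_n) as an n x n matrix over Z
    n = len(rs)
    return [[rs[i] if i == j else 0 for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# diag(r1,...,rn) diag(s1,...,sn) = diag(r1 s1, ..., rn sn)
assert mat_mul(diag(2, 3, 5), diag(7, 11, 13)) == diag(14, 33, 65)

# diag(r1,...,rn) + diag(s1,...,sn) = diag(r1+s1, ..., rn+sn)
assert mat_add(diag(2, 3, 5), diag(7, 11, 13)) == diag(9, 14, 18)
```

Since both operations act coordinatewise on the diagonal entries, this is exactly the statement that $f$ is an isomorphism $R^n\cong\text{Diag}_n(R)$.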

$\text{ }$

$\text{ }$

In a similar vein we define an $n\times n$ scalar matrix to be a matrix of the form $\text{diag}(r,\cdots,r)\in\text{Mat}_n(R)$. The set of all $n\times n$ scalar matrices is denoted $\text{Scal}_n(R)$. We evidently see that $\text{Scal}_n(R)$ is a subring of $\text{Diag}_n(R)$. Moreover, it’s clear that the map $R\to \text{Scal}_n(R):r\mapsto \text{diag}(r,\cdots,r)$ is an isomorphism, so that $R\cong\text{Scal}_n(R)$.

$\text{ }$

Perhaps one of the most interesting facts about $\text{Scal}_n(R)$ is:

$\text{ }$

Theorem: Let $R$ be a commutative unital ring. Then, the center $Z\left(\text{Mat}_n(R)\right)$ is equal to $\text{Scal}_n(R)$.

Proof: One can quickly check that for any matrix $(a_{i,j})$ one has that

$\text{ }$

$\text{diag}(r,\cdots,r)(a_{i,j})=(ra_{i,j})=(a_{i,j}r)=(a_{i,j})\text{diag}(r,\cdots,r)$

$\text{ }$

and since $(a_{i,j})$ was arbitrary we may conclude that $\text{Scal}_n(R)\subseteq Z\left(\text{Mat}_n(R)\right)$.

$\text{ }$

Conversely, let $M=(m_{i,j})\in Z(\text{Mat}_n(R))$. Then for every $i\in[n]$ we must have $E_{i,i}M=ME_{i,i}$. The first of these matrices has all zero entries except that its $i^{\text{th}}$ row is the $i^{\text{th}}$ row of $M$, and the second has all zero entries except that its $i^{\text{th}}$ column is the $i^{\text{th}}$ column of $M$. Since they are equal, we may conclude that, except possibly for $m_{i,i}$, all the entries in the $i^{\text{th}}$ row and column of $M$ must be zero. Doing this for all $i\in[n]$ we may conclude that $M=\text{diag}(m_1,\cdots,m_n)$ for some $m_1,\cdots,m_n\in R$. Next, fix $i\ne j$ and note that $\left(E_{i,j}+E_{j,i}\right)M=M\left(E_{i,j}+E_{j,i}\right)$; the left-hand side permutes the $i^{\text{th}}$ and $j^{\text{th}}$ rows of $M$ while the right-hand side permutes its $i^{\text{th}}$ and $j^{\text{th}}$ columns, so comparing the $(i,j)^{\text{th}}$ entries of both sides gives $m_j=m_i$. Since $i,j$ were arbitrary we conclude that $m_1=\cdots=m_n$, and so $M\in\text{Scal}_n(R)$. The conclusion follows. $\blacksquare$
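Both directions of the theorem can be sanity-checked numerically over $R=\mathbb{Z}$: scalar matrices commute with everything, while even a diagonal matrix with distinct entries fails to be central. A minimal sketch (helper names `mat_mul`, `scal` are my own):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def scal(r, n):
    # the scalar matrix diag(r, ..., r) in Mat_n(Z)
    return [[r if i == j else 0 for j in range(n)] for i in range(n)]

M = [[1, 2], [3, 4]]

# scalar matrices lie in the center
S = scal(5, 2)
assert mat_mul(S, M) == mat_mul(M, S)

# a non-scalar diagonal matrix is NOT central
D = [[1, 0], [0, 2]]
assert mat_mul(D, M) != mat_mul(M, D)
```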

$\text{ }$

$\text{ }$


July 12, 2011
