Abstract Nonsense

Neat Number Theory/Discrete Math Problem (Pt. III)

Point of post: Since I found this problem enjoyable, for some odd reason, I thought I’d also generalize the methodology in the first proof of this post to the general form and give another alternate proof. These methods both have their pros and cons. The general proof of the problem in my last post has the advantage that it finds a general formula for $a_n$ (see that post for this definition), but it is perhaps a non-obvious approach and leaves the answer in an ostensibly computationally intractable form. The proof in this post is more obvious and ends up with a “nicer” answer, but gives only an “estimate” of $a_n$ for any given $n$. Regardless, it is valuable. And, finally, for the sake of logical soundness we prove that the form given in the previous proof and the form given in this one are equivalent.

Proof (1): We note once again that

$(m,n_0)=1\Leftrightarrow (m+n_0\ell,n_0)=1,\quad \forall \ell\in\mathbb{N}$

Thus, it is easy to see that

$E_{30}=E_{15}\cup\left\{15+k:k\in E_{15}\right\}$

and

$E_{45}=E_{30}\cup\left\{30+k:k\in E_{30}\right\}=E_{15}\cup \left\{k+15:k\in E_{15}\right\}\cup\left\{k+30:k\in E_{15}\right\}$

from where it easily follows by induction that

$\displaystyle E_{15n}=E_{15}\cup \bigcup_{j=1}^{n-1}\left\{15j+k:k\in E_{15}\right\}$

from which it becomes apparent that

$a_{nn_0}=na_{n_0}$
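Assuming, as in the previous posts, that $E_n=\left\{m\leqslant n:(m,n_0)=1\right\}$ and $a_n=\text{card }E_n$ (this definition is taken from the earlier post, not restated here), this last relation is easy to sanity-check with a short script, taking $n_0=15$ as in the running example:

```python
from math import gcd

def a(n, n0=15):
    # a_n = card E_n where E_n = { m <= n : gcd(m, n0) = 1 }
    # (assumed definition, following the previous post; n0 = 15 here)
    return sum(1 for m in range(1, n + 1) if gcd(m, n0) == 1)

# a_{n * n0} = n * a_{n0} for the first several multiples
for n in range(1, 20):
    assert a(n * 15) == n * a(15)
```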

But, a quick check shows that by definition

$a_{n_0}=\left|\left\{m\leqslant n_0:(m,n_0)=1\right\}\right|=\varphi(n_0)$

where $\varphi$ is Euler’s Totient Function. We next notice that $a_n$ is increasing, since $E_n\subseteq E_{n+1}$. Thus, if $\gamma_{n_0}(n)$ denotes reduction $\text{mod }n_0$, then

$a_{n-\gamma_{n_0}(n)}\leqslant a_{n}\leqslant a_{n+n_0-\gamma_{n_0}(n)}\quad (1)$

But, notice that $n-\gamma_{n_0}(n)\equiv n+n_0-\gamma_{n_0}(n)\equiv 0\text{ mod }n_0$, so that

$\displaystyle \frac{n-\gamma_{n_0}(n)}{n_0},\frac{n+n_0-\gamma_{n_0}(n)}{n_0}$

are integers. Thus, by our previous discussion we may conclude that

$\displaystyle a_{n-\gamma_{n_0}(n)}=a_{n_0\left[\frac{n-\gamma_{n_0}(n)}{n_0}\right]}=\frac{\varphi(n_0)}{n_0}\left[n-\gamma_{n_0}(n)\right]$

and

$\displaystyle a_{n+n_0-\gamma_{n_0}(n)}=a_{n_0\left[\frac{n+n_0-\gamma_{n_0}(n)}{n_0}\right]}=\frac{\varphi(n_0)}{n_0}\left[n+n_0-\gamma_{n_0}(n)\right]$

It follows from these two equations and $(1)$ that

$\displaystyle \frac{\varphi(n_0)}{n_0}\left[1-\frac{\gamma_{n_0}(n)}{n}\right]\leqslant\frac{a_n}{n}\leqslant\frac{\varphi(n_0)}{n_0}\left[1+\frac{n_0-\gamma_{n_0}(n)}{n}\right]$

Thus, since $\gamma_{n_0}(n),n_0-\gamma_{n_0}(n)\leqslant n_0$, we see that

$\displaystyle \lim_{n\to\infty}\frac{\gamma_{n_0}(n)}{n}=\lim_{n\to\infty}\frac{n_0-\gamma_{n_0}(n)}{n}=0$

and we may conclude by the Squeeze Theorem that

$\displaystyle \lim_{n\to\infty}\frac{a_n}{n}=\frac{\varphi(n_0)}{n_0}$
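Both the sandwich bounds and this limit can be checked numerically; here is a minimal sketch, again assuming $a_n=\text{card }\left\{m\leqslant n:(m,n_0)=1\right\}$ and taking $n_0=15$, so that $\varphi(n_0)/n_0=8/15$:

```python
from math import gcd

n0 = 15
phi_n0 = sum(1 for m in range(1, n0 + 1) if gcd(m, n0) == 1)  # phi(15) = 8

def a(n):
    # a_n = card { m <= n : gcd(m, n0) = 1 }  (assumed definition)
    return sum(1 for m in range(1, n + 1) if gcd(m, n0) == 1)

for n in [10, 100, 1000, 10000]:
    g = n % n0  # gamma_{n0}(n), the reduction of n mod n0
    # the sandwich bounds corresponding to (1)
    assert phi_n0 / n0 * (1 - g / n) <= a(n) / n
    assert a(n) / n <= phi_n0 / n0 * (1 + (n0 - g) / n)

# a_n / n approaches phi(n0)/n0 = 8/15
assert abs(a(10**5) / 10**5 - phi_n0 / n0) < 1e-3
```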

But, using basic facts about $\varphi$ we see that if $n_0=p_1^{\alpha_1}\cdots p_k^{\alpha_k}$ then

$\displaystyle \frac{\varphi(n_0)}{n_0}=\frac{1}{n_0}\left(n_0\prod_{j=1}^{k}\left(1-\frac{1}{p_j}\right)\right)=\prod_{j=1}^{k}\left(1-\frac{1}{p_j}\right)$

From where the conclusion follows. $\blacksquare$
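As a quick cross-check of this last identity, one can compare $\varphi(n_0)/n_0$, computed from the defining count, against the product over the distinct prime divisors of $n_0$, using exact rational arithmetic:

```python
from math import gcd
from fractions import Fraction

def phi(n):
    # Euler's totient via the defining count
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def prime_factors(n):
    # distinct prime divisors by trial division
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

for n0 in [12, 15, 30, 100]:
    prod = Fraction(1)
    for p in prime_factors(n0):
        prod *= 1 - Fraction(1, p)
    assert Fraction(phi(n0), n0) == prod
```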

We now endeavor to prove the following in two ways: one combinatorial/probabilistic in nature and the other by induction.

Theorem (1): Let $\{x_1,\cdots,x_n\}\subseteq\mathbb{N}$ and let $S_j=\left\{S\subseteq[n]:|S|=j\right\}$. Then

$\displaystyle 1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_j}\left(\prod_{s\in S}x_s\right)^{-1}=\prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right)$

Proof: For each $x_j$ let $C_j=\left\{(x_j,k):k\in[x_j]\right\}$ and let

$\displaystyle \Lambda=\prod_{j=1}^{n}C_j$

evidently then $\text{card }\Lambda=x_1\cdots x_n$. So, letting $\bold{x}$ be chosen uniformly at random from $\Lambda$ and letting, as usual, $\mathbb{P}(A)$ denote the probability of the event $A$, we see that

\displaystyle \begin{aligned}\mathbb{P}\left(\pi_j(\bold{x})\ne (x_j,1)\right) &= 1-\mathbb{P}\left(\pi_j\left(\bold{x}\right)=(x_j,1)\right)\\ &= 1-\frac{\prod_{i\ne j}x_i}{\prod_{i}x_i}\\ &= 1-\frac{1}{x_j}\end{aligned}

where $\pi_j$ is the projection onto the $j$th coordinate. Thus, since the coordinates of $\bold{x}$ are independent,

$\displaystyle \mathbb{P}\left(\bigwedge_{j=1}^{n}\left(\pi_{j}(\bold{x})\ne(x_j,1)\right)\right)=\prod_{j=1}^{n}\mathbb{P}\left(\pi_j(\bold{x})\ne(x_j,1)\right)=\prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right)$

But notice that

$\displaystyle \mathbb{P}\left(\bigwedge_{j=1}^{n}\left(\pi_{j}(\bold{x})\ne(x_j,1)\right)\right)=1-\mathbb{P}\left(\bigvee_{j=1}^{n}\left(\pi_j(\bold{x})=(x_j,1)\right)\right)$

but the inclusion-exclusion principle states that

\displaystyle \begin{aligned}1-\mathbb{P}\left(\bigvee_{j=1}^{n}\left(\pi_j(\bold{x})= (x_j,1)\right)\right) &=1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_j}\mathbb{P}\left(\bigwedge_{s\in S}\left(\pi_s(\bold{x})= (x_s,1)\right)\right)\\ &=1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_j}\prod_{s\in S}\mathbb{P}\left(\pi_s(\bold{x})= (x_s,1)\right)\\ &= 1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_j}\prod_{s\in S}\frac{1}{x_s}\end{aligned}

From where it follows that

\displaystyle \begin{aligned}\prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right) &= \mathbb{P}\left(\bigwedge_{j=1}^{n}\left(\pi_j(\bold{x})\ne(x_j,1)\right)\right)\\ &= 1-\mathbb{P}\left(\bigvee_{j=1}^{n}\left(\pi_{j}(\bold{x})= (x_j,1)\right)\right)\\ &= 1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_j}\prod_{s\in S}\frac{1}{x_s}\end{aligned}

which is what was to be proven. $\blacksquare$
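Since the identity involves only finitely many rational terms, it can also be verified mechanically for small inputs using exact arithmetic; a quick sketch (not a substitute for the proof — the helper names here are mine):

```python
from fractions import Fraction
from itertools import combinations

def alternating_sum(xs):
    # 1 - sum_{j=1}^{n} (-1)^{j+1} sum_{|S| = j} 1 / prod_{s in S} x_s
    total = Fraction(1)
    for j in range(1, len(xs) + 1):
        for S in combinations(xs, j):
            prod = Fraction(1)
            for x in S:
                prod *= x
            total -= (-1) ** (j + 1) * Fraction(1) / prod
    return total

def product_form(xs):
    # prod_{j=1}^{n} (1 - 1/x_j)
    result = Fraction(1)
    for x in xs:
        result *= 1 - Fraction(1, x)
    return result

for xs in [[2], [3, 5], [2, 3, 5, 7], [6, 10, 15]]:
    assert alternating_sum(xs) == product_form(xs)
```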

Remark: I can be described, and only by the most generous, as an amateur combinatorialist. There is a good chance I made some stupid error above; if so, I would like to hear about it. 🙂

Theorem: Let $\{x_1,\cdots,x_n\}\subseteq\mathbb{R}-\{0\}$, then

$\displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right)=1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}$

Proof: In this proof we slightly extend our notation so that

$S_{j,n}=\left\{S\subseteq[n]:|S|=j\right\}$

We proceed by induction. For $n=1$ this is clear. So, assume that

$\displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right)=1-\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}$

We note that

$S_{j,n+1}=S_{j,n}\cup \left\{S\cup\{n+1\}:S\in S_{j-1,n}\right\}$
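This decomposition of $S_{j,n+1}$, a disjoint union according to whether a $j$-subset of $[n+1]$ contains $n+1$ or not, is easy to confirm computationally; a small sketch:

```python
from itertools import combinations

def S(j, n):
    # S_{j,n} = { S subset of [n] : |S| = j }, subsets as sorted tuples
    return set(combinations(range(1, n + 1), j))

for n in range(1, 6):
    for j in range(1, n + 2):
        # subsets of [n+1] of size j that contain n+1,
        # obtained by adjoining n+1 to each member of S_{j-1,n}
        with_new = {s + (n + 1,) for s in S(j - 1, n)}
        assert S(j, n + 1) == S(j, n) | with_new
```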

Thus, we note that

$\displaystyle \sum_{j=1}^{n+1}(-1)^{j+1}\sum_{S\in S_{j,n+1}}\left(\prod_{s\in S}x_s\right)^{-1}=\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}+\frac{1}{x_{n+1}}+\sum_{j=2}^{n+1}(-1)^{j+1}\sum_{S\in S_{j-1,n}}\left(\prod_{s\in S\cup\{n+1\}}x_s\right)^{-1}$

but we may rewrite the right hand side as

$\displaystyle \sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}+\frac{1}{x_{n+1}}+\sum_{j=2}^{n+1}(-1)^{j+1}\sum_{S\in S_{j-1,n}}\left(x_{n+1}\prod_{s\in S}x_s\right)^{-1}$

But, changing the index on the right sum gives

$\displaystyle \sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}+\frac{1}{x_{n+1}}+\sum_{j=1}^{n}(-1)^{j+2}\sum_{S\in S_{j,n}}\left(x_{n+1}\prod_{s\in S}x_s\right)^{-1}$

and finally, noting that $(-1)^{j+2}=-(-1)^{j+1}$ and factoring $\frac{1}{x_{n+1}}$ out of this right-most sum gives

$\displaystyle \sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}+\frac{1}{x_{n+1}}-\frac{1}{x_{n+1}}\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}$

thus, calling $\displaystyle F(n)=\sum_{j=1}^{n}(-1)^{j+1}\sum_{S\in S_{j,n}}\left(\prod_{s\in S}x_s\right)^{-1}$ and using this and our induction hypothesis we see that

\displaystyle \begin{aligned}1-F(n+1) &= 1-F(n)-\frac{1}{x_{n+1}}+\frac{1}{x_{n+1}}F(n)\\ &= \left(1-\frac{1}{x_{n+1}}\right)\left(1-F(n)\right)\\ &= \left(1-\frac{1}{x_{n+1}}\right)\prod_{j=1}^{n}\left(1-\frac{1}{x_j}\right)\\ &= \prod_{j=1}^{n+1}\left(1-\frac{1}{x_j}\right)\end{aligned}

and thus the induction is complete. $\blacksquare$

References: For more information on Euler’s Totient Function, and number theory in general see:

1. Niven, Ivan, Herbert S. Zuckerman, and Hugh L. Montgomery. An Introduction to the Theory of Numbers. New York: Wiley, 1991. Print.