How to prove this using Abel's summation formula?

In summary, Abel's summation formula can be applied by setting ##f(x)=\frac{1}{\log {x}}## and ##a(n)=\frac{\log {n}}{n}## for primes congruent to a given residue ##h## mod ##k##, and ##0## otherwise. Combining Dirichlet's theorem with Abel's summation formula then gives ##\sum_{p\leq x,\ p\equiv h\pmod{k}}\frac{1}{p}=\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}})## for some constant ##A##. Specializing to ##(h,k)=(3,10)##, where ##\varphi(10)=4##, yields the factor ##\frac{1}{4}## in the problem statement.
  • #1
Math100
Homework Statement
Prove that ## \sum_{\substack{prime p\leq x \\ p\equiv 3\pmod {10}}}\frac{1}{p}=\frac{1}{4}\log\log {x}+A+O(\frac{1}{\log {x}}) ##, for some constant ## A ##.
Relevant Equations
Suppose ## gcd(h, k)=1 ## for ## k>0 ##. Then, there exists a constant ## A ## such that for all ## x\geq 2 ##:
## \sum_{\substack{prime p\leq x \\ p\equiv h\pmod {k}}}\frac{1}{p}=\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}}) ##.

Abel's summation formula:
If ## A(x)=\sum_{n\leq x}a(n) ##, then ## \sum_{y<n\leq x}a(n)f(n)=A(x)f(x)-A(y)f(y)-\int_{y}^{x}A(t)f'(t)dt ##.
Before I apply Abel's summation formula, how should I choose ## f(x) ##?
 
  • #2
Abel's summation formula goes
$$
\sum_{x\leq n\leq y}a_n f(n)=A(y)f(y)-A(x)f(x)-\int_x^y A(t)f'(t)\,dt
$$
with ##A(x)=\displaystyle{\sum_{0<n\leq x}}a_n##, where ##(a_n)## is a sequence of real or complex numbers and ##f(t)## is any continuously differentiable function on ##[x,y]##. The choice of ##f## is yours.

Where does the important equation under "relevant equations" come from?

https://en.wikipedia.org/wiki/Abel's_summation_formula
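The identity can be sanity-checked numerically. Below is a minimal Python sketch (my own illustration, not from the thread); the choices ##a_n=1/n## and ##f(t)=1/\log t## are arbitrary:

```python
import math

# Numerical sanity check (not a proof) of Abel's summation formula:
#   sum_{x<n<=y} a_n f(n) = A(y)f(y) - A(x)f(x) - int_x^y A(t) f'(t) dt
# with A(t) = sum_{0<n<=t} a_n.  Here a_n = 1/n, f(t) = 1/log(t).

def partial_sum(t):
    return sum(1.0 / n for n in range(1, math.floor(t) + 1))

def f(t):
    return 1.0 / math.log(t)

x, y = 2, 50

lhs = sum((1.0 / n) * f(n) for n in range(x + 1, y + 1))

# A(t) is constant on each interval [n, n+1), so the integral can be
# computed exactly: int_n^{n+1} A(t) f'(t) dt = A(n) * (f(n+1) - f(n)).
integral = sum(partial_sum(n) * (f(n + 1) - f(n)) for n in range(x, y))

rhs = partial_sum(y) * f(y) - partial_sum(x) * f(x) - integral
print(abs(lhs - rhs))  # agrees to floating-point precision
```

Since ##A## is a step function, the integral telescopes exactly over the integer subintervals, so both sides agree up to rounding error.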
 
  • #3
fresh_42 said:
Abel's summation formula goes
$$
\sum_{x\leq n\leq y}a_n f(n)=A(y)f(y)-A(x)f(x)-\int_x^y A(t)f'(t)\,dt
$$
with ##A(x)=\displaystyle{\sum_{0<n\leq x}}a_n##, where ##(a_n)## is a sequence of real or complex numbers and ##f(t)## is any continuously differentiable function on ##[x,y]##. The choice of ##f## is yours.

Where does the important equation under "relevant equations" come from?

https://en.wikipedia.org/wiki/Abel's_summation_formula
From the textbook and notes. But if ##f(t)## can be any on ##[x, y]##, then what would you choose? How to choose this ##f(t)## function wisely?
 
  • #4
Dirichlet's theorem on arithmetic progressions says
$$
\sum_{\stackrel{p\text{ prime }}{p\equiv h\pmod{k}\, , \,(h,k)=1}}\dfrac{1}{p}=\infty
$$
Your theorem says
$$
S(k,x):=\sum_{\stackrel{p\text{ prime }\leq x}{p\equiv h\pmod{k}\, , \,(h,k)=1}}\dfrac{1}{p}=\dfrac{1}{\varphi(k)}\log\log {x}+A+O\left(\dfrac{1}{\log {x}}\right)
$$
so the boundary ##\leq x## changes everything.

Before we choose ##f(t)##, let's look at the sequence, just to get an impression:
\begin{matrix}
3& 13& 23& 43& 53& 73& 83& 103& 113& 163& 173& 193& 223& 233\\
263& 283& 293& 313& 353& 373& 383& 433& 443& 463&503& 523& 563& 593\\
613&643 &653& 673& 683& 733& 743& 773& 823& 853& 863& 883& 953& 983\\
1013& 1033& 1063& 1093& 1103& 1123& 1153& 1163& 1193& 1213& 1223& 1283& 1303& 1373\\
1423& 1433& 1453&1483& 1493& 1523& 1543& 1553&\ldots &&&&&&
\end{matrix}

The prime number theorem gives us ##\displaystyle{\lim_{x \to \infty}\dfrac{\pi(x)\log(x)}{x}=1}##, where the prime-counting function is ##\displaystyle{\pi(x)=\sum_{\stackrel{p\text{ prime }}{p\leq x}}1}.## The ##n##-th prime number can be approximated by ##p(n)\sim n\log(n)##; the relative error approaches zero as ##n## grows.
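As an aside (my own quick illustration, not part of the argument), a simple sieve shows how slowly ##\pi(x)\log x/x## approaches ##1##:

```python
import math

# Compute pi(x) with a Sieve of Eratosthenes and compare
# pi(x) * log(x) / x with its limit 1 (the prime number theorem).

def sieve(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return flags

limit = 10 ** 5
pi_x = sum(sieve(limit))
ratio = pi_x * math.log(limit) / limit
print(pi_x, ratio)  # 9592, ~1.10: the convergence is slow
```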

We set ##h=3## and ##k=10.## This gives us from your theorem
$$
S(10,x)= \dfrac{1}{4}\log\log {x}+A+O\left(\dfrac{1}{\log {x}}\right)
$$
since ##\varphi (10)=|\{x\pmod{10}\,|\,(x,10)=1\}|=|\{1,3,7,9\}|=4.##
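The value ##\varphi(10)=4## can be confirmed directly from the definition above (a trivial Python check, my own addition):

```python
from math import gcd

# Euler's totient by the definition used above: count the residues
# 1..k that are coprime to k.  For k = 10 these are {1, 3, 7, 9}.
def phi(k):
    return sum(1 for x in range(1, k + 1) if gcd(x, k) == 1)

print(phi(10))  # 4
```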

Do you want to prove the theorem?
 
  • #5
fresh_42 said:
Dirichlet's theorem on arithmetic progressions says
$$
\sum_{\stackrel{p\text{ prime }}{p\equiv h\pmod{k}\, , \,(h,k)=1}}\dfrac{1}{p}=\infty
$$
Your theorem says
$$
S(k,x):=\sum_{\stackrel{p\text{ prime }\leq x}{p\equiv h\pmod{k}\, , \,(h,k)=1}}\dfrac{1}{p}=\dfrac{1}{\varphi(k)}\log\log {x}+A+O\left(\dfrac{1}{\log {x}}\right)
$$
so the boundary ##\leq x## changes everything.

Before we choose ##f(t)##, let's look at the sequence, just to get an impression:
\begin{matrix}
3& 13& 23& 43& 53& 73& 83& 103& 113& 163& 173& 193& 223& 233\\
263& 283& 293& 313& 353& 373& 383& 433& 443& 463&503& 523& 563& 593\\
613&643 &653& 673& 683& 733& 743& 773& 823& 853& 863& 883& 953& 983\\
1013& 1033& 1063& 1093& 1103& 1123& 1153& 1163& 1193& 1213& 1223& 1283& 1303& 1373\\
1423& 1433& 1453&1483& 1493& 1523& 1543& 1553&\ldots &&&&&&
\end{matrix}

The prime number theorem gives us ##\displaystyle{\lim_{x \to \infty}\dfrac{\pi(x)\log(x)}{x}=1}##, where the prime-counting function is ##\displaystyle{\pi(x)=\sum_{\stackrel{p\text{ prime }}{p\leq x}}1}.## The ##n##-th prime number can be approximated by ##p(n)\sim n\log(n)##; the relative error approaches zero as ##n## grows.

We set ##h=3## and ##k=10.## This gives us from your theorem
$$
S(10,x)= \dfrac{1}{4}\log\log {x}+A+O\left(\dfrac{1}{\log {x}}\right)
$$
since ##\varphi (10)=|\{x\pmod{10}\,|\,(x,10)=1\}|=|\{1,3,7,9\}|=4.##

Do you want to prove the theorem?
Before I try to prove the theorem, I want to know: how did you get those numbers from ## 3 ## to ## 1553 ## in the sequence?
 
  • #6
Math100 said:
Before I try to prove the theorem, I want to know: how did you get those numbers from ## 3 ## to ## 1553 ## in the sequence?
That was the easy part.

I copied the list up to ~1600 from https://de.wikibooks.org/wiki/Primzahlen:_Tabelle_der_Primzahlen_(2_-_100.000),
put it into an editor (textpad), changed the comma (,) globally to ampersand (&), dropped it in a begin-end-matrix environment here, deleted all numbers that did not end on 3, and finally inserted a newline command (\\) every fourteen primes.
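Alternatively (my own sketch), the table can be generated programmatically rather than by editing a copied list:

```python
# Reproduce the table: sieve the primes up to ~1600 and keep those
# ending in the digit 3, i.e. p = 3 (mod 10).

def primes_up_to(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [n for n, ok in enumerate(flags) if ok]

seq = [p for p in primes_up_to(1600) if p % 10 == 3]
print(seq[:5])  # [3, 13, 23, 43, 53]
```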

If you want to prove the theorem, then why for ##(h,k)=(3,10)## and not generally for ##(h,k)##? And wasn't it proven in the book? You said it is from there.

I haven't found the exact version of Dirichlet's prime number theorem that your book lists, but the one I did find is heavy machinery, and I seriously doubt that your version or the specialization ##(h,k)=(3,10)## makes it much easier than that:
https://www.glk.uni-mainz.de/files/2018/08/Nickel.pdf (theorem 4.16, page 23/23)
 
  • #7
fresh_42 said:
That was the easy part.

I copied the list up to ~1600 from https://de.wikibooks.org/wiki/Primzahlen:_Tabelle_der_Primzahlen_(2_-_100.000),
put it into an editor (textpad), changed the comma (,) globally to ampersand (&), dropped it in a begin-end-matrix environment here, deleted all numbers that did not end on 3, and finally inserted a newline command (\\) every fourteen primes.

If you want to prove the theorem, then why for ##(h,k)=(3,10)## and not generally for ##(h,k)##? And wasn't it proven in the book? You said it is from there.

I haven't found the exact version of Dirichlet's prime number theorem that your book lists, but the one I did find is heavy machinery, and I seriously doubt that your version or the specialization ##(h,k)=(3,10)## makes it much easier than that:
https://www.glk.uni-mainz.de/files/2018/08/Nickel.pdf (theorem 4.16, page 23/23)
The following proof is from the book:

Let ## f(x)=\frac{1}{\log {x}} ## and ## a(n)=\frac{\log {n}}{n} ## if ## n\equiv h\pmod {k} ## is prime
and ## 0 ## otherwise.
By Dirichlet's Theorem, we have ## \sum_{n\leq x}a(n)=\frac{1}{\varphi(k)}\log {x}+R(x) ##,
where ## R(x)=O(1) ##.
Applying Abel's summation formula,
## \sum_{\substack{prime p\leq x \\ p\equiv h\pmod {k}}}\frac{1}{p}=\sum_{1<n\leq x}a(n)f(n) ##
## =\frac{1}{\log {x}}(\frac{1}{\varphi(k)}\log {x}+O(1))+\int_{2}^{x}\frac{1}{t\log^{2} {t}}(\frac{1}{\varphi(k)}\log {t}+R(t))dt ##
## =\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{1}{\varphi(k)}\int_{2}^{x}\frac{dt}{t\log {t}}+\int_{2}^{\infty}\frac{R(t)}{t\log^{2} {t}}dt-\int_{x}^{\infty}\frac{R(t)}{t\log^{2} {t}}dt ##
## =\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{\log\log {x}-\log\log {2}}{\varphi(k)}+C+O(\int_{x}^{\infty}\frac{dt}{t\log^{2} {t}}) ##
## =\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}}) ##.
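As a numerical illustration of the final estimate (my own sketch, separate from the book's proof): the quantity ##\sum_{p\leq x,\ p\equiv 3\pmod{10}}\frac{1}{p}-\frac{1}{4}\log\log x## should settle near the constant ##A##.

```python
import math

# Compare S(x) = sum over primes p <= x with p = 3 (mod 10) of 1/p
# against (1/4) log log x; the difference should stabilize (~ A).

def sieve(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return flags

limit = 10 ** 6
flags = sieve(limit)

diffs = []
for x in (10 ** 5, 10 ** 6):
    s = sum(1.0 / p for p in range(2, x + 1) if flags[p] and p % 10 == 3)
    diffs.append(s - 0.25 * math.log(math.log(x)))

print(diffs)  # the two values should be close to each other
```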

But I don't understand the first part of this proof. Why let ## f(x)=\frac{1}{\log {x}} ## and ## a(n)=\frac{\log {n}}{n} ##?
 
  • #8
Math100 said:
The following proof below is from the book:

Let ## f(x)=\frac{1}{\log {x}} ## and ## a(n)=\frac{\log {n}}{n} ## if ## n\equiv h\pmod {k} ## is prime
and ## 0 ## otherwise.
By Dirichlet's Theorem, we have ## \sum_{n\leq x}a(n)=\frac{1}{\varphi(k)}\log {x}+R(x) ##,
where ## R(x)=O(1) ##.
Yes, this was the difficult part that I thought you wanted to prove. It contains the so-called Dirichlet density ##\varphi (k)^{-1}##, whose origin isn't easy to see.
Math100 said:
Applying Abel's summation formula, ## \sum_{\substack{prime p\leq x \\ p\equiv h\pmod {k}}}\frac{1}{p}=\sum_{1<n\leq x}a(n)f(n) ##
## =\frac{1}{\log {x}}(\frac{1}{\varphi(k)}\log {x}+O(1))+\int_{2}^{x}\frac{1}{t\log^{2} {t}}(\frac{1}{\varphi(k)}\log {t}+R(t))dt ##
## =\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{1}{\varphi(k)}\int_{2}^{x}\frac{dt}{t\log {t}}+\int_{2}^{\infty}\frac{R(t)}{t\log^{2} {t}}dt-\int_{x}^{\infty}\frac{R(t)}{t\log^{2} {t}}dt ##
## =\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{\log\log {x}-\log\log {2}}{\varphi(k)}+C+O(\int_{x}^{\infty}\frac{dt}{t\log^{2} {t}}) ##
## =\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}}) ##.

But I don't understand the first part of this proof. Why let ## f(x)=\frac{1}{\log {x}} ## and ## a(n)=\frac{\log {n}}{n} ##?

We want to apply Dirichlet's theorem in its classical form and sharpen the error bound of its estimate. The classical version reads (with ##a_n## as in your textbook)
$$
\sum_{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{\log p}{p}}=
\sum_{0<n\leq x} a_n
={\frac{\log(x)}{\varphi (k)}}+R(x)
$$
The problem is that we cannot really work with that sum on the left that contains so many gaps compared to all integers ##\{1,2,\ldots,[x]\}.## So we will start with a replacement
$$
\sum _{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod {k}}}{\frac {1}{p}}=\sum_{n=1}^{[x]}b_n
$$
We would like to set ##b_n=1/n## but we have to consider the gaps. Hence we define
$$
b_n=\begin{cases}1/n&\text{ if }n\leq x \text{ is prime and }n\equiv h{\pmod {k}}\\0&\text{ else }\end{cases}
$$
Our next difficulty is that we only have an estimation
$$
\sum_{0<n\leq x}a_n=\sum _{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{\log p}{p}}
$$
So we have ##b_p=\dfrac{1}{p}=\dfrac{a_p}{\log p}## which also holds for the zeros, so ##b_n=\dfrac{1}{n}=\dfrac{a_n}{\log n}## bridging the gaps. The difference is thus a factor ##f(n)=\frac{1}{\log n}## that we can use in Abel's summation formula since we can choose ##f(n)## as we like as long as it is continuously differentiable. All in all we have now - and note that ##a_1=0##
\begin{align*}
\sum_{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{1}{p}}&=
\sum_{n=1}^{[x]}b_n=\sum_{n=1}^{[x]}\dfrac{a_n}{\log n}=\sum_{n=1}^{[x]}a_nf(n)\\
&\stackrel{\text{Abel}}{=}\left(\sum_{n=1}^xa_n\right)f(x)-\left(\sum_{n=1}^1 a_n\right)f(1)-\int_1^x \left(\sum_{n=1}^t a_n\right)f'(t)\,dt \\
&\stackrel{\text{Dirichlet}}{=}\left({\frac{\log(x)}{\varphi (k)}}+R(x)\right)\cdot \dfrac{1}{\log x}-\int_2^x\left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\left(\dfrac{1}{\log t}\right)'\,dt\\
&= \dfrac{1}{\log x}\left({\frac{\log(x)}{\varphi (k)}}+O(1)\right)+\int_2^x \left(\dfrac{1}{t \log^2 t}\right) \left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\,dt\\
&\phantom{=}\ldots
\end{align*}
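The input from Dirichlet's theorem, ##\sum_{n\leq x}a_n=\frac{\log x}{\varphi(k)}+R(x)## with ##R(x)=O(1)##, can also be illustrated numerically (my own sketch) for ##(h,k)=(3,10)##:

```python
import math

# Check that sum_{p<=x, p=3 mod 10} (log p)/p stays within a bounded
# distance of (log x)/phi(10) = (log x)/4, i.e. that R(x) = O(1).

def sieve(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return flags

x = 10 ** 6
flags = sieve(x)
total = sum(math.log(p) / p for p in range(2, x + 1)
            if flags[p] and p % 10 == 3)
remainder = total - math.log(x) / 4  # this is R(x), expected O(1)
print(total, remainder)
```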
 
  • #9
fresh_42 said:
Yes, this was the difficult part that I thought you wanted to prove. It contains the so-called Dirichlet density ##\varphi (k)^{-1}##, whose origin isn't easy to see. We want to apply Dirichlet's theorem in its classical form and sharpen the error bound of its estimate. The classical version reads (with ##a_n## as in your textbook)
$$
\sum_{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{\log p}{p}}=
\sum_{0<n\leq x} a_n
={\frac{\log(x)}{\varphi (k)}}+R(x)
$$
The problem is that we cannot really work with that sum on the left that contains so many gaps compared to all integers ##\{1,2,\ldots,[x]\}.## So we will start with a replacement
$$
\sum _{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod {k}}}{\frac {1}{p}}=\sum_{n=1}^{[x]}b_n
$$
We would like to set ##b_n=1/n## but we have to consider the gaps. Hence we define
$$
b_n=\begin{cases}1/n&\text{ if }n\leq x \text{ is prime and }n\equiv h{\pmod {k}}\\0&\text{ else }\end{cases}
$$
Our next difficulty is that we only have an estimation
$$
\sum_{0<n\leq x}a_n=\sum _{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{\log p}{p}}
$$
So we have ##b_p=\dfrac{1}{p}=\dfrac{a_p}{\log p}## which also holds for the zeros, so ##b_n=\dfrac{1}{n}=\dfrac{a_n}{\log n}## bridging the gaps. The difference is thus a factor ##f(n)=\frac{1}{\log n}## that we can use in Abel's summation formula since we can choose ##f(n)## as we like as long as it is continuously differentiable. All in all we have now - and note that ##a_1=0##
\begin{align*}
\sum_{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{1}{p}}&=
\sum_{n=1}^{[x]}b_n=\sum_{n=1}^{[x]}\dfrac{a_n}{\log n}=\sum_{n=1}^{[x]}a_nf(n)\\
&\stackrel{\text{Abel}}{=}\left(\sum_{n=1}^xa_n\right)f(x)-\left(\sum_{n=1}^1 a_n\right)f(1)-\int_1^x \left(\sum_{n=1}^t a_n\right)f'(t)\,dt \\
&\stackrel{\text{Dirichlet}}{=}\left({\frac{\log(x)}{\varphi (k)}}+R(x)\right)\cdot \dfrac{1}{\log x}-\int_2^x\left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\left(\dfrac{1}{\log t}\right)'\,dt\\
&= \dfrac{1}{\log x}\left({\frac{\log(x)}{\varphi (k)}}+O(1)\right)+\int_2^x \left(\dfrac{1}{t \log^2 t}\right) \left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\,dt\\
&\phantom{=}\ldots
\end{align*}
Just to confirm, continuing from where you left off:
\begin{align*}
&\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\int_{2}^{x}(\frac{1}{t\log {t}}\cdot \frac{1}{\varphi(k)}+\frac{R(t)}{t\log^2 {t}})dt\\
&=\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{1}{\varphi(k)}\int_{2}^{x}\frac{dt}{t\log {t}}+\int_{2}^{\infty}\frac{R(t)}{t\log^2 {t}}dt-\int_{x}^{\infty}\frac{R(t)}{t\log^2 {t}}dt\\
&=\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{\log\log {x}-\log\log {2}}{\varphi(k)}+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}(1+\log\log {x}-\log\log {2})+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}\log\log {x}+\frac{1}{\varphi(k)}(1-\log\log {2})+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}})\\
\end{align*}
where ## A=\frac{1}{\varphi(k)}(1-\log\log {2})+C ##.
Is this correct?
 
  • #10
Math100 said:
Just to confirm, continuing from where you left off:
\begin{align*}
&\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\int_{2}^{x}(\frac{1}{t\log {t}}\cdot \frac{1}{\varphi(k)}+\frac{R(t)}{t\log^2 {t}})dt\\
&=\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{1}{\varphi(k)}\int_{2}^{x}\frac{dt}{t\log {t}}+\int_{2}^{\infty}\frac{R(t)}{t\log^2 {t}}dt-\int_{x}^{\infty}\frac{R(t)}{t\log^2 {t}}dt\\
&=\frac{1}{\varphi(k)}+O(\frac{1}{\log {x}})+\frac{\log\log {x}-\log\log {2}}{\varphi(k)}+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}(1+\log\log {x}-\log\log {2})+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}\log\log {x}+\frac{1}{\varphi(k)}(1-\log\log {2})+C+O(\frac{1}{\log {x}})\\
&=\frac{1}{\varphi(k)}\log\log {x}+A+O(\frac{1}{\log {x}})\\
\end{align*}
where ## A=\frac{1}{\varphi(k)}(1-\log\log {2})+C ##.
Is this correct?
Looks ok, modulo a few typos. Let me explain the calculation (explanations in brackets ##[\,\cdot\,]## at the end of a line):

\begin{align*}
\sum_{{p\ {\text{ prime }} \atop p\leq x} \atop p\equiv h{\pmod{k}}}{\frac{1}{p}}&=
\sum_{n=1}^{[x]}b_n=\sum_{n=1}^{[x]}\dfrac{a_n}{\log n}=\sum_{n=1}^{[x]}a_nf(n)\\
&\stackrel{\text{Abel}}{=}\left(\sum_{n=1}^xa_n\right)f(x)-\left(\sum_{n=1}^1 a_n\right)f(1)-\int_1^x \left(\sum_{n=1}^t a_n\right)f'(t)\,dt \\
&\stackrel{\text{Dirichlet}}{=}\left({\frac{\log(x)}{\varphi (k)}}+R(x)\right)\cdot \dfrac{1}{\log x}-\int_2^x\left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\left(\dfrac{1}{\log t}\right)'\,dt\\
&= \dfrac{1}{\log x}\left({\frac{\log(x)}{\varphi (k)}}+O(1)\right)+\int_2^x \left(\dfrac{1}{t \log^2 t}\right) \left({\frac{\log(t)}{\varphi (k)}}+R(t)\right)\,dt\\
&= \dfrac{1}{\varphi (k)}+O\left(\dfrac{1}{\log x}\right)+\ldots \quad \left[\dfrac{1}{\log x}\cdot O(1)=\dfrac{1}{\log x}\cdot C=\dfrac{C}{\log x}=O\left(\dfrac{1}{\log x}\right)\right]\\[12pt]
\ldots &+\dfrac{1}{\varphi (k)}\int_2^x \dfrac{1}{t \log t}\,dt+ C \int_2^x \dfrac{1}{t \log^2 t}\,dt \;\ldots\quad \left[R(t)=O(1)=C\right]\\[12pt]
&=\dfrac{1-\log\log 2}{\varphi (k)}+O\left(\dfrac{1}{\log x}\right)+\dfrac{\log\log x}{\varphi (k)}+\ldots \\[12pt]
&\quad\quad\quad\left[\dfrac{d}{dt}\log (\log (t))=\dfrac{1}{t\log t}\Rightarrow \int_2^x \dfrac{1}{t \log t}\,dt=\log (\log (x))-\log\log (2)\right]\\[12pt]
&\quad\quad\quad\left[\text{ and }\log\log 2 \text{ joins the first term}\right]\\[12pt]
\ldots &+ C\cdot \left(\underbrace{\int_2^\infty \dfrac{1}{t \log^2 t}\,dt}_{=\dfrac{1}{\log 2}}- \underbrace{\int_x^\infty \dfrac{1}{t \log^2 t}\,dt}_{=\dfrac{1}{\log x}}\right)\;\ldots\quad \left[\int_a^b =\int_a^\infty -\int_x^\infty \right]\\[12pt]
&=\dfrac{1-\log\log 2}{\varphi (k)}+O\left(\dfrac{1}{\log x}\right)+\dfrac{\log\log x}{\varphi (k)}+\underbrace{\dfrac{C}{\log 2}}_{=:A}-\dfrac{C}{\log x}\\[12pt]
&=\dfrac{\log\log x}{\varphi (k)}+A+O\left(\dfrac{1}{\log x}\right)+\dfrac{1-\log\log 2}{\varphi (k)}\;\ldots\quad\\[12pt]
&\quad\quad\quad\left[O\left(\log^{-1}(x)\right)-C\log^{-1}(x)=(C'-C)\log^{-1}(x)=O(\log^{-1}(x))\right]
\end{align*}
I hesitate to decide where to put ##\dfrac{1-\log\log 2}{\varphi (k)}##.

If we pushed it into the first term ##\dfrac{\log\log x}{\varphi (k)}##, we would get some constant multiple ##\gamma \dfrac{\log\log x}{\varphi (k)}=O\left(\dfrac{\log\log x}{\varphi (k)}\right)##, which would ruin our nice first term, and that term is the entire reason we did this.

We cannot put it into ##O\left(\dfrac{1}{\log x}\right)## because that term gets small as ##x## increases, whereas ##\dfrac{1-\log\log 2}{\varphi (k)}## is independent of ##x##. It would ruin the error bound.

If we put it into ##A##, then we cheat a little bit: why keep ##\varphi (k)## explicit in the first term if we treat it as a constant inside ##A##? But ##k## is a constant, so ##A## is probably a good place for it. Thus
$$
A=\dfrac{C}{\log 2}+\dfrac{1-\log\log 2}{\varphi (k)}
$$
##C## came in as ##R(t)=O(1)##, which is already quite arbitrary. I guess that's why it's best that ##A## swallows ##\dfrac{1-\log\log 2}{\varphi (k)}## as another additive constant, approximately ##1.37/\varphi (k)##.
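The two integral evaluations used above, plus the numeric size of ##1-\log\log 2##, can be checked with a quick midpoint-rule computation (my own sketch); both integrals follow from the substitution ##u=\log t##:

```python
import math

# Verify numerically:
#   int_2^x  dt/(t log t)    = log log x - log log 2
#   int_x^B  dt/(t log^2 t)  = 1/log x - 1/log B  (-> 1/log x as B -> inf)
# and evaluate 1 - log log 2, the constant absorbed into A.

def midpoint(g, a, b, n=400_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x, B = 100.0, 10_000.0

i1 = midpoint(lambda t: 1.0 / (t * math.log(t)), 2.0, x)
i2 = midpoint(lambda t: 1.0 / (t * math.log(t) ** 2), x, B)

print(abs(i1 - (math.log(math.log(x)) - math.log(math.log(2.0)))))  # ~0
print(abs(i2 - (1.0 / math.log(x) - 1.0 / math.log(B))))            # ~0
print(1.0 - math.log(math.log(2.0)))  # ~1.3665
```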
 

FAQ: How to prove this using Abel's summation formula?

What is Abel's summation formula?

Abel's summation formula (also called Abel's partial summation) rewrites a sum of the form ##\sum a_n f(n)## as boundary terms plus an integral involving the partial sums ##A(x)=\sum_{n\leq x}a_n## and the derivative ##f'##. Trading a discrete sum for an integral often makes the sum easier to estimate.

How do I use Abel's summation formula?

First write each summand as a product ##a_n f(n)##, where the partial sums ##A(x)## of the weights ##a_n## can be estimated and ##f## is smooth. Then apply the formula ##\sum_{y<n\leq x}a_nf(n)=A(x)f(x)-A(y)f(y)-\int_y^x A(t)f'(t)\,dt## and estimate the boundary terms and the integral.

What is the purpose of using Abel's summation formula?

The purpose is to estimate sums that would be difficult to handle directly. It expresses the sum in terms of an integral of the partial sums, which often yields asymptotics with explicit error terms, as in the theorem discussed in this thread.

What are the conditions for using Abel's summation formula?

The main condition is that ##f## be continuously differentiable on the interval of summation; the weights ##a_n## can be arbitrary real or complex numbers. Convergence of the infinite series is not required, since the formula is an identity for finite partial sums.

Can Abel's summation formula be used for all infinite series?

The formula itself applies to any sum of the form ##\sum a_nf(n)## with ##f## continuously differentiable. It is only useful, however, when the partial sums ##A(x)## can be estimated well, as Dirichlet's theorem provides here.
