Convergence of q-series for rational x values

In summary: the ratio test shows that the series converges for |x| < 1. Citing the geometric-series formula by itself is not a proof, though, since that formula already assumes |x| < 1; assuming it just to get the result you want is circular.
  • #1
ChrisVer
Gold Member

Homework Statement



For 0<q<∞, and x rational, for what x values does the series converge?
[itex] \sum_{n=0}^{∞} q^{1/n} x^n[/itex]

The Attempt at a Solution


I don't know which method works best for this.
 
  • #2
Ratio test. Then check the boundary points.
 
  • #3
Next time, run lots of trials with other methods so you learn from your mistakes and can figure out the optimal method yourself. Asking people for help won't give you the intuition necessary to apply these methods during exams.
 
  • #4
The ratio test doesn't seem to work:
[itex] \frac{ q^{\frac{1}{n+1}} x^{n+1}}{ q^{\frac{1}{n}} x^{n}}[/itex]
doesn't simplify to anything clean...
[itex] \frac{q^{\frac{1}{n+1}}}{q^{\frac{1}{n}}} x[/itex]

What happens with q^(1/n)?
I'll have
[itex] q^{1/n} \rightarrow 1 [/itex] as n goes to infinity... but does this help me find the x values?

(thank god I won't need to give any exams)
 
  • #5
On the other hand, since [itex]q^{1/n}\rightarrow 1[/itex]:
[itex] ln q^{1/n}= \frac{1}{n}lnq \rightarrow 0[/itex]
[itex] q^{1/n}=e^{ln(q^{1/n})}\rightarrow e^{0}=1[/itex]

and since:
[itex] \sum_{n=0}^{∞} x^{n}= \frac{1}{1-x}, \quad |x|<1[/itex]
couldn't I say that the series converges for |x|<1? Is there such a theorem in calculus?
 
  • #6
##\frac{q^\frac{1}{n+1}}{q^\frac{1}{n}}=q^{-\frac{1}{n(n+1)}}##. Thus by the ratio test we get ##|\frac{ q^{\frac{1}{n+1}} x^{n+1}}{ q^{\frac{1}{n}} x^{n}}|=|q^{-\frac{1}{n(n+1)}}x|\rightarrow |x|## as ##n\rightarrow \infty##, so the series converges when ##|x|<1##.
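
A quick numerical sanity check (just a sketch in Python, not a proof; the sample value of q, the x values, and the cutoffs N are all arbitrary choices):

[code]
# Partial sums of sum_{n>=1} q^(1/n) x^n for a sample q.
# The sum starts at n = 1, since the n = 0 term q^(1/0) is undefined.

def partial_sum(q, x, N):
    return sum(q ** (1.0 / n) * x ** n for n in range(1, N + 1))

q = 5.0                     # arbitrary sample with 0 < q < infinity
for x in (0.5, 0.9, 1.1):   # two values inside |x| < 1, one outside
    print(x, [partial_sum(q, x, N) for N in (10, 100, 1000)])

# The partial sums stabilize for x = 0.5 and x = 0.9,
# and blow up for x = 1.1, consistent with convergence iff |x| < 1.
[/code]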
 
  • #7
So, for some unknown reason, we got the same result...?
 
  • #8
What you did doesn't really constitute a proof. You basically just assumed that ##\sum x^n=\frac{1}{1-x}##, which you can't really assume, because you don't yet know for which values of x the series converges.
 
  • #9
It's a geometric series... In fact, for |x|≥1 the series diverges; for it to converge you just need |x|<1...
On the other hand, suppose that I have:
[itex] \sum c_{n} a_{n} [/itex]
with [itex] c_{n}= q^{1/n}[/itex]
and [itex]a_{n}= x^{n}[/itex]
If [itex]c_{n}\rightarrow L < ∞[/itex], can I say that the series converges whenever [itex]\sum a_{n}[/itex] converges? That's why I asked whether there is a theorem about it.
 
Last edited:
  • #10
There isn't a theorem as such; the idea you're using simply falls out of the ratio test. If the sequence ##c_n## converges to a finite nonzero limit ##L## as ##n\rightarrow \infty##, then ##|\frac{c_{n+1}}{c_{n}}|\rightarrow 1##. Hence if we let ##a_n=x^n##, the ratio of consecutive terms satisfies ##| \frac{c_{n+1}}{c_n} ||\frac{x^{n+1}}{x^n}|=| \frac{c_{n+1}}{c_n} ||x| \rightarrow |x|##. Therefore the ratio test says the series indeed converges when ##|x|<1##, provided we know that ##c_n## converges to a nonzero limit.
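
A numerical sketch of that limit (Python; q and x are arbitrary sample values):

[code]
# The ratio of consecutive terms, |q^(1/(n+1)) x^(n+1)| / |q^(1/n) x^n|,
# equals q^(-1/(n(n+1))) |x| and tends to |x| as n grows.
q, x = 5.0, 0.7   # arbitrary sample values
for n in (1, 10, 100, 1000):
    ratio = abs(q ** (1 / (n + 1)) * x ** (n + 1)) / abs(q ** (1 / n) * x ** n)
    print(n, ratio)   # approaches |x| = 0.7
[/code]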
 
Last edited:
  • #11
So let's make things more interesting...
With Mathematica one can also verify that the convergence condition is |x|<1. But I also came across a problem...
As we know, the ratio test gives no verdict when the limit equals 1... in our case, at x=1...
So in that case we need to see what happens with:
[itex] \sum_{n} x^{n}= \sum_{n} (1)^{n} = 1+1+... = \sum_{n} 1=-\frac{1}{2}[/itex]
(the proof can be obtained by working with the residues of the zeta and gamma functions)
What happens in that case, then? Is Mathematica missing that one extra x?
 
  • #12
ChrisVer said:
So let's make things more interesting...
With Mathematica one can also verify that the convergence condition is |x|<1. But I also came across a problem...
As we know, the ratio test gives no verdict when the limit equals 1... in our case, at x=1...
So in that case we need to see what happens with:
[itex] \sum_{n} x^{n}= \sum_{n} (1)^{n} = 1+1+... = \sum_{n} 1=-\frac{1}{2}[/itex]
(the proof can be obtained by working with the residues of the zeta and gamma functions)
What happens in that case, then? Is Mathematica missing that one extra x?

I hope you are joking. Of course ##\sum 1 \neq -1/2##. Such "proofs" typically involve formal manipulation of equations/identities outside their range of validity. Coming up with such "proofs" can be fun, but nobody really takes them seriously.
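
To see the difference concretely (a sketch using mpmath; the cutoffs are arbitrary):

[code]
# The analytically continued zeta function gives finite values at s = 0 and
# s = -1, while the literal partial sums of the corresponding series diverge.
from mpmath import zeta

print(zeta(0))    # -0.5
print(zeta(-1))   # -0.0833333... = -1/12

# The N-th partial sum of 1 + 1 + 1 + ... is just N, which grows without bound:
for N in (10, 1000, 100000):
    print(N, sum(1 for _ in range(N)))
[/code]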
 
  • #13
What do you mean by that?
They even have physical applications (for example, the zeta function "proofs" can be found as an exercise in Zwiebach's introductory string theory course, since they appear in the Hamiltonian; that's where I had to prove it).
 
  • #14
ChrisVer said:
What do you mean by that?
They even have physical applications (for example, the zeta function "proofs" can be found as an exercise in Zwiebach's introductory string theory course, since they appear in the Hamiltonian; that's where I had to prove it).

I mean that the result is provably false, so any "proof" must be invalid. I am fully aware that by pushing the boundaries, especially in Physics, we can get things that look wrong---but might not be wrong (especially if they lie beyond the borders of experimental verification). For example, heat can flow from negative-temperature to positive-temperature regions; and other such anomalies are found in various studies and models. However, the result you cite is not of this type---it is just plain wrong. There IS a difference.
 
  • #15
How can a mathematically proven result be wrong?
Substituting [itex]t \rightarrow nt[/itex] in the Gamma integral:
[itex] Γ(s)= \int_{0}^{∞} dt\, t^{s-1} e^{-t} = \int_{0}^{∞} d(nt)\, (nt)^{s-1} e^{-nt}= n^{s}\int_{0}^{∞} dt\, t^{s-1} e^{-nt}[/itex]
Then the zeta function is defined as:
[itex]ζ(s)= \sum_{n=1}^{∞} \frac{1}{n^{s}}[/itex]

So
[itex]Γ(s)ζ(s)= \sum_{n=1}^{∞} \frac{1}{n^{s}}\, n^{s}\int_{0}^{∞} dt\, t^{s-1} e^{-nt}= \int_{0}^{∞} dt\, t^{s-1} \sum_{n=1}^{∞}e^{-nt}[/itex]

But we can see the geometric series now:
[itex] \sum_{n=1}^{∞}e^{-nt}= \frac{e^{-t}}{1-e^{-t}}= \frac{1}{e^{t}-1}[/itex]

Thus:
[itex]Γ(s)ζ(s)= \int_{0}^{∞} dt \frac{t^{s-1} }{e^{t}-1} [/itex]

Also, expanding the denominator's exponential to Taylor series:
[itex]\frac{1}{e^{t}-1}= \frac{1}{1+t+\frac{t^{2}}{2}+\frac{t^{3}}{3!}+O(t^{4})-1}=\frac{1}{t+\frac{t^{2}}{2}+\frac{t^{3}}{6}+O(t^{4})}=\frac{1}{t} \frac{1}{1+\frac{t}{2}+\frac{t^{2}}{6}+O(t^{3})}[/itex]
Then the 2nd denominator, can be expanded as:
[itex] \frac{1}{1+x}= 1-x+x^{2}-...[/itex] for [itex]x= \frac{t}{2}+\frac{t^{2}}{6} [/itex]

The result is:
[itex]\frac{1}{e^{t}-1}= \frac{1}{t}- \frac{1}{2}+\frac{t}{12}+O(t^{2})[/itex]

So
[itex]Γ(s)ζ(s)= \int_{0}^{1} dt \frac{t^{s-1} }{e^{t}-1}+ \int_{1}^{∞} dt \frac{t^{s-1} }{e^{t}-1}[/itex]

Only the first integral can diverge, from the behavior at t=0. To handle it we write:
[itex]\int_{0}^{1} dt \frac{t^{s-1} }{e^{t}-1}=\int_{0}^{1} t^{s-1} \left(\frac{1}{e^{t}-1}-\frac{1}{t}+ \frac{1}{2}-\frac{t}{12}\right)dt+ \int_{0}^{1} t^{s-1} \left(\frac{1}{t}- \frac{1}{2}+\frac{t}{12}\right)dt[/itex]
The first integrand now vanishes like [itex]t^{s+1}[/itex] near t=0, and the second integral can be evaluated term by term:
[itex]\int_{0}^{1} dt \frac{t^{s-1} }{e^{t}-1}=\int_{0}^{1} t^{s-1} \left(\frac{1}{e^{t}-1}-\frac{1}{t}+ \frac{1}{2}-\frac{t}{12}\right)dt + \frac{1}{s-1}- \frac{1}{2s}+\frac{1}{12(s+1)} [/itex]

And by that:
[itex]Γ(s)ζ(s)= \int_{0}^{1} t^{s-1} \left(\frac{1}{e^{t}-1}-\frac{1}{t}+ \frac{1}{2}-\frac{t}{12}\right)dt + \frac{1}{s-1}- \frac{1}{2s}+\frac{1}{12(s+1)} + \int_{1}^{∞} dt \frac{t^{s-1} }{e^{t}-1}[/itex]

The second integral converges for every value of s. The parenthesis in the first integral is of order [itex]t^{2}[/itex], so that integral behaves like
[itex] \int_{0}^{1} t^{s+1} dt [/itex], which converges for [itex]Re(s)>-2[/itex].

Using the fact that the gamma function has a simple pole at [itex]s=-n[/itex] for each integer [itex]n \geq 0[/itex] with [itex]Res_{s=-n}Γ=\frac{(-1)^{n}}{n!}[/itex] (so the residue is 1 at [itex]s=0[/itex] and -1 at [itex]s=-1[/itex]), together with the fact that the expression above has simple poles at s=0 and s=-1, we get:
[itex] Res_{s=-n}[Γ(s)ζ(s)]= Res_{s=-n}[Γ(s)]\, ζ(-n), \quad n=0,1[/itex]

By that we can get that:
[itex]ζ(0)=-\frac{1}{2}= \sum_{n=1}^{∞} 1[/itex]
as well as:
[itex]ζ(-1)= - \frac{1}{12}= \sum_{n=1}^{∞}n[/itex]
while its only singularity is at s=+1.

One could object that the expansions are invalid, but that's not the case, since they contribute nothing at the singular points I make use of; they just sit inside a convergent integral...
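
For what it's worth, the integral representation itself can be checked numerically where everything converges, and the residue bookkeeping at s=0 can be probed as well (a sketch with mpmath; the test points s = 2 and eps are arbitrary choices):

[code]
# Check Gamma(s) zeta(s) = integral_0^inf t^(s-1) / (e^t - 1) dt at s = 2,
# then probe the claim zeta(0) = -1/2 via the simple pole of Gamma at s = 0.
from mpmath import mp, gamma, zeta, quad, exp, inf

mp.dps = 30
s = 2
lhs = gamma(s) * zeta(s)   # = pi^2 / 6 for s = 2
rhs = quad(lambda t: t ** (s - 1) / (exp(t) - 1), [0, inf])
print(lhs, rhs)            # both ~ 1.644934...

# Near s = 0 we have s * Gamma(s) -> 1, so s * Gamma(s) * zeta(s) -> zeta(0):
eps = mp.mpf('1e-10')
print(eps * gamma(eps) * zeta(eps))   # ~ -0.5
[/code]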
 
Last edited:
  • #16
ChrisVer said:
How can a mathematically proven result be wrong?
[... derivation quoted in full from post #15 ...]


Several problems:
(1) Use of function definitions outside their range of validity. The definition
[tex] \Gamma(s) = \int_0^{\infty} t^{s-1} e^{-t} \, dt [/tex]
applies when ##\text{Re}(s) > 0##; for other values of ##s##, ##\Gamma(s)## is defined by analytic continuation---typically by
[tex] \Gamma(s) = \frac{\Gamma(s+n)}{s(s+1) \cdots (s+n-1)}[/tex]
(2) Swapping the order of integration and infinite summation without checking if it is valid; we know from examples that it cannot always be done, and sometimes will lead to incorrect results.
(3) Integrating a series term-by-term, when (at least some of) the terms are not integrable---that is, have divergent integrals.
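
As a quick illustration of point (1), the continuation formula can be checked against a library value at a point where the integral definition fails (a sketch; the test values s = -0.5 and n = 2 are arbitrary):

[code]
# Check Gamma(s) = Gamma(s+n) / (s (s+1) ... (s+n-1)) at a point with
# Re(s) < 0, where the integral definition of Gamma no longer converges.
from mpmath import gamma

s, n = -0.5, 2        # arbitrary test point with Re(s) < 0
denom = 1
for k in range(n):
    denom *= s + k    # builds s (s+1) ... (s+n-1)
print(gamma(s), gamma(s + n) / denom)   # both ~ -3.5449077018...
[/code]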
 
  • #17
1. For Re(s)>0 the integral is itself convergent and the gamma function shows no poles. However, extending it to complex s by analytic continuation is not wrong.
2. Why wouldn't it be valid? In fact the exponential (which carries the only n dependence) behaves nicely under summation (geometric series).
3. I don't think I integrated a series term by term... I just stated that it converges...
 
  • #18
Some other problems:

ChrisVer said:
[itex]ζ(s)= \sum_{n=1}^{∞} \frac{1}{n^{s}}[/itex]

This is not the definition of the zeta function. It only coincides with the zeta function for ##\mathrm{Re}(s)>1##. For other ##s## (such as ##s=0## or ##s=-1##) we must use another definition!

ChrisVer said:
By that we can get that:
[itex]ζ(0)=-\frac{1}{2}= \sum_{n=1}^{∞} 1[/itex]
as well as:
[itex]ζ(-1)= - \frac{1}{12}= \sum_{n=1}^{∞}n[/itex]
while its only singularity is at s=+1.

This is wrong. It is true that ##\zeta(0) = -1/2## and ##\zeta(-1) = -1/12##. But

[tex]\zeta(-1) = \sum_{n=1}^{+\infty} n~\text{and}~\zeta(0) = \sum_{n=1}^{+\infty} 1[/tex]

are false.

If you ever see the above equalities in the mathematical literature, it is because they use a very different notion of convergence of series, and they usually mention which notion they're using. Your OP didn't mention such a notion, so the usual definition applies, and under the usual definition the above equalities are false.
 
  • #19
ChrisVer said:
1. For Re(s)>0 the integral is itself convergent and the gamma function shows no poles. However, extending it to complex s by analytic continuation is not wrong.


There is no problem with analytic continuation of the Gamma function. But if you do that, then the Gamma function will not equal the integral anymore. The equality only holds where the integral converges.

2. Why wouldn't it be valid? In fact the exponential (which carries the only n dependence) behaves nicely under summation (geometric series).

We're not saying it is necessarily invalid. But you need to check it. There are many counterexamples where swapping integration and limits is invalid, so this might be such a case. You need to show rigorously that the two sides are equal.
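
For this particular integrand, the truncated swap can at least be watched converging numerically (a sketch; s = 2 and the cutoffs N are arbitrary choices, and this is evidence, not a justification):

[code]
# Integrate t^(s-1) times the TRUNCATED sum of e^(-n t) and watch the result
# approach Gamma(s) zeta(s) as the cutoff N grows. For positive terms like
# these, the monotone convergence theorem is what actually licenses the swap.
from mpmath import mp, gamma, zeta, quad, exp, inf, fsum

mp.dps = 20
s = 2
target = gamma(s) * zeta(s)
for N in (1, 5, 20):
    val = quad(lambda t: t ** (s - 1) * fsum(exp(-n * t) for n in range(1, N + 1)),
               [0, inf])
    print(N, val, target)
[/code]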
 
  • #20
Sorry for being a bit stubborn, but I am really confused...
E.g. how could someone ask us to prove something that is wrong? (see attachment)
 

Attachments

  • LatEx.jpg
  • #21
Also, you can check where the zeta function is defined in the book...

Oh yes, the source is Zwiebach's book, A First Course in String Theory, which I mentioned before.
 

Attachments

  • LatEx.jpg
  • #22
ChrisVer said:
Sorry for being a bit stubborn, but I am really confused...
E.g. how could someone ask us to prove something that is wrong? (see attachment)

Nothing in the attachment you linked is wrong. But they never say that

[tex]\zeta(s) = \sum \frac{1}{n^s}[/tex]

for all ##s##.

In fact, they state that if you define the zeta function as I did above, then the expansion

[tex]\zeta(s)\Gamma(s) = ...[/tex]

is only valid for ##\mathrm{Re}(s)>1##; they state this very carefully. Then they notice that the RHS actually converges for ##\mathrm{Re}(s)>-2##, and so they define ##\zeta(s)\Gamma(s)## as the RHS. So the zeta function is then not defined as the series

[tex]\zeta(s) = \sum \frac{1}{n^s}[/tex]

anymore, at least not for ##\mathrm{Re}(s)\leq 1##.
 
  • #23
However, in the second attachment they use the result to assign the sum over all positive integers the value of the zeta function at s=-1...
 
  • #24
ChrisVer said:
Also, you can check where the zeta function is defined in the book...

Oh yes, the source is Zwiebach's book, A First Course in String Theory, which I mentioned before.

The book says specifically that the definition

[tex]\zeta(s) = \sum\frac{1}{n^s}[/tex]

only holds for ##\mathrm{Re}(s)>1##. Then they extend ##\zeta## to all complex numbers except ##s=1##. But extending ##\zeta## that way does not mean the above equality holds for the extended zeta function as well; it remains valid only for ##\mathrm{Re}(s)>1##. If you want it to hold for all complex ##s##, then you need to define the sum that way. This is called zeta regularization: there you redefine what a series is. It is not the usual definition, and you need to state that you are using this new definition, since it will not agree with the usual one!

Also note how they call ##-1/12## an "interpretation" of the series. They know very well that it's not the usual definition of the sum of a series, but merely another definition.
 
  • #25
ChrisVer said:
Also, you can check where the zeta function is defined in the book...

Oh yes, the source is Zwiebach's book, A First Course in String Theory, which I mentioned before.

Also, note how they use a question mark above the equality in

[tex]-1/12 = 1+2+3+4+5+6+...[/tex]

This means that the author knows that something fishy is going on here.
 

FAQ: Convergence of q-series for rational x values

What is convergence of a series?

Convergence of a series is a concept in mathematics, specifically in calculus and real analysis, that refers to the behavior of the partial sums of a series. It determines whether the partial sums approach a finite limit as the number of terms grows to infinity.

How is convergence of a series determined?

The convergence of a series is determined by evaluating the limit of its sequence of partial sums as the number of terms approaches infinity. If the limit exists and is a finite number, then the series is said to converge. If the limit does not exist or is infinite, then the series is said to diverge.

What is the difference between absolute and conditional convergence?

A series converges absolutely if the series formed from the absolute values of its terms converges. A series converges conditionally if it converges but does not converge absolutely; in that case the value of the sum can change if the terms are rearranged.

What are some common tests for convergence of a series?

Some common tests for convergence of a series include the ratio test, the root test, the comparison test, and the integral test. These tests determine the convergence or divergence of a series, often by comparing it to a known convergent or divergent series or integral.

Why is understanding convergence important in mathematics?

Understanding convergence is important in mathematics because it allows us to determine whether a series will have a finite limit or not, which is crucial in many applications. It also helps us to better understand the behavior of functions and their approximations, as well as to prove theorems and solve problems in calculus and real analysis.
