What is convergence and 1+2+3+4+... = -1/12

  • Thread starter: bhobba
  • Tags: Convergence
In summary: If we choose$$A_j(\lambda) = j^{\lambda}$$then ##\lim_{\lambda \rightarrow 1} A_j(\lambda) = j##, and the expressions ##\sum_j A_j(\lambda)## diverge at ##\lambda = 1##. But if we look at the function$$F(\lambda) = \sum_{j=1}^{\infty} j^{\lambda} = \zeta(-\lambda)$$where ##\zeta## is the Riemann zeta function, it converges for ##\lambda < -1## and has a unique analytic continuation, and evaluating that continuation gives$$\lim_{\lambda \rightarrow 1} F(\lambda) = \zeta(-1) = -\frac{1}{12}$$
  • #1
Ok - anyone who has done basic analysis knows the definition of convergence. The series 1-2+3-4+5-..., for example, is obviously divergent (its terms do not tend to zero). But wait a minute - let's try something tricky and perform a transform on it (it's Borel summation, but that is not really relevant yet).

##\sum a_n = \sum (a_n/n!)\,n! = \sum (a_n/n!) \int_0^\infty t^n e^{-t}\,dt##. Now suppose ##\sum \int_0^\infty |(a_n (xt)^n/n!)\,e^{-t}|\,dt < \infty##; then dominated convergence applies and the integral and sum can be interchanged, provided the integral exists. It usually does for some range of ##x## (in the example ##|x| < 1##), while the resulting integral exists for at least ##x = 1##, which means it can be considered an analytic continuation to ##x = 1##, so ##\int_0^\infty \sum (a_n t^n/n!)\,e^{-t}\,dt## can be taken as ##\sum a_n##. Alternatively you can consider the limit ##x \rightarrow 1^-## if it exists for ##|x| < 1## because, again by dominated convergence, it is continuous in ##x##.

Now let's apply this to ##1-2+3-4+\dots##, where the sum runs from ##n = 1## to ##\infty## with ##a_n = (-1)^{n+1} n##. We have $$\int_0^\infty \sum_{n=1}^\infty \frac{(-1)^{n+1} n\, t^n}{n!}\, e^{-t}\,dt = \int_0^\infty \sum_{n=1}^\infty \frac{(-1)^{n+1} t^n}{(n-1)!}\, e^{-t}\,dt = \int_0^\infty t\,e^{-t}\,e^{-t}\,dt = \int_0^\infty t\,e^{-2t}\,dt = \frac{1}{4}.$$
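As a numeric sanity check of this computation (the helper name `B_series` and the quadrature step are my own, not part of the argument):

```python
import math

# Borel transform of a_n = (-1)^(n+1) * n, i.e. the series 1 - 2 + 3 - 4 + ...
# The series collapses to B(t) = t * e^{-t}, so the Borel sum is the
# integral of B(t) * e^{-t} = t * e^{-2t} over [0, infinity), which is 1/4.
def B_series(t, terms=60):
    total, p = 0.0, t            # p = t^n / (n-1)!, starting at n = 1
    for n in range(1, terms):
        total += (-1) ** (n + 1) * p
        p *= t / n               # advance to t^(n+1) / n!
    return total

# check the closed form at a modest t (large t suffers float cancellation)
assert abs(B_series(2.0) - 2.0 * math.exp(-2.0)) < 1e-12

# trapezoidal quadrature of t * e^{-2t}; the tail beyond t = 30 is negligible
h, upper = 1e-4, 30.0
vals = [i * h * math.exp(-2 * i * h) for i in range(int(upper / h) + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(round(integral, 6))  # ~0.25
```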

Are divergent series sometimes really convergent but simply written in the wrong form?

Added Later:
Made a goof, but fixed it. Thought I could get away without using analytic continuation - just the limit - but I was wrong. Certainly in the example you can use just the limit - but not more generally.

Just as an aside from Wikipedia:
Borel, then an unknown young man, discovered that his summation method gave the 'right' answer for many classical divergent series. He decided to make a pilgrimage to Stockholm to see Mittag-Leffler, who was the recognized lord of complex analysis. Mittag-Leffler listened politely to what Borel had to say and then, placing his hand upon the complete works by Weierstrass, his teacher, he said in Latin, 'The Master forbids it'.
Mark Kac, quoted by Reed & Simon (1978, p. 38)

Looks like Mittag-Leffler had trouble with it, but interestingly Borel later wrote a book on divergent series - or maybe it should be called superficially divergent series. :-p

Thanks
Bill
 
Last edited:
  • Like
Likes jedishrfu
  • #3
Thanks jedishrfu. I didn't know that reference. I will get onto how you use this to do that strange sum. Again, it's writing it in a different form that has a better range of convergence. I don't agree 100% with it - I think it really is -1/12 - it's just that in the form it's written it has unnecessary restrictions on the real value, which you see when it's in the correct form. But that is something that can be discussed once we sort out my first post. I made a bit of a goof first up and thought you didn't need analytic continuation. In the case I gave you just need to take a limit - but in general you have to use the full power of analytic continuation.

Thanks
Bill
 
Last edited:
  • #4
It’s okay - I fixed your title too, as you dropped a one in the denominator.
 
  • Like
Likes bhobba
  • #5
I’m afraid my math is a bit superficial too so I won’t be able to contribute much in this thread. I liked the Borel story and the master forbids it. There are some similar ones I’ve heard with Pauli too.
 
  • #6
jedishrfu said:
I’m afraid my math is a bit superficial too so I won’t be able to contribute much in this thread. I liked the Borel story and the master forbids it. There are some similar ones I’ve heard with Pauli too.

That's fine. I suspect I will not get much interest anyway - it's a bit esoteric. What I noticed is that there are a number of threads here about it, but all were shut down - they got out of hand. I just wanted to have a permanent record that, as a mentor, I will ensure remains 'calm'.

Even though it's esoteric, I think understanding exactly what's going on with such things is important - you see claims all over the place that you can't do it. However, good write-ups on analytic continuation, IMHO, tell a more subtle, but nonetheless different, story, e.g.:
https://www.colorado.edu/amath/sites/default/files/attached-files/pade_analytic_continuation.pdf
'The key point in this example is that all the three functions are identically the same. The boundaries for f1(z) and f2(z) are not natural ones, but just artifacts of the particular way we used for representing the function f3(z)'

Divergences often are the same - a result of not writing it in a more natural way.

Thanks
Bill
 
  • #7
Hopefully adding something: I think the whole notion of "the sum of all integers equals -1/12" is bad notation. What's actually meant, is that "if you rewrite the sum of all integers in terms of a complex function and perform an analytic continuation on it, then the value of it equals -1/12". And because the analytic continuation is unique, people sometimes forgive the lack of this subtlety.
 
  • Like
Likes bhobba and jedishrfu
  • #8
haushofer said:
Hopefully adding something: I think the whole notion of "the sum of all integers equals -1/12" is bad notation. What's actually meant, is that "if you rewrite the sum of all integers in terms of a complex function and perform an analytic continuation on it, then the value of it equals -1/12". And because the analytic continuation is unique, people sometimes forgive the lack of this subtlety.

It seems to me that the various summation techniques amount to this:
  1. You have a divergent sum: ##\sum_j a_j##.
  2. You come up with a sequence of expressions ##A_j(\lambda)## such that for each ##j##, ##\lim_{\lambda \rightarrow 1} A_j(\lambda) = a_j##.
  3. Then you come up with an analytic function ##F(\lambda)## such that for some values of ##\lambda##, ##F(\lambda) = \sum_j A_j(\lambda)##
  4. Then you declare that ##\sum_j a_j = F(1)##
If the original sum were convergent, I'm sure that this technique would give a unique answer for the sum. But if the sum is divergent, then it seems to me that there could be multiple functions ##F(\lambda)## that would work.

Here's an example of multiple ##F##, but it actually leads to the same answer.

The modified sum: ##1 - 2 + 3 - 4 + ...##

We can write that as ##\lim_{x \rightarrow 1} 1 - 2x + 3x^2 - 4x^3 ...##

That converges to the function ##F(x) = \frac{1}{(1+x)^2}##. We can immediately evaluate it to get: ##\lim_{x \rightarrow 1} \frac{1}{(1+x)^2} = 1/4##

An alternative summation is:

##1-2+3-4+... = \lim_{x \rightarrow 1} 1^{x} - 2^{x} + 3^{x} - 4^{x} + ...##

The function ##F'(x) = \sum_j j^{x} (-1)^{j+1}## is related to the zeta function:

##F'(x) = \zeta(-x) (1-2^{1+x})##

This function is very different from ##F(x) = \frac{1}{(1+x)^2}##, but they have the same value at ##x=1##, namely 1/4.

Off the top of my head, I can't think of an example where you get two different answers, but I also don't see any reason for it to always give the same answer, no matter which ##F## you choose
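As a concrete check on this observation, a third method - the Euler transform for alternating series, which is my addition and is not used elsewhere in the thread - also lands on the same values; `euler_sum` is an illustrative name. A minimal sketch:

```python
def euler_sum(a, depth=20):
    """Euler-transform the alternating series sum((-1)^n * a[n]).

    Uses the identity sum (-1)^n a_n = sum_k (-1)^k (Delta^k a)_0 / 2^(k+1),
    where Delta is the forward difference a_{n+1} - a_n.
    """
    row = list(a)
    total = 0.0
    for k in range(depth):
        total += (-1) ** k * row[0] / 2 ** (k + 1)
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

# 1 - 1 + 1 - 1 + ...  ->  1/2
print(euler_sum([1] * 30))            # 0.5
# 1 - 2 + 3 - 4 + ...  ->  1/4
print(euler_sum(list(range(1, 31))))  # 0.25
```

For these two series the higher differences vanish, so the transform terminates after two terms and agrees with both of the ##F##'s above.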
 
  • Like
Likes haushofer and bhobba
  • #9
I will give the relation of this to that controversial 1+2+3+4... = -1/12 soon, but would first like, so people understand it better, to flesh out Borel summation a bit more.

To recap Borel summation: ##\sum a_n = \sum (a_n/n!)\,n!##, and ##n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t}\,dt##. In general you can't interchange the sum and integral, but under some conditions you can, so we will formally interchange them and see what happens: ##\sum a_n = \int_0^\infty \sum (a_n/n!)\,t^n e^{-t}\,dt##. This is called the Borel sum, and we will look at when it is the same as ##\sum a_n##.

If ##\sum a_n## is absolutely convergent then, by Tonelli's theorem, since all terms are positive the integral and sum can be interchanged: ##\sum |a_n| = \sum \int_0^\infty (|a_n|/n!)\,t^n e^{-t}\,dt = \int_0^\infty \sum (|a_n|/n!)\,t^n e^{-t}\,dt##. Then by Fubini's theorem the sum and integral can be reversed, and the Borel sum is the same as the normal sum. Consider the series ##S = 1 + x + x^2 + x^3 + \dots## It is absolutely convergent to ##1/(1-x)## for ##|x| < 1##. The Borel sum is ##S = \int_0^\infty \sum (x^n/n!)\,t^n e^{-t}\,dt = \int_0^\infty e^{t(x-1)}\,dt = 1/(1-x) < \infty## for all ##x < 1## - not only for ##|x| < 1##. In other words, when ##S## is convergent in the usual sense it equals its Borel sum, but while the series is only valid for ##|x| < 1##, the Borel sum is the same - and true for more values of ##x##. Borel summation has extended the values of ##x## for which you get a sensible answer - in fact exactly the same answer. This is the characteristic of analytic continuation: a function that agrees with another on a smaller region is exactly the same function on the larger region. Borel summation has extended the region where the series has a finite sum. Normal summation introduces unnecessary restrictions that Borel summation removes - at least in part. This of course also works for ##1 + 2x + 3x^2 + 4x^3 + \dots##, and it is left as an exercise to show its Borel and normal sums are the same, i.e. ##1/(1-x)^2##.
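The extension beyond ##|x| < 1## can be checked numerically; this is a sketch of my own, with `borel_geometric` an illustrative name and the quadrature parameters arbitrary:

```python
import math

# Borel sum of the geometric series sum_{n>=0} x^n: the Borel transform is
# B(t) = sum (x t)^n / n! = e^{xt}, so the Borel sum is
# integral_0^inf e^{xt} e^{-t} dt = 1/(1-x), finite for every x < 1,
# not just in the ordinary region of convergence |x| < 1.
def borel_geometric(x, h=1e-4, upper=50.0):
    n = int(upper / h)
    vals = [math.exp((x - 1) * i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(borel_geometric(0.5))   # ~2.0, the ordinary sum 1/(1 - 0.5)
print(borel_geometric(-3.0))  # ~0.25 = 1/(1 - (-3)), outside |x| < 1
```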

There is also another way of looking at this, by introducing what's called the Borel exponential sum. Personally I don't use it much, but sometimes it's of some use. Let ##S_n = a_0 + a_1 + a_2 + \dots + a_n##, so ##\lim_{n \rightarrow \infty} S_n = S##, and let ##S(t) = \sum S_n t^n/n!##. The exponential sum is defined as ##\lim_{t \rightarrow \infty} e^{-t} S(t)##. It's not hard to see that if ##\sum a_n## converges normally to ##S##, then its exponential sum is also ##S##: for each fixed ##n##, ##\lim_{t \rightarrow \infty} e^{-t}\,t^n/n! = 0## (apply L'Hopital ##n## times to ##t^n/e^t##). So let ##K## be large, so that ##S_K, S_{K+1}, S_{K+2}, \dots## are for all practical purposes equal to ##S##, with the approximation getting better as ##K## gets larger. The finitely many terms with ##n < K## contribute nothing in the limit, leaving ##\lim_{t \rightarrow \infty} e^{-t} \sum_{n \geq K} S\,t^n/n! = \lim_{t \rightarrow \infty} e^{-t}\,S\,e^{t} = S##. Thus if ##\sum a_n## converges in the usual sense to ##S##, then ##S## is also the exponential sum.
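As an illustration of the exponential sum on a convergent series (my example, assuming ##a_n = 1/2^n## so that ##S = 2## and ##S_n = 2 - 2^{-n}##; `exp_sum_at` is an illustrative name):

```python
import math

# Borel's exponential sum: with partial sums S_n, form S(t) = sum S_n t^n/n!
# and take lim_{t -> inf} e^{-t} S(t).  For a_n = 1/2^n the partial sums are
# S_n = 2 - 2^{-n}, and analytically e^{-t} S(t) = 2 - e^{-t/2} -> 2 = S.
def exp_sum_at(t, terms=400):
    total, p = 0.0, 1.0          # p = t^n / n!
    for n in range(terms):
        total += (2.0 - 0.5 ** n) * p
        p *= t / (n + 1)
    return math.exp(-t) * total

for t in (1.0, 10.0, 50.0):
    print(t, exp_sum_at(t))      # approaches 2 as t grows
```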

Now we will show something interesting - if the exponential sum S exists then the Borel Sum exists and is also S. However the reverse is not true.

Let ##B(t) = \sum a_n t^n/n! = a_0 + (S_1-S_0)\,t + (S_2-S_1)\,t^2/2! + (S_3-S_2)\,t^3/3! + \dots## Hence ##B'(t) = S'(t) - S(t)##.
##S - a_0 = \left[e^{-t} S(t)\right]_0^\infty = \int_0^\infty \frac{d}{dt}\left[e^{-t} S(t)\right] dt = \int_0^\infty e^{-t}\,(S'(t) - S(t))\,dt = \int_0^\infty B'(t)\,e^{-t}\,dt = \left[e^{-t} B(t)\right]_0^\infty + \int_0^\infty e^{-t} B(t)\,dt = -a_0 + \int_0^\infty e^{-t} B(t)\,dt##. On cancelling ##a_0## we end up with what was claimed: ##S = \int_0^\infty e^{-t} B(t)\,dt##, which is the Borel sum. We have also shown that if ##\sum a_n## normally converges to ##S## then the Borel sum is also ##S##. You can also see this intuitively, using an argument like the one for the exponential sum: if ##K## is very large, all the ##a_n## beyond ##K## are for all practical purposes zero, with the approximation getting better as ##K## gets larger. But then, since ##\int_0^\infty e^{-t}\,t^n/n!\,dt = 1##, the Borel sum becomes the sum up to ##K##, which is essentially ##S##.

What this says is that the Borel sum is exactly the same for normally convergent sums, but if a sum is not normally convergent it can still give an answer. Not only that: if ##\sum a_n x^n## has any non-zero radius of convergence, the Borel sum is exactly the same as the normal sum inside the radius of convergence, and is an analytic continuation for all ##x## where it exists. The view of the paper I linked to on analytic continuation is that it simply removes an unnatural restriction in the way the sum is written. So one way of viewing Borel summation is that it removes an unnatural restriction in the way a series is written, so it can be expressed in a more natural form.

Anyway I have another example of this in deriving 1 +2 +3 +4 ... = -1/12. But will wait for comments now.

Thanks
Bill
 
Last edited:
  • #10
I have never understood this equivalence. As stevendaryl said:

Off the top of my head, I can't think of an example where you get two different answers, but I also don't see any reason for it to always give the same answer, no matter which ##F## you choose

Reference https://www.physicsforums.com/threads/what-is-convergence-and-1-2-3-4-1-12.959557/#post-6085280

I mean, isn't it possible to find another function that, when s=0 gives "the same" as this sum and when you analytically continue it, you get something different than -1/12?

Thanks
 
  • #11
bhobba said:
I will give the relation of this to that controversial 1+2+3+4... = -1/12 soon, but would like, so people understand it better to flest out Borel summation a bit more.

Following your definitions (to the extent that I understand them), I don't get a finite answer.

You start with ##A = \sum_n n##.
You get a related series: ##B(t) = \sum_n n t^n/n!##
Then you can write: ##A = \int_0^\infty B(t) e^{-t} dt = \int_0^\infty \sum_n n t^n/n! e^{-t} dt = \sum_n n/n! \int_0^\infty t^n e^{-t} dt = \sum_n n## (since the inner integral gives ##n!##)

But ##B(t) = t e^t##, which gives ##\infty## when you integrate ##A = \int_0^\infty B(t) e^{-t} dt##
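A quick numeric confirmation of this divergence (the helper name `B` and the truncation limit are mine):

```python
import math

# For a_n = n the Borel transform is B(t) = sum_{n>=1} n t^n / n! = t e^t,
# so the Borel integrand B(t) e^{-t} is just t, and the integral
# integral_0^T t dt = T^2 / 2 grows without bound, as stated.
def B(t, terms=200):
    total, p = 0.0, t            # p = t^n / (n-1)!, n starting at 1
    for n in range(1, terms):
        total += p
        p *= t / n
    return total

for t in (1.0, 5.0, 10.0):
    # confirm B(t) matches t e^t, so B(t) e^{-t} = t
    assert abs(B(t) * math.exp(-t) - t) < 1e-9 * max(t, 1.0)
print("Borel integrand for 1+2+3+... is exactly t, so the integral diverges")
```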
 
  • #12
No one answered my question, but I will state it in a more formal way. I have seen different derivations of the "-1/12" result, and all of them look like this:

There is, at least, a function F(n,s) such that:
1) F(n,-1) = 1+2+3+4+...+n (for finite "n"). I'm using s=-1 here as the place where F equals 1+2+3+...+n, in order to mirror the derivation that uses the zeta function. Clearly it does not matter which value of the s variable we use for this equality to happen.
2) The limit of that function as "n" tends to infinity is well defined in some region of the "s" variable, but not at the point s=-1.
3) Nevertheless, you can analytically continue this function in the s variable so that we can assign a value to F(infinity;-1).

4) It seems to me that, no matter which function F we use, F(infinity;-1) = -1/12.

This would be an astonishing result, wouldn't it? ... but... is 4 proved somewhere?


Ps: To me it looks like using this procedure with different F could give different results, but every example that I find always results in -1/12!
 
  • #13
Ok, here is the issue in clear form. Now ##1^k + 2^k + 3^k + \dots## "is" ##\zeta(-k)##, where ##\zeta## is the standard zeta function, ##1 + 1/2^s + 1/3^s + \dots = \zeta(s)##. This series is only convergent for ##s > 1##. But the Riemann Hypothesis says the non-trivial zeroes of ##\zeta## all lie on the vertical line through ##s = 1/2## - yet the series only exists for ##s > 1##. Wow, I have solved one of the million dollar Clay prizes - it's undefined - yippee! But let's not get our hopes up. You see, there is another function called the Eta function, defined by ##\eta(s) = 1 - 1/2^s + 1/3^s - 1/4^s + \dots## Now one can easily derive a simple relation between the two:
https://proofwiki.org/wiki/Riemann_Zeta_Function_in_terms_of_Dirichlet_Eta_Function

We have ##\zeta(s) = \eta(s)/(1-2^{1-s})##. Note we have just done some algebraic manipulations to arrive at this equation. Also note ##\eta(s)## is convergent for ##s > 0## (alternating series test) - we have extended the values of ##s## so as to make the Riemann Hypothesis well formulated (there is still an issue at ##s = 1##, i.e. the harmonic series - there is a singularity there). Also, using Borel summation the Eta function is summable.

We can apply it easily: we have ##1^k + 2^k + 3^k + \dots = (1^k - 2^k + 3^k - 4^k + \dots)/(1-2^{1+k})##, and can calculate ##1+1+1+1+\dots## and ##1+2+3+4+\dots##

If ##k=0## we have ##1+1+1+1+\dots = (1-1+1-1+\dots)/(-1) = (1/2)/(-1) = -1/2##.
If ##k=1## we have ##1+2+3+4+\dots = (1-2+3-4+\dots)/(-3) = (1/4)/(-3) = -1/12##.
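The ##\zeta##-##\eta## identity being used here can at least be sanity-checked numerically where both series converge, e.g. at ##s = 2## where ##\zeta(2) = \pi^2/6## is known (a sketch; the truncation counts are arbitrary):

```python
import math

# Sanity check of zeta(s) = eta(s) / (1 - 2^(1-s)) at s = 2, where both
# series converge and zeta(2) = pi^2/6 is known.
def zeta(s, terms=200000):
    return sum(n ** -s for n in range(1, terms))

def eta(s, terms=200000):
    return sum((-1) ** (n + 1) * n ** -s for n in range(1, terms))

s = 2.0
lhs = zeta(s)                       # direct, slowly converging series
rhs = eta(s) / (1 - 2 ** (1 - s))   # via the faster-converging eta series
print(lhs, rhs, math.pi ** 2 / 6)   # all three agree to several digits
```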

I have used Borel Summation but Steven used Abel summation which is based on Abel's Theorem:
https://sites.math.washington.edu/~morrow/335_16/AbelLaplace.pdf

Notice the second bit about the Laplace transform. The Borel sum is the Laplace transform of ##B(t)## with ##s = 1##. Let ##s = 1/x## and then take the limit ##x \rightarrow 1^-##, just like Abel summation.

Let's look at ##(1/x)\int_0^\infty B(t)\,e^{-t/x}\,dt##. Doing the change of variable ##t' = t/x## we get ##\int_0^\infty \sum (a_n (xt)^n/n!)\,e^{-t}\,dt##. In a similar way to Abel, ##\sum a_n x^n## is absolutely convergent if ##|x| < 1## (hence is exactly the same as the Borel sum), and Abel says take the limit ##x \rightarrow 1^-##, so we can take the limit ##x \rightarrow 1^-## of ##\int_0^\infty \sum (a_n (xt)^n/n!)\,e^{-t}\,dt## to give the Borel sum ##\int_0^\infty \sum (a_n t^n/n!)\,e^{-t}\,dt##. So Abel and Borel summation are related: by taking the limit ##x \rightarrow 1^-## you get both - but Borel is more general.
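For the series ##1-2+3-4+\dots## this scaled integral can be evaluated in closed form as ##x/(1+x)^2## (since ##\sum a_n (xt)^n/n! = xt\,e^{-xt}##), and a numeric sketch (name `scaled_borel` mine) shows it approaching ##1/4## as ##x \rightarrow 1^-##:

```python
import math

# The scaled Borel integral for 1 - 2 + 3 - 4 + ...:
# sum a_n (x t)^n / n! = x t e^{-x t}, so the integral of x t e^{-x t} e^{-t}
# over [0, inf) equals x / (1 + x)^2, which tends to 1/4 as x -> 1-.
def scaled_borel(x, h=1e-4, upper=40.0):
    n = int(upper / h)
    vals = [x * (i * h) * math.exp(-(x + 1) * (i * h)) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for x in (0.5, 0.9, 0.99):
    print(x, scaled_borel(x), x / (1 + x) ** 2)  # columns agree; -> 0.25
```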

Note, and this is the key point, we have not done anything except put divergent series in different forms, and in those forms the apparent divergences are gone. Basically it's as the article on analytic continuation says: many divergences occur simply because the series is written in an unnatural form. When written in a better form, the divergence disappears. Not all divergent series are like that, e.g. the divergence in the harmonic series ##1 + 1/2 + 1/3 + \dots## can't be removed. This is where analytic continuation comes in - the harmonic series corresponds to a singularity that can't be removed.

Thanks
Bill
 
Last edited:
  • #14
the_pulp said:
I mean, isn't it possible to find another function that, when s=0 gives "the same" as this sum and when you analytically continue it, you get something different than -1/12?

No - from the theory of analytic continuation it is unique. This is easily seen: if ##f_1## and ##f_2## are analytic continuations, then on the region where they agree ##f_1 - f_2 = 0##. Analytically continue ##0## by any method and its continuation is zero. This means ##f_1 - f_2 = 0## in the region of continuation, and ##f_1 = f_2## everywhere.

This is what leads to so-called generic summation. The analytic continuation of a series ##S = a_0 + a_1 x + a_2 x^2 + \dots## that has some non-zero radius of convergence agrees with the series in that region. Since the continuation is unique it has values at other ##x## as well, and you have the normal summability properties of linearity and stability - the last one simply being ##S = a_0 + (a_1 x + a_2 x^2 + \dots)##.

So let's take S = 1 + x + x^2 ....

We have Sx = x + x^2 + x^3 ...

##S - Sx = 1##, so ##S = 1/(1-x)## regardless of ##x## - except of course ##x = 1##, which is a non-removable singularity; you need a more powerful method of summation for ##1+1+1+1+\dots##

While many methods extend convergence, when looked at they are just forms of analytic continuation and we can use generic summation to often get values for all x - not just those where it is naturally convergent. Again we have written it in a form with unnecessary restrictions and analytic continuation allows you to remove them - often anyway.

Here is another way of looking at it. Suppose someone has cut a circle out of the complex plane and then asks you what is the function in the whole plane. Giving you that circle is an unnatural restriction on the function - like the divergent series. But that small circle is enough to reconstruct the full function and remove that restriction that is artificial anyway.

That's why I say ##1+2+3+4+\dots## really equals ##-1/12##. The way it's written artificially obscures its real value, which can be recovered by some perfectly valid manipulations. I know many say it's not true - Micromass, for example, did a very scathing appraisal of it years ago, saying that claims it's -1/12 made him really mad. I can't prove him wrong - but what I hope I have shown is that you can also view it as simply removing unnatural restrictions, and that it really does have that value.

Thanks
Bill
 
Last edited:
  • #15
bhobba said:
No - from the theory of analytic continuation it is unique.

But a sum is not an analytic function. The expression ##1+2+3+4+...## has no variables in it, so you can't analytically continue it. What you can do is to introduce a variable, for example: It is the limit as ##t \rightarrow 1## of ##1^t + 2^t + 3^t + ...##. That can be analytically continued.

So in order to apply the theorem that analytic continuations are unique, you first have to modify the sum to parametrize it. There isn't a unique way of doing that. For example, instead of the above approach, you could also use:

limit as ##t \rightarrow 1## of ##1 + 2t + 3t^2 + 4 t^3 + ...##.

That has a different analytic continuation--it's a different function. So I don't understand how you can guarantee that two different ways to "enhance" the original series by introducing a parameter ##t## will give you the same analytic continuation at ##t=1##.
 
  • Like
Likes bhobba
  • #16
I was going to ask the exact same thing as stevendaryl!
 
  • Like
Likes bhobba
  • #17
stevendaryl said:
So I don't understand how you can guarantee that two different ways to "enhance" the original series by introducing a parameter ##t## will give you the same analytic continuation at ##t=1##.

That is a good point - how to parameterize the function that you analytically continue. What I have shown is that Borel summation - which requires no 'direct' parameterization, i.e. ##\int_0^\infty \sum (a_n t^n/n!)\,e^{-t}\,dt## (the ##t## comes from writing ##n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t}\,dt##) - is a natural extension of ordinary summation. Power series are easily and uniquely analytically continued. In fact any sum ##\sum a_n## can be written as ##\sum a_n x^n## where ##x## is one. That is unique, with a unique analytic continuation, provided its radius of convergence is not zero. I have transformed the Zeta function into the Eta function, which is also analytic (though not as easy to see). But it's true that pathological functions exist that do not have a unique analytic continuation, or even any analytic continuation at all.

This means I will have to limit my remarks to those functions that can be uniquely analytically continued - most can be, e.g. Borel summation gives a unique result and analytic continuation, and simple algebraic manipulations on such functions, like going from the Zeta function to the Eta function, preserve this.

So we will have to impose the rule that generic summation works on it - which means its sum is unique by any method. For ##1+2+3+\dots##, note that it is equal to ##1^1 + 2^1 + 3^1 + \dots## - a direct equality - and the relation to ##1^1 - 2^1 + 3^1 - \dots## is a direct equality, so beyond doubt. Now one can use generic summation on ##1-2+3-4+\dots## to show its sum must be ##1/4##. It's unique - any summation method must give it.
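Written out, the generic-summation argument for ##1-2+3-4+\dots##, using only linearity and stability, runs:

$$\begin{aligned}
A &= 1 - 1 + 1 - 1 + \cdots = 1 - (1 - 1 + 1 - \cdots) = 1 - A &&\Rightarrow\; A = \tfrac{1}{2},\\
S &= 1 - 2 + 3 - 4 + \cdots, \qquad S = 0 + 1 - 2 + 3 - \cdots \quad \text{(stability)},\\
2S &= 1 - 1 + 1 - 1 + \cdots = A = \tfrac{1}{2} &&\Rightarrow\; S = \tfrac{1}{4}.
\end{aligned}$$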

That said - you are correct - it must be done with care to show the answer is unique.

Thanks
Bill
 
Last edited:
  • #18
stevendaryl said:
It seems to me that the various summation techniques amount to this:
  1. You have a divergent sum: ##\sum_j a_j##.
  2. You come up with a sequence of expressions ##A_j(\lambda)## such that for each ##j##, ##\lim_{\lambda \rightarrow 1} A_j(\lambda) = a_j##.
  3. Then you come up with an analytic function ##F(\lambda)## such that for some values of ##\lambda##, ##F(\lambda) = \sum_j A_j(\lambda)##
  4. Then you declare that ##\sum_j a_j = F(1)##
If the original sum were convergent, I'm sure that this technique would give a unique answer for the sum. But if the sum is divergent, then it seems to me that there could be multiple functions ##F(\lambda)## that would work.

Here's an example of multiple ##F##, but it actually leads to the same answer.

The modified sum: ##1 - 2 + 3 - 4 + ...##

We can write that as ##\lim_{x \rightarrow 1} 1 - 2x + 3x^2 - 4x^3 ...##

That converges to the function ##F(x) = \frac{1}{(1+x)^2}##. We can immediately evaluate it to get: ##\lim_{x \rightarrow 1} \frac{1}{(1+x)^2} = 1/4##

An alternative summation is:

##1-2+3-4+... = \lim_{x \rightarrow 1} 1^{x} - 2^{x} + 3^{x} - 4^{x} + ...##

The function ##F'(x) = \sum_j j^{x} (-1)^{j+1}## is related to the zeta function:

##F'(x) = \zeta(-x) (1-2^{1+x})##

This function is very different from ##F(x) = \frac{1}{(1+x)^2}##, but they have the same value at ##x=1##, namely 1/4.

Off the top of my head, I can't think of an example where you get two different answers, but I also don't see any reason for it to always give the same answer, no matter which ##F## you choose

That's a good point; so to rephrase it, I guess my quote ""if you rewrite the sum of all integers in terms of a complex function" can be done in different ways, and I can't think of a reason why all these different ways still would give the same answer after analytical continuation; after all, you're analyically continuing different functions.

I don't have a clear-cut answer to that.
 
  • Like
Likes bhobba
  • #19
haushofer said:
I don't have a clear-cut answer to that.

It's the standard transformation between the Zeta and Eta functions. Even though it's an equality, doing this has a number of advantages. First, the Eta series converges for ##s > 0##, the Zeta series only for ##s > 1## - simply an advantage of writing it in a different but equal form. The second advantage is more subtle, and shows there is an unstated assumption being made: the Zeta series is not stable. This is easily seen considering ##s = 0##, where you have ##1+1+1+1+\dots##, which is the same as ##1 + (1+1+1+1+\dots)##. So you can't use generic summation. I think it was Hardy who showed that, because of this, when written in that form you can't assign any value to it - you really must make further assumptions.

This is the real key: what assumption was made in deriving this equality? Each ##a_n## in ##\sum a_n## must be associated with an ##n^s##, so you have an implied order making the series stable. In ##1+2+3+4+\dots## the ordering is obvious, i.e. you generalise to ##1^k + 2^k + 3^k + \dots## This also means that ##\sum a_n n^k + \sum b_n n^k = \sum (a_n+b_n)\,n^k##. Although obvious, and easily provable for finite and conventionally convergent sums, it is something that is assumed - the ##n^k## is associated by position in the series. This can break down if you have zeroes - you might think removing them would make no difference, but because the series is not stable it can.

Assuming the above, the transformation from the Zeta to the Eta function is rigorous - we now have a series that has had its region of convergence extended and is stable, so generic summation can now be applied.

Its an assumption, but I think most would accept it.

Thanks
Bill
 
  • #20
I agree that Borel summation seems like a reasonable way to extend the set of summable series.

However, here's the abstract description of the issue.

Mathematically, an infinite sequence is an infinite-dimensional vector, an element of the vector space ##V = R^\omega##. Let's make up a term, an "evaluation", which is a partial linear map from ##V## into ##R##. An evaluation has a domain, which is the set of infinite sequences that it evaluates.

There is one particular evaluation, which I'll call ##\Sigma##, which is the usual notion of the limit of the finite sums of terms in a sequence. Its domain is some subset of ##R^\omega##.

The goal in summation techniques is to come up with an evaluation ##E## such that
  1. ##dom(E) \supset dom(\Sigma)##: It declares more sequences to be summable.
  2. If a sequence ##s## is in ##dom(\Sigma)##, then ##E(s) = \Sigma(s)##. It agrees with ##\Sigma## on all convergent series.
Examples of such evaluations include Borel summation and Abel summation and Zeta regularization, etc.
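This "evaluation" idea can be sketched concretely with Abel summation as the partial map; `abel_eval` and its parameters are my own illustrative choices, not from the thread:

```python
# Abel summation as a partial map from sequences to reals: approximate
# lim_{y -> 1-} sum a_n y^n by evaluating at a fixed y = x close to 1.
def abel_eval(a, x=0.9999, terms=100000):
    total, p = 0.0, 1.0          # p = x^n
    for n in range(terms):
        total += a(n) * p
        p *= x
    return total

# agrees with the ordinary sum Sigma on a convergent series: sum 1/2^n = 2
print(abel_eval(lambda n: 0.5 ** n))   # ~2.0
# extends the domain to the divergent 1 - 1 + 1 - 1 + ... -> 1/2
print(abel_eval(lambda n: (-1) ** n))  # ~0.5
```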

My concern is the possibility that you might have two different evaluations ##E_1## and ##E_2## that give different answers for a particular series.
 
  • Like
Likes haruspex and bhobba
  • #21
stevendaryl said:
My concern is the possibility that you might have two different evaluations ##E_1## and ##E_2## that give different answers for a particular series.

Your concern is valid, and there are Tauberian theorems showing when it may occur, as well as which methods give the same results. If a method is stable it will always give the same result as generic summation - and most methods are stable (Borel isn't, but Borel exponential is, and most Borel-summable series are also Borel-exponential summable, so must give the same result). But there are some pathological counterexamples:
https://en.wikipedia.org/wiki/Borel_summation

However, the series I have used are all generically summable, so there is no issue - the summation is unique by any method that is stable. To me the critical bit is the formula converting from Zeta to Eta - as I posted, I am now pretty sure it has a hidden assumption to make it stable; the Eta function is stable, so something must be going on to turn an unstable series into a stable one. I posted what I think it is - I could be wrong, and my investigation into divergent series is still proceeding.

Thanks
Bill
 
  • #22
If the sum of something is "infinity", the sum of something plus 10 is also "infinity". This does not mean that 10=0! Or, in latex notation: If [itex] \sum_{n=1}^{\infty}a_{n}=\infty[/itex] then also [itex]10+\sum_{n=2}^{\infty}a_{n-1}=\infty [/itex].

OK. Neither expression has any meaning, so any conclusion drawn from them is equally meaningless.
 
  • #23
Svein said:
OK. Neither expression has any meaning, so any conclusion drawn from them is equally meaningless.

This divergent series stuff is weird. The problem with what you say above has to do with stability - sometimes it turns out ##a_0 + a_1 + a_2 + a_3 + \dots## does not equal ##a_0 + (a_1 + a_2 + a_3 + \dots)##. Such divergent series are called non-stable. If they are stable then so-called generic summation will usually work, and every summation method will give the same result. E.g. consider ##1 + x + x^2 + x^3 + \dots## Assuming generic summation, ##S - xS = 1##, or ##S = 1/(1-x)##, true for any ##x## except ##x = 1##, where you have a singularity. What's going on is that ##S## is normally convergent for ##|x| < 1##. When you graph that in the complex plane you get an analytic function (basically a function that can be expressed as a power series - that is not the whole story, you need to study complex variables for that, but it is perfectly true in this case), initially existing for ##|x| < 1##. But there is this theorem that says if you know an analytic function in some small simply connected region, then you know its unique analytic continuation to the rest of the plane. In the unit disk it is ##1/(1-x)##, and an analytic function equal to ##1/(1-x)## in the unit disk that exists everywhere else is of course ##1/(1-x)## itself - except at ##x = 1##. So for all ##x \neq 1## we can take ##1 + x + x^2 + x^3 + \dots = 1/(1-x)##, because the continuation is unique, as I showed in a previous post. Because the function is analytic it can be expressed as ##\sum a_n x^n = a_0 + \sum_{n \geq 1} a_n x^n## - hence it is stable, so most of the time you can use generic summation to sum a divergent series. So let ##x = 2## and you have ##1 + 2 + 4 + 8 + \dots##, which is normally divergent to ##\infty##, but a more careful analysis shows, rather strangely, it is ##1/(1-2) = -1##. Divergent series combined with complex analysis is - well, rather strange. But there is no way around it - the reals are a subset of the complex numbers, so of course you can analyse it that way.

Thanks
Bill
 
Last edited:
  • #24
I've found a paper (http://ri.conicet.gov.ar/bitstream/...890-be1b2d8d3d16_A.pdf?sequence=2&isAllowed=y) that states the following:

"...Theorem 1. Any method Y assigning a finite number to the expression 1+1+1+1+1+… is (i) not totally regular, (ii) not regular and (iii) contradictory.
Theorem 2. Any method Y assigning a finite number to the expression 1+2+3+4+5+… is (i) not totally regular, (ii) not regular and (iii) contradictory.

By contradictory we mean that incompatible statements corresponding to r = Y ({an}) = s for (real) numbers r≠s can be proved in this context. Proofs are given in Appendix B..."

The demonstration is pretty straightforward. It's important to mention that here

  • they are not talking about the standard sum. Instead they are talking about any method that extends the domain of the standard sum.
  • by regular it means that the extension of the sum should give the same result as the standard sum in the case of convergent series.
  • The extension of the sum should be linear (the sum of a*sequence(A) + b*sequence(B) should be the same as a*sum(sequence(A)) + b*sum(sequence(B)))
  • The extension of the sum should be stable (sum(a1;a2;a3;a4...)=a1+sum(a2;a3;a4...))

It does not say anything about other divergent series, but it looks like summation methods that give results like ##4^0+4^1+4^2+4^3+\dots = 1/(1-4)## still have a chance to be regular, linear, stable and not contradictory.

Thanks!
 
  • Like
Likes bhobba
  • #25
I have already mentioned this issue with some divergent sums. The easiest one like that is simply ##1+1+1+\dots## You can trivially see all the things mentioned, i.e. "Any method Y assigning a finite number to the expression 1+1+1+1+1+… is (i) not totally regular, (ii) not regular and (iii) contradictory". The second one is not that hard to show either.

To sum it you must first write it in a form that does not have those issues - in general ∑n^k, which means (if k = 0) you can't write it in the form 1 + (1+1+1+1+...), because that would be of the form n^1 + ∑n^k. If you impose the reasonable restriction that when summing only powers n^k can be added, you get the transformation to the Eta function, which does not have the issue. Let S = ∑n^k. Then S - (2*2^k)*S = 1 - 2^k + 3^k - 4^k ..., which is the Eta function, and it does not have these issues. So S = (1 - 2^k + 3^k - 4^k ...)/(1 - 2*2^k).
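The relation S = E(k)/(1 - 2*2^k) is the s = -k case of the identity η(s) = (1 - 2^(1-s))·ζ(s), which can be checked directly with ordinary convergent sums for s > 1. A minimal pure-Python sketch (partial sums only, so the comparison is approximate; function names are mine):

```python
def zeta_partial(s, terms=200000):
    """Partial sum of the zeta series 1/1^s + 1/2^s + ... (converges for s > 1)."""
    return sum(1 / n ** s for n in range(1, terms))

def eta_partial(s, terms=200000):
    """Partial sum of the eta series 1/1^s - 1/2^s + 1/3^s - ..."""
    return sum((-1) ** (n + 1) / n ** s for n in range(1, terms))

# eta(s) = (1 - 2^(1-s)) * zeta(s) holds in the region of convergence;
# analytic continuation carries the same identity down to s = -k.
for s in [2.0, 3.0, 4.0]:
    assert abs(eta_partial(s) - (1 - 2 ** (1 - s)) * zeta_partial(s)) < 1e-4
```

At s = -1 the continued identity is exactly E(1) = (1 - 2*2^1)·S, i.e. S = (1/4)/(-3) = -1/12.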

So to be clear, one must do some very reasonable things to do the sum. This is the basis of the comment in the paper: "The correct expression would be Y(n) ≡ Y(1,2,3,4,...) = −1/12 (their eq. (1)), where Y must be understood as the method based on Riemann's Zeta function."

This is the key - to get the answer you must put it in the form of the Zeta function so you can transform it to the Eta function. But this is a very natural thing to do. 1 + 2 +3 +4... = 1^1 + 2^1 +3^1 +4^1...

Thanks
Bill
 
Last edited:
  • #26
Sorry bhobba, I could not follow you. But it seems that you are OK with 1+2+3+4... = -1/12. Or at least, as you said, "Y(1,2,3,4,...) = −1/12, where Y must be understood as the method based on Riemann's Zeta function".

But why should we use that method? As the paper said, if that method is able to produce a finite result for the sequence (1;2;3;4;...) then that method is not regular and contradictory.

I'm really more comfortable saying (1;2;3;4;...) is not in the domain of Y, given that Y is stable, regular and linear. And if it is stable and linear and produces a result, that method is not regular and it is contradictory.

Although it is not very important, if a method is not regular and is contradictory (as it seems with the Riemann Zeta function), I don't like it.

Thanks again!
 
  • #27
the_pulp said:
But why should we use that method? As the paper said, if that method is able to produce a finite result for the sequence (1;2;3;4;...) then that method is not regular and contradictory.

Zeta function summation does not have that issue because of the form it is written in. Look at the theorems in the paper and try them when written in Zeta function form - as I explained in my reply. The paper even states you get -1/12 when you do that - no problems. It's very natural to write 1+2+3+4... = ∑n^k with k = 1. It's a strict equality; of course you can write it differently, but it would not be natural. It is, as I said at the start, with strict equalities that we can convert it into forms that can be summed. Why choose the form of the Zeta function? It's just so natural. But yes, it is important to know it is that choice that allows it to be summed.

Thanks
Bill
 
  • #28
Any linear and stable method has that issue.

Let's suppose that Y(1;2;3;4;...) = -1/12.

Then, by linearity: Y(1;1;1;1;...) = Y(2;3;4;5;...) - Y(1;2;3;4;...)
By stability: Y(1;1;1;1;...) = (Y(1;2;3;4;...) - 1) - Y(1;2;3;4;...) = -1
But, by stability (taking one "1" out of the Y), Y(1;1;1;1;...) = 1 + Y(1;1;1;...), hence a contradiction.

As a consequence, if Y is linear and stable and Y(1;2;3;4;...) has a value, then Y(1;1;1;1;...) has a value, and that produces contradictions. So if the Zeta function sum produces a value for Y(1;2;3;4;...) and, as you said, produces no contradictions, then the Zeta function sum is not linear or not stable (I guess it is the second case; it may not be stable).
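The argument above can be mechanised in a few lines of plain Python (a sketch only; Y is hypothetical and represented just by candidate values): whatever finite value is assigned to Y(1;2;3;4;...), linearity plus stability force Y(1;1;1;...) = -1, while stability alone demands Y(1;1;1;...) = 1 + Y(1;1;1;...), which no finite number satisfies.

```python
from fractions import Fraction

def forced_value(y1234):
    """Value that linearity + stability force on Y(1;1;1;...):
    Y(1;1;1;...) = Y(2;3;4;...) - Y(1;2;3;...)          (linearity)
                 = (Y(1;2;3;...) - 1) - Y(1;2;3;...)    (stability)
    """
    return (y1234 - 1) - y1234

# The candidate value cancels out: the forced value is -1 no matter what.
for candidate in (Fraction(-1, 12), Fraction(0), Fraction(42)):
    assert forced_value(candidate) == -1

# But stability applied to (1;1;1;...) itself would need v == 1 + v:
assert not any(v == 1 + v for v in (-1.0, 0.0, 100.0))
```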

Could this be the case? Is Zeta Function Sum not stable?

Thanks again!
 
  • #29
the_pulp said:
Is Zeta Function Sum not stable

I showed ζ(s) = η(s)/(1-2^(1-s)). Since the Eta function is linear and stable, so must the Zeta function be.

Since it is linear and stable it is generic summable, and hence the Zeta function gives the same value.

Before, I showed using generic summation that for all x: 1 + x + x^2 + ... = 1/(1-x). Let x = -1, so you have 1 - 1 + 1 - 1 + 1 ... = 1/2. So we have ζ(0) = 1+1+1+1... = (1 - 1 + 1 - 1 ...)/(1-2) = (1/2)/(-1) = -1/2.

For 1 + 2 + 3 + 4 ... we need to work out 1 - 2 + 3 - 4 ... This is easiest done by differentiating 1 - x + x^2 - x^3 ... = 1/(1+x) (replacing x by -x). So -1 + 2x - 3x^2 + 4x^3 ... = -1/(1+x)^2. Set x = 1, so 1 - 2 + 3 - 4 ... = 1/4, and ζ(-1) = 1 + 2 + 3 ... = (1 - 2 + 3 - 4 ...)/(-3) = -1/12.
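This limit argument can be checked numerically (a pure-Python sketch, names mine): for |x| < 1 the series x - 2x^2 + 3x^3 - ... converges to x/(1+x)^2, which tends to 1/4 as x → 1-, and then (1/4)/(1 - 2^2) = -1/12.

```python
def alt_partial(x, terms=8000):
    """Partial sum of x - 2x^2 + 3x^3 - 4x^4 + ..., convergent for |x| < 1."""
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms))

x = 0.99
closed_form = x / (1 + x) ** 2     # from differentiating 1/(1+x) term by term
assert abs(alt_partial(x) - closed_form) < 1e-9
assert abs(closed_form - 0.25) < 0.01    # approaching 1/4 as x -> 1-

print((1 / 4) / (1 - 2 ** 2))   # -1/12 ≈ -0.0833...
```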

The paper you referenced says there is no issue with Zeta function summation, as it must be because of the simple relation to the Eta function.

Thanks
Bill
 
  • #30
The paper I referenced says that there is an issue with Zeta sum. In fact:

"... In Nesterenko & Pirozhenko (1997) we encounter an attempt to justify the use of the Riemann’s Zeta function. The authors refer to Hardy’s book for the actual method. They use axioms A and B and the zeta function to write equality between ∑ n=1 ∞ n and -1∕12 (see their eq. (2.20)). The conclusion is evident: the method does not comply with Hardy’s axioms. Furthermore, the result is false since to reach their conclusion the authors disregard a divergent contribution. Hence, the equal sign does not relate identical quantities as it should..."

Note that Hardys axioms are Stability, Linearity and Regularity.

I really can´t see what your are saying. Nevertheless I appreciate your prompt responses.

Thanks!
 
  • #31
It's simple. Let's spell it out again. This time I will use the Zeta function in a different form, C(k), defined as ∑n^k (the sum is from 1, not zero). Now 2*2^k*C(k) = 2*2^k + 2*4^k + 2*6^k ..., so C(k) - 2*2^k*C(k) = 1 + (1-2)*2^k + 3^k + (1-2)*4^k ... = 1 - 2^k + 3^k - 4^k ..., which I will call E(k). So we have (1 - 2*2^k)*C(k) = E(k), or C(k) = E(k)/(1 - 2*2^k).

Now we will show that for k = 0 and k = 1, E(k) is linear and stable, by using what's called generic summation to sum them. Hardy took these as the defining axioms of a series summation, as pointed out in the article you posted. If these axioms give a value to a series, then that value obviously obeys those axioms.

Let's start with k = 0, so E(0) = 1 - 1 + 1 - 1 ... (this is called Grandi's series). There are a number of ways of summing it, but here simply applying the axioms is easiest: E(0) = 1 - E(0), or 2*E(0) = 1 ⇒ E(0) = 1/2. But C(0) = E(0)/(1 - 2*2^0) = -1/2. So C(0) = 1+1+1+1+1+... = -1/2.

Similarly E(1) = 1 - 2 + 3 - 4 ... = 1 - (1+1) + (1+2) - (1+3) ... = (1 - 1 + 1 - 1 ...) - (1 - 2 + 3 ...) = 1/2 - E(1) ⇒ E(1) = 1/4. And we get C(1) = 1+2+3+4... = E(1)/(1 - 2*2^1) = -1/12.
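The two generic-summation equations can be solved in exact arithmetic; here is a sketch using Python's Fraction (the names E0, E1, C are mine):

```python
from fractions import Fraction

# Generic summation gives linear equations for the Eta values:
#   E(0) = 1 - E(0)      (Grandi's series)
#   E(1) = E(0) - E(1)   (from the regrouping 1 - (1+1) + (1+2) - ...)
E0 = Fraction(1, 2)
E1 = Fraction(1, 4)
assert E0 == 1 - E0
assert E1 == E0 - E1

def C(k, Ek):
    """C(k) = E(k) / (1 - 2*2^k), from the Eta/Zeta relation above."""
    return Ek / (1 - 2 * 2 ** k)

print(C(0, E0))   # -1/2  : the value assigned to 1+1+1+...
print(C(1, E1))   # -1/12 : the value assigned to 1+2+3+...
```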

You can do the rest by using linear and stable summation techniques like Borel Exponential Summation.

How did it evade the theorems in your paper? By transforming the problem into one where they do not apply - as can be seen by E(k) being linear and stable. The n in ∑n^k ensures you can't perform things like taking the first term out etc. For example in the proof it says: That the method Y is not totally regular is immediate since otherwise it should assign the value +∞ to the proposed expression. By (C), Y ({1,1,1,…}) has the same value as 0 + 1 + 1 + 1 + …. However in the form ∑n^k you can't put zero in front of it - the sum is from 1. You will find similar issues with other parts of the proof.

Thanks
Bill
 
Last edited:
  • #32
Thanks for your answer, but I think we are using different definitions of stability. In fact, you say

The n in ∑n^k ensures you can't perform things like taking the first term out etc.

But the definition of stability is (at least from what I read):

Y(a1;a2;a3;a4;...)=a1+Y(a2;a3;a4;...)


So, if the Zeta function sum method "forbids" taking the first term out, then it does not guarantee this equation for every sequence in the domain of Y and, as a consequence, it is not stable given this definition of stability.

PS: It must be that we are using different definitions of stability, because I can't see how this method could be stable given that, as mentioned in the paper, there is no consistent, linear, stable and regular method that assigns a value to (1;2;3;4;...).
 
  • #33
the_pulp said:
Thanks for your answer, but I think we are using different definitions of stability.

It's the same.

What I am doing is taking a certain class of sums - note, it is not a general method of summation - and transforming it into another sum. That sum is of the form ∑(-1)^(n+1)*n^k/C, where C has been defined before (just too lazy to write it out). That new sum is stable and can be summed by stable summation methods such as Borel exponential summation.

The theorems are about general summation methods, not specific examples - note again, it is not a general summation method but a way to sum a class of sums, e.g. those of the Zeta function. That's why the theorems fail. Rigorously, they fail because ∑(-1)^(n+1)*n^k is stable.

So to discuss stability you need to discuss the stability of ∑(-1)^(n+1)*n^k. That is easily seen to be stable because its Borel sum exists, the limit of e^(-n)*(-1)^(n+1)*n^k is zero, and there is a theorem that says if that is the case then it agrees with Borel exponential summation, which is stable.
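The Borel sum mentioned here can be carried out explicitly for k = 1 (cf. the opening post): term by term, ∑ (-1)^(n+1)·n·t^n/n! = t·e^(-t), so the Borel sum of 1 - 2 + 3 - 4 + ... is ∫₀^∞ t·e^(-2t) dt = 1/4. A pure-Python check with a simple Simpson rule (the cutoff at t = 40 is my assumption; the neglected tail is of order e^(-80)):

```python
import math

def integrand(t):
    # Borel transform t*e^(-t) of 1-2+3-4+..., times the kernel e^(-t)
    return t * math.exp(-2 * t)

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

borel_sum = simpson(integrand, 0.0, 40.0)
print(borel_sum)   # ≈ 0.25, the Eta value E(1) = 1/4 used above
assert abs(borel_sum - 0.25) < 1e-8
```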

The quote you gave before depends on the details in that book he quotes.
"... In Nesterenko & Pirozhenko (1997) we encounter an attempt to justify the use of the Riemann’s Zeta function. The authors refer to Hardy’s book for the actual method. They use axioms A and B and the zeta function to write equality between ∑ n=1 ∞ n and -1∕12 (see their eq. (2.20)). The conclusion is evident: the method does not comply with Hardy’s axioms. Furthermore, the result is false since to reach their conclusion the authors disregard a divergent contribution. Hence, the equal sign does not relate identical quantities as it should..."
The fact that he says he is dropping an infinity suggests he is using Ramanujan summation, which I will explain later - have to head off to lunch now.

Thanks
Bill
 
  • #34
I have been dealing with this problem for two years and proposed my solution in the article Y.N. Zayko, The Geometric Interpretation of Some Mathematical Expressions Containing the Riemann zeta-Function, Mathematics Letters, 2016; 2 (6): 42-46.
The following are excerpts from this (and other) articles on this topic.

Usually, when we mention the Riemann zeta-function, the famous Riemann hypothesis (RH)
comes to mind, which says that the real parts of the nontrivial zeros of the zeta-function are
1/2. By the way, mathematicians have not yet been able to find a proof (or refutation) of it. This
result is so important (it is related to the distribution of prime numbers) that the Clay Mathematics
Institute has included RH among the most important problems of the millennium.

However, in physical applications, the Riemann zeta-function appears much more often without
any mention of RH. As an example, perhaps not the best, we mention the problem of
regularization of divergent expressions of field theory - the so-called zeta-regularization of S.
Hawking. Mathematicians have long been accustomed to the fact that if we represent the final
result of the theory in the form of an expression containing the zeta-function, then one does not
have to worry, that it, being written in another form, may contain divergence, i.e. be
meaningless. This is due to one surprising feature of the zeta-function - to "absorb" infinity into
itself, i.e. to ascribe to the expressions, at first sight, divergent, finite values. For example

ζ(0) = 1 + 1 + 1 + 1 + ... = -1/2
ζ(-1) = 1 + 2 + 3 + 4 + 5 + ... = -1/12

However, this fact, which did not surprise mathematicians, surprised the non-specialists [1].
An attempt to comprehend the above results was undertaken in [2]. The idea of that
paper is to represent the calculation of the zeta function as the operation of a certain
Turing machine (MT), in which the role of the tape is played by the numerical axis and the role
of the head by a physical particle moving in accordance with equations of motion
determined by the divergent expressions on the right-hand side of the formulas given above. Since
partial sums in the expression for the second formula determine the path traveled by the particle
at constant acceleration, it is necessary to introduce into the equations of motion the source of
this acceleration, or of gravity according to Einstein's equivalence principle. In other words, for
the equations of motion of the particle, one should choose the equations consistent with the
general theory of relativity of Einstein with a suitable source. After solving them, we define the
metric on the numerical axis in which the motion of the particle will no longer cause surprise
because the final path that a particle will pass in an infinite time will be finite. The final
expression (-1/12) for the path is obtained if we take into account the curvature of the metric of
the numerical axis in accordance with the solution of the Einstein equations. In fairness, it should
be noted that the result in this paper differs from the exact one by about 3% due to the fact that
instead of the relativistic expression, the nonrelativistic expression was used for the acceleration
of the particle. (In the opposite case, the equations could not be solved.)

From the point of view of Turing machine theory, the result obtained means including
infinity among the admissible values of the counting time. Earlier, infinite time
meant non-computability of the problem. This is true, since summation of a divergent series on
an ordinary Turing machine is related to non-computable problems. Therefore, the MT described
in the paper refers to the so-called relativistic MT [3].
In development of these ideas the calculation of the zeta function of the complex argument [4]
was performed and the RH was proved [5]. In addition, the idea was expressed that computation,
like motion, can change the geometry of the numerical continuum, and moreover the recognized
system of Euclidean postulates should be changed to conform with the above formulas [6].
References
1. D. Berman, M. Freiberger, Infinity or -1/12 ?, + plus magazine, Feb. 18,
2014, http://plus.maths.org/content/infinity-or-just-112.
2. Y.N. Zayko, The Geometric Interpretation of Some Mathematical Expressions Containing
the Riemann zeta-Function, Mathematics Letters, 2016; 2 (6): 42-46.
3. I. Nemeti, G. David, Relativistic Computers and the Turing Barrier. Applied Mathematics
and Computation, 178, 118-142, 2006.
4. Y. N. Zayko, Calculation of the Riemann Zeta-function on a Relativistic Computer,
Mathematics and Computer Science, 2017; 2 (2): 20-26.
5. Y. N. Zayko, The Proof of the Riemann Hypothesis on a Relativistic Turing Machine,
International Journal of Theoretical and Applied Mathematics. 2017; 3 (6): 219-224,
http://www.sciencepublishinggroup.com/j/ijtam, doi: 10.11648/j.ijtam.20170306.17.
6. Y. N. Zayko, The Second Postulate of Euclid and the Hyperbolic Geometry, International
Journal of Scientific and Innovative Mathematical Research (IJSIMR), Volume 6, Issue 4, 2018,
PP 16-20. http://dx.doi.org/10.20431/2347-3142.0604003; arXiv: 1706.08378, 1706.08378v1
[math.GM])
 
  • Like
Likes bhobba
  • #35
The theorems are about general summation methods not specific examples - note again - it is not a general summation method but how to sum a class of sums eg those of the Zeta function. That's why the theorems fail. Rigorously they fail because ∑ (-1)^(n+1) * n^k is stable. So to discuss stability you need to discuss the stability of ∑ (-1)^(n+1) * n^k.

Reference https://www.physicsforums.com/threa...e-and-1-2-3-4-1-12.959557/page-2#post-6087836

What does it mean for a sequence to be stable? What I've read is that a method can be stable or not, but not a sequence. I guess what you are saying is that a sequence-method pair is stable, but I'm not sure.

I'm having trouble understanding you. Do you have a reference, so I can look at it directly from the source and not bother you anymore (at least until I read the source)?

Thanks again anyway
 
