Why Does Zee's Steepest-Descent Approximation Seem Incorrect?

In summary, the poster initially thought that the correction factor ##e^{-O(\hbar^{1/2})}## in Zee's steepest-descent formula blows up, but as ##\hbar \to 0## it actually tends to 1; the thread then discusses where the ##\hbar^{1/2}## in the exponent comes from.
  • #1
weirdoguy
Hello everyone, my first post :shy:

I'm reading Zee's 'QFT in a Nutshell' and I came across one thing that bothers me - his short discussion of the steepest-descent approximation. I've known this method for quite a long time now, but I've never seen this approximation of the corrections. Here is what he writes:

[tex]I = e^{-(1/\hbar)f(a)} \left(\frac {2\pi \hbar}{f''(a)}\right)^{1/2} e^{-O(\hbar ^{1/2})}[/tex]

Of course we take the limit in which Planck's constant is small, and that is where the problem occurs: in this limit [tex]e^{-O(\hbar ^{1/2})}[/tex] will approach infinity, and that is not how it should behave, right? Any thoughts on this issue? I just think that what he wrote is simply incorrect.
I tried to derive this approximation by not neglecting the cubic terms in ##(x-a)##, and I don't even see why there is a square root of Planck's constant...


Sorry for my English, it's been a long time since I wrote anything in this language :shy:
 
  • #2
Why does it approach infinity? If ℏ goes to zero, the square root does the same, and the expression goes to e^0 = 1.
Or does the O mean something different?

##e^{-(1/\hbar)}## will go to zero.
 
  • #3
And the other factor goes to zero ... and since 1 x 0 = 0 the expression vanishes.
 
  • #4
weirdoguy said:
Of course we take the limit in which Planck's constant is small, and that is where the problem occurs: in this limit ##e^{-O(\hbar ^{1/2})}## will approach infinity, and that is not how it should behave, right?
In the limit of small ##\hbar##, ##e^{-O(\hbar^{1/2})}## approaches one, not infinity. Perhaps you are thinking of ##e^{O(\hbar^{-1/2})}##. That's a very different quantity from ##e^{-O(\hbar^{1/2})}##.
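To make the ##O## explicit, write the exponent schematically as ##c\,\hbar^{1/2}## with some constant ##c>0## (just a placeholder, to see the two limits):

[tex]e^{-c\sqrt{\hbar}} \to e^{0} = 1 \quad (\hbar \to 0^{+}), \qquad \text{whereas} \qquad e^{\,c/\sqrt{\hbar}} \to \infty \quad (\hbar \to 0^{+}).[/tex]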
 
  • #5
Oh, my bad... I don't know why I thought it goes to infinity; I had the graph of the wrong function in my mind.

So now I still have a problem - how did he get this ##e^{-O(\hbar^{1/2})}## factor? I don't see why it involves a square root of ##\hbar##...
 
  • #6
If you tell us where in the book this comes up, we may be able to help you. Is it the application of this method to the generating functional in QFT? If so, my QFT manuscript may help you too:

http://fias.uni-frankfurt.de/~hees/publ/lect.pdf

The [itex]\hbar[/itex] (loop) expansion is found on p. 125ff.
 
  • #7
It is probably applied later in the book, but it first appears at the very beginning. I attached a screenshot of everything there is about this issue - it's not that much, though...


And thank you, vanhees71, for the lecture notes, I'll check them later :smile:
 

Attachments

  • zee.jpg (screenshot of Zee's discussion of the steepest-descent approximation)
  • #8
weirdoguy said:
So now I still have a problem - how did he get this ##e^{-O(\hbar^{1/2})}## factor? I don't see why it involves a square root of ##\hbar##...
For a reasonably well-behaved function f(q), the error in that Taylor expansion is going to be dominated by the last included term, ##\frac 1 2 f''(q_0)(q-q_0)^2##.

Look at ##\int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\hbar} f''(q_0)(q-q_0)^2\right)\, dq##. Sans some scale factors, that's just the Gaussian integral.
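As a quick numerical sanity check (my own throwaway script, not from the book; the values standing in for ##f''(q_0)## and ##\hbar## are arbitrary choices):

[code=python]
# Numerical check of the Gaussian integral quoted above:
#   int_{-inf}^{inf} exp(-f2*(q-q0)^2 / (2*hbar)) dq = sqrt(2*pi*hbar/f2)
# f2 stands in for f''(q0); f2, hbar and q0 are arbitrary choices.
import numpy as np
from scipy.integrate import quad

f2, hbar, q0 = 3.0, 0.05, 0.0

numeric, _ = quad(lambda q: np.exp(-f2 * (q - q0)**2 / (2.0 * hbar)),
                  -np.inf, np.inf)
closed_form = np.sqrt(2.0 * np.pi * hbar / f2)

print(numeric, closed_form)   # the two numbers agree
[/code]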
 
  • #9
D H said:
Look at ##\int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\hbar} f''(q_0)(q-q_0)^2\right)\, dq##. Sans some scale factors, that's just the Gaussian integral.

Well, I know, and I know how to integrate it. But still, I can't see why the corrections take the form of an exponential with ##-O(\sqrt{\hbar})##.
But now I tried to see what happens if we don't neglect the higher-order terms. So we have:
[tex]f(q)=f(a)+\frac{1}{2}f''(a)(q-a)^2+\frac{1}{3!}f'''(a)(q-a)^3+\ldots [/tex]
[tex]I=\int_\mathbb{R}dq\exp\left(\frac{-1}{\hbar}f(a)+\frac{-1}{\hbar}\frac{1}{2}f''(a)(q-a)^2+\frac{-1}{\hbar}\frac{1}{3!}f'''(a)(q-a)^3+\ldots \right)[/tex]
Now I keep the quadratic term in the exponent and expand the exponential of the cubic and higher terms in a Taylor series:
[tex]I=\int_\mathbb{R}dq\exp\left(\frac{-1}{\hbar}f(a)+\frac{-1}{\hbar}\frac{1}{2}f''(a)(q-a)^2\right)\left[1+\left(\frac{-1}{3!\hbar}f'''(a)(q-a)^3+\ldots\right)+\frac{1}{2!}\left(\frac{-1}{3!\hbar}f'''(a)(q-a)^3+\ldots\right)^2+\ldots \right][/tex]
Now, ##I## is a sum, first summand is of course just our basic integral, which I will denote by ##I_0##:
[tex]I=I_0+\int_\mathbb{R}dq\exp\left(\frac{-1}{\hbar}f(a)+\frac{-1}{\hbar}\frac{1}{2}f''(a)(q-a)^2\right)\left[\left(\frac{-1}{3!\hbar}f'''(a)(q-a)^3+\ldots\right)+\frac{1}{2!}\left(\frac{-1}{3!\hbar}f'''(a)(q-a)^3+\ldots\right)^2+\ldots \right][/tex]
The first non-zero correction integral will be the one with ##(q-a)^4##:
[tex]-\frac{f^{(IV)}(a)}{4!\hbar}e^{\frac{-f(a)}{\hbar}}\int_\mathbb{R}dqe^{-\frac{f''(a)}{2\hbar}(q-a)^2}(q-a)^4=-\frac{3f^{(IV)}(a)}{4!\hbar}\left(\frac{\hbar}{f''(a)}\right)^2
\sqrt{\frac{2\pi\hbar}{f''(a)}}
e^{\frac{-f(a)}{\hbar}}[/tex]
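(For reference, the Gaussian moment used here is

[tex]\int_\mathbb{R}dx\,x^{4}e^{-\alpha x^{2}}=\frac{3}{4}\sqrt{\frac{\pi}{\alpha^{5}}},\qquad \alpha=\frac{f''(a)}{2\hbar},[/tex]

which produces the factor ##3\left(\hbar/f''(a)\right)^{2}\sqrt{2\pi\hbar/f''(a)}##.)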
Combining this with ##I_0## we get:
[tex]I=
\sqrt{\frac{2\pi\hbar}{f''(a)}}
e^{\frac{-f(a)}{\hbar}}
\left(1+C\cdot\hbar+\ldots\right)
[/tex]
where ##C## is a constant. I looked at the terms with ##(q-a)^6## and they contribute to the ##\hbar## and ##\hbar^2## terms. Anyway, the conclusion is that I still see no way to get exponential corrections of the form ##-O(\hbar^{1/2})## :shy:
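For what it's worth, keeping both the ##(q-a)^4## term above and the squared cubic ##(q-a)^6## piece, the standard next-order coefficient works out to ##C=-\frac{f^{(4)}(a)}{8[f''(a)]^{2}}+\frac{5[f'''(a)]^{2}}{24[f''(a)]^{3}}##. Here is a quick numerical check of this ##O(\hbar)## behaviour (my own script; the test function ##f(q)=q^{2}/2+g\,q^{4}## is my own choice, for which ##f'''(a)=0## and ##C=-3g##):

[code=python]
# Check that the relative correction to the leading steepest-descent result
# scales like C*hbar, with C = -3g for the test function f(q) = q^2/2 + g*q^4
# (minimum at a = 0, with f(a) = 0, f''(a) = 1, f''''(a) = 24g).
import numpy as np
from scipy.integrate import quad

g = 0.1
f = lambda q: q**2 / 2.0 + g * q**4

for hbar in (0.2, 0.1, 0.05, 0.025):
    exact, _ = quad(lambda q: np.exp(-f(q) / hbar), -np.inf, np.inf)
    I0 = np.sqrt(2.0 * np.pi * hbar)                # leading term, since f(a)=0, f''(a)=1
    print(hbar, exact / I0 - 1.0, -3.0 * g * hbar)  # measured vs predicted C*hbar
[/code]

The measured relative correction approaches the predicted ##-3g\hbar## as ##\hbar## decreases, with no ##\sqrt{\hbar}## piece in sight.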
 
  • #10
weirdoguy said:
D H said:
Look at ##\int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\hbar} f''(q_0)(q-q_0)^2\right)\, dq##. Sans some scale factors, that's just the Gaussian integral.
Well, I know, and I know how to integrate it. But still, I can't see why the corrections take the form of an exponential with ##-O(\sqrt{\hbar})##.
But now I tried to see what happens if we don't neglect the higher-order terms.
Don't look at those higher order terms. Look instead at the last term that was included.

In quoting just the last part of my previous post, you omitted the key point of that post. Once again,

For a reasonably well-behaved function f(q), the error in that Taylor expansion is going to be dominated by the last included term, ##\frac 1 2 f''(q_0)(q-q_0)^2##.

You don't need to look at those higher-order derivatives in the expansion of f(q) if that function is "well-behaved". By well-behaved I mean that there exists some finite ##C>0## such that, for all ##q## in the interval of interest,
[tex]\left|\sum_{r=n+1}^{\infty} \frac 1 {r!} f^{(r)}(a)(q-a)^r\right|<C\left|\frac 1 {n!} f^{(n)}(a)(q-a)^n\right|[/tex]

In other words, for a well-behaved function, the last included term in the Taylor expansion (in this case, the second-derivative term) bounds the error. If this is the case, that ##e^{-O(\hbar^{1/2})}## multiplicative factor falls right out.
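One standard way to make the ##\hbar^{1/2}## explicit (this rescaling is not spelled out on the attached page, but it is the usual trick) is to substitute ##q = a + \hbar^{1/2}x## in the expansion of ##f##:

[tex]\frac{1}{\hbar}f(q)=\frac{1}{\hbar}f(a)+\frac{1}{2}f''(a)x^{2}+\hbar^{1/2}\,\frac{f'''(a)}{3!}x^{3}+\hbar\,\frac{f^{(4)}(a)}{4!}x^{4}+\ldots[/tex]

Every term beyond the quadratic one carries at least one power of ##\hbar^{1/2}##, which is exactly what the ##e^{-O(\hbar^{1/2})}## in Zee's formula records. After the Gaussian integration the odd powers of ##x## average to zero, so the relative corrections to the integral itself start at ##O(\hbar)##, consistent with the ##(1+C\hbar+\ldots)## found in post #9.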
 

FAQ: Why Does Zee's Steepest-Descent Approximation Seem Incorrect?

1. What is the steepest-descent approximation method?

In this context it is a technique (also known as Laplace's method or the saddle-point method) for approximating integrals of the form ##I=\int dq\, e^{-f(q)/\hbar}## when ##\hbar## is small. The integrand is then sharply peaked around the minimum ##a## of ##f##, and only the immediate neighbourhood of that point contributes appreciably.

2. How does the steepest-descent approximation method work?

Expand ##f(q)## in a Taylor series about its minimum ##a##, keep the quadratic term in the exponent, and evaluate the resulting Gaussian integral. This gives the leading result ##I\approx e^{-f(a)/\hbar}\left(2\pi\hbar/f''(a)\right)^{1/2}##; the neglected cubic and higher terms change the exponent only by ##O(\hbar^{1/2})##. A minimal numerical sketch is given after this FAQ.

3. When is the steepest-descent approximation method used?

In quantum field theory it is applied to the path integral in the small-##\hbar## (semiclassical) limit; the resulting expansion in powers of ##\hbar## is the loop expansion discussed later in Zee's book and in vanhees71's lecture notes. The same method appears in statistical mechanics and in asymptotic analysis whenever an integral is dominated by a sharp peak.

4. Why does the correction factor ##e^{-O(\hbar^{1/2})}## not blow up?

Because as ##\hbar\to 0## the exponent ##-O(\hbar^{1/2})## goes to zero, so the factor tends to one. A divergence would require something like ##e^{O(\hbar^{-1/2})}##, which is a different quantity; after the Gaussian integration the surviving relative corrections are in fact of order ##\hbar##.

5. Are there any limitations to the steepest-descent approximation method?

It is an asymptotic expansion, so it is accurate only for small ##\hbar## and cannot be pushed to arbitrary precision by adding more terms. It also requires an isolated minimum with ##f''(a)>0##; degenerate or multiple minima, or minima at the boundary of the integration region, have to be treated separately.
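As mentioned in item 2, here is a minimal numerical sketch of the leading steepest-descent estimate. The helper name laplace_approx, the bounds, and the test function are illustrative choices only, and the second derivative is taken by a simple central difference:

[code=python]
# Minimal sketch of the leading steepest-descent (Laplace) estimate of
#   I = int dq exp(-f(q)/hbar),
# assuming f has a single minimum inside the given bounds and f''(a) > 0.
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_approx(f, hbar, bounds=(-10.0, 10.0), eps=1e-4):
    a = minimize_scalar(f, bounds=bounds, method="bounded").x    # location of the minimum
    f2 = (f(a + eps) - 2.0 * f(a) + f(a - eps)) / eps**2         # f''(a) by central difference
    return np.exp(-f(a) / hbar) * np.sqrt(2.0 * np.pi * hbar / f2)

# quick usage example with an illustrative test function
f = lambda q: (q - 1.0)**2 / 2.0 + 0.1 * (q - 1.0)**4
print(laplace_approx(f, hbar=0.05))
[/code]

Comparing its output with a direct numerical integration of ##e^{-f(q)/\hbar}## reproduces the ##O(\hbar)## relative error discussed in the thread.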
