Feynman's Calculus Method: When Can We Use It?

In summary: the question is how the conditions of the if-part of the measure theory theorem translate into statements of real-variable calculus. After all, the then-part of Leibniz's theorem doesn't work for all real-valued functions of two variables.
  • #1
Nebuchadnezza
I am reading about Feynman integration, more commonly known as differentiating under the integral sign. My question is: when can we use this method?

http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/integratnfeynman.pdf

Here is a link that explains the method quite thoroughly. I have no problem actually performing the maths; I just don't know when I can apply this rule. I read the definition in the PDF, but since I have only taken Calc 1 and some Calc 2, it stumped me quite a bit. I am good at doing integration, just not at reading Greek... If anyone could explain this to me in layman's terms, it would be much appreciated.

[Attached image: the statement of the theorem on differentiation under the integral sign (the measure-theoretic version, Theorem 2.2 of the linked PDF) that the discussion below refers to]
 
  • #2
As to when you apply this rule: when you're differentiating the integral with respect to a variable other than the one the integral is taken over. Using your example, you use Feynman's trick because you are differentiating with respect to x, but the integral is taken with respect to omega (henceforth referred to as w). Alternatively, if you were differentiating with respect to w, since the integration is with respect to w, there would be nothing special to do, and you would get d/dw f(x,w) as your answer.
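
(A small worked instance of the rule, added for illustration and not part of the original post: take f(x, w) = e^{-xw} with w running over [0, 1]. Differentiating with respect to the outside variable x and pushing the derivative inside gives

[tex]\frac{d}{dx}\int_0^1 e^{-x\omega}\,d\omega = \int_0^1 \frac{\partial}{\partial x}\,e^{-x\omega}\,d\omega = -\int_0^1 \omega\, e^{-x\omega}\,d\omega,[/tex]

and for x ≠ 0 both sides can be computed directly and equal [itex]\frac{x e^{-x} + e^{-x} - 1}{x^{2}}[/itex], so the interchange checks out here.)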
 
  • #3
This is known as the Leibniz integral rule
http://en.wikipedia.org/wiki/Leibniz_integral_rule

It is used, for example, in fluid mechanics, in a formula called the Reynolds transport theorem
http://en.wikipedia.org/wiki/Reynolds_transport_theorem

If you look at the wiki on that, you can intuitively see an application: it relates the total derivative of a property to the rate of change of the property inside a volume element plus the flux of the property through the element's surface (which can be changing or moving or whatever).
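
(For reference, following the Wikipedia article linked above rather than anything in the original post: for a scalar field [itex]\phi[/itex] in a moving region [itex]V(t)[/itex] whose boundary moves with velocity [itex]\mathbf{v}_b[/itex] and has outward normal [itex]\mathbf{n}[/itex], the Reynolds transport theorem reads

[tex]\frac{d}{dt}\int_{V(t)} \phi\, dV = \int_{V(t)} \frac{\partial \phi}{\partial t}\, dV + \int_{\partial V(t)} \phi\,(\mathbf{v}_b \cdot \mathbf{n})\, dA,[/tex]

which is exactly the "rate of change inside the volume plus flux through the surface" statement, and is a higher-dimensional relative of the Leibniz rule.)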
 
  • #4
TylerH said:
Alternatively, if you were differentiating with respect to w, since the integration is with respect to w, there would be nothing special to do, and you would get d/dw f(x,w) as your answer.

Saying that doesn't make sense. Omega is a set. This isn't a theorem about a function of two variables.
 
  • #5
^This is a theorem about a function of two variables. If we were differentiating with respect to w we would get zero, as the w dependence is integrated out. The lower-case omega is a variable; the upper-case Omega is the set of values the lower-case omega takes. This is Leibniz's rule; the only thing it has to do with Feynman is that he thought it was neat in high school (like most everybody) and wrote a funny story about it in a book. The statement used here is very careful to give sufficient conditions for the interchange of limits. These conditions are often met, though at times the result holds when they are not and must be justified through other means.
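
(To see why some sufficient conditions are needed at all, here is a standard counterexample, added for illustration and not from the original post. Let [itex]f(x,t) = \frac{x^3}{(x^2+t^2)^2}[/itex] for [itex](x,t) \neq (0,0)[/itex] and [itex]f(0,0) = 0[/itex], and set [itex]F(x) = \int_0^1 f(x,t)\,dt[/itex]. The substitution t = xu shows that [itex]F(x) \to \pi/4[/itex] as [itex]x \to 0^{+}[/itex] while [itex]F(0) = 0[/itex], so F is not even continuous at 0 and F'(0) does not exist; yet

[tex]\int_0^1 \frac{\partial f}{\partial x}(0,t)\,dt = \int_0^1 0\,dt = 0.[/tex]

The interchange fails here because no single integrable function dominates [itex]\partial f/\partial x[/itex] for all x near 0, i.e. the domination hypothesis of the theorem is violated.)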
 
  • #6
lurflurf said:
^This is a theorem about a function of two variables.

Ok, but not necessarily two real variables. We don't know that such a thing as the derivative of f with respect to omega exists, do we? The theorem quoted has Leibniz's rule as a special case, but it is more general than Leibniz's rule.
 
  • #7
So let's say we have the integral

[tex]\int_0^1{\frac{\ln(x)}{x^2+1}}\,dx[/tex]

We can solve this by introducing the variable b, like this

[tex]\int_0^1{\frac{\ln(x)}{x^b+1}}\,dx[/tex]

Now we can easily solve this problem by differentiating under the integral sign.

But why is this substitution allowed, but not the one below?

[tex]\int_0^1{\frac{\ln(x)}{x^2+1}}e^{-bx}\,dx[/tex]

So, simply put, according to rule number three, why is this substitution allowed?
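
(Not part of the original post: if you want to convince yourself numerically that pushing d/db inside the integral is harmless for the x^b family, a quick check along the following lines works. It compares a finite-difference derivative of the parametrised integral with the integral of the b-derivative of the integrand; the function names are just placeholders of my own.)

[code]
# Numerical sanity check (illustrative sketch): compare dI/db from a central
# difference with the value obtained by differentiating under the integral sign.
import numpy as np
from scipy.integrate import quad

def I(b):
    # I(b) = integral from 0 to 1 of ln(x) / (x^b + 1) dx
    # (the ln(x) singularity at x = 0 is integrable)
    return quad(lambda x: np.log(x) / (x**b + 1.0), 0.0, 1.0)[0]

def dI_inside(b):
    # integral of d/db [ln(x)/(x^b + 1)] = -x^b (ln x)^2 / (x^b + 1)^2
    return quad(lambda x: -x**b * np.log(x)**2 / (x**b + 1.0)**2, 0.0, 1.0)[0]

b, h = 2.0, 1e-5
print((I(b + h) - I(b - h)) / (2.0 * h))  # finite-difference estimate of dI/db
print(dI_inside(b))                       # derivative taken inside; should agree
[/code]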
 
  • #8
Stephen Tashi said:
Saying that doesn't make sense. Omega is a set. This isn't a theorem about a function of two variables.
I think you are confusing the two omegas: "For almost all [itex]\omega\in \Omega[/itex]".
[itex]\Omega[/itex] (capital omega) is a set, [itex]\omega[/itex] (small omega), which is what TylerH meant, is a member of that set.
 
  • #9
HallsofIvy said:
I think you are confusing the two omegas: "For almost all [itex]\omega\in \Omega[/itex]".
[itex]\Omega[/itex] (capital omega) is a set, [itex]\omega[/itex] (small omega), which is what TylerH meant, is a member of that set.

I agree that little omega is a member of capital Omega and that capital Omega is a set. But capital Omega is not necessarily a set of real numbers. It can be a set of anything. So, in the context of the theorem, it doesn't make sense to assume that omega is a real variable and to start speaking as if f(x, omega) were a function of two real variables, or to talk about a derivative with respect to little omega.

The OP did request a layman's explanation, so I see why the subject has turned to Leibniz's theorem, which is a special case. If we only talk about Leibniz's theorem, the question is how the conditions of the if-part of the measure theory theorem are translated to statements in real variable calculus. After all, the then-part of Leibniz's theorem doesn't work for all real-valued functions of two variables.
 
  • #10
Mmm, I still do not completely understand when we can differentiate under the integral sign. Perhaps it is the English, or the fact that I have yet to attend college...

Is it perhaps correct that the new variable you add, or the variable you differentiate with respect to, must dominate the function in the given range?
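
(A concrete illustration of the domination idea, added here and not from the thread. For the integral [itex]\int_0^1 \frac{\ln x}{x^b+1}\,dx[/itex], for every [itex]b \ge 0[/itex] and [itex]0 < x < 1[/itex] we have

[tex]\left|\frac{\partial}{\partial b}\,\frac{\ln x}{x^b+1}\right| = \frac{x^b (\ln x)^2}{(x^b+1)^2} \le (\ln x)^2, \qquad \int_0^1 (\ln x)^2\,dx = 2 < \infty,[/tex]

so it is the partial derivative of the integrand with respect to the parameter that is dominated, by a single integrable function that does not depend on b. This is what condition (3) of the theorem in the PDF, quoted in full later in the thread, requires.)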
 
  • #11
Nebuchadnezza said:
Mmm, I still do not completely understand when we can differentiate under the integral sign. Perhaps it is the English, or the fact that I have yet to attend college...

Is it perhaps correct that the new variable you add, or the variable you differentiate with respect to, must dominate the function in the given range?
Perhaps you're looking at the wrong rule. What level of understanding are you at: Calc III (multivariable calculus) or above? If Calc III, you need to look at Leibniz's rule. If above, then this is the correct rule.

http://en.wikipedia.org/wiki/Leibniz_integral_rule
 
  • #12
I think Nebuchadnezza is asking a very specific question. I'll rephrase one of his earlier posts. See if I have stated his question correctly.

According to Theorem 2.1 of the article "Integration: Feynman's Way" (see below), we can differentiate the following integral with respect to b by doing the differentiation inside the integral sign:

[tex]\int_0^1{\frac{\ln(x)}{x^b+1}}dx [/tex]

But, according to [who?], we cannot use Theorem 2.1 to justify moving the differentiation with respect to b inside the integral sign in the problem:

[tex]\int_0^1{\frac{\ln(x)}{x^2+1}}e^{-bx}dx [/tex]

Theorem 2.2 of the article may be used to move the differentiation inside the integral sign. However, Theorem 2.2 is stated in very abstract terms. Can someone explain, in terms of the concepts of ordinary calculus, how Theorem 2.2 justifies moving the differentiation inside the integral sign?

-------References

PDF of "Integration: The Feynman Way"
http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/integratnfeynman.pdf

Theorem 2.1 (Elementary Calculus Version). Let f : [a, b] × Y → R be a function,
with [a, b] being a closed interval, and Y being a compact subset of R^n. Suppose
that both f(x, y) and ∂f(x, y)/∂x are continuous in the variables x and y jointly.
Then [itex]\int_Y f(x, y)\,dy[/itex] exists as a continuously differentiable function of x on [a, b], with derivative
[tex] \frac{d}{dx} \int_Y f(x,y)\, dy = \int_Y \frac{\partial}{\partial x} f(x,y)\, dy [/tex]

Theorem 2.2 (Measure Theory Version). Let X be an open subset of R, and Ω be
a measure space. Suppose f : X × Ω → R satisfies the following conditions:
(1) f (x, ω) is a Lebesgue-integrable function of ω for each x ∈ X.
(2) For almost all ω ∈ Ω, the derivative ∂f (x, ω)/∂x exists for all x ∈ X.
(3) There is an integrable function Θ : Ω → R such that |∂f (x, ω)/∂x| ≤ Θ(ω)
for all x ∈ X.
Then for all x ∈ X,
[tex] \frac{d}{dx} \int_{\Omega} f(x,\omega) d\omega = \int_{\Omega} \frac{\partial}{\partial x} f(x,\omega) d\omega [/tex]

---- my own questions

Can Theorem 2.1 really be applied to the first problem? ln(x) is undefined at x = 0. Is it Theorem 2.2 that justifies moving the differentiation in the first problem?
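
(A worked check, added for illustration and not part of the original post. Take [itex]f(b,x) = \ln(x)/(x^b+1)[/itex] with [itex]X = (0,3)[/itex] (any open interval of b-values containing 2 works) and [itex]\Omega = (0,1)[/itex] with Lebesgue measure; here b plays the role of the theorem's x, and x plays the role of ω. The three conditions of Theorem 2.2 can then be verified directly:

[tex](1)\;\; |f(b,x)| \le |\ln x|, \qquad \int_0^1 |\ln x|\,dx = 1 < \infty,[/tex]
[tex](2)\;\; \frac{\partial f}{\partial b} = -\frac{x^b(\ln x)^2}{(x^b+1)^2} \text{ exists for every } b \in X \text{ and every } x \in (0,1),[/tex]
[tex](3)\;\; \left|\frac{\partial f}{\partial b}\right| \le (\ln x)^2, \qquad \int_0^1 (\ln x)^2\,dx = 2 < \infty,[/tex]

so Theorem 2.2 does justify moving the derivative inside. Theorem 2.1, by contrast, does not directly apply, since its hypothesis of joint continuity on the closed rectangle fails at x = 0, where ln(x) blows up.)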
 

FAQ: Feynman's Calculus Method: When Can We Use It?

What is Feynman's Calculus method?

Feynman's Calculus method, better known as differentiating under the integral sign, is a technique for evaluating definite integrals: introduce a parameter into the integrand, differentiate the integral with respect to that parameter (moving the derivative inside the integral), evaluate the resulting simpler integral, and recover the original integral from it. The rule behind it is the Leibniz integral rule; Feynman did not invent it, but he popularized it with a story about learning it in high school and using it constantly afterwards.
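
In symbols, under suitable conditions on f (spelled out in Theorems 2.1 and 2.2 quoted in the thread above), the rule is

[tex]\frac{d}{dx}\int_{\Omega} f(x,\omega)\, d\omega = \int_{\Omega} \frac{\partial}{\partial x} f(x,\omega)\, d\omega.[/tex]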

How is Feynman's Calculus method different from traditional calculus?

Feynman's Calculus method is not a separate kind of calculus; it is an application of ordinary calculus. The idea is to view a definite integral as a function of an extra parameter and to interchange differentiation and integration. The subtle part, which this thread is about, is knowing when that interchange is justified: the Leibniz rule (Theorem 2.1) and its measure-theoretic generalization (Theorem 2.2) give sufficient conditions.

Can Feynman's Calculus method be used in all areas of science?

It can be used wherever integrals depending on a parameter appear, which covers much of physics, engineering, probability, and applied mathematics. The Reynolds transport theorem of fluid mechanics, mentioned earlier in the thread, is one example.

Is Feynman's Calculus method suitable for beginners?

The mechanics require only a first course or two of calculus, as the linked MIT notes show. Knowing when the manipulation is legitimate takes a bit more: either the continuity hypotheses of the Leibniz rule or the domination condition of the measure-theoretic theorem discussed above.

Can Feynman's Calculus method be used for real-world applications?

Yes. Differentiating under the integral sign shows up routinely in physics and engineering calculations, and many classical definite integrals are most easily evaluated this way; the Reynolds transport theorem is one standard real-world instance.
