How to Calculate the Integral of the Dirac Delta Function?

In summary, the conversation discusses different ways of defining the Dirac delta function and its properties. It also covers the Fisher product of a generalised function with the Dirac delta and its relationship to Schwartz distributions. The conversation ends with a proof sketch of Fisher's result for the r=1 case.
  • #1
LagrangeEuler
How to calculate
##\int^{\infty}_{-\infty}\frac{\delta(x-x')}{x-x'}dx'##
What is the value of this integral? In some YouTube video I found that it equals zero: an odd function integrated over symmetric limits.
 
  • #2
One possible way to make the expression precise is this: first let [itex]C(\mathbb{R})[/itex] denote the set of all continuous functions [itex]f:\mathbb{R}\to\mathbb{R}[/itex]. Then define a mapping [itex]\Phi_x:C(\mathbb{R})\to\mathbb{R}[/itex] by setting [itex]\Phi_x(f)=f(x)[/itex], and denote

[tex]
\int\limits_{-\infty}^{\infty} \delta(x-x')f(x')dx' := \Phi_x(f)
[/tex]

If you define [itex]f[/itex] with the formula [itex]f(x') = \frac{1}{x-x'}[/itex], then [itex]f\notin C(\mathbb{R})[/itex], and it makes no sense to ask what [itex]\Phi_x(f)[/itex] is.

It's like defining some function [itex]\phi:\{0,1,2,3\}\to\mathbb{R}[/itex] and then asking what [itex]\phi(4)[/itex] is.
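The domain issue can be made concrete in a few lines of Python (my own sketch; the names `Phi` and so on are mine, not standard):

```python
import math

def Phi(x, f):
    # evaluation functional Phi_x(f) = f(x); the caller must supply an f
    # that is defined (and continuous) at x
    return f(x)

# cos is in C(R), so Phi_0(cos) makes sense:
print(Phi(0.0, math.cos))  # 1.0

# f(x') = 1/(0 - x') is not even defined at x' = 0, so Phi_0(f) is a
# domain error, not a number:
try:
    Phi(0.0, lambda t: 1.0 / (0.0 - t))
except ZeroDivisionError:
    print("Phi_0(f) is undefined: f is not in C(R)")
```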

Do you know yourself what definition you are using for the Dirac delta?
 
  • #3
The Fisher product of the generalised function PV ##x^{-1}## and the Dirac ##\delta## is equal to ##-\frac{1}{2}\delta'##. This is a Schwartz distribution of compact support, so it can be applied to the test function ##\phi(x)=1##, which gives the answer 0.
 
  • #4
Thanks for the answer, but I really don't understand it. Could you explain it in detail, or point me to some text I can read?
 
  • #5
If I understand you correctly,
## \delta \cdot x^{-1}=-\frac{1}{2}\delta' ##
where ##\cdot## is some kind of product. So in my case
## \int^{\infty}_{-\infty}\frac{\delta(x)}{x}dx=-\frac{1}{2}\int^{\infty}_{-\infty}\delta'(x)dx##
and we know that the right-hand side is zero.
 
  • #6
It seems obvious that LagrangeEuler himself does not know what definition he is using for the Dirac delta, so the pedagogical answer should be to point out this fact and request clarification on the definition.

Nevertheless, I too would be interested to learn what the Fisher product is, so I wouldn't mind if pwsnafu showed the definition here.
 
  • #7
I use the Dirac delta in the form
## \int^{\infty}_{-\infty}\delta(x)f(x)dx=f(0) ##
However, I cannot solve the previous integral in this way, so I am confused. Sometimes I use the fact that the Dirac delta is an even function, so ##\delta(-x)=\delta(x)##. And of course sometimes I use the integral representation
## \delta(x)=\frac{1}{2\pi}\int^{\infty}_{-\infty}e^{ikx}dk##
or the differential representation
## \theta'(x)=\delta(x)##
where ##\theta(x)## is the Heaviside step function. I know several definitions, but I have stated precisely what my problem is!
 
  • #8
The text I use is Distributions, Ultradistributions and Other Generalised Functions by R.F. Hoskins and J. Sousa Pinto, because it covers a very large number of topics in a very short space.

I'm going to give a proof sketch of Fisher's result
[tex]x^{-r} \cdot \delta^{(r-1)} = (-1)^{r} \frac{(r-1)!}{2(2r-1)!}\delta^{(2r-1)}(x).[/tex]
The result is published in Proc. Camb. Phil. Soc., 72, pp. 201-204. I'm going to assume you know the basics, namely that the space of test functions ##\mathcal{D}## is a dense subset of ##\mathcal{D}'##.

The (classical) Fisher product is an example of what we call a sequential product. The idea is simple: for any two Schwartz distributions ##\mu## and ##\nu##, we find sequences of smooth functions ##\mu_n \to \mu## and ##\nu_n \to \nu## and define ##\mu \cdot \nu = \lim_{n\to\infty} \mu_n \nu_n##. This is ultimately a losing battle: the more constraints you place on your sequences, the more functions you can multiply, but the properties of your product get steadily worse (you lose the distributive law, for example).

We start by choosing a smooth function ##\rho(x)## satisfying
  1. ##\rho## is non-negative,
  2. the area under the curve over the reals is 1,
  3. ##\rho(x) = 0## for all ##|x| \geq 1##,
  4. ##\rho(-x) = \rho(x)## for all x,
  5. ##\rho^{(r)}(x)## has only ##r## changes of sign for ##r=1,2,3,\ldots##.
The sequence ##\rho_n(x) = n\rho(nx)## is called a symmetric model sequence and we set ##\mu_n = \mu * \rho_n## (where ##*## is the convolution of distributions). Such a ##\rho## exists, namely the bump function
[tex]\rho(x) = A \exp\left(\frac{1}{x^2-1}\right)[/tex] for ##|x| < 1## and zero elsewhere. Here ##A## is just a normalisation constant. It should be clear that ##\rho_n \to \delta##.
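As a quick sanity check (my own numerics, not part of Fisher's proof), one can compute the normalisation constant ##A## and watch ##\int\rho_n f \to f(0)## on a test function:

```python
import math

def bump(x):
    # exp(1/(x^2 - 1)) on (-1, 1), zero elsewhere; smooth across the join
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

def integrate(f, a, b, steps=100_000):
    # midpoint rule; plenty accurate for this illustration
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

A = 1.0 / integrate(bump, -1.0, 1.0)       # normalisation constant
rho = lambda x: A * bump(x)

print(integrate(rho, -1.0, 1.0))           # 1 by construction

# rho_n(x) = n*rho(n*x) acts more and more like delta on a test function:
for n in (1, 4, 16):
    val = integrate(lambda x, n=n: n * rho(n * x) * math.cos(x), -1.0, 1.0)
    print(n, val)                          # values approach cos(0) = 1
```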

The singular distribution ##x^{-r}## is defined as
[tex]x^{-r} = \frac{(-1)^{r-1}}{(r-1)!}D^r \log|x|[/tex]
and our sequence
[tex](x^{-r})_n = \frac{(-1)^{r-1}}{(r-1)!} \int_{-1/n}^{1/n} \rho^{(r)}_{n}(t) \log|x-t|\, dt.[/tex]
Now we define ##\mathcal{I}f(x) := \int_{-\infty}^x f(t) dt## then
[tex]F_n(x) := \mathcal{I}^{2r-1}[(x^{-r})_n\rho^{(r-1)}_n(x)] = \frac{1}{(2r-2)!}\int_{-1/n}^{x}(t^{-r})_n\, \rho^{(r-1)}_n(t)\,(x-t)^{2r-2}\,dt.[/tex]
It can be shown that
[tex]\int_{-1/n}^{1/n}(x^{-r})_n \rho_n^{(r-1)}(x)x^{m}dx = \frac{(-1)^{r+1}}{2}(r-1)![/tex]
when ##m=2r-1## and zero for ##m=0,1,\ldots, 2r-2##.

Putting the results together you get
[tex]\int_{-1/n}^{1/n}(t^{-r})_n \rho_n^{(r-1)}(t) (1/n-t)^{2r-2}dt =0[/tex]
and, using ##\rho_n^{(r-1)}(x) = 0## for ##|x| \geq 1/n##, you get ##\mathcal{I}^{2r-1}[(x^{-r})_n\rho_n^{(r-1)}(x)] = 0## for ##|x| \geq 1/n## as well. This basically means that the support is converging to ##\{0\}##.

Moving on. ##\rho^{(r)}## has only ##r## changes of sign; therefore ##(x^{-r})_n## has only ##r## changes of sign, and therefore the ##(2r-1)##th primitive is either always non-negative or always non-positive. And finally,
##\int_{-1/n}^{1/n}\mathcal{I}^{2r-1}[(x^{-r})_n\rho_n^{(r-1)}(x)]dx = (-1)^r\frac{(r-1)!}{2(2r-1)!}.##
Hence ##F_n## converges distributionally to ##(-1)^r\frac{(r-1)!}{2(2r-1)!} \delta(x)##. Differentiate ##(2r-1)## times to get the result. Setting ##r=1##, we get ##x^{-1}\cdot\delta(x) = -\frac{1}{2}\delta'(x)## as required.
 
  • #9
It's worth pointing out there is a shorter proof of the ##r=1## case due to Mikusinski. Let
[tex]u_n (x) = \rho_n(x)\, (x^{-1}*\rho_n)(x).[/tex]
We define
[tex]I_n = \int_{-\infty}^\infty u_n(x) \, dx = \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{\rho_n(x)\rho_n(t)}{x-t}dxdt,[/tex]
[tex]K_n = \int_{-\infty}^\infty x\,u_n(x) \, dx = \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{x \rho_n(x)\rho_n(t)}{x-t}dxdt,[/tex]
these are understood as principal value integrals. If we swap ##x## and ##t##, then ##I_n## changes sign; hence ##I_n = 0##.

To find ##K_n##, write ##x = (x-t) + t## in the numerator, so
[tex]K_n = 1 + \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{t \rho_n(x)\rho_n(t)}{x-t}dxdt,[/tex]
and swap ##x## and ##t## in the remaining integral to obtain the identity ##K_n = 1- K_n##, which means ##K_n = 1/2##.

Lastly define
[tex]F_n(x) = \int_{-\infty}^x (x-t)u_n(t) \, dt = x\int_{-\infty}^x u_n(t)dt - \int_{-\infty}^x t u_n(t) dt,[/tex]
If ##x<0## then ##F_n(x)## converges to zero, while for ##x>0##, ##F_n(x)## converges to ##-1/2##. The work above shows that the ##F_n## are bounded by constants independent of ##n##.

Hence ##F_n## converges to ##-\frac{1}{2}H## where ##H## is the Heaviside step function. But ##u_n = F_{n}''## so ##u_n## converges to ##-\frac{1}{2}\delta'## as required.
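This limit can also be checked numerically (a sketch of mine, using a Gaussian symmetric model sequence rather than the bump function; the argument above only uses symmetry and normalisation, so either works). The pairing ##\langle u_n, \phi\rangle## should approach ##-\frac{1}{2}\langle\delta',\phi\rangle = \frac{1}{2}\phi'(0)##:

```python
import math

def rho_n(x, n):
    # symmetric, normalised Gaussian model sequence: rho_n(x) = n*rho(n*x)
    return n * math.exp(-(n * x) ** 2) / math.sqrt(math.pi)

def hilbert_pv(x, n, hs=4e-3, smax=2.0):
    # (x^{-1} * rho_n)(x) = PV ∫ rho_n(t)/(x-t) dt, rewritten as
    # ∫_0^∞ (rho_n(x-s) - rho_n(x+s))/s ds, which tames the pole at t = x
    total = 0.0
    for i in range(int(smax / hs)):
        s = (i + 0.5) * hs
        total += (rho_n(x - s, n) - rho_n(x + s, n)) / s * hs
    return total

def pairing(n, phi, h=5e-3, span=1.0):
    # <u_n, phi> with u_n(x) = rho_n(x) * (x^{-1} * rho_n)(x)
    k = int(span / h)
    total = 0.0
    for i in range(-k, k):
        x = (i + 0.5) * h
        total += rho_n(x, n) * hilbert_pv(x, n) * phi(x) * h
    return total

# phi = sin has phi'(0) = 1, so <u_n, sin> should approach 1/2:
for n in (2, 4, 8):
    print(n, pairing(n, math.sin))
```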

Aside: Mikusinski used this result to prove
[tex]\delta^2(x) - \frac{1}{\pi^2}\left(\frac{1}{x}\right)^2 = -\frac{1}{\pi^2 x^2}[/tex]
from quantum physics.
Mikusinski's insight was to observe that the individual terms on the left cannot be given meaning individually, but the entire expression on the left-hand side can.
 
  • #10
jostpuur said:
It seems obvious that LagrangeEuler himself does not know what definition he is using for the Dirac delta

LagrangeEuler said:
I use the Dirac delta in the form
## \int^{\infty}_{-\infty}\delta(x)f(x)dx=f(0) ##
However, I cannot solve the previous integral in this way, so I am confused. Sometimes I use the fact that the Dirac delta is an even function, so ##\delta(-x)=\delta(x)##. And of course sometimes I use the integral representation
## \delta(x)=\frac{1}{2\pi}\int^{\infty}_{-\infty}e^{ikx}dk##
or the differential representation
## \theta'(x)=\delta(x)##
where ##\theta(x)## is the Heaviside step function. I know several definitions, but I have stated precisely what my problem is!

It seems that LagrangeEuler still does not know what definition he is using.

pwsnafu said:
We start by choosing a smooth function ##\rho(x)## satisfying
  1. ##\rho## is non-negative,
  2. the area under the curve over the reals is 1,
  3. ##\rho(x) = 0## for all ##|x| \geq 1##,
  4. ##\boldsymbol{\rho(-x) = \rho(x)}## for all x,
  5. ##\rho^{(r)}(x)## has only ##r## changes of sign for ##r=1,2,3,\ldots##.

This is artificial, because there exist sequences of functions which converge to [itex]\delta_0[/itex] (under many possible definitions) but which are not symmetric.
 
  • #11
jostpuur said:
This is artificial, because there exist sequences of functions which converge to [itex]\delta_0[/itex] (under many possible definitions) but which are not symmetric.

Correct, but it is necessary in order to define this product. At one point the proof uses
[tex]f(x,t):=\rho^{(r)}_n(x)\rho^{(r)}_n(t)(x^{m+1}-t^{m+1})\log|x-t|=-f(t,x)[/tex]
which follows from the symmetry of ##\rho##.
Just because non-symmetric ##\rho## exist doesn't mean we care about them. Again: the stronger the constraints placed on ##\rho##, the fewer admissible ##\rho## there are, and the more distributions the product can multiply; the cost is that desirable properties (such as the distributive law) become harder to obtain.

NB: I'm not sure if it is necessary for this specific multiplication. I'll see what I can dig up.

Update: It appears that requirements (4) and (5) are necessary for ##x^{-r}\cdot\delta^{(r-1)}## with ##r=2,3,4,\ldots##, but not for ##r=1##. My copy of Hoskins and Pinto states the following:

Define a sequence of smooth functions (chosen from ##\mathcal{D}##)
  1. ##\rho_n(x) \geq 0## for all x
  2. area under curve is 1
  3. supp##(\rho_n)\to\{0\}## as ##n\to\infty##.
We then call ##\rho_n## a strict delta sequence.
Note that we are not choosing a single ##\rho## and then setting ##\rho_n(x)=n\rho(nx)##.
The product SP1 is defined as ##\lim_{n\to\infty}(\mu*\rho_n)\nu## and SP4 is defined as ##\lim_{n\to\infty}(\mu*\rho_n)(\nu*\rho_n)##.

Now apparently in Theory of Distributions: The Sequential Approach, Antosik et al. prove that ##x^{-1}\cdot\delta## is undefined as an SP1 product, but ##x^{-1}\cdot\delta = -\frac{1}{2}\delta'## exists as an SP4 product. I say "apparently" because I don't own their book, so I can't verify their proof right now. But it's still a moot point: the Fisher product can multiply together anything SP4 can multiply together. Removing symmetry to obtain something weaker is counterproductive.
 
  • #12
Here comes my attempt to succeed in pedagogy:

Problem one: First I defined a function [itex]\phi[/itex] by setting

[tex]
\phi:\{0,1,2,3\}\to\mathbb{R},\quad\quad\left\{\begin{array}{l}
\phi(0) = 5 \\ \phi(1) = -4 \\ \phi(2) = 30 \\ \phi(3) = 14
\end{array}\right.
[/tex]

Then I got stuck trying to prove what [itex]\phi(4)[/itex] is. LagrangeEuler, do you have any idea what [itex]\phi(4)[/itex] is? Can you prove it?
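In code, problem one is nothing but a lookup table (just a toy sketch of mine):

```python
# phi : {0,1,2,3} -> R as a lookup table
phi = {0: 5, 1: -4, 2: 30, 3: 14}

print(phi[2])  # 30

# phi(4) is not "hard to compute"; it simply has no definition:
try:
    phi[4]
except KeyError:
    print("phi(4) is undefined")
```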

Problem two: First I defined a function [itex]\Phi[/itex] by setting

[tex]
\Phi:C(\mathbb{R})\to\mathbb{R},\quad\quad \Phi(f) = f(0)
[/tex]

Here [itex]C(\mathbb{R})[/itex] is the set of all continuous functions [itex]f:\mathbb{R}\to\mathbb{R}[/itex].

Then I defined a function [itex]f[/itex] by setting

[tex]
f:\;]-\infty,0[\;\cup\;]0,\infty[\;\to\mathbb{R},\quad\quad f(x) = \frac{1}{x}
[/tex]

and I got stuck trying to prove what [itex]\Phi(f)[/itex] is. Can anyone here solve the [itex]\Phi(f)[/itex]? Can anyone here prove that [itex]\Phi(f)[/itex] is something?

LagrangeEuler, do you see how the problems one and two are related?
 
  • #13
jostpuur said:
Here comes my attempt to succeed in pedagogy:

Problem one: First I defined a function [itex]\phi[/itex] by setting

[tex]
\phi:\{0,1,2,3\}\to\mathbb{R},\quad\quad\left\{\begin{array}{l}
\phi(0) = 5 \\ \phi(1) = -4 \\ \phi(2) = 30 \\ \phi(3) = 14
\end{array}\right.
[/tex]

Then I got stuck trying to prove what [itex]\phi(4)[/itex] is. LagrangeEuler, do you have any idea what [itex]\phi(4)[/itex] is? Can you prove it?
That's nonsense. If you defined [itex]\phi[/itex] that way, then [itex]\phi(4)[/itex] does not exist; it is literally "undefined". (Or was that your point?)

Problem two: First I defined a function [itex]\Phi[/itex] by setting

[tex]
\Phi:C(\mathbb{R})\to\mathbb{R},\quad\quad \Phi(f) = f(0)
[/tex]

Here [itex]C(\mathbb{R})[/itex] is the set of all continuous functions [itex]f:\mathbb{R}\to\mathbb{R}[/itex].

Then I defined a function [itex]f[/itex] by setting

[tex]
f:\;]-\infty,0[\;\cup\;]0,\infty[\;\to\mathbb{R},\quad\quad f(x) = \frac{1}{x}
[/tex]

and I got stuck trying to prove what [itex]\Phi(f)[/itex] is. Can anyone here solve the [itex]\Phi(f)[/itex]? Can anyone here prove that [itex]\Phi(f)[/itex] is something?

LagrangeEuler, do you see how the problems one and two are related?
 
  • #14
HallsofIvy said:
That's non-sense. If you defined [itex]\phi[/itex] by that, then [itex]\phi(4)[/itex] does not exist- it is literally "undefined". (Or was that your point?)

Yes it was my point :wink: :biggrin:

I'm trying to explain that the [itex]\Phi(f)[/itex] mentioned next is undefined too. That is why the undefined quantity [itex]\Phi(f)[/itex] is being compared to another undefined quantity, [itex]\phi(4)[/itex].

The final step in answering the original question would be to convince LagrangeEuler that the most obvious way to interpret the formal integral expression he wrote is as something like [itex]\Phi(f)[/itex]. Other interpretations would not be reasonable unless clearly explained alongside the formal integral expression.
 
  • #15
jostpuur said:
Other interpretations would not be reasonable unless clearly explained alongside with the formal integral expression.

This problem really illustrates why the integral notation needs to be avoided.
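To see the point numerically (a sketch of mine, with hypothetical helper names): if you replace ##\delta## by a delta sequence ##\delta_n##, the principal value of ##\int \delta_n(x)/x\,dx## depends on which sequence you choose, so the formal integral has no sequence-independent value.

```python
import math

def pv_over_x(rho, h=1e-3, span=20.0):
    # principal-value midpoint rule for ∫ rho(x)/x dx; the symmetric grid
    # straddles (never hits) the singularity at x = 0
    k = int(span / h)
    total = 0.0
    for i in range(-k, k):
        x = (i + 0.5) * h
        total += rho(x) / x * h
    return total

def delta_seq(n, shift=0.0):
    # n*g(n*x - shift): both shifted and unshifted versions converge to delta
    return lambda x: n * math.exp(-(n * x - shift) ** 2) / math.sqrt(math.pi)

# symmetric sequence: the integrand is odd, so the PV vanishes for every n
print(pv_over_x(delta_seq(5)), pv_over_x(delta_seq(10)))   # both ~0

# shifted (asymmetric) sequence: the PV grows linearly in n instead
print(pv_over_x(delta_seq(5, shift=1.0)))
print(pv_over_x(delta_seq(10, shift=1.0)))   # roughly twice the n=5 value
```

Since the answer depends on the sequence, the expression ##\int \delta(x)/x\,dx## is undefined unless you specify extra structure, which is exactly what the Fisher product does.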
 

FAQ: How to Calculate the Integral of the Dirac Delta Function?

What is the Dirac delta function?

The Dirac delta function, also known as the Dirac delta distribution, is a mathematical object used to represent an infinitely sharp spike or impulse at a specific point in space or time. Despite the name, it is not a function in the classical sense but a distribution. It is often used in physics and engineering to model point masses or point charges.

How is the Dirac delta function defined mathematically?

The Dirac delta function is defined as a distribution, or generalized function, that is informally pictured as taking the value infinity at a single point and zero everywhere else. It is typically denoted as δ(x) and has the following informal properties:

  • δ(x) = 0 for all x ≠ 0
  • ∫δ(x)dx = 1
  • δ(x) = ∞ at x = 0

What is the integral of the Dirac delta function?

The integral of the Dirac delta function over the entire real line is equal to 1; this is part of its definition. Informally, the Dirac delta is zero everywhere except at a single point, yet integrating over any interval containing that point yields 1.

How is the Dirac delta function used in physics and engineering?

The Dirac delta function is used in many applications in physics and engineering, such as in Fourier analysis, signal processing, and quantum mechanics. It is also used to model point sources in electromagnetism and point masses in mechanics.

Can the Dirac delta function be graphed?

No, the Dirac delta function cannot be graphed in the traditional sense because it is not a regular function. It has a value of infinity at a single point and zero everywhere else, making it impossible to graph on a traditional x-y plane. However, it can be represented graphically as a spike or impulse at the point where it is nonzero.
