JustinLevy
"paradox" regarding energy of dipole orientation
I've run into a "paradox" while deriving the energy of a dipole's orientation in an external field. For example, the energy of a magnetic dipole m in an external field B is known to be:
[tex]U= - \mathbf{m} \cdot \mathbf{B}[/tex]
In Griffiths Intro to Electrodynamics, this is argued by looking at the torque on a small loop of current in an external magnetic field.
The "paradox" arises instead when we try to derive it by looking at the energy in the magnetic field.
[tex] U_{em} = \frac{1}{2} \int (\epsilon_0 E^2 + \frac{1}{\mu_0} B^2) d^3r [/tex]
If we consider an external field B and the field due to a magnetic dipole B_dip, we have:
[tex] U = \frac{1}{2\mu_0} \int (\mathbf{B} + \mathbf{B}_{dip})^2 d^3r [/tex]
[tex] U = \frac{1}{2\mu_0} \int (B^2 + B^2_{dip} + 2 \mathbf{B} \cdot \mathbf{B}_{dip})d^3r[/tex]
The [tex]B^2[/tex] and [tex]B^2_{dip}[/tex] terms are independent of the orientation, so they are just constants we can ignore. That leaves us with:
[tex] U = \frac{1}{\mu_0} \int \mathbf{B} \cdot \mathbf{B}_{dip} d^3r [/tex]
Now we have:
[tex]\mathbf{B}_{dip} = \frac{\mu_0}{4 \pi r^3} [ 3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m} ] + \frac{2 \mu_0}{3} \mathbf{m} \delta^3(\mathbf{r})[/tex]
If you work through the math, only the delta function term will contribute, which gives us:
[tex]U=\frac{2}{3} \mathbf{m} \cdot \mathbf{B}[/tex]
This has not only the wrong magnitude but also the wrong sign, and thus the "paradox". Obviously there is no real paradox and I am just calculating something wrong, but after talking to several students and professors I have yet to figure out what is wrong here.
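To support the claim that the non-delta-function term integrates to zero for every r > 0, here is a quick symbolic check. It takes both m and B along z purely for simplicity (for general orientations the [tex]d\phi[/tex] integral kills the transverse components first), so the non-delta part of B · B_dip is proportional to (3cos²θ − 1)/r³:

```python
import sympy as sp

theta = sp.symbols('theta')

# With m = m*zhat and B = B*zhat, the non-delta part of B . B_dip is
# proportional to (3*cos(theta)**2 - 1)/r**3.  Its angular integral,
# with the sin(theta) volume weight, vanishes at every fixed r > 0:
angular = sp.integrate((3*sp.cos(theta)**2 - 1)*sp.sin(theta), (theta, 0, sp.pi))
print(angular)  # 0
```

So away from the origin only the delta-function term can survive the volume integral.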
One complaint has been that while the [tex]d\theta[/tex] and [tex]d\phi[/tex] integrals show that the non-delta-function term doesn't contribute, it is unclear whether this argument holds at the point r = 0. I believe it still cancels, but to sidestep that issue, let's look at a "real" dipole instead of an ideal one. A spinning spherical shell of uniform charge with magnetic dipole moment m has, outside the sphere, the field:
for r>=R [tex]\mathbf{B}_{dip} = \frac{\mu_0}{4 \pi r^3} [ 3(\mathbf{m} \cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m} ] [/tex]
So this is a good stand in for the idealized dipole, as it has a pure dipole field outside of the sphere. Inside the sphere the magnetic field is:
for r<R [tex]\mathbf{B}_{dip} = \frac{2 \mu_0}{3} \frac{\mathbf{m}}{\frac{4}{3} \pi R^3}[/tex]
which, as you can see, reduces to the "ideal" delta-function term in the limit R -> 0.
Here there is no funny business at r=0. The math again gives:
[tex]U=\frac{2}{3} \mathbf{m} \cdot \mathbf{B}[/tex]
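As a sanity check on that number, here is a small numerical sketch of the cross-term energy for the shell model. The units (mu0 = m = B = R = 1, with m and B both along z), the grid sizes, and the radial cutoff are all arbitrary choices for the sketch:

```python
import numpy as np

mu0, m, B, R = 1.0, 1.0, 1.0, 1.0  # convenient units; m and B both along z

# Inside the shell (r < R) the field is uniform, so the cross term is exact:
#   B_dip = (2*mu0/3) * m / ((4/3)*pi*R**3)  (along z)
U_in = (1 / mu0) * B * (2 * mu0 / 3) * m / ((4 / 3) * np.pi * R**3) \
       * (4 / 3) * np.pi * R**3

# Outside (r > R) the z-component of the pure dipole field is
#   B_z = mu0*m/(4*pi*r**3) * (3*cos(theta)**2 - 1),
# and d^3r = 2*pi*r**2*sin(theta) dr dtheta.  Midpoint rule out to a cutoff:
nr, nth = 1500, 1200
dr, dth = (50 * R - R) / nr, np.pi / nth
r = R + (np.arange(nr) + 0.5) * dr
th = (np.arange(nth) + 0.5) * dth
rr, tt = np.meshgrid(r, th, indexing="ij")
Bz = mu0 * m / (4 * np.pi * rr**3) * (3 * np.cos(tt)**2 - 1)
U_out = np.sum((1 / mu0) * B * Bz * 2 * np.pi * rr**2 * np.sin(tt)) * dr * dth

U = U_in + U_out
print(U_in, U_out, U)  # U_in = 2/3, U_out ~ 0, so U = +2/3 * m*B
```

The interior (uniform-field) region alone produces the full +(2/3) m·B, and the exterior pure-dipole field contributes nothing, matching the hand calculation.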
What gives!?
Who can help solve this "paradox"?