Vanishing of an integral of a divergence over a closed surface

Kostik
TL;DR Summary
E. Poisson in his GR textbook claims the vanishing of an integral of a divergence over a closed surface in "angular" variables. Why?
I want to understand/prove why Eric Poisson drops the 2nd integral in the 2nd equation on the right side of the attached image, from pp. 69-70 of "A Relativist's Toolkit". It's hard to imagine a closed 3D hypersurface embedded in 4D, so I will look at the simpler example of a closed 2D surface embedded in 3D. So let ##M## be a 3D volume, and ##S=\partial M## its boundary, a closed surface.

Poisson views ##M## as a union of concentric surfaces ##S(\xi^0)##, analogous to the layers of an onion. Let the coordinate ##0 \le \xi^0 \le 1## index the concentric surfaces, with ##\xi^0=1## on the outermost surface ##S(1)=\partial M## and ##\xi^0=0## on the innermost surface ##S(0)##, the “center” of ##M## with zero volume. Let the other coordinates ##\xi^1, \xi^2## be the angular spherical coordinates ##\theta, \phi##.

Let ##\bf{A}## be a vector field in ##\mathbb{R}^3##, which can be expressed in this coordinate system as ##{\bf{A}}(\xi^0,\theta,\phi)##. Poisson asserts that in these coordinates, the integral over a surface ##\xi^0 =## constant vanishes: $$\oint_S \text{div}\, { \bf{A} } \,dS=0 \,\,.$$ Here ##\text{div}\, { \bf{A} }## is the two-dimensional (surface) divergence.

It seems the angular coordinates are important. Obviously, in Cartesian coordinates, choosing ##{ \bf{A} } = x{ \bf{i} } + y{ \bf{j} }##, we have ##\text{div} { \bf{A} } = A^x_{\,,x} + A^y_{\,,y} = 2##, so the integral which is supposed to vanish is equal to ##2 \,\, \times## the surface area of ##S##.

In spherical coordinates: $$\int_S \text{div}{\bf{A}}\,dS = \int_S \left[ \frac{1}{r\sin\theta}\frac{\partial(A^\theta \sin\theta)}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial A^\phi}{\partial\phi} \right] r^2 \sin\theta d\theta d\phi$$ $$ \qquad\qquad\qquad\qquad\qquad = \int_0^{2\pi} \left[ \int_0^{\pi} r\frac{\partial(A^\theta \sin\theta)}{\partial\theta} d\theta \right] d\phi + \int_0^{\pi} \left[ \int_0^{2\pi} r\frac{\partial A^\phi}{\partial\phi} d\phi \right] d\theta \,\,. \quad\quad (*)$$ The surface ##S## can be defined by ##r=r(\theta,\phi)##. If ##r## is a constant (i.e., the surface is a sphere), it can be taken outside the integrals, and the integrals in ##(*)## do indeed vanish. However, in general, ##r=r(\theta,\phi)##, so we're stuck with that.
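As a quick check of the constant-##r## case, here is a small SymPy computation with an example field of my own choosing (any smooth, single-valued ##A^\theta, A^\phi## would do); both inner integrals in ##(*)## come out to zero, since the integrands are total derivatives with vanishing boundary terms:

```python
# Quick SymPy check of the constant-r case in (*), using arbitrary example
# components A^theta and A^phi (my own choice, purely for illustration).
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)

A_theta = sp.sin(theta) * sp.cos(phi)       # example A^theta
A_phi   = sp.cos(theta) * sp.sin(2 * phi)   # example A^phi, 2*pi-periodic

term_theta = r * sp.diff(A_theta * sp.sin(theta), theta)
term_phi   = r * sp.diff(A_phi, phi)

I1 = sp.integrate(term_theta, (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
I2 = sp.integrate(term_phi, (phi, 0, 2 * sp.pi), (theta, 0, sp.pi))

# Both vanish: sin(theta) kills the theta boundary terms at the poles,
# and periodicity kills the phi boundary terms.
print(I1, I2)   # 0 0
```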

How does one show the integral vanishes for a general surface ##r=r(\theta,\phi)## and not just a sphere?

[Attached image "Poisson.jpg" (Poisson, pp. 69-70): https://i.sstatic.net/jziTDqFd.jpg]
This is a good question. I had to think about it a bit before answering, and I don't think this book offers a good path to really understanding the result, but you have some errors in your reasoning which I would like to point out, and which may help.

Let's start with your example in Cartesian coordinates. You asserted that ##\oint_S \text{div} \mathbf A \, dS = 2 \times## the surface area of ##S##. The problem with this reasoning is the following: the theorem the author is trying to prove holds for a manifold with boundary, and so ##S## has to be a closed surface, that is, a manifold without boundary. For instance, we can talk about the interior of a square and its boundary in ##\mathbb R^2##, but then the boundary is the square itself, which is one-dimensional.

So to make your example in Cartesian coordinates work, we could construct a cube in ##\mathbb R^3## centered at the origin. Now imagine we have the vector field ##\mathbf A = A^x \mathbf i + A^y \mathbf j + A^z \mathbf k##. The key thing to notice is how the author splits his integral of ##\text{div} \mathbf A##: he breaks it up into an integral over the part of the divergence that is tangent to the surface and the part that is normal (alternatively, we could construct a vector field that is tangent to the surface everywhere, i.e., one whose form changes on each face). So in our example of the cube, for the top and bottom faces we would only consider the divergence of the x and y components of ##\mathbf A##. The other thing to notice is that we need to integrate over oriented area elements defined by an outward normal vector. What we then find is that ## \oint_{\text{top}} \text{div} (A^x \mathbf i + A^y \mathbf j)\, dS + \oint_{\text{bottom}} \text{div} (A^x \mathbf i + A^y \mathbf j)\, dS = 0##, because the bottom face's normal points in the opposite direction, introducing a minus sign. Similar cancellations happen for the front/back and left/right pairs, so we see we do get 0. The keys here to getting the result were the following:
  1. The vector field that we were taking the divergence of must be tangent to the surface everywhere.
  2. The surface must itself have no boundary. I believe it also needs to be compact (i.e., it should be a closed surface).
Now let's talk about your sphere example. If you think about what I have said above: if ##r = r(\theta, \phi)## is not constant and you integrate the divergence of a vector field with components in the ##\theta## and ##\phi## directions, the result does not hold, because those directions are no longer tangent to the surface! That is the whole point of the theorem.
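To make this concrete, here is a minimal SymPy sketch (my own example surface and field, not anything from Poisson): parametrize a non-spherical closed surface by ##r = r(\theta,\phi)##, build the induced metric, and describe a tangent field by its contravariant components ##V^\theta, V^\phi##. The intrinsic divergence is ##\frac{1}{\sqrt g}\partial_a(\sqrt g\, V^a)## and the area element is ##dS = \sqrt g\, d\theta\, d\phi##, so the surface integral becomes ##\iint \left[\partial_\theta(\sqrt g\, V^\theta) + \partial_\phi(\sqrt g\, V^\phi)\right] d\theta\, d\phi##, which reduces to boundary terms. The check below verifies that ##\sqrt g## vanishes at the poles and that everything is ##2\pi##-periodic in ##\phi##, so both boundary terms are zero even though ##r## is not constant:

```python
# Minimal SymPy sketch (my own example, not from the book): for a tangent field
# on a non-spherical closed surface r = r(theta, phi), the surface integral of
# the intrinsic divergence reduces to boundary terms, and both vanish.
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Hypothetical bumpy closed surface (assumption): r(theta, phi)
r = 1 + sp.Rational(3, 10) * sp.sin(theta)**2 * sp.cos(phi)

# Embedding of the surface in R^3 and its coordinate tangent vectors
X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
               r * sp.sin(theta) * sp.sin(phi),
               r * sp.cos(theta)])
Xt, Xp = X.diff(theta), X.diff(phi)

# Determinant of the induced metric g_ab = dX/dxi^a . dX/dxi^b
det_g = Xt.dot(Xt) * Xp.dot(Xp) - Xt.dot(Xp)**2

# Example contravariant components of a tangent field (also assumptions)
V_theta = sp.cos(phi)
V_phi   = sp.cos(2 * phi) * sp.cos(theta)

F_theta = sp.sqrt(det_g) * V_theta   # integrand of the theta total derivative
F_phi   = sp.sqrt(det_g) * V_phi     # integrand of the phi total derivative

# theta boundary terms: sqrt(g) vanishes at the poles, so both are 0
print(sp.simplify(F_theta.subs(theta, 0)), sp.simplify(F_theta.subs(theta, sp.pi)))

# phi boundary term: everything is 2*pi-periodic, so the difference is 0
print(sp.simplify(F_phi.subs(phi, 2 * sp.pi) - F_phi.subs(phi, 0)))
```

The point is that once the field is genuinely tangent to the surface and the divergence is the intrinsic one for that surface, the integral is of a pure total derivative in the angular coordinates, and it is the boundary terms that vanish, regardless of the shape of the surface.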

The easiest way to think of this is as the velocity field of a fluid flow. The divergence at a point is the limit of the net amount of fluid leaving a small cube per unit volume per unit time. Gauss' theorem is then just the realization that all of these outflows must cancel against the inflows into neighboring cubes, so the only net outflow is through the boundary.
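If it helps, here is a tiny discrete version of that cancellation (my own toy example with an assumed 2D flow field, nothing from the book): on a grid of cells, the sum of every cell's net outflow telescopes to the net flux through the outer boundary, because each interior face flux appears twice with opposite signs:

```python
# Toy finite-volume picture of "flows cancel" (my own sketch, assumed flow field):
# summing the net outflow of every cell in the unit square gives exactly the net
# flux through the outer boundary, since interior face fluxes cancel in pairs.

def Fx(x, y): return x * x          # hypothetical flow components, chosen
def Fy(x, y): return x * y          # arbitrarily for illustration

n = 32
h = 1.0 / n

total_outflow = 0.0
for i in range(n):
    for j in range(n):
        xl, xr = i * h, (i + 1) * h        # left/right face positions
        yb, yt = j * h, (j + 1) * h        # bottom/top face positions
        xc, yc = xl + h / 2, yb + h / 2    # face-centre coordinates
        # net outflow of this cell = signed sum of its four face fluxes
        total_outflow += (Fx(xr, yc) - Fx(xl, yc)) * h \
                       + (Fy(xc, yt) - Fy(xc, yb)) * h

# net flux through the boundary of the unit square, using the same face values
boundary_flux = sum((Fx(1.0, (j + 0.5) * h) - Fx(0.0, (j + 0.5) * h)) * h for j in range(n)) \
              + sum((Fy((i + 0.5) * h, 1.0) - Fy((i + 0.5) * h, 0.0)) * h for i in range(n))

print(total_outflow, boundary_flux)   # equal up to rounding: interior faces cancel
```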

So now, if you think of the boundary itself as a manifold and only consider the fluid flowing along the boundary, i.e., tangent to it, then our cubes become squares (the boundary is a manifold of one dimension less), and we can reason that all of these divergences must cancel, because the boundary of our original manifold cannot have a boundary itself. This is often stated as "the boundary of a boundary is empty": the closed unit ball in ##\mathbb R^3## has the sphere as its boundary, but the sphere has no boundary.

Hope this helps, but I really think you should look at Lee's book "Introduction to Manifolds" if you really want to understand it.

Note, I should also mention that any embedded surface is locally a level set of a function, so you can always find local coordinates such that the surface sits at a constant value of one of the coordinates. For example, your surface ##r = r(\theta,\phi)## is the zero level set of ##f = r - r(\theta,\phi)##, so near the surface ##f## itself can serve as one of the coordinates.
 