The Schwarzschild Geometry: Physically Reasonable?
In the last article, we looked at various counterintuitive features of the Schwarzschild spacetime geometry, as illustrated in the Kruskal-Szekeres spacetime diagram. But counterintuitive, in itself, does not mean physically unreasonable or unlikely. So the obvious next question is, how much of the entire spacetime geometry we have been looking at is actually believed to be physically reasonable?
We can get a handle on this by observing that the geometry we have been looking at is vacuum everywhere–the stress-energy tensor is zero. But in our real universe, of course, there is matter and energy, so the stress-energy tensor is not zero everywhere. It is true, though, that on the distance scales we deal with most of the time (basically in any context except cosmology), we can view the universe as consisting of isolated objects containing nonzero stress-energy, separated by large regions of zero stress-energy. (Strictly speaking, there is very sparse matter and energy present in these regions, but it is much too sparse to have any significant effect on the spacetime geometry, so its stress-energy tensor can be considered effectively zero.) And since most of the isolated objects are rotating, if at all, very slowly (here "very slowly" means their angular momentum is very small compared to their mass when both are normalized to geometric units), they can be considered, at least to a good approximation for most purposes, to be spherically symmetric, which means that the vacuum regions around them, at least at distances small compared with the distance to other isolated objects, can also be considered spherically symmetric.
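To get a feel for the numbers here, consider the Sun (a rough back-of-the-envelope check; the values below are standard approximate solar parameters, and the script is purely illustrative). In geometric units the mass becomes a length, ##GM/c^2##, and the spin enters the exterior metric through the Kerr parameter ##a = J/Mc##, also a length; rotational corrections to the exterior geometry are of order ##a/r##:

```python
# Rough check that a typical isolated object is "slowly rotating" in the
# sense relevant here (approximate solar values; illustrative only).
G, c = 6.674e-11, 2.998e8     # SI units
M_sun = 1.989e30              # kg
J_sun = 1.92e41               # kg m^2 / s, approximate solar angular momentum
R_sun = 6.96e8                # m

M_geo = G * M_sun / c**2      # mass as a length, ~1.5 km
a_spin = J_sun / (M_sun * c)  # Kerr spin parameter as a length

print(f"M in geometric units: {M_geo:.0f} m")
print(f"spin parameter a = J/(Mc): {a_spin:.0f} m")
print(f"a/r at the solar surface: {a_spin / R_sun:.1e}")
```

Since ##a## comes out to a few hundred meters while the vacuum region outside the Sun starts at ##r \approx 7 \times 10^8## m, the exterior geometry is spherically symmetric to excellent accuracy.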
This is important because there is a theorem known as Birkhoff’s Theorem, which says that any vacuum solution of the Einstein Field Equation that is spherically symmetric must be (at least a portion of) the Schwarzschild geometry. So if we idealize a single isolated object as a spherically symmetric region of nonzero stress-energy surrounded by vacuum out to infinity, then the spacetime geometry must be Schwarzschild from ##r \rightarrow \infty## down to some finite value of ##r##, which we can call ##R##, corresponding to the surface of the isolated object. (Inside this surface, the geometry will be different because the region is not vacuum; we’ll go into that below.)
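For reference, the exterior geometry that Birkhoff’s Theorem picks out is described by the familiar Schwarzschild line element (standard form, in Schwarzschild coordinates with geometric units ##G = c = 1##):

##ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2 + \left(1 - \frac{2M}{r}\right)^{-1} dr^2 + r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right)##

valid throughout the vacuum region ##r > R##, with the single parameter ##M## being the mass of the central object.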
The obvious next question is, what values of ##R## are possible? Of course, this question is very general, but we can start by asking a more specific version of it: what values of ##R## are possible for an isolated object that is static, i.e., whose ##R## does not change with time? (Here it doesn’t matter whether we interpret “time” to be Schwarzschild coordinate time, Gullstrand-Painleve coordinate time, or the proper time of some observer sitting on the surface.) It turns out that there is another theorem, known as Buchdahl’s Theorem, that answers this question: for a spherically symmetric static object, we must have ##R > \frac{9}{4} M##, i.e., ##R## must be larger than 9/8 of the Schwarzschild radius ##r = 2M##. The basic reason is that, for a static object, gravity must be balanced by internal pressure; and it turns out that, as ##R \rightarrow \frac{9}{4} M##, the pressure ##p## at the center of the object goes to infinity.
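We can see this pressure blow-up explicitly in the simplest special case, the classic uniform-density interior solution (a minimal numerical sketch I’m including for illustration; Buchdahl’s Theorem itself covers any static equation of state):

```python
import math

def central_pressure_ratio(compactness):
    """Central pressure over (constant) density, p_c / rho, for the
    uniform-density interior Schwarzschild solution, as a function of
    the compactness 2M/R (geometric units)."""
    f = math.sqrt(1.0 - compactness)     # sqrt(1 - 2M/R)
    return (1.0 - f) / (3.0 * f - 1.0)   # diverges as 3f -> 1

for two_m_over_r in [0.5, 0.8, 0.88, 0.888, 0.8888]:
    print(f"2M/R = {two_m_over_r}: p_c/rho = {central_pressure_ratio(two_m_over_r):.1f}")
# The central pressure blows up as 2M/R -> 8/9, i.e. exactly as R -> (9/4) M.
```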
(By the way, there is another closely related theorem, due to Einstein, which says that a spherically symmetric system composed of particles in circular free-fall orbits, whose overall mass and radius do not change with time, must have a surface radius ##R## which is larger than ##3M##. Here the reason is that, as ##R \rightarrow 3M##, the free-fall orbits have orbital speeds approaching the speed of light–more precisely, the worldlines of the orbits become null instead of timelike. We now know that this is because ##r = 3M## is what is called the “photon sphere” in the Schwarzschild geometry, the radius at which light rays can have closed circular orbits. Einstein concluded–incorrectly–that his theorem showed that black holes could not form; we now understand that this is a misconception, similar to the one discussed in the first article of this series, that objects can never reach a black hole’s horizon.)
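To see the orbital speeds approaching the speed of light quantitatively: the speed of a circular geodesic orbit at radius ##r##, as measured locally by a static observer, is ##v = \sqrt{M/(r - 2M)}## in units of ##c## (a standard result; the short script below is my own illustration of it):

```python
import math

def circular_orbit_speed(r_over_M):
    """Locally measured speed (units of c) of a circular geodesic orbit in
    Schwarzschild geometry, relative to a static observer, with r in units
    of M (geometric units): v = sqrt(1 / (r/M - 2))."""
    return math.sqrt(1.0 / (r_over_M - 2.0))

for r in [10.0, 6.0, 4.0, 3.5, 3.1, 3.01]:
    print(f"r = {r} M: v/c = {circular_orbit_speed(r):.4f}")
# v/c -> 1 as r -> 3M: circular orbits become null at the photon sphere.
```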
Buchdahl’s Theorem tells us something important: there cannot be a static object with a surface radius arbitrarily close to ##r = 2M##, let alone equal to or less than that value. However, there is still a way for an object with surface radius ##R## smaller than ##\frac{9}{4} M## to exist: if ##R## is changing with time. This opens up two possibilities: ##R## could be decreasing with time, or ##R## could be increasing with time.
The first possibility is just gravitational collapse: an object that can no longer support itself against gravity will collapse and form a black hole. The first idealized model of this kind of process was presented in a classic paper by Oppenheimer and Snyder in 1939. The vacuum region of their model has a spacetime diagram, in coordinates similar to Kruskal-Szekeres coordinates, that looks like this (courtesy of PF Science Advisor DrGreg):
As you can see, this diagram only includes a portion of regions I and II from the full Kruskal-Szekeres spacetime diagram that we looked at in previous articles. The boundary on the left of the diagram is the surface of the object: the region to the left of that surface, not shown on the diagram, has a spacetime geometry like that of a portion of a collapsing universe, i.e., a collapsing FRW spacetime. The point at the upper left of the diagram is where the collapsing object reaches ##R = 0##, i.e., its surface collapses to a point; that is where the singularity forms (in this idealized model), and the hyperbola at the top is the singularity itself. So if we were to fill in the non-vacuum region occupied by the collapsing matter, it would have a “width” from right to left that gradually decreases from the bottom to the top of the diagram, tapering to zero at the top left corner where it meets the singularity. The left boundary of this region would be labeled ##r = 0## if we adopted appropriate coordinates in this region (which would not be precisely the same as any of the ones we have looked at in these articles for the vacuum region); and in this region, ##r = 0## means what your ordinary intuition would think it means: the spatial point at the center of the collapsing object. In more technical language, the curve ##r = 0## is timelike up to the point where the singularity forms; only after that does it become spacelike, like a moment of time instead of a place in space.
Notice that in this diagram there is also an event horizon–the 45 degree dotted line going up and to the right. The event horizon also extends inside the collapsing matter; the 45 degree line simply continues into the non-vacuum region until it intersects its left boundary at ##r = 0##. It is instructive to think about what this means physically. Consider a series of light rays emitted radially outward from the spatial point ##r = 0## inside the collapsing matter. Any such ray emitted from an event below the horizon line will reach the surface of the matter and continue outward to infinity. But there will be some ray that is emitted exactly on the event horizon line. This ray will reach the surface of the collapsing matter at the exact instant that the radius ##R## of the collapsing matter equals ##2M##. Since the ray is then in the vacuum region with Schwarzschild geometry, it will remain at ##r = 2M## forever. So the event horizon can be thought of as simply the set of all such light rays, emitted in all possible directions. (These rays, or more properly the null curves that describe their worldlines, are called the “generators” of the horizon.)
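One way to make the horizon generators concrete is to integrate outgoing radial light rays numerically. In Gullstrand-Painleve coordinates (my choice for this sketch, because they are regular at the horizon), outgoing radial null rays obey ##dr/dT = 1 - \sqrt{2M/r}##; here is a minimal sketch with ##G = c = 1## and ##M = 1##:

```python
# Outgoing radial light rays near the horizon, integrated in
# Gullstrand-Painleve coordinates (G = c = 1, M = 1): dr/dT = 1 - sqrt(2M/r).
def dr_dT(r, M=1.0):
    return 1.0 - (2.0 * M / r) ** 0.5

def integrate_ray(r0, T_max=200.0, dT=1e-3):
    """Crude Euler integration of an outgoing ray starting at areal radius r0."""
    r = r0
    for _ in range(int(T_max / dT)):
        r += dr_dT(r) * dT
        if r < 0.05:          # effectively reached the singularity
            return 0.0
    return r

for r0 in [1.9, 2.0, 2.1]:
    print(f"ray emitted at r = {r0}: after T = 200, r = {integrate_ray(r0):.3f}")
# A ray just inside 2M falls to the singularity, a ray just outside escapes,
# and the ray emitted exactly at r = 2M stays there: a horizon generator.
```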
This model presents a consistent (if highly idealized) picture of gravitational collapse, at least until we get close to the singularity and issues of arbitrarily large spacetime curvature arise. (We’ll talk about that further below.) Consequently, the portions of the Schwarzschild geometry that appear in this model, namely the portions of regions I and II and the event horizon between them, would appear to be physically reasonable: these portions of the geometry could exist in our universe, at least as far as classical GR is concerned. But there is no reason to think that the rest of the geometry–the “past horizon” and regions III and IV–is physically reasonable, at least not based on models of gravitational collapse.
What about the other possibility mentioned above, that the surface radius ##R## of an object could be increasing with time? A model of this sort would basically look like the time reverse of the Oppenheimer-Snyder model: we would have a vacuum region consisting of portions of regions I and IV from the Kruskal-Szekeres diagram, and the past horizon between them, plus a non-vacuum region containing expanding matter, with geometry like that of a portion of an expanding FRW spacetime. The past horizon would extend into this region and intersect the spatial point ##r = 0##. The term “white hole” could be used to describe this kind of model (although it is more frequently used to describe region IV in the full, vacuum everywhere Schwarzschild spacetime).
If we consider whether such a model could describe an isolated region of our universe where a white hole is exploding, much as the Oppenheimer-Snyder model describes an isolated region where an object is collapsing into a black hole, there is a serious difficulty: where does the initial singularity come from? There is no known physical process that could create one since the past horizon isolates it from all other objects in the universe–much as nothing can get out of a black hole, nothing can get into a white hole. So such a white hole singularity would have to be “built into” the universe from the beginning. That seems highly implausible, and as far as I know, nobody has seriously tried to defend such a model.
The question of whether our entire observable universe could be a portion of such a white hole spacetime is a bit more interesting, because such a model would imply that our observable universe is somewhere inside the non-vacuum region of such a model, and depending on where (and when) we are in that non-vacuum region, it is possible that no light signals from the vacuum region outside could have reached us yet. In other words, since the non-vacuum portion of the white hole model is a portion of an expanding FRW universe, the fact that our observable universe looks like a portion of an expanding FRW universe is not enough, in itself, to rule out a white hole model for the universe as a whole (as opposed to just a full expanding FRW universe and nothing else).
However, there is another reason for thinking that a white hole model for the entire universe is unlikely. This is basically the converse of the reason for thinking isolated white holes in our observable universe are unlikely. There the question was where the initial singularity would come from; here the question is where the region outside the past horizon would come from. Basically, our universe would have to be an isolated white hole inside some much larger “universe”, so this model doesn’t really give a final answer; it just pushes the question back a step. The model of our entire universe as an expanding FRW spacetime does not have this issue, because an expanding FRW spacetime–the full model, not just a portion–is self-contained, with no need to postulate anything outside it.
To summarize, our best current belief is that regions I and II of the Schwarzschild geometry, as shown in the Kruskal-Szekeres spacetime diagram, are physically reasonable, but the rest of the geometry, including the “antihorizon” boundary between those two regions and the other two, is not. At least, that is the best answer we can give according to classical General Relativity; quantum effects might change this picture. In fact, they are expected to change it, at the very least, when spacetime curvature becomes large enough, where “large enough” is thought to be, heuristically, when the radius of curvature of spacetime is of the order of the Planck length. In the Schwarzschild geometry, this happens for sufficiently small values of ##r##, i.e., sufficiently close to the singularity at ##r = 0##.
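For a heuristic sense of where that happens: the Kretschmann curvature invariant of the Schwarzschild geometry is ##K = 48 M^2 / r^6##, so the local radius of curvature is of order ##\sqrt{r^3 / M}##; setting that equal to the Planck length gives ##r \sim (l_P^2 M)^{1/3}##. Here is a rough numerical estimate (my own back-of-the-envelope sketch, not a rigorous quantum gravity statement):

```python
# Heuristic estimate of the radius at which curvature reaches the Planck
# scale for a solar-mass hole: r ~ (l_Planck^2 * M)^(1/3) in geometric units.
G, c = 6.674e-11, 2.998e8
l_planck = 1.616e-35                 # meters
M_geo = G * 1.989e30 / c**2          # one solar mass as a length, ~1477 m

r_quantum = (l_planck**2 * M_geo) ** (1.0 / 3.0)
print(f"Planck-curvature radius: r ~ {r_quantum:.1e} m")
print(f"horizon radius 2M: {2 * M_geo:.1e} m")
# r_quantum ~ 1e-22 m, vastly smaller than the ~3 km horizon radius, so only
# a tiny neighborhood of the singularity is affected by this estimate.
```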
We don’t know exactly how quantum effects would change the picture in this regime, and we won’t until we have a good theory of quantum gravity. However, it seems likely that, if quantum effects do change the picture, it will be in the direction of making less of the full geometry physically reasonable, not more, by making the portion of region II below some positive value of ##r## not physically reasonable, because, as above, quantum effects are expected to become relevant when the spacetime curvature gets large enough. It is even possible that quantum effects might make all of region II and its event horizon boundary not physically reasonable, in the sense that this region would no longer occur in the classical limit of whatever quantum model ends up being confirmed, because quantum effects would prevent a true event horizon from ever forming. This is an active area of research, and we’ll have to wait and see what comes out.
References
(1) Einstein’s 1939 paper on a stationary system of particles in free-fall orbits:
http://www.cscamm.umd.edu/tiglio/GR2012/Syllabus_files/EinsteinSchwarzschild.pdf
(Note that Einstein uses isotropic coordinates in this paper; these have a radial coordinate which is not the same as the areal radius ##r##. In the article above I have translated his result into a form that uses the areal radius ##r## as the radial coordinate.)
[QUOTE="PeterDonis, post: 5653801, member: 197831"]Yes. The problem is that we don't (yet) have a more fundamental theory that covers the regime that GR does not.”The way I understand cosmic censorship is that it is a research program that specifically tries not to appeal to a different theory. So it wouldn't merely be a statement that say quantum mechanics regulates the problems with the singularity, rather something about the mathematics behind the singularity structure in classical GR allows you to do that. So for instance the original definition of strong cosmic censorship was that you could only have spacelike singularities in physically reasonable theories with gravitational collapse (modulo some details that prevented simple counterexamples from being formulated)I share Ben Niehoffs view that simply excising the singularity in the standard way done in textbooks counts as a rather *hard* regularization scheme, which perhaps misses some of the details that might help prove or disprove the conjecture. Perusing some of the work that's been done on the subject one can see that there have been multiple approaches and redefinitions and that there is no consensus on which direction to even take. So it stands more on simple physical arguments like the one's already given in the thread, and does not appear to have the correct mathematical formulation yet.
[QUOTE="martinbn, post: 5653902, member: 252793"]My personal view is that incompleteness as in the Schwartzschild solution is OK. The observer is torn apart by infinite curvature and ceases to exists, but everyone is accounted for. Incompleteness as in the Kerr solution, where the observer reaches the Cauchy horizon in finite proper time and there is no unique extension beyond it, is not OK. The theory loses its predictability. But this is where the strong cosmic censorship conjecture comes in. If true, these situations are non generic and therefore the theory is still as good as ever.”The cosmic censorship I'm familiar with only requires the BH be surrounded by a horizon. It doesn't say anything about the a-causality of the manifold inside the inner horizon. Does the strong version go further and say something like no singularity in the past light cone of any observer? This would require rejection of the inner horizon region due to CTCs. It also would reject the full Kurskal geometry.
[QUOTE="Ben Niehoff, post: 5653568, member: 99109"] But I still feel that an incomplete spacetime is physically unhealthy in some way, as it means that there are some observers who can reach a region of "Here be dragons" in finite proper time. Perhaps that is simply the best that GR can say about such spacetimes, and an obvious indication that the theory cannot be fundamental (since it cannot answer physically reasonable questions about what happens to some observers).”My personal view is that incompleteness as in the Schwartzschild solution is OK. The observer is torn apart by infinite curvature and ceases to exists, but everyone is accounted for. Incompleteness as in the Kerr solution, where the observer reaches the Cauchy horizon in finite proper time and there is no unique extension beyond it, is not OK. The theory loses its predictability. But this is where the strong cosmic censorship conjecture comes in. If true, these situations are non generic and therefore the theory is still as good as ever.
[QUOTE="Ben Niehoff, post: 5653568, member: 99109"]I still feel that an incomplete spacetime is physically unhealthy in some way, as it means that there are some observers who can reach a region of "Here be dragons" in finite proper time.”I think that is the way most physicists look at it.[QUOTE="Ben Niehoff, post: 5653568, member: 99109"]Perhaps that is simply the best that GR can say about such spacetimes, and an obvious indication that the theory cannot be fundamental”Yes. The problem is that we don't (yet) have a more fundamental theory that covers the regime that GR does not.
Ah, I see we are meant to think of this slightly differently than I was expecting. The definition doesn't care about completeness, but only whether causal curves are "inextendible" (i.e., that they do not have endpoints within the manifold). Under such definitions, the "Reissner-Nordstrom geometry minus the bad regions" (as I described above) is "globally hyperbolic". The point is the term is meant to capture the physically-sensible regions of a geometry, by asking what is the maximal globally hyperbolic Cauchy development of some initial data.

So I will agree I had the definition wrong. But I still feel that an incomplete spacetime is physically unhealthy in some way, as it means that there are some observers who can reach a region of "Here be dragons" in finite proper time. Perhaps that is simply the best that GR can say about such spacetimes, and an obvious indication that the theory cannot be fundamental (since it cannot answer physically reasonable questions about what happens to some observers).
[QUOTE="Ben Niehoff, post: 5653380, member: 99109"]Most (open patches of) manifolds can be extended in more than one way.”This is true, and the Hawking & Ellis definition of a "manifold" includes the global topology, so Minkowski spacetime and the torus with Lorentzian metric are two different manifolds by this definition, even though they have the same local geometry.[QUOTE="Ben Niehoff, post: 5653380, member: 99109"]If you can point to a mathematician's definition of "global"”I don't know if it does. I was using a physicist's definition of "globally hyperbolic", which, as you say, might not be the term a mathematician would choose. (I'm not even sure about that, though; this aspect of GR relies heavily on differential geometry and topology.)As far as I can tell, this Wikipedia page is a good summary of the Hawking & Ellis definition of a globally hyperbolic manifold:https://en.wikipedia.org/wiki/Globally_hyperbolic_manifold
Some are incomplete in the past, if the model falls under the assumptions of Hawking's theorem.
I'm not altogether familiar with cosmological metrics. Is the FLRW metric geodesically complete in the past? (Probably the answer depends on the matter content!)
Oh, I see, you are objecting to the use of "global". I don't know, you might be right. But take a standard Friedmann model. Why wouldn't we call it globally hyperbolic?

P.S. Perhaps one has to dig out the history of the terminology. Maybe this is how the PDE people speak.
Most (open patches of) manifolds can be extended in more than one way. For example, ##ds^2 = -dt^2 + dx^2## could be Minkowski space, or it could be a torus. Minkowski space is globally hyperbolic, but a torus with a Lorentzian metric has closed timelike curves.

I can agree that if a smooth extension exists, then we shouldn't be referring to properties as "global". However, I am generically uncomfortable calling something "global" when our manifold is geodesically incomplete, even if all possible analytical continuations are singular.

If you can point to a mathematician's definition of "global" which is meant to include such cases, then obviously I'll have to change my mind. But physicists tend to be quite loose with this kind of terminology. Although I suppose, practically speaking, in this case it doesn't make much difference.
[QUOTE="Ben Niehoff, post: 5652579, member: 99109"]What, then, stops me from saying the Reissner-Nordstrom geometry is globally hyperbolic? Can't I just excise the Cauchy horizons (and whatever lies beyond them) and call it a day?In a similar fashion, it seems I can call anything "globally hyperbolic", if I just cut out the bad parts and redefine what I'm talking about.”You can do that, but then you will have a space-time that is extendible (and it can be extended in more than one way).
What, then, stops me from saying the Reissner-Nordstrom geometry is globally hyperbolic? Can't I just excise the Cauchy horizons (and whatever lies beyond them) and call it a day?

In a similar fashion, it seems I can call anything "globally hyperbolic", if I just cut out the bad parts and redefine what I'm talking about.
[QUOTE="Ben Niehoff, post: 5652326, member: 99109"]I disagree with the terminology "globally hyperbolic" here.”Then you disagree with Hawking & Ellis, not to mention every other textbook on GR that I'm aware of that discusses this topic. It's a standard term in the field.[QUOTE="Ben Niehoff, post: 5652326, member: 99109"]The equations of motion fail at the singularities, and the singularities are reachable in finite proper time.”The technical definition of "globally hyperbolic" allows this, because the manifold must be an open set, and the singularities are not part of the manifold. The fact that the globally hyperbolic region is geodesically incomplete does not prevent it from being globally hyperbolic. At least, not according to Hawking & Ellis and all the other textbooks.[QUOTE="Ben Niehoff, post: 5652326, member: 99109"]This means you cannot just excise the singularities”Sure you can; within the open set that constitutes the manifold, all curvature invariants are finite at every event.Whether the resulting globally hyperbolic region is physically reasonable is a different question; AFAIK nobody thinks the white hole region is physically reasonable. But that doesn't change its mathematical properties. And AFAIK nobody thinks classical GR remains valid to arbitrarily large values of curvature invariants. But that doesn't change the mathematical properties either.[QUOTE="Ben Niehoff, post: 5652326, member: 99109"]The issue is that the singularities don't obey the equation.”That's irrelevant because the equation is never applied at the singularities. They're not part of the manifold.
[QUOTE="stevendaryl, post: 5652132, member: 372855"]Right. In the black hole interior, [itex]frac{dr}{dtau}< 0[/itex] and in the white hole interior, [itex]frac{dr}{dtau} > 0[/itex].So now I'm a little confused: What is it that prevents having two nearby test particles with opposite signs of [itex]frac{dr}{dtau}[/itex]?”Applying an orientation to an orientable spacetime involves choosing a consistent labeling of past/future of all light cones. Then, for any world line, a tangent directed one way (one sign, in your case) is future directed, while the other sign is past directed.
[QUOTE="stevendaryl, post: 5652132, member: 372855"]What is it that prevents having two nearby test particles with opposite signs of ##frac{dr}{dtau}##?”The convention you just implicitly adopted for the direction along the worldline in which ##tau## increases. To be fair, I slipped it in there without saying so. :wink:A more explicit unpacking would be this: first, at every event in the spacetime, we make a choice of which half of the light cone is the "future" half, and which half is the "past" half, in such a way that the choice is continuous throughout the spacetime. There are only two ways of doing this: we can choose the half that points towards region II on the Kruskal diagram as the "future" half, or we can choose the half that points towards region IV. But once we've made that choice at one event, for continuity we have to make the same choice at every event. The usual convention is to choose the "future" half to point towards region II.Then we just define ##tau## along every timelike worldline such that it increases from the past to the future, as defined by the halves of the light cones. Once we've done that, then we must have ##dr / dtau > 0## in region IV and ##dr / dtau < 0## in region II along every timelike worldline.If you think about it, you will see that there is no actual loss of generality in doing all this, because the spacetime as a whole is time symmetric.
[QUOTE="Haelfix, post: 5652117, member: 4167"]Sure, you can formally do this. Butt then I can formally take a line in the middle of the diagram, evolve it arbitrarily far backwards to the singularity region, then evolve it forward again back to the start. The two resulting hypersurfaces won't necessarily agree anymore depending upon details of what takes place near the singularity. This is why it's often said that naked singularities yield problems for determinism. So I would say the propriety of those sorts of manipulations are basically equivalent to whether you accept (weak) cosmic censorship or not.”I think the problem you are trying to highlight is merely that you can't use the white-hole singularity as a Cauchy surface. This doesn't mean that Cauchy surfaces don't exist. Informally, anything can come out of a white hole, much like anything can fall into a black hole.I agree this leads to problems with causality in the eternal black hole spacetime, because effectively one cannot evolve from the infinite past into the infinite future. So one cannot answer the question, "What happens if I put a white hole in spacetime?" However, the Cauchy problem is not "What happens if I do something undefined?", but rather "Given that the current state is A, what happens next?"[QUOTE="PeterDonis, post: 5651401, member: 197831"]You don't have to evolve them forward. You can evolve the initial data on the hypersurface ##T = 0## in Kruskal-Szekeres coordinates both forwards and backwards. Doing so will give you the complete globally hyperbolic region, all the way back to the past singularity and forward to the future singularity. Since the equations are time symmetric, this is perfectly well-defined and justified.[/quote]I disagree with the terminology "globally hyperbolic" here. The equations of motion fail at the singularities, and the singularities are reachable in finite proper time. Thus the hyperbolic region is not "global".The main issue here is the geodesic incompleteness at the singularities. This means you cannot just excise the singularities, as you could if they were "infinitely far away".[quote]I don't know what you're basing this on. The subject under discussion is a well-defined solution of the classical Einstein Field Equation. Any event with finite spacetime curvature invariants, including arbitrarily large ones, can occur in such a solution. The solution might not end up describing anything physically relevant, but that doesn't mean the points with large spacetime curvature values "don't obey the equation"; it just means physics, unlike this particular mathematical model, chooses some other equation at that point.”The issue is that the singularities don't obey the equation. There is no sense in which they do (in contrast, e.g., to the singularity in the electric field of a point charge, which can be dealt with by using distributions).
[QUOTE="PeterDonis, post: 5651313, member: 197831"]You have these backwards.”Right. In the black hole interior, [itex]frac{dr}{dtau}< 0[/itex] and in the white hole interior, [itex]frac{dr}{dtau} > 0[/itex].So now I'm a little confused: What is it that prevents having two nearby test particles with opposite signs of [itex]frac{dr}{dtau}[/itex]?
[QUOTE="martinbn, post: 5652028, member: 252793"]Well, it's not how it works. The initial data doesn't include anything from the past of the Cauchy surface. In fact until you solve the equations, there is no past nor future. The initial data consists of fields defined on the surface. Whatever the values of the past and future evolution may be, say arbitrary large, they are not part of the initial conditions. So there is nothing dubious here and by construction you get solutions to the Einstein equation.”Sure, you can formally do this. Butt then I can formally take a line in the middle of the diagram, evolve it arbitrarily far backwards to the singularity region, then evolve it forward again back to the start. The two resulting hypersurfaces won't necessarily agree anymore depending upon details of what takes place near the singularity. This is why it's often said that naked singularities yield problems for determinism. So I would say the propriety of those sorts of manipulations are basically equivalent to whether you accept (weak) cosmic censorship or not.
[QUOTE="Haelfix, post: 5651091, member: 4167"]Yes but think about it, any such line has access to the singularity region in its causal past. Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einsteins equations, when they likely don't even obey the equation to begin with. The entire future spacetime is thus built out of that dubious development. “Well, it's not how it works. The initial data doesn't include anything from the past of the Cauchy surface. In fact until you solve the equations, there is no past nor future. The initial data consists of fields defined on the surface. Whatever the values of the past and future evolution may be, say arbitrary large, they are not part of the initial conditions. So there is nothing dubious here and by construction you get solutions to the Einstein equation.”When people formulate statements about cosmic censorship they are trying to formalize that notion somehow (and I know there are difficulties with making the statement precise). I'll look into it when I get the chance”I am not sure if this is relevant but one way the strong cosmic censorship conjecture is formulated is that the maximal Cauchy development is not extendible. Which is the case in Schawrtzschild, but not Kerr. The weak version usually asks for completeness of future null infinity.
[QUOTE="Haelfix, post: 5651091, member: 4167"]Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einsteins equations”You don't have to evolve them forward. You can evolve the initial data on the hypersurface ##T = 0## in Kruskal-Szekeres coordinates both forwards and backwards. Doing so will give you the complete globally hyperbolic region, all the way back to the past singularity and forward to the future singularity. Since the equations are time symmetric, this is perfectly well-defined and justified.[QUOTE="Haelfix, post: 5651091, member: 4167"]when they likely don't even obey the equation to begin with.”I don't know what you're basing this on. The subject under discussion is a well-defined solution of the classical Einstein Field Equation. Any event with finite spacetime curvature invariants, including arbitrarily large ones, can occur in such a solution. The solution might not end up describing anything physically relevant, but that doesn't mean the points with large spacetime curvature values "don't obey the equation"; it just means physics, unlike this particular mathematical model, chooses some other equation at that point.
To me there is a simple inverse symmetry between a BH and a WH: for a WH, the singularity is in the past light cone of every event in the interior, while for a BH it is in the future light cone of every interior event.
[QUOTE="stevendaryl, post: 5651149, member: 372855"]It's true by definition that:
“You have these backwards.[QUOTE="stevendaryl, post: 5651149, member: 372855"]Why was entropy lower in the far past? General Relativity doesn't answer this question. (I'm not sure what does”We don't have a final answer to this question, because we don't know what preceded the hot, dense, rapidly expanding "Big Bang" state. We only know that the entropy of that state was much lower than the present entropy of the universe.[QUOTE="stevendaryl, post: 5651149, member: 372855"]Another complication is to include test particles that don't move on geodesics, because of non-gravitational forces. How does that affect the picture?”In regions IV and II (the white hole and black hole), it doesn't really change things at all: all test particles must still leave the white hole, and all test particles that enter the black hole still can't escape.In region I (and III as well), it allows test particles that would otherwise fall into the black hole to avoid it and stay in region I (or III). It still doesn't allow anything to enter the white hole.
[QUOTE="Haelfix, post: 5649478, member: 4167"]2) The white hole horizon is conceptually really bizarre…Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there, and there will be a severe blue shift when viewed from infinity. This blue sheet is a sort of classical instability, and it is argued that it leads to gravitational collapse, and thus there is likely a singularity in the future as well!”Exactly what the white hole is is a little mysterious to me. It seems that there is a sense in which there is no difference in the spacetime geometry of a black hole and a white hole; the difference is simply initial conditions of the test particles traveling in that geometry.Let me explain why I think that.To simplify, let's talk about purely radial motion, so we can treat the Schwarzschild geometry as if there were only one spatial dimension. Let [itex]Q[/itex] be the Schwarzschild factor defined by: [itex]Q equiv 1 – frac{2GM}{c^2 r}[/itex]. Let [itex]tau[/itex] be proper time. Let [itex]U^mu equiv frac{partial x^mu}{partial tau}[/itex]. Then for a test particle of mass [itex]m[/itex] moving along a radial timelike geodesic, we have the following conserved quantities:
Putting these together gives an equation for ##U^r##:

##\frac{m}{2} (U^r)^2 - \frac{GMm}{r} = \mathcal{E}##

where ##\mathcal{E} = \frac{K^2}{2 m c^2} - \frac{m c^2}{2}##. I wrote it in this way so that you can immediately see that it's just the energy equation for a test particle moving under Newtonian gravity. So without any mathematics, we can immediately guess the qualitative behavior: if ##\mathcal{E} < 0##, and initially ##U^r > 0##, then the test particle will rise to some maximum height ##r_{max} = \frac{GMm}{|\mathcal{E}|}##, and then will fall back to annihilation at ##r = 0## in a finite amount of (proper) time.

The interesting case is ##r_{max} > \frac{2GM}{c^2} \equiv r_S##, where ##r_S## is the black hole's Schwarzschild radius. In that case, this scenario represents a particle rising from below the event horizon and then turning around and falling back through the event horizon. That seems to contradict the fact that nothing can escape from the event horizon, but to see why it doesn't, you have to see what the time coordinate ##t## is doing: in the time period between the particle rising out of the event horizon and falling back into the event horizon, only a finite amount of proper time passes, but an infinite amount of coordinate time passes. In the far past, ##t \rightarrow -\infty##, the particle rises from the event horizon, and in the far future, ##t \rightarrow +\infty##, the particle sinks below the event horizon. The time periods while the particle is rising up to the event horizon and while the particle is falling below the event horizon are not covered by the coordinate ##t## (well, you can still have a ##t## coordinate there, but its connection to the ##t## coordinate above the horizon is broken by the event horizon). So from the point of view of someone far from the black hole, using the ##t## coordinate for time, nothing ever crosses the event horizon (in either direction) for any finite value of ##t##.

Going back to the test particle, we can identify the various parts of the Schwarzschild geometry: Region IV (the white hole interior) is the part of the particle's history with ##r < r_S## and ##U^r > 0## (rising); Region I (the exterior) is the part with ##r > r_S##; and Region II (the black hole interior) is the part with ##r < r_S## and ##U^r < 0## (falling).
(A fourth region, Region III, is not visited by the test particle, but is a black hole exterior like Region I.)

The point is that nothing about the local geometry of spacetime changes in going from Region IV (the white hole interior) to Region II (the black hole interior). The only difference is the sign of ##\frac{dr}{d\tau}##. So the difference between a black hole and a white hole is simply the initial conditions of the test particle. So it's not that the particle is repelled by the white hole and is attracted by the black hole. It's true by definition that: in the black hole interior, ##\frac{dr}{d\tau} > 0##, and in the white hole interior, ##\frac{dr}{d\tau} < 0##.
As for the exterior, the same region, Region I, serves as the exterior of the white hole and the black hole. The same event horizon looks like a white hole in the far past ##t \rightarrow -\infty##, because the test particle is rising from it, and looks like a black hole in the far future ##t \rightarrow +\infty##, because the test particle is falling toward it. (For a realistic black hole formed from the collapse of a star, there is no event horizon in the limit ##t \rightarrow -\infty##, so there is no corresponding white hole.)

Here are some puzzles having to do with the test particles: Why was entropy lower in the far past? General Relativity doesn't answer this question. (I'm not sure what does.) Another complication is to include test particles that don't move on geodesics, because of non-gravitational forces. How does that affect the picture?
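A quick numerical check of the energy equation in the post above (a sketch of my own, per unit mass and with ##G = c = 1##, so ##r_S = 2M##; the starting values are arbitrary illustrative choices):

```python
import math

# Per unit mass, with G = c = 1: (1/2)(dr/dtau)^2 - M/r = E, so a bound
# trajectory (E < 0) turns around at r_max = M/|E| and reaches r = 0 in
# finite proper time.
M, E = 1.0, -0.2                  # r_max should be M/|E| = 5.0 > r_S = 2.0
r, sign, dtau = 1.5, +1.0, 1e-4   # start below the horizon, moving outward

tau, r_peak = 0.0, r
while r > 0.01:
    v2 = 2.0 * E + 2.0 * M / r
    if v2 <= 0.0:                 # turning point reached: start falling back
        sign, v2 = -1.0, 0.0
    r += sign * math.sqrt(v2) * dtau
    r_peak = max(r_peak, r)
    tau += dtau

print(f"max radius reached: {r_peak:.3f} (predicted M/|E| = {M/abs(E):.3f})")
print(f"total proper time until r ~ 0: {tau:.2f}")
```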
Yes, but think about it: any such line has access to the singularity region in its causal past. Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einstein's equations, when they likely don't even obey the equation to begin with. The entire future spacetime is thus built out of that dubious development. When people formulate statements about cosmic censorship they are trying to formalize that notion somehow (and I know there are difficulties with making the statement precise). I'll look into it when I get the chance.
[QUOTE="Haelfix, post: 5650876, member: 4167"]So there are certainly spacelike Cauchy surfaces that one can construct that will have finite values for all physical quantities arbitrarily 'near' the singularity, but I don't believe this is sufficient condition for being a well posed surface (regular is I agree an incorrect word choice). There are other technical restrictions on the form of the initial data and I'd have to consult a textbook (im currently away) for the exact statements. Clearly having arbitrarily large(but finite) tidal forces is not what one would want for well behaved data.” I don't think there is any problem, but I would like to know, so I'd like to see it when you find it. It seems that you expect the initial hypersurface to be as far back in the past as possible, but that is not needed any surface could be used. For example a horizontal line that goes right in the middle of the diagram is as good as any other.
So there are certainly spacelike Cauchy surfaces that one can construct that will have finite values for all physical quantities arbitrarily 'near' the singularity, but I don't believe this is a sufficient condition for being a well posed surface (regular is, I agree, an incorrect word choice). There are other technical restrictions on the form of the initial data, and I'd have to consult a textbook (I'm currently away) for the exact statements. Clearly having arbitrarily large (but finite) tidal forces is not what one would want for well behaved data.
[QUOTE="Haelfix, post: 5650091, member: 4167"]A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR– spacelike surfaces but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations such that the process satisfies certain constraints (basically you want reversibility, avoiding many to one mappings, etc). Here, the initial data surface is singular as there is geodesic incompleteness, and physically this manifests itself as a loss of predictability between any 'two' distinct states in the theory, provided the singular surface was is in at least ones past lightcone. Basically you are taking an infinite amount of information (states) and allowing that to propagate throughout spacetime. This language is often used when discussing formulations of cosmic censorship, but for some reason that I don't understand the FRW singularity and the White hole singularity seem to be excluded from theorems about cosmic censorship (probably b/c they are trivial). “Can you elaborate, because as written it doesn't seem right? The initial hypersurface of the initial value problem is not singular. It is a complete Riemannian manifold. Its future (and past Cauchy) development is incomplete (Lorentzian manifold), but the initial data is as regular as it gets.
[QUOTE="Haelfix, post: 5650091, member: 4167"]A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR– spacelike surfaces but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations” Ah, ok. I had seen that language before but got confused thinking of a Cauchy horizon. [QUOTE="Haelfix, post: 5650091, member: 4167"]Here, the initial data surface is singular as there is geodesic incompleteness” I am still confused by this, however. As I said before, the maximally extended Schwarzschild spacetime is globally hyperbolic; that means it automatically has a well-posed initial value problem. As an example of how to formulate it, the spacelike surface ##T = 0## in Kruskal-Szekeres coordinates is a Cauchy surface for the spacetime; appropriate initial data on that surface (basically the geometry of all the 2-spheres that make it up, which is equivalent to specifying the one free parameter ##M## in the line element) determines the entire spacetime. It is true that the entire spacetime thus determined is geodesically complete–more precisely, it is timelike geodesically incomplete. But that is not inconsistent with the spacetime being globally hyperbolic and having a well posed initial value problem. [QUOTE="Haelfix, post: 5650091, member: 4167"]The geometry i'm referring to is not vacuum, but it is somewhat similar to Oppenheimer Snyder which you were discussing. It is the *perturbed* extended Schwarschild solution with an infalling sheet of spherically symmetric null dust.” I'll look at the paper you linked to and comment further after I've read it.
[QUOTE="PeterDonis, post: 5649504, member: 197831"]If "Cauchy problem" is intended to mean that the spacetime has a Cauchy horizon, this is not true. The Schwarzschild spacetime is globally hyperbolic. “A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR– spacelike surfaces but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations such that the process satisfies certain constraints (basically you want reversibility, avoiding many to one mappings, etc). Here, the initial data surface is singular as there is geodesic incompleteness, and physically this manifests itself as a loss of predictability between any 'two' distinct states in the theory, provided the singular surface was is in at least ones past lightcone. Basically you are taking an infinite amount of information (states) and allowing that to propagate throughout spacetime. This language is often used when discussing formulations of cosmic censorship, but for some reason that I don't understand the FRW singularity and the White hole singularity seem to be excluded from theorems about cosmic censorship (probably b/c they are trivial). [QUOTE="PeterDonis, post: 5649504, member: 197831"]Since it is dealing with the early universe, it obviously is not using a vacuum geometry, and the Schwarzschild spacetime I am discussing in this series is a vacuum solution (except for the Oppenheimer-Snyder model, which has a non-vacuum region, but that model also has no region III or IV so it's not relevant here). In short, I'm not sure the term "white hole" in that paper means the same thing as I mean by "white hole" in these articles.”Sorry i'm not being clear here. The geometry i'm referring to is not vacuum, but it is somewhat similar to Oppenheimer Snyder which you were discussing. It is the *perturbed* extended Schwarschild solution with an infalling sheet of spherically symmetric null dust. Unfortunately I'm now away from my institution for the holidays, and it seems hard to find material discussing this that's not behind a paywall (there is a whole chapter about white hole instabilities in Novikov and Frolov), but for the Eardley instability I found roughly the picture I was looking for in the following paper, as well as some of the discussion of the setup: See figure 1http://gravityresearchfoundation.org/pdf/awarded/1989/blau_guth.pdf[QUOTE="PeterDonis, post: 5649504, member: 197831"]I'm aware of this hypothesis by Hawking, but I don't know if it has led to anything in the field of quantum gravity.”Hawking's argument is a statement about semiclassical states and thermal equilibrium, and in my opinion is pretty convincing. Of course without knowing the degrees of freedom of quantum gravity, it's hard to speculate whether a similar thing holds true in the full theory or not.
Fantastic series Peter, thanks!
[QUOTE="Haelfix, post: 5649478, member: 4167"]Quantum mechanically, if you believe in Hawking radiation/evaporation, and blackhole thermodynamics, in some sense black hole and white hole microstates have to be the same thing!”I'm aware of this hypothesis by Hawking, but I don't know if it has led to anything in the field of quantum gravity.
[QUOTE="Haelfix, post: 5649478, member: 4167"]There is a serious Cauchy problem with having a past singularity that is allowed to communicate information off to infinity.”If "Cauchy problem" is intended to mean that the spacetime has a Cauchy horizon, this is not true. The Schwarzschild spacetime is globally hyperbolic.It is true that the past singularity seems highly unphysical, but I'm not sure "Cauchy problem" is the best way to describe why.[QUOTE="Haelfix, post: 5649478, member: 4167"]Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there”Which test particles are these? If they are test particles in stable orbits in region I, they can equally well be viewed as orbiting the black hole; they certainly don't accumulate near the white hole horizon.If you mean test particles that are close to the white hole horizon, there are no stable orbits there; there are no stable orbits inside ##r = 6M##, and there are no orbits at all, even unstable ones, inside ##r = 3M##. So any freely falling object below ##r = 3M## will fall into the black hole, region II; it won't "accumulate" at the white hole horizon.[QUOTE="Haelfix, post: 5649478, member: 4167"]there will be a severe blue shift when viewed from infinity”Not for objects that are free-falling radially inward. They will see incoming light from infinity to be redshifted.Objects in free-fall orbits will see incoming light from infinity to be blueshifted, but at the lowest possible orbit, ##r = 3M##, the blueshift is quite modest.Objects that have nonzero proper acceleration can "hover" close to the horizon and will indeed see a large blueshift in light coming in from infinity. But this is due to their proper acceleration, which increases without bound as the horizon is approached.All of this is standard Schwarzschild spacetime physics; none of that changes when we include the full maximally extended spacetime in our model.[QUOTE="Haelfix, post: 5649478, member: 4167"]See:Death of White Holes in the Early Universe – Eardley, Douglas M. Phys.Rev.Lett. 33 (1974) 442-444″Unfortunately this paper is behind a paywall so I can't access it. If you want to email me a copy, I'm at peterdonis@alum.mit.edu. I would be curious to read the paper and see exactly what spacetime geometry it is assuming. Since it is dealing with the early universe, it obviously is not using a vacuum geometry, and the Schwarzschild spacetime I am discussing in this series is a vacuum solution (except for the Oppenheimer-Snyder model, which has a non-vacuum region, but that model also has no region III or IV so it's not relevant here). In short, I'm not sure the term "white hole" in that paper means the same thing as I mean by "white hole" in these articles.
There are a few other really interesting points about region III and region IV.

1) There is a serious Cauchy problem with having a past singularity that is allowed to communicate information off to infinity.

2) The white hole horizon is conceptually really bizarre… Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there, and there will be a severe blue shift when viewed from infinity. This blue sheet is a sort of classical instability, and it is argued that it leads to gravitational collapse, and thus there is likely a singularity in the future as well! See: Death of White Holes in the Early Universe – Eardley, Douglas M. Phys.Rev.Lett. 33 (1974) 442-444

3) Quantum mechanically, if you believe in Hawking radiation/evaporation and black hole thermodynamics, in some sense black hole and white hole microstates have to be the same thing! See: Black Holes and Thermodynamics – Hawking, S.W. Phys.Rev. D13 (1976) 191-197