Understanding the Luminosity of Radiative Stars

In summary, the conversation discusses the idea that nuclear fusion is not the only factor determining the luminosity of main-sequence stars. The Wikipedia entry on the mass-luminosity relation serves as the starting point, showing that a basic understanding of a star's luminosity can be obtained without reference to nuclear fusion, because the luminosity is set by factors such as temperature, density, and radius. The conversation also covers the assumptions and simplifications needed to derive the mass-luminosity relation, and how fusion self-regulates to supply whatever heat the star radiates. The conclusion is that the idea of nuclear fusion solely determining a star's luminosity is incorrect.
  • #36
Mordred said:
Wouldn't you also have to be concerned by the variations in temperature, absorption, shock waves etc?
Not to get the luminosity. Those shocks and T variations are due to the stirring below the surface caused by, you guessed it, the luminosity of the star! It's not a bad example of a natural Carnot engine, whereby you move heat across a temperature difference and get it to do work, which is then used to stir the gas up and make shocks and magnetic activity. But eventually that work turns back to heat, and rejoins the luminosity from whence it came, without having much effect on the latter.
So at best this method is an approximation.
You can say that again!
However I'm unclear whether the method you're proposing is a better or worse approximation.
It's the only approximation. There isn't any other simple approximate scheme for deriving the luminosity of a star from first principles, there just isn't. If anyone thinks I'm wrong, they are welcome to try and provide an alternative approach to the link I gave!
Seems to me you still need to understand the star's composition to get an accurate luminosity relation.
Composition is only in there through how it affects the diffusion physics, how long it takes the light to get out. This depends on the opacity in the interior, and that depends on the composition. You would see, if you filled in the constants in the factors they left out in that link, that the opacity is in there (so it has to be approximated rather roughly to get their result, but again, do you really want to model the opacity in a star in detail, or just understand that the reason it matters is that it can quantitatively alter the diffusion physics?). For example, you could change the composition at the surface, but if you didn't change the mass or the composition over the bulk of the star, it would have little to no effect on the luminosity.
 
  • #37
Ken G said:
So do a lot of other things that are equally unhelpful in obtaining understanding. Don't tell me you've never heard of a device called idealization?

Sure, and in this case the idealization works great when you're only worried about the energy flux from the multi-million-degree core to the outside. If that's all you're concerned with then you'll get a good approximate answer. But if you want to really understand the physics that governs the star you simply can't get rid of the surface temp. Note that you're jumping back and forth between "understanding" and "calculating". You can calculate the luminosity of the star via the mass-luminosity relationship. But you'll never understand how a star works if you ignore the surface temperature.

Well, if you don't agree that to "understand" we must derive from first principles, at least I'm sure we can agree that derivations from first principles are quite important in physics-- even (especially?) when idealizations are included!

Absolutely.

The beauty of science is that it allows us to test the validity of statements like this. So let's say we have two stars that are exactly like the Sun, but one of them has at its surface a thin spherical half-silvered mirror that allows half the light through, and reflects the other half. So we must admit we have here two totally different physical mechanisms for emitting light from the surfaces of those two stars, and indeed their surface T will be quite different. Now the question: will their luminosity be different?

Except that the mirror isn't the star's surface, and the luminosity of the star will be much higher since so much is being reflected back.
 
  • #38
Drakkith said:
Sure, and in this case the idealization works great when you're only worried about the energy flux from the multi-million-degree core to the outside. If that's all you're concerned with then you'll get a good approximate answer.
Right, the luminosity, that's what the thread is about.
But if you want to really understand the physics that governs the star you simply can't get rid of the surface temp.
Why would I want to "get rid of" the surface T? I want to use my understanding of L, for which I never needed surface T, to then understand surface T, for which I need L. That's not getting rid of it, that's putting it in its proper place.
Note that you're jumping back and forth between "understanding" and "calculating".
Huh? I'm calculating to get understanding. I'm using the first principles of physics to determine the luminosity of a star, and along the way, I'm noticing what I need (diffusion physics of light), and I'm noticing what I do not need (the surface T and the presence or absence of fusion). I'm doing just what that link did.
You can calculate the luminosity of the star via the mass-luminosity relationship. But you'll never understand how a star works if you ignore the surface temperature.
I do understand how stars work, by putting each aspect in its proper place. The logic is, the structure of the star (which comes from its history of gravitational contraction) determines its luminosity. The radius R is dropping all along, and at some point gets small enough that the core T reaches about 10 million K, and fusion initiates. That has no effect at all on the L of the star, but it does affect the timescale for continued contraction-- it basically pauses the contraction until the fuel runs out. All the while, the L we have derived will tell us the surface T for each R the star has as it contracts. This is the correct understanding of a radiative star composed of an ideal gas. If you do not understand what I just said, you do not understand stars, and if you do, you do. When you understand, you'll understand.
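That last step, reading off the surface T once L and R are known, is just the Stefan-Boltzmann law inverted. A minimal sketch in Python (the solar values are illustrative inputs of mine, not from the thread):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(L, R):
    """Effective surface T (K) from L = 4*pi*R^2*sigma*T^4, for L in W, R in m."""
    return (L / (4.0 * math.pi * R**2 * SIGMA)) ** 0.25

# Sanity check with solar values: recovers roughly the Sun's ~5800 K photosphere
L_SUN = 3.828e26   # W
R_SUN = 6.957e8    # m
print(f"T_eff ~ {surface_temperature(L_SUN, R_SUN):.0f} K")
```

Note the direction of the logic: L is the input here, and the surface T is the output.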

Except that the mirror isn't the star's surface, and the luminosity of the star will be much higher since so much is being reflected back.
The mirror is the star's surface, I put it at the surface of the star. And the luminosity of the star will certainly not be much higher. If you think it will, please tell me which step in the derivation in the link I gave becomes invalid if there is a half-silvered mirror on the surface of the star. The answer is that no step becomes invalid; the derivation is just fine even with such a mirror.
 
  • #39
Ken G said:
Right, the luminosity, that's what the thread is about.
Why would I want to "get rid of" the surface T? I want to use my understanding of L, for which I never needed surface T, to then understand surface T, for which I need L. That's not getting rid of it, that's putting it in its proper place.

Good lord, are you even trying to understand me? Am I not explaining myself well enough?

Huh? I'm calculating to get understanding. I'm using the first principles of physics to determine the luminosity of a star, and along the way, I'm noticing what I need (diffusion physics of light), and I'm noticing what I do not need (the surface T and the presence or absence of fusion). I'm doing just what that link did.

If you don't need the surface temperature, why is there a term for the surface temperature in the diffusion equations in the link? It looks to me like you do need the surface temperature to understand the luminosity correctly.

The mirror is the star's surface, I put it at the surface of the star.

I don't agree that the mirror is the star's surface.
 
  • #40
Drakkith said:
Good lord, are you even trying to understand me? Am I not explaining myself well enough?
I believe I understand what you are claiming, but you are not supporting your position, and indeed you cannot, because it is incorrect, for the reasons I have told you-- if your position is that I need to know the surface temperature to understand the luminosity (to a reasonable approximation anyway). The link I gave makes this crystal clear; I'm not at all understanding why you continue to hold to an incorrect stance in the face of clear evidence to the contrary. Perhaps I don't understand what you are claiming: do you think I need to know the surface physics, or don't you?
If you don't need the surface temperature, why is there a term for the surface temperature in the diffusion equations in the link?
There is no such term; the surface temperature is taken to be zero expressly because its value is of no concern. All you need to know is that stars are much hotter in their interiors than at their surface, if you use that link. I can derive the same result without even assuming that, I just use an estimate of the time it takes light to random walk out of the star. Same physics, same answer.
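That random-walk estimate is easy to put numbers on. A rough sketch (cgs units; the uniform mean solar density is my own simplification, so this only gives an order of magnitude):

```python
# Photon escape time by random walk: mean free path l = 1/(kappa*rho),
# number of steps ~ (R/l)^2, so t ~ (l/c)*(R/l)^2 = R^2/(l*c).
C = 3.0e10     # speed of light, cm/s
KAPPA = 0.4    # electron-scattering opacity, cm^2/g (a lower bound)
RHO = 1.4      # mean solar density, g/cm^3 (uniform-density simplification)
R = 6.96e10    # solar radius, cm

l = 1.0 / (KAPPA * RHO)   # mean free path, a couple of cm
t = R**2 / (l * C)        # diffusion time, s
print(f"l ~ {l:.1f} cm, escape time ~ {t / 3.15e7:.0f} yr")
```

With a realistic centrally concentrated density profile the escape time comes out much longer, but the point stands: the luminosity is the stored radiant energy divided by this diffusion time, with no reference to the surface.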
I don't agree that the mirror is the star's surface.
Well I view that as an odd stance, but it is of no matter, I can easily accomplish the same result by sprinkling scatterers into any region that you would consider the star's surface. Just put little white balls that do nothing but reflect the light that hits them, and sprinkle them liberally around the surface region, but not over the bulk of the star. Do you know what will happen? The surface temperature will go up quite a bit, and the luminosity will change... no measurable amount! When you understand the truth of that claim, you will understand what sets the luminosity of a star.
 
  • #41
Thanks for the reply regarding my post. After looking into various examples and your reply, I can see the reasoning. Obviously a detailed analysis of the star's complete thermodynamic process would lead to a more accurate luminosity relation, though I recognize that this isn't necessarily practical. As mentioned, the approximations do work in most circumstances, especially with cross-checks via the cosmic distance ladder such as the Tully-Fisher relation and stellar parallax; with those cross-checks, the approximations are usually both sufficient and practical.
 
  • #42
And I'll be the first to admit there is an important place for "black box" simulations of "everything but the kitchen sink, and the kitchen sink too" to calculate what is going on in stars. That's how we predict all kinds of detailed things. But those are not appropriate for a basic understanding, and the basic understanding serves us well-- even when (especially when?) we also have access to the black boxes.
 
  • #43
Ken G said:
The beauty of science is that it allows us to test the validity of statements like this. So let's say we have two stars that are exactly like the Sun, but one of them has at its surface a thin spherical half-silvered mirror that allows half the light through, and reflects the other half. So we must admit we have here two totally different physical mechanisms for emitting light from the surfaces of those two stars, and indeed their surface T will be quite different. Now the question: will their luminosity be different?

When you realize the correct answer is "no, not measurably so", you will be able to see that your assertion does not test out.

Obviously, the luminosity observed outside will be unchanged by the mirror. This must be so simply because of energy conservation: in a state of equilibrium, the energy leaving the volume must be equal to the energy produced inside, whatever the physical conditions. The only consequence of the mirror would be that the radiation density inside would be now twice as high as before, which however exactly compensates for the 50% transmissivity of the mirror, i.e. the luminosity observed outside will be unchanged, but so will be the temperature on the outside surface of the mirror (as per the Stefan-Boltzmann law).
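The balance argument above can be checked with toy arithmetic. A sketch (the assumption that all reflected light is reabsorbed below the surface is mine):

```python
# In steady state, the flux transmitted through the mirror must equal the
# flux generated in the interior; the gas below heats up until that holds.
def equilibrium_surface_flux(F_generated, transmissivity):
    """Upward flux the gas must emit so the transmitted flux balances generation."""
    return F_generated / transmissivity

F0 = 1.0                                   # interior flux, arbitrary units
tau = 0.5                                  # half-silvered mirror
F_up = equilibrium_surface_flux(F0, tau)   # gas now emits twice as much...
F_escaping = tau * F_up                    # ...but the escaping flux is unchanged
print(F_up, F_escaping)                    # 2.0 1.0
```

The gas temperature rises only by a factor of 2**0.25, about 19 percent, while the luminosity is untouched.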
 
  • #44
Fantasist said:
The only consequence of the mirror would be that the radiation density inside would be now twice as high as before, which however exactly compensates for the 50% transmissivity of the mirror, i.e. the luminosity observed outside will be unchanged, but so will be the temperature on the outside surface of the mirror (as per the Stefan-Boltzmann law).
You were basically correct up to that last part. The temperature of the mirror is not defined, mirrors don't need a temperature. But the key point is, the emergent light will be bluer, so the star will look hotter-- as well it should, the temperature of the gas will be higher. But the luminosity is the same! So we have a case where the Stefan-Boltzmann law does not apply at the surface, yet we can still know the luminosity (to a reasonable approximation) via the physics of that link.

If people think the mirror is too artificial to make the point, instead imagine that scattering centers, such as little white balls, are sprinkled liberally around the surface regions of the star so that they have the same effect as a half-silvered mirror. What will happen is, again the Stefan-Boltzmann law will not apply at the surface, and the temperature at the surface will go up, the star will look bluer-- and the luminosity will not change. So I do not need to know if those white balls are there or not to get the luminosity, but I do need to know it to get the surface temperature. That's just incontrovertible proof that surface physics is essentially irrelevant to the luminosity of a star, unless it is something really extreme. None of Drakkith's arguments refute that in the least, though I do not dispute that the microphysics of how that luminosity is emitted from the surface involves the temperature of the gas doing the emitting-- I am talking about how to know what the luminosity must be, from first principles.

Now, back to the real point of the thread: nuclear fusion is equally unnecessary, for a basic understanding of the luminosity of a radiative star that has a simple internal structure that makes it essentially "all one thing" (some stars have shell fusion that breaks them quite radically up into a core and an envelope, we're talking about main-sequence stars or stars just before they reach the main sequence).
 
  • #45
I tell you what, Ken: you derive the luminosity of a star from first principles without using the surface temperature of the star at all (even as an approximation), and you'll convince me. Until then I stand by both links you've posted, which both use the surface temperature.
 
  • #46
OK, that is a perfectly fair challenge (though actually, neither really uses the surface T, but it's better if I just show you). I will simply use the time it takes light to diffuse out. Start with a star of mass M, which is at a point in its contraction where it has radius R (this will turn out not to matter). By hydrostatic equilibrium, its characteristic average temperature (not surface temperature, I don't care what that is) satisfies kT ~ GMm/R, where m is the mass of a proton, and I won't bother to include any order-unity factors (the derivation is intended to be rough and conceptual). The energy density in a thermal radiation field is aT^4, where a is related to the Stefan-Boltzmann constant (a = 4*sigma/c), and the volume is of order R^3, so the total radiant energy in the star is of order aT^4 R^3. The luminosity is then this amount of energy, divided by the characteristic diffusion time. Call the diffusion time t, and we have
L ~ aT^4 R^3 / t.

Now all we need is t. For that, we need to know the time light takes to cross a "mean free path", and we need to know how many mean free paths it has to cross. The mean free path is given by l = 1/(kappa*rho), where kappa is the cross section per gram and rho is the mass density. Light will cross l in a time l/c, but as is well known for a random walk, the number of times it has to do that is (roughly) equal to the square of the number of mean free paths across the star. Hence, we have
t ~ (l/c)*(R/l)^2.
Plug and chug all this into the expression for L, and you get:
L ~ sigma*T^4 R/(kappa*rho) ~ sigma*T^4 R^4/(kappa*M)
Now put in T ~ GMm/(kR), and voila,
L ~ M^3 * A
where A = (Gm/k)^4 * sigma/kappa, so if we make the rough approximation that the cross section per gram kappa is a fixed constant (as is true for free electron scattering, but not in general for all the kinds of opacity we find in a star), then we can think of A as a constant. (In actuality, A will tend to increase with M because higher-mass stars are lower-density stars, and that tends to drop kappa as metals ionize, so the actual power is a little steeper than M^3.)

Bottom line, we not only get the L ~ M^3 scaling we find in the mass-luminosity relationship (it's a bit steeper, more like L ~ M^3.5 on average), we can even estimate the constant A if we know something about the opacity kappa, so we can flat-out estimate the luminosity of a star knowing only its mass. No fusion, no surface T, fairly reasonable accuracy, though you can't expect too much-- there's no convection, and no detailed opacity physics, in this model! So there are considerable inaccuracies, but not due to not knowing about fusion, and not due to not knowing the surface physics-- neither of those matters nearly as much as simply not knowing the opacity and what convection does!
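The scaling above is easy to evaluate numerically. A sketch in cgs units (since every order-unity factor was dropped in the derivation, the prefactor can miss by a large factor; the M^3 scaling, not the prefactor, is the real content):

```python
# Plugging numbers into the scaling L ~ (G*m/k)^4 * (sigma/kappa) * M^3
# (cgs units; all order-unity factors were dropped, so this is only an
# order-of-magnitude estimate, as the post itself warns).
G = 6.674e-8        # cm^3 g^-1 s^-2
M_P = 1.673e-24     # proton mass, g
K_B = 1.381e-16     # erg/K
SIGMA = 5.670e-5    # erg cm^-2 s^-1 K^-4
KAPPA = 0.4         # electron-scattering opacity, cm^2/g
L_SUN = 3.828e33    # erg/s
M_SUN = 1.989e33    # g

def luminosity(M):
    """Rough luminosity (erg/s) of a radiative star of mass M (g)."""
    A = (G * M_P / K_B) ** 4 * SIGMA / KAPPA
    return A * M ** 3

# Lands within a couple orders of magnitude of the true solar value,
# which is what "rough and conceptual" buys you
print(f"L(1 Msun) ~ {luminosity(M_SUN) / L_SUN:.0f} Lsun")
```

Doubling the mass multiplies the result by exactly 8, with no dependence on R anywhere.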
 
  • #47
Alright, you've convinced me that you can find the luminosity without ever considering the surface temp.
 
  • #48
OK thanks. Your skepticism is just good science. Now we must turn to the main issue-- notice the significance that I did not mention fusion at all. Many people are convinced that a star will emit whatever luminosity is pumped out by the fusion rate, and you can see all kinds of erroneous arguments about why high-mass stars fuse faster, but we can now see the reason they do that: they emit light faster, and the fusion just has to keep up (because fusion is self-regulated to supply whatever heat is lost by the star, much like a thermostat). Isn't that remarkable?
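The thermostat analogy can be made concrete with a toy feedback loop. This is a caricature, not a stellar model: the T^4 sensitivity (roughly that of the pp chain) and the relaxation gain are illustrative choices of mine.

```python
# Toy thermostat: fusion power scales steeply with core temperature,
# so T nudges itself until fusion output matches whatever luminosity
# the star happens to be radiating.
def equilibrate(L_star, T0=1.0, nu=4, gain=0.05, steps=2000):
    """Core T (arbitrary units) at which fusion power T**nu matches L_star."""
    T = T0
    for _ in range(steps):
        L_fusion = T ** nu
        # deficit -> slight contraction -> heating; surplus -> expansion -> cooling
        T *= 1.0 + gain * (L_star - L_fusion) / max(L_star, L_fusion)
    return T

# Doubling the rate at which heat leaks out only warms the core by 2**(1/4)
T1 = equilibrate(1.0)
T2 = equilibrate(2.0)
print(round(T1, 3), round(T2 / T1, 3))
```

The luminosity is the input and the fusion rate is the output, not the other way around: that is the thread's whole point.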
 
  • #49
Ken G said:
OK, that is a perfectly fair challenge (though actually, neither really uses the surface T, but it's better if I just show you). I will simply use the time it takes light to diffuse out. Start with a star of mass M, which is at a point in its contraction where it has radius R (this will turn out not to matter). By hydrostatic equilibrium, its characteristic average temperature (not surface temperature, I don't care what that is) satisfies kT ~ GMm/R, where m is the mass of a proton, and I won't bother to include any order-unity factors (the derivation is intended to be rough and conceptual).

Would you like to demonstrate that?
What is this "characteristic average temperature", precisely how is it calculated, and what is its relevance?

As far as I can follow:
A star can be held up by 3 sources of pressure:
1) Pressure of light
2) Thermal pressure of electrons, ions, atoms or molecules
3) Degeneracy pressure of electrons.
Now, stars which are held up mainly by 1) tend to be weakly stable against free expansion or contraction.
Stars which are held up mainly by 3) tend to be weakly stable against thermal runaway heating or cooling.
So we can concentrate on the assumption that a star is held up mainly by 2).
Can you demonstrate precisely which is the temperature derivable from first principles?
 
  • #50
snorkack said:
Would you like to demonstrate that?
What is this "characteristic average temperature", precisely how is it calculated, and what is its relevance?
Simplifying concepts like an "average characteristic temperature" of the interior of a star are quite powerful for conceptual understanding of a wide array of things, you should add them to your arsenal. They must be used with care, which is why I said the star has to have a simple internal structure (more technically, a "polytrope"), which conceptually means that the star is "all one thing" whose internal values are characterized by global numbers like T, R, and M. In this case, the value is easy to demonstrate-- just compare the result I derived for the luminosity of a main sequence star, using a reasonable approximation for the characteristic cross section per gram (free electron opacity is kind of a lower bound there of about kappa = 0.4 cm^2/g, so using that would yield an upper bound to the luminosity, but real stellar opacities are larger by up to a factor of 10 or so), and see what you get. That will demonstrate for you the value of concepts like characteristic average internal temperatures.
As far as I can follow:
A star can be held up by 3 sources of pressure:
1) Pressure of light
This is negligible for all but the highest mass stars, and would require modifications to the connection between T and M/R that I used, and yield the "Eddington limit" where L is proportional to M. My derivation is for all stars with M below about 50 times solar or so.
2) Thermal pressure of electrons, ions, atoms or molecules
Yes, that's what I'm using.
3) Degeneracy pressure of electrons.
I'm using that the gas is not degenerate. So this all relates to my expression for T in terms of M/R, which only works for your case (2), but that's the vast majority of main-sequence stars.
Now, stars which are held up mainly by 1) tend to be weakly stable against free expansion or contraction.
True enough, but not relevant.
Stars which are held up mainly by 3) tend to be weakly stable against thermal runaway heating or cooling.
Not necessarily, it depends on whether they reach temperatures capable of fusing any remaining nuclear fuel they possess. Still, that doesn't matter here, I've been very clear about the kind of star I am talking about: main-sequence stars, or just before they enter the main sequence (to see why fusion doesn't matter much).
So we can concentrate on the assumption that a star is held up mainly by 2).
Yes.
Can you demonstrate precisely which is the temperature derivable from first principles?
I used the "virial theorem" to arrive at kT ~ GMm/R. That is a first principle. It doesn't apply to your case (1) because it neglects radiation pressure, and it doesn't apply to case (3) because it associates kT with the average kinetic energy per particle, but degeneracy reduces T way below that.
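For what it's worth, the virial estimate is a one-liner to evaluate (SI units; the solar values are illustrative inputs of mine, and all order-unity factors are dropped as in the derivation):

```python
# Characteristic interior temperature from kT ~ G*M*m/R (virial estimate)
G = 6.674e-11       # m^3 kg^-1 s^-2
K_B = 1.381e-23     # J/K
M_P = 1.673e-27     # proton mass, kg
M_SUN = 1.989e30    # kg
R_SUN = 6.957e8     # m

T_virial = G * M_SUN * M_P / (K_B * R_SUN)
print(f"T ~ {T_virial:.2e} K")   # a few 10^7 K, comparable to the solar core
```

That it lands in the right ballpark for the solar core, from nothing but M, R, and hydrostatic balance, is the point of calling it a "characteristic" temperature.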
 
  • #51
Ken G said:
You were basically correct up to that last part. The temperature of the mirror is not defined

If you would touch the mirror, you could convince yourself that its temperature is defined.

Ken G said:
But the key point is, the emergent light will be bluer, so the star will look hotter-- as well it should, the temperature of the gas will be higher. But the luminosity is the same!

But that could not be a blackbody spectrum anymore, as otherwise it would violate energy conservation.
 
  • #52
Ken G said:
(free electron opacity is kind of a lower bound there of about kappa = 0.4 cm^2/g, so using that would yield an upper bound to the luminosity, but real stellar opacities are larger by up to a factor of 10 or so),
Does it mean that subdwarfs are brighter for the same mass, not only smaller and hotter?
Ken G said:
This is negligible for all but the highest mass stars, and would require modifications to the connection between T and M/R that I used, and yield the "Eddington limit" where L is proportional to M. My derivation is for all stars with M below about 50 times solar or so.
Massive stars run into Eddington limit in main sequence, other stars encounter it later.
Ken G said:
True enough, but not relevant.
It is the reason I give for excluding case 1). Bright stars tend to be short-lived not only because they are bright (duh) but because they have poor stability.
Ken G said:
Not necessarily, it depends on whether they reach temperatures capable of fusing any remaining nuclear fuel they possess.
If they don't then the unstable thermal runaway simply operates in the cooling direction.
Ken G said:
Yes.I used the "virial theorem" to arrive at kT ~ GMm/R. That is a first principle. It doesn't apply to your case (1) because it neglects radiation pressure, and it doesn't apply to case (3) because it associates kT with the average kinetic energy per particle, but degeneracy reduces T way below that.
And that shows the question of what the significance of that T is.
 
  • #53
Ken G said:
Bottom line, we not only get the L ~ M^3 scaling we find in the mass-luminosity relationship

It is the mass-luminosity relationship (essentially the same derivation as the one on Wikipedia page). And it is not really surprising that the luminosity is basically only determined by the mass (after all, the mass of the primordial cloud is the only parameter that can possibly make any difference for the star formation (assuming identical chemical composition)).

Ken G said:
we can even estimate the constant A if we know something about the opacity kappa, so we can flat out estimate the luminosity of a star knowing only its mass. No fusion, no surface T

It is not further surprising that fusion didn't come into it, as the assumption of 'blackbody' radiation doesn't have to care about the details of the processes by means of which radiation is created and destroyed. It is a 'black box' model based on the assumption of an equilibrium between emission and absorption processes (whatever they may be).

In any case, you can calculate the luminosity from the surface temperature (as determined from the spectrum), and I bet you will get a far more accurate value for it than from your mass-luminosity relationship (where, as you seem to realize yourself, you have to make certain assumptions about the stellar structure and other parameters determining the diffusion process if you want to arrive at an absolute numerical value for the luminosity).

Ken G said:
you can see all kinds of erroneous arguments about why high-mass stars fuse faster, but we can now see the reason they do that: they emit light faster,

That would contradict your derivation above: the time t increases with increasing radius and thus with increasing mass. So a more massive star should take longer to emit a certain percentage of the radiative energy it contains.

Ken G said:
and the fusion just has to keep up (because fusion is self-regulated to supply whatever heat is lost by the star, much like a thermostat).

I don't think the fusion rate cares about the radiation lost from the star. It is only determined by the local temperature and density. If you put a 100% reflective mirror around the star, the temperature will steadily increase, and I don't think the fusion will regulate itself down in response. On the contrary, it will result in a fusion bomb.
 
  • #54
Fantasist said:
If you would touch the mirror, you could convince yourself that its temperature is defined.
I'll presume you are being facetious, but the mirror would feel hot because it is radiating light. A perfect mirror does not have a temperature.
But that could not be a blackbody spectrum anymore, as otherwise it would violate energy conservation..
It would have the same spectrum as a blackbody, but not the same flux as the Stefan-Boltzmann law. This is called "albedo."
 
  • #55
snorkack said:
Does it mean that subdwarfs are brighter for the same mass, not only smaller and hotter?
Yes, that occurred to me as well. Subdwarfs must have lower metallicity not just at their surfaces but all over, so they should have higher luminosity for the same mass. Yet they fall below the main sequence for the same spectral type. So I think what must be happening there is that they really are superluminous for their mass, but because the main sequence is so steep in an HR diagram, and their surface temperatures are shifted upward (perhaps by the very albedo effect we are talking about), they end up looking underluminous for their surface T.
Massive stars run into Eddington limit in main sequence, other stars encounter it later.
Yes, I mentioned that, but only very massive stars.
It is the reason I give for excluding case 1). Bright stars tend to be shortlived not only because they are bright (duh) but because they have poor stability.
They are short-lived because they burn up their nuclear fuel quickly, and nuclear fuel is the main thing that delays a star's evolution. Also, low-mass stars have access to the white dwarf stage, which is extremely long-lived. So we really have two issues here-- one is, how quickly do they evolve to their "end stage" (and that is all about how fast their heat leaks out in the form of light), and the other is, what is that end stage and how long-lived is that. I speak only to the first issue here, the second is another thread.
If they don't then the unstable thermal runaway simply operates in the cooling direction.
No, white dwarfs in the absence of fusion have no runaway effects, they just gradually cool as their heat leaks out. The reason nuclear fusion is thermally unstable in a white dwarf is that the faster it occurs, the more it piles up heat, which increases the temperature of the nuclei, and that increases the fusion rate. If no fusion is occurring, no instabilities are present.
And that shows the question of what the significance of that T is.
T is quite important, that's why I invoke it. But this is the characteristic interior T, not the surface T, which is totally different and is set by the luminosity. The interior T is set by the hydrostatic equilibrium. It's apples and oranges, which is why that Wiki approach is a conceptual boondoggle.
 
  • #56
Ken G said:
They are short-lived because they burn up their nuclear fuel quickly, and nuclear fuel is the main thing that delays a star's evolution.
If it were the case, Eddington limit would set a lower bound to stellar lifetime.
Ken G said:
No, white dwarfs in the absence of fusion have no runaway effects, they just gradually cool as their heat leaks out. The reason nuclear fusion is thermally unstable in a white dwarf is that the faster it occurs, the more it piles up heat, which increases the temperature of the nuclei, and that increases the fusion rate. If no fusion is occurring, no instabilities are present.
The same thermal instability can operate in the other direction: the slower the fusion occurs, the cooler the star gets, and that further slows down fusion, and so on. That is why brown dwarfs do not sustain long-term protium fusion even if they fuse some small amount of protium when heated up by the initial contraction, and sustain only an even lower rate of protium fusion through pure pycnonuclear reactions.
Ken G said:
T is quite important, that's why I invoke it. But this is the characteristic interior T, not the surface T, which is totally different and is set by the luminosity. The interior T is set by the hydrostatic equilibrium.

Where is that "characteristic" T?
 
  • #57
Fantasist said:
It is the mass-luminosity relationship (essentially the same derivation as the one on Wikipedia page).
Yes it is, but the Wiki derivation is horrendous, because it first does it completely wrong (plug in the numbers you'd get from their approach, you'll see how staggeringly wrong it is), and then applies a "correction," which completely eradicates the original horrendous physics, and swaps in the real physics through the back door. It is a perfect example of what a conceptual morass you end up in if you think you should be using surface temperature to infer luminosity. When you understand what they really did there, you'll see what I mean.
And it is not really surprising that the luminosity is basically only determined by the mass (after all, the mass of the primordial cloud is the only parameter that can possibly make any difference for the star formation (assuming identical chemical composition)).
It is extremely surprising that it depends only on the mass, in the sense that it is surprising it does not depend on either R or the fusion physics.

The lack of dependence on R means that if you have a radiating star that is gradually contracting (prior to reaching the main sequence), its luminosity should not change! That would be true even if the star contracted by a factor of 10, if the opacity did not change, and the internal physics did not shift from convection to radiation. But contracting stars do tend to start out highly convective, so do make that transition, and that's why we generally have not noticed this remarkable absence of a dependence on R.

The lack of dependence on fusion physics means that when a star initiates fusion, nothing really happens to the star except it stops contracting. That's not necessarily what must happen; for example, when later in the star's life it begins to fuse helium, it will undergo a radical change in structure, and change luminosity drastically. But the onset of hydrogen fusion does not come with any such drastic restructuring of the star, because it started out with a fairly simple, mostly radiative structure, and when fusion begins, it just maintains that same structure because all the fusion does is replace the heat that is leaking out.
It is not further surprising that fusion didn't come into it, as the assumption of 'blackbody' radiation doesn't have to care about the details of the processes by means of which radiation is created and destroyed.
Try telling that to a red giant that begins fusing helium in its core! But you are certainly right that if we get away with assuming that fusion does not affect the internal structure of the star, then that structure is indeed a kind of black box. That's how Eddington was able to deduce that internal structure before he even knew that fusion existed. Still, if you think it's not surprising that fusion doesn't matter, then not only have you learned an important lesson, you may also find it hard to read all the textbooks and online course notes that tell you the fusion physics explains the mass-luminosity relationship!
In any case, you can calculate the luminosity from the surface temperature (as determined from the spectrum), and I bet you will get a far more accurate value for it than from your mass-luminosity relationship (where, as you seem to realize yourself, you have to make certain assumptions about the stellar structure and other parameters determining the diffusion process if you want to arrive at an absolute numerical value for the luminosity).
I'm sure that's true, but it fails the objective of understanding the luminosity from first principles. We can also just measure the luminosity, that's the most accurate way yet!
That would contradict your derivation above: the time t increases with increasing radius and thus with increasing mass.
That's not what I meant by "emit light faster", I did not mean "the diffusion time is less", I meant "they emit light from their surface at a faster rate."
I don't think the fusion rate cares about the radiation lost from the star.
Well, we know that cannot be true, because the fusion rate equals the rate that radiation is lost from the star.
It is only determined by the local temperature and density.
Thank you for bringing that up, it's an important part of the mistake that many people make. You will see a lot of places that say words to the effect that "because fusion depends so sensitively on temperature, the fusion rate controls the luminosity". That's exactly backward. Because the fusion rate depends so sensitively on temperature, tiny changes in T affect the fusion rate a lot, so the fusion rate has no power to affect the star at all. After all, the thermodynamic properties of the star are not nearly as sensitive to T, so we just need a basic idea of what T is to get a basic idea of what the star is doing. But since fusion needs a very precise idea of what T is, we can always get the fusion to fall in line with minor T modifications. That's why fusion acts like a thermostat on the T, but it has little power to alter the stellar characteristics other than establishing at what central T the star will stop contracting.

If you don't see that, look at it this way. Imagine you are trying to iterate a model of the Sun to get its luminosity right. You give it an M and a R, and you start playing with T. You can get the T basically right just from the gravitational physics (the force balance), and you see that it is in the ballpark of where fusion can happen. You also get L in the right ballpark, before you say anything about fusion (as I showed). But now you want to bring in fusion, so you tinker with T. Let's say originally your T was too high, so the fusion rate was too fast and was way more than L. So you lower T just a little, and poof, the fusion rate responds mightily (this is especially true of CNO cycle fusion, more so than p-p chain, so it works even better for stars a bit more massive than the Sun). So you don't need to change T much at all, so you don't need to update the rest of your calculation much, so you end up not changing L to reach a self-consistent solution! So we see, it is precisely the T-sensitivity of fusion that has made it not affect L much, though many places you will see that logic exactly reversed.
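That iteration can be illustrated with a toy calculation. This is only a sketch: the temperature exponents below (roughly 20 for CNO-cycle fusion and roughly 4 for the p-p chain near solar core conditions) are textbook ballpark values I am assuming, not numbers from this thread, and the factor-of-2 overshoot is arbitrary:

```python
# Toy model: fusion rate scales as a steep power of temperature,
# eps(T) = eps0 * (T / T0)**n, with n large for CNO-like fusion.
# If the initial guess for T gives a fusion rate a factor of 2 too
# high, how much must T drop so the fusion rate matches L?

def temperature_correction(rate_ratio, n):
    """Fractional change in T needed to rescale the fusion rate
    by 1/rate_ratio, given eps ~ T**n."""
    return rate_ratio ** (-1.0 / n) - 1.0

n_cno = 20   # rough T-exponent for CNO-cycle fusion (assumed)
n_pp = 4     # rough T-exponent for p-p chain fusion (assumed)

dT_cno = temperature_correction(2.0, n_cno)
dT_pp = temperature_correction(2.0, n_pp)

print(f"CNO: T must change by {dT_cno:+.1%}")  # about -3.4%
print(f"p-p: T must change by {dT_pp:+.1%}")   # about -15.9%
```

The steeper the T-dependence, the smaller the temperature adjustment needed, which is exactly why the rest of the stellar model (and hence L) barely changes when fusion is brought in.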
If you put a 100% reflective mirror around the star, the temperature will steadily increase, and I don't think the fusion will regulate itself down in response. On the contrary, it will result in a fusion bomb.
Yes, 100% reflection causes a lot of physical difficulties, because you can't reach an equilibrium. Even if you just stick to 99%, you would not have much problem-- L would still not be changed much.
 
  • #58
snorkack said:
If that were the case, the Eddington limit would set a lower bound on stellar lifetime.
Well, the Eddington limit does set a lower bound to stellar lifetime! Any star with given mass M has a lower limit to its main-sequence lifetime, set by the Eddington limit, but it is generally way shorter than the actual main-sequence lifetime-- except for stars of mass of about 50 solar masses or more.
The same thermal instability can operate in the other direction. The slower the fusion occurs, the cooler the star gets, and that further slows down fusion, etc.
Yes, if there is something that is fusing in the first place. As I said, there is no instability if there is no fusion going on.
Which is why brown dwarfs do not sustain long-term protium fusion even if they fuse some small amounts of protium when heated up by initial contraction, and also sustain an even lower rate of protium fusion due to pure pycnonuclear reactions.
That can't be right. Any instability can go in either direction, so the issue is, which direction is going to dominate? If you have an instability that can either turn off fusion, or make it go nuts, then in some places you will turn the fusion off, and in other places you will make it go nuts. Which of those places is going to matter more, say in an H bomb?
Where is that "characteristic" T?
Throughout the interior of a star, where T is uniformly high and not varying dramatically (though obviously it monotonically decreases with r). If you want to make it precise, you "de-dimensionalize" your T variable. That means you write T(r) = To*y(x) using r = R*x, where y(x) is a dimensionless order-unity function that determines the details of the T structure, and x runs from 0 to 1. Here To is what I am calling the "characteristic T." Then we assume a "homology class", which means that as we vary M from one model to another, we assume that the function y(x) stays the same, so we can look for scaling relations between T and M and R and L and so on. This is also a key aspect of what are called "polytropic models", used routinely (and by Eddington) to model stars. What you don't seem to recognize is that everything I'm saying is basic stellar physics, nothing but a simplified and more conceptually accessible version of Eddington's work on stellar structure.
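The homology argument can be made concrete with an order-of-magnitude sketch, assuming a fully radiative interior, an ideal gas, and constant opacity (all relations up to dimensionless factors):

```latex
% De-dimensionalize the temperature profile:
\[
  T(r) = T_0\, y(x), \qquad r = R\,x, \qquad 0 \le x \le 1 .
\]
% Hydrostatic equilibrium of an ideal gas fixes the characteristic T_0:
\[
  k T_0 \sim \frac{G M \mu m_H}{R} .
\]
% Radiative diffusion carries the luminosity; with \rho \sim M/R^3:
\[
  L \sim \frac{a c\, T_0^4\, R}{\kappa \rho}
    \sim \frac{a c}{\kappa}\left(\frac{G \mu m_H}{k}\right)^{4} M^{3} ,
\]
% so R cancels: for fixed opacity, L depends on the mass alone.
```

This is the same cancellation of R noted earlier in the thread: the characteristic T scales as M/R, but the diffusion physics removes the R-dependence from L.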
 
Last edited:
  • #59
Ken G said:
Well, the Eddington limit does set a lower bound to stellar lifetime! Any star with given mass M has a lower limit to its main-sequence lifetime, set by the Eddington limit, but it is generally way shorter than the actual main-sequence lifetime-- except for stars of mass of about 50 solar masses or more.
Then where are all these stars with different large masses and equal main sequence lifetimes?
Ken G said:
That can't be right. Any instability can go in either direction, so the issue is, which direction is going to dominate? If you have an instability that can either turn off fusion, or make it go nuts, then in some places you will turn the fusion off, and in other places you will make it go nuts. Which of those places is going to matter more, say in an H bomb?
If the instability goes in the fusion direction then the instability disappears and causes stable fusion, as if there had been no instability to begin with.
Ken G said:
Throughout the interior of a star, where T is uniformly high and not varying dramatically (though obviously it monotonically decreases with r). If you want to make it precise, you "de-dimensionalize" your T variable. That means you write T(r) = To*y(x) using r = R*x, where y(x) is a dimensionless order-unity function that determines the details of the T structure, and x runs from 0 to 1. Here To is what I am calling the "characteristic T." Then we assume a "homology class", which means that as we vary M from one model to another, we assume that the function y(x) stays the same, so we can look for scaling relations between T and M and R and L and so on.

But the point is that opacity varies with temperature in a complex manner.
 
  • #60
snorkack said:
Then where are all these stars with different large masses and equal main sequence lifetimes?
At this very moment? Mostly in star-forming regions in the spiral arms of galaxies, I should imagine. They're just rare, stars with such high masses are rare. Many seem to think they would have been much more common in the very early universe, so we might perhaps conclude that population III stars largely have that property. It is easy to estimate that minimum lifetime: set L = 4 Pi G M c/kappa and t = f M c²/L, where f is some small fusion efficiency factor like 0.001 which accounts for how much mass is in the core and how much energy it can release. We get that the minimum main-sequence lifetime, which is also the main-sequence lifetime of all the highest-mass stars, is about t = f c kappa/(4 Pi G). We also have to estimate the cross section per gram, which is kappa, but if we take free electrons as our opacity, then kappa is about 0.4 cm²/g, which is a lower bound, so perhaps just take 1. The result is then about a million years, not a bad estimate.
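That arithmetic is easy to check. A minimal sketch in CGS units, using the same rough values for f and kappa as in the estimate above (note that M cancels out of t = f M c²/L_Edd):

```python
import math

# Order-of-magnitude check of the minimum main-sequence lifetime:
# L_Edd = 4*pi*G*M*c/kappa and t = f*M*c**2 / L_Edd, so the mass
# cancels and t = f*c*kappa/(4*pi*G).

G = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10     # speed of light [cm/s]
kappa = 1.0      # opacity [cm^2/g]; electron scattering ~0.4, rounded up
f = 1e-3         # small fusion efficiency factor, as assumed in the text
YEAR = 3.156e7   # seconds per year

t_min = f * c * kappa / (4 * math.pi * G)  # seconds
print(f"minimum main-sequence lifetime ~ {t_min / YEAR:.1e} yr")
# comes out at roughly a million years, as stated above
```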
If the instability goes in the fusion direction then the instability disappears and causes stable fusion, as if there had been no instability to begin with.
Then you will have stable fusion, not fusion turning off everywhere like you claimed above. I just don't see how that flavor of instability is of any particular importance, eventually the star will be in a state of stable fusion if it has the instability you describe. Indeed, that's probably more or less just what's happening in the Sun right now, where fusion on very small scales can either turn itself off or go unstable, but on larger scales you see stable burning. The details don't matter, the total fusion rate is still set by the luminosity! That's the most important thing to get from this thread: the details of fusion don't matter, and that's why you don't see any difference in the star when fusion initiates, or any difference along the main sequence when p-p chain fusion at lower mass gives way to CNO cycle fusion for higher mass stars. Even the one detail that is somewhat important in some ways, which is the fact that fusion is very T-sensitive and quite capable of yielding any L you need, only comes into play in explaining why the main sequence is so narrow in an H-R diagram, which means that stars cease contracting when in that phase.
But the point is that opacity varies with temperature in a complex manner.
A fact I pointed out myself. That's why idealizations are necessary to understand the mass-luminosity relation. If you want high accuracy, you must put that in, plus a whole lot of other things like convection zones, neutrino losses, winds, metallicity, rotation, perhaps magnetic fields...etc.
 
Last edited:
  • #61
Ken G said:
At this very moment? Mostly in star-forming regions in the spiral arms of galaxies, I should imagine. They're just rare, stars with such high masses are rare. Many seem to think they would have been much more common in the very early universe, so we might perhaps conclude that population III stars largely have that property. It is easy to estimate that minimum lifetime: set L = 4 Pi G M c/kappa and t = f M c²/L, where f is some small fusion efficiency factor like 0.001 which accounts for how much mass is in the core and how much energy it can release. We get that the minimum main-sequence lifetime, which is also the main-sequence lifetime of all the highest-mass stars, is about t = f c kappa/(4 Pi G). We also have to estimate the cross section per gram, which is kappa, but if we take free electrons as our opacity, then kappa is about 0.4 cm²/g, which is a lower bound, so perhaps just take 1. The result is then about a million years, not a bad estimate.
The question is, do massive stars near the Eddington limit exist for the periods of time over which a significant fraction of their protium is fused (as computed, around 2 million years), or are they destroyed in completely different and much faster ways (shedding most of their mass, unfused, through steady stellar winds or radial oscillations)?
Ken G said:
Then you will have stable fusion, not fusion turning off everywhere like you claimed above.
Yes, if the instability is in direction of runaway heating. Yet the instability can also go in the direction of runaway cooling.
Ken G said:
I just don't see how that flavor of instability is of any particular importance, eventually the star will be in a state of stable fusion if it has the instability you describe.
No, it often is in state of long term cooling, and a brown dwarf rather than a star. Look at the mass/luminosity relationship of old stars, and it is NOT a continuous relationship because of the discontinuous jump between the least massive red dwarfs and most massive brown dwarfs.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path dependent state and luminosity?
Ken G said:
Indeed, that's probably more or less just what's happening in the Sun right now, where fusion on very small scales can either turn itself off or go unstable, but on larger scales you see stable burning.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
 
  • #62
snorkack said:
The question is, do massive stars near the Eddington limit exist for the periods of time over which a significant fraction of their protium is fused (as computed, around 2 million years), or are they destroyed in completely different and much faster ways (shedding most of their mass, unfused, through steady stellar winds or radial oscillations)?
That is indeed an open question. This analysis only covers the luminosity of the star, other evolutionary channels require a different analysis.
Yes, if the instability is in direction of runaway heating. Yet the instability can also go in the direction of runaway cooling.
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star, the runaway heating is never reversed, it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No, it often is in state of long term cooling, and a brown dwarf rather than a star. Look at the mass/luminosity relationship of old stars, and it is NOT a continuous relationship because of the discontinuous jump between the least massive red dwarfs and most massive brown dwarfs.
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
Are there perhaps even red and brown dwarfs of equal mass and composition, because of having a path dependent state and luminosity?
Again, I don't say there is no interesting physics happening to stars that are not ideal gases, I say that if they are subject primarily to ideal-gas physics, then the above derivation applies to them. If they are not, it doesn't.
Can it? The rate of protium fusion is slow and weakly dependent on temperature, while the Sun's heat capacity is huge.
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
 
Last edited:
  • #63
Ken G said:
But eventually, it will have gone the way of runaway heating in enough places that the star is no longer in that previous state, correct? So the runaway cooling cannot be an important contributor to the structure of the star, the runaway heating is never reversed, it must proceed until something stabilizes it. Just imagine a set of dimmer switches that can be turned up or down, but once they are on all the way, they stay on all the way-- wait long enough, and you will be in a bright room!
No. Runaway heating or cooling is too slow to take place in spots within the star - the heat is distributed faster within the star through adiabatic motion, convection, and conduction, so runaway cooling or heating happens to the whole star.
Ken G said:
I presumed that was because the most massive brown dwarfs have a different internal structure owing to non-ideal-gas type behavior. They are also fusing deuterium, not hydrogen, correct? In any event, it may have some interesting physics going on there, but it has nothing to say about the derivation I gave, as it is a different physical model. My derivation treats an ideal gas because I asserted that the average energy per particle has the ideal-gas connection to the temperature.
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Both young big brown dwarfs and young small red dwarfs are hot, they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that as they age, red dwarfs stabilize at some temperature and rate of protium fusion (these will actually grow in the long term as the protium fraction decreases), while the brown dwarfs continue to cool and the protium fusion slows down - and a decreasing radius does NOT cause an increase of interior temperature.
Ken G said:
I have no idea what you are saying here. Protium fusion is regular old p-p chain fusion, which is well known to be highly temperature sensitive (though less so than CNO-cycle, that much is true). The large heat capacity of the Sun only means that we can assume the energy in the radiation field is the slave to the heat content, as was done when I used the characteristic T of the ideal gas to get the T of the radiation field. I'm not sure what you are objecting to, the derivation is quite transparent.
A small deviation in the Sun's interior temperature has a tiny effect on actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to the deviation from hydrostatic balance.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
 
  • #64
snorkack said:
No. Runaway heating or cooling is too slow to take place in spots within the star - the heat is distributed faster within the star through adiabatic motion, convection, and conduction, so runaway cooling or heating happens to the whole star.
Then when the heating runs away for the whole star, in the heating way, what stabilizes it, and how does it ever go unstable again? This model just sounds like the helium flash of a normal star, which stabilizes when it knocks the core completely out of the unstable state. That occurs when the gas is highly degenerate; perhaps there's some different physics when the degeneracy is only partial. In any event, the derivation I gave is for ideal gases with minimal radiation pressure, like the main sequence below about 50 solar masses but with enough mass to not have become degenerate by the time fusion begins (objects that do are generally not called main-sequence stars).
They fuse deuterium and lithium. They ALSO fuse some protium, especially when they are young and hot from the initial contraction. And so do young red dwarfs.
Sure, and if they are radiative ideal gases, my derivation applies to them. The nature of the fusion is irrelevant, as long as it is stabilized in the usual way that fusion is stable in a large ideal gas. The other branch you are describing just sounds like it's not ideal gas physics, so it says nothing about my derivation.
Both young big brown dwarfs and young small red dwarfs are hot, they have some contribution to pressure from thermal pressure and some from degeneracy, and some rate of protium fusion. The difference is that as they age, red dwarfs stabilize at some temperature and rate of protium fusion (these will actually grow in the long term as the protium fraction decreases), while the brown dwarfs continue to cool and the protium fusion slows down - and a decreasing radius does NOT cause an increase of interior temperature.
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
A small deviation in the Sun's interior temperature has a tiny effect on actual fusion heat generation, so that effect is completely swamped by the rapid adiabatic response to the deviation from hydrostatic balance.
The adiabatic response is due to the heat generation! But yes, the net result is the stabilization of the fusion, so it can do what I have been saying it does: replace the lost heat, period.
After hydrostatic balance is restored, what is the size and direction of the remaining thermal imbalance?
When the physics is ideal gas physics, as in the Sun, there is no "remaining thermal imbalance", the adiabatic response stabilizes the thermal state. It makes the fusion do nothing but replace the heat lost due to the luminosity of the star, as derived above.
 
Last edited:
  • #65
Ken G said:
Then when the heating runs away for the whole star, in the heating way, what stabilizes it,
Increasing contribution of thermal pressure.
Ken G said:
I'm sure you'll find that's all due to the deviation from ideal gas physics. It could be included as some kind of addendum to the derivation of this thread, along the lines of how things are different if the temperature does not come directly from the average kinetic energy per particle as it does in an ideal gas.
Yes, it is the contribution of degeneracy pressure.
Now imagine a shrinking ball of gas, and make the assumption that its radial distribution of temperature and density remains unchanged, that it obeys ideal gas laws, and also that its heat capacity is constant (this last is least likely).
If the radius shrinks by a factor of two,
then the density increases 8 times
the surface gravity increases 4 times
the pressure of a column of fixed depth thus increases 32 times
since the column of gas from surface to centre gets 2 times shorter, the central pressure grows 16 times
but since the central density grew just 8 times, the central temperature must have doubled.
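The chain of scalings above is easy to verify numerically. A minimal sketch in arbitrary units (constants of proportionality dropped, since only the ratios matter):

```python
# Numerical check of the scaling argument for a homologous ideal-gas
# ball of fixed mass M whose radius halves.

def central_quantities(M, R):
    rho = M / R**3     # mean density (up to a constant)
    g = M / R**2       # surface gravity ~ G*M/R^2
    P_c = rho * g * R  # central pressure ~ rho * g * (depth ~ R)
    T_c = P_c / rho    # ideal gas: T ~ P/rho
    return rho, g, P_c, T_c

before = central_quantities(M=1.0, R=1.0)
after = central_quantities(M=1.0, R=0.5)
ratios = [a / b for a, b in zip(after, before)]
print(ratios)  # [8.0, 4.0, 16.0, 2.0]
```

The output reproduces the factors in the list above: density up 8 times, gravity up 4 times, central pressure up 16 times, and hence the central temperature doubled.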

Now, think what degeneracy pressure does.
If you heat water at 1 atmosphere from 273 K to 277 K, it does NOT expand by 1.5% like an ideal gas would - it actually shrinks.
When you heat water from 277 K to 373 K, it does expand - but not by 35% like an ideal gas, only by about 4%.
Then, when you heat water from 373.14 K to 373.16 K, it expands over 1000 times!

If you heat water at higher pressures, you will find:
that it is slightly denser, because very slightly compressed, at any equal temperature below boiling point
that the boiling point rises with pressure
that water expands on heating near the boiling point at all pressures over about 0.01 atm
that the density of water at boiling point decreases with higher temperature and pressure
that steam, like ideal gas, expands on heating at each single pressure
that steam, like ideal gas, is compressed by pressure at each single temperature
that the density of steam at boiling point increases with pressure and temperature
that the contrast between boiling water and boiling steam densities decreases with temperature and pressure.

At a pressure of about 220 atmospheres, the contrast disappears.
Now, if you heat water at slightly over 220 bar, then the thermal expansion still starts very slight at low temperatures but increases and is, though continuous, very rapid around the critical point (a bit over 374 °C).

But when you increase the pressure further, you would find that the increase of water's thermal expansion from the minimal, liquid-like expansion at low temperature to the ideal-gas expansion proportional to temperature would take place at increasing temperatures and also become monotonic, no longer having a maximum near the critical point.

And interiors of planets and stars typically have much higher pressures than critical pressure. The transition between liquidlike behaviour of little thermal expansion and mainly degeneracy pressure at low temperature, and ideal-gas-like behaviour of volume or pressure proportional to temperature and mainly thermal particle pressure, would be continuous and monotonic.
 
  • #66
snorkack said:
Yes, it is the contribution of degeneracy pressure.
OK, so that's a different situation. It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
And interiors of planets and stars typically have much higher pressures than critical pressure.
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
The transition between liquidlike behaviour of little thermal expansion and mainly degeneracy pressure at low temperature, and ideal-gas-like behaviour of volume or pressure proportional to temperature and mainly thermal particle pressure, would be continuous and monotonic.
Sure, but the same could be said about general relativistic corrections as you go from a main-sequence star to a neutron star. You are still not using GR in most stellar models, because the corrections would be unimportant.
 
  • #67
Ken G said:
It's quite interesting physics, but not relevant to the luminosity of main-sequence stars.
Quite relevant.
Ken G said:
Well, that depends on what one means by "typical!" Certainly there are lots of brown dwarf stars out there, probably the most common type of star, but that's not what you see when you look up at the night sky. So stars like you describe are normally viewed as oddballs, ironically! The "typical star", to most astronomers, is a main-sequence star, and those are ruled by ideal gas pressure, and do not show liquid-like phase changes or degeneracy, until much later in life.
They do.
Now, excluding general relativistic effects and also heat production, and assuming a single radial distribution of temperature and density for each radius:

when a shrinking ball of gas is large and tenuous, its pressure is dominated by thermal pressure and therefore its internal temperature is proportional to the inverse of its radius, as demonstrated before;
whereas when the ball is dense and cool, its pressure is dominated by degeneracy pressure and therefore it has minimal thermal expansion - its radius is near a finite minimum and increases very slightly with temperature.
This is a continuous transition. The temperature of a shrinking ball of gas goes through a smooth maximum - first the temperature increases with the inverse of the radius, then the temperature increase slows below that rate, the temperature reaches a certain maximum, then the temperature falls while still being high and accompanied by significant further shrinking, and finally the temperature falls to low levels with very little further shrinking near the minimum size.

If there is no heat production then this is what happens to the shrinking ball of gas. The speed of evolution varies with heat loss rate, which gets slow at the low temperatures, so the ball would spend most of its evolution with temperature slowly falling towards zero and radius slowly shrinking towards nonzero minimum value. But the maximum of internal temperature would happen just the same.
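This temperature maximum can be sketched with the standard order-of-magnitude relation for a partially degenerate contracting ball, T ~ a*M^(2/3)*rho^(1/3) - b*rho^(2/3), where the ideal-gas term competes with non-relativistic degeneracy; the constants a and b here are arbitrary placeholders, not values from this thread:

```python
# Sketch of the temperature maximum of a contracting, partially
# degenerate ball. Analytically the maximum sits at
# rho**(1/3) = a*M**(2/3)/(2b), giving T_max = a**2 * M**(4/3) / (4b),
# so more massive balls peak at higher temperature.

def T_of_rho(rho, M, a=1.0, b=1.0):
    return a * M ** (2 / 3) * rho ** (1 / 3) - b * rho ** (2 / 3)

def T_max(M, a=1.0, b=1.0):
    # Scan densities on a log grid and take the largest T.
    rhos = [10 ** (k / 100) for k in range(-300, 301)]
    return max(T_of_rho(r, M, a, b) for r in rhos)

# Doubling the mass raises the peak temperature by close to
# 2**(4/3), about 2.52:
ratio = T_max(2.0) / T_max(1.0)
print(f"T_max(2M)/T_max(M) = {ratio:.3f}")
```

This is why a ball below some critical mass never gets hot enough to sustain fusion: its temperature peaks and then falls as degeneracy takes over, exactly as described above.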

Now what happens if there IS heat production through fusion?
Thermonuclear fusion is strongly dependent on temperature - but the dependence is still continuous. So the heat production rate goes through a continuous maximum roughly where temperature goes through its continuous maximum.
The rate of heat loss via radiation and convection is also dependent on temperature. But it also depends on the temperature gradient and area for the same temperature but different radius, opacity, thermal expansivity, viscosity... all of which change with density around the continuous maximum of temperature.

Therefore, the ratio of the heat production rate through fusion to the heat loss rate goes through a continuous maximum which generally lies somewhere other than the continuous maximum of temperature (in which direction?), but since the heat production rate through fusion is strongly dependent on temperature, the maximum of the heat production/heat loss ratio is somewhere quite near the maximum of temperature.

Now, if a shrinking ball of gas near the maximum of temperature (at which point it is a significantly degenerate, non-ideal gas - otherwise it would be nowhere near the maximum!) reaches a maximum of the heat production/heat loss ratio which is close to one but does not reach it, then it never reaches thermal equilibrium - the brown dwarf goes on to cool, whereupon the heat generation decreases. Note that there WAS a significant amount of fusion - since the heat generation rate through fusion did approach the heat loss rate near the maximum temperature, it significantly slowed the shrinking in that period. So fusion was significant but not sustained.

If, however, the maximum of heat production/heat loss ratio is slightly over one then it is never reached. The star will stop shrinking when the heat production/heat loss ratio equals one, so it will not reach the target maximum temperature, nor the maximum (over one) ratio of heat production to heat loss.

But as shown above, such a star has a very significant contribution of degeneracy pressure (otherwise it would be nowhere near the maximum temperature, and the maximum heat production/heat loss ratio would be far over one, not slightly over one).

And such a stable star IS, by definition, a main sequence star. Most main sequence stars are red dwarfs... and have a significant contribution of degeneracy pressure/nonideal behaviour.
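The star-versus-brown-dwarf dichotomy described above can be put in a toy numerical form. All exponents, temperatures, and thresholds below are invented for illustration; only the logic (a continuous ratio whose peak either does or does not exceed one) comes from the argument:

```python
# Toy sketch: as a contracting ball's central temperature rises towards a
# smooth maximum, the fusion/heat-loss ratio also passes through a smooth
# maximum. Whether that peak exceeds 1 decides brown dwarf vs. main sequence.
def peak_ratio(T_max, T_ign=3.0e6, n=6.0, loss=1.0):
    """Peak of (fusion rate / heat-loss rate) over the temperature track.
    Fusion is modeled as (T/T_ign)**n, loss as a constant -- assumptions."""
    # The modeled ratio is monotonic in T, so its maximum sits at T_max.
    return (T_max / T_ign) ** n / loss

for T_max in (2.5e6, 3.0e6, 3.5e6):
    fate = ("sustained fusion (main sequence)" if peak_ratio(T_max) >= 1
            else "fusion fizzles out (brown dwarf)")
    print(f"T_max = {T_max:.1e} K -> peak ratio {peak_ratio(T_max):.2f}: {fate}")
```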
 
  • #68
@snorkack, your analysis essentially begins from the perspective of a star that does not have enough mass to ever reach the main sequence, and then you gradually increase the mass and ask what happens when you get to stars that barely reach the main sequence. These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection. So your point is well taken that this is a kind of "forgotten population", because no one ever sees any of these stars when they look up in the night sky, yet they are extremely numerous and no doubt play some important role in the grand scheme that those who research them must keep reminding others of. That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform, but either way, it is clearly a transitional population, no matter how numerous, between the standard "main sequence" and the brown dwarfs.)

Anyway, you make interesting points about the different physics in stars that are kind of like main-sequence stars, but have important degeneracy effects, in that transitional population that does include a lot of stars by number. The standard simplifications are to either treat the fusion physics in an ideal gas (the standard main-sequence star), or the degeneracy physics in the absence of fusion (a white dwarf), but this leaves out the transitional population that you are discussing. Your remarks are an effort to fill in that missing territory, but are a bit of a sidelight to this thread.

Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion. The interesting conclusions I reach are that not only is the surface temperature of no particular interest in deriving these mass-luminosity relationships, neither is the presence or absence of fusion, in stark refutation of all the places that say you need to understand the fusion rate if you want to derive the luminosity.
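For the 0.5 to 50 solar-mass radiative regime described above, the radiative-diffusion argument yields a rough mass-luminosity scaling. The exponent of 3 and the solar normalization below are the common textbook approximation for this regime, stated here as an assumption rather than as this thread's exact result:

```python
# Hedged sketch: radiative diffusion (roughly constant opacity, ideal gas,
# no reference to the fusion rate) gives approximately L ∝ M**3 over the
# 0.5-50 solar-mass range. Exponent 3 is a textbook approximation.
def luminosity_in_Lsun(M_in_Msun, exponent=3.0):
    """Approximate luminosity (solar units) from mass (solar units) alone."""
    return M_in_Msun ** exponent

for M in (0.5, 1.0, 2.0, 10.0):
    print(f"M = {M:4.1f} Msun -> L ≈ {luminosity_in_Lsun(M):7.1f} Lsun")
```

Note that nothing in this estimate refers to fusion: the scaling comes from how long the light takes to diffuse out, which is the point of the thread.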
 
  • #69
Ken G said:
These types of stars tend to have two physical effects that are not in my derivation: degeneracy and convection.
Yes.
Ken G said:
That must be a frustrating position, so when you see people refer to "main sequence stars" in a way that omits this population, you want to comment. I get that, point taken-- but I am still not talking about that type of star, whether we want to call them "main sequence stars" or not. (Personally, I would tend to define a main-sequence star as one that has a protium fusion rate that is comparable to the stellar luminosity, so if it has more deuterium fusion, or if it is mostly just radiating its gravitational energy, then it is not a main-sequence star. The question is then, just how important is degeneracy when you get to the "bottom of the main sequence," and I don't know if it gets really important even in stars that conform to this definition, or if it only gets really important for stars that do not conform,
Pretty obviously it does. See my derivation of the definition in my previous post.
But trying to restate it:
Any ideal gas sphere with no inner heat source, no matter how small its mass, would keep contracting on the Kelvin-Helmholtz timescale to arbitrarily small size and arbitrarily high internal temperature.
This contraction can be stopped by one of two effects:
1) the gas becomes significantly nonideal, and the gas sphere cools down and slowly finishes contracting to a nonzero final size, or
2) fusion provides an internal heat source sufficient to stop the contraction.
A gas ball which is still contracting and heating is not yet on main sequence, whether or not it eventually reaches main sequence.
Now, a low-mass gas ball stops heating because it passes through its temperature maximum, as in 1).
A massive gas ball would reach a much higher maximum temperature but, because of fusion, never gets there. Instead, it acquires an internal heat source that balances heat loss while the temperature is still far below the maximum and the gas behaviour is still close to ideal.
So what happens to an intermediate-mass gas ball? Well, 1) takes place continuously, so the gas behaviour is significantly nonideal while the temperature is still rising towards the maximum, though the rise is slowing because of nonideal behaviour.
But since 2) can happen at any point where the temperature is rising, it can happen in the region where the temperature is approaching its maximum.
Note that these stars are on the main-sequence side of the end of the main sequence. The main sequence ends exactly because the gas behaviour near that end, on the inner side, is significantly nonideal.
Ken G said:
Still, I take your point that if we hold to some formal meaning of a "main-sequence star", and we look at the number of these things, a lot of them are going to be red dwarfs, and the lower mass versions of those are in a transitional domain where degeneracy is becoming more important, and thermal non-equilibrium also raises its head. My purpose here is simply to understand the stars with higher masses than that, say primarily in the realm from 0.5 to 50 solar masses, which are typically ideal gases with a lot of energy transport by radiative diffusion.

But besides the degeneracy, another important effect is convection.
The whole assumption of radiative heat conduction is that the heat transport is proportional to the temperature gradient, so the temperature gradient adjusts with the heat flow.

Not the case with convection! There, the heat transport is negligibly small below a certain gradient (only the conductive heat flow remains), then becomes arbitrarily large at a fixed (adiabatic) temperature gradient. Convection is also a thermostat, but one that fixes the temperature gradient.

And convection is significant far from the low-mass end of the main sequence! The Sun is convective over the outer 30% of its radius.
With this kind of significance, does a derivation requiring the conduction distance to equal the star's radius hold water?
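The fixed-gradient behaviour described above can be sketched as a minimal Schwarzschild-style criterion. The logarithmic gradient form (nabla = dlnT/dlnP) and gamma = 5/3 for a monatomic ideal gas are standard textbook values; this is a toy rule, not a model of any particular star:

```python
# Minimal sketch of the Schwarzschild criterion: convection switches on
# where the radiative gradient exceeds the adiabatic one; convection
# then pins the realized gradient near nabla_ad.
def effective_gradient(nabla_rad, gamma=5.0/3.0):
    """Return (regime, gradient actually realized) -- toy version."""
    nabla_ad = 1.0 - 1.0 / gamma   # = 0.4 for a monatomic ideal gas
    if nabla_rad <= nabla_ad:
        return "radiative", nabla_rad   # heat flow sets the gradient
    return "convective", nabla_ad       # gradient pinned, flow adjusts

print(effective_gradient(0.25))   # ('radiative', 0.25)
print(effective_gradient(0.90))   # convective; gradient pinned near 0.4
```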
 
  • #70
snorkack said:
Pretty obviously it does.
No, it's not obvious at all, nor does your argument answer the question. You would need actual numbers to answer it: the protium burning rate, the deuterium burning rate, and the luminosity. If the first and last are close, it's a main-sequence star. If the second and last are close, it's a brown dwarf. If the luminosity is not balanced by either, it is a protostar. And if it is a protostar, my derivation still applies, unless either convection or degeneracy dominates the internal structure. The rest of what you said I already know.
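The classification rule in the paragraph above can be written out as a small decision function. The factor-of-2 meaning of "close" is an illustrative threshold I have chosen, not one stated in the thread:

```python
# Hedged sketch of the classification rule: compare protium fusion power,
# deuterium fusion power, and total luminosity (same units throughout).
# "Close" = within a factor of 2: an assumed, illustrative threshold.
def classify(L_protium, L_deuterium, L_total, tol=2.0):
    def close(a, b):
        return a > 0 and b > 0 and max(a, b) / min(a, b) < tol
    if close(L_protium, L_total):
        return "main-sequence star"
    if close(L_deuterium, L_total):
        return "brown dwarf"
    return "protostar (luminosity not balanced by fusion)"

print(classify(1.0, 0.0, 1.1))   # main-sequence star
print(classify(0.0, 0.5, 0.6))   # brown dwarf
print(classify(0.0, 0.0, 1.0))   # protostar (luminosity not balanced by fusion)
```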
Main sequence ends exactly because the gas behaviour near the end, on the inner side, is significantly nonideal.
A point I have been making all along-- non-ideal behavior bounds the "bottom of the main sequence," so once degeneracy dominates, we don't call it a main-sequence star any more. There is of course a transition zone which is a "gray area" to the nomenclature-- my derivation begins to break down in that gray area. All the same, everything I said above is correct, and if you want to add some additional physics at the degenerate end of the main sequence, fine, but it is something of a distraction from what this thread is actually about.
With this kind of significance, does a derivation requiring the conduction distance to be equal to star radius hold water?
Read the title of the thread.
 