Questions about the lifecycle of stars

In summary: When the core becomes depleted of hydrogen, it starts to collapse. Hydrogen fusion in a shell around the contracting core then releases more energy than the original core fusion did, so the star expands.
  • #36
snorkack said:
Hardly. A degenerate core is not a source of energy.
But it is a very powerful source of gravity, which does serve as an energy source for the shell around it.
Expanding a thin layer to 8 times its previous volume requires heating it to 8 times its previous temperature. Expanding a self-gravitating core to 8 times its previous volume causes its temperature to fall to 1/2 of its previous temperature.
Yes, your former point is what leads to thermal pulses in shell sources, and also to classical novae, while your latter point is what stabilizes fusion in the Sun right now. But in the case of red giants, the fusing shell is not thin compared to the degenerate core, so is dynamically stabilized in a way that is more like core fusion.
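For readers keeping score, both of the quoted scalings follow from the ideal gas law (a sketch, order-of-magnitude only): a thin layer expands at roughly fixed pressure, set by the weight of the material above it, while a self-gravitating sphere obeys the virial theorem:
$$PV = NkT \;\Rightarrow\; T \propto V \quad (\text{thin layer at fixed } P),$$
$$kT \sim \frac{GM\mu}{R} \;\Rightarrow\; T \propto \frac{1}{R} \propto V^{-1/3} \quad (\text{self-gravitating sphere}).$$
So increasing the volume 8 times gives 8 times the temperature in the first case and half the temperature in the second; that sign flip is what separates unstable shell fusion from stabilized core-like fusion.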
 
  • #37
Ken G said:
Yes, you can indeed say that. As the envelope puffs out, what drops is the amount of material and the density in the fusing shell, not its temperature. That is indeed the key difference with stars fusing in their cores.

Then what happens to the top of the formerly fusing shell?
If a fusing shell expands on heating, the bottom of the shell expands, but does not move. The top of the shell moves outwards.
What makes the upper part of the shell go from fusing to nonfusing as it expands?
 
  • #38
snorkack said:
Then what happens to the top of the formerly fusing shell?
If a fusing shell expands on heating, the bottom of the shell expands, but does not move. The top of the shell moves outwards.
What makes the upper part of the shell go from fusing to nonfusing as it expands?
What stays the same is the temperature as a function of radius, not the temperature of a given parcel of gas. Thus, "the fusing shell" is defined by a temperature layer, not a given set of gas. As the gas expands, it simply leaves the fusing shell, while the temperature of the fusing shell continues to be kept fixed by the gravity from the core. Thus we should say that the fusing shell self-regulates how much material it contains and the density of that material, so you don't have the same material in the shell that you had prior to the expansion.

Note also that the expansion we are talking about here is happening on the evolutionary timescale, not the dynamical timescale where we have the usual adiabatic dynamical stability, and not on the thermal timescale where we have solar-like thermostatic stability. What the degenerate core controls is the temperature at which the thermostat is "set," and that's the main difference from main-sequence stars-- in the latter, the thermostat is set to the temperature that allows fusion to replace the heat leaking out.
 
  • #39
Let me see if I have this straight:

When hydrogen fusion ceases in the core of a solar-mass star, the core contracts until it is a hot, degenerate mass of helium. This contraction increases the gravitational pull on the shell of hydrogen just outside the core. This increased gravity causes the shell to compress and heat up until it reaches fusion temperatures. But because the gravity is so high, the temperature needed to stabilize it against further contraction is much higher than the temperature in the main-sequence core. This causes the fusion rate to skyrocket until it provides enough energy to offset the energy loss from the shell and to puff out the shell and outer envelope. This reduces the density of material in the shell, stabilizing the fusion rate by way of limiting the amount of fusion fuel in the shell.

Is that mostly correct?
 
  • #40
Yes, that sounds good to me.
 
  • #41
Ken G said:
What stays the same is the temperature as a function of radius, not the temperature of a given parcel of gas. Thus, "the fusing shell" is defined by a temperature layer, not a given set of gas. As the gas expands, it simply leaves the fusing shell, while the temperature of the fusing shell continues to be kept fixed by the gravity from the core.
Why would anything stay the same?
For a star that is a 100% fusible sphere, no core, the temperature decreases with expansion - it falls by a factor of 2 when the volume increases 8 times.
For a star that is a nearly 100% exhausted core with a thin fusible shell, the temperature rises with expansion - it rises by a factor of 8 when the volume increases 8 times.
Core size is a continuous parameter. You can have a star which is 5% exhausted, 95% fusible, or 50% exhausted, 50% fusible, or 95% exhausted, 5% fusible.
At which core size, as a fraction of the total star mass, does the heat capacity go from negative to infinite?
 
  • #42
snorkack said:
Why would anything stay the same?
The goal is to understand. Hence, looking for things that stay the same is a device for understanding, a useful tool if you will.
For a star that is a 100% fusible sphere, no core, the temperature decreases with expansion - it falls by a factor of 2 when the volume increases 8 times.
Yes, for homologous expansion that is correct. Of course, a star with a degenerate core does not expand homologously, which is the point.
For a star that is a nearly 100% exhausted core with a thin fusible shell, the temperature rises with expansion - it rises by a factor of 8 when the volume increases 8 times.
As I said above, the fusing shell is not thin compared to the core, so a red giant does not follow either of these categories as a whole, but the fusing shell in a red giant responds more like the first category, i.e., its fusion is dynamically stabilized. The hydrogen fusing shell in an asymptotic giant is further out in the star and can act more like the second category, which can lead to unstable fusion events called thermal pulses.
Core size is a continuous parameter. You can have a star which is 5% exhausted, 95% fusible, or 50% exhausted, 50% fusible, or 95% exhausted, 5% fusible.
That is correct, the physics of a red giant is very much controlled by the mass of the core. Hence, red giants change as the core mass grows-- their radius and luminosity increase, all for reasons that are readily understandable.
At which core size, as a fraction of the total star mass, does the heat capacity go from negative to infinite?
There is a period of evolution after the center runs out of fusible material, a transition from one way of describing the situation to another, and during that transition the changes in heat capacity occur (though remember that one of the changes is that the star ceases to respond homologously, so it can no longer be characterized as having a single gravothermal heat capacity). At first, the center is not important, it is not even degenerate, and the star is not recognizably different from a main-sequence star. With time, a degenerate core builds up, but the star is still not a red giant until the core gets enough mass to start to dictate the structure of the star. The envelope is not fully convective yet either. This intermediate phase is called the "subgiant" phase and is of course more difficult to understand, being transitional.

However, once the core mass has built up enough that it is dictating the structure to the rest of the star (roughly when the binding energy of the core is comparable to the binding energy of the rest of the star, which does not take a lot of core mass because the core is so highly contracted), we can regard the object as an evolving red giant (evolving as the core mass builds up more and more), and we can understand it via the means I've described above. By the time the core mass reaches about 0.5 solar masses, regardless of the mass of the rest of the star, the structure that this core mass dictates controls both the maximum luminosity the giant reaches and the point where helium fusion initiates in the core. This is why the evolutionary tracks of all red giants are quite similar.
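To put a rough formula behind "does not take a lot of core mass" (a sketch, order-of-magnitude only): treating core and envelope separately, the gravitational binding energies scale as
$$|E_{\rm core}| \sim \frac{GM_{\rm core}^2}{R_{\rm core}}, \qquad |E_{\rm env}| \sim \frac{GM_{\rm env}^2}{R_{\rm env}},$$
so a core with only a tenth of the envelope's mass already matches the envelope's binding energy once ##R_{\rm core} \sim R_{\rm env}/100##, and a degenerate core is far more contracted than that.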
 
  • #43
snorkack said:
At which core size, as a fraction of the total star mass, does the heat capacity go from negative to infinite?
In the largest stars, which have cores fusing all the way to iron and nickel, that is the end of the game.
While the outer layers may still be burning carbon and silicon for a while, it doesn't last long.
A neutron star is the next stage, but that might not be stable, so a supernova results instead.
 
  • #44
Ken G said:
However, once the core mass has built up enough that it is dictating the structure to the rest of the star (roughly when the binding energy of the core is comparable to the binding energy of the rest of the star, which does not take a lot of core mass because it is so highly contracted), at this point we can regard the object as an evolving red giant (evolving as the core mass builds up more and more), and we can understand it via the means I've described above. By the time the core mass reaches about 0.5 solar masses, regardless of the mass of the rest of the star, the structure that this core mass dictates controls both the maximum luminosity the giant reaches, and the point where helium fusion initiates in the core. This is why the evolutionary tracks of all red giants are quite similar.

Stars which are more massive have convective cores while on the main sequence. Therefore protium is exhausted over the whole core. If a star starts with a core mass over 0.5 solar, what does it look like as a red giant?
 
  • #45
snorkack said:
Stars which are more massive have convective cores while on the main sequence. Therefore protium is exhausted over the whole core. If a star starts with a core mass over 0.5 solar, what does it look like as a red giant?
Ah, important question. When the core is already over 0.5 solar masses when the star leaves the main sequence, it cannot go degenerate at all, because it will start fusing helium first. Thus, it will never be a red giant, it will not have a degenerate core that dictates to the structure of the rest of the star and its luminosity will not rise as a result. This is why high-mass stars evolve more or less horizontally across the H-R diagram, they keep their radiative-diffusive luminosity, but they do puff out in radius as their core contracts (still as an ideal gas), and the core and envelope are still separated by a shell of fusion-- but that shell does not have an uncomfortably high temperature dictated to it, it simply continues to fuse to replace the light that leaks out. Since the total rate that light leaks out is not dependent on the radius of the star (when to a first approximation the cross section per gram stays fairly constant), and the fusion temperature is not dictated to it by some hugely contracted degenerate core, it can self-regulate its temperature to simply replace the heat being lost, and that doesn't require increasing the luminosity to get a huge puffing of the envelope. Ironically, this type of star is called a "red supergiant", but it doesn't puff out as much as red giants do in a relative sense, because high-mass main-sequence stars start out larger and lower density in the first place. That's why they can remain ideal gases the whole time.
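A sketch of why that leaked luminosity is insensitive to radius (assuming radiative diffusion with a roughly constant opacity per gram ##\kappa##, as in the parenthetical above): the star stores radiation energy ##\sim aT^4R^3## and leaks it on the diffusion time ##t_{\rm diff} \sim \kappa\rho R^2/c \sim \kappa M/(cR)##, so
$$L \sim \frac{aT^4R^3}{t_{\rm diff}} \sim \frac{ac\,T^4R^4}{\kappa M}, \qquad kT \sim \frac{GM\mu}{R} \;\Rightarrow\; L \propto \frac{M^3}{\kappa}.$$
The radius cancels, which is why the star can puff out without changing its radiative-diffusive luminosity much.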

Incidentally, there is an intermediate mass range, say 2-8 solar masses, where the transition between these phases happens very suddenly, creating what is known as the "Hertzsprung gap" observed in the H-R diagram. This sudden core contraction happens because of a gravitational instability that exists for a non-fusing core in that mass range, but that instability only happens to ideal gases, and the core never goes degenerate for the reason you are asking about-- it would have more than 0.5 solar masses by the time it would otherwise go degenerate, so it just starts fusing helium instead.
 
  • #46
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave
 
  • #47
Ken G said:
Ah, important question. When the core is already over 0.5 solar masses when the star leaves the main sequence, it cannot go degenerate at all, because it will start fusing helium first. Thus, it will never be a red giant, it will not have a degenerate core that dictates to the structure of the rest of the star and its luminosity will not rise as a result. This is why high-mass stars evolve more or less horizontally across the H-R diagram, they keep their radiative-diffusive luminosity, but they do puff out in radius as their core contracts (still as an ideal gas), and the core and envelope are still separated by a shell of fusion-- but that shell does not have an uncomfortably high temperature dictated to it, it simply continues to fuse to replace the light that leaks out. Since the total rate that light leaks out is not dependent on the radius of the star (when to a first approximation the cross section per gram stays fairly constant), and the fusion temperature is not dictated to it by some hugely contracted degenerate core, it can self-regulate its temperature to simply replace the heat being lost, and that doesn't require increasing the luminosity to get a huge puffing of the envelope. Ironically, this type of star is called a "red supergiant", but it doesn't puff out as much as red giants do in a relative sense, because high-mass main-sequence stars start out larger and lower density in the first place. That's why they can remain ideal gases the whole time.
Why, then, would red supergiants puff out at all?
 
  • #48
davenn said:
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave
You are more than welcome! I have indeed found it hard to find this information in most sources; they generally don't do a great job past the main sequence.
 
  • #49
snorkack said:
Why, then, would red supergiants puff out at all?
Also an important question. The most natural expectation when fusion runs out in the core is that the star would simply continue the same homologous contraction it was doing prior to the onset of fusion. One might expect fusion to simply be a long pause in this inexorable homologous contraction while there is a net loss of heat. But the "homologous" in the above essentially means "as though the star were basically all one thing," but that's only true up to (and including, mostly) the main sequence. After core fusion ends, what reverses the contraction of the outer radius of the star is a very significant break in the homology-- the star can no longer be treated as all one thing, it must be treated as three things, a core, a fusing shell, and an envelope.

Thus the key difference between red giants and red supergiants is how compact the core is, and as a result, what its gravitational effect is on the rest of the star. A red giant is a remarkable object that has a core with a volume some one trillionth of the volume of the rest of the star, yet the gravitational binding energy of that core comes to vastly exceed the binding energy of the rest of the object! It's a crucial feature that generates a similarly small fusion engine, a shell around that core (though not thin relative to the core) that is responsible for the huge luminosity of the star, because the temperature is forced (by the core) to be very high (by fusion standards). Fusion goes nuts, the rest of the star must dial it down by removing weight from the shell, and that's why red giants puff out, in ways all controlled by the mass of that tiny degenerate core. The luminosity goes way up because the light need only diffuse through the tiny shell mass before it gets picked up and efficiently carried by the convective envelope, so when the light leaks out so quickly and easily, the luminosity must rise.

In red supergiants, the homology is still broken, and the star must be treated as three things rather than one, and that's why the envelope puffs out rather than contracting as the core contracts. But here the core remains an ideal gas, so it never gets to the huge gravitational scale of a red giant and it never dictates a high temperature to the shell. Nevertheless, as the core contracts, the shell temperature does rise, so the fusion does overproduce a bit, and does need to lift off some weight by puffing out the envelope. But this doesn't change the luminosity much, it remains mostly the same radiative diffusion process it was on the main sequence, because radius is not a key factor in radiative diffusion. The key difference is that in a red supergiant, the virialized temperature of the fusion zone is set by the mass and radius of the fusion zone, and this represents a significant fraction of the stellar mass-- it's a little like that middle zone is still more or less a main-sequence star of its own, with a hole punched out of its center that has a tendency to contract and force up the fusion temperature above its equilibrium value, which is compensated by having the envelope puff out to lift off weight and keep the fusion rate nearly fixed by the nearly-constant radiative diffusion through the fusion zone.

In red giants, the tiny mass of the fusion zone has no input into its temperature, that's all controlled by the significantly more massive core, so the fusion rate and luminosity shoot up before the puffing out of the envelope can finally recover an equilibrium at this new crazy fusion temperature. The red supergiant luminosity might still be two orders of magnitude higher than the red giant luminosity, but it started out more like five orders of magnitude higher on the main sequence, and gram for gram of gas that is actually in the fusion zone, the fusion rate in the red giant is much much faster than in the red supergiant.
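Plugging this post's own "one trillionth" volume figure into the binding-energy estimate from earlier (##|E| \sim GM^2/R##, a rough sketch): a volume ratio of a trillion means ##R_{\rm env}/R_{\rm core} \sim 10^4##, so with a core of, say, a third of the envelope's mass (an illustrative figure),
$$\frac{|E_{\rm core}|}{|E_{\rm env}|} \sim \left(\frac{M_{\rm core}}{M_{\rm env}}\right)^2 \frac{R_{\rm env}}{R_{\rm core}} \sim 0.1 \times 10^4 \sim 10^3,$$
which is the sense in which the core's binding energy "vastly exceeds" the envelope's.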
 
  • #50
davenn said:
Hi @Ken G
Just got to comment ...
thanks for your excellent posts in this thread. You have been filling in a number of holes in my knowledge of stellar physics :smile:

Dave

Wanted to say the same thing. @Ken G really enlightened me on how stars really behave physics-wise. Thanks!

I wanted to make sure I understand the mathematical relationship between the radius ##r## and some other measurements in a star.

Are these correct to say?

- Temperature in a star is inversely proportional to ##r##

- Light (photon) loss per unit time has several relationships with ##r##:
1. It's proportional to the area, thus proportional to ##r^2##
2. It's also inversely proportional to the time it takes a photon to travel from within the star to the surface, thus it's inversely proportional to ##r##
3. It's proportional to the temperature (energy per photon-wise) and since temperature is inversely proportional to ##r##, that means it's again inversely proportional to ##r##
4. This all means that the net change in light loss per unit time when you expand a main-sequence star is 0, before the star contracts again after the expansion "kick". This conclusion does not take into account the fusion rate, which is also affected during expansion.

- Fusion rate scales with temperature ##T##, but to different extents depending on what is being fused. The fusion rate of hydrogen is proportional to ##T^4##, the fusion rate of helium to ##T^{40}##. Thus it depends on ##r## and influences the light loss per unit time in different amounts depending on what is being fused.

One other question @Ken G ; you said fusion rate is proportional to mass to the 3rd or 4th power. Is this apart from the temperature being higher or lower with mass? So if I add more mass to a star while keeping the temperature constant per unit mass, fusion rate would still go up?
 
  • #51
JohnnyGui said:
- Temperature in a star is inversely proportional to ##r##
Yes, this is the virial theorem, but there are a few important caveats. First of all, the virial theorem is a kind of average statement, so is really only useful when the whole star can be treated as "all one thing," where the temperature is characterized by the temperature over most of the interior mass, and the radius characterizes that mass. So it's best for pre-main-sequence and main-sequence stars, failing badly for giants and supergiants which have decoupled outer radii. Secondly, we must be very clear that the temperature we mean is the interior temperature, not the surface temperature you find in an H-R diagram. You may well know this, but this confusion comes up in a lot of places where people try to marry the Stefan-Boltzmann law, applying only to surface temperature, to the interior temperature. The surface is like the clothes worn by the star, much more than it is like the star itself, but since we only see the surface this can cause confusion.
- Light (photon) loss per unit time has several relationships with ##r##:
1. It's proportional to the area, thus proportional to ##r^2##
This is the Stefan-Boltzmann law, but one must be wary of the cause and effect. In protostars that are fully convective and have surface temperatures controlled to be about 3000-4000 K or so, this law is quite useful for understanding the rate that energy is transported through the star. In effect, the luminosity is controlled outside-in, because the convective interior will pony up whatever heat flux the surface says it needs (via the relation you mention). However, when stars are not fully convective, or when they have fully convective envelopes controlled by tiny interior fusion engines (like red giants), the cause and effect reverses, and the luminosity is handed to the surface. In that case, it is not that the luminosity is proportional to ##r^2##, it is that the radius is proportional to the square root of the luminosity.
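In symbols (with ##T_s## the surface temperature, nearly pinned for a convective envelope):
$$L = 4\pi R^2 \sigma T_s^4 \quad\Longleftrightarrow\quad R = \sqrt{\frac{L}{4\pi\sigma T_s^4}},$$
read left to right when the surface dictates the luminosity (fully convective protostars), and right to left when the luminosity is handed to the surface (red giants).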
2. It's also inversely proportional to the time it takes that a photon travels from within the star to the surface, thus it's inversely proportional to ##r##
When radiative diffusion controls the luminosity (pre-main-sequence and main-sequence, and also giants and supergiants to some degree), what you mention is one of the factors. But not the only one-- diffusion is a random walk, so optical depth enters as well, not just distance to cross.
3. It's proportional to the temperature (energy per photon-wise) and since temperature is inversely proportional to ##r##, that means it's again inversely proportional to ##r##
Best not to think in terms of energy per photon, but rather energy per unit volume. That scales like ##T^4## (that's the other half of the Stefan-Boltzmann law), not T.
4. This all means that the net change in light loss per unit time when you expand a main-sequence star is 0, before the star contracts again after the expansion "kick". This conclusion does not take into account the fusion rate, which is also affected during expansion.
If you are testing dynamical stability (the usual meaning of a "kick"), you would kick it on adiabatic timescales, i.e., timescales very short compared to the energy transport processes that set the luminosity. So for dynamical timescales, use adiabatic expansion, and ignore all energy release and transport. If you want to know how the luminosity evolves as the stellar radius (gradually) changes, that's when the above considerations about the leaky bucket of light come into play.
- Fusion rate scales with temperature ##T##, but to different extents depending on what is being fused. The fusion rate of hydrogen is proportional to ##T^4##, the fusion rate of helium to ##T^{40}##. Thus it depends on ##r## and influences the light loss per unit time in different amounts depending on what is being fused.
The simplest way to treat fusion is to pretend the exponent of T is very high, and just say T makes minor insignificant adjustments until the fusion rate matches the pre-determined luminosity. For p-p fusion, the exponent is a little low (about 4, as you say), so that's not a terrific approximation, but it's something. For all other fusion (including CNO cycle hydrogen fusion), it's a darn good approximation. So if you are making this approximation, you don't care about the value of the exponent, the fusion just turns on at some T and self-regulates. However, in red giants, where the fusion T cannot self-regulate, there you do need the full exponent, you need to explicitly model the T dependence of the fusion because T is preset to be quite high.
One other question @Ken G ; you said fusion rate is proportional to mass to the 3rd or 4th power. Is this apart from the temperature being higher or lower with mass? So if I add more mass to a star while keeping the temperature constant per unit mass, fusion rate would still go up?
Yes, for p-p hydrogen fusion, say like in the Sun. In fact, this is not a bad approximation for what would actually happen if you added mass to the Sun-- you wouldn't need to keep the interior temperature the same, the thermostatic effects of fusion would do that for you. The Sun would expand a little, and its luminosity would go up a little because it is now a bigger leakier bucket of light. Fusion would simply increase its own rate to match the light leaking out, and it would do that with very little change in temperature, expressly because it is so steeply dependent on T. But this story would work even better if the fusion rate was even more sensitive to T, say for the CNO cycle fusion in somewhat more massive stars than the Sun. (Ironically, many seemingly authoritative sources get this reasoning backward, and claim that the temperature sensitivity of fusion is why the luminosity is higher for higher mass, on grounds that adding mass will increase the temperature which will increase the fusion rate which will increase the luminosity. They are saying that the sensitivity of the fusion rate to T is why it rules the star's luminosity, when the opposite is true-- it is why the fusion rate is the slave of the luminosity. The situation is similar to having a thermostat in your house, and throwing open the windows in winter-- opening the windows is what causes the heat to escape, not the presence of a furnace, but the extreme sensitivity of a thermostat is what causes the furnace burn rate to be enslaved to how wide you open the windows.)
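Here is a toy numerical version of the thermostat point (a sketch only; the exponent and all numbers are illustrative, not a solar model): if the fusion rate scales as ##T^n##, then matching a demanded luminosity ##L## requires only ##T \propto L^{1/n}##, so even huge changes in ##L## barely move the temperature.

```python
# Toy "enslaved furnace": the fusion rate eps0 * T**n must match whatever
# luminosity L the leaky bucket (radiative diffusion) demands.
# Arbitrary units; n = 20 is an illustrative steep exponent.

def required_temperature(L, eps0=1.0, n=20):
    """Core temperature at which eps0 * T**n equals the demanded L."""
    return (L / eps0) ** (1.0 / n)

for L in (1.0, 1e2, 1e4, 1e6):
    print(f"L = {L:8.0e}  ->  T = {required_temperature(L):.2f}")

# A factor of a million in L moves T by only 10**(6/20), about 2x:
# the steeper the T-dependence, the better the thermostat.
```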
 
  • #52
So:
You could have a main sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar, undergoes a helium flash and resumes fusion.
Or you could have a slightly more massive main sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.

Are red supergiants and stars that have undergone a helium flash therefore homologous to each other?
 
  • #53
snorkack said:
So:
You could have a main sequence star that has a small convective core of, say, 0.4 solar masses, which briefly goes inert when protium is exhausted - then accumulates mass to 0.5 solar, undergoes a helium flash and resumes fusion.
A main-sequence star with a convective core that massive is a fairly high-mass star, so its core will remain an ideal gas, so it will never undergo a "helium flash". But it will start to fuse helium at some point, so let's continue from there:
Or you could have a slightly more massive main sequence star with a convective core of, say, 0.6 solar masses, which promptly begins helium fusion when protium is exhausted.
Neither necessarily begins helium fusion promptly, their ideal-gas cores simply accumulate mass gradually as ash is added to them, and are maintained at the temperature of the shell around them. The number 0.5 solar masses only matters if the core goes degenerate before it reaches 0.5 solar masses, as that will produce a red giant, but if the core is still ideal when it reaches 0.5 solar masses, it will never go degenerate, never make a red giant, and never have its luminosity shoot up; it will instead make a red supergiant and keep its luminosity almost the same. If the star is in the range 2-8 solar masses, the transition will happen rather abruptly as the core collapses in a gravitational instability, and at the lower-mass end of that range it will still have less than 0.5 solar masses so will indeed make a red giant and will later have a helium flash. At the higher-mass end of that range, the core will already exceed 0.5 solar masses before it goes degenerate, so it will never go degenerate, even after the core collapses and jumps the star across the Hertzsprung gap. That makes a red supergiant, because the core is still ideal.
In both cases, the result is a core of 0.6 solar masses undergoing helium fusion, surrounded by a protium-fusing shell.
Yes, any time the core gets above 0.5 solar masses before going degenerate, it will start helium fusion without ever going degenerate, and will therefore not create a red giant-- we will call it a red supergiant prior to helium fusion. The name "supergiant" is a bit misleading, because although the star will be larger than a giant, it will not be as puffed out relative to its own core, and will actually behave more like a main-sequence star, merely cloaked in a surprisingly large cool envelope due to the effects of having a central hole that is not participating in the fusion and has a tendency to either collapse (below 8 solar masses), or at least contract as ash is added to it.
Are red supergiants and stars that have undergone a helium flash therefore homologous to each other?
Stars that have undergone a helium flash are fusing helium in their cores, whereas red supergiants (and red giants) have inert cores. So no, they have very different structures. However, stars undergoing core fusion tend to be more homologous, though what they are fusing makes a big difference in the composition of the star, and the presence or absence of additional fusing shells is a complicated break in the homology.
 
  • #54
The assumptions of a thermostat and of a leaky bucket of light are flagrantly contradictory to each other.
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of the star.
Near the centre, the heat flux, and therefore the temperature gradient, diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
 
  • #55
snorkack said:
The assumptions of a thermostat and of a leaky bucket of light are flagrantly contradictory to each other.
That's incorrect. As I explained above, the leaky bucket picture gives you the luminosity of a radiatively diffusive star (nearly) independently of its radius and temperature. The corresponding pre-main-sequence path is called the "Henyey track", and the physics behind it was understood before fusion was even discovered. The fact that the radiative-diffusive luminosity is independent of radius and temperature is the reason the thermostat has little to do with the luminosity, only with the fusion. Again, Eddington understood the luminosity of the Sun quite well before anybody knew there was a thing called fusion. This is not a contradiction, it is historical fact: the leaky bucket picture was understood before fusion was known about, and the discovery of fusion allowed the thermostatic piece to be added, helping us understand how the Sun could be so static for so long.
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of the star.
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center.
Near the centre, the heat flux, and therefore the temperature gradient, diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.
I have no idea what this is intended to mean.
 
  • #56
Ken G said:
Of course fusion serves as a thermostat in a main-sequence star, that's astronomy 101. And of course it mostly happens at the center, though of course not only precisely at the center. I have no idea what this is intended to mean.
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size, then within that core the temperature gradient is zero and no heat will be radiated away.
If fusion takes place at a centre of zero size (where alone the temperature reaches the fusion thermostat temperature), then the radiative flux density diverges to infinity at the centre, and so does the temperature gradient.
 
  • #57
snorkack said:
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size, then within that core the temperature gradient is zero and no heat will be radiated away.
Apparently you are not understanding the concept of a "thermostat," or are interpreting it too narrowly to be of much use to you. All it means is that the fusion self-regulates the temperature such that the fusion rate replaces the heat lost, so if the core temperature found itself for whatever reason too high or too low compared to the "thermostat setting," the action of fusion would quickly return the core to the necessary temperature. This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature, it is informative to recognize that the fusion domain will lie within a fairly narrow temperature regime. Indeed, even over the entire main sequence, the central temperature varies by only about a factor of 2. So we have some 8 orders of magnitude in luminosity, and only about a factor of 2 in central temperature. That's what I call a remarkable thermostat, though it appears you mean something more restrictive by the term than how it is used in most sources.
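Put numerically (a rough sketch, pretending a single effective exponent ##n## covers the whole main sequence): if the fusion rate scales as ##T^n##, matching the luminosity requires ##T \propto L^{1/n}##, so eight orders of magnitude in ##L## with only a factor of 2 in central ##T## corresponds to
$$2^n \sim 10^8 \;\Rightarrow\; n \sim \frac{8}{\log_{10}2} \approx 27.$$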
 
  • #58
Ken G said:
This certainly does not imply that the entire star is at the same temperature. Nor does it imply that fusion only occurs at precisely one temperature. Nevertheless, for the purposes of understanding the extreme sensitivity of fusion to temperature,
And my point is that fusion cannot be "extremely" sensitive to temperature if fusion is to take place over an extended volume of space AND support a temperature gradient allowing radiative conduction across that volume.
Indeed, if the sensitivity of fusion to temperature were too strong, fusion would get concentrated into too small a volume, leading to too-high heat fluxes and temperature gradients and violating the assumption of radiative conduction.
 
  • #59
It is certainly true that the more temperature sensitive the fusion is, the more centrally concentrated the fusion zone is. It is not true that there is some limit to how T-sensitive the fusion can be, at least in the sense of solutions to the basic equations (whether those equations actually apply is another matter; we are working within a given mathematical model). You simply equate the local fusion rate to the divergence of the heat flux; the former is a function of T and the latter a function of derivatives of T, so you find the T structure that solves it-- generally just a second-order differential equation for T. There's always a solution, for any fusion rate that is a continuous function of T, no matter how steep. But of course, that is within a given mathematical framework; other issues might appear, like convection and so on. They don't change the basic picture, which is why Eddington met with so much success using only simple models (and indeed, even the inclusion of fusion does not change the situation drastically, it only changes the evolutionary timescales drastically, much to Eddington's chagrin).
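To make that concrete, here is a toy version of exactly that exercise (a sketch only: constant density and opacity, all constants set to 1, fusion rate ##\propto T^\nu##; none of these numbers come from a real star). It integrates the luminosity and temperature equations outward from the center and shows that the solution stays smooth for any steepness ##\nu##; larger ##\nu## just shrinks the fusion zone.

```python
import numpy as np

# Toy interior, arbitrary units, constants set to 1:
#   dL/dr = 4*pi*r**2 * T**nu            (fusion source feeds the luminosity)
#   dT/dr = -3*L / (16*pi * T**3 * r**2) (radiative diffusion carries it out)
# Constant density/opacity assumed; a sketch, not a stellar model.

def integrate(nu, Tc=1.0, r_max=2.0, n=40_000):
    dr = r_max / n
    r, T = dr, Tc
    L = (4.0 / 3.0) * np.pi * r**3 * Tc**nu   # series value of L near r = 0
    rs, Ls = [r], [L]
    for _ in range(n):
        L += 4.0 * np.pi * r**2 * T**nu * dr
        # crude floor stands in for the stellar surface, avoids 1/T**3 blowup
        T = max(T - 3.0 * L / (16.0 * np.pi * T**3 * r**2) * dr, 0.05)
        r += dr
        rs.append(r)
        Ls.append(L)
    rs, Ls = np.array(rs), np.array(Ls)
    r90 = rs[np.searchsorted(Ls, 0.9 * Ls[-1])]   # radius producing 90% of L
    return r90, Ls[-1]

for nu in (4, 10, 40):
    r90, L_tot = integrate(nu)
    print(f"nu = {nu:2d}: 90% of fusion inside r = {r90:.2f}, total L = {L_tot:.2f}")

# Steeper nu concentrates the fusion zone toward the center, but T(r) and
# the flux stay finite and smooth everywhere -- nothing diverges at r -> 0.
```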

There are two separate meanings of "thermostat" that I think you are confusing-- one is a tendency to keep the entire star at the same T (which is not what we are talking about), and the other is a tendency to keep the central T at the same value, while there is still a T structure. It is the latter type of "thermostat" that applies for stars on the main sequence, though of course it is only an insightful approximate picture. In actuality, the central T does vary across the main sequence, but surprisingly little-- as the stellar luminosity increases by some 6 orders of magnitude over the bulk of the main sequence, the central T increases by only some factor of 2. The thermostat in my house isn't much more effective than that against things like throwing open all the windows.
 
  • #60
Furthermore, the basic assumption that stars are leaky buckets of light holds only for a narrow mass range, or not at all.
A third of the Sun's radius is convecting, not radiating. For stars less massive than the Sun, that fraction is bigger. For stars less than about 0.25 solar masses, the whole star is convective - yet fusion does happen.

What should happen to the size of a star when fusion happens?
4 atoms of protium, once ionized, are 8 particles (4 protons, 4 electrons).
1 atom of helium 4, once ionized, is 3 particles (1 alpha, 2 electrons).
pV=nRT.
If pV were constant, nT would have to be constant. Then T would have to increase 8/3 times. But that's forbidden by the assumption of a thermostat.
What then? Does the radius of the star have to increase as the number of particles decreases?
 
  • #61
snorkack said:
Furthermore, the basic assumption that stars are leaky buckets of light holds only for a narrow mass range, or not at all.
That's also incorrect, in fact it works over most of the main sequence. It only fails at the lowest masses where the main sequence approaches the Hayashi track and there is no Henyey track leading to the main sequence. However, red dwarfs down there are not only highly convective, they are even starting to become degenerate, which is why there is a mass bottom to the main sequence in the first place.
A third of the Sun's radius is convecting, not radiating.
Which is irrelevant, because that region wouldn't have much effect on the radiative diffusion time anyway, given how little mass is up there.
For stars less massive than the Sun, that fraction is bigger.
Again, only relevant for the red dwarfs, which are getting close to brown dwarfs, which aren't main sequence stars at all. Should we be surprised a good approximation for understanding the main sequence starts to fail when the main sequence concept itself starts to lose relevance?
For stars less than about 0.25 solar masses, the whole star is convective - yet fusion does happen.
Yup, that's close to where all the main sequence concepts cease to work, it's close to the edge of the main sequence. Nobody should be surprised that approximations start to break down when you get near the edge of a domain.
What should happen to the size of a star when fusion happens?
Almost nothing when it initiates. As it plays out, some changes in the radius do occur, nothing terribly significant at a first level of approximation. This is easy to see from the virial theorem that sets R when T is thermostatic: we say ##GM^2/R \sim NkT##, where N is the number of particles. If we treat T as nearly thermostatic as N is lowered by fusion, and M stays nearly fixed, we expect R to be inversely proportional to N. Of course this is highly approximate, as it treats the star as "all one thing" that is perfectly thermostatic. Actually, as light escapes more easily from the Sun (as the electrons blocking the light start getting eaten up by fusion), the luminosity rises, and so the core temperature must self-regulate its thermostat to be a little higher, which I'm neglecting to first order. Also, as the central regions get a different composition from the rest of the star, the homology starts to break down, and treating the star as "all one thing" will become a worse approximation. Nevertheless, as we shall see shortly, it's still a good way to understand the evolution of the Sun while it is still on the main sequence.
4 atoms of protium, once ionized, are 8 particles (4 protons, 4 electrons).
1 atom of helium 4, once ionized, is 3 particles (1 alpha, 2 electrons).
Except that this change does not happen to the whole star, only to a fairly small fraction of it. What's more, the star already had some helium in it. So between the beginning and ending of the main sequence, the number of particles in the star goes from a situation where some 30% of the stellar mass went from 12 protons, 1 helium, and 14 electrons (that's 27 particles) to 4 helium and 8 electrons (that's 12 particles). The remaining 70% had 27 particles stay 27 particles. So that means in total, 27 particles goes to 0.7*27 + 0.3*12, or about 22.5 particles. No big whoop there, but it does lead us to expect a rise in radius of about 20%. Yup, that's what happens all right, to a reasonable approximation. So what's your issue?
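As a quick check of that bookkeeping (a throwaway script using the same round numbers as above):

```python
# Particle bookkeeping from the post above, in round numbers.
before = 12 + 1 + 14   # 12 protons + 1 He nucleus + 14 electrons = 27
after = 4 + 8          # 4 He nuclei + 8 electrons = 12
f = 0.3                # fraction of the star's mass that gets processed

n_after = (1 - f) * before + f * after
print(f"particles per unit of gas: {before} -> {n_after}")    # 27 -> 22.5

# Virial estimate GM^2/R ~ NkT with thermostatic T and fixed M gives R ~ 1/N:
print(f"radius rises by a factor of {before / n_after:.2f}")  # ~1.2, i.e. ~20%
```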
If pV were constant, nT would have to be constant. Then T would have to increase 8/3 times. But that's forbidden by the assumption of a thermostat.
Yup, indeed it is. So what happens instead is what I just said.
What then? Does the radius of the star have to increase as the number of particles decreases?
That's called evolution on the main sequence.
 
  • #62
Ken G said:
That's also incorrect, in fact it works over most of the main sequence. It only fails at the lowest masses where the main sequence approaches the Hayashi track and there is no Henyey track leading to the main sequence. However, red dwarfs down there are not only highly convective, they are even starting to become degenerate, which is why there is a mass bottom to the main sequence in the first place.
Which is irrelevant, because that region wouldn't have much effect on the radiative diffusion time anyway, given how little mass is up there.
So how is the luminosity affected by convection? How can one demonstrate that convection over most of the Sun's volume does not affect the Sun's luminosity?
And massive stars have convection in their cores, which is a high-density region of the star.
Ken G said:
Again, only relevant for the red dwarfs, which are getting close to brown dwarfs, which aren't main sequence stars at all. Should we be surprised a good approximation for understanding the main sequence starts to fail when the main sequence concept itself starts to lose relevance?
Yup, that's close to where all the main sequence concepts cease to work, it's close to the edge of the main sequence. Nobody should be surprised that approximations start to break down when you get near the edge of a domain.
The assumption of ideal gas breaks down at both ends of the main sequence, causing these ends.
The assumption of radiative conduction is inapplicable over much more of the main sequence.
 
  • #63
snorkack said:
So how is the luminosity affected by convection?
Very little.
How can one demonstrate that convection over most of the Sun's volume does not affect the Sun's luminosity?
Because you can get a reasonable estimate of the Sun's luminosity without including it; that's what Henyey did.
And massive stars have convection in their cores, which is a high-density region of the star.
Yet it still does not significantly alter the average energy escape time; the escape is dominated by radiative diffusion, which is then used to determine the mass-luminosity relation. Obviously this involves approximations, it is for people who wish to treat a stellar interior as something other than a computerized black box. It's not for everyone.
The assumption of ideal gas breaks down at both ends of the main sequence, causing these ends.
Actually, the ideal gas approximation only breaks down at the low-mass end. What happens at the high-mass end is that relativity becomes important, because much of the pressure is from relativistic particles (photons).
The assumption of radiative conduction is inapplicable over much more of the main sequence.
That would certainly have come as a big surprise to Eddington and Henyey, and all the progress they made understanding the structure of those stars using precisely that approximation-- even before they knew about fusion. Of course this is all a matter of historical record; I'm not sure you have gained a good conceptual understanding of either main-sequence stars or the history of their modeling. The first pass for understanding all main-sequence stars (except the low-mass end where convection starts to dominate and degeneracy can also appear) is radiative diffusion. One can then add convection as an improvement that does not have a crucial effect on the luminosity, and one then faces the problem that there is no ab initio model for the complex process that is convection.

What we can see is that you asked me how stars evolve while on the main sequence if we use the thermostatic effects of fusion as our guide, and what we get is a good understanding of what actually does happen. One would think that would have been enough for you.
 
  • #64
snorkack said:
If fusion is a thermostat that turns on at a specified temperature, then fusion only happens in the centre of the star.
Near the centre, the heat flux, and therefore the temperature gradient, diverges to infinity.
Therefore, the heat flux cannot be carried by radiation.

snorkack said:
If fusion is a thermostat and the star has a constant (thermostat-defined fusion) temperature over a core of nonzero size, then within that core the temperature gradient is zero and no heat will be radiated away.

Fusion is not something that switches on at a certain temperature. The rate of fusion increases as temperature increases until the temperature reaches some peak value, beyond which the rate decreases once more. Obviously the rate here at room temperature is virtually zero, but as you get into the multi-million kelvin range the rate begins to rise sharply.

Also, the core of a star is not at a single temperature. The temperature at the very center is highest and there is a gradual falloff as you move outwards. This is exactly what we would expect for any hot object surrounded by a cooler environment. (Like the hot-pocket whose insides burn your mouth when you bite into it despite the outside being merely warm to the touch)
 
  • #65
Ken G said:
Actually, the ideal gas approximation only breaks down at the low-mass end. What happens at the high-mass end is that relativity becomes important, because much of the pressure is from relativistic particles (photons).
Either way, the pressure ceases to be described by ##PV=nRT##. And that makes a crucial difference to stability.
Ken G said:
The first pass for understanding all main-sequence stars (except the low-mass end where convection starts to dominate and degeneracy can also appear) is radiative diffusion. One can then add convection as an improvement that does not have a crucial effect on the luminosity, and one then faces the problem that there is no ab initio model for the complex process that is convection.
Then what you need to do is find out, qualitatively, what the effect of convection is.
 
  • #66
snorkack said:
Either way, the pressure ceases to be described by ##PV=nRT##.
Sure, it becomes radiation pressure. Which is also easy, but that's for a different thread. Suffice it to say that the mass-luminosity relation is somewhat easier at the high-mass end than over the rest of the main sequence, since the luminosity is characterized by the "Eddington luminosity", which means it is proportional to the stellar mass divided by the average opacity per gram. As is always true with radiative diffusion, the tricky step is in getting the opacity, but it helps when it is predominantly Thomson scattering opacity.
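For reference, the standard form behind "mass divided by the average opacity per gram" comes from balancing the radiative force per gram, ##\kappa L/(4\pi r^2 c)##, against gravity, ##GM/r^2##:
$$L_{\rm Edd} = \frac{4\pi G M c}{\kappa}.$$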
Then what you need to do is find out, qualitatively, what the effect of convection is.
Don't forget about the role of rotation. Magnetic fields. Non-Maxwellian velocity distributions. Plasma instabilities. Nah, I think I'll just understand how the star basically works, instead. For that, we already know the answer from the successes in the above narrative in making reasonable (yet certainly approximate) predictions about the effects of evolution on the luminosity and radius. That was the goal, all along.
 
  • #67
snorkack said:
Now, make the liquid 8 times denser by compressing the drop, at unchanged total mass, to half its linear size.
In that case, the surface gravity of the drop increases 4 times (because of the inverse square law of gravity). Since the density of the liquid was increased 8 times, the weight of a 1 m column was increased 32 times. But since the depth of the column from surface to centre was halved, the pressure at the centre was increased 16 times.

Now, try making the liquid 8 times denser but this time by adding mass to the drop at constant size.
In this case, the surface gravity of the drop increases 8 times (gravity is proportional to mass). Since the density of the liquid was still increased 8 times, the weight of the 1 m column is increased 64 times, and since the depth of the column is now unchanged, the pressure at the centre was increased 64 times as well.

Sorry for my late reply regarding this. I was wondering if there's a way to calculate the critical point at which adding mass no longer makes a planet increase in size. I understand that this depends on the compressibility of the material that the planet consists of.
Could you perhaps give me an example of such a calculation for a planet that theoretically consists entirely of water? I'm aware you've given me its compressibility, but I'm not sure how to calculate the critical point.

Also, I have a few remarks that I'd really like some verification on to make sure I understand this:

- If the compressibility of a material is 0 (i.e. density doesn't increase), then adding that material to make a planet would make the planet increase in radius, where ##M ∝ r^3##.
So if the planet's mass is increased by a factor of x, the radius ##r## would increase by a factor of ##x^{\frac{1}{3}}##, and the gravity force would increase by a factor of ##x / x^{\frac{2}{3}} = x^{\frac{1}{3}}##?

- If a material has a compressibility of 100% (i.e. density changes but not radius), then adding that material to make a planet would make the planet increase in density, where ##M ∝ ρ##. So that if a planet's mass is increased by a factor of x, the radius doesn't change and the density, gravity force and pressure are all increased by a factor of x?

- If compressibility at 100% makes a planet not change in radius, what then is the reason that a planet shrinks in size when adding matter? The only conclusion I can draw is that the opposing pressure force is not enough. But why is the pressure force eventually not enough?
 
  • #68
Drakkith said:
Let me see if I have this straight:

When hydrogen fusion ceases in the core of a solar-mass star, the core contracts until it is a hot, degenerate mass of helium. This contraction increases the gravitational pull on the shell of hydrogen just outside the core. This increased gravity causes the shell to compress and heat up until it reaches fusion temperatures. But because the gravity is so high, the temperature needed to stabilize it against further contraction is much higher than the temperature in the main-sequence core. This causes the fusion rate to skyrocket until it provides enough energy to offset the energy loss from the shell and to puff out the shell and outer envelope. This reduces the density of material in the shell, stabilizing the fusion rate by way of limiting the amount of fusion fuel in the shell.

Is that mostly correct?

Degenerate matter does not respond to temperature the same way as non-degenerate matter. The available electron orbitals are all filled with electrons. The electrons cannot change to higher or lower energy states.

I think of heat as the random motion of atoms. If you imagine locking the electrons into an outside frame, then the motion can only be movement of the nuclei. Doubling or halving the momentum of a nucleus does not change the degenerate frame it is moving in, so the higher temperature does not cause increased volume. Increased gravity changes the electron-degenerate frame itself. The degenerate gas remains electrically neutral, so the nuclei are also closer together when gravity increases.

Fusion is still temperature dependent. The fusion rate can rapidly increase because the temperature is not changing volume. Collisions happen more frequently as the temperature increases.

The outer layer(s) are not degenerate. Fusion taking place inside the degenerate core can cause electrons in the surface gas to move away. Surface gases with escape velocity can move away and join the outer shells, or leave as part of the planetary nebula or nova. Surface gases without escape velocity remain on the surface and can radiate energy.
 
  • #69
stefan r said:
Degenerate matter does not respond to temperature the same way as non-degenerate matter. The available electron orbitals are all filled with electrons. The electrons cannot change to higher or lower energy states.

The part of my post that you highlighted refers to the non-degenerate shell, not the degenerate core.
 
  • #70
JohnnyGui said:
Sorry for my late reply regarding this. I was wondering if there's a way to calculate the critical point at which adding mass no longer makes a planet increase in size. I understand that this depends on the compressibility of the material that the planet consists of.
Could you perhaps give me an example of such a calculation for a planet that theoretically consists entirely of water? I'm aware you've given me its compressibility, but I'm not sure how to calculate the critical point.
It certainly gets complex, because the compressibility has a complex dependence on pressure, and therefore is not the same at different depths within the same body.
JohnnyGui said:
Also, I have a few remarks that I'd really like some verification on to make sure I understand this:

- If the compressibility of a material is 0 (i.e. density doesn't increase), then adding that material to make a planet would make the planet increase in radius, where ##M ∝ r^3##.
So if the planet's mass is increased by a factor of x, the radius ##r## would increase by a factor of ##x^{\frac{1}{3}}##, and the gravity force would increase by a factor of ##x / x^{\frac{2}{3}} = x^{\frac{1}{3}}##?
To avoid roots: if the material has no compressibility, and mass is increased by a factor of ##x^3##,
then radius is increased by a factor of ##x##,
surface gravity, and the pressure of a column of given depth, is increased by a factor of ##x##,
and central pressure is increased by a factor of ##x^2##.
JohnnyGui said:
- If a material has a compressibility of 100% (i.e. density changes but not radius), then adding that material to make a planet would make the planet increase in density, where ##M ∝ ρ##. So that if a planet's mass is increased by a factor of x, the radius doesn't change and the density, gravity force and pressure are all increased by a factor of x?
If mass is increased by a factor of ##x^3##, then density is increased by a factor of ##x^3##, surface gravity is increased by a factor of ##x^3##, the pressure of a column of fixed depth is increased by a factor of ##x^6##, and so is the central pressure.
JohnnyGui said:
- If compressibility at 100% makes a planet not change in radius, what then is the reason that a planet shrinks in size when adding matter? The only conclusion I can draw is that the opposing pressure force is not enough. But why is the pressure force eventually not enough?

Compressibility is more of a local property of matter. If the density is independent of pressure, so that the pressure can increase with practically no change of density, then the density is constant and the planet grows with added mass. If the pressure increases with the square of density, then the planet does not change in radius with added mass.
And if the pressure increases in proportion to density, as is the case for an isothermal ideal gas, then the planet will shrink by itself, without any added mass.
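Both scalings (and the two liquid-drop examples quoted a few posts back) are captured by a single order-of-magnitude estimate, a rough sketch from ##P_c \sim \rho g R## with ##\rho \sim M/R^3## and ##g \sim GM/R^2##:
$$P_c \sim \frac{GM^2}{R^4}.$$
The incompressible case (##M \to x^3 M##, ##R \to xR##) gives ##P_c \to x^2 P_c##, and the fixed-radius case (##M \to x^3 M##) gives ##P_c \to x^6 P_c##.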
 
