How does the size of a system affect its access to new states with added heat?

In summary: in real-world terms there can be an infinite number of configurations of a system, so if we apply the definition strictly there can never be a maximum. In everyday use, though, we can say that entropy reaches a maximum when the number of configurations of the system reaches its maximum.
  • #1
mmmcheechy
I am interested in finding as many unitless representations of the second law of thermodynamics as possible. Other than gravity, it is the physical law that we experience most in daily life; to a greater or lesser extent it can be used to describe everything that controls our existence. I'm looking for as simple an equation as possible to describe the idea that entropy strives toward a maximum. I have a couple of equations in mind, but I would like to see what some fellow nerds can come up with.
 
  • #2
mmmcheechy said:
I am interested in finding as many unitless representations of the second law of thermodynamics as possible. Other than gravity, it is the physical law that we experience most in daily life; to a greater or lesser extent it can be used to describe everything that controls our existence. I'm looking for as simple an equation as possible to describe the idea that entropy strives toward a maximum. I have a couple of equations in mind, but I would like to see what some fellow nerds can come up with.
Your premise is incorrect. Entropy does not strive toward a maximum. There is no maximum. All we can say is that in any theoretical process, the entropy of the universe cannot decrease and that in any real process, the entropy of a system and its surroundings will increase. Therefore, the entropy of the universe is always increasing.

Also, entropy has units of energy/temperature so I am not sure how you can mathematically represent entropy without using units.

AM
 
  • #3
"the energy of the world is constant; the entropy of the world strives toward a maximum"
Clausius
I'm sure you can find, as I have, multiple textbooks, both old and new, that use the exact phrase "entropy strives toward a maximum", so don't start a war over semantics when you knew what I was talking about.
"Your premise is incorrect. Entropy does not strive toward a maximum. There is no maximum"
Your argument does not invalidate mine. Any system has, or can have, a theoretical maximum; it is, however, a matter of the resolution of measurement.
But I should have been clearer about what I mean by a unitless equation. [itex]E=mc^{2}[/itex] is the equation for mass-energy equivalence, but [itex]c[/itex] carries units and does nothing to convey to someone else's mind that matter can be converted to energy and vice versa; it only supplies the ratio of the units in an actual calculation, namely the speed of light squared.
In my search for a simple representation of the second law of thermodynamics, I didn't want to involve units of temperature or heat flow, etc. I didn't want to use
[itex] \oint \frac{\delta Q}{T} \leq 0. [/itex]
or even
[itex] dS = \frac{\delta Q}{T} [/itex]
I want to use the idea of entropy to define irreversibility without using a dictionary. I want a mathematical image that shows entropy always increases.
Planck's symbolism for this idea was [itex]S - S' \geq 0[/itex].
I'm looking for something simple like this, where S represents entropy, even if the expression is otherwise not very useful for calculations.
 
  • #4
Andrew Mason said:
All we can say is that in any theoretical process, the entropy of the universe cannot decrease and that in any real process, the entropy of a system and its surroundings will increase. Therefore, the entropy of the universe is always increasing.

Doesn't that necessarily entail that the universe has surroundings?
 
  • #5
I think the most useful concept of entropy is the natural log of the number of configurations of the system, counted within some allowed class or set of constraints that control the entropy. Often this gets multiplied by the Boltzmann constant k, but that's just because T has arbitrarily chosen units that bring in k. There's no reason not to measure T in energy units, and then S is unitless and you can just drop the k.
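To spell out the bookkeeping (nothing beyond the standard definitions, just rearranged): if [itex]\Omega[/itex] is the number of configurations being counted and temperature is measured in energy units, [itex]\tau = kT[/itex], then [itex]S = \ln \Omega[/itex] is a pure number, the Clausius relation becomes [itex]dS = \delta Q / \tau[/itex], and the second law is simply [itex]\Delta S \geq 0[/itex] for an isolated system. Every symbol is now dimensionless except the energies.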
 
  • #6
mmmcheechy said:
"the energy of the world is constant; the entropy of the world strives toward a maximum"
Clausius
I'm sure you can find, as I have, multiple textbooks, both old and new, that use the exact phrase "entropy strives toward a maximum", so don't start a war over semantics when you knew what I was talking about.
I thought you were talking about entropy approaching a maximum. Just because Clausius said it does not mean it is right. Clausius also said the energy of the universe is constant.
mmmcheechy said:
Your argument does not invalidate mine. Any system has, or can have, a theoretical maximum; it is, however, a matter of the resolution of measurement.
Why can't the entropy of a system increase without limit?

AM
 
  • #7
Andrew Mason said:
Why can't the entropy of a system increase without limit?
That's exactly what I thought I had said, or at least what I hoped would be understood. What makes infinity not a maximum? By the strictest use of the word, maximum does imply a limit. In my mind, with any real system, entropy is going to be closer to infinity than to zero (infinitely closer), so the maximum becomes infinity.
When you replied to my original post with this statement,
Andrew Mason said:
Entropy does not strive toward a maximum. There is no maximum.
I understood (or perhaps better, misunderstood) that statement to mean that you had placed a limit on entropy.
 
  • #8
There are many situations where entropy does reach a maximum, subject to whatever constraints are in play. This is how things like the Maxwell-Boltzmann velocity distribution or the Planck spectrum are derived.
 
  • #9
Is there any universal time equation?
 
  • #10
Or any universal equation for time?
 
  • #11
I'm sorry, I don't understand why you're doing this.

mmmcheechy said:
I'm looking for as simple an equation as possible to describe the idea that entropy strives toward a maximum. I have a couple of equations in mind, but I would like to see what some fellow nerds can come up with.

Some fellow nerds have already come up with it; it's called [itex]S = k\ln\Omega[/itex]. It is extremely simple, and it relates the counting of microstates to temperature and to the entire concept of thermal equilibrium. That is pretty appealing. Why would you waste your time searching for an aesthetically pleasing way of saying the same thing, especially when the way it is being said already is extremely simple and illuminating?

What could be a better way to express entropy than an equation you can use to derive the precious ideal gas law from, with suitable assumptions? An equation that fully describes how heat flows from hot to cold and how systems come to equilibrium? Using this equation you can predict the most likely state of a system and how much more likely it is than any other macrostate. Then, given assumptions about how many times you measure a system per unit time, you can determine how many times you would have to measure a system for it to go through all accessible microstates. What is more telling than that?

I am not saying there is no theoretical work to be done in the area of irreversibility or the arrow of time, simply that you might as well study the phenomena and not try to rearrange symbols for some personal infatuation with entropy... what are you trying to make, a tattoo or something?
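To show how little machinery that equation needs, here is a rough Python sketch (my own toy example, not from any textbook): count the microstates of N two-state "spins" and watch the most probable macrostate swamp everything else.

import math

def ln_omega(N, n_up):
    # ln of the number of microstates with n_up spins up: ln C(N, n_up)
    return math.lgamma(N + 1) - math.lgamma(n_up + 1) - math.lgamma(N - n_up + 1)

N = 10_000
even_split = ln_omega(N, N // 2)  # the equilibrium macrostate
for n_up in (5000, 5100, 6000, 9000):
    s = ln_omega(N, n_up)  # S/k = ln(Omega); multiplying by k only converts to J/K
    print(n_up, round(s, 1), "weight relative to even split ~ e**%.0f" % (s - even_split))

A 60/40 split, for instance, comes out suppressed relative to 50/50 by a factor of roughly e^-200; that enormous lopsidedness is the entire content of "entropy strives toward a maximum" in this picture.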
 
  • #12
Ken G said:
There are many situations where entropy does reach a maximum, subject to whatever constraints are in play. This is how things like the Maxwell-Boltzmann velocity distribution or the Planck spectrum are derived.
Entropy is a quantity whose measurement requires an equilibrium state. The Maxwell-Boltzmann distribution describes the distribution of molecular speeds only in an equilibrium state. Similarly, the Planck distribution describes the energy distribution of photons in thermal equilibrium. So I don't see how entropy is used to derive those distributions.

Maximum entropy for a closed system constrained to a certain volume will be achieved when all parts of the system are in complete equilibrium with each other and there is no internal source of energy or energy sink. At that point, energy will not flow within the system and, since it is closed to the rest of the universe and has fixed volume, it cannot exchange work or heat with anything else, so nothing will happen. That will be a state of maximum entropy for that system.

AM
 
  • #13
Andrew Mason said:
Entropy is a quantity whose measurement requires an equilibrium state. The Maxwell-Boltzmann distribution describes the distribution of molecular speeds only in an equilibrium state. So I don't see how entropy is used to derive those distributions.
Just start with the definition of entropy and maximize it, subject to the total energy available, and you get M-B without ever mentioning temperature or thermal equilibrium. That's the only difference: if you derive the M-B distribution from thermal equilibrium, you are specifying the temperature; if you derive it by maximizing entropy, you are specifying the total energy. Put differently, if you have a gas in a box that is completely insulated from its surroundings, and the gas has a given internal energy, you can derive the Maxwell-Boltzmann distribution for that gas simply by maximizing its entropy subject to its internal energy. No T, no thermal equilibrium, just entropy.

But it will be the equilibrium state, because by the second law it has nowhere else to go once it reaches maximum entropy. Ergo, saying that the gas "seeks maximum entropy" is tantamount to saying it "reaches equilibrium"; the two concepts are very close, and both are important to the usefulness of thermodynamics. If you change the conditions, the maximum entropy will change, so we cannot say systems "seek maximum entropy" in some kind of absolute way, but we can say that this does indeed tend to happen given the specific constraints in place.
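Roughly, the algebra behind that (the standard maximum-entropy argument, sketched loosely): write the entropy of the velocity distribution as [itex]S = -\sum_i p_i \ln p_i[/itex], where [itex]p_i[/itex] is the fraction of molecules in velocity cell i, with energy [itex]\epsilon_i = \tfrac{1}{2}mv_i^2[/itex]. Maximize S subject to [itex]\sum_i p_i = 1[/itex] and [itex]\sum_i p_i \epsilon_i = U/N[/itex] with Lagrange multipliers [itex]\alpha[/itex] and [itex]\beta[/itex]; setting the derivative with respect to each [itex]p_i[/itex] to zero gives [itex]p_i \propto e^{-\beta \epsilon_i}[/itex], which is the Maxwell-Boltzmann form, with [itex]\beta[/itex] fixed by the energy constraint. Only afterwards does one notice that this [itex]\beta[/itex] plays the role of [itex]1/kT[/itex]; no temperature is assumed going in.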
 
  • #14
I agree with Ken G. A good way to look at entropy is to divide the thermodynamic entropy S by Boltzmann's constant k, which gives a dimensionless entropy, [itex]H=S/k[/itex], equal to the Shannon information entropy of the system. In the second law, the [itex]T\,dS[/itex] term is replaced by [itex](kT)\,dH[/itex], where [itex]kT[/itex] is a new temperature scale with units of energy. By doing this, you get rid of the rather artificial temperature units, and the Boltzmann constant is eliminated in favor of the new, very specially designed temperature scale.

Now Boltzmann's famous equation [itex]S=k\ln W[/itex] becomes [itex]H=\ln W[/itex], where W is the number of different microstates the system could possibly be in that would exhibit the same macroscopic parameters as the state you are looking at. What this equation is saying is that, if you use base-2 logarithms, the info-entropy is equal to the average number of yes/no questions you would have to ask in order to determine the microstate of the system, given that you know the macrostate (i.e. temperature, pressure, etc.). Shannon's definition of information entropy is basically just that: the information entropy (H) is the amount of missing information, and the amount of missing information is the average number of yes/no questions you have to ask to recover that missing information.

Consider a digitized 256x256 image whose pixels are either black or white. If the left half is black and the right half is white, how many ways can this happen? One. OK, now suppose you have blurry vision and cannot distinguish down to the pixel level, only to about a 4x4 box; then there are more ways, millions maybe, that you couldn't tell apart. But if you are looking at a picture that is flat grey, how many ways now? An enormous number, something like 10^19 different ways that could give you flat grey with your blurry vision. If the pixels in the half-black, half-white picture start changing randomly, the picture will start to turn flat grey, and the number of ways climbs towards that maximum. The entropy increases.
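To make the counting concrete, here is a scaled-down version of that example in Python (my own toy numbers: a 16x16 image rather than 256x256, so the arithmetic stays small):

import math

# "Blurry vision" resolves only 4x4 blocks of a 16x16 black/white image.
# A block "looks black" only if all 16 of its pixels are black (1 way);
# it "looks grey" if exactly 8 of its 16 pixels are black (C(16,8) ways).
blocks = (16 // 4) ** 2                # 16 blocks in the image
ways_grey_block = math.comb(16, 8)     # 12870 pixel patterns per grey-looking block

# Sharp half-black / half-white picture: every block is all black or all white.
ways_split = 1 ** blocks               # exactly one pixel pattern
# Flat-grey picture: every block is half black, half white.
ways_grey = ways_grey_block ** blocks  # about 6e65 pixel patterns

print("ln W, sharp split:", math.log(ways_split))  # 0.0
print("ln W, flat grey:  ", math.log(ways_grey))   # about 151

The exact numbers depend on how coarse the blur is, but the point is the huge gap in ln W between the ordered picture and the grey one.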

In the thermodynamic system, just as with this example, you cannot see the individual molecules, your macroscopic equipment "blurs" the system you are looking at, only being able to measure temperature, pressure, etc. for a small volume containing many molecules, not the individual molecular energies. The collisions between the molecules cause each molecule to change its energy, just as the pixels started randomly changing their color. Eventually the system you are looking at can be represented by a huge number of possible energies of the individual molecules. Just as the picture goes grey and the number of possibilities becomes huge, so the gas temperature, pressure, density go flat and the number of possible ways becomes huge. The entropy increases.
 
  • #15
That's a very nice description of information entropy, thank you. A crucial point you make is that the entropy is not a physical entity; it depends on what we claim to know about the system and what we are choosing to treat as unknown (or effectively unknowable).
 
  • #16
Ken G said:
That's a very nice description of information entropy, thank you. A crucial point you make is that the entropy is not a physical entity; it depends on what we claim to know about the system and what we are choosing to treat as unknown (or effectively unknowable).

I get worried about saying it is not a physical entity. We can envision various different ways of measuring the same system, and we will come up with different values of entropy, each of which is valid, so entropy is not on the same footing as temperature, pressure, etc. However, the differential dS does not change, as long as things don't get too microscopic (your vision does not come too close to perfect in the digital-picture analogy), so it has more "physicality" than the entropy S itself.

I also have trouble intuitively understanding [itex]dU=(kT)dH[/itex] (assuming constant volume and number of particles). I have trouble understanding the meaning of (kT) in this formulation, how the amount of missing information yields the internal energy. I can do the math, the whole derivation of Boltzmann, etc., and it all makes mathematical sense to me. I understand kT is twice the energy per degree of freedom, etc., but I feel like I still don't intuitively get it. I mean, if H is the number of yes/no questions, then kT is the energy per question. I'm having trouble with that.
 
  • #17
Andrew Mason said:
Your premise is incorrect. Entropy does not strive toward a maximum. There is no maximum. All we can say is that in any theoretical process, the entropy of the universe cannot decrease and that in any real process, the entropy of a system and its surroundings will increase. Therefore, the entropy of the universe is always increasing.

Also, entropy has units of energy/temperature so I am not sure how you can mathematically represent entropy without using units.

AM

A process that increases entropy with no heat exchange is an irreversible adiabatic process.
 
  • #18
Rap said:
I also have trouble intuitively understanding [itex]dU=(kT)dH[/itex] (assuming constant volume and number of particles). I have trouble understanding the meaning of (kT) in this formulation, how the amount of missing information yields the internal energy.
The way I see it, it's not that the missing information yields the internal energy, it is that the former is how we can understand the presence of the latter. The fundamental rule is that missing information can be cast in terms of a number of equally likely states, the counting of which quantifies the missing information, as you so clearly explained. But the number of equally likely states also connects with the likelihood the system will find itself in that class of states, and that in turn connects with the affinity of a system to draw energy from a reservoir.

It is the reservoir, not the system, that brings in the concept of kT -- kT means the energy that the reservoir "covets." By that I mean, if you add kT of energy to a reservoir, you increase by a factor of e the number of equally likely states that the reservoir has access to. This is really the meaning of T. Interestingly, it doesn't matter how big the reservoir is -- a lump of coal or a planet, if both are at 1000 K, will "covet" the same energy kT, and both will have their number of states multiplied by e if they get kT of energy. So given that reservoirs covet energy in this sense, they are also loath to part with it, but they can be coaxed into parting with kT of energy if some other system can have its number of accessible states increase by more than the factor e by receiving that energy.
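In case it helps, the one-line version of that "factor of e" statement (just [itex]S = k\ln\Omega[/itex] combined with [itex]dS = \delta Q/T[/itex]): adding heat [itex]\delta Q[/itex] to a reservoir at temperature T changes [itex]\ln\Omega_{\rm res}[/itex] by [itex]\delta Q/kT[/itex], so the count is multiplied by [itex]\Omega_{\rm new}/\Omega_{\rm old} = e^{\delta Q/kT}[/itex]. Put in [itex]\delta Q = kT[/itex] and that factor is exactly e, whatever the size of the reservoir; the multiplicative (rather than additive) character of the change is why the size drops out.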

The net result will be an increase in the number of accessible states for the combined system, and so, by sheer probability, this is more likely to happen. Heat will continue to cross out of the reservoir and into the system until the next kT of energy only increases the number of states in the system by the factor e (or more correctly, the next dQ increases the number of states by only the factor 1+dQ/kT), at which point we have equilibrium because the number of total states cannot be increased any more (nor can the entropy, as you point out). So what this all means is, there is a connection between the number of questions you need to ask to pinpoint the particular state of the system out of the full class it might be in, and the fact that a big, full class has a proportionately high probability of containing the actual state. The place where the internal energy comes in is that the more states the system can gain access to, the better it is at drawing energy from the reservoir, to maximize the total number of combined states, and thus also maximizing the number of questions you'd need to answer to cull out the actual state from the class of possibilities.

Rap said:
I mean, if H is the number of yes/no questions, then kT is the energy per question. I'm having trouble with that.
I hope you now see that the reason for this is that each question you need to answer represents the presence of states that perfectly offset the loss of those states by the reservoir when it loses its "coveted" energy (the total number of states being the product, not the sum, of the possible states in each component). So it's all about maximizing the combined number of states that the full system+reservoir has access to.

The reason I said it depends on what we know, rather than on something physical outside of us, is that the actual state of the combined system is always just one thing -- it is only in how we classify it and group it with indistinguishably similar states that we come upon the concept of entropy and the concept of the probability of belonging to that classification group. But you're right, the energy is there; that much is physical -- it is the explanation for why that energy is there that depends on how we classify things. The "real physical reason" the energy is there must depend on microphysics that we are simply not tracking, not on entropy. But the entropy is a story we can tell, based on what we are tracking, that can be used to determine how much energy will come across from the reservoir, via microphysics that is not in our story but is the real physical reason for that energy being there (if there is such a thing as a real physical reason).
 
  • #19
Ok, I will have to read that more than once and think about it. Give me a few days :)
 
  • #20
Ken G said:
...
The place where the internal energy comes in is that the more states the system can gain access to, the better it is at drawing energy from the reservoir, to maximize the total number of combined states, and thus also maximizing the number of questions you'd need to answer to cull out the actual state from the class of possibilities.

In my mind, I say ok, suppose we have a reservoir at temperature T and two systems, large (L) and small (S), each at somewhat lower temperature T'. If I transfer kT to the small system, [itex]dU_S=kT=kT'dH_S[/itex]. If I had transferred to the larger system, [itex]dU_L=kT=kT'dH_L[/itex] so it looks like dH's are the same, equal to T/T' (which is larger than one question, but the same for both).

I don't see how the larger system is "better at drawing energy from the reservoir". Am I misinterpreting your statement?
 
  • #21
Rap said:
I don't see how the larger system is "better at drawing energy from the reservoir". Am I misinterpreting your statement?
Yes. A system does not gain access to more states, for a given dQ, simply because it is larger. Any system that is large enough to be considered a reservoir itself, and so has a meaningful T of its own, will continue to draw heat from a reservoir at T so long as it can gain access to new states by a larger factor than the reservoir is losing. This is true regardless of the size of the system drawing heat, so long as it is large. Since the reservoir is losing a factor 1+dQ/kT of states (that's the meaning of T), any large system, regardless of size, will draw heat if it gains access to more than the factor 1+dQ/kT of new states on receiving dQ. When the system gains exactly the factor 1+dQ/kT of new states, we say it has equilibrated at T. All of this holds whether we have a rock or a mountain at T. The factor by which the system increases its access to states does not depend on the size of the reservoir; that's why T means the same thing for all reservoirs.

If, on the other hand, the system in question is too small to be considered a reservoir, say it is a single atom, then it also does not really have its own T; instead it has a statistical likelihood of various energy states. If each atomic state has a nondegenerate energy, then each energy state is just one physical state, so the probability of each energy state depends only on the factor of states lost by the reservoir it is in thermal contact with. In other words, if the system is not itself a reservoir, then we must speak in terms of probabilities rather than of the definite most likely state of the (small) system, and those probabilities are determined entirely by the T of the reservoir, rather than by saying the system has a T. Collecting a large enough ensemble of such small systems eventually graduates to the meaning of a reservoir at T, at which point we have a population distribution (like the Maxwell-Boltzmann distribution) instead of a probability distribution. That's when we can talk about the factor by which the number of accessible states gets multiplied when we add heat, and that factor will not depend on the size of the reservoir (which is what you just showed).
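To put that in symbols (the usual canonical-ensemble counting, stated loosely): if the small system takes energy E from a reservoir of total energy U, the probability of that outcome is proportional to the number of reservoir states that remain, [itex]P(E) \propto \Omega_{\rm res}(U-E) = e^{S_{\rm res}(U-E)/k} \approx e^{S_{\rm res}(U)/k}\,e^{-E/kT}[/itex], using [itex]\partial S_{\rm res}/\partial U = 1/T[/itex]. So the familiar Boltzmann factor [itex]e^{-E/kT}[/itex] for the atom comes entirely from how many states the reservoir gives up, exactly as described above.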

The bottom line is that systems large enough to have their own T gain access to new states, when you add dQ, in a way that does not depend at all on the size of the system; it depends only on the system's T, because that's just what T means. Systems too small to be considered reservoirs, and so without a T of their own, take on various amounts of energy with various probabilities, and those probabilities depend only on how loath the reservoir is to give up that heat, not on the number of new states the (small) system gets access to (often the small system is always in a state chosen from a class of only one anyway, say the nondegenerate energy level of an atom).
 

FAQ: How does the size of a system affect its access to new states with added heat?

What is the second law of thermodynamics?

The second law of thermodynamics is a fundamental principle stating that in any isolated system the total entropy (a measure of disorder) can never decrease over time, and in any real, irreversible process it increases. This means that natural processes tend to move towards states of higher disorder, or towards equilibrium.

What is the difference between the first and second law of thermodynamics?

The first law of thermodynamics, also known as the law of conservation of energy, states that energy cannot be created or destroyed, only transformed from one form to another. The second law, on the other hand, concerns the direction of energy transfer and states that the total entropy of an isolated system can never decrease over time.

Can the second law of thermodynamics be violated?

No. The second law of thermodynamics is considered a universal law and has been extensively tested and observed to hold. While small fluctuations can occur on a microscopic scale, on a macroscopic scale the law always holds true.

What is heat death in relation to the second law of thermodynamics?

Heat death, also known as the "heat death of the universe," is a hypothetical state of maximum entropy that the universe is predicted to eventually reach according to the second law of thermodynamics. In this state, all energy will be evenly distributed throughout the universe, rendering it unable to sustain any further thermodynamic processes.

How does the second law of thermodynamics relate to the concept of entropy?

The second law of thermodynamics is often described in terms of entropy, which is a measure of the disorder or randomness of a system. The law states that the total entropy of an isolated system can never decrease over time, meaning that the system will tend towards a state of higher disorder, or equilibrium.
