How Does the Universe Use Temperature Differences to Create Structures?

In summary, thermodynamics permits processes that are maximally irreversible, such as the free expansion of a perfect gas. This makes possible the creation of structures, such as the universe we live in, without requiring any work to be done.
  • #71
Jimster41 said:
##exp## just means "expectation value", right?

I think it means the exponential function.

Jimster41 said:
I am confused a few paragraphs later by the "Entropy change of the bath = ##-\beta Q## "

As I understand it, the particular process being modeled in this example is reversible, so the total entropy change is zero. That means the entropy change of the bath must be minus the entropy change of the gas. But I may be misunderstanding, since I've only skimmed the paper.
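As a sanity check on the sign (standard bookkeeping, in units where ##k_B = 1##): if the gas absorbs heat ##Q## from a bath at inverse temperature ##\beta##, then

[tex]\Delta S_{\rm bath} = -\beta Q, \qquad \Delta S_{\rm total} = \Delta S_{\rm gas} + \Delta S_{\rm bath} = 0 \;\Rightarrow\; \Delta S_{\rm gas} = +\beta Q[/tex]

where the second equation holds only for a reversible process.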

Jimster41 said:
and by the expression "odd under time reversal".

As I understand it, that just means that, if some process has a given entropy change, the time reverse of that process must have minus that entropy change. But again, I have only skimmed the paper so there may be subtleties I'm missing.
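In symbols, with ##\omega## denoting the entropy production of a trajectory and the bars denoting time reversal (this is just restating the sign property in the paper's notation):

[tex]\omega\left[\overline{x}(-t), \overline{\lambda}(-t)\right] = -\,\omega\left[x(+t), \lambda(+t)\right][/tex]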
 
  • #72
Thanks Peter. Much appreciated. It's encouraging to know they were understandable questions.

I was on board with the entropy conservation between the system and bath. I was just confused about the sign convention. I was expecting a negative entropy change for the system and a positive entropy change for the bath. But I realize I am imagining that the energy change to the system is decreasing disorder. It seems like it could be described either way...
 
Last edited:
  • #73
Jimster41 said:
I was expecting negative entropy change for the system, and positive entropy change for the bath.

Assuming I'm correct that the process being modeled is reversible, then this will be true for one direction of the process. The entropy changes for the other direction of the process would have the opposite sign, positive for the system and negative for the bath.
 
  • Like
Likes Jimster41
  • #74
When I look at G. Crooks' "Generalized Formulation of the Fluctuation Theorem"

[itex]\frac{P_F\left[+\omega\right]}{P_R\left[-\omega\right]} = e^{+\omega}[/itex]

[itex]\omega = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) - \beta Q[x(+t), \lambda(+t)][/itex]

and ask what it would take for that [itex]e^{+\omega}[/itex] to vary in a way that would "select" some transitions over others, in a non-linear (perhaps periodic) way, due to the selectee having a smaller or larger [itex]\omega[/itex], I can't help wondering about those natural logarithms and [itex]e^{i^n}[/itex]. I know that [itex]i[/itex] is discrete scale invariant (its pattern repeats under exponentiation). If the "entropy" terms due to the configuration probability of the initial and final configurations were to sum to zero, or to a lower value than for some random pair of states or transitions, in some periodic way, then transition probabilities would support non-linearly varying likelihoods of configuration selection: a population of favored configurations producing significantly "more" or "less" entropy than random selections. Just a whack thought. And it seems to be consistent with what "larger coarse-graining regions" means, though it's a slightly different perspective on why those regions might be what they seem to be. Granted, the equation above is comparing forward and reverse transition probabilities, but it seems like a general case of any old transition-probability comparison.
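For what it's worth, one concrete version of the "sum to zero" case, just plugging into the definition above: if the initial and final configurations are equally probable, the two log terms cancel and

[tex]\omega = -\beta Q[x(+t), \lambda(+t)], \qquad \frac{P_F[+\omega]}{P_R[-\omega]} = e^{-\beta Q}[/tex]

so the "selection" ratio would then be set entirely by the heat exchanged.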

Then I read something over in the thread below, where [itex]i[/itex] is some sort of candidate for the value of the spooky Immirzi parameter [itex]\gamma[/itex].

https://www.physicsforums.com/threa...ntropy-from-three-dimensional-gravity.810372/

Whiiiich, I do not understand, though it sounds tantalizingly related, if gravity is entropic.
http://en.wikipedia.org/wiki/Immirzi_parameter

Random googly-eye connections? :)... but time- and scale-periodic invariance is in there somewhere... just got to be... else, where in the heck does it all come from?
 
Last edited:
  • #75
PeterDonis said:
True. I'm just pointing out that our models of reality are not the same as reality.
Absolutely.

PeterDonis said:
there is no sharp line where an atom "ends" and the rest of the universe "begins". Any such line we might pick out is arbitrary, even though the atom itself is not.
I like that way of putting it.

PeterDonis said:
Yes; the rationale is that we want to explain and predict things, and we need models to do that, and the models we have come up with that make good predictions require us to draw boundaries and pick out particular systems and interactions and ignore everything else.

Yes. That's why I don't think picking out organizations as objects of interest is fundamentally different from anything else done in physics. In fact, organizations tend to suggest themselves to the observer, because one of their main self-repair tasks is boundary maintenance. They maintain a boundary that is necessarily permeable to the outside world. They must be able to take in "food" and get rid of "waste". But they have to maintain some distinction between outside and inside, because otherwise they would wear themselves out trying to control the entire world around them. In the social-political context, this is why control freaks tend to crack up, or at least cause lots of trouble for the rest of us.

PeterDonis said:
But is that because those models are really the best possible models, the ones that really do "carve nature at the joints"? (Btw, I think you're right that that phrase originated with Plato.) Or are they just the best models we have come up with thus far? Could there be other even better models, that we just haven't conceived of yet, that carve nature at different "joints"?

Before you answer "how could that happen?", think carefully, because that's exactly what did happen when we discovered many of our current models. Take GR as an example. In GR, gravity is not even a force; it's spacetime curvature. So many questions that a Newtonian physicist would want to ask about gravity aren't even well formed in GR--at least not if you look at the fundamentals of the theory. Of course we can build a model using GR in the weak field, slow motion limit and show that in that limit, Newton's description of gravity works well enough. But conceptually, GR carves gravity at very different "joints" than Newtonian physics does. The same thing might happen to GR when we have a theory of quantum gravity; we might find that theory carving nature at different "joints" yet again, and explaining why GR works so well within its domain of validity by deriving it in some limit.

I really like these paragraphs. Very well put.

After my passionate defense of a version of commonsense realism, you might be surprised to hear me say this: I very much doubt that we will ever carve nature perfectly at the joints. To my mind, it must always be a work in progress. But to me, the important part is that it is progress. Newton's physics really is an improvement on Aristotle's physics. Einstein's physics really is an improvement on Newton's. By improvement, I mean that it captures more of reality--makes more of it available to perception.

When thinking about how it is we can know things, an idea that I like is that of a stable perception. A perception that you keep coming back to, even after actively trying to get multiple points of view, multiple opportunities to disconfirm it, is a perception that you can't help holding on to. It is a stable perception. Models that provide a more stable perception of the world are better than ones that don't. They may not be the most stable possible, but they have something of reality in them.

I am also open to the idea that there could be several distinct but equally good ways of carving nature at the joints. It is hard to picture how it would work in the sciences, but I can draw an analogy with mathematics. There are mathematical structures that can be axiomatized in several different ways, each system having its own benefits and drawbacks. Each axiom system is a window on the underlying mathematical object, but the object is distinct from any one of these systems.

PeterDonis said:
What I get from all this is that we should be very careful not to get overconfident about the "reality" of the objects that we pick out in our models. That doesn't mean our models are bad--after all, they make good predictions. Newtonian gravity makes good predictions within its domain of validity. But it does mean that the fact that a model makes good predictions should not be taken as a reliable indication that the entities in the model must be "real". One saying that expresses this is "all models are wrong but some are useful".

Agreed.
 
  • Like
Likes PeterDonis
  • #76
Jimster41 said:
Now I see you are talking about eq 5 in Crooks. And after a third read I follow the distinction between the forward path probability and the reverse path probability.

[tex]\frac{P\left[x(+t)\,|\,\lambda(+t)\right]}{P\left[\overline{x}(-t)\,|\,\overline{\lambda}(-t)\right]} = \exp\left\{-\beta Q\left[x(+t), \lambda(+t)\right]\right\}[/tex]

Yes, that's the "condition of microscopic reversibility" that I was talking about. I wish I knew where that came from. That is awesome because it is claimed to apply to non-equilibrium processes.

I don't think the forward and reverse paths are "reversible" in the thermodynamic sense of not producing any net entropy. I think in this context, "reversibility" is just referring to the fact (?) that at the lowest level, everything is reversible. That's what I understand Lochschmidt's paradox to be about--how do you get macroscopic irreversibility out of microscopic reversibility? Couldn't you just play the tape backwards without violating the laws of physics? I still don't have a good answer to that question.

So that equation gives quantitative form to the intuitive notion that even though some process can in principle go in both forward and reverse directions, you will more often see it go in the one that generates positive entropy in the surroundings. At least I think that's what it's saying. It sounds great, but how do they know that?
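One thing the ratio form does buy you immediately (this is just algebra, nothing specific to these papers): rearranging it gives ##P_F[+\omega]\, e^{-\omega} = P_R[-\omega]##, and summing over all values of ##\omega##,

[tex]\left\langle e^{-\omega} \right\rangle_F = \sum_{\omega} P_F[+\omega]\, e^{-\omega} = \sum_{\omega} P_R[-\omega] = 1[/tex]

Jensen's inequality then forces ##\langle \omega \rangle \ge 0##: trajectories with negative entropy production exist, but they are exponentially outweighed on average.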

Oh, the reason for the minus sign in front of the Q is that Crooks is using a different sign convention for heat. He is counting Q as heat absorbed from the surroundings, while England is taking Q to be heat rejected to the surroundings, if I recall correctly. That's how I was using it.
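(Spelling out the relabeling, with ##Q_{\rm abs}## and ##Q_{\rm rej}## as my own shorthand for the two conventions: ##Q_{\rm abs} = -Q_{\rm rej}##, so ##e^{-\beta Q_{\rm abs}} = e^{+\beta Q_{\rm rej}}## and the two papers' exponents agree.)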

EDIT: which Dennett book are you getting? I've read several and I haven't been disappointed.
 
Last edited:
  • #77
"Darwin's Dangerous Idea"
 
  • #78
techmologist said:
Yes, that's the "condition of microscopic reversibility" that I was talking about. I wish I knew where that came from. That is awesome because it is claimed to apply to non-equilibrium processes.

I don't think the forward and reverse paths are "reversible" in the thermodynamic sense of not producing any net entropy. I think in this context, "reversibility" is just referring to the fact (?) that at the lowest level, everything is reversible. That's what I understand Lochschmidt's paradox to be about--how do you get macroscopic irreversibility out of microscopic reversibility? Couldn't you just play the tape backwards without violating the laws of physics? I still don't have a good answer to that question.

So that equation gives quantitative form to the intuitive notion that even though some process can in principle go in both forward and reverse directions, you will more often see it go in the one that generates positive entropy in the surroundings. At least I think that's what it's saying. It sounds great, but how do they know that?

Oh, the reason for the minus sign in front of the Q is that Crooks is using a different sign convention for heat. He is counting Q as heat absorbed from the surroundings, while England is taking Q to be heat rejected to the surroundings, if I recall correctly. That's how I was using it.

EDIT: which Dennett book are you getting? I've read several and I haven't been disappointed.

Yeah, it bears a lot of thought... One minute I think I get it, then I'm not sure...

I took his argument to be something like:

  1. The vanilla fluctuation theorem applied to macroscopic states describes the probability of transitions between those macroscopic states as a signed real value, proportional to the relative frequency, in the total phase space, of states indistinguishable from the start and end states, and to the energy dissipated over the transition. Just good old entropy: observing that although macroscopic states are reversible, they have a probabilistic tendency to do some things rather than others.
  2. If you assume the microscopic domain of some controlled macroscopic transition is a stochastic Markovian one, and that the phase space distributions of state and control parameter are "the same" at the start and end of the state transition, then according to all available observables, they are reversible (I think this is his big point).
  3. Two types of systems obey the rules of indistinguishability at the start and end of a transition (and so reversibility): 1) a process traveling from equilibrium back to equilibrium, and 2) a system traveling from a non-equilibrium steady state back to the same non-equilibrium steady state. (This is also a big claim he's making that needs support, but I can't see any big flaw in it.)
I think he's kind of saying, "what's the difference between macroscopic and microscopic when identifying a reversible process?" Same rules apply. (This is all talking about classical systems.) And to me it all seems to make sense. I guess I buy it.
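To convince myself the single-step version of this isn't magic, I tried a minimal numerical sketch. This is my own toy model, not anything from Crooks' paper: a two-state system in contact with a bath, with Metropolis rates that satisfy detailed balance. For any trajectory, the conditional path-probability ratio should come out to ##e^{-\beta Q}##, with ##Q## the heat absorbed by the system along the forward path:

[code]
import numpy as np

# Toy model (my own, not from Crooks' paper): a two-state system with
# energies E[0], E[1], in contact with a bath at inverse temperature beta.
# Metropolis rates satisfy detailed balance, so for any trajectory the
# conditional path-probability ratio P[forward] / P[reverse] should equal
# exp(-beta * Q), where Q is the heat absorbed along the forward path.

beta = 1.0
E = np.array([0.0, 1.0])

# Transition matrix: attempt a flip with probability 1/2, accept with
# the Metropolis factor min(1, exp(-beta * dE)); rows sum to 1.
P = np.zeros((2, 2))
for i in range(2):
    j = 1 - i
    P[i, j] = 0.5 * min(1.0, np.exp(-beta * (E[j] - E[i])))
    P[i, i] = 1.0 - P[i, j]

rng = np.random.default_rng(0)

# Sample one forward trajectory of a few steps.
path = [0]
for _ in range(6):
    path.append(rng.choice(2, p=P[path[-1]]))

# Conditional probabilities of the forward path and its time reverse.
p_fwd = np.prod([P[a, b] for a, b in zip(path, path[1:])])
rev = path[::-1]
p_rev = np.prod([P[a, b] for a, b in zip(rev, rev[1:])])

# Heat absorbed from the bath = total energy change (the energy levels
# are held fixed, so no work is done on the system).
Q = E[path[-1]] - E[path[0]]

print(p_fwd / p_rev)       # these two numbers should agree
print(np.exp(-beta * Q))
[/code]

Every trajectory satisfies the relation exactly here, because detailed balance bakes it in step by step; the macroscopic arrow of time only shows up when you ask which trajectories are likely.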

The part that intrigues me is the path-work definition (equivalent to the entropy production) that applies scale and direction to these transition, or path, probabilities. This evokes the opening lines of Verlinde's paper on entropic gravity, where he uses the Unruh temperature and the example of polymer elasticity to argue that entropy is a force that does work. The question then, I think, is nicely set at the microscopic level: what is doing that work? What is the cause of the "force", which we call entropy, that does work?
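The polymer example has a compact formula behind it (standard stat mech, and I believe it's the one Verlinde's introduction leans on): for a system at temperature ##T## whose entropy depends on a coordinate ##x##, the entropic force is

[tex]F\,\Delta x = T\,\Delta S \quad\Longrightarrow\quad F = T\,\frac{\partial S}{\partial x}[/tex]

so the "work" is ultimately supplied by the bath's thermal kicks, channeled through the configuration count.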

And his (Crooks', that is) equation...

[itex]\omega = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) - \beta Q[x(+t), \lambda(+t)][/itex]

looked at now, down in the microscopic "path-work" context, is saying that the configuration terms (along with dissipation) are part of what entropy is. This sounds obvious, but here we are talking about microscopic system paths, not about macroscopic ensembles. What is it about one microscopic configuration path that makes it a path of less work? It does not seem remotely sufficient in this context, where we are defining the mechanics of entropy itself, to say it's because the path is "more probable". Rather, these are the terms that define that statement. The question here is why it is more probable, and how? It is because it requires, or is, a different amount of work. Configuration differences themselves contain and require work. Information is energy, or rather energy is information. This is just so... Verlinde.
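The one precise, standard version of "information is energy" that I know of (Landauer's bound; I'm assuming it's the right connection to draw here) is that erasing a single bit at temperature ##T## must dissipate at least

[tex]Q_{\min} = k_B T \ln 2[/tex]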

BTW I just started this https://www.amazon.com/dp/0786887214/?tag=pfamazon01-20 - Holy crap is it interesting.

[Edit] I'm not familiar with that paradox, but if I had to guess how you get macroscopic irreversibility, which is only probabilistic, from microscopic reversibility: it would be because whatever it is that is "choosing" some paths and not others, whatever it is that is assigning "cost" to microscopic paths, is distributable, assigning that work (unevenly) over the microscopic parts that make up the macroscopic ensembles. There are LQG-ish notions to this, I think.
 
Last edited:
  • #79
Jimster41 said:
"Darwin's Dangerous Idea"

You couldn't have picked a better place to start.

Jimster41 said:
If you assume the microscopic domain of some controlled macroscopic transition is a stochastic Markovian one, and that the phase space distributions of state and control parameter are "the same" at the start and end of the state transition, then according to all available observables, they are reversible (I think this is his big point)

I wasn't taking it to mean the start and end distributions were the same, just that the system starts in equilibrium and is then allowed to relax to equilibrium again after being driven for a finite time. Could be a different equilibrium state. Since the start and end states are both equilibrium states, you can meaningfully define [itex]\Delta F[/itex]. And then he was able to relate this to the work done on the system during the finite time it was driven.

I would write the equation but the procedure for using LaTeX has changed since I used it last. Have to get up to date.
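From memory (so double-check me), the equation I mean is the Jarzynski equality, which Crooks' theorem implies:

[tex]\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}[/tex]

where ##W## is the work done on the system during the finite-time protocol and the average is over many repetitions of that same protocol.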

Jimster41 said:
This evokes the opening lines of Verlinde's paper on Entropic Gravity where he uses the Unruh temperature and the example of polymer elasticity to claim that entropy is a force that does work. The question then, I think, is nicely set at the microscopic level, to wonder - what is doing that work? What is the cause of the "force" that does work, we call entropy?

Could be he is just talking about the way it appears in the thermodynamic potential (i.e. free energy):

G = U + pV - TS

or F = U - TS

A process that increases the internal entropy of a system decreases its thermodynamic potential, and that thermodynamic potential can be converted into work done on the environment. I haven't read Verlinde's paper, but it looks neat. Possibly a little over my head, but worth taking a look at.
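The standard quantitative statement (textbook thermodynamics, not specific to Verlinde): at constant temperature, the work a system can deliver is bounded by the drop in its free energy,

[tex]W_{\rm out} \le -\Delta F \quad (\text{constant } T, V), \qquad W_{\rm out} \le -\Delta G \quad (\text{constant } T, p)[/tex]

with the ##\Delta G## version counting only non-##pV## work.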

Jimster41 said:
looked at now down in the microscopic "path-work" context, is saying that the configuration terms (along with dissipation) are part of what entropy is. This sound obvious but here we are talking about microscopic system paths not about macroscopic ensembles.What is it about one microscopic configuration path that makes it a path of less work?

The system is in contact with a heat bath, so it is getting random bumps and jolts from outside. That can affect how much work it takes to drive it from one state to another. I might be missing your point.

Jimster41 said:
BTW I just started this https://www.amazon.com/dp/0786887214/?tag=pfamazon01-20 - Holy crap is it interesting.

Hey...now there's one. Anything by Strogatz is bound to be reliable. You don't have to worry that he's just some crank throwing around jargon. Thanks! I have so many new books for my reading list :) I will get to them "in the fulness of time", as I used to hear growing up.
 
  • #80
I think your observation is correct that it could be a "different" equilibrium. I'm a bit confused, to be honest.

I see that his precise claim is that the two groups of applications are both "odd under time reversal", which is clearly a technical concept, and one I don't quite feel I understand well enough. Reading again, I see he clarified it to just mean that the entropy production would be equal but opposite if run from the other direction. So I think you are more correct. I don't think it affects his claim that the transitions contain equal but opposite amounts of work? Do you?

I think the meaning is the same as in the thermodynamic potential. But what I was trying to convey earlier is that I find it most interesting that he is saying the path selection of the system does work, that it is a term in the total value of entropy. I know this is obvious at some level. We define entropy as a property of a state, in relationship to the frequency of states like it in the phase space of a system, and, more importantly, how likely those states are to occur over the time evolution of the system. But that is in some sense a post hoc observation used as a definition (part of why entropy is so slippery). What I think England is getting ready to talk about (I have only started his paper) is the way that path selection is a causal term of work production. This opens up types of path-selection dynamics that support "improbable structure"... which must be constructed, without violating the second law. Which is arguably what we have.

In other words, the way to read it is more like:

[itex]\ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) = \omega + \beta Q[x(+t), \lambda(+t)][/itex]

[itex]\ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau}) = \Delta\mathrm{path}_{x_{+\tau}}\\ \\ \Delta\mathrm{path}_{x_{+\tau}} = \omega + \beta Q[x(+t), \lambda(+t)][/itex]

In other words, here is an "entropic potential energy" that literally does work through path selection. The reason I care is that I'm interested in the idea (of Verlinde and others) that quantum mechanical gravity may be sort of configurationally specific: sensitive to, or varying through, configuration or "information"? This is, I think, what Verlinde is getting at with holographic entropic gravity.

And oh yeah, this is all over my head, but that doesn't stop me one bit (in the ensemble average anyway) :woot:. Actually, Verlinde's paper is pretty readable, at least the first bits. But it is conceptually a twistor. :confused: Pretty controversial, I think. But there is a lot going on on the Loop Quantum Gravity side that I am of a beer-betting mind is going to crack the mystery of entropy, at least in half.

I'm making a concerted effort to get better with LaTeX, because I want to understand the actual equations, straight from the source - not translations of them, or clarifications of translations.

This is probably all just me getting a better, or at least fuller, understanding of the subtleties of thermodynamics o_O
 
Last edited:
  • #81
Jimster41 said:
I don't think it affects his claim that the transitions contain equal but opposite amounts of work? Do you?

That sounds right to me. Crooks is just talking about pairs of processes, forward and reverse, where the reverse is the complete time-reversed version of the forward path. So the reverse path starts in the final state of the forward path, and ends in the initial state of the forward path. If the forward path releases heat Q to the bath, the reverse path absorbs Q from the bath. If it required work W from outside to drive the system along the forward path, then the reverse path does work W on its surroundings. All the quantities change sign in the reverse process.

The two types of scenarios he is talking about are 1) A system starts in equilibrium state A, is driven for a finite time, then relaxes to equilibrium state B, and 2) A system starts and ends in the same non-equilibrium, stationary state, and is driven in a time-symmetric way.

I still can't get my head around the "condition of microscopic reversibility". I need to learn some more statistical mechanics.

Jimster41 said:
In other words, here is an "entropic potential energy" that literally does work through path selection.

I'm unfamiliar with this stuff about path selection, which you have referred to several times. For example, I'm not sure what you're getting at here...

Jimster41 said:
I'm not familiar with that paradox, but if I had to guess how you get macroscopic irreversibility, which is only probabilistic, from microscopic reversibility: it would be because whatever it is that is "choosing" some paths and not others, whatever it is that is assigning "cost" to microscopic paths, is distributable, assigning that work (unevenly) over the microscopic parts that make up the macroscopic ensembles. There are LQG-ish notions to this, I think.

Can you explain it a little more? Oh yeah, I meant Loschmidt's paradox, not Lochschmidt. Ha ha.

Jimster41 said:
I'm making a concerted effort to get better with LaTeX, because I want to understand the actual equations, straight from the source - not translations of them, or clarifications of translations.

Yep, it's better to be able to have direct access to what's being said. When I come across something that looks important, like in a technical paper, I'm willing to put in some work to understand the math.

I think England is using the standard quantitative definition of fitness, the net growth rate g - δ (births minus deaths). So he is assuming replication as a given. Based on the article about him, I was thinking he was going to tell us why we should expect there to be things that replicate. Maybe I read it with wishful thinking. But with the assumption that things do replicate, he puts a lower bound on the amount of heat they must produce in the process. Then, making the plausible assumption that there is pressure on living organisms to get the most bang for their thermodynamic buck, i.e. to approach the bound, this bound can itself be thought of as a thermodynamic measure of fitness.
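Schematically, and from my memory of England's paper (so treat the exact form with suspicion), the bound for a replicator with growth rate ##g## and decay rate ##\delta## looks like

[tex]\beta \langle Q \rangle + \Delta S_{\rm int} \;\ge\; \ln\frac{g}{\delta}[/tex]

i.e., more net replication demands more heat dumped into the bath and/or more internal entropy produced.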
 