How Does the Universe Use Temperature Differences to Create Structures?

In summary, thermodynamics allows for processes that are maximally irreversible, such as the free expansion of a perfect gas. This allows for the creation of structures, such as the universe we live in, without requiring any work to be done.
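As a concrete illustration of that irreversibility, the entropy gained in a free (Joule) expansion follows directly from the ideal-gas law; here is a minimal Python sketch (the function name and the numbers are purely illustrative):

```python
import math

# Entropy change for free (Joule) expansion of an ideal gas:
# no work is done and no heat is exchanged, yet entropy rises because
# dS = n*R*ln(V_final / V_initial) depends only on the volumes.
R = 8.314  # gas constant, J/(mol K)

def free_expansion_entropy(n_moles, v_initial, v_final):
    """Entropy change (J/K) when an ideal gas expands freely."""
    return n_moles * R * math.log(v_final / v_initial)

# One mole doubling its volume gains R*ln(2) of entropy
# with zero work extracted: the process is maximally irreversible.
dS = free_expansion_entropy(1.0, 1.0, 2.0)
print(f"dS = {dS:.3f} J/K")
```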
  • #36
PeterDonis said:
It sort of does, in the sense that, as long as the universe keeps expanding, it can never reach thermal equilibrium. Another way to put it is, if the universe keeps expanding forever, there is no such thing as a state of "maximum entropy" for the universe as a whole.

Is the universe expanding at the expense of anything? The expansion means more gravitational potential, so is it coming from kinetic energy or something else? I am assuming that the total energy of the universe remains constant, if that is even relevant here.

The way Frautschi put it was that although entropy is still non-decreasing, as the second law requires, the maximum possible entropy is always increasing. That sounds wonderful but I don't really understand it. It also directly contradicts one of the formulations of the second law that I am accustomed to, namely that the energy available to do work is non-increasing. If what Frautschi says is true, you can have increasing entropy and increasing free energy, too.

Based on what you said above though, I do see that the universe will tend to a situation where there are many increasingly isolated systems that can't equilibrate with each other.
 
  • #37
techmologist said:
Is the universe expanding at the expense of anything?

No.

techmologist said:
I am assuming that the total energy of the universe remains constant

There isn't any well-defined "total energy of the universe". In general in a curved spacetime there is no way to define one; it can only be done in certain special cases. In the case of a spatially closed (i.e., finite) universe, there is a sense in which the total energy is zero (heuristically, positive energy due to matter and radiation is exactly canceled by negative gravitational potential energy); but for a spatially infinite universe, which ours is as best we can tell, even that doesn't work.

techmologist said:
It also directly contradicts one of the formulations of the second law that I am accustomed to, namely that the energy available to do work is non-increasing.

That formulation only works in those special cases where a "total energy" can be defined.

techmologist said:
I do see that the universe will tend to a situation where there are many increasingly isolated systems that can't equilibrate with each other.

If the universe's expansion continues to be dominated by dark energy, yes, that is what will happen.
 
  • #38
PeterDonis said:
There isn't any well-defined "total energy of the universe". In general in a curved spacetime there is no way to define one; it can only be done in certain special cases. In the case of a spatially closed (i.e., finite) universe, there is a sense in which the total energy is zero (heuristically, positive energy due to matter and radiation is exactly canceled by negative gravitational potential energy); but for a spatially infinite universe, which ours is as best we can tell, even that doesn't work.

That is mind blowing. Why does any science based on the conservation of energy work? Is it somehow locally true that energy is conserved?

The expansion part even makes the other classical formulations of the 2nd law awkward. Can you have a cyclic process in an expanding universe?
 
  • #39
techmologist said:
Is it somehow locally true that energy is conserved?

Yes, of course. The issue is purely with not having a well-defined notion of "total energy" for the universe.

Local energy conservation is just the law that, locally, energy cannot be created or destroyed. That is what prevents perpetual motion machines from working. But to translate this into a global law about "total energy", we have to add up the energy in all local regions of space at some instant of time. Hopefully you see the issue: "space" and "time" are relative. In a general curved spacetime, there is no well-defined, unique notion of "space" or "time". So there is no well-defined, unique way to add up all the energy in local regions to get a "total energy".
 
  • #40
PeterDonis said:
Yes, of course. The issue is purely with not having a well-defined notion of "total energy" for the universe.

Local energy conservation is just the law that, locally, energy cannot be created or destroyed. That is what prevents perpetual motion machines from working. But to translate this into a global law about "total energy", we have to add up the energy in all local regions of space at some instant of time. Hopefully you see the issue: "space" and "time" are relative. In a general curved spacetime, there is no well-defined, unique notion of "space" or "time". So there is no well-defined, unique way to add up all the energy in local regions to get a "total energy".
Gotcha. I made an elementary logic error, transforming "not (the total energy is conserved)" into "the total energy is not conserved" without noticing it. Big difference. Heuristics work every time, except for when they don't.

So it is just because there is no way to talk about what is going on everywhere in the universe right now. Because whether or not things in different places happen at the same time depends on your reference frame.

Is it okay to talk about total energy at the galaxy level or is that too big? Then you could apply energy conservation to say that the energy of the galaxy is decreasing according to how bright it is.
 
  • #41
techmologist said:
Is it okay to talk about total energy at the galaxy level or is that too big?

Any system that can be treated as an isolated system--a bunch of stuff surrounded by emptiness--can be given a well-defined total energy, at least as a good approximation. A planet, a star, a solar system, and a galaxy all can be treated reasonably well as isolated systems.

techmologist said:
Then you could apply energy conservation to say that the energy of the galaxy is decreasing according to how bright it is.

Yes--the energy carried away by radiation would be equal to the decrease in energy of the galaxy.
 
  • #42
PeterDonis said:
Any system that can be treated as an isolated system--a bunch of stuff surrounded by emptiness--can be given a well-defined total energy, at least as a good approximation. A planet, a star, a solar system, and a galaxy all can be treated reasonably well as isolated systems.

I am probably making that same logical error again, but is there a sense in which the universe is not isolated?
 
  • #43
There is if you're inclined to take multiverses and colliding membranes in higher dimensions seriously.
 
  • #44
techmologist said:
is there a sense in which the universe is not isolated?

More than that, there is no sense in which the universe is isolated; it is not a bunch of matter and energy surrounded by emptiness. Matter and energy is everywhere in the universe.
 
  • #45
rootone said:
There is if you're inclined to take multiverses and colliding membranes in higher dimensions seriously.

This is not the sense of "isolated" I am talking about. The technical term for what I've been calling "isolated" is "asymptotically flat". The universe is not asymptotically flat. That is a statement about our 4-dimensional universe, which is valid regardless of whether or not there are multiverses, colliding branes, etc.
 
  • #46
PeterDonis said:
More than that, there is no sense in which the universe is isolated; it is not a bunch of matter and energy surrounded by emptiness. Matter and energy is everywhere in the universe.

Ah, I was making the same error again. I need to name it so I will recognize it better. I'll call it the "Nothing-is-better-than-steak fallacy" (and hamburgers are better than nothing, so...).
 
  • #47
I'm going to go out on a limb and guess that the second law of thermodynamics is not directly applicable to the universe as a whole for the same reason, at least not when stated in a global form like "the total entropy is increasing". That version would only apply to parts of the universe that are approximately isolated. Perhaps the local, negative versions about what can't result from a cyclic process are the best, since they don't require that any extensive state function be defined for the universe.
 
  • #48
techmologist said:
I'm going to go out on a limb and guess that the second law of thermodynamics is not directly applicable to the universe as a whole for the same reason, at least not when stated in a global form like "the total entropy is increasing".

Actually, it's not entirely clear that this is true. Entropy can be defined in a way that's more general than the usual way (where it's linked to the definition of energy); the more general definition is that the entropy of a given state is the logarithm of the number of states that have the same macroscopic properties as the given state. But those macroscopic properties don't have to be extensive; for example, heuristically, if we consider the universe as a whole to be homogeneous and isotropic (i.e., ignoring all local variations in energy density, etc.), then we can describe it by its energy density, pressure, and curvature, which are intensive quantities, and we could say that its entropy is just the logarithm of the number of possible universes that have the same energy density, pressure, and curvature. (This is heuristic because we don't currently have a way of counting the "possible universes", but it illustrates the sort of thing that could in principle be done.)
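To make the counting definition concrete, here is a toy sketch in Python. The system of two-state "cells" is entirely made up (it is not a model of the universe); it just shows entropy as the log of the number of microstates compatible with a macrostate:

```python
import math

# Toy illustration of entropy as log(number of microstates per macrostate):
# N two-state "cells"; the macrostate is just the total number that are "up".
# (Hypothetical toy system -- not a model of the actual universe.)
def entropy(n_cells, n_up):
    """Boltzmann entropy S = ln(Omega), in units of k_B."""
    omega = math.comb(n_cells, n_up)  # microstates sharing this macrostate
    return math.log(omega)

N = 100
# The half-up macrostate has vastly more microstates than an ordered one:
print(entropy(N, 50))  # ~66.8 (in units of k_B)
print(entropy(N, 0))   # exactly 0: a unique, fully ordered microstate
```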
 
  • #49
PeterDonis said:
for example, heuristically, if we consider the universe as a whole to be homogeneous and isotropic (i.e., ignoring all local variations in energy density, etc.), then we can describe it by its energy density, pressure, and curvature, which are intensive quantities, and we could say that its entropy is just the logarithm of the number of possible universes that have the same energy density, pressure, and curvature. (This is heuristic because we don't currently have a way of counting the "possible universes", but it illustrates the sort of thing that could in principle be done.)

That is an attractive idea. So these intensive quantities would effectively be averages over the 4-dimensional manifold, right? There would be no taking account of any gradients (and associated flows) in this picture.

Since you brought up possible universes, is there anything to the claims of fine-tuning of this particular universe? Some of the more extreme claims are obvious b.s., but the one that says some fundamental constants must be extremely precise in order for galaxies and stars to form got my attention. That is a very serious claim. Even a secular humanoid such as myself has a hard time imagining any life forms without stars and planets. But I don't know about cosmology and can't evaluate the claim.
 
  • #50
techmologist said:
these intensive quantities would effectively be averages over the 4-dimensional manifold, right?

Correct.

techmologist said:
There would be no taking account of any gradients (and associated flows) in this picture.

More precisely, differences in gradients/flows would be different "microstates" (detailed states of the universe) that correspond to the same "macrostate" (average values of the intensive quantities).

techmologist said:
is there anything to the claims of fine-tuning of this particular universe?

This is still an open question, because, as I said before, we don't know how to count the "possible universes", so we don't know how to quantitatively estimate how "fine-tuned" our universe really is.
 
  • #51
PeterDonis said:
More precisely, differences in gradients/flows would be different "microstates" (detailed states of the universe) that correspond to the same "macrostate" (average values of the intensive quantities).

That makes sense.

PeterDonis said:
This is still an open question, because, as I said before, we don't know how to count the "possible universes", so we don't know how to quantitatively estimate how "fine-tuned" our universe really is.

Thanks. Just knowing that it is a real question helps. I couldn't tell if I was being conned. Some of the more vocal proponents of fine-tuning have motivations that are at best unrelated to scientific understanding.

In your opinion, is the question "why are there heat engines" a real question?
 
  • #52
techmologist said:
In your opinion, is the question "why are there heat engines" a real question?

Well, it's led to a real thread. :wink:

I think the answer is "sort of". It's certainly true that our local observation that there are heat engines must be consistent with what we know of the universe as a whole, so in that sense it's a real question.

But our concept of a "heat engine" is based on our concept of "useful work", and that's not really a physics concept; it depends on what we find to be "useful", so it's more of a subjective concept. Physically, something we call a "heat engine" is no different from any other system; it obeys all the same laws. It just happens to have an output that we consider "useful". So in that sense, "why are there heat engines" isn't a real question, or at least not a real physics question; it's a question about how we choose to describe certain portions of reality, not a question about the laws that govern reality.
 
  • #53
Thanks for making it a real thread, Peter! I'm trying to read G. Crooks' paper from 1999 on the fluctuation theorem. This is after realizing J. England sort of starts with that. Very interesting. He is generalizing the work done by a heat-bath-coupled classical system transitioning over a path in configuration space, whether the path exchanges heat with the bath or is isothermal but selects between microstates (I may be botching that) - so I got to thinking about your statement that "useful work" is observer dependent.
 
  • #54
PeterDonis said:
But our concept of a "heat engine" is based on our concept of "useful work", and that's not really a physics concept; it depends on what we find to be "useful", so it's more of a subjective concept. Physically, something we call a "heat engine" is no different from any other system; it obeys all the same laws. It just happens to have an output that we consider "useful". So in that sense, "why are there heat engines" isn't a real question, or at least not a real physics question; it's a question about how we choose to describe certain portions of reality, not a question about the laws that govern reality.

I think you're right that the "useful" in "useful" work is not strictly a physics concept. But what I have in mind is not completely subjective, either. I definitely do not mean only useful to humans. I would say "usefulness" has a certain objectivity in the context of organization. The "purpose" of any organization is simply to persist, to keep producing itself. How it does this depends on how it fits into a larger network of relations among organizations. This larger network of relations is itself a higher-order organization. Within the context of that higher-order organization, the organization performs a "function". But it is only performing this "function" because by doing so, it directs resources to itself and persists--produces itself, renews itself, repairs itself. So to an organization, "useful work" is self-repair.

As an economic example, a steel-producing firm performs an essential function as part of a larger economy. But the owners of the firm aren't doing it out of the goodness of their hearts, or patriotism, or whatever. To the extent they have an interest in the continuation of that business, they will consider "useful" any action that tends to grow the business, or at least maintain it. Actually it is more complicated than that, because in any modern firm of that type, management and labor also have their own interests, all pulling in somewhat different directions. So the organization, the firm, ends up "acting" as if it had a personality of its own, not identical to that of any of its constituents. Its actions are useful to the extent that they tend to keep that organization going.

At the physics level, useful work performed by a Benard cell is work that overcomes viscous drag, keeping the Benard cell from fizzling out. Similar things can be said of a thunderstorm or hurricane. These may or may not have some direct use to humans, but the usefulness referred to here is from the perspective of the organization itself.

I realize that the second law of thermodynamics doesn't explicitly refer to "engines" in the sense of "useful to somebody"--to power their car. It just says that if you have a system and two heat baths at different temperatures, it is possible to arrange a cyclic process with the result that thermal energy is absorbed from the hotter bath, some of which energy is used by the system to do work on its environment, and some of which is passed on as thermal energy to the colder bath. The second law is completely agnostic about whether such a work-producing cyclic process will ever happen. It only puts limits on what such a process could achieve, should it happen. That's where my question is coming from. Is it just a case of anything that can happen will happen?
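The limit referred to here is the Carnot bound on a cyclic process between two baths; a quick sketch (the bath temperatures and heat values are made-up numbers):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat drawn from the hot bath that any cyclic
    process can convert to work (temperatures in kelvin)."""
    if not 0 < t_cold < t_hot:
        raise ValueError("need t_hot > t_cold > 0")
    return 1.0 - t_cold / t_hot

# The second law caps what the cycle *could* do; it says nothing
# about whether any such cycle will actually run.
q_hot = 1000.0                         # heat drawn from hot bath, J
eta = carnot_efficiency(500.0, 300.0)  # = 0.4
w_max = eta * q_hot                    # at most 400 J of work
q_cold_min = q_hot - w_max             # at least 600 J dumped to cold bath
print(eta, w_max, q_cold_min)
```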

Jimster41 said:
Thanks for making it a real thread Peter!
Seconded. :)
 
  • #55
Jimster41 said:
Thanks for making it a real thread, Peter! I'm trying to read G. Crooks' paper from 1999 on the fluctuation theorem. This is after realizing J. England sort of starts with that. Very interesting. He is generalizing the work done by a heat-bath-coupled classical system transitioning over a path in configuration space, whether the path exchanges heat with the bath or is isothermal but selects between microstates (I may be botching that) - so I got to thinking about your statement that "useful work" is observer dependent.

Everything in those papers seems to hinge on the condition of microscopic reversibility relating the probability of a forward process to the probability of its reverse process.

P(A->B)/P(B->A) = e^(beta*Q)

where Q is the heat delivered to the surrounding bath during the forward process.

This idea is new to me. I am familiar with detailed balance, which applies at equilibrium, but this microscopic reversibility condition is claimed to apply away from equilibrium. How do they know that? Is there some way to see why it must be so?
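For a simple case I can at least verify it numerically. Assuming heat-bath (Glauber) transition probabilities for a made-up two-level system, detailed balance fixes each single-step ratio, and the ratios multiply along a path:

```python
import math

# Numerical check of the microscopic-reversibility condition
#   P(forward path) / P(reverse path) = exp(beta * Q),
# with Q the heat delivered to the bath, for a made-up two-level
# system stepping under heat-bath (Glauber) transition probabilities.
beta = 1.3                   # inverse bath temperature (toy units)
E = {"a": 0.0, "b": 0.7}     # made-up state energies
other = {"a": "b", "b": "a"}

def p(i, j):
    """Probability of stepping i -> j; flips obey detailed balance."""
    if i == j:
        return 1.0 - p(i, other[i])   # stay with the leftover probability
    return 1.0 / (1.0 + math.exp(beta * (E[j] - E[i])))

# Single-step ratios p(i,j)/p(j,i) = e^(beta*(E_i - E_j)) telescope
# along any path, leaving only the endpoint energies.
path = ["a", "b", "a", "a", "b"]
p_fwd = math.prod(p(x, y) for x, y in zip(path, path[1:]))
p_rev = math.prod(p(y, x) for x, y in zip(path, path[1:]))

Q = E[path[0]] - E[path[-1]]   # heat dumped to the bath over the path
print(p_fwd / p_rev, math.exp(beta * Q))  # the two values agree
```

Of course this only shows the relation holds for dynamics built to satisfy detailed balance step by step; it doesn't explain why real nonequilibrium dynamics should obey it, which is the part I'm asking about.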
 
  • #56
techmologist said:
I would say "usefulness" has a certain objectivity in the context of organization.

But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.

techmologist said:
At the physics level, useful work performed by a Benard cell is work that overcomes viscous drag, keeping the Benard cell from fizzling out. Similar things can be said of a thunderstorm or hurricane.

True, but again, it is not physics that picks out the Benard cell or the thunderstorm or hurricane; it's us. True, these systems are usually thought of as being "natural", whereas a refrigerator or an engine is thought of as "artificial"; but even those are distinctions made by us, not physics.

techmologist said:
The second law is completely agnostic about whether such a work-producing cyclic process will ever happen. It only puts limits on what such a process could achieve, should it happen. That's where my question is coming from. Is it just a case of anything that can happen will happen?

Not every possible work-producing process that could happen, actually does happen. Since the underlying microscopic physics is chaotic (i.e., it has a sensitive dependence on initial conditions), we really have no way of knowing what picks out which work-producing processes actually happen (except in the obvious cases where somebody deliberately arranged for a particular process to happen).
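(A standard toy illustration of that sensitivity, using the logistic map rather than any real physical system:)

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # initial difference of 1e-9
# After ~50 iterations the two trajectories have completely decorrelated,
# so long-range prediction of which process actually occurs is hopeless
# in practice, even though each step is perfectly deterministic.
print(abs(a[-1] - b[-1]))
```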
 
  • #57
PeterDonis said:
But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.

Right, there's no law of physics that says so. But who says physics is all there is? Everything that happens is founded in physics, in the sense that the underlying laws of physics provide the background for everything. But most things aren't objects of physics. Like algorithms, for example. At some level, it's physics that makes your graphing calculator work. But it isn't physics that makes it give you the right answer. The same physics governs a calculator that gives you the wrong answer.

And while it's true that we do pick out things of interest, we aren't totally at liberty to pick out just anything, or ignore just anything. Our minds organize around a real world that we find ourselves in. They have to or we wouldn't be here.
 
  • #58
techmologist said:
who says physics is all there is?

It isn't, but it's all that's on topic for this forum. :wink: If your question "why are there heat engines" wasn't a question about physics, then it's off topic. I was assuming it was a question about physics.
 
  • #59
techmologist said:
Our minds organize around a real world that we find ourselves in.

Quite true. But there's still a difference between our models of reality, and the reality that is being modeled.
 
  • #60
PeterDonis said:
But what counts as an "organization" is subjective. There's no law of physics that says what an "organization" is; it's just a particular piece of reality that someone picks out as being of interest.
Chaisson's breakdown of "complexity" as "energy flux density" is pretty objective isn't it?
 
  • #61
Jimster41 said:
Chaisson's breakdown of "complexity" as "energy flux density" is pretty objective isn't it?

I'm not familiar with Chaisson's work, so I can't really comment on it. But a definition of "complexity" in terms of some physical observable is not the same as picking out a system as a "heat engine" or an "organization" and separating it from the rest of reality. That's the part that is subjective.
 
  • #62
PeterDonis said:
I'm not familiar with Chaisson's work, so I can't really comment on it. But a definition of "complexity" in terms of some physical observable is not the same as picking out a system as a "heat engine" or an "organization" and separating it from the rest of reality. That's the part that is subjective.

I agree with that for the most part...

Maybe the fact that there are multiple ways of decomposing the same set of differentiable things we can see - as a "complex dissipative structure" or an "organized heat engine" - is because those terms are pure subjective projection, totally anthropocentric or personally subjective. Or it may be because everything we see, including ourselves, is one big "complex dissipative structure" or "organized heat engine", one with multiple kinds of symmetry. That seems like an equally consistent explanation, and better in some respects.

And I think it's hard to argue that it is completely subjective, which is why a physical observable such as "energy rate (or flux) density" - even if only roughly quantifiable (Chaisson takes great care to say this) - is available. He positions the term as having useful qualitative meaning over a very broad landscape of interest.
 
  • #63
PeterDonis said:
It isn't, but it's all that's on topic for this forum. :wink: If your question "why are there heat engines" wasn't a question about physics, then it's off topic. I was assuming it was a question about physics.

I couldn't very well post it in General Discussion, could I? They have very stringent guidelines there...

I would like to suggest two lines of thought. First, that some organizations really are more "physical" than others, and belong to physics if they belong to any discipline at all. Second, that most of the objects of physics that we take for granted don't meet the strict requirement of objectivity that you are using to rule out all organizations as objects of physical study.

To come back to the example of the Benard convection cells, there really is a physical reason for their organization, whether you want to call it organization or not. Some patterns in nature are objectively better than others at getting themselves amplified. Once the critical temperature difference is reached, the static, conducting configuration of the water in the dish is unstable. The solutions to the linearized approximate equation for the stream function are swirling modes. Any small perturbation has components of these modes, and they get amplified. For reasons I don't entirely understand, viscosity damps out the higher modes, leaving the first one. This can be used to predict the width and velocity distribution of the cells, but their exact configuration in the dish is random. This is all true regardless of whether there is somebody there to say "hey, look at that!". Benard cells form and maintain themselves in a pretty physical way.
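The mode-selection step can be caricatured with a toy dispersion relation in which a drive term competes with viscous damping that grows like k squared. This is not the real Rayleigh-Benard calculation (all numbers are made up); it just shows the qualitative mechanism by which the first mode wins:

```python
import math, random

# Toy linear-stability sketch (not the real Rayleigh-Benard dispersion
# relation): each perturbation mode k grows or decays as exp(sigma_k * t),
# with a constant drive opposed by viscous damping proportional to k^2.
nu = 0.05      # toy "viscosity"
drive = 0.12   # toy forcing, just above the threshold for the k = 1 mode

def sigma(k):
    return drive - nu * k * k   # growth rate of mode k

random.seed(0)
amps = {k: random.uniform(-1e-3, 1e-3) for k in range(1, 6)}  # tiny noise
t = 200.0
grown = {k: a * math.exp(sigma(k) * t) for k, a in amps.items()}
# Only k = 1 has sigma > 0; every higher mode is damped, so the first
# mode dominates the emerging pattern regardless of the details of the
# initial perturbation.
for k in sorted(grown):
    print(k, sigma(k), abs(grown[k]))
```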

Then there are atoms. Are atoms objective enough? Don't they also have to be picked out as things of interest? They represent solutions to the Schrodinger equation with a very idealized Hamiltonian, one that typically ignores the existence of most everything else in the universe. But this simplified picture helps us understand things like atomic spectra, so it is our explanation of what we see. The fact that we pick out things of interest doesn't make them arbitrary. There might be a good rationale for picking out certain things rather than others. Dan Dennett uses the expression, "carving nature at its joints," which I think he got from Plato.
 
  • Like
Likes Jimster41
  • #64
"Carving nature at it's joints" Love that. I have a book by Dennet On the way.

That G. Crooks paper just sent my head spinning. It clarifies a few things I feel I do understand and yet don't quite. I'd like to dive into the first few equations here, maybe relate them to the first page of England's paper. (If only tomorrow wasn't Monday).

My understanding of the Benard cells is consistent with the way @techmologist describes them. They represent a non-linear reconfiguration that allows a step change in convection efficiency, and there is the puzzle of what triggers the sudden change, what drives and constrains the re-configuration to become what it does, rather than something else.

The book "Why Stock Markets Crash: critical events in complex financial systems" by Didier Sornette (a geo-physicist, turned market analyst) really left an impression on me. Specifically w/respect to the role "re-normalization" under scaling operations and discrete scale invariance (power laws) play in emergence. Not physics per se, but believe it's relevant, in that it is the same general process mathematically (and so arguably at some level, to some dgree - a similar process "physically"). More strongly than that, I think it's an example of the symmetry and scale invariance of the "emergence" process in and of itself. Sornette proposes a "log periodic" model of approach to the critical points in the market price signal case. Interestingly the "condensate" past the critical point re-configuration is essentially a "stampede". The market becomes superconductive to fear. Generally, the market it is not well organized over short Time periods and large price ranges. Rather prices are stablized by disorganized individual responses to what is considered ambiguous market information.
 
  • Like
Likes techmologist
  • #65
Jimster41 said:
That G. Crooks paper just sent my head spinning. It clarifies a number of things I feel I do understand and don't quite understand. I'd like to dive into the first few equations here, maybe relate them to the first page of England's paper. (If only tomorrow wasn't Monday).

If I could understand how the condition of microscopic reversibility is arrived at, I think that goes part way toward answering my question. It actually talks about the relative probabilities of a process and its reverse in terms of the entropy produced in the surroundings. This is more than you can get from the SLOT, which doesn't talk about the probability or rate of any process.

I messed up the equation earlier. I should have written P(forward)/P(reverse) rather than P(A->B)/P(B->A), because it matters that it is the time reversed path. Detailed balance is where you only have to consider the initial and final state.

That Didier Sornette book sounds like a winner. He gets a mention in Per Bak's book, How Nature Works. He sounds like my kind of scientist. According to Bak, he generates all sorts of crazy ideas, and thus has a very low batting average. But it only takes one good one.
 
  • Like
Likes Jimster41
  • #66
techmologist said:
some organizations really are more "physical" than others, and belong to physics if they belong to any discipline at all.

...

The fact that we pick out things of interest doesn't make them arbitrary.

True. I'm just pointing out that our models of reality are not the same as reality.

Take your example of atoms. You correctly point out that our model of an atom is greatly oversimplified. But even in that oversimplified model, atoms have no boundaries; there is no sharp line where an atom "ends" and the rest of the universe "begins". Any such line we might pick out is arbitrary, even though the atom itself is not. And once atoms start interacting, forming molecules, forming crystals, forming metals, etc., the boundaries we draw get even more arbitrary, even in our oversimplified models.

techmologist said:
There might be a good rationale for picking out certain things rather than others.

Yes; the rationale is that we want to explain and predict things, and we need models to do that, and the models we have come up with that make good predictions require us to draw boundaries and pick out particular systems and interactions and ignore everything else. But is that because those models are really the best possible models, the ones that really do "carve nature at the joints"? (Btw, I think you're right that that phrase originated with Plato.) Or are they just the best models we have come up with thus far? Could there be other even better models, that we just haven't conceived of yet, that carve nature at different "joints"?

Before you answer "how could that happen?", think carefully, because that's exactly what did happen when we discovered many of our current models. Take GR as an example. In GR, gravity is not even a force; it's spacetime curvature. So many questions that a Newtonian physicist would want to ask about gravity aren't even well formed in GR--at least not if you look at the fundamentals of the theory. Of course we can build a model using GR in the weak field, slow motion limit and show that in that limit, Newton's description of gravity works well enough. But conceptually, GR carves gravity at very different "joints" than Newtonian physics does. The same thing might happen to GR when we have a theory of quantum gravity; we might find that theory carving nature at different "joints" yet again, and explaining why GR works so well within its domain of validity by deriving it in some limit.

What I get from all this is that we should be very careful not to get overconfident about the "reality" of the objects that we pick out in our models. That doesn't mean our models are bad--after all, they make good predictions. Newtonian gravity makes good predictions within its domain of validity. But it does mean that the fact that a model makes good predictions should not be taken as a reliable indication that the entities in the model must be "real". One saying that expresses this is "all models are wrong but some are useful".
 
  • #67
techmologist said:
If I could understand how the condition of microscopic reversibility is arrived at, I think that goes part way toward answering my question. It actually talks about the relative probabilities of a process and its reverse in terms of the entropy produced in the surroundings. This is more than you can get from the SLOT, which doesn't talk about the probability or rate of any process.

I messed up the equation earlier. I should have written P(forward)/P(reverse) rather than P(A->B)/P(B->A), because it matters that it is the time reversed path. Detailed balance is where you only have to consider the initial and final state.

That Didier Sornette book sounds like a winner. He gets a mention in Per Bak's book, How Nature Works. He sounds like my kind of scientist. According to Bak, he generates all sorts of crazy ideas, and thus has a very low batting average. But it only takes one good one.
Yeah, those equations. I'm looking at eq. (1) from G. Crooks, [itex]P(+\sigma)/P(-\sigma) = e^{+t\sigma}[/itex], etc. Just an exponential function of time, showing that positive entropy production is exponentially more likely. But I feel like thinking about it. Is it more interesting when read left to right or right to left? I like it read right to left. Suddenly I see that entropy is not fundamental. It is just a "quality" describing microscopic change, via a comparison of the likelihoods of any two events. The arrow of time. Of course I've heard that more or less before, and it's obvious mathematically, but it seems like we often describe entropy as a thing, something fundamental.

So forgetting entropy, in the context of this discussion, what term could be placed "equal" to the left of the left-hand side, to define "the difference between any two candidate events" some other way? Maybe "relative dissipative efficiency" would be one candidate. Or maybe "synchronistic identity" with some associated but separate events (entanglement?). Or minimization of space-time curvature (a la Verlinde) in the presence of "bulk pressure", "dark energy", "Lambda", etc.
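As a sanity check on that reading of eq. (1), here's a toy numerical illustration (my own construction, not from the paper, and the numbers are made up): a Gaussian distribution of the total entropy production [itex]\omega[/itex] satisfies the fluctuation theorem exactly when its variance is twice its mean, so the likelihood ratio of a fluctuation to its time reverse is just [itex]e^{\omega}[/itex]:

```python
import math

def gauss_pdf(x, mean, var):
    """Probability density of a normal distribution."""
    return math.exp(-(x - mean)**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

mean = 1.5        # mean entropy production over the run (dimensionless, made up)
var = 2.0 * mean  # a Gaussian obeys the fluctuation theorem only if var = 2 * mean

for w in (0.5, 1.0, 2.0):
    ratio = gauss_pdf(+w, mean, var) / gauss_pdf(-w, mean, var)
    print(f"w={w}: P(+w)/P(-w) = {ratio:.6f}, e^w = {math.exp(w):.6f}")
```

The algebra behind it: [itex]\ln\frac{P(+\omega)}{P(-\omega)} = \frac{(\omega+\mu)^2-(\omega-\mu)^2}{2\sigma^2} = \frac{4\mu\omega}{2\sigma^2}[/itex], which equals [itex]\omega[/itex] precisely when [itex]\sigma^2 = 2\mu[/itex]. So "comparison of likelihood of any two events" really is all the ratio form encodes.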

Per Bak. I was going to get Per Bak.
 
Last edited:
  • #68
Now I see you are talking about eq. (5) in Crooks. And after a third read I follow the distinction between path-independent probability and reverse-path probability.

[itex]\frac { P\left[ x\left( +t \right) |\lambda \left( +t \right) \right] }{ P\left[ \overline { x } \left( -t \right) |\overline { \lambda } \left( -t \right) \right] } =exp\left\{ -\beta Q\left[ x\left( +t \right) ,\lambda \left( +t \right) \right] \right\} [/itex]

I am confused a few paragraphs later by "Entropy change of the bath = [itex]-\beta Q[/itex]" (I thought it would be positive, though I am guessing it's negative because [itex]\beta[/itex] is an "inverse temperature"), and by the expression "odd under time reversal". I have a lame bucket I throw that into, labeled "matrix nomenclature, basically like a minus sign or a conjugate", but then later I think I missed something really important about "odd".

More dumb questions that betray my mathlessness. [itex]exp[/itex] just means "expectation value", right? I get confused about how interchangeable that term is with powers of [itex]e[/itex].
 
  • #69
Link to the paper: http://arxiv.org/abs/cond-mat/9901352
The Entropy Production Fluctuation Theorem and the Nonequilibrium Work Relation for Free Energy Differences
Gavin E. Crooks
(Submitted on 29 Jan 1999 (v1), last revised 29 Jul 1999 (this version, v4))
There are only a very few known relations in statistical dynamics that are valid for systems driven arbitrarily far-from-equilibrium. One of these is the fluctuation theorem, which places conditions on the entropy production probability distribution of nonequilibrium systems. Another recently discovered far-from-equilibrium expression relates nonequilibrium measurements of the work done on a system to equilibrium free energy differences. In this paper, we derive a generalized version of the fluctuation theorem for stochastic, microscopically reversible dynamics. Invoking this generalized theorem provides a succinct proof of the nonequilibrium work relation.

Then in eq. (6) he says

[itex]\omega \quad =\quad ln\rho \left( { x }_{ -\tau } \right) -ln\rho \left( { x }_{ +\tau } \right) -\beta Q\left[ x\left( +t \right) ,\lambda \left( +t \right) \right] [/itex]

which I understand as combining the entropy terms associated with an "isothermal" configuration change and the term associated with heat exchange. At some level this seems like a circularity, or a redundancy, or something, since it isn't clear to me that the entropy change due to heat exchange isn't the same thing/process as the entropy change due to an isothermal configuration change. But then maybe that's why it's fair to "add them up".
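If I spell out the bookkeeping (my own paraphrase, not from the paper, and assuming the usual weak system-bath coupling so the two entropies are separate ledgers), the additivity looks like:

```latex
% omega in eq. (6), split into system and bath pieces
\Delta s_{\text{sys}}  = \ln\rho(x_{-\tau}) - \ln\rho(x_{+\tau})
    \qquad \text{(change in the system's surprisal)}
\Delta s_{\text{bath}} = -\beta Q
    \qquad \text{(Clausius entropy; if } Q \text{ is heat absorbed by the system,}
    \text{ the bath gains } -Q\text{)}
\omega = \Delta s_{\text{sys}} + \Delta s_{\text{bath}}
```

Read that way there's no double-counting: one term lives entirely in the system's phase-space distribution, the other entirely in the bath, so adding them is just summing entropy changes over disjoint subsystems.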

Right after that he references the importance of "odd under time reversal" and I realize I'm pretty confused about what "odd" means. The following section seems pretty crucial, and I don't feel confident I am carrying all the implications of the setup forward into the parts after eq. (7). It seems like he's just claiming that "microscopic entropy production is symmetric under time reversal". At some level that seems simple (simple enough to suggest the possibility that I don't get it at all).

"This condition is equivalent to requiring that the final distribution of the forward process [itex]{ \rho }_{ F }\left( { x }_{ +\tau } \right) [/itex], is the same (after a time reversal) as the initial phase space distribution of the reverse process, [itex]{ \rho }_{ R }\left( { \overline { x } }_{ -\tau } \right) [/itex]... two broad types of work process that fulfill this condition. Either the system begins and ends in equilibrium, or the system begins and ends in the same time symmetric nonequilibrium steady state."
 
  • #70
Also, since I can't get Verlinde out of my head. I keep wondering about the relationship between Crooks' "work relation" and the Unruh Temperature/holographic principle invoked in his paper below.

http://arxiv.org/abs/1001.0785
On the Origin of Gravity and the Laws of Newton
Erik P. Verlinde
(Submitted on 6 Jan 2010)
Starting from first principles and general assumptions Newton's law of gravitation is shown to arise naturally and unavoidably in a theory in which space is emergent through a holographic scenario. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. A relativistic generalization of the presented arguments directly leads to the Einstein equations. When space is emergent even Newton's law of inertia needs to be explained. The equivalence principle leads us to conclude that it is actually this law of inertia whose origin is entropic.

These are both somewhat old papers at this point and there appears to be a lot of work discussing each respectively. But they both seem somewhat pivotal in separate threads - having generated a lot of discussion. Maybe someone is connecting them.
 