Understanding Entropy and the 2nd Law of Thermodynamics
Introduction
The second law of thermodynamics and the associated concept of entropy have been sources of confusion for thermodynamics students for centuries. The objective of the present development is to clear up much of this confusion. We begin by first briefly reviewing the first law of thermodynamics, to introduce in a precise way the concepts of thermodynamic equilibrium states, heat flow, mechanical energy flow (work), and reversible and irreversible process paths.
First Law of Thermodynamics
A thermodynamic equilibrium state of a system is defined as one in which the temperature and pressure are constant, and do not vary with either location within the system (i.e., spatially uniform temperature and pressure) or with time (i.e., temporally constant temperature and pressure).
Consider a closed system (no mass enters or exits) that, at the initial time [itex]t_i[/itex], is in an initial equilibrium state, with internal energy [itex]U_i[/itex], and, at a later time [itex]t_f[/itex], is in a new equilibrium state with internal energy [itex]U_f[/itex]. The transition from the initial equilibrium state to the final equilibrium state is brought about by imposing a time-dependent heat flow across the interface between the system and the surroundings, and a time-dependent rate of doing work at the interface between the system and the surroundings. Let [itex]\dot{q}(t)[/itex] represent the rate of heat addition across the interface at time t, and let [itex]\dot{w}(t)[/itex] represent the rate at which the system does work at the interface at time t. According to the first law (basically conservation of energy),
[tex]\Delta U=U_f-U_i=\int_{t_i}^{t_f}{(\dot{q}(t)-\dot{w}(t))dt}=Q-W[/tex]
where Q is the total amount of heat added and W is the total amount of work done by the system on the surroundings at the interface.
The time variation of [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] between the initial and final states uniquely defines the so-called process path. There is an infinite number of possible process paths that can take the system from the initial to the final equilibrium state. The only constraint is that Q-W must be the same for all of them.
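To see what these integrals look like in practice, here is a minimal numerical sketch (the particular functional forms chosen below for [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] are arbitrary assumptions, made up purely for illustration): it simply evaluates Q, W, and ΔU for one assumed process path.
[code]
# Minimal sketch (assumed process path, not a worked example from the article):
# numerically evaluate Q = ∫ q_dot dt, W = ∫ w_dot dt, and ΔU = Q - W.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)           # time grid from t_i = 0 to t_f = 10 (arbitrary units)
q_dot = 5.0 * np.exp(-t / 3.0)             # assumed rate of heat addition at the interface
w_dot = 2.0 * np.sin(np.pi * t / 10.0)     # assumed rate at which the system does work

def integrate(y, x):
    # trapezoid rule for ∫ y dx
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Q = integrate(q_dot, t)                    # total heat added over the path
W = integrate(w_dot, t)                    # total work done by the system over the path
delta_U = Q - W                            # first law: ΔU depends only on Q - W

print(f"Q = {Q:.3f}, W = {W:.3f}, ΔU = Q - W = {delta_U:.3f}")
[/code]
Any other process path between the same two equilibrium states would have to yield the same value of Q - W, even though Q and W individually would differ.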
A reversible process path is defined as one for which, at each instant of time along the path, the system is only slightly removed from being in thermodynamic equilibrium with its surroundings. So the path can be considered as a continuous sequence of thermodynamic equilibrium states. As such, the temperature and pressure throughout the system along the entire reversible process path are completely uniform spatially. To maintain these conditions, a reversible path must be carried out very slowly so that [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] are both very close to zero over the entire path.
An irreversible process path is typically characterized by rapid rates of heat transfer [itex]\dot{q}(t)[/itex] and work being done at the interface with the surroundings [itex]\dot{w}(t)[/itex]. This produces significant temperature and pressure gradients within the system (i.e., the pressure and temperature are not spatially uniform throughout), and thus, it is not possible to identify specific representative values for either the temperature or the pressure of the system (except at the initial and the final equilibrium states). However, the pressure ##P_{Int}(t)## and temperature ##T_{Int}(t)## at the interface can always be measured and controlled using the surroundings to impose whatever process path we desire. (This is equivalent to specifying the rate of heat flow and the rate of doing work at the interface [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex]).
Both for reversible and irreversible process paths, the rate at which the system does work on the surroundings is given by:
[tex]\dot{w}(t)=P_{Int}(t)\dot{V}(t)[/tex]
where, again, ##P_{Int}(t)## is the pressure at the interface with the surroundings, and where [itex]\dot{V}(t)[/itex] is the rate of change of system volume at time t.
If the process path is reversible, the pressure P throughout the system is uniform and thus matches the pressure at the interface, such that
[tex]P_{Int}(t)=P(t)\mbox{ (reversible process path only)}[/tex]
Therefore, in the case of a reversible process path, [tex]\dot{w}(t)=P(t)\dot{V}(t)\mbox{ (reversible process path only)}[/tex]
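As a quick numerical check of this result (again only a sketch, with assumed values for n, T, and the volumes), one can compare the quadrature of [itex]\int P\,dV[/itex] along a reversible isothermal ideal-gas expansion with the closed-form value [itex]nRT\ln(V_f/V_i)[/itex].
[code]
# Minimal sketch (assumed numbers, not from the article): for a reversible path,
# w_dot = P(t) * V_dot(t), so the total work is W = ∫ P dV.  For an ideal gas held
# at constant temperature, P = nRT/V and W = nRT ln(V_f/V_i); the quadrature below
# just checks that.
import numpy as np

n, R, T = 1.0, 8.314, 300.0      # moles, J/(mol K), K  (assumed values)
V_i, V_f = 0.010, 0.030          # initial and final volumes in m^3 (assumed)

V = np.linspace(V_i, V_f, 100001)
P = n * R * T / V                # ideal-gas pressure along the reversible path

W_numeric = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))   # trapezoid rule for ∫ P dV
W_exact = n * R * T * np.log(V_f / V_i)

print(f"W (numerical) = {W_numeric:.1f} J")
print(f"W (analytic)  = {W_exact:.1f} J")
[/code]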
This completes our discussion of the First Law of Thermodynamics.
Second Law of Thermodynamics
In the previous section, we focused on the infinite number of process paths that are capable of taking a closed thermodynamic system from an initial equilibrium state to a final equilibrium state. Each of these process paths is uniquely determined by specifying the heat transfer rate [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] as functions of time at the interface between the system and the surroundings. We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by the two integrals:
[tex]Q=\int_{t_i}^{t_f}{\dot{q}(t)dt}[/tex]
[tex]W=\int_{t_i}^{t_f}{\dot{w}(t)dt}[/tex]
In the present section, we will be introducing a third integral of this type (involving the heat transfer rate [itex]\dot{q}(t)[/itex]) to provide a basis for establishing a precise mathematical statement of the Second Law of Thermodynamics.
The discovery of the Second Law came about in the 19th century and involved contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated, wordy language, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical definition of the Second Law that avoids all the complicated rhetoric. The sad part about all this is that such a precise definition has existed all along. The definition was formulated by Clausius back in the 1800s.
(The following is a somewhat fictionalized account, designed to minimize the historical discussion, and focus more intently on the scientific findings.) Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:
[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}[/tex]
where ##T_{Int}(t)## is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing: For any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) are not arbitrary; instead, there is a unique upper bound to the value of the integral. Clausius also found that this observation is consistent with all the “word definitions” of the Second Law.
If there is an upper bound for this integral, this upper bound has to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as the change in a point function of state between those two states. Clausius named this point function Entropy.
But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, out of the infinite number of possible process paths, there exists a well-defined subset, each member of which gives the same maximum value for the integral. This subset consists of all the reversible process paths. Thus, to determine the change in entropy between two equilibrium states, one must first “dream up” a reversible path between the two states and then evaluate the integral over that path. Any other process path will give a value for the integral lower than the entropy change. (Note that the reversible process path used to determine the entropy change does not necessarily need to bear any resemblance to the actual process path. Thus, for example, if the actual process path were adiabatic, the reversible path would not need to be adiabatic.)
So, mathematically, we can now state the Second Law as follows:
[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/tex]
where [itex]\dot{q}_{rev}(t)[/itex] is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the system temperature at time t (which, for a reversible path, matches the temperature at the interface with the surroundings ##T_{Int}(t)##). This constitutes a precise mathematical statement of the Second Law of Thermodynamics. The relationship is referred to as the Clausius Inequality.
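As a simple numerical illustration of the inequality (a sketch with assumed numbers, not a case worked out above), one can compare the two sides for an isothermal ideal-gas expansion carried out once reversibly and once irreversibly against a constant interface pressure.
[code]
# Minimal sketch (assumed setup): a check of the Clausius inequality for one concrete
# case.  n moles of ideal gas expand isothermally from V_i to V_f.  Path 1 is reversible
# (P_int = nRT/V at every instant).  Path 2 is irreversible: the gas expands against a
# constant interface pressure equal to the final pressure, while the interface temperature
# is held at T.  For the isothermal ideal gas ΔU = 0, so Q = W on each path, and the path
# integral of q_dot/T_int reduces to Q/T.
import numpy as np

n, R, T = 1.0, 8.314, 300.0          # assumed values
V_i, V_f = 0.010, 0.030

# Entropy change, evaluated on the reversible path (this defines ΔS):
delta_S = n * R * np.log(V_f / V_i)          # = ∫ q_rev/T dt

# Path integral of q_dot/T_int for the irreversible path:
P_ext = n * R * T / V_f                      # constant interface pressure (assumption)
Q_irrev = P_ext * (V_f - V_i)                # heat absorbed = work done (ΔU = 0)
I_irrev = Q_irrev / T                        # ∫ q_dot/T_int dt, since T_int is constant

print(f"ΔS (reversible path)            = {delta_S:.3f} J/K")
print(f"∫ q_dot/T_int dt (irreversible) = {I_irrev:.3f} J/K  <= ΔS")
[/code]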
Paper of interest for further learning:
http://arxiv.org/abs/cond-mat/9901352
PhD Chemical Engineer
Retired after 35 years experience in industry
Physics Forums Mentor
Chet,
Is this too long for a comment?
Thank you for explaining the solution, for real materials, to my infinite entropy change
problem–maybe; I, as indicated, suspected something of the kind might be the explanation. Do
you know, however, that taking into account both the variation of heat capacity with temperature
and pressure, and also phase transitions, always leads to a finite value of ∫dq/T when integrated
between 0°K and a higher temperature? Do you care?

Actually, as a chemical engineer who worked on processes substantially above absolute zero, I have no interest in this whatsoever.
Maybe you are concerned only with
changes in entropy for processes operating between two non-zero temperatures. Did you use
entropy change calculations in your chemical engineering job? I know that they can be used in
some cases to indicate that a proposed process is impossible, by showing that it would involve
reduction in entropy of an isolated system, thus violating the second law. A well-known example
is the operation of a Carnot cycle heat engine with efficiency greater than that set by the
requirement that the reduction in entropy caused by the removal of thermal energy from the high
temperature heat bath must be accompanied by at least as great an increase in entropy caused
by the addition of thermal energy to the low temperature heat bath.

The concept of entropy figures substantially in the practical application of thermodynamics to chemical process engineering, but not in the qualitative way that you describe. Entropy is part of the definition of Gibbs free energy, which is essential to quantifying chemical reaction and phase equilibrium behavior in the design and operation of processes involving distillation, gas absorption, ion exchange, crystallization, liquid extraction, chemical reactors, etc.
One might think that the Δ(entropy) = ∫dq/T law should always give a finite Δ(entropy) for pure
ideal gases as well as real materials, but as I (simply) demonstrated, it doesn’t do so for ideal
gases with the lower temperature equal to 0°K, even though ideal gases would not experience
any variation of heat capacity with temperature or pressure, or any phase transitions. I recently
thought about this problem some more, stimulated by the PF discussion, and arrived at a (now
obvious to me) solution, or at least an explanation, that actually favors the thermodynamic
definition of entropy change, which involves infinite, or arbitrarily large, entropy change for
process (involving ideal gases) with starting temperatures at, or arbitrarily near, absolute zero,
over the statistical mechanical view, which requires any finite system to have only finite absolute
entropy at any temperature, including absolute zero, so gives only finite entropy change for
processes, for finite systems, operating between any two states at any two finite temperatures,
even when one is absolute zero. This solution or explanation will require quite a number of lines
to state. I hope that it is not so obvious a one that I am wasting your, my, and any other persons
who are reading this post’s time by going through it.

I personally have no interest in this, but other members might. Still, I caution you that Physics Forums encourages discussion only of mainstream theories, and specifically prohibits discussing personal theories. I ask you to start a new thread with what you want to cover (which seems tangential to the main focus of the present thread), possibly in the Beyond the Standard Model forum. You can then see whether anyone else has interest in this or whether it is just deemed a personal theory. I'm hoping that @DrDu and @DrClaude might help out with this.
For now, I think that the present thread has run its course, and I'm hereby closing it.
For pure materials, the ideal gas is only a model for real gas behavior above the melting point and at low reduced pressures, ##p/p_{critical}##. For real gases, the heat capacity is not constant, and varies with both temperature and pressure. So, the solution to your problem is, first of all, to take into account the temperature-dependence of the heat capacity (and pressure-dependence, if necessary). Secondly, real materials experience phase transitions, such as condensation, freezing, and changes in crystal structure (below the freezing point). So one needs to take into account the latent heat effects of these transitions in calculating the change in entropy. And, finally, before and after phase transitions, the heat capacity of the material can be very different (e.g., ice and liquid water).
Is this too long for a comment?
The basic reason that the statistical mechanical (SM) entropy (S) of a pure (classical) ideal gas
SYS in any equilibrium macrostate at 0°K or any temperature above that is zero or a finite
positive number, whereas its thermodynamic (THRM) entropy change between an equilibrium
macrostate of SYS at 0°K and one at any temperature above that is infinite, is that the SM
entropy of SYS in some equilibrium macrostate MAC is calculated using a discrete
approximation NA to the uncertainty of the exact state of SYS when in MAC–NA is the number
of microstates available to SYS when it is in MAC, with some mostly arbitrary definition of the
size in phase space of a microstate of SYS– whereas the THRM entropy change between two
equilibrium macrostates of SYS is calculated using the (multi-dimensional) area or volume in
phase space of the set of microstates available to SYS when in those macrostates, which can
be any positive real number (and for a macrostate at 0°K is 0). The details follow:
The state of an ideal gas SYS composed of N point-particles each of mass m which interact
only by elastic collision is specified by a point ##P_s## in 6N-dimensional phase space, 3N
coordinates of them for position and 3N of them for momentum. If the gas is in equilibrium,
confined to a cube 1 unit on a side, and has a thermal energy of E, SM and THRM both consider ##P_s## to be equally likely to be anywhere on the energy surface ES determined by E, which is the set of all points corresponding to SYS having a thermal energy of E, and the probability density of ##P_s## being at any point x is the same positive constant for each x ∈ ES, and 0 elsewhere. Since E is purely (random) kinetic energy, ##E = \sum_{i=1}^{N} p_i^2/2m##, where ##p_i## is the ith particle's momentum,
so this energy surface is the set of all points with position coordinates within the unit cube in the
position part of the phase space for SYS, and whose momentum coordinates are on the 3N-1
dimensional sphere MS in momentum space centered at the origin with radius r = √(2mE). The
area (or volume) where ##P_s## might be is proportional to the area A of MS, and ##A \propto r^{3N-1}##. The entropy S of SYS is proportional to ln(the area of phase space where ##P_s## might be), ##S \propto \ln(A)##, therefore ##S = c_1 + c_2\ln(E)##, and since ##E \propto T## by the equipartition theorem, ##S = c_1 + c_2[c_3 + \ln(T)]##. Thus ##dS/dE \propto dS/dT = c_2/T##, so ##dS/dE = c_4/T##, and choosing ##c_4## to be 1, ##dS = dE/T##. This shows the origin of your THRM dS law, for ideal gases (with dE = dq), which
you probably knew. SM approximates this law, adequately for high T and so large A, by dividing
phase space up into boxes with more-or-less arbitrary dimensions of position and momentum, and
replacing A by the number NA of boxes which contain at least one point of ES. This makes S a
function of T which is not even continuous, let alone differentiable, but for large T the jumps in NA,
and so in S, as a function of T are small enough compared to S to ignore, and the SM entropy
can approximately also follow the dS = dE/T law, and be about equal to the THRM entropy, for
suitable box dimensions. However, as T approaches 0°K, the divergence of the SM entropy from the
THRM entropy using these box dimensions becomes severe. As T decreases in steps by factors
of, say, D, the THRM entropy S decreases by some constant amount ln(D) per step, becoming
arbitrarily negative for low enough T, but with T never quite reaching 0°K by this process. For
T = 0°K, A = 0, so S = (some positive) const. x ln(A) = const. x ln(0) = minus infinity. Since the
energy surface ES must intersect at least one box of the SM partition of phase space, NA can never
go below 1, no matter how small T and so A become. Thus the SM entropy S can never go below
const. x ln(1) = 0. The THRM absolute entropy can be finite, except at T = 0, because, although Δ(S)
from a state of SYS whose T is arbitrarily close to 0°K to a state at a higher T can be arbitrarily
large (positive), S at the starting state can be negative enough that the resulting S for the state at
the higher temperature is some constant finite number, regardless of how near 0°K the starting
state is. For the SM entropy, a similar situation is not the case, since although the SM Δ(S) is
about as large as the THRM Δ(S), the SM S at the starting state can never be less than 0. The
temperature at which the SM entropy S gets stuck at 0, not being able to go lower for a lower T, is
not a basic feature of the laws of the universe. Making SYS bigger or making the boxes of the
SM partition of phase space smaller would result in the sticking temperature being lower, and
of course making SYS smaller or the boxes larger would raise the sticking temperature.
I have read somewhere (of course, maybe it was written by a low-temperature physicist) that the
amount of interesting phenomena for a system within a range of temperatures is proportional to
the ratio of the highest to the lowest temperature of that range, not to their difference. If so,
there would be as large an amount of such phenomena between .001°K and .01°K as between
100°K and 1000°K, but the usual SM entropy measure would show no entropy difference
between any two states of a very small system in the lower temperature range, but a non-zero
difference between different states in the upper range, so would be of no help in analyzing
processes in the lower range, even though of some help in the upper range (or if not, for a given
system, for these two temperature ranges, it would be so for some other two temperature ranges
each with a 10 to 1 temperature ratio). On the other hand, the THRM entropy measure would show
as much entropy difference (which would be non-zero) between states at the bottom and at the top
of the lower range as between states at the bottom and at the top of the upper range.
“[…] dq = C(dT), which you've used in evaluating such integrals, with C = the (constant) heat capacity, say at constant volume, of SYS, or dq = k(dT/2)x(the number of degrees of freedom of SYS), which is implied by the Equipartition Theorem, […]”

Even in phenomenological thermodynamics, the heat capacity C generically depends on temperature. The equipartition theorem is a theorem from classical mechanics. It is approximately applicable if the number of quanta in each degree of freedom is >>1. In solids, this leads to the well-known rule of Dulong-Petit, stating that the heat capacity per atom in a solid is approximately ##3k_\mathrm{B}##. At lower temperatures, the heat capacity decreases continuously as the degrees of freedom start to "freeze out", with the exception of the sound modes. This leads to the celebrated Debye expression for the heat capacity at low temperatures, ##C_V \propto T^3##.
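To make the consequence of that temperature dependence concrete, here is a small numerical sketch (the heat-capacity value and Debye temperature are arbitrary assumptions, not data for any real material): with a constant C, the integral ∫C/T dT from T0 up to a fixed upper temperature grows without bound as T0 → 0, while a Debye-like C ∝ T³ at low temperature gives a finite limit.
[code]
# Minimal sketch (illustrative numbers only): the entropy integral ∫ C(T)/T dT from a
# lower temperature T0 up to 300 K, for (a) a constant heat capacity and (b) a Debye-like
# low-temperature heat capacity C ∝ T^3 (crossing over to a constant above an assumed
# Debye temperature).
import numpy as np

def entropy_integral(C_of_T, T0, T1=300.0, npts=200001):
    T = np.linspace(T0, T1, npts)
    integrand = C_of_T(T) / T
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))   # trapezoid rule

C_const = lambda T: 25.0 * np.ones_like(T)                 # J/(mol K), constant (assumed)
theta_D = 200.0                                            # assumed Debye temperature, K
C_debye = lambda T: 25.0 * np.minimum((T / theta_D) ** 3, 1.0)

for T0 in (10.0, 1.0, 0.1, 0.01):
    print(f"T0 = {T0:6.2f} K:  constant C -> {entropy_integral(C_const, T0):8.1f} J/(mol K),"
          f"  C ~ T^3 -> {entropy_integral(C_debye, T0):6.1f} J/(mol K)")
[/code]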
“You did state somewhere that some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero….”

No, the third law was formulated by Walther Nernst. He also did not find that the absolute entropy at T=0 was 0. Rather, he found that the entropy of an ideal crystal becomes independent of all the other variables of the system (like p) in the limit T → 0. So entropy at T=0 is a constant, and this constant can conveniently be chosen to be 0.
Wow. Thank you for finally clarifying your question.
You are asking how the absolute entropy of a system can be determined. This is covered by the 3rd Law of Thermodynamics. I never mentioned the 3rd Law of Thermodynamics in my article. You indicated that, in my article, I said that "some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0." I never said this in my article or in any of my comments. If you think so, please point out where. My article only deals with relative changes in entropy from one thermodynamic equilibrium state to another.

Chet,
My statement about what you had said regarding 0 entropy at 0° Kelvin did not involve a direct
quote from you, using “ “, it involved an indirect quote, using the word “that”, and included a part
which I wasn’t attributing to you, the “some important person in thermodynamics, I don’t remember
who, so call him “X” (maybe it was Clausius)” –that was my comment about what you had said. I
admit it wasn't perfectly clear which parts were ones that I was saying that you had said, and which
were mine, but making such things completely unambiguous in the English language often, as with
what I intended to say in this case, requires overly long and awkward constructions. Also, I didn’t
say that you had made the 0 entropy at 0° K statement in your article; in fact, I thought that you
had made it while replying to a comment about your article, but after you stated in your email that
you hadn't said it in your article or in any of your comments, I looked back over them, and found
that it had occurred in a quote from INFO-MAN which you had included in one of your comments.
X in that quote was "Kelvin", not "Clausius". According to INFO-MAN, Kelvin had said that a pure
substance (mono-molecular?–fox26's question, not Kelvin’s) at absolute zero would have zero
entropy. Using "entropy" in the statistical mechanical sense, this statement attributed to Kelvin is
true (classically, not quantum mechanically).
Fine, but that brings up what may be a serious problem with the thermodynamics equation:
Δ(entropy) for a reversible process between equilibrium states A and B of a system SYS = the
integral of dq/T between A and B. If SYS is a pure gas in a closed container, and A is SYS at 0° K,
and the relation between dq and dT, which one must know to evaluate the integral, is either
dq = C(dT), which you've used in evaluating such integrals, with C = the (constant) heat capacity,
say at constant volume, of SYS, or dq = k(dT/2)x(the number of degrees of freedom of SYS), which
is implied by the Equipartition Theorem, then the integral of dq/T between A and B is [the integral,
between 0° K and the final temperature T1, of some non-zero constant P times dT/T] =
P[ln(T1) – ln(0)] = ∞ (infinity [for T1 > 0], but actually even then it might be better to regard the
integral as not defined). This problem isn’t solved by requiring the lower (starting) temperature
T0 to be non-zero, but allowing it to be anything above zero, because the integral between
T0 and any T1 > 0 can be made arbitrarily (finitely) large by making T0 some suitably small but
non-zero temperature. Thus, if (1), Kelvin’s sentence is true with “entropy” having the
thermodynamic as well as with it having the statistical mechanical meaning, (2), the Δ(entropy) =
∫dq/T law is true for thermodynamic as well as statistical mechanical entropy, and (3), a linear
relation between dq and dT holds, then the thermodynamic entropy for any (non-empty)
system in equilibrium and at any temperature T1 above absolute zero can’t be finite, even though
the statistical mechanical entropy for such a (finite) system can be made arbitrarily small by taking
T1 to be some suitable temperature > 0° K. Surely the thermodynamic entropy can’t be so different
from the statistical mechanical entropy that the conclusion of the previous sentence is true. The
problem's solution might be that the heat capacity C varies at low temperatures in such a way, for
example C ∝ √T, that the integral is finite, or that the Equipartition Theorem breaks down at low
temperatures, but at least for systems which are a gas composed of classical point particles
interacting, elastically, only when they collide, which is an ideal gas (never mind that they would
almost never collide), the Equipartition Theorem leads to, maybe is equivalent to, the Ideal Gas Law,
which can be mathematically shown to be true for such a gas, even down to absolute zero, and
experimentally breaks down severely, at low temperatures with real gases, only because of their
departures, including their being quantum mechanical, from the stated conditions. What is the
solution of this problem? Must thermodynamics give up the Δ(entropy) = ∫dq/T law as an exact, and
for low temperatures as even a nearly exact, law?
I asked general questions because those were what I was interested in, not a specific calculation. You mostly made general statements, instead of specific calculations, in your article and answers to replies, which often were themselves general. However, if you won't answer general questions from me, here's a specific one, even though it's a particular case of the first general question in the last paragraph of my last previous reply:
Suppose a closed (in your sense) system SYS in state S1 consists of a gas of one kilogram of hydrogen molecules in equilibrium at 400 degrees kelvin in a cubical container one meter on a side; I leave it to you to calculate its internal pressure approximately, if you wish, using the ideal gas law. How can its entropy be calculated? Integrating dq/T over the path of a reversible process going from some other state S2 of SYS to S1 can give the change of entropy Δentropy(S2,S1) caused by the process, and entropy(S1) = entropy(S2) + Δentropy(S2,S1), but what is entropy(S2), and how can that be determined by thermodynamic considerations alone, without invoking statistical mechanical ones? You did state somewhere that some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0, so the problem would be solved, except for the question of how X had determined that entropy(S2), or any other system at absolute zero, = 0, using only thermodynamic considerations. You wrote "determined", so I assume he didn't do this just by taking entropy(any system at absolute zero) = 0 as an additional law of thermodynamics, or part of the thermodynamic definition of "entropy", but instead calculated it. How? It can be done by statistical mechanical considerations (for the SM idea of entropy), but you presumably would want to do it by thermodynamics alone.
Huh??? From what you have written, I don't even really know whether we are disagreeing about anything. Are we?
By a specific problem, what I was asking for was not something general, such as systems you have only alluded to, but for a problem with actual numbers for temperatures, pressures, masses, volumes, forces, stresses, strains, etc. Do you think you can do that? If not, then we're done here. I'm on the verge of closing this thread.
Is this not possible (classically, ignoring the internal energy of atoms and molecules and the relativistic rest-mass equivalent E = mc^2 energy): Total internal energy E of a closed (in my sense) system, in the center of mass frame = mechanical (macroscopic, including macroscopic kinetic and internal potential energy) energy + thermal (microscopic kinetic) energy? That is what I meant and, when I wrote my first comment, thought you meant, by "mechanical" and "thermal" energy. (I used "heat", non-precisely, in parenthesis after "thermal" in my second comment to try to indicate the meaning of "thermal" just because you seemed, in your reply to my first comment, to think my "thermal energy" meant the total internal energy of the system, which of course it didn't.) My comment that the atmosphere of the earth would not be a system in equilibrium under my stated conditions, according to your definition of "thermodynamic equilibrium state", follows from your definition of that in the first sentence after the second bold subheading "First Law of Thermodynamics" in your article. I agreed, in the last sentence of the first paragraph of my second comment (this is my third comment) with your statements 3 and 4, of 5 total, in your reply to Khashishi ("and" in 3 should be "which"). Two other specific problems are stated in the second and last paragraph of my second comment.
Chet,
I didn't read your article before entering my post, just read your 5 listed points in reply to Khashishi, but a little while ago I did look at its first part, and found, of course, that what I called "thermal energy", q, is not, despite what you said, what you called the "internal energy", which according to your introduction includes both what I called thermal (heat) energy, but also mechanical energy, as it usually is meant to include.

Not so. Internal energy is a physical property of a material (independent of process path, heat and work), and heat and work depend on process path.
Also I found that you defined "equilibrium" so that even the atmosphere of earth at a uniform temperature and completely still would not be in equilibrium, because of the pressure variation with altitude.

Not correct. The atmosphere at a uniform temperature and completely still would be in equilibrium even with pressure variation. The form of the first law equation I gave, for simplicity, omitted the change in potential energy of the system.
The situation that I was concerned with is stated in the first sentence of my previous post. For my last statement of what your 5 points implied, instead of "entropy", I should have had "the integral of dq/T". The main point where I disagreed with you, apparently, is in the definition of "equilibrium", and so of "state" of the system. About the only thing you would consider to be a system in equilibrium is a sealed container with a gas, absolutely still, at uniform pressure and temperature, floating in space in free fall, with the state of the system, for a given composition and amount of gas, specified completely by its temperature and pressure, as DrDu said. Then its entropy, given the gas, is determined by that temperature and pressure, and I am willing to believe that Clausius did, by calculating many examples, almost show, except for paths involving such things as mechanical shock excitation of the gas, what you claimed he did show, for such a system.
However, defining entropy from just changes in entropy isn't possible; a starting point whose entropy is known is necessary. Can this be, say, empty space, with zero entropy (classically)? Also, if the second law, that the entropy of a closed system (by this I, and most other people, mean what you mean by an "isolated system") never (except extremely rarely, for macroscopic systems) decreases, is to have universal applicability, it must be possible to define "the entropy" of any system, even ones far from equilibrium, in your or more general senses of "equilibrium". How can this be done? In particular, why the entropy of the entire universe, or very large essentially closed portions of it, always increases, or at least never decreases, is now a fairly hot topic. Do you think this is a meaningful question?

I still don't understand what you are asking or saying. Why don't you define a specific problem that we can both solve together? Define a problem that you believe would illustrate what you are asking. Otherwise, I don't think I can help you, and we will just have to agree to disagree.
In "dq/T", does "q" stand for, as it did in my statistical mechanics courses, the thermal energy of the system in question, call it "SYS"?I stated very clearly in the article that q represents the heat flowing into the system across its boundary, from the surroundings to the system. The T in the equation is the temperature at this boundary.
What you are calling the thermal energy of the system, I would refer to as its internal energy. But, the internal energy is not what appears in the definition of the entropy change.
If so, then dq/T is dS, the change in entropy of SYS, so in, for example, a process such as slow compression of a gas in a cylinder by a piston, which we will call "SYS", which would be a reversible process, the environment external to SYS, which supplies the mechanical energy for compression, can have zero entropy change, so dq and so dS must be zero (assuming T>0), otherwise the entropy change of SYS together with the environment would be non-zero, so positive–it could hardly be negative–so the process wouldn't be reversible. The point of this is that you said that all reversible paths between the initial and final equilibrium states of a system give exactly the same (maximum) value of the integral of dq/T, which is the total entropy change of SYS together with the environment in our example, so all other paths between the initial and final states must give less than or equal to zero entropy change, and a change less than zero would violate the second law, so all paths must give zero entropy change, so no paths which increase the entropy can exist, which is obviously false. Was there a typo in both 3. and 4. in your article, and it should have been (minimum)? The only other likely possibility that I can think of right now is that your dq = my -dq.

I don't understand what you are trying to say here.
You have taken internal pressure times change in volume in the work equation with a positive convention. I am learning from YouTube lectures where they have taken work as external pressure times change in volume with a negative convention; in this way work done on the system becomes positive, but J M Smith takes work done by the system as positive. So can you please explain to me the logical reason behind these conventions, and also about internal and external pressures in the work equation?
Thanks

Some people take work done on the system by the surroundings as positive and some people take work done by the system on the surroundings as positive. Of course, this results in different signs for the work term in the expression of the first law. In engineering, we take work done by the system on the surroundings as positive. Chemists often take work done on the system by the surroundings as positive.
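Written out in the article's notation (with W the work done by the system on the surroundings, and ##W_{on}## just a label introduced here for the work done on the system by the surroundings), the two conventions amount to:
[tex]\Delta U = Q - W \quad \text{(work done by the system positive)} \qquad \text{or} \qquad \Delta U = Q + W_{on},\;\; W_{on} = -W \quad \text{(work done on the system positive)}[/tex]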
“This is a good explanation, but personally I feel like the classical description of thermodynamics which defines entropy as some maximum value of an integral should be deprecated in light of our increasing knowledge of physics. The statistical mechanics definition of entropy is far superior.”
That’s your opinion.
“The classical definition is inherently confusing because entropy is a state variable, yet it is defined in terms of paths. Each path gives you a different integral. Experimentally, how do you determine what the maximum is out of an infinite number of possible paths? If the system is opened up in a way such that more paths become available, can the entropy increase?”
Did you not read the article? I made it perfectly clear that:
1. Entropy is a function of state.
2. There are an infinite number of process paths between the initial and final equilibrium states of the system.
3. The integral of dq/T over all these possible paths has a maximum value, and is thus a function of state.
4. All reversible paths between the initial and final equilibrium states of the system give exactly the same (maximum) value of the integral, so you don’t need to evaluate all possible paths.
5. To get the change in entropy between the initial and final equilibrium states of the system, one needs only to conceive of a single convenient reversible path between the two states and integrate dq/T for that path.
Try using the statistical mechanical definition of entropy to calculate the change in entropy of a real gas between two thermodynamic equilibrium states.
Chet
“I think your article may be helpful to students, but it would be good to put some more disclaimers in places where it simplifies a lot.
For example, you wrote
‘The time variation of ##\dot{q}(t)## and ##\dot{w}(t)## between the initial and final states uniquely defines the so-called process path.’
I think this is true for a simple system whose thermodynamic state is determined by two numbers, say entropy and internal energy. But there may be more complicated situations, when one has magnetic and electric work in addition to volume work, and then the two numbers ##\dot{q}## and ##\dot{w}## are not sufficient to determine the path through the state space.”
Thanks Jano L.
I toyed with the idea of mentioning that there are other forms of work that might need to be considered also, but in the end made the judgement call not to. You read my motivation for the article in some of my responses to the comments and in the article itself. I just wanted to include the bare minimum to give the students what they needed to do most of their homework problems. I felt that, if I made the article too long and comprehensive, they would stop reading before they had a chance to benefit from the article. There are many other things that I might have included as well, such as the more general form of the first law, which also includes changes in kinetic and potential energy of the system.
I invite you to consider writing a supplement to my article in which you flesh things out more completely. Thanks for your comment.
Chet
“This is a good explanation, but personally I feel like the classical description of thermodynamics which defines entropy as some maximum value of an integral should be deprecated in light of our increasing knowledge of physics. The statistical mechanics definition of entropy is far superior. The classical definition is inherently confusing because entropy is a state variable, yet it is defined in terms of paths. Each path gives you a different integral. Experimentally, how do you determine what the maximum is out of an infinite number of possible paths? If the system is opened up in a way such that more paths become available, can the entropy increase?
The statistical mechanics definition (and the related information theory definition) makes it clear why it is a state variable, because it only depends on the states. The paths are irrelevant.”
There is no one entropy to be defined by some optimal definition. In thermodynamics, (Clausius) entropy is defined through integrals. There is nothing confusing about it; to understand entropy in thermodynamics, the paths and integrals are necessary. It is hardly a disadvantage of the definition, since processes and integrals are very important things to understand while learning thermodynamics.
In statistical physics, there are several concepts that are also called entropy, but none of these is the same concept as Clausius entropy. *Sometimes* the statistical concept has a functional dependence on the macroscopic variables similar to that of thermodynamic entropy. But it is not the same concept as thermodynamic entropy. Any use of statistical physics for the explanation of thermodynamics is based on the *assumption* that statistical physics applies to thermodynamic systems. It does not replace thermodynamics in any way.
“Should read “mathematically precise”.”
In my judgement, this is sufficiently precise mathematically to address the students’ needs at their introductory level (i.e., giving them the ability to understand and do their homework). As with any subject, additional refinement can be introduced at a later stage. For example, when we first learn about heat capacity, we are told that it is defined in terms of the path-dependent heat flow Q = C ΔT, but later learn that, more precisely, heat capacity is a function of state (not path), defined in terms of the partial derivatives of internal energy U or enthalpy H with respect to temperature. In my opinion, my judgement call is a considerably less blatant use of literary license than this.
Chet
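For reference, the state-function definitions of the heat capacities alluded to above are the standard partial-derivative ones:
[tex]C_V=\left(\frac{\partial U}{\partial T}\right)_V,\qquad C_P=\left(\frac{\partial H}{\partial T}\right)_P[/tex]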
“I fear that one may get from this article the impression that the concept of entropy can only be introduced under very restricting assumptions. Here are some rhetorical questions: Are the systems for which we can introduce entropy really restricted to those describable in terms of only T and P? How about chemical processes or magnetization? Can work only enter through the boundaries? How about warming a glass of milk in the microwave, then? In this case, the pressure is constant, but we can’t assign a unique temperature to the system, not even at the boundary, as the distribution of energy over the internal states of the molecules is out of equilibrium.
This already shows that the Clausius inequality is of restricted value, as the integrals aren’t defined for most of the irreversible processes.”
Thanks for your comment DrDu.
I fear that, perhaps, you have not read all my responses to the comments that have been made so far. I tried to make it clear what my motivation was for writing this article, particularly with regard to its limited scope. See, in particular, posts #12 and #31. To reiterate: My target audience was beginning thermodynamics students who are being exposed to the 1st and 2nd laws for the first time, but who, due to the poor manner in which the material is presented in most texts and courses, are so confused that they are unable to do their homework. I tried to keep the article “short and sweet” so that the students would not lose interest and stop reading before they reached the end. I did not want the article to be a complete treatise on thermodynamics. If you would like to expand on what I have written, you are welcome to write an Insights article of your own. I’m sure it would be very well received.
Along these same lines, I might mention that I am currently preparing another Insights article that focuses on the work done in reversible versus irreversible gas expansion/compression processes, and quantitatively identifies the fundamental mechanism by which the work in the two situations differs.
“In fact, we don’t need to break our heads about the complicated structure of non-equilibrium states. The point is that we can calculate entropy integrating over a sequence of equilibrium states. It plays no role whether we can approximate this integral by an actual quasistatic process.”
I thought that I had covered this in my article when I wrote: “Note that the reversible process path used to determine the entropy change does not necessarily need to bear any resemblance to the actual process path.”
Chet
“I have always thought that a reversible process would give the minimum change in entropy, i.e. a lower bound for the integral. Is it not that the higher the entropy change the more energy it’s dissipating from irreversibilities? In other words, why exactly is the integrand always less than or equal to and not greater than or equal to?”
This is discussed in posts #28 and 29. As I clearly said in my article, I am referring to the entropy of a (closed) system, not to the entropy of the combination of system and surroundings (which constitutes an isolated system). Have you never seen the Clausius Inequality in the form that I presented it before?
All you need to do to convince yourself that what I said is correct is do a few sample problems for irreversible processes, where you compare the integral of the heat flow at the boundary divided by the temperature at the boundary with the entropy change of the system between the same two initial and final equilibrium states. In an isolated system, the integral is zero (since no heat is passing across the boundary of an isolated system), but the entropy change is greater than zero (for an irreversible change).
Chet
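As a concrete illustration of this comparison, here is a minimal numerical sketch (assuming one mole of ideal gas undergoing an adiabatic free expansion inside an isolated container): no heat crosses the boundary, so the Clausius integral over the actual path is zero, while the entropy change, evaluated along a reversible isothermal path between the same two equilibrium states, is positive.

```python
import math

R = 8.314          # J/(mol K), gas constant
n = 1.0            # mol of ideal gas (illustrative value)
V1, V2 = 1.0, 2.0  # m^3: gas expands freely into an evacuated half of the box

# Clausius integral over the actual (irreversible) path: the container is
# isolated, so q_dot = 0 at the boundary and the integral vanishes.
clausius_integral = 0.0

# Entropy change of the gas, computed along a reversible isothermal expansion
# between the same two equilibrium states (T is unchanged in a free expansion
# of an ideal gas, since no work is done and no heat is exchanged).
delta_S = n * R * math.log(V2 / V1)   # J/K, about 5.76 J/K here

print(f"integral of dq/T_Int over the actual path = {clausius_integral} J/K")
print(f"Delta S of the gas                        = {delta_S:.2f} J/K")
# The integral (0) is strictly less than Delta S, as the Clausius inequality
# for a closed system requires.
```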
“This question goes beyond the scope of what I was trying to cover in my article. It involves the thermodynamics of mixtures. I’m trying to decide whether to answer this in the present Comments or write an introductory Insight article on the subject. I need time to think about what I want to do. Meanwhile, I can tell you that there is an entropy increase for the change that you described and that the entropy change can be worked out using the integral of ##dq_{rev}/T##.
I cannot understand the symbols in the integral.”
I don’t understand what you are saying here. There are some basic concepts that need to be developed to describe mixtures. The starting point for mixtures goes back again to ideal gases, and discusses the thermodynamic properties of entropy, enthalpy, free energy, and volume of ideal gas mixtures, based on “Gibbs Theorem.” This development enables you to determine the change in entropy in going from two pure gases, each at a certain pressure, to a mixture of the two gases at the same pressure. The partial pressures of the gases in the mixture are lower than their original pressures, so, to get to their final partial pressures, you would have to increase their volumes reversibly at constant temperature. This would give rise to an entropy increase for each of the gases. I’m uncomfortable going into it in more detail than this, because, to do it right, you need more extensive discussion. If you want to find out more about this, see Chapter 10 of Introduction to Chemical Engineering Thermodynamics by Smith and van Ness.
Chet
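As a rough numerical sketch of the mixing calculation described above (assuming one mole each of two ideal gases, both initially at the same pressure P and temperature T, mixed at constant T and total pressure P): each gas effectively expands isothermally and reversibly from P down to its partial pressure, and the individual entropy changes add.

```python
import math

R = 8.314            # J/(mol K)
n_A, n_B = 1.0, 1.0  # mol of each pure ideal gas (illustrative values)

n_total = n_A + n_B
x_A = n_A / n_total  # mole fractions in the final mixture
x_B = n_B / n_total

# Each gas goes from pressure P to its partial pressure x_i*P at constant T,
# equivalent to a reversible isothermal expansion, so dS_i = -n_i*R*ln(x_i) > 0.
dS_A = -n_A * R * math.log(x_A)
dS_B = -n_B * R * math.log(x_B)
dS_mix = dS_A + dS_B

# Same result written in the usual "Gibbs theorem" form:
dS_mix_formula = -n_total * R * (x_A * math.log(x_A) + x_B * math.log(x_B))

print(f"Delta S_mix = {dS_mix:.2f} J/K  (formula: {dS_mix_formula:.2f} J/K)")
# For an equimolar mixture this is 2*R*ln(2), about 11.5 J/K, even though the
# enthalpy change of mixing for ideal gases is zero.
```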
“I have two questions about closed systems. Consider two closed systems, both have a chemical reaction area which releases a small amount of heat and are initially at the freezing point. One has water and no ice and the other has ice. I expect after the chemical reaction the water system will absorb the heat with a tiny change in temperature and the other will convert a small amount of ice to water. Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero. Is there any difference?
I don’t have a clear idea of what this question is about. Let me try to articulate my understanding, and you can then correct it. You have an isolated system containing an exothermic chemical reaction vessel in contact with a cold bath. In one case, the bath contains ice floating in water at 0 C. In the other case, the bath contains only water at 0 C. Is there a difference in the energy transferred from the reaction vessel to the bath in the two cases? How is this affected if the amount of water in the second case is increased? (Are you also asking about the entropy changes in these cases?)
Using identical reaction vessels the energy transferred is set the same. The question is about the entropy change. Will heating water a tiny delta T or melting ice result in the same entropy change?”
Yes, provided the delta T of the water is virtually zero. Otherwise, the final reactor temperature will not be the same in the two cases. So the reactor would have a different entropy change.
Chet
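In other words (a small check, assuming the same heat Q enters each bath at essentially 0 °C): the ice bath gains entropy Q/273 K directly, while the water bath gains
[tex]\Delta S_{water}=mc\ln{\frac{T_0+\Delta T}{T_0}}\approx \frac{mc\,\Delta T}{T_0}=\frac{Q}{T_0}[/tex]
which approaches the same value Q/273 K as ΔT → 0.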
“To make sure I understand the end of your article clearly:
(1) A system can go from an initial equilibrium state to a final equilibrium state through a reversible or irreversible process.
(2) Whichever process it undergoes, its change in entropy will be the same.
(3) That change in entropy can be determined by evaluating the following integral over any reversible process path the system could have gone through: [itex]\int_{t_i}^{t_f}{\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/itex].
(4) If the system goes through an irreversible process path, this integral: [itex]\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}[/itex] will yield a lesser value than the reversible path integral, but the change in entropy would still be equal to the (greater) evaluation of the reversible path integral.
Is that right?”
Perfect.
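A quick numerical check of points (3) and (4), as a sketch with assumed numbers: take 1 kg of water brought from 300 K to 350 K by direct contact with a single reservoir at 350 K, so that the temperature at the interface is essentially 350 K throughout the (irreversible) process.

```python
import math

m  = 1.0     # kg of water (illustrative)
c  = 4186.0  # J/(kg K), specific heat of liquid water, treated as constant
T1, T2 = 300.0, 350.0  # K

Q = m * c * (T2 - T1)  # heat absorbed by the water

# (4) Actual, irreversible path: the interface temperature is the reservoir
# temperature T2 for the whole process, so the integral is just Q/T2.
irrev_integral = Q / T2

# (3) Entropy change, evaluated along a reversible path in which the water
# is heated gradually (dq_rev = m*c*dT delivered at the water temperature T).
delta_S = m * c * math.log(T2 / T1)

print(f"integral of dq/T_Int (irreversible path) = {irrev_integral:.0f} J/K")
print(f"Delta S (reversible-path integral)       = {delta_S:.0f} J/K")
# Roughly 598 J/K versus 645 J/K: the irreversible-path integral is the
# smaller of the two, consistent with (4).
```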
“I have another problem with entropy. Some folks say it involves information. I have maintained that only energy is involved. Consider a system containing two gases. The atoms are identical except half are red and the other half are blue. Initially the red and blue are separated by a card in the center of the container. The card is removed and the atoms mix. How can there be a change in entropy?”
This question goes beyond the scope of what I was trying to cover in my article. It involves the thermodynamics of mixtures. I’m trying to decide whether to answer this in the present Comments or write an introductory Insight article on the subject. I need time to think about what I want to do. Meanwhile, I can tell you that there is an entropy increase for the change that you described and that the entropy change can be worked out using the integral of ##dq_{rev}/T##.
“Oh, one more please. Can you show an example where the entropy change is negative like you were saying?”
This one is easy. Just consider a closed system in which you bring about the isothermal reversible compression of an ideal gas, so that the final temperature is equal to the initial temperature, the final volume is less than the initial volume, and the final pressure is higher than the initial pressure.
Chet
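For the record, the short calculation behind that example (assuming an ideal gas, so the internal energy depends only on temperature) is:
[tex]\Delta U=0\ \ \Rightarrow\ \ Q_{rev}=W_{rev}=\int_{V_i}^{V_f}{P\,dV}=nRT\ln{\frac{V_f}{V_i}}[/tex]
[tex]\Delta S=\frac{Q_{rev}}{T}=nR\ln{\frac{V_f}{V_i}}<0\ \ \text{since}\ V_f<V_i[/tex]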
“I thought it was a lower bound, not an upper bound and the lower bound is zero. If you go from one state to another with a perfectly reversible process then the entropy generated is zero.”
Nope. For a closed system, the entropy change of the system is an upper bound on the integral of dq/T_Int over the possible process paths between the two equilibrium states, and the reversible paths are the ones that attain that bound. And for a perfectly reversible process, the entropy change for the system (which is clearly stated as the focus of my article) is not necessarily zero; in fact, it can even be less than zero.
For the combination of system and surroundings, the entropy generated in a reversible process is zero, provided that the surroundings are also handled reversibly.
Chet
“thanks Chet, so ∫ is more complicated than it looks :smile:”
Hopefully not to those who have had calculus.:smile:
“It is an integral sign. Apparently, you haven’t had calculus yet. You are not going to be able to understand and apply much of thermodynamics without the basic tool of calculus.
Brackets are a kind of parenthesis.
Chet”
“What does the ∫ symbol in the thermodynamics equations mean?”
It is an integral sign. Apparently, you haven’t had calculus yet. You are not going to be able to understand and apply much of thermodynamics without the basic tool of calculus.
“(i know what + – * / mean :) Oh and brackets)”
Brackets are a kind of parenthesis.
Chet
Thanks Chet. Didn’t mean to take you off track. Just trying to leverage my confusion opportunity, to figure out what I’m getting wrong when I think of it. Maybe your next one. Or I could take it to a new thread?
“I realize why the quantum states question confuses me. Probably it is an issue of specific terms.
If I picture the piston and cylinder made of graph paper cells, containing 1’s and 0’s, with the volume of the uncompressed cylinder as an area of 0’s mixed with 1’s (representing the uncompressed gas in the cylinder), this area then surrounded by some more 1’s representing the boundaries of the cylinder, including the piston. If I then compress the gas, by changing some of the cylinder volume cells to 1’s, I haven’t changed the number of states in the system (the graph paper hasn’t shrunk or lost cells); I have just added information assigning some of the cells of the cylinder volume with specific values. So I guess by “available QM states” you mean those that are uncertain, or “free” to be randomly set to 1 or 0.
Maybe it’s a bad metaphor, because I get even more confused when I think that to expand the “cylinder” I still have to add information, changing a set of “fixed cells” to be “free”.”
My goal was to emphasize the classical approach to entropy in my development, and to generally skip the statistical thermodynamic perspective.
Chet
“So you’re saying there is a temperature gradient between the “bulk” system and the interface …”
Yes. With an irreversible process, there is a temperature difference between the average temperature in the system and the temperature at the interface. However, at the very interface, the local system temperature matches the local surroundings temperature.
“… but no temperature gradient between the “bulk” surroundings and the interface?”
Not necessarily. I’ve tried to get us focused primarily on the system. I’m assuming that we are not concerning ourselves with the details of what is happening within the surroundings, except at the interface, where we are assuming that either the heat flux or the temperature is specified. (Of course, more complicated boundary conditions can also be imposed, and are included within the framework of our methodology). Thus, the “boundary conditions” for work and heat flow on the system are applied at the interface.
Chet
“I find your “temperature at the interface with the surroundings” confusing in that to me it implies an average temperature between the system and the surroundings at that point. Would it be more clear to say “temperature of the surroundings at the interface” or am I missing something?”
What you’re missing is that, at the interface, the local temperature of the system matches the temperature of the surroundings. There is no discontinuity in temperature (or in force per unit area) at the interface. However, in an irreversible process, the temperature within the system varies with distance from the interface.
Chet
“No sir, I was not clear on that precise difference of terms! Now I am. I believe you need to remove heat. Hmm, the quantum states. That one really makes me think, with great confusion, which is not good, since the answer should probably be obvious. In the closed system that has been isothermally compressed (heat removed), I would say the number of states is fewer? But it’s basically a guess. I don’t know how to decompose that question, with any confidence. I think I know something about the parts, but probably have way too many questions and misconceptions tangled up in it. Please do illuminate!
I say fewer because the volume is less, and so the available “locations” are reduced. But this does not seem very satisfactory, right, or clear.”
Both your answers are correct. You remove heat from the system in an isothermal reversible compression, so ΔS < 0 (q is negative). The number of states available to the system is fewer, so, by that criterion also, ΔS < 0.
A closed system is one that cannot exchange mass with its surroundings, but it can exchange heat and mechanical energy (work W). An isolated system is one that can exchange neither mass, heat, nor work.
Chet
“That was great Chet. It helps to know the purpose and scope. Hey, can you explain to a confused student why the change in entropy in a closed system is not always greater than or equal to 0? I think I know (Poincaré recurrence?) but I also think I’m probably wrong.”
Suppose you compress a gas isothermally and reversibly in a closed system. To hold the temperature constant, do you have to add heat or remove heat? After you compress the gas to a smaller volume at the same temperature, are the number of quantum states available to it greater or fewer?
You are aware that, in thermodynamics, there is a difference between a closed system and an isolated system, correct?
Chet
“Hello Chestermiller.
Entropy and the Second Law of Thermodynamics is not exactly an intuitive concept. While I think your article is basically a good one, it is obviously somewhat limited in scope, and my only critique is that you did not cover some of the most important aspects of entropy.”
Thanks INFO_MAN. It’s nice to be appreciated.
Yes. You are correct. I deliberately limited the scope. Possibly you misconstrued my objective. It was definitely not to write a treatise on entropy and the 2nd law. I was merely trying to give beginning thermodynamics students who are struggling with the basic concepts the minimum understanding they need just to do their homework. As someone relatively new to Physics Forums, you may not be aware of the kinds of questions we get from novices. Typical of a recurring question is: How come the entropy change is not zero for an irreversible adiabatic process if the change in entropy is equal to the integral of dq/T and dq = 0? Homework problems frequently involve irreversible adiabatic expansion or compression of an ideal gas in a cylinder with a piston. Students are often asked to determine the final equilibrium state of the system, and the change in entropy. You can see how, if they were asking questions like the previous one, they would have trouble doing a homework problem like this.
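As a concrete illustration of that kind of homework problem (a sketch, assuming one mole of a monatomic ideal gas compressed adiabatically and irreversibly by a suddenly applied constant external pressure): the final state follows from the first law, and the entropy change is then evaluated along a reversible path connecting the same two equilibrium states. Even though dq = 0 along the actual path, ΔS comes out greater than zero.

```python
import math

R  = 8.314       # J/(mol K)
Cv = 1.5 * R     # molar heat capacity of a monatomic ideal gas (assumed)
n  = 1.0         # mol

# Initial equilibrium state
T1 = 300.0               # K
P1 = 1.0e5               # Pa
V1 = n * R * T1 / P1     # m^3

# The piston is suddenly loaded so a constant external pressure acts until
# the gas reaches a new equilibrium state with P2 = P_ext.
P_ext = 5.0e5            # Pa
P2 = P_ext

# First law for the adiabatic irreversible compression:
#   n*Cv*(T2 - T1) = -P_ext*(V2 - V1), with V2 = n*R*T2/P2. Solving for T2:
T2 = T1 * (Cv + R * P_ext / P1) / (Cv + R)
V2 = n * R * T2 / P2

# Entropy change, evaluated along any reversible path between the two states:
delta_S = n * Cv * math.log(T2 / T1) + n * R * math.log(V2 / V1)

print(f"T2 = {T2:.0f} K, V2 = {V2 * 1000:.1f} L")
print(f"Delta S = {delta_S:.2f} J/K  (> 0 even though dq = 0 on the actual path)")
```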
My original introduction to the tutorial was somewhat longer than in the present version, and spelled out the objectives more clearly. However, the Physics Forums guidelines set a goal of about 400 words for Insights articles, and the present version of my article is well over 1000 words. Here is the introductory text that I cut out:
In this author’s judgement, the primary cause of the (students’) confusion is the poor manner in which these concepts are taught in textbooks and courses.
The standard approach is to present the chronological development of the subject in a straight line from beginning to end. Although this is the way that the subject had developed historically, it is not necessarily the best way to teach the subject. It is much more important for the students to gain a solid understanding of the material by whatever means possible than to adhere to a totally accurate account of the chronological sequence. Therefore, in the present document, we have created a somewhat fictionalized account of the historical sequence of events in order to minimize the historical discussion, focus more intently on the scientific findings, and make the concepts clearer and less confusing to students.
Another shortcoming of existing developments is that the physical situations they discuss are not specified precisely enough, and the mathematical relationships likewise lack proper constraints on their applicability and limitations (particularly the so-called Clausius Inequality). There is also a lack of a concise mathematical statement of the second law of thermodynamics, expressed in such a way that it can be confidently applied to practical situations and problem solving. In the present development, we have endeavored to overcome these shortcomings.
“I agree that most people have a very hard time grasping entropy and the second law of thermodynamics. But I am not sure I understand why your article keeps referring to reversible processes and adiabatic idealizations. In natural systems, the entropy production rate of every process is always positive (ΔS > 0) or zero (ΔS = 0). But only idealized adiabatic (perfectly insulated) and isentropic (frictionless, non-viscous, pressure-volume work only) processes actually have an entropy production rate of zero (http://en.wikipedia.org/wiki/Isentropic_process). Heat is produced, but not entropy. In nature, this ideal can only be an approximation, because it requires an infinite amount of time and no dissipation.”
This is an example of one of those instances I was referring to in which the constraints on the equations are not spelled out clearly enough, and, as a result, confusion can ensue. The situation you are referring to here with the inequality (ΔS > 0) and equality (ΔS = 0) applies to the combination of the system and the surroundings, and not just to a closed system. Without this qualification, the student might get the idea that, for a closed system, ΔS ≥ 0 always, which is, of course, not the case.
Even though reversible processes are an idealization, there is still a need for beginners to understand them. First of all they provide an important limiting case with which irreversible processes can be compared. In geometry, there is no such thing as a perfect circle, a perfect rectangle, a perfect square, etc., but yet we still study them and apply their concepts in our work and lives. Secondly, some of the processes that occur in nature and especially in industry can approach ideal reversible behavior. Finally, and most importantly, reversible processes are the only vehicle we have for determining the change in entropy between two thermodynamic equilibrium states of a system or material.
“You hardly mention irreversible processes. An irreversible process degrades the performance of a thermodynamic system, and results in entropy production. Thus, irreversible processes have an entropy production rate greater than zero (ΔS > 0), and that is really what the second law is all about (beyond the second law analysis of machines or devices). Every naturally occurring process, whether adiabatic or not, is irreversible (ΔS > 0), since friction and viscosity are always present.”
I’m sorry that impression came through to you because that was not my intention. I feel that it is very important for students to understand the distinction between real irreversible process paths and ideal reversible process paths. Irreversible process paths are what really happen. But reversible process paths are what we need to use to get the change in entropy for a real irreversible process path.
“Here is my favorite example of an irreversible thermodynamic process, the Entropy Rate Balance Equation for Control Volumes:
[image: https://www.ecourses.ou.edu/ebook/thermodynamics/ch06/sec067/media/eq060701.gif]”
This equation applies to the more general case of an open system for which mass is entering and exiting, and I was trying to keep things simple by restricting the discussion to closed systems. Also, entropy generation can be learned by the struggling students at a later stage.
“And here are a couple of other important things you did not mention about entropy:
1) Entropy is a measure of molecular disorder in a system. According to Kelvin, a pure substance at absolute zero temperature is in perfect order, and its entropy is zero. This is the less commonly known Third Law of Thermodynamics.
2) “A system will select the path or assemblage of paths out of available paths that minimizes the potential or maximizes the entropy at the fastest rate given the constraints.” This is known as the Law of Maximum Entropy Production. “The Law of Maximum Entropy Production thus has deep implications for evolutionary theory, culture theory, macroeconomics, human globalization, and more generally the time-dependent development of the Earth as an ecological planetary system as a whole.” http://www.lawofmaximumentropyproduction.com/”
As I said above, I was trying to limit the scope exclusively to what the beginning students needed to understand in order to do their homework.
Chet
No problem Chet, don’t feel obliged. Might be just as well if you were to do something you thought would be most helpful, rather than follow us down a rabbit hole. This is the Crooks paper
http://arxiv.org/abs/cond-mat/9901352
The Entropy Production Fluctuation Theorem and the Nonequilibrium Work Relation for Free Energy Differences
Gavin E. Crooks
(Submitted on 29 Jan 1999 (v1), last revised 29 Jul 1999 (this version, v4))
There are only a very few known relations in statistical dynamics that are valid for systems driven arbitrarily far-from-equilibrium. One of these is the fluctuation theorem, which places conditions on the entropy production probability distribution of nonequilibrium systems. Another recently discovered far-from-equilibrium expression relates nonequilibrium measurements of the work done on a system to equilibrium free energy differences. In this paper, we derive a generalized version of the fluctuation theorem for stochastic, microscopically reversible dynamics. Invoking this generalized theorem provides a succinct proof of the nonequilibrium work relation.
I’m interested in the fluctuation theorem, just understanding it really, it seems to underpin the second law? What I liked about Crooks formulation is that I thought I could see more how “entropy” path selection and work are related. But I have little confidence I understand it.
Hello Chestermiller.
“There have been nearly as many formulations of the second law as there have been discussions of it.”
~P. W. Bridgman
And apparently, I just got another trophy since this is my first post!
“Thanks Chet. Your explanation of the “Clausius Inequality” and your answer on the difference between reversible and non-reversible paths were helpful and lucid, and it means a lot to know they are at least sensible questions.
I don’t suppose @techmologist and I could get your help reading an old Gavin Crooks paper from 1999 on the “generalized fluctuation theorem”? We’ve got a thread going in the cosmology forum. PeterDonis has been helping us (humoring us more like it). It’s under @techmologist’s question “why are there heat engines?” It’s pretty rambly at this point so I would be more than happy to restart it focusing it back on Crooks’ paper and handful of equations, and drill in with your guidance.”
I’ll take a look and see if I can contribute. There are lots of pages and lots of posts, so it may take me a while to come up to speed. No guarantees.
Chet
“Thanks Chet, I hope it’s okay if I keep asking you questions. It really is my favorite way to learn, and I can’t get enough of the second law, and I’m sure I will learn something – not the least of which will be precision of terms.
In the case of a gas in a cylinder with a piston (or damper), why does the amount of dissipation vary with the amount of force per unit time? What does the time rate of force have to do with the efficiency of conversion to mechanical energy? Why does the difference at any given time between the system and surroundings dictate the reversibility, as opposed to, say, the amount of energy transferred altogether?”
Hi Jimster. You ask great questions.
Why don’t you introduce this in a separate thread, and we’ll work through it together? First we’ll consider the spring/damper system to get an idea of how a difference between a rapid deformation and a very slow deformation (between the same initial and final states) plays out in terms of mechanical energy dissipated in the damper and work done. The idea is for you to get a feel for how this works.
Chet
Thanks Jimster.
“Thanks Chester.
Yes. I really did find that clear.
Which is not to say I understood it…
What is special about the reversible paths?”
Reversible paths minimize the dissipation of mechanical energy to thermal energy, and maximize the ability of temperature differences to be converted into mechanical energy. In reversible paths, the pressure exerted by the surroundings at the interface with the system is only slightly higher or lower than the pressure throughout the system, and the temperature at the interface with the surroundings is only slightly higher or lower than the temperature throughout the system. This situation is maintained over the entire path from the initial to the final equilibrium state of the system.
For irreversible paths, the dissipation of mechanical energy to thermal energy is the result of viscous dissipation. The same thing happens if you compress a combination of a spring and (viscous) damper connected in parallel. If you compress the combination very rapidly from an initial length to a final length, you generate lots of heat in the damper (since the force carried by the damper is proportional to the velocity difference across the damper). On the other hand, if you compress the combination very slowly, the force carried by the damper is much less, and you generate much less heat. The amount of work you need to do in the latter case to bring about the compression is also much less. This is a very close analogy to what happens when you cause a gas in a cylinder to compress.
“Are all the other paths, the non-reversible ones, the same, or do some integrate to different values ≤ ΔS than others?”
They integrate to different values < ΔS. The equal sign does not apply to irreversible paths. They are all less.
Chet
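As a back-of-the-envelope illustration of the spring/damper analogy above (a sketch with made-up parameter values): compressing the parallel spring/damper combination by the same distance at different constant speeds shows how the required work and the energy dissipated in the damper depend on how fast the compression is carried out.

```python
# Parallel spring (stiffness k) and viscous damper (coefficient c), compressed
# by a total distance D at constant speed v. The spring stores (1/2)*k*D**2
# regardless of speed; the damper force c*v acts over the distance D, so the
# dissipated energy c*v*D shrinks toward zero as the compression becomes slow.
k = 1000.0   # N/m    (assumed)
c = 50.0     # N*s/m  (assumed)
D = 0.10     # m, total compression

for v in (1.0, 0.1, 0.001):           # compression speeds, fast to very slow
    w_spring = 0.5 * k * D**2         # recoverable (stored) work
    w_damper = c * v * D              # dissipated as heat in the damper
    w_total = w_spring + w_damper     # work that must be supplied
    print(f"v = {v:6.3f} m/s: work = {w_total:6.3f} J, dissipated = {w_damper:.4f} J")
# The slower the compression, the closer the total work gets to the reversible
# limit (the stored spring energy alone), mirroring the gas-compression case.
```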
Kudos chestermiller. That was clear, and understandable, The historical perspective really helped.
I look forward to the day when chestermiller makes it similarly easy to understand why this second law implies that “the entropy of the universe tends to a maximum”, and how it relates to the kind of information debated in the Hawking/Susskind “black hole wars.”
Awesome first entry chestermiller!
But you claimed your formulation of the Clausius inequality to be “mathematically precise”, didn’t you?
Thanks for your definitive take on thermodynamics one and two.
OK, I understand a little more and accept the last sentence. I think primarily about heat engines.
I have two questions about closed systems. Consider two closed systems, both have a chemical reaction area which releases a small amount of heat and are initially at the freezing point. One has water and no ice and the other has ice. I expect after the chemical reaction the water system will absorb the heat with a tiny change in temperature and the other will convert a small amount of ice to water. Is there any difference in the increase of energy? Suppose I choose masses to enable the delta T of the water system to go toward zero. Is there any difference?
I have another problem with entropy. Some folks say it involves information. I have maintained that only energy is involved. Consider a system containing two gases. The atoms are identical except half are red and the other half are blue. Initially the red and blue are separated by a card in the center of the container. The card is removed and the atoms mix. How can there be a change in entropy?
Oh, one more please. Can you show an example where the entropy change is negative like you were saying?
Excellent article! One of the best definitions of entropy and the second law I’ve ever read.
In the analysis of mixtures, we have that for ideal mixtures ##\Delta_{\text{mix}} H=0##. So I think it could be argued that the entropy change for ideal mixtures is zero, according to ##dS=\frac{dq_{rev}}{T}##. However, this is not the case, and in fact the entropy of mixing is given by ##-nR\sum_i x_i \ln x_i ##
How can I resolve this?
I’m not sure if this is the kind of reply that is expected here, so I would like to know that too hehe.
Thanks.
Latex doesn’t seem to work here.
How can I like these pages?
This is probably a stupid question but what is dt? It appears in most of the equations, but I can’t find its definition in the article.
dt is the differential element of change in time.