Thermodynamics

Understanding Entropy and the 2nd Law of Thermodynamics

Estimated Read Time: 7 minute(s)
Common Topics: process, path, equilibrium, reversible, law

Introduction

The second law of thermodynamics and the associated concept of entropy have been sources of confusion to thermodynamics students for well over a century.  The objective of the present development is to clear up much of this confusion.  We begin by briefly reviewing the first law of thermodynamics, in order to introduce in a precise way the concepts of thermodynamic equilibrium states, heat flow, mechanical energy flow (work), and reversible and irreversible process paths.

First Law of Thermodynamics

A thermodynamic equilibrium state of a system is defined as one in which the temperature and pressure are spatially uniform throughout the system and constant in time.

Consider a closed system (no mass enters or exits) that, at initial time [itex]t_i[/itex], is in an initial equilibrium state, with internal energy [itex]U_i[/itex], and, at a later time [itex]t_f[/itex], is in a new equilibrium state with internal energy [itex]U_f[/itex].  The transition from the initial equilibrium state to the final equilibrium state is brought about by imposing a time-dependent heat flow across the interface between the system and the surroundings, and a time-dependent rate of doing work at the interface between the system and the surroundings. Let [itex]\dot{q}(t)[/itex] represent the rate of heat addition across the interface at time t, and let [itex]\dot{w}(t)[/itex] represent the rate at which the system does work at the interface at time t. According to the first law (basically conservation of energy),
[tex]\Delta U=U_f-U_i=\int_{t_i}^{t_f}{(\dot{q}(t)-\dot{w}(t))dt}=Q-W[/tex]
where Q is the total amount of heat added and W is the total amount of work done by the system on the surroundings at the interface.
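As an illustrative sketch (the rate functions below are invented purely for this example), the first-law bookkeeping can be checked numerically by integrating assumed ##\dot{q}(t)## and ##\dot{w}(t)## over one particular process path:

```python
import numpy as np

# Hypothetical rate functions for one particular process path
# (invented for illustration; units: watts and seconds).
def q_dot(t):
    return 50.0 * np.exp(-t / 10.0)   # rate of heat addition to the system

def w_dot(t):
    return 20.0 * np.exp(-t / 10.0)   # rate at which the system does work

t = np.linspace(0.0, 100.0, 10001)    # from t_i = 0 to t_f = 100 s

def integrate(y, x):
    """Trapezoid rule: integral of y(x) over the sampled path."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Q = integrate(q_dot(t), t)            # total heat added, Q
W = integrate(w_dot(t), t)            # total work done by the system, W
delta_U = Q - W                       # first law: ΔU = Q - W
print(Q, W, delta_U)
```

Any other process path between the same two equilibrium states must yield the same value of Q - W, and hence the same ΔU, which is exactly the constraint stated above.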

The time variation of [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] between the initial and final states uniquely defines the so-called process path. There is an infinite number of possible process paths that can take the system from the initial to the final equilibrium state. The only constraint is that Q-W must be the same for all of them.

A reversible process path is defined as one for which, at each instant of time along the path, the system is only slightly removed from being in thermodynamic equilibrium with its surroundings.  The path can thus be considered a continuous sequence of thermodynamic equilibrium states, and the temperature and pressure throughout the system are completely uniform spatially along the entire reversible process path.  In order to maintain these conditions, a reversible path must be carried out very slowly, so that [itex]\dot{q}(t)[/itex] and [itex]\dot{w}(t)[/itex] are both very close to zero over the entire path.

An irreversible process path is typically characterized by rapid rates of heat transfer [itex]\dot{q}(t)[/itex] and of doing work [itex]\dot{w}(t)[/itex] at the interface with the surroundings.  This produces significant temperature and pressure gradients within the system (i.e., the pressure and temperature are not spatially uniform throughout), and thus it is not possible to identify specific representative values for either the temperature or the pressure of the system (except at the initial and final equilibrium states). However, the pressure ##P_{Int}(t)## and temperature ##T_{Int}(t)## at the interface can always be measured, and controlled using the surroundings to impose whatever process path we desire.  (This is equivalent to specifying the rate of heat flow [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] at the interface.)

Both for reversible and irreversible process paths, the rate at which the system does work on the surroundings is given by:
[tex]\dot{w}(t)=P_{Int}(t)\dot{V}(t)[/tex]
where, again, ##P_{Int}(t)## is the pressure at the interface with the surroundings, and where [itex]\dot{V}(t)[/itex] is the rate of change of system volume at time t.

If the process path is reversible, the pressure P throughout the system is uniform, and thus matches the pressure at the interface, such that

[tex]P_{Int}(t)=P(t)\mbox{         (reversible process path only)}[/tex]

Therefore, in the case of a reversible process path, [tex]\dot{w}(t)=P(t)\dot{V}(t)\mbox{          (reversible process path only)}[/tex]
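As a concrete (hypothetical) instance of the reversible-work relation, consider the isothermal expansion of an ideal gas, where the numerically integrated ##\int P\,dV## can be checked against the closed form ##nRT\ln(V_f/V_i)##; the numbers below are chosen only for illustration:

```python
import numpy as np

# Reversible isothermal expansion of an ideal gas (illustrative values).
n, R, T = 1.0, 8.314, 300.0          # mol, J/(mol*K), K
V_i, V_f = 0.010, 0.020              # initial and final volumes, m^3

V = np.linspace(V_i, V_f, 100001)
P = n * R * T / V                    # ideal-gas pressure along the path

# Work done by the system: W = integral of P dV (trapezoid rule)
W = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))

W_exact = n * R * T * np.log(V_f / V_i)   # closed form n R T ln(Vf/Vi)
print(W, W_exact)
```

Because the path is reversible, the system pressure P(t) is well defined and equals the interface pressure at every instant, which is what licenses using the ideal-gas law along the whole path.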

This completes our discussion of the First Law of Thermodynamics.

Second Law of Thermodynamics

In the previous section, we focused on the infinite number of process paths that are capable of taking a closed thermodynamic system from an initial equilibrium state to a final equilibrium state. Each of these process paths is uniquely determined by specifying the heat transfer rate [itex]\dot{q}(t)[/itex] and the rate of doing work [itex]\dot{w}(t)[/itex] as functions of time at the interface between the system and the surroundings. We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by the two integrals:
[tex]Q=\int_{t_i}^{t_f}{\dot{q}(t)dt}[/tex]
[tex]W=\int_{t_i}^{t_f}{\dot{w}(t)dt}[/tex]
In the present section, we will be introducing a third integral of this type (involving the heat transfer rate [itex]\dot{q}(t)[/itex]) to provide a basis for establishing a precise mathematical statement of the Second Law of Thermodynamics.

The discovery of the Second Law came about in the 19th century, through contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated language and typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical statement of the Second Law that avoids all the complicated rhetoric. The sad part about all this is that such a precise definition has existed all along: it was formulated by Clausius back in the 1800s.

(The following is a somewhat fictionalized account, designed to minimize the historical discussion, and focus more intently on the scientific findings.) Clausius wondered what would happen if he evaluated the following integral over each of the possible process paths between the initial and final equilibrium states of a closed system:
[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}[/tex]
where ##T_{Int}(t)## is the temperature at the interface with the surroundings at time t. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing:  For any closed system, the values calculated for the integral over all the possible reversible and irreversible paths (between the initial and final equilibrium states) are not arbitrary; instead, there is a unique upper bound to the value of the integral. Clausius also found that this observation is consistent with all the “word definitions” of the Second Law.

Clearly, if there is an upper bound for this integral, this upper bound has to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy.

But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, out of the infinite number of possible process paths, there exists a well-defined subset, each member of which gives exactly the same maximum value for the integral. This subset consists of all the reversible process paths. Thus, to determine the change in entropy between two equilibrium states, one must first “dream up” a reversible path between the two states and then evaluate the integral over that path. Any other process path will give a value for the integral lower than the entropy change.  (Note that the reversible process path used to determine the entropy change does not necessarily need to bear any resemblance to the actual process path.  Thus, for example, if the actual process path were adiabatic, the reversible path would not need to be adiabatic.)

So, mathematically, we can now state the Second Law as follows:

[tex]I=\int_{t_i}^{t_f}{\frac{\dot{q}(t)}{T_{Int}(t)}dt}\leq\Delta S=\int_{t_i}^{t_f} {\frac{\dot{q}_{rev}(t)}{T(t)}dt}[/tex]
where [itex]\dot{q}_{rev}(t)[/itex] is the heat transfer rate for any of the reversible paths between the initial and final equilibrium states, and T(t) is the system temperature at time t (which, for a reversible path, matches the temperature at the interface with the surroundings ##T_{Int}(t)##). This constitutes a precise mathematical statement of the Second Law of Thermodynamics.  The relationship is referred to as the Clausius Inequality.
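A minimal worked example of the inequality (numbers invented for illustration): a body of constant heat capacity ##C## is heated by direct thermal contact with a reservoir at ##T_f##, so that ##T_{Int}(t)=T_f## throughout the irreversible path, and the result is compared with ##\Delta S## evaluated along a reversible path between the same two states:

```python
import math

# A body of constant heat capacity C is heated from T_i to T_f.
C = 10.0                  # heat capacity, J/K (illustrative)
T_i, T_f = 300.0, 400.0   # initial and final temperatures, K

# Irreversible path: contact with a reservoir at T_f, so T_int = T_f
# throughout, and I = integral of dq/T_int = C (T_f - T_i) / T_f.
I = C * (T_f - T_i) / T_f

# Reversible path: T_int matches the system temperature T at every
# instant, so Delta S = integral of C dT / T = C ln(T_f / T_i).
delta_S = C * math.log(T_f / T_i)

print(I, delta_S, I <= delta_S)   # the Clausius inequality: I <= Delta S
```

Here the irreversible integral comes out strictly below the entropy change, and the gap closes only as the reservoir temperature is brought arbitrarily close to the body's temperature at every instant, i.e., as the path approaches reversibility.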

Paper of interest for further learning:
http://arxiv.org/abs/cond-mat/9901352

 

74 replies
  1. Chestermiller says:
    fox26

    Chet,
    Is this too long for a comment?

    Thank you for explaining the solution, for real materials, to my infinite entropy change
    problem–maybe; I, as indicated, suspected something of the kind might be the explanation. Do
    you know, however, that taking into account both the variation of heat capacity with temperature
    and pressure, and also phase transitions, always leads to a finite value of ∫dq/T when integrated
    between 0°K and a higher temperature? Do you care?

    Actually, as a chemical engineer who worked on processes substantially above absolute zero, I have no interest in this whatsoever.
    Maybe you are concerned only with
    changes in entropy for processes operating between two non-zero temperatures. Did you use
    entropy change calculations in your chemical engineering job? I know that they can be used in
    some cases to indicate that a proposed process is impossible, by showing that it would involve
    reduction in entropy of an isolated system, thus violating the second law. A well-known example
    is the operation of a Carnot cycle heat engine with efficiency greater than that set by the
    requirement that the reduction in entropy caused by the removal of thermal energy from the high
    temperature heat bath must be accompanied by at least as great an increase in entropy caused
    by the addition of thermal energy to the low temperature heat bath.

    The concept of entropy figures substantially in the practical application of thermodynamics to chemical process engineering, but not in the qualitative way that you describe. Entropy is part of the definition of Gibbs free energy, which is essential to quantifying chemical reaction and phase equilibrium behavior in the design and operation of processes involving distillation, gas absorption, ion exchange, crystallization, liquid extraction, chemical reactors, etc.
    One might think that the Δ(entropy) = ∫dq/T law should always give a finite Δ(entropy) for pure
    ideal gases as well as real materials, but as I (simply) demonstrated, it doesn’t do so for ideal
    gases with the lower temperature equal to 0°K, even though ideal gases would not experience
    any variation of heat capacity with temperature or pressure, or any phase transitions. I recently
    thought about this problem some more, stimulated by the PF discussion, and arrived at a (now
    obvious to me) solution, or at least an explanation, that actually favors the thermodynamic
    definition of entropy change, which involves infinite, or arbitrarily large, entropy change for
    process (involving ideal gases) with starting temperatures at, or arbitrarily near, absolute zero,
    over the statistical mechanical view, which requires any finite system to have only finite absolute
    entropy at any temperature, including absolute zero, so gives only finite entropy change for
    processes, for finite systems, operating between any two states at any two finite temperatures,
    even when one is absolute zero. This solution or explanation will require quite a number of lines
    to state. I hope that it is not so obvious a one that I am wasting your, my, and any other persons
    who are reading this posts’ time by going through it.

    I personally have no interest in this, but other members might. Still, I caution you that Physics Forums encourages discussion only of mainstream theories, and specifically prohibits discussing personal theories. I ask you to start a new thread with what you want to cover (which seems tangential to the main focus of the present thread), possibly in the Beyond the Standard Model forum. You can then see whether anyone else has interest in this or whether it is just deemed a personal theory. I'm hoping that @DrDu and @DrClaude might help out with this.

    For now, I think that the present thread has run its course, and I'm hereby closing it.

  2. fox26 says:
    Chestermiller

    For pure materials, the ideal gas is only a model for real gas behavior above the melting point and at low reduced pressures, ##p/p_{critical}##. For real gases, the heat capacity is not constant, and varies with both temperature and pressure. So, the solution to your problem is, first of all, to take into account the temperature-dependence of the heat capacity (and pressure-dependence, if necessary). Secondly, real materials experience phase transitions, such as condensation, freezing, and changes in crystal structure (below the freezing point). So one needs to take into account the latent heat effects of these transitions in calculating the change in entropy. And, finally, before and after phase transitions, the heat capacity of the material can be very different (e.g., ice and liquid water).

    Chet,
    Is this too long for a comment?

    Thank you for explaining the solution, for real materials, to my infinite entropy change
    problem–maybe; I, as indicated, suspected something of the kind might be the explanation. Do
    you know, however, that taking into account both the variation of heat capacity with temperature
    and pressure, and also phase transitions, always leads to a finite value of ∫dq/T when integrated
    between 0°K and a higher temperature? Do you care? Maybe you are concerned only with
    changes in entropy for processes operating between two non-zero temperatures. Did you use
    entropy change calculations in your chemical engineering job? I know that they can be used in
    some cases to indicate that a proposed process is impossible, by showing that it would involve
    reduction in entropy of an isolated system, thus violating the second law. A well-known example
    is the operation of a Carnot cycle heat engine with efficiency greater than that set by the
    requirement that the reduction in entropy caused by the removal of thermal energy from the high
    temperature heat bath must be accompanied by at least as great an increase in entropy caused
    by the addition of thermal energy to the low temperature heat bath.

    One might think that the Δ(entropy) = ∫dq/T law should always give a finite Δ(entropy) for pure
    ideal gases as well as real materials, but as I (simply) demonstrated, it doesn’t do so for ideal
    gases with the lower temperature equal to 0°K, even though ideal gases would not experience
    any variation of heat capacity with temperature or pressure, or any phase transitions. I recently
    thought about this problem some more, stimulated by the PF discussion, and arrived at a (now
    obvious to me) solution, or at least an explanation, that actually favors the thermodynamic
    definition of entropy change, which involves infinite, or arbitrarily large, entropy change for
    process (involving ideal gases) with starting temperatures at, or arbitrarily near, absolute zero,
    over the statistical mechanical view, which requires any finite system to have only finite absolute
    entropy at any temperature, including absolute zero, so gives only finite entropy change for
    processes, for finite systems, operating between any two states at any two finite temperatures,
    even when one is absolute zero. This solution or explanation will require quite a number of lines
    to state. I hope that it is not so obvious a one that I am wasting your, my, and any other persons
    who are reading this posts’ time by going through it.

    The basic reason that the statistical mechanical (SM) entropy (S) of a pure (classical) ideal gas
    SYS in any equilibrium macrostate at 0°K or any temperature above that is zero or a finite
    positive number, whereas its thermodynamic (THRM) entropy change between an equilibrium
    macrostate of SYS at 0°K and one at any temperature above that is infinite, is that the SM
    entropy of SYS in some equilibrium macrostate MAC is calculated using a discrete
    approximation NA to the uncertainty of the exact state of SYS when in MAC–NA is the number
    of microstates available to SYS when it is in MAC, with some mostly arbitrary definition of the
    size in phase space of a microstate of SYS– whereas the THRM entropy change between two
    equilibrium macrostates of SYS is calculated using the (multi-dimensional) area or volume in
    phase space of the set of microstates available to SYS when in those macrostates, which can
    be any positive real number (and for a macrostate at 0°K is 0). The details follow:

    The state of an ideal gas SYS composed of N point-particles each of mass m which interact
    only by elastic collision is specified by a point P[SUB]s[/SUB] in 6N-dimensional phase space, 3N
    coordinates of them for position and 3N of them for momentum. If the gas is in equilibrium,
    confined to a cube 1 unit on a side, and has a thermal energy of E, SM and THRM both consider P[SUB]s[/SUB]
    to be equally likely to be anywhere on the energy surface ES determined by E, which is the set of all
    points corresponding to SYS having a thermal energy of E, and the probability density of P[SUB]s[/SUB]
    being at any point x is the same positive constant for each x ∈ ES, and 0 elsewhere . Since E is
    purely (random) kinetic energy, E = Σ[SUB]1[/SUB][SUP]N[/SUP]p[SUB]i[/SUB][SUP]2[/SUP]/2m, where p[SUB]i[/SUB] is the ith particle's momentum,
    so this energy surface is the set of all points with position coordinates within the unit cube in the
    position part of the phase space for SYS, and whose momentum coordinates are on the 3N-1
    dimensional sphere MS in momentum space centered at the origin with radius r = √(2mE). The
    area (or volume) where P[SUB]s[/SUB] might be is proportional to the area A of MS, and A ∝ r[SUP]3N-1[/SUP]. The entropy
    S of SYS is proportional to ln(the area of phase space where P[SUB]s[/SUB] might be), S ∝ ln(A), therefore
    S = const1.+ const2. x ln(E), and since E ∝ T by the equipartition theorem, S = const1.+ const2. x
    [const3. + ln(T)]. Thus dS/dE ∝ dS /dT = const2. x 1/T, so dS/dE = const4. x 1/T, and choosing const4.
    to be 1, dS = dE/T. This shows the origin of your THRM dS law, for ideal gases (with dE = dq), which
    you probably knew. SM approximates this law, adequately for high T and so large A, by dividing
    phase space up into boxes with more-or-less arbitrary dimensions of position and momentum, and
    replacing A by the number NA of boxes which contain at least one point of ES. This makes S a
    function of T which is not even continuous, let alone differentiable, but for large T the jumps in NA,
    and so in S, as a function of T are small enough compared to S to ignore, and the SM entropy
    can approximately also follow the dS = dE/T law, and be about equal to the THRM entropy, for
    suitable box dimensions. However, as T approaches 0°K, the divergence of the SM entropy from the
    THRM entropy using these box dimensions becomes severe. As T decreases in steps by factors
    of, say, D, the THRM entropy S decreases by some constant amount ln(D) per step, becoming
    arbitrarily negative for low enough T, but with T never quite reaching 0°K by this process. For
    T = 0°K, A = 0, so S = (some positive) const. x ln(A) = const. x ln(0) = minus infinity. Since the
    energy surface ES must intersect at least one box of the SM partition of phase space, NA can never
    go below 1, no matter how small T and so A become. Thus the SM entropy S can never go below
    const. x ln(1) = 0. The THRM absolute entropy can be finite, except at T = 0, because, although Δ(S)
    from a state of SYS whose T is arbitrarily close to 0°K to a state at a higher T can be arbitrarily
    large (positive), S at the starting state can be negative enough that the resulting S for the state at
    the higher temperature is some constant finite number, regardless of how near 0°K the starting
    state is. For the SM entropy, a similar situation is not the case, since although the SM Δ(S) is
    about as large as the THRM Δ(S), the SM S at the starting state can never be less than 0. The
    temperature at which the SM entropy S gets stuck at 0, not being able to go lower for a lower T, is
    not a basic feature of the laws of the universe. Making SYS bigger or making the boxes of the
    SM partition of phase space smaller would result in the sticking temperature being lower, and
    of course making SYS smaller or the boxes larger would raise the sticking temperature.

    I have read somewhere (of course, maybe it was written by a low-temperature physicist) that the
    amount of interesting phenomena for a system within a range of temperatures is proportional to
    the ratio of the highest to the lowest temperature of that range, not to their difference. If so,
    there would be as large an amount of such phenomena between .001°K and .01°K as between
    100°K and 1000°K, but the usual SM entropy measure would show no entropy difference
    between any two states of a very small system in the lower temperature range, but a non-zero
    difference between different states in the upper range, so would be of no help in analyzing
    processes in the lower range, even though of some help in the upper range (or if not, for a given
    system, for these two temperature ranges, it would be so for some other two temperature ranges
    each with a 10 to 1 temperature ratio). On the other hand, the THRM entropy measure would show
    as much entropy difference (which would be non-zero) between states at the bottom and at the top
    of the lower range as between states at the bottom and at the top of the upper range.

  3. DrDu says:
    fox26

    […] dq = C(dT), which you've used in evaluating such integrals, with C = the (constant) heat capacity,
    say at constant volume, of SYS, or dq = k(dT/2)x(the number of degrees of freedom of SYS), which
    is implied by the Equipartition Theorem,[…]Even in phenomenological thermodynamics, the heat capacity C generically depends on temperature. The equipartition theorem is a theorem from classical mechanics. It is approximately applicable if the number of quanta in each degree of freedom is >>1. In solids, this leads to the well known rule of Dulong-Petit, stating that the heat capacity per atom in a solid is approximately ##3k_mathrm{B}##. At lower temperatures, the heat capacity decreases continuously as the degrees of freedom start to "freeze out", with the exception of the sound modes. This leads to the celebrated Debye expression for the heat capacity at low temperatures ##C_V approx T^3##.

  4. DrDu says:
    fox26

    You did state somewhere that some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero….

    No, the third law was formulated by Walther Nernst. He also did not find that the absolute entropy at T=0 was 0. Rather, he found that the entropy of an ideal crystal becomes independent of all the other variables of the system (like p) in the limit ##T \to 0##. So the entropy at T=0 is a constant, and this constant can conveniently be chosen to be 0.

  5. Chestermiller says:

    For pure materials, the ideal gas is only a model for real gas behavior above the melting point and at low reduced pressures, ##p/p_{critical}##. For real gases, the heat capacity is not constant, and varies with both temperature and pressure. So, the solution to your problem is, first of all, to take into account the temperature-dependence of the heat capacity (and pressure-dependence, if necessary). Secondly, real materials experience phase transitions, such as condensation, freezing, and changes in crystal structure (below the freezing point). So one needs to take into account the latent heat effects of these transitions in calculating the change in entropy. And, finally, before and after phase transitions, the heat capacity of the material can be very different (e.g., ice and liquid water).

  6. fox26 says:
    Chestermiller

    Chestermiller submitted a new PF Insights post

    Understanding Entropy and the 2nd Law of Thermodynamics

    View attachment 178074

    Continue reading the Original PF Insights Post.

    Chestermiller

    Wow. Thank you for finally clarifying your question.

    You are asking how the absolute entropy of a system can be determined. This is covered by the 3rd Law of Thermodynamics. I never mentioned the 3rd Law of Thermodynamics in my article. You indicated that, in my article, I said that "some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0." I never said this in my article or in any of my comments. If you think so, please point out where. My article only deals with relative changes in entropy from one thermodynamic equilibrium state to another.

    Chet,
    My statement about what you had said regarding 0 entropy at 0° Kelvin did not involve a direct
    quote from you, using “ “, it involved an indirect quote, using the word “that”, and included a part
    which I wasn’t attributing to you, the “some important person in thermodynamics, I don’t remember
    who, so call him “X” (maybe it was Clausius)” –that was my comment about what you had said. I
    admit it wasn't perfectly clear which parts were ones that I was saying that you had said, and which
    were mine, but making such things completely unambiguous in the English language often, as with
    what I intended to say in this case, requires overly long and awkward constructions. Also, I didn’t
    say that you had made the 0 entropy at 0° K statement in your article; in fact, I thought that you
    had made it while replying to a comment about your article, but after you stated in your email that
    you hadn't said it in your article or in any of your comments, I looked back over them, and found
    that it had occurred in a quote from INFO-MAN which you had included in one of your comments.
    X in that quote was "Kelvin", not "Clausius". According to INFO-MAN, Kelvin had said that a pure
    substance (mono-molecular?–fox26's question, not Kelvin’s) at absolute zero would have zero
    entropy. Using "entropy" in the statistical mechanical sense, this statement attributed to Kelvin is
    true (classically, not quantum mechanically).

    Fine, but that brings up what may be a serious problem with the thermodynamics equation:
    Δ(entropy) for a reversible process between equilibrium states A and B of a system SYS = the
    integral of dq/T between A and B. If SYS is a pure gas in a closed container, and A is SYS at 0° K,
    and the relation between dq and dT, which one must know to evaluate the integral, is either
    dq = C(dT), which you've used in evaluating such integrals, with C = the (constant) heat capacity,
    say at constant volume, of SYS, or dq = k(dT/2)x(the number of degrees of freedom of SYS), which
    is implied by the Equipartition Theorem, then the integral of dq/T between A and B is [the integral,
    between 0° K and the final temperature T1, of some non-zero constant P times dT/T] =
    P[ln(T1) – ln(0)] = ∞ (infinity [for T1 > 0], but actually even then it might be better to regard the
    integral as not defined). This problem isn’t solved by requiring the lower (starting) temperature
    T0 to be non-zero, but allowing it to be anything above zero, because the integral between
    T0 and any T1 > 0 can be made arbitrarily (finitely) large by making T0 some suitably small but
    non-zero temperature. Thus, if (1), Kelvin’s sentence is true with “entropy” having the
    thermodynamic as well as with it having the statistical mechanical meaning, (2), the Δ(entropy) =
    ∫dq/T law is true for thermodynamic as well as statistical mechanical entropy, and (3), a linear
    relation between dq and dT holds
    , then the thermodynamic entropy for any (non-empty)
    system in equilibrium and at any temperature T1 above absolute zero can’t be finite, even though
    the statistical mechanical entropy for such a (finite) system can be made arbitrarily small by taking
    T1 to be some suitable temperature > 0° K. Surely the thermodynamic entropy can’t be so different
    from the statistical mechanical entropy that the conclusion of the previous sentence is true. The
    problem's solution might be that the heat capacity C varies at low temperatures in such a way, for
    example C ∝ √T, that the integral is finite, or that the Equipartition Theorem breaks down at low
    temperatures, but at least for systems which are a gas composed of classical point particles
    interacting, elastically, only when they collide, which is an ideal gas (never mind that they would
    almost never collide), the Equipartition Theorem leads to, maybe is equivalent to, the Ideal Gas Law,
    which can be mathematically shown to be true for such a gas, even down to absolute zero, and
    experimentally breaks down severely, at low temperatures with real gases, only because of their
    departures, including their being quantum mechanical, from the stated conditions. What is the
    solution of this problem? Must thermodynamics give up the Δ(entropy) = ∫dq/T law as an exact, and
    for low temperatures as even a nearly exact, law?

  7. Chestermiller says:
    fox26

    I asked general questions because those were what I was interested in, not a specific calculation. You mostly made general statements, instead of specific calculations, in your article and answers to replies, which often were themselves general. However, if you won't answer general questions from me, here's a specific one, even though it's a particular case of the first general question in the last paragraph of my last previous reply:

    Suppose a closed (in your sense) system SYS in state S1 consists of a gas of one kilogram of hydrogen molecules in equilibrium at 400 degrees kelvin in a cubical container one meter on a side; I leave it to you to calculate its internal pressure approximately, if you wish, using the ideal gas law. How can its entropy be calculated? Integrating dq/T over the path of a reversible process going from some other state S2 of SYS to S1 can give the change of entropy Δentropy(S2,S1) caused by the process, and entropy(S1) = entropy(S2) + Δentropy(S2,S1), but what is entropy(S2), and how can that be determined by thermodynamic considerations alone, without invoking statistical mechanical ones? You did state somewhere that some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0, so the problem would be solved, except for the question of how X had determined that entropy(S2), or any other system at absolute zero, = 0, using only thermodynamic considerations. You wrote "determined", so I assume he didn't do this just by taking entropy(any system at absolute zero) = 0 as an additional law of thermodynamics, or part of the thermodynamic definition of "entropy", but instead calculated it. How? It can be done by statistical mechanical considerations (for the SM idea of entropy), but you presumably would want to do it by thermodynamics alone.

    Wow. Thank you for finally clarifying your question.

    You are asking how the absolute entropy of a system can be determined. This is covered by the 3rd Law of Thermodynamics. I never mentioned the 3rd Law of Thermodynamics in my article. You indicated that, in my article, I said that "some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0." I never said this in my article or in any of my comments. If you think so, please point out where. My article only deals with relative changes in entropy from one thermodynamic equilibrium state to another.

  8. fox26 says:
    Chestermiller

    Huh??? From what you have written, I don't even really know whether we are disagreeing about anything. Are we?

    By a specific problem, what I was asking for was not something general, such as systems you have only alluded to, but for a problem with actual numbers for temperatures, pressures, masses, volumes, forces, stresses, strains, etc. Do you think you can do that? If not, then we're done here. I'm on the verge of closing this thread.

    I asked general questions because those were what I was interested in, not a specific calculation. You mostly made general statements, instead of specific calculations, in your article and answers to replies, which often were themselves general. However, if you won't answer general questions from me, here's a specific one, even though it's a particular case of the first general question in the last paragraph of my last previous reply:

    Suppose a closed (in your sense) system SYS in state S1 consists of a gas of one kilogram of hydrogen molecules in equilibrium at 400 K in a cubical container one meter on a side; I leave it to you to calculate its internal pressure approximately, if you wish, using the ideal gas law. How can its entropy be calculated? Integrating dq/T over the path of a reversible process going from some other state S2 of SYS to S1 can give the change of entropy Δentropy(S2,S1) caused by the process, and entropy(S1) = entropy(S2) + Δentropy(S2,S1), but what is entropy(S2), and how can that be determined by thermodynamic considerations alone, without invoking statistical mechanical ones? You did state somewhere that some important person in thermodynamics, I don't remember who, so call him "X" (maybe it was Clausius), had determined that the entropy of any system consisting of matter (in equilibrium) at absolute zero would be zero, so letting S2 = SYS at absolute zero, we would have entropy(S2) = 0, so the problem would be solved, except for the question of how X had determined that entropy(S2), or any other system at absolute zero, = 0, using only thermodynamic considerations. You wrote "determined", so I assume he didn't do this just by taking entropy(any system at absolute zero) = 0 as an additional law of thermodynamics, or part of the thermodynamic definition of "entropy", but instead calculated it. How? It can be done by statistical mechanical considerations (for the SM idea of entropy), but you presumably would want to do it by thermodynamics alone.
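The pressure fox26 leaves to the reader follows directly from the ideal gas law, p = nRT/V. A minimal sketch, assuming the hydrogen is well approximated as an ideal gas at these conditions (the specific numbers come from the question above, not from the article):

```python
# Ideal-gas estimate of the pressure of 1 kg of H2 at 400 K in a 1 m^3 box.
# Assumes ideal-gas behavior, which is a good approximation for H2 here.
R = 8.314          # J/(mol K), molar gas constant
M_H2 = 2.016e-3    # kg/mol, molar mass of molecular hydrogen
m = 1.0            # kg of gas
T = 400.0          # K
V = 1.0            # m^3 (cube one meter on a side)

n = m / M_H2       # amount of gas, about 496 mol
p = n * R * T / V  # pressure in Pa, about 1.65 MPa
print(f"n = {n:.0f} mol, p = {p/1e6:.2f} MPa")
```

This gives roughly 1.65 MPa, i.e., about 16 atmospheres; it answers only the pressure aside, not the absolute-entropy question the comment goes on to ask.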

  9. Chestermiller says:
    fox26

    Is this not possible (classically, ignoring the internal energy of atoms and molecules and the relativistic rest-mass equivalent E = mc^2 energy): Total internal energy E of a closed (in my sense) system, in the center of mass frame = mechanical (macroscopic, including macroscopic kinetic and internal potential energy) energy + thermal (microscopic kinetic) energy? That is what I meant and, when I wrote my first comment, thought you meant, by "mechanical" and "thermal" energy. (I used "heat", non-precisely, in parenthesis after "thermal" in my second comment to try to indicate the meaning of "thermal" just because you seemed, in your reply to my first comment, to think my "thermal energy" meant the total internal energy of the system, which of course it didn't.) My comment that the atmosphere of the earth would not be a system in equilibrium under my stated conditions, according to your definition of "thermodynamic equilibrium state", follows from your definition of that in the first sentence after the second bold subheading "First Law of Thermodynamics" in your article. I agreed, in the last sentence of the first paragraph of my second comment (this is my third comment) with your statements 3 and 4, of 5 total, in your reply to Khashishi ("and" in 3 should be "which"). Two other specific problems are stated in the second and last paragraph of my second comment.

    Huh??? From what you have written, I don't even really know whether we are disagreeing about anything. Are we?

    By a specific problem, what I was asking for was not something general, such as systems you have only alluded to, but for a problem with actual numbers for temperatures, pressures, masses, volumes, forces, stresses, strains, etc. Do you think you can do that? If not, then we're done here. I'm on the verge of closing this thread.

  10. fox26 says:
    Chestermiller

    Not so. Internal energy is a physical property of a material (independent of process path, heat and work), and heat and work depend on process path.

    Not correct. The atmosphere at a uniform temperature and completely still would be in equilibrium even with pressure variation. The form of the first law equation I gave, for simplicity, omitted the change in potential energy of the system.

    I still don't understand what you are asking or saying. Why don't you define a specific problem that we can both solve together? Define a problem that you believe would illustrate what you are asking. Otherwise, I don't think I can help you, and we will just have to agree to disagree.

    Is this not possible (classically, ignoring the internal energy of atoms and molecules and the relativistic rest-mass equivalent E = mc^2 energy): Total internal energy E of a closed (in my sense) system, in the center of mass frame = mechanical (macroscopic, including macroscopic kinetic and internal potential energy) energy + thermal (microscopic kinetic) energy? That is what I meant and, when I wrote my first comment, thought you meant, by "mechanical" and "thermal" energy. (I used "heat", non-precisely, in parenthesis after "thermal" in my second comment to try to indicate the meaning of "thermal" just because you seemed, in your reply to my first comment, to think my "thermal energy" meant the total internal energy of the system, which of course it didn't.) My comment that the atmosphere of the earth would not be a system in equilibrium under my stated conditions, according to your definition of "thermodynamic equilibrium state", follows from your definition of that in the first sentence after the second bold subheading "First Law of Thermodynamics" in your article. I agreed, in the last sentence of the first paragraph of my second comment (this is my third comment) with your statements 3 and 4, of 5 total, in your reply to Khashishi ("and" in 3 should be "which"). Two other specific problems are stated in the second and last paragraph of my second comment.

  11. Chestermiller says:
    fox26

    Chet,
    I didn't read your article before entering my post, just read your 5 listed points in reply to Khashishi, but a little while ago I did look at its first part, and found, of course, that what I called "thermal energy", q, is not, despite what you said, what you called the "internal energy", which according to your introduction includes both what I called thermal (heat) energy, but also mechanical energy, as it usually is meant to include.

    Not so. Internal energy is a physical property of a material (independent of process path, heat and work), and heat and work depend on process path.
    Also I found that you defined "equilibrium" so that even the atmosphere of earth at a uniform temperature and completely still would not be in equilibrium, because of the pressure variation with altitude.

    Not correct. The atmosphere at a uniform temperature and completely still would be in equilibrium even with pressure variation. The form of the first law equation I gave, for simplicity, omitted the change in potential energy of the system.

    The situation that I was concerned with is stated in the first sentence of my previous post. For my last statement of what your 5 points implied, instead of "entropy", I should have had "the integral of dq/T". The main point where I disagreed with you, apparently, is in the definition of "equilibrium", and so of "state" of the system. About the only thing you would consider to be a system in equilibrium is a sealed container with a gas, absolutely still, at uniform pressure and temperature, floating in space in free fall, with the state of the system, for a given composition and amount of gas, specified completely by its temperature and pressure, as DrDu said. Then its entropy, given the gas, is determined by that temperature and pressure, and I am willing to believe that Clausius did, by calculating many examples, almost show, except for paths involving such things as mechanical shock excitation of the gas, what you claimed he did show, for such a system.

    However, defining entropy from just changes in entropy isn't possible; a starting point whose entropy is known is necessary. Can this be, say, empty space, with zero entropy (classically)? Also, if the second law, that the entropy of a closed system (by this I, and most other people, mean what you mean by an "isolated system") never (except extremely rarely, for macroscopic systems) decreases, is to have universal applicability, it must be possible to define "the entropy" of any system, even ones far from equilibrium, in your or more general senses of "equilibrium". How can this be done? In particular, why the entropy of the entire universe, or very large essentially closed portions of it, always increases, or at least never decreases, is now a fairly hot topic. Do you think this is a meaningful question?

    I still don't understand what you are asking or saying. Why don't you define a specific problem that we can both solve together? Define a problem that you believe would illustrate what you are asking. Otherwise, I don't think I can help you, and we will just have to agree to disagree.

  12. fox26 says:
    Chestermiller

    I stated very clearly in the article that q represents the heat flowing into the system across its boundary, from the surroundings to the system. The T in the equation is the temperature at this boundary.

    What you are calling the thermal energy of the system, I would refer to as its internal energy. But, the internal energy is not what appears in the definition of the entropy change.

    I don't understand what you are trying to say here. Maybe it would help to give a specific focus problem to illustrate what you are asking.

    Chet,
    I didn't read your article before entering my post, just read your 5 listed points in reply to Khashishi, but a little while ago I did look at its first part, and found, of course, that what I called "thermal energy", q, is not, despite what you said, what you called the "internal energy", which according to your introduction includes both what I called thermal (heat) energy, but also mechanical energy, as it usually is meant to include. Also I found that you defined "equilibrium" so that even the atmosphere of earth at a uniform temperature and completely still would not be in equilibrium, because of the pressure variation with altitude. The situation that I was concerned with is stated in the first sentence of my previous post. For my last statement of what your 5 points implied, instead of "entropy", I should have had "the integral of dq/T". The main point where I disagreed with you, apparently, is in the definition of "equilibrium", and so of "state" of the system. About the only thing you would consider to be a system in equilibrium is a sealed container with a gas, absolutely still, at uniform pressure and temperature, floating in space in free fall, with the state of the system, for a given composition and amount of gas, specified completely by its temperature and pressure, as DrDu said. Then its entropy, given the gas, is determined by that temperature and pressure, and I am willing to believe that Clausius did, by calculating many examples, almost show, except for paths involving such things as mechanical shock excitation of the gas, what you claimed he did show, for such a system.

    However, defining entropy from just changes in entropy isn't possible; a starting point whose entropy is known is necessary. Can this be, say, empty space, with zero entropy (classically)? Also, if the second law, that the entropy of a closed system (by this I, and most other people, mean what you mean by an "isolated system") never (except extremely rarely, for macroscopic systems) decreases, is to have universal applicability, it must be possible to define "the entropy" of any system, even ones far from equilibrium, in your or more general senses of "equilibrium". How can this be done? In particular, why the entropy of the entire universe, or very large essentially closed portions of it, always increases, or at least never decreases, is now a fairly hot topic. Do you think this is a meaningful question?

  13. Chestermiller says:
    fox26

    In "dq/T", does "q" stand for, as it did in my statistical mechanics courses, the thermal energy of the system in question, call it "SYS"?

    I stated very clearly in the article that q represents the heat flowing into the system across its boundary, from the surroundings to the system. The T in the equation is the temperature at this boundary.

    What you are calling the thermal energy of the system, I would refer to as its internal energy. But, the internal energy is not what appears in the definition of the entropy change.

    If so, then dq/T is dS, the change in entropy of SYS, so in, for example, a process such as slow compression of a gas in a cylinder by a piston, which we will call "SYS", which would be a reversible process, the environment external to SYS, which supplies the mechanical energy for compression, can have zero entropy change, so dq and so dS must be zero (assuming T>0), otherwise the entropy change of SYS together with the environment would be non-zero, so positive–it could hardly be negative–so the process wouldn't be reversible. The point of this is that you said that all reversible paths between the initial and final equilibrium states of a system give exactly the same (maximum) value of the integral of dq/T, which is the total entropy change of SYS together with the environment in our example, so all other paths between the initial and final states must give less than or equal to zero entropy change, and a change less than zero would violate the second law, so all paths must give zero entropy change, so no paths which increase the entropy can exist, which is obviously false. Was there a typo in both 3. and 4. in your article, and it should have been (minimum)? The only other likely possibility that I can think of right now is that your dq = my -dq.

    I don't understand what you are trying to say here.

  14. Chestermiller says:
    Tahira Firdous

    You have taken internal pressure times change in volume in work equation with a positive convention. I am learning from YouTube lectures they have taken work as external pressure times change in volume with a negative convention; in this way work done on the system becomes positive, but J M Smith takes work done by the system positive. So can you please explain to me some logical reason behind these conventions, and also about internal and external pressures in the work equation?
    Thanks

    Some people take work done on the system by the surroundings as positive and some people take work done by the system on the surroundings as positive. Of course, this results in different signs for the work term in the expression of the first law. In engineering, we take work done by the system on the surroundings as positive. Chemists often take work done on the system by the surroundings as positive.
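The two conventions differ only in the sign attached to the work term; the physics is identical. A short sketch contrasting them for a reversible isothermal compression of an ideal gas (illustrative values, not taken from the thread):

```python
import math

# Sign conventions for work in the first law, illustrated with a
# reversible isothermal compression of an ideal gas.
# Engineering convention:  dU = Q - W,  where W = work done BY the system = +∫p dV
# Chemistry convention:    dU = Q + W', where W' = work done ON the system = -∫p dV
R, n, T = 8.314, 1.0, 300.0    # J/(mol K); 1 mol; 300 K
V1, V2 = 2.0e-3, 1.0e-3        # m^3: the volume is halved

# For a reversible isothermal ideal-gas process, ∫p dV = nRT ln(V2/V1).
W_by_system = n * R * T * math.log(V2 / V1)  # negative on compression
W_on_system = -W_by_system                   # positive on compression

# Both conventions give the same dU. Isothermal ideal gas means dU = 0,
# so here Q = W_by_system < 0: heat flows out of the gas as it is compressed.
print(f"W_by_system = {W_by_system:.1f} J, W_on_system = {W_on_system:.1f} J")
```

Either convention is fine as long as the corresponding form of the first law is used consistently with it.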

  15. fox26 says:
    Chestermiller

    That's your opinion.

    Did you not read the article? I made it perfectly clear that:

    1. Entropy is a function of state
    2. There are an infinite number of process paths between the initial and final equilibrium states of the system
    3. The integral of dq/T over all these possible paths has a maximum value, and is thus a function of state
    4. All reversible paths between the initial and final equilibrium states of the system give exactly the same (maximum) value of the integral, so you don't need to evaluate all possible paths
    5. To get the change in entropy between the initial and final equilibrium states of the system, one needs only to conceive of a single convenient reversible path between the two states and integrate dq/T for that path.
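Point 5 can be made concrete. A minimal sketch, assuming a monatomic ideal gas (Cv = 3R/2) and choosing as the "single convenient reversible path" a reversible isochoric heating step followed by a reversible isothermal expansion step (these assumptions are the editor's, not from the article):

```python
import math

# Entropy change of an ideal gas between two equilibrium states,
# computed by integrating dq_rev/T along one convenient reversible path.
R = 8.314        # J/(mol K), molar gas constant
Cv = 1.5 * R     # molar heat capacity at constant volume, monatomic ideal gas

def delta_S(n, T1, V1, T2, V2):
    """Two-step reversible path from (T1, V1) to (T2, V2).

    Step 1, constant V: dq_rev = n*Cv*dT, so ∫dq_rev/T = n*Cv*ln(T2/T1).
    Step 2, constant T: dq_rev = p dV = n*R*T*dV/V, so ∫dq_rev/T = n*R*ln(V2/V1).
    """
    return n * Cv * math.log(T2 / T1) + n * R * math.log(V2 / V1)

# Example: 1 mol heated from 300 K to 600 K while its volume doubles.
dS = delta_S(n=1.0, T1=300.0, V1=1.0, T2=600.0, V2=2.0)
print(f"dS = {dS:.2f} J/K")
```

Because entropy is a state function (point 1), any other reversible path between the same two states gives the same number.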

    Try using the statistical mechanical definition of entropy to calculate the change in entropy of a real gas between two thermodynamic equilibrium states.

    Chet

    In "dq/T", does "q" stand for, as it did in my statistical mechanics courses, the thermal energy of the system in question, call it "SYS"? If so, then dq/T is dS, the change in entropy of SYS, so in, for example, a process such as slow compression of a gas in a cylinder by a piston, which we will call "SYS", which would be a reversible process, the environment external to SYS, which supplies the mechanical energy for compression, can have zero entropy change, so dq and so dS must be zero (assuming T>0), otherwise the entropy change of SYS together with the environment would be non-zero, so positive–it could hardly be negative–so the process wouldn't be reversible. The point of this is that you said that all reversible paths between the initial and final equilibrium states of a system give exactly the same (maximum) value of the integral of dq/T, which is the total entropy change of SYS together with the environment in our example, so all other paths between the initial and final states must give less than or equal to zero entropy change, and a change less than zero would violate the second law, so all paths must give zero entropy change, so no paths which increase the entropy can exist, which is obviously false. Was there a typo in both 3. and 4. in your article, and it should have been (minimum)? The only other likely possibility that I can think of right now is that your dq = my -dq.


  17. Tahira Firdous says:

    You have taken internal pressure times change in volume in work equation with a positive convention. I am learning from YouTube lectures they have taken work as external pressure times change in volume with a negative convention; in this way work done on the system becomes positive, but J M Smith takes work done by the system positive. So can you please explain to me some logical reason behind these conventions, and also about internal and external pressures in the work equation?
    Thanks

  18. Chestermiller says:

    [QUOTE=”Khashishi, post: 5184825, member: 331471″]This is a good explanation, but personally I feel like the classical description of thermodynamics which defines entropy as some maximum value of an integral should be deprecated in light of our increasing knowledge of physics. The statistical mechanics definition of entropy is far superior.[/quote]
    That’s your opinion.
    [quote]

    The classical definition is inherently confusing because entropy is a state variable, yet it is defined in terms of paths. Each path gives you a different integral. Experimentally, how do you determine what the maximum is out of an infinite number of possible paths? If the system is opened up in a way such that more paths become available, can the entropy increase?[/quote]
    Did you not read the article? I made it perfectly clear that:
    [LIST=1]
    [*]Entropy is a function of state
    [*]There are an infinite number of process paths between the initial and final equilibrium states of the system
    [*]The integral of dq/T over all these possible paths has a maximum value, and is thus a function of state
    [*][B]All reversible paths[/B] between the initial and final equilibrium states of the system give exactly the same (maximum) value of the integral, so you don’t need to evaluate all possible paths
    [*]To get the change in entropy between the initial and final equilibrium states of the system, one needs only to conceive of a single convenient reversible path between the two states and integrate dq/T for that path.
    [/LIST]
    Try using the statistical mechanical definition of entropy to calculate the change in entropy of a [B]real[/B] gas between two thermodynamic equilibrium states.

    Chet

  19. Chestermiller says:

    [QUOTE=”Jano L., post: 5184772, member: 193673″]I think your article may be helpful to students, but it would be good to put some more disclaimers to places where it simplifies a lot.

    For example, you wrote
    [SIZE=4][I][/I]
    [I]The time variation of q˙(t) and w˙(t) between the initial and final states uniquely defines the so-called process path[/I]
    [I][/I]

    I think this is true for simple system whose thermodynamic state is determined by two numbers, say entropy and internal energy. But there may be more complicated situations, when one has magnetic and electric work in addition to volume work and then two numbers q˙ and w˙ are not sufficient to determine the path through the state space.[/SIZE][/QUOTE]
    Thanks Jano L.

    I toyed with the idea of mentioning that there are other forms of work that might need to be considered also, but in the end made the judgement call not to. You read my motivation for the article in some of my responses to the comments and in the article itself. I just wanted to include the bare minimum to give the students what they needed to do most of their homework problems. I felt that, if I made the article too long and comprehensive, they would stop reading before they had a chance to benefit from the article. There are many other things that I might have included as well, such as the more general form of the first law, which also includes changes in kinetic and potential energy of the system.

    I invite you to consider writing a supplement to my article in which you flesh things out more completely. Thanks for your comment.

    Chet

  20. Jano L. says:

    [QUOTE=”Khashishi, post: 5184825, member: 331471″]This is a good explanation, but personally I feel like the classical description of thermodynamics which defines entropy as some maximum value of an integral should be deprecated in light of our increasing knowledge of physics. The statistical mechanics definition of entropy is far superior. The classical definition is inherently confusing because entropy is a state variable, yet it is defined in terms of paths. Each path gives you a different integral. Experimentally, how do you determine what the maximum is out of an infinite number of possible paths? If the system is opened up in a way such that more paths become available, can the entropy increase?

    The statistical mechanics definition (and the related information theory definition) makes it clear why it is a state variable, because it only depends on the states. The paths are irrelevant.[/QUOTE]

    There is no one entropy to be defined by some optimal definition. In thermodynamics, (Clausius) entropy is defined through integrals. There is nothing confusing about it; to understand entropy in thermodynamics, the paths and integrals are necessary. It is hardly a disadvantage of the definition, since processes and integrals are very important things to understand while learning thermodynamics.

    In statistical physics, there are several concepts that are also called entropy, but neither of these is the same concept as Clausius entropy. *Sometimes* the statistical concept has similar functional dependence on the macroscopic variables to thermodynamic entropy. But it is not the same concept as thermodynamic entropy. Any use of statistical physics for explanation of thermodynamics is based on the *assumption* that statistical physics applies to thermodynamic systems. It does not replace thermodynamics in any way.
