What is the thermodynamic temperature scale?

In summary, thermodynamic temperature is a real, positive quantity that relates to the energy of a system. The ideal gas law can be used to determine temperature in both theory and practice, but it is not the only method. The thermodynamic temperature scale is independent of any particular physical substance; it is based on the efficiency of a Carnot engine and fixed by the triple point of water. Negative temperatures can also exist in certain systems, such as a paramagnetic solid in a magnetic field.
  • #36
Gerenuk said:
Just describe one better method for measuring temperature: one which uses neither ideal gases nor a Carnot engine. Carnot engines, in particular, don't exist.

What's wrong with the mercury thermometer I have in my lab? It seems to work well enough... I also have one of these:

http://www.fishersci.com/wps/portal/PRODUCTDETAIL?prodcutdetail=%27prod%27&productId=1585059&catalogId=29104&matchedCatNo=150782C||150778||150781&pos=5&catCode=RE_SC&endecaSearchQuery=%23store%3DScientific%23N%3D0%23rpp%3D15&fromCat=yes&keepSessionSearchOutPut=true&fromSearch=Y&searchKey=digital||thermometer||thermometers&highlightProductsItemsFlag=Y

which I think uses a thermocouple, and it seems to work ok as well. At least, the two thermometers agree with each other when they are both in an incubator.
 
  • #37
Mr.Miyagi said:
Thank you for the counter-examples. But when you say the entropy and the energy are known, are they not related? Is that the problem?

I'll try to read up a bit more.

Well, for the case of a laser, the light has a very low entropy. But the energy can be fairly arbitrary: just change the intensity. There is no unique way to assign a value of temperature.

This is a good article:

How hot is radiation?
Christopher Essex, Dallas C. Kennedy, and R. Stephen Berry
Am. J. Phys. 71, 969 (2003)
 
  • #38
"How hot is radiation? "
Yes, can I have a copy?
 
  • #39
Andy Resnick said:
What's wrong with the mercury thermometer I have in my lab? It seems to work well enough... I also have one of these:
Because you require an ideal gas experiment to calibrate those thermometers in the first place. So in the end the only useful definition is again the ideal gas definition plus the zeroth law.
 
  • #40
If I had asked about the time scale and clocks instead, would the discussion be similar?
 
  • #41
quantum123 said:
If I had asked about the time scale and clocks instead, would the discussion be similar?
No, that is better settled. Time is defined by a specified atomic process (whichever is the current best candidate...). Length is time multiplied by a fixed velocity.
 
  • #42
Gerenuk said:
Because you require an ideal gas experiment to calibrate those thermometers in the first place.

Gas thermometers are only used for some parts of the temperature scale. The lowest part is defined using the melting curve of He-3. For some parts, the melting/freezing points of various metals as well as triple points are used as fixed points to calibrate a platinum resistor. For very high temperatures, fixed points are used together with radiation thermometry.
There are also other primary thermometers ("primary" means they do not need to be calibrated) that, although not officially part of ITS-90, are often used; the most common are nuclear orientation thermometers (used below about 0.3 K; I use one in my lab) and various types of noise thermometers (as well as Coulomb blockade thermometers, etc.).
 
  • #43
To measure absolute temperature experimentally, one can start from the fact that absolute temperature is an integrating factor for heat, dS=dQ/T, at least for reversible changes, where T is a function of the empirical temperature theta (e.g. the height of mercury in a thermometer). Details can be found, e.g., in the book by Buchdahl, "Concepts of Classical Thermodynamics", Cambridge UP, 2009, a very careful axiomatic treatise on thermodynamics. Experimentally, one basically has to measure heat capacities dU/d theta and compressibilities.
 
  • #44
Gerenuk said:
Because you require an ideal gas experiment to calibrate those thermometers in the first place. So in the end again the only definition that's useful is an ideal gas definition and the zeroth law.

Oh- calibration. I thought you said *measure*.

I would calibrate my thermometers with a triple-point device. Water, as you know, is not an ideal gas at the triple point.

The ideal gas scale of temperature is only used to set the energy equivalent of a change of temperature of 1 degree. To that extent, it is an arbitrary scale.
 
  • #45
f95toli said:
Gas thermometers are only used for some parts of the temperature scale. The lowest temperature part is defined using the melting curve of He3. For some parts the melting/freezing points of various metals as well as tripple points are used as fixed points to calibrate a platinum resistor. For very high temperatures fixed points are used together with radiation thermometry.
Thanks, I didn't know that.
OK, but to use the melting curve, you need to know its theoretical behaviour, right? In fact, you always need to know the theoretical behaviour, which you will only get from the microscopic statmech definition. So in this case the statmech definition is used consistently?!
And triple points are only single points; between them you still don't know how to assign good temperature values.

f95toli said:
There are also other primary (primary means that it does not need to be calibrated) thermometers that -although not officially parts of ITS-90- are often used; the most common being nuclear orientation thermometers (used below about 0.3K, I use one in my lab) and various types of noise thermometers (as well as Coloumb blockade thermometers etc).
Can you quickly name the theoretical concepts behind all these methods? I assume they use the statmech definition to predict the temperature and compare it to the measured curves?!

Andy Resnick said:
The ideal gas scale of temperature is only used to set the energy equivalent of a change of temperature of 1 degree. To that extent, it is an arbitrary scale.
Of course. And that's the topic here. So in the end the ideal gas scale is applied all the time! That little triple-point scaling is just a detail. The ideal gas definition is necessary to set all the marks on your thermometers. The first degree might be arbitrary, but all the others are predicted by the ideal gas.

DrDu said:
To measure absolute temperature experimentally, one can start from the fact that absolute temperature is an integrating factor for heat, dS=dQ/T, at least for reversible changes
I think it's highly impractical or even impossible to find a real reversible engine that transfers heat. But apart from that, I agree. If perfect, cyclically operating engines exist which are also able to completely reverse their operation, then temperature ratios can be defined.
 
  • #46
Gerenuk said:
<snip>
Of course. And that's the topic here. So in the end the ideal gas scale is applied all the time! That little triple point scaling is just a detail. This ideal gas definition is necessary to set all the marks on your thermometers. The first degree might be arbitrary, but all others are predicted by the ideal gas.

<snip>

That's not at all what I meant. The choice of an ideal-gas scale sets the energy equivalent of a 1-degree *change*. In fact, an ideal gas is defined by the choice of temperature scale, not the other way around.
 
  • #47
Andy Resnick said:
That's not at all what I meant. The choice of an ideal-gas scale sets the energy equivalent of a 1-degree *change*. In fact, an ideal gas is defined by the choice of temperature scale, not the other way around.
No. An ideal gas obeys
[tex]pV\propto T[/tex]
so it's a straight line in the graph. No need for personal definitions; it's a physical law by itself.
If you define temperature any way you like, the ideal gases won't agree with you: you won't get a straight line in the graph.
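The "straight line" claim is easy to visualize. Here is a minimal sketch; note the readings are generated from the law itself, so it only illustrates the linearity of pV against T (it proves nothing about real gases), and the sample size and volume are assumed values:

```python
import numpy as np

R = 8.314   # J/(mol K), molar gas constant
n = 1.0     # mol, assumed sample size for this illustration

# Hypothetical constant-volume gas-thermometer readings
T = np.array([200.0, 250.0, 300.0, 350.0, 400.0])  # K
V = 0.01                                           # m^3, assumed fixed volume
p = n * R * T / V                                  # Pa, from pV = nRT

# pV plotted against T is a straight line through the origin with slope nR
slope, intercept = np.polyfit(T, p * V, 1)
print(slope, intercept)   # slope ≈ nR = 8.314, intercept ≈ 0
```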
 
  • #48
Sigh. It's an unfortunate state of affairs when a non-existent idealization is considered the correct basis of reality. Whither viscosity?

All I am willing to do now is recommend you read up on the foundations of thermodynamics. I haven't read Buchdahl's book (recommended above), but the 'peek inside' seems to indicate it's decent. Personally, I recommend Truesdell's "Rational Thermodynamics".
 
  • #49
Gerenuk said:
Can you quickly name the theoretical concepts which stand behind all these methods? I assume they used the statmech definition to predict temperature and compare it to the measure curves?!

The basic idea behind nuclear orientation thermometry is that the pattern of gamma rays emitted from nuclei depends on the alignment of the nuclei (well, the spins). This is implemented using a Co-60 crystal which is bolted to whatever you want to measure the temperature of. Co is a ferromagnet, meaning the spins tend to line up along the intrinsic B-field at low temperatures.
At 0 K the emission pattern would ideally look like a d-wave, meaning there are nodes in the pattern, but as the temperature increases the spins become more "randomized", so the emission from an ensemble becomes uniform.
The temperature is measured using a gamma-ray detector positioned in a node direction; you get the temperature by calculating the ratio N(T)/N(high T), where N means number of counts and "high T" means any temperature high enough to have randomized the spins (in practice anything above about 600 mK). This ratio plus a bunch of constants is all you need to calculate the absolute temperature. It is a very reliable method, and if you buy a thermometer calibrated for temperatures below about 300 mK, this is how it was calibrated. Many low-temperature labs (including mine) have nuclear orientation setups (very handy for troubleshooting, since one can rule out problems with thermometry).
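To illustrate the underlying principle only (this is a deliberately simplified two-level sketch, not the real Co-60 anisotropy formula, which involves the full hyperfine level scheme; the energy splitting below is an assumed order of magnitude, not a measured value):

```python
import numpy as np

k_B = 1.380649e-23   # J/K, Boltzmann constant

# Assumed two-level hyperfine splitting, chosen only to put the
# crossover in the millikelvin range, as in the thread's discussion.
delta_E = 1.0e-25    # J

def polarization(T):
    """Boltzmann spin polarization of a two-level system at temperature T."""
    return np.tanh(delta_E / (2 * k_B * T))

# Alignment is strong at millikelvin temperatures and washes out above
for T in [0.01, 0.05, 0.3, 1.0]:   # kelvin
    print(T, polarization(T))
```

The point is just the trend: the Boltzmann factor makes the spin alignment (and hence the gamma-ray anisotropy) die off as T rises, which is why the method only works below a few hundred millikelvin.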

The simplest form of noise thermometry is simply to measure the thermal noise across a resistor in a known bandwidth. This is not a very good method, but it works in principle.
Most "real" noise thermometers are based on the "escape from a potential well" concept, i.e. what you basically measure is the escape probability from a potential, which can then be related to the absolute temperature using the Boltzmann factor (or the FD or BE factors).
The tricky part is of course that you always measure an effective temperature, i.e. the thermodynamic temperature plus any other sources of noise. Hence, the temperature measured using, e.g., an electrical noise thermometer will always be higher than the phonon temperature (and unless you are careful, MUCH higher; I've seen temperatures as high as 1 K in a system thermalized to a 30 mK bath).
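The "resistor in a known bandwidth" version above rests on the Johnson-Nyquist relation ⟨V²⟩ = 4·k_B·T·R·Δf. A minimal sketch, with assumed resistor and bandwidth values (a forward check only, not a model of a real measurement with amplifier noise):

```python
import numpy as np

k_B = 1.380649e-23  # J/K, Boltzmann constant

def noise_temperature(v_rms, R, bandwidth):
    """Infer T from Johnson-Nyquist noise: <V^2> = 4 k_B T R * bandwidth."""
    return v_rms**2 / (4 * k_B * R * bandwidth)

# Assumed values: a 1 kOhm resistor at 300 K measured in a 10 kHz bandwidth
R, T_true, bw = 1e3, 300.0, 1e4
v_rms = np.sqrt(4 * k_B * T_true * R * bw)   # ~0.4 microvolt of thermal noise
print(noise_temperature(v_rms, R, bw))       # recovers 300 K
```

In a real setup the amplifier adds its own noise on top of ⟨V²⟩, which is exactly the "effective temperature" problem described above.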
 
  • #50
Thanks for the details. I'll save that contribution to my files :)

So finally, both methods rely on the statmech theory and the prediction by the Boltzmann distribution?
 
  • #51
I finally found some time to go through my books: a more practical procedure for determining the absolute temperature than trying to run a Carnot machine as reversibly as possible is described in Peter T. Landsberg, Thermodynamics and Statistical Mechanics, Dover, NY, 1990, Chapter 6.1, "Empirical and absolute temperature scales". With T being the absolute temperature and t an empirical temperature, the important result (his formula 6.2) is
[tex]\ln\frac{T_1}{T_0}=\int_{t_0}^{t_1}\frac{(\partial p/\partial t)_V}{p+(\partial U/\partial V)_t}\,dt[/tex]
The right side depends only on the empirical temperature (e.g. the length of a mercury filament in a thermometer), but neither on the absolute temperature nor on the entropy.
With the reference temperature T_0 fixed arbitrarily, the absolute temperature corresponding to any temperature t may be calculated once the two derivatives have been measured over the intermediate temperature range.
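As a symbolic sanity check of that formula (a sketch assuming the simplest possible case: the empirical scale t already coincides with the ideal gas scale, so p = nRt/V and U depends on t only), the integrand collapses to 1/t and the integral returns ln(t1/t0), i.e. T is proportional to t:

```python
import sympy as sp

t, V, n, R = sp.symbols('t V n R', positive=True)
t0, t1 = sp.symbols('t_0 t_1', positive=True)

# Ideal gas on its own scale: p = nRt/V, and (dU/dV)_t = 0
p = n * R * t / V
dUdV = 0

# Landsberg's integrand: (dp/dt)_V / (p + (dU/dV)_t)
integrand = sp.simplify(sp.diff(p, t) / (p + dUdV))
print(integrand)                              # 1/t
print(sp.integrate(integrand, (t, t0, t1)))   # log(t_1) - log(t_0), so T ∝ t
```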
 
  • #52
I have found a similar equation. One problem is that you need to be able to go from the reference state to the final state with a reversible engine (i.e. at constant entropy). That might be troublesome?!

Also note that the derivative in the denominator should be at constant pressure. So you cannot calculate it while running the Carnot engine. To make a change at constant pressure you would destroy the reversibility. Can one even use the definition in that case?
 
  • #53
No, you don't need a reversible engine!
You only have to measure the quantities in the integral over the range of temperature you are integrating over. But these are all standard calorimetric measurements. More importantly, all functions in the integral depend only on the state and are not path-dependent.
E.g., to measure the derivative dU/dV at fixed t, you could measure the heat and work needed to expand some substance at constant temperature. While heat and work may each depend on the path taken, their sum will not.
PS: the derivative in the denominator is at fixed temperature t, not fixed pressure p.
 
  • #54
The whole integral depends on path and has to be taken at constant entropy (strictly reversible process). The derivative dU/dV needs to be taken at constant pressure and no other variable. I've done the calculation myself. Maybe you can check the book again.
Meanwhile I try to figure out if your equation is also correct, but I highly doubt it.

EDIT: I have 5 equations of this type and I found you are indeed correct.
 
  • #55
But note that the integral has to be taken at constant volume only, i.e. your change of state variables necessarily needs to go along constant volume.
So you can only find temperature ratios between states of equal volume.

Also, dU/dV is taken at constant temperature, which does not lie on the constant-volume curve along which the integral has to be taken. So I'm not sure how to do this in practice?!

EDIT: Actually I agree, that there isn't really a path of the integral. But the derivatives which are taken along different routes (constant volume or constant temperature) have to be taken care of. Is that doable?
 
  • #56
I once read something about the experiments at NIST (if I remember correctly) which led to the definition of the ITS-90 temperature scale. It seems to have been a fascinatingly complex experiment. Lamentably, such experiments do not get the same attention as, e.g., the largest short-circuit experiment at CERN.
I am not an experimentalist, but I don't see any problems in principle with evaluating the integral we were talking about experimentally. E.g., for a (real) gas, the pressure and the derivatives dp/dt and dU/dV vary smoothly with V and t. So you could measure them on a grid of points in V and t, interpolate between these points, numerically integrate the formula, and you are done.
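The grid-and-integrate idea can be sketched numerically. A rough illustration only (not how ITS-90 actually works): assume a van der Waals gas with roughly nitrogen-like constants, take the textbook result (∂U/∂V)_t = an²/V² on faith, difference p on a grid, and integrate along a constant-volume path. The integrand reduces to 1/t, so the integral should return ln(T1/T0):

```python
import numpy as np

n, R = 1.0, 8.314          # mol, J/(mol K)
a, b = 0.1358, 3.86e-5     # assumed van der Waals constants (roughly N2, SI)
V = 0.01                   # m^3; "measurements" taken at this fixed volume

def p(t):
    """van der Waals pressure as a function of empirical temperature t."""
    return n * R * t / (V - n * b) - a * n**2 / V**2

dUdV = a * n**2 / V**2     # textbook (dU/dV)_t for a van der Waals gas

# Grid of "measured" points: finite-difference (dp/dt)_V, then trapezoid rule
t = np.linspace(200.0, 400.0, 2001)
dpdt = np.gradient(p(t), t)
f = dpdt / (p(t) + dUdV)                        # Landsberg integrand
lnT_ratio = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
print(lnT_ratio, np.log(400.0 / 200.0))         # both ≈ 0.693
```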
 
  • #57
DrDu said:
I am not an experimentalist, but I don't see any principal problems in evaluating the integral we were talking about experimentally. E.g. for a (real) gas, the pressure and the derivatives dp/dt and dU/dV vary smoothly with V and t. So you could measure them on a grid of points V and t, and interpolate between these points. Then you numerically integrate your formula and you are done.
Interesting. Actually you could make constant t steps to calculate dU/dV followed by constant V steps to calculate dp/dt. For the first kind, the variable t doesn't change, which does the trick.

As I said I have different such equations (including the one I wanted first), but I wasn't aware of how practical your particular equation is. Thanks for pointing that out.

Now I can go back to the derivation and see where it comes from. Because temperature seems to pop out of nothing for a completely arbitrary system!
 
  • #58
As I already said, it is a consequence of the fact that the inverse of absolute temperature is defined so as to be an integrating factor for dQ in a reversible process, i.e. dS=dQ / T(t). I'll sketch the derivation: For a reversible process, dQ=dU+pdV, now express [tex]dU=(\partial U/\partial t)_V dt +(\partial U/\partial V)_t dV [/tex]. Use also that [tex]\partial/\partial t=dT/dt \partial /\partial T[/tex]. Hence dS can be written as dS=Adt+BdV. For dS to be a total differential, [tex]\partial A/\partial V=\partial B/\partial t[/tex]. Solve this condition for T(t).
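The exactness condition ∂A/∂V = ∂B/∂t in the sketch above can be checked symbolically for the simplest case. A sketch assuming an ideal gas with constant heat capacity (U = C_v T, so (∂U/∂V)_T = 0) and T already taken as the absolute scale:

```python
import sympy as sp

T, V, n, R, Cv = sp.symbols('T V n R C_v', positive=True)

# Ideal gas: p = nRT/V; then dS = dQ/T = (Cv/T) dT + (p/T) dV
p = n * R * T / V
A = Cv / T          # coefficient of dT
B = p / T           # coefficient of dV; equals nR/V, independent of T

# dS is an exact differential iff the mixed partials agree
print(sp.diff(A, V), sp.diff(B, T))   # both 0: 1/T is an integrating factor
```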
 
  • #59
I mean, that's all just letters. You could take this definition and apply it to a herd of bison: V being the number of bison, E being their total age. Then you can define any social dynamics between groups of them and calculate things like "temperature"!
Two interacting groups of bison will have the same parameter "temperature".

That's what I find surprising. It's all just maths with hardly any assumptions, and in fact no assumptions that relate it to any particular physics.
 
  • #60
The modern formulation of the second law, named after Caratheodory, says that not all states in the neighbourhood of a given state can be reached from it without the exchange of heat. To reach these states, the system has to lose heat. This shows that heat, or some function of it, should be a function of state, which we name entropy. Heat itself is not a function of state, whence we need an integrating factor. That it has to be a function of the empirical temperature is easy to show; the rest is mathematics. Both the books by Buchdahl and by Landsberg, which I already cited, elaborate this view in detail (although I even prefer the original article by Caratheodory). That the efficiency of a thermodynamic cycle is limited by the absolute temperatures of the reservoirs follows as a corollary.
However, from classical thermodynamics it is not clear why, for an ideal gas, T=pV/nR.
 
  • #61
DrDu said:
To reach these states, the system has to loose heat.
What is heat? The best definition I came up with is "heat is what is left over after you subtract all known energy contributions", i.e.
[tex]dQ=dU + pdV[/tex]

DrDu said:
This shows that heat or some function of it, should be a function of state, which we name entropy.
Are you sure? I think it doesn't yet show that heat is a one-dimensional variable of state. States could have heat flows between them which do not add up to zero around a cycle. Even a function of heat doesn't fix that. Just imagine some wacky system that has random transition heats between all pairs of states; no state variable is possible there.

DrDu said:
However, from classical thermodynamics it is not clear, why for an ideal gas T=pV/nR.
For that you need to restrict yourself to a particular system, of course.
 
  • #62
1. Yes, Caratheodory tried to avoid heat altogether in his work, and in his paper always used the difference between the differential of internal energy and the work instead.

2. Obviously heat itself is not a state variable, and I did say so, but some function of it, involving also temperature, namely dS=dQ/T, is the differential of a function of state. As for your wacky systems which perform random transitions between states: these don't exist in equilibrium thermodynamics.

I don't see what you are getting at. It is impossible to explain thermodynamics in five lines in a forum and at the same time close all the loopholes. For that purpose there still exist these old-fashioned things called books. I think I already provided the relevant references.
 
  • #63
DrDu said:
2. Obviously heat itself is not a state variable and I did say so, but some function of it, involving also temperature, namely dS=dQ/T, is the differential of a function of state. As concerning your whacky systems, which performs random transitions between states, these don't exist in equilibrium thermodynamics.
Claiming they don't exist is easily said. You have to give a full proof that equilibrium thermodynamics requires some function of heat to be a variable of state. In general there is no mathematical reason why a function of heat should be a variable of state. You have to make many additional assumptions about the dynamics to deduce that.

DrDu said:
For that purpose there still exist these old fashioned things called books. I think I already provided the relevant references.
You are just not aware that some things you believe are special cases derived by some clever people. You should think for yourself and not just memorize book quotes.
 
  • #64
Gerenuk said:
Claiming they don't exist is easily said.

It also happens to be correct ...

You have to make a full proof that equilibrium thermodynamics requires some function of heat to be a variable of state. I general there is no mathematical reason why a function of heat should be a variable of state. You have to make many additional assumptions about the dynamics to deduce that.

No, this is taken care of by the laws of thermodynamics.

0th law: existence of relative quantity temperature, which describes a thermodynamic state variable that is equal for any systems in thermodynamic equilibrium

3rd law: demonstrates that there must exist an absolute scale for the thermodynamic state variable temperature.

1st law: Conservation of the thermodynamic state variable "energy" --> since work is not a state variable, this implies the existence of another quantity that also has units of energy and accounts for "transferable energy not expressible as work". That is what we call "heat" in thermodynamics (as you basically said in an earlier post).

2nd law: The entropy of the universe must increase for any spontaneous process in nature. Many different equivalent statements of the 2nd law can be made, one of which is: "heat cannot spontaneously flow from a body at low temperature into a body at high temperature". Since temperature is a state function, and the 2nd law is not path-dependent, this means that there must be a state function that is expressible as a function of heat.

Thus, these fundamental laws of nature provide support for DrDu's statements.

You are just not aware that some things you believe are just special cases derived by some clever people. You should make your own thoughts and not memorize book quotes.

Hmmm ... you may not care, but I for one have found DrDu's statements to be much more well-formed, cogent, and convincing than any of your own arguments. You need to learn what is in the books before you can critically analyze it. There is no general conspiracy to only teach or write about "special cases" in scientific texts ... most authors are quite good about stating their assumptions up front. As far as I can tell, the derivations DrDu has provided are *general*, and require only the assumptions he noted in his posts (i.e. reversibility).
 
  • #65
What is heat? The best definition I came up with is "heat is what is left over after you subtract all known energy contributions",

Yes, that is correct. Thermodynamics has to be understood as a phenomenological description within which you cannot gain a deeper understanding. It merely postulates the existence of heat, entropy, temperature etc.

The only way to really understand this topic is to adopt the information-theoretical point of view. I.e., you have a physical system that can only be described exactly by specifying an astronomical amount of information. You then want to describe the system in an effective way by keeping only a very limited amount of information. This can, of course, only be done by making some assumptions about the statistical behavior of the system (e.g. ergodicity, etc.).

The fact that this is the only correct way to understand the topic can readily be seen from Maxwell's Demon thought experiment.
 
  • #66
SpectraCat said:
1st law: Conservation of thermodynamic state variable "energy" --> since work is not a state variable, this implies the existence of another quantity that also has units of energy, and accounts for "transferrable energy not expressible as work". That is what we call "heat" in thermodynamics (as you basically said in an earlier post).
Again, there are many assumptions here that might be true for the most ordinary thermodynamic gas, but they are not part of the general definition. Equal energy doesn't have to correspond to identical states. Just because there is work and energy, it doesn't yet follow that energy minus work is a variable of state. Special conditions are needed to ensure that. Or do you have a mathematical proof in your notes?

SpectraCat said:
2nd law: The entropy of the universe must increase for any spontaneous process in nature.
I suppose you have never tried to define "spontaneous"? And here you see why I claim that some people are stuck with formulations from books and do not really know what they mean.
The word "spontaneous" contains many different assumptions that make the 2nd law valid for only a very small class of homogeneous physical objects like gases or spin states.

SpectraCat said:
Hmmm ... you may not care, but I for one have found DrDu's statements to be much more well-formed, cogent, and convincing that any of your own arguments.
Maybe that's because whenever someone recites what you have seen in your own books, you find it well-formed. I could copy a sentence from Weinberg and you would honor me. But if something is not a sentence from an undergrad book, you might find it confusing.
For example, some people here have confirmed that the entropy of a system will, for sure, decrease in the very far future. That's not written in introductory undergrad books. If you believe that entropy always strictly increases, then you might not have put much thought into its origins.

SpectraCat said:
There is not a general conspiracy to only teach or write about "special cases" in scientific texts ... most authors are quite good about stating their assumptions out front.
That doesn't imply that most students are good at picking them up.

SpectraCat said:
As far as I can tell, the derivations DrDu has provided are *general*, and require only the assumptions he noted in his posts (i.e. reversibility).
Surely not. For example, the system might have additional parameters so that equal V and equal E do not have to correspond to identical systems. There are many hidden assumptions he is leaving out.
 
  • #67
Gerenuk said:
Again here are many assumptions that might be true for the most ordinary thermodynamic gas, but are not part of the general definition. Equal energy doesn't have to correspond to identical states. Just because there is work and energy, it doesn't follow that energy minus work is a variable of state, yet. Special conditions are needed to ensure that. Or do you have a mathematical proof in your notes?

I never said energy minus work is a variable of state ... that is clearly incorrect. Heat is not a state function, so there is no inconsistency in my statements. As far as I can see, there is only one "special condition" that I neglected to mention, which is that the "energy = work + heat" formulation requires that there be no exchange of mass (i.e. particles) with the surroundings. Otherwise, you would need to allow for changes in the energy due to the chemical potential, as well as those due to exchanges of work and heat.

I suppose you have never tried to define "spontaneous"? And here you see where I claim that some people are stuck with formulations from books, and do not really know what they mean.

A workable definition of "spontaneous" in the current context (i.e. thermodynamics of macroscopic systems) is: a spontaneous process is one that moves the system and surroundings closer to thermodynamic equilibrium. That is a completely general definition that is free of any assumptions outside of the zeroth and first laws of thermodynamics. If you have a different definition in this context, I would like to hear it.

I agree that in the context of the fluctuation theorem, there really is no good definition of spontaneous ... all possible changes in the system can be represented as fluctuations, with various weights representing the probability of observing such a change. However, the fluctuation theorem is also consistent with the macroscopic version of the second law that I mentioned earlier, because for any macroscopic system the integrated probability of entropy-increasing fluctuations is always higher than that of entropy-decreasing fluctuations. Thus the net evolution of an isolated system integrated over time will tend towards higher entropy. Furthermore, the probability of observing spontaneous entropy-decreasing processes decreases exponentially with the complexity of a system, so for macroscopic systems the probability of observing entropy-decreasing processes is negligible, leading to the common statement of the 2nd law I gave in my earlier post.

The word "spontaneous" contains many different assumptions that make the 2nd law valid for only a very small class of homogeneous physical objects like gases or spin states.

Wait, did you just claim that the 2nd law of thermodynamics is restricted only to a few "special cases"? That is certainly not a mainstream view, and requires some detailed support from you. I am unaware of any reputable scientific work that makes such a claim.

Maybe that's because whenever someone is reciting what you saw in your own books, then you find it well-formed. I can copy a sentence from Weinberg and you will honor me. But if something is not a sentence from an undergrad book you might find it confusing.

Ok, you seem to be deliberately trying to provoke those of us debating with you by using statements like the one above. What purpose does that serve? You have made many unsupported (and apparently unsupportable) statements, using the excuse that others "wouldn't understand your explanations". Please. Get off your high horse and support your positions ... I will let you know if I have questions, and I am sure others will do the same.

For example. some people here have confirmed that for example the entropy of a system for sure will decrease in the very far future.

Can you please provide a link to support that? I am aware of the Loschmidt paradox, which seems to imply that the entropy of the universe must have been higher in the past than it is now, but I have not seen any statements that say the entropy will definitely decrease in the future. Unless you are talking about the fluctuation theorem? If so, then I believe your statement above is not generally held to be correct for macroscopic systems.

Surely not. For example, the system might have additional parameters so that equal V and equal E do not have to correspond to identical systems. There are many hidden assumptions he is leaving out.

That statement is vague and unhelpful and ultimately unconvincing. To which one of DrDu's posts are you referring in the above statement? Please provide an example of how "the system might have additional parameters so that equal V and equal E do not have to correspond to identical systems", so I can understand what you are trying to say.
 
  • #68
Here is an explanation for some things you asked:
Tell me how to apply thermodynamics to the moon orbiting the earth! You might say it doesn't make sense since it is a different field of physics. I agree: it doesn't make sense, because the numerous presuppositions of thermodynamics don't apply to this physical system. Now can you say, mathematically rigorously, which ones? Have you ever thought about such a question which is not in introductory books?
That's what I mean by saying you should start thinking for yourself. I do not need to hear about the Loschmidt paradox; I'd rather hear the "SpectraCat idea", which is your own argumentation from known physical laws.
I think you agreed that increasing entropy is only a very very likely probabilistic statement? So a sharp decrease in entropy is not impossible and will occur at some point. In fact it is bound to:
https://www.physicsforums.com/showthread.php?t=387938
Maybe the people who answered there can explain it better. No one else objected in that thread, because it is trivially known to physicists that at some unimaginably distant future time all the gas molecules will gather in one corner.
No good thermodynamics book claims that the laws are applicable to just anything. It is only the popular press that transfigures the second law and says that "disorder of any kind must increase". That's why thermodynamics is valid for only very few special problems with enough homogeneity and randomness. Luckily, engineering consists mostly of these special cases (gases, ...). Think back to the orbiting moon.
Or do you know a derivation of [itex]S=-\sum_i p_i\ln p_i[/itex]? You apply entropy to a system where the microscopic laws are already known. Therefore it is not admissible to postulate a new law, like this definition or even the second law of TD; it has to be derived with logic. There is a quite simple derivation, which also points out when you can apply this definition.
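As a side note on that formula (written with the conventional minus sign and k_B set to 1), a tiny sketch of its behavior: the uniform distribution maximizes it, and a fully known state has zero entropy.

```python
import numpy as np

def gibbs_entropy(p):
    """S = -sum p_i ln p_i, with k_B = 1; the 0*log(0) terms are dropped."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

# Uniform distribution over 4 microstates: S = ln 4, the maximum
print(gibbs_entropy([0.25, 0.25, 0.25, 0.25]))  # ≈ 1.386
print(gibbs_entropy([0.7, 0.1, 0.1, 0.1]))      # smaller
print(gibbs_entropy([1.0, 0.0, 0.0, 0.0]))      # 0: a fully known state
```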
Now if DrDu were to mathematically derive all the laws from some minimal axioms, he would notice where the assumptions come in. At some point, for example, he would say that there are variables of state because the closed contour integral vanishes. This he can only do if he has included all the parameters of the system. That's what I mean by "maybe E and V are not the only parameters". It could be something like a chemical potential, or a physical property of the gas that is changed by an external magnetic field.
Here is another example where the assumptions of thermodynamics reveal themselves:
http://www.aip.org/png/html/maxwell.html

And as long as there are surprises like this or difficulties with knowing how to apply it to something trivial like two orbiting point particles, you do not know all assumptions of thermodynamics.
 
  • #69
Gerenuk said:
<snip>
Tell me how to apply thermodynamics to the moon orbiting the earth! <snip>

I had to take up this challenge...

Assuming the earth-moon system is in equilibrium, we apply the first law of thermostatics:

[tex]\Delta E=\Delta Q-\Delta W[/tex]

We then assume that the earth-moon system is adiabatic and closed, and that the moon moves at right angles to the gravitational force. The first law of thermostatics then becomes

[tex]\Delta E=0[/tex]

The energy is given by:

[tex]E=\frac{1}{2}mv^2-\frac{Gmm_e}{r}[/tex]

where we assumed the Earth does not move (m << m_e).

Thus [itex]\frac{1}{2}mv^2-\frac{Gmm_e}{r}[/itex] is constant, and you will find that this is the usual result from orbital mechanics.

Here, the assumptions are laid bare: equilibrium, closed, adiabatic.
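For what it's worth, the constancy of E can be checked numerically; this is just a standard leapfrog integration of the restricted two-body problem (rough Earth-Moon values; the moon's mass is set to 1 since it cancels in E per unit mass):

```python
import numpy as np

G, m_e = 6.674e-11, 5.972e24     # SI units; Earth held fixed (m << m_e)
r0 = 3.844e8                      # m, roughly the Earth-Moon distance
v0 = np.sqrt(G * m_e / r0)        # circular-orbit speed at r0

def energy(pos, vel):
    """E per unit moon mass: kinetic plus gravitational potential."""
    return 0.5 * np.dot(vel, vel) - G * m_e / np.linalg.norm(pos)

# Leapfrog (kick-drift-kick) integration of the orbit
pos, vel = np.array([r0, 0.0]), np.array([0.0, v0])
dt = 60.0                         # s
E0 = energy(pos, vel)
acc = -G * m_e * pos / np.linalg.norm(pos)**3
for _ in range(10000):
    vel_half = vel + 0.5 * dt * acc
    pos = pos + dt * vel_half
    acc = -G * m_e * pos / np.linalg.norm(pos)**3
    vel = vel_half + 0.5 * dt * acc

print(abs(energy(pos, vel) - E0) / abs(E0))   # relative drift: very small
```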
 
  • #70
Andy Resnick said:
Here, the assumptions are laid bare: equilibrium, closed, adiabatic.
How do you justify adiabatic and equilibrium? For that you need a notion of entropy. So how do you define entropy here?
 