Two approaches to calculating entropy differ by factor of two. Why?

In summary, the two approaches differ because the first one treats each sample's temperature as fixed at its initial value while the heat flows, whereas both temperatures actually change continuously during thermalization. Integrating ##dS=C_V\,dT/T## over the varying temperatures gives the correct logarithmic result, and to leading order in the temperature difference the fixed-temperature shortcut overestimates the total entropy change by exactly a factor of two.
  • #1
zenterix
Homework Statement
A sample of monoatomic ideal gas (##n=1.00\ \text{mol}##) at ##T_1=300\ \text{K}## is allowed to thermalize with a second sample of the same ideal gas (##n=1.00\ \text{mol}##) at ##T_2=350\ \text{K}##. What is the change in entropy for this process?
Relevant Equations
For a monoatomic ideal gas, ##C_V=\frac{3}{2}R##.

Here is how I did this problem.

Let's call the two samples sample 1 and sample 2.

The change in entropy for sample 1 is

$$\Delta S_1=\int dS_1=\int_{U_1}^{U_1+\Delta U}\frac{1}{T_1}dU\tag{1}$$

$$=\frac{1}{T_1}\Delta U\tag{2}$$

Similarly, ##\Delta S_2=-\frac{1}{T_2}\Delta U##.

Note that I used the fact that ##U## is extensive and conserved so

$$U_1+\Delta U_1+U_2+\Delta U_2=U_1+U_2$$

$$\implies \Delta U_1=-\Delta U_2=\Delta U$$

The entropy change of the system is then

$$\Delta S=\Delta U \left (\frac{1}{T_1}-\frac{1}{T_2}\right )\geq 0\tag{3}$$

$$=\Delta U\left ( \frac{1}{300}-\frac{1}{350}\right )\tag{4}$$

$$\implies \Delta U\geq 0\tag{5}$$
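To spell out the step from (4) to (5) (my arithmetic, not in the original post):

$$\frac{1}{300}-\frac{1}{350}=\frac{350-300}{300\cdot 350}=\frac{1}{2100}>0$$

so, since the second law requires ##\Delta S\geq 0## for this spontaneous process, (4) can only hold if ##\Delta U\geq 0##.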

Note that in (1) the integral is defined so that the internal energy of sample 1 increases by ##\Delta U##, which we've now shown is nonnegative.

Thus, energy (heat in this process) flows from sample 2 to sample 1.

We need to find what ##\Delta U## is.

$$dU=dQ=C_VdT=\frac{3}{2}RdT$$

$$\Delta U=\frac{3}{2}R(T-T_1)=-\frac{3}{2}R(T-T_2)$$

$$\implies T=\frac{T_1+T_2}{2}=325\text{K}$$

where ##T## is the equilibrium temperature of the system.

Thus,

$$\Delta U=\frac{3}{2}R\cdot(25\ \text{K})$$

and

$$\Delta S=\Delta U\left (\frac{1}{T_1}-\frac{1}{T_2}\right )$$

$$=\frac{3}{2}R\cdot 25\cdot\left (\frac{1}{300}-\frac{1}{350}\right )$$

$$=0.1484\ \text{J/K}$$
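(A numerical aside of mine, not in the original post: with ##R=8.314\ \text{J/(mol K)}##, ##\Delta U=\frac{3}{2}R\cdot 25\ \text{K}\approx 311.8\ \text{J}##, and ##311.8\ \text{J}\times\frac{1}{2100\ \text{K}}\approx 0.148\ \text{J/K}##.)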

Now, this result seems to be incorrect; more precisely, it is about double the correct result of ##0.0740\ \text{J/K}##.

Here is another approach.

The change in entropy of a sample whose temperature changes from ##T_i## to ##T_f## at constant volume is

$$\Delta S = \int dS=\int_{T_i}^{T_f} \frac{1}{T} C_VdT$$

$$=C_V\ln{\left (\frac{T_f}{T_i}\right )}$$

$$=\frac{3}{2}R\ln{\left (\frac{T_f}{T_i}\right )}$$

Thus

$$\Delta S_{\text{tot}}=\Delta S_1+\Delta S_2$$

$$=\frac{3}{2}R\left ( \ln{\left ( \frac{325}{300} \right )}+\ln{\left ( \frac{325}{350} \right )} \right )$$

$$=0.0740\ \text{J/K}$$
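As a quick cross-check of both numbers (a minimal Python sketch of my own; the variable names and the exact value of ##R## are my choices, not from the thread):

```python
import math

R = 8.314             # gas constant, J/(mol K); the last digit of each result depends on the R used
Cv = 1.5 * R          # molar heat capacity C_V of a monoatomic ideal gas
T1, T2 = 300.0, 350.0
Tf = 0.5 * (T1 + T2)  # equilibrium temperature, 325 K

dU = Cv * (Tf - T1)   # energy gained by sample 1, ~311.8 J

# First approach: holds each sample's temperature fixed while the heat flows
dS_fixed = dU * (1.0 / T1 - 1.0 / T2)

# Second approach: integrates Cv dT / T over each sample's varying temperature
dS_varying = Cv * (math.log(Tf / T1) + math.log(Tf / T2))

print(f"fixed-T approach:   {dS_fixed:.4f} J/K")           # ~0.1485
print(f"varying-T approach: {dS_varying:.4f} J/K")         # ~0.0740
print(f"ratio:              {dS_fixed / dS_varying:.2f}")  # ~2.01
```

The ratio comes out at about 2.01, which is the near-factor-of-two discrepancy in the title.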

What is wrong with the first approach?
 
  • #2
Temperatures ##T_1## and ##T_2## change during the thermalization, so Eqs. (1) and (2) are wrong: you used the fixed initial temperatures ##T_1## and ##T_2## in the integrals instead of the varying temperatures.
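To make this concrete (my elaboration, not part of the original reply): each sample's temperature runs from its initial value to the final ##T##, so Eq. (1) should read

$$\Delta S_1=\int_{T_1}^{T}\frac{C_V}{T'}\,dT'=C_V\ln\left(\frac{T}{T_1}\right)$$

and similarly for sample 2, which is exactly the second approach. This also explains why the discrepancy is almost exactly a factor of two: writing ##T_1=T-\delta## and ##T_2=T+\delta## with ##\delta=25\ \text{K}##, the two results expand to

$$\Delta S_{\text{fixed}}=C_V\,\delta\left(\frac{1}{T-\delta}-\frac{1}{T+\delta}\right)\approx\frac{2C_V\delta^2}{T^2},\qquad \Delta S_{\text{varying}}=C_V\ln\frac{T^2}{T^2-\delta^2}\approx\frac{C_V\delta^2}{T^2}$$

so holding the temperatures fixed overestimates the entropy change by a factor of two to leading order in ##\delta/T##.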
 
  • #3
Got it. Is there a way to substitute for ##\frac{1}{T}## in (1) instead of subbing in for ##dU##?
 
  • #4
Since ##U=C_VT##, we have ##T=U/C_V##.
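Following that hint (a worked version of the substitution asked about in #3; my addition, not from the thread): with ##U=C_VT## for one mole, ##\frac{1}{T}=\frac{C_V}{U}##, and Eq. (1) becomes

$$\Delta S_1=\int_{U_1}^{U_1+\Delta U}\frac{C_V}{U}\,dU=C_V\ln\left(\frac{U_1+\Delta U}{U_1}\right)=C_V\ln\left(\frac{T}{T_1}\right)$$

which reproduces the logarithmic formula of the second approach.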
 

