- #1
eprparadox
Hey!
So we're deriving something in Daniel Schroeder's Introduction to Thermal Physics and it starts with this:
[tex]
\Omega \left( N,q\right) =\dfrac {\left( N-1+q\right) !} {q!\left( N-1\right) !}
[/tex]
Both N and q are large numbers and q >> N.
The derivation is in the book, but I am always confused with when I can throw away terms.
For example, in this case my intuition says that since q >> N, in the factor (N - 1 + q)! I could just keep the q and throw away the whole N - 1. I know this is wrong, but I don't know why.
In the book, they begin by throwing away only the "- 1", since both q and N are large numbers. I would have thrown away the entire N - 1. Why can't I do that, and when can or can't I throw away terms?
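To make my confusion concrete, here is a quick numerical check (not from the book; N = 10 and q = 1000 are just illustrative values I picked) comparing the exact multiplicity against the two ways of dropping terms:

```python
from math import comb, factorial, log10

def omega_exact(N, q):
    # Omega(N, q) = (N - 1 + q)! / (q! (N - 1)!) = C(N - 1 + q, q)
    return comb(N - 1 + q, q)

def omega_drop_minus1(N, q):
    # the book's step: drop only the "-1", giving (N + q)! / (q! N!)
    return comb(N + q, q)

N, q = 10, 1000

exact = omega_exact(N, q)
book = omega_drop_minus1(N, q)

print(f"log10 Omega (exact)      : {log10(exact):.2f}")
print(f"log10 Omega (drop -1)    : {log10(book):.2f}")

# Dropping ALL of "N - 1" from the numerator instead would give
# q! / (q! (N - 1)!) = 1/(N - 1)!, a number LESS than one:
print(f"log10 Omega (drop N - 1) : {-log10(factorial(N - 1)):.2f}")
```

Dropping only the -1 changes Omega by a modest factor (in fact exactly (N + q)/N here), so ln Omega barely moves; dropping the whole N - 1 collapses Omega to 1/(N - 1)!, which is off by dozens of orders of magnitude.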
Thanks!