forkosh
- TL;DR Summary
- Why is the entropy different if the PVMs use the same resolution of the identity?
The von Neumann entropy for an observable can be written ##s=-\sum_i\lambda_i\log\lambda_i##, where the ##\lambda_i##'s are its eigenvalues. So suppose you have two different PVM observables, say ##A## and ##B##, that both represent the same resolution of the identity but simply have different eigenvalues, with ##\lambda_{A_i}>\lambda_{B_i}## for every ##i##. Then ##s_A>s_B##, but why should that be?
If they both represent the same resolution of the identity, then exactly the same experimental apparatus measures them both. Just change the labels on the pointer dial from the ##A##-values to the ##B##-values. For example, the ##A##-measurement could be mass in grams, whereas the ##B##-measurement is the same mass in kilograms. Why should the entropy of those two measurements be any different?
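To make the puzzle concrete, here is a minimal sketch that takes the formula ##-\sum_i\lambda_i\log\lambda_i## at face value over the observable's eigenvalues. The eigenvalue lists are hypothetical, chosen only to mimic the grams-vs-kilograms relabeling; the helper name `entropy_of_eigenvalues` is illustrative, not a standard API.

```python
import math

def entropy_of_eigenvalues(eigs):
    # Evaluate -sum(lambda * log(lambda)) over the given eigenvalues,
    # exactly as the formula in the question is written.
    return -sum(lam * math.log(lam) for lam in eigs)

# Hypothetical eigenvalues of the same observable, in two units.
eigs_kg = [0.2, 0.5, 0.8]                 # pointer dial labeled in kilograms
eigs_g = [1000 * lam for lam in eigs_kg]  # same dial relabeled in grams

s_kg = entropy_of_eigenvalues(eigs_kg)
s_g = entropy_of_eigenvalues(eigs_g)

# The two numbers disagree wildly, even though the projectors (the
# resolution of the identity) and the apparatus are identical -- which
# is exactly the puzzle being asked about.
print(s_kg, s_g)
```

The discrepancy comes purely from plugging eigenvalue labels, rather than anything unit-independent, into the formula, which is the tension the question is pointing at.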