Eigenkets and the identity operator in Dirac's Principles of QM

  • Thread starter Frost_Xue
In summary, the conversation discusses the use of the identity operator in Dirac's "The Principles of Quantum Mechanics" and its role in decomposing an observable into its eigenvalues and eigenvectors. It also explains the concept of a resolution of the identity and how it can be used to expand any vector in a particular basis.
  • #1
Frost_Xue
I was reading Dirac's "The Principles of QM" and am a bit confused.
In equation (21), does the number 1 go inside the sum or outside of it? If outside, how come the sum over r equals 1? The ##\chi## function is the quotient when the product of factors is divided by a single factor; for example, if the product is ##abcd##, then ##\chi_b## is ##acd##.
[Attached image: IMG_20160723_155740.jpg]
 
  • #2
Truecrimson
1 is outside the sum, and it should be the identity operator, not a number (but this is a very common abuse of notation).

An observable (Hermitian operator) ##\xi## can be decomposed as a sum ##\xi = \sum_r c_r |r\rangle \langle r|## of its orthonormal eigenvectors ##\{ |r \rangle \}## written as projection operators ##|r\rangle \langle r|##, and eigenvalues ##c_r##.

What Dirac is doing is the converse. How can we write each projection operator as a function of ##\xi##? Let's first look at $$\chi_r (\xi) = \prod_{q \neq r} ( \xi - c_q \hat{1}), $$ where ##\hat{1}## is the identity operator. Each term in the product annihilates ##| q \rangle##, i.e. ##(\xi - c_q \hat{1} )|q \rangle = 0##. So it is easy to see, in the basis in which ##\xi## is diagonalized, that ##\chi_r (\xi)## has a bunch of zeros on the diagonal. In fact, the only basis vector not annihilated by ##\chi_r (\xi)## is ##|r\rangle.## Moreover, $$\chi_r (\xi) = \left[ \prod_{q \neq r} (c_r - c_q) \right] |r\rangle \langle r|.$$ Note that ##|r\rangle \langle r|## could be inside or outside the product. It doesn't matter, because ##(|r\rangle \langle r|)^2 = |r\rangle \langle r| ##. Therefore, \begin{aligned} \frac{\chi_r (\xi) }{\chi_r (c_r)} &= |r \rangle \langle r| \\ \sum_r \frac{\chi_r (\xi) }{\chi_r (c_r)} &= \hat{1}, \end{aligned} where the last line uses the fact that the projectors ##\{ |r\rangle \langle r| \}## form a resolution of the identity.

Perhaps an easier way to say all of this is that, whenever we have the resolution of the identity ##\sum_r |r\rangle \langle r| = \hat{1}##, we can expand any vector in the ##|r \rangle ## basis by inserting the identity $$ |P\rangle = \hat{1} |P\rangle = \sum_r \langle r|P \rangle |r\rangle. $$
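The construction above is easy to check numerically. Here is a minimal sketch using NumPy, assuming a hypothetical 3-level observable with distinct eigenvalues (the matrix and eigenvalues are made up for illustration):

```python
import numpy as np

# A hypothetical observable xi with nondegenerate eigenvalues c_r,
# written diagonal for simplicity (any Hermitian matrix with
# distinct eigenvalues would do).
c = np.array([1.0, 2.0, 4.0])
xi = np.diag(c)
I = np.eye(3)

def chi(r):
    """chi_r(xi) = product over q != r of (xi - c_q * identity)."""
    result = I
    for q in range(len(c)):
        if q != r:
            result = result @ (xi - c[q] * I)
    return result

def chi_scalar(r):
    """chi_r(c_r) = product over q != r of (c_r - c_q)."""
    return np.prod([c[r] - c[q] for q in range(len(c)) if q != r])

# Each chi_r(xi)/chi_r(c_r) is the projector |r><r| ...
projectors = [chi(r) / chi_scalar(r) for r in range(3)]
for r, P in enumerate(projectors):
    assert np.allclose(P, np.outer(I[r], I[r]))  # equals |r><r| in this basis
    assert np.allclose(P @ P, P)                 # idempotent, as a projector must be

# ... and they sum to the identity: Dirac's "1" outside the sum.
assert np.allclose(sum(projectors), I)
```

Swapping in any Hermitian matrix with distinct eigenvalues (and its `np.linalg.eigh` eigenvalues as `c`, in the eigenbasis) gives the same result, since the argument only uses the spectral decomposition.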
 
Last edited:
  • #3
Truecrimson said:
1 is outside the sum, and it should be the identity operator, not a number (but this is a very common abuse of notation). [...]
That was a great answer. Thank you so much for your help and time. Sorry to bother you with such a trivial question. Have a good weekend.
 

FAQ: Eigenkets and the identity operator

What is the definition of an eigenket?

An eigenket, also known as an eigenstate, is a vector in a Hilbert space that represents a quantum state with a definite value of a particular observable. It is a fundamental concept in quantum mechanics and is used to describe the state of a quantum system.

How is an eigenket expressed mathematically?

An eigenket is expressed as a column vector of complex coefficients, where each coefficient is the amplitude of the state in a particular basis. For example, a position eigenket is written ##|x\rangle##, where ##x## is the position and the ##|\,\rangle## notation marks a ket vector.

What is the significance of expressing a state as an eigenket?

Expressing a state as an eigenket allows us to easily calculate the probability of obtaining a certain measurement for the observable associated with that state. It also allows us to easily manipulate and transform the state using mathematical operators.
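As a small illustration of that probability calculation (a sketch; the two-level state and its amplitudes are invented for the example):

```python
import numpy as np

# A hypothetical state |psi> expanded in an orthonormal basis {|0>, |1>}:
# the entries are the amplitudes <0|psi> and <1|psi>.
psi = np.array([3/5, 4/5])

# Born rule: the probability of measuring outcome r is |<r|psi>|^2.
probs = np.abs(psi) ** 2

assert np.isclose(probs.sum(), 1.0)  # the state is normalized
# probs[0] == 0.36 and probs[1] == 0.64
```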

Can an eigenket have multiple eigenvalues?

No, an eigenket can only have one eigenvalue for a particular observable. This is because the eigenvalue represents the specific measurable quantity associated with the state, and a state cannot have multiple definite values for the same observable.

How is an eigenket different from an eigenvector?

An eigenket and an eigenvector are essentially the same concept, but they are used in different contexts. An eigenket is used in quantum mechanics to represent the state of a quantum system, while an eigenvector is used in linear algebra to represent a vector that is unchanged when multiplied by a specific matrix.
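The linear-algebra side of this correspondence can be sketched in a few lines, using the Pauli-X matrix as an assumed example:

```python
import numpy as np

# A Hermitian matrix (here the Pauli-X operator) and its eigenvectors.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
vals, vecs = np.linalg.eigh(X)  # eigenvalues in ascending order

# Each column of vecs is an eigenvector: X v = lambda v,
# i.e. the matrix only rescales the vector, never rotates it.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(X @ v, lam * v)
# vals == [-1.0, 1.0]
```

In Dirac's language these two columns would be written as the eigenkets of the observable X, with eigenvalues ##\mp 1##.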
