Tensor product of operators and ladder operators

In summary, the conversation discusses SU(2) matrix representations and their tensor product in the spin-up/spin-down basis. It explains the action of the ladder operators and how adding two spins leads to the reduction of the product representation into irreducible parts. The Clebsch-Gordan coefficients are introduced in this context.
  • #1
Heidi
Hi PFs,
I have two matrix representations of SU(2). Each of them uses an up/down basis |u>, |d>.
If I take their tensor product I get 4×4 matrices acting on the basis
|d>|d>, |d>|u>, |u>|d>, |u>|u>.
This representation is equal to the direct sum of the 0-representation (a singlet with m = 0) and the 1-representation, realized by 3×3 matrices with m = -1, 0, 1.
There are two states with m = 0 in the product space, corresponding to |u>|d> and |d>|u>.
If I start with m = 1, a ladder operator will decrease m to 0, and the result will lie in the triplet.
How do I write this action of the ladder operator in the four-dimensional basis?
 
  • #2
As the total angular momentum of the two particles is the sum of the individual ones, it seems natural to expect a symmetric result from the action of the lowering operator on |u>|u>,
namely |u>|d> + |d>|u> (up to a normalization factor).
Repeating its action on this vector would give |d>|d>.
For the singlet |u>|d> - |d>|u> it would give the null vector.
Is this correct?
So changing the basis |u>|u>, |u>|d>, |d>|u>, |d>|d> into |u>|u>, |u>|d>+|d>|u>, |d>|d>, |u>|d>-|d>|u> would block-diagonalize the tensor product of two 2×2 SU(2) representations.
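
A minimal numerical sketch of this reasoning, assuming NumPy is available: the total lowering operator on the product space is ##S_- = s_- \otimes \mathbb{1} + \mathbb{1} \otimes s_-## (with ##\hbar = 1##), and applying it to the states above reproduces the symmetric combination and annihilates the singlet.

```python
import numpy as np

# Single spin-1/2 lowering operator in the basis (|u>, |d>), hbar = 1:
# s_- |u> = |d>,  s_- |d> = 0
s_minus = np.array([[0.0, 0.0],
                    [1.0, 0.0]])
I2 = np.eye(2)

# Total lowering operator on the product space, basis order |uu>, |ud>, |du>, |dd>
S_minus = np.kron(s_minus, I2) + np.kron(I2, s_minus)

# Product basis vectors (rows of the 4x4 identity)
uu, ud, du, dd = np.eye(4)

print(S_minus @ uu)                       # |ud> + |du>  (unnormalized)
print(S_minus @ (ud + du) / np.sqrt(2))   # sqrt(2) |dd>
print(S_minus @ (ud - du) / np.sqrt(2))   # zero vector: the singlet is annihilated
```

In the reordered basis |uu>, (|ud>+|du>)/√2, |dd>, (|ud>-|du>)/√2 the representation matrices indeed split into a 3×3 triplet block and a 1×1 singlet block.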
 
  • #3
Consider the addition of two spins ##\vec{S}=\vec{s}_1+\vec{s}_2## and consider that ##\vec{s}_1## and ##\vec{s}_2## are realized in the irreps. with spins ##s_1## and ##s_2##, respectively, and we look for the reduction of ##\vec{S}## into irreducible parts of the induced "product representation".

This is achieved by noting that we have the product basis ##|s_1,s_2;\sigma_1,\sigma_2 \rangle##, which is the common basis of the compatible operators ##\vec{s}_1^2##, ##\vec{s}_2^2##, ##s_{13}##, and ##s_{23}## with (setting ##\hbar=1## for simplicity)
$$\vec{s}_1^2 |s_1,s_2;\sigma_1,\sigma_2 \rangle = s_1 (s_1+1) |s_1,s_2;\sigma_1,\sigma_2 \rangle,$$
$$\vec{s}_2^2 |s_1,s_2;\sigma_1,\sigma_2 \rangle = s_2 (s_2+1) |s_1,s_2;\sigma_1,\sigma_2 \rangle,$$
$$s_{13} |s_1,s_2;\sigma_1,\sigma_2 \rangle =\sigma_{1} |s_1,s_2;\sigma_1,\sigma_2 \rangle,$$
$$s_{23} |s_1,s_2;\sigma_1,\sigma_2 \rangle =\sigma_{2} |s_1,s_2;\sigma_1,\sigma_2 \rangle,$$
where ##s_1, s_2 \in \{0,1/2,1,3/2,\ldots \}## and ##\sigma_{1} \in \{-s_1,-s_1+1,\ldots, s_1-1,s_1 \}##, ##\sigma_{2} \in \{-s_2,-s_2+1,\ldots,s_2-1,s_2 \}##.
Formally these basis vectors are given by the "product basis"
$$|s_1, s_2;\sigma_1,\sigma_2 \rangle = |s_1,\sigma_1 \rangle \otimes |s_2,\sigma_2 \rangle.$$
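For the two spin-1/2 case of the original question this product basis consists of the four vectors
$$|u \rangle \otimes |u \rangle, \quad |u \rangle \otimes |d \rangle, \quad |d \rangle \otimes |u \rangle, \quad |d \rangle \otimes |d \rangle,$$
with ##|u \rangle = |1/2,+1/2 \rangle## and ##|d \rangle = |1/2,-1/2 \rangle##.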
On the other hand we have ##\vec{S}^2##, ##S_3##, ##\vec{s}_1^2##, ##\vec{s}_2^2## as another compatible set of spin observables, i.e., we can build a common basis ##|S,s_1,s_2,\Sigma \rangle## with
$$\vec{S}^2 |S,s_1,s_2,\Sigma \rangle=S(S+1) |S,s_1,s_2,\Sigma \rangle,$$
$$\vec{s}_1^2 |S,s_1,s_2,\Sigma \rangle=s_1 (s_1+1) |S,s_1,s_2,\Sigma \rangle,$$
$$\vec{s}_2^2 |S,s_1,s_2,\Sigma \rangle=s_2 (s_2+1) |S,s_1,s_2,\Sigma \rangle,$$
$$S_3 |S,s_1,s_2,\Sigma \rangle = \Sigma |S,s_1,s_2,\Sigma \rangle.$$
Now the product basis is also an eigenbasis of ##S_3## and
$$S_3 |s_1,s_2;\sigma_1,\sigma_2 \rangle=(\sigma_1 + \sigma_2) |s_1,s_2;\sigma_1,\sigma_2 \rangle.$$
That means that the possible values are ##\Sigma \in \{(s_1+s_2),(s_1+s_2)-1,\ldots,-(s_1+s_2)+1,-(s_1+s_2) \}## and
$$\langle S,s_1,s_2,\Sigma|s_1,s_2;\sigma_1,\sigma_2 \rangle \propto \delta_{\Sigma,\sigma_1+\sigma_2}.$$
This implies that the largest value of ##S## occurring in the decomposition must satisfy ##S \geq s_1+s_2##, because otherwise the largest eigenvalue ##s_1+s_2## of ##S_3=s_{13}+s_{23}## could not occur. We cannot have ##S>s_1+s_2## either, because then via the ladder operator ##S_+## there would be an eigenvector of ##S_3## with eigenvalue ##s_1+s_2+1##, but this does not exist within our product representation and so cannot exist in the other basis either.

Thus there is exactly one eigenvector ##|S=s_1+s_2,s_1,s_2,\Sigma=s_1+s_2 \rangle##, and we can choose it to be the product-basis vector ##|s_1,s_2;\sigma_1=s_1,\sigma_2=s_2 \rangle##. One part of the new basis is therefore the complete basis of the irreducible representation with ##S=s_1+s_2##, which we get by repeated application of ##S_-=S_x-\mathrm{i} S_y## to this eigenvector with maximal ##\Sigma=s_1+s_2##; the ladder finally stops at the eigenvector with ##\Sigma=-(s_1+s_2)##.
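
Explicitly, with the standard normalization ##S_- |S,\Sigma \rangle = \sqrt{S(S+1)-\Sigma(\Sigma-1)}\,|S,\Sigma-1 \rangle## (suppressing the fixed ##s_1,s_2## labels), the ##S=1## triplet of the two spin-1/2 example gives
$$S_- |1,1 \rangle = \sqrt{2}\,|1,0 \rangle, \quad S_- |1,0 \rangle = \sqrt{2}\,|1,-1 \rangle, \quad S_- |1,-1 \rangle = 0, \quad S_- |0,0 \rangle = 0,$$
which is precisely the action asked about in the opening post.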

Now you consider the orthogonal complement of the subspace spanned by these ##2(s_1+s_2)+1## eigenvectors for ##S=s_1+s_2##. The largest eigenvalue of ##S_3## in this orthogonal complement can only be ##s_1+s_2-1##, and there is only one eigenvector to this eigenvalue: in the product basis there are two, namely the ones with ##\sigma_1=s_1-1, \; \sigma_2=s_2## and ##\sigma_1=s_1, \; \sigma_2=s_2-1##, so the eigenspace ##\text{Eig}(S_3,s_1+s_2-1)## is two-dimensional, and one of its basis vectors has already been constructed within the ##S=s_1+s_2## irrep. With the same argument as above for ##S##, when restricting ##\vec{S}^2## to the orthogonal complement of the ##S=s_1+s_2## irrep we must have ##S=s_1+s_2-1##, and there is a unique (up to a factor) vector orthogonal to ##|S=s_1+s_2,s_1,s_2,\Sigma=s_1+s_2-1 \rangle##, which we have to choose as the eigenvector ##|S=s_1+s_2-1,s_1,s_2,\Sigma=s_1+s_2-1 \rangle##. Again we get all other eigenvectors of the irrep with ##S=s_1+s_2-1## by acting repeatedly with ##S_-## on this vector.

Then we iterate the argument for the orthogonal complement of both the ##S=s_1+s_2## and the ##S=s_1+s_2-1## subspaces, and so on. The iteration stops with the lowest possible value ##S=|s_1-s_2|##. This you get by simply counting the resulting dimensions: the entire space is ##(2s_1+1)(2s_2+1)##-dimensional (spanned by the product basis), and the dimensions of the irreducible subspaces constructed iteratively indeed add up to (take ##s_1 \geq s_2## for simplicity)
$$[2(s_1+s_2)+1]+[2(s_1+s_2-1)+1]+[2 (s_1+s_2-2)+1]+\cdots + [2(s_1-s_2)+1]=(2s_2+1)\,2(s_1+s_2) - 2(1+2+\cdots + 2s_2)+(2s_2+1)= 2 (2s_2+1)(s_1+s_2) -2 \cdot \frac{1}{2} (2s_2+1)(2s_2)+2s_2+1=(2s_1+1)(2s_2+1).$$
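As a quick check for the two spin-1/2 case (##s_1=s_2=1/2##), the triplet and singlet dimensions add up to the dimension of the product space:
$$[2 \cdot 1+1]+[2 \cdot 0+1]=3+1=4=(2 \cdot \tfrac{1}{2}+1)(2 \cdot \tfrac{1}{2}+1).$$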
By explicitly doing this iterative scheme you can derive the Clebsch-Gordan coefficients
$$C(S,s_1,s_2,\Sigma|s_1,s_2;\sigma_1 \sigma_2)=\langle S,s_1,s_2,\Sigma|s_1,s_2;\sigma_1,\sigma_2 \rangle.$$
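If one wants to check such coefficients numerically, here is a minimal sketch, assuming SymPy with its sympy.physics.quantum.cg module is available; CG(j1, m1, j2, m2, J, M) represents the coefficient ##\langle J,M|j_1,m_1;j_2,m_2 \rangle##.

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)

# Clebsch-Gordan coefficients for two spin-1/2 particles coupled to S = 1, Sigma = 0
c_up_down = CG(half, half, half, -half, 1, 0).doit()   # sigma1 = +1/2, sigma2 = -1/2
c_down_up = CG(half, -half, half, half, 1, 0).doit()   # sigma1 = -1/2, sigma2 = +1/2

print(c_up_down, c_down_up)  # both sqrt(2)/2, matching the symmetric triplet state below
```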
For your example we have
$$|S=1,s_1=1/2,s_2=1/2,\Sigma=1 \rangle=|s_1=1/2,s_2=1/2;\sigma_1=1/2,\sigma_2=1/2 \rangle,$$
$$|S=1,s_1=1/2,s_2=1/2,\Sigma=0 \rangle = \frac{1}{\sqrt{2}} (|s_1=1/2,s_2=1/2;\sigma_1=-1/2,\sigma_2=1/2 \rangle + |s_1=1/2,s_2=1/2;\sigma_1=1/2,\sigma_2=-1/2 \rangle),$$
$$|S=1,s_1=1/2,s_2=1/2,\Sigma=-1 \rangle = |s_1=1/2,s_2=1/2;\sigma_1=-1/2,\sigma_2=-1/2 \rangle.$$
The remaining eigenvector with ##\Sigma=0## must be orthogonal to the state given above with ##S=1## and ##\Sigma=0##; up to a phase factor it is uniquely
$$|S=0,s_1=1/2,s_2=1/2,\Sigma=0 \rangle=\frac{1}{\sqrt{2}} (|s_1=1/2,s_2=1/2;\sigma_1=-1/2,\sigma_2=1/2 \rangle -|s_1=1/2,s_2=1/2;\sigma_1=1/2,\sigma_2=-1/2 \rangle).$$
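
A small numerical cross-check of these four states, again assuming NumPy (with ##\hbar=1##): build ##\vec{S}^2## from the single-particle spin matrices via the tensor-product structure and verify that the triplet states have eigenvalue ##S(S+1)=2## while the singlet has eigenvalue ##0##.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1): s_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# Total spin components on the product space: S_i = s_i (x) 1 + 1 (x) s_i
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sy = np.kron(sy, I2) + np.kron(I2, sy)
Sz = np.kron(sz, I2) + np.kron(I2, sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

# Product basis |uu>, |ud>, |du>, |dd>
uu, ud, du, dd = np.eye(4)

triplet_0 = (ud + du) / np.sqrt(2)
singlet   = (ud - du) / np.sqrt(2)

for state in (uu, triplet_0, dd):
    # Each triplet state is an eigenvector of S^2 with eigenvalue S(S+1) = 2
    print(np.allclose(S2 @ state, 2 * state))   # True
# The singlet has S^2 eigenvalue 0
print(np.allclose(S2 @ singlet, 0 * singlet))   # True
```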
 

FAQ: Tensor product of operators and ladder operators

What is a tensor product of operators?

A tensor product of operators combines two or more operators, each acting on its own Hilbert space, into a single operator acting on the tensor product of those spaces. It is denoted by the symbol ⊗ and acts factor-wise: (A ⊗ B)(|ψ⟩ ⊗ |φ⟩) = A|ψ⟩ ⊗ B|φ⟩.
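
As a minimal illustration, assuming NumPy is available: the Kronecker product of a 2×2 and a 3×3 operator is a 6×6 operator on the combined space.

```python
import numpy as np

A = np.diag([1.0, 2.0])          # 2x2 operator on the first subsystem
B = np.diag([1.0, 2.0, 3.0])     # 3x3 operator on the second subsystem

AB = np.kron(A, B)               # matrix of A (x) B on the 6-dimensional product space
print(AB.shape)                  # (6, 6)
```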

What is the significance of the tensor product in quantum mechanics?

The tensor product is an important concept in quantum mechanics because it allows us to combine operators that act on different parts of a system. This is particularly useful when studying composite systems, such as multi-particle systems, where the individual particles may have different properties.

What are ladder operators in quantum mechanics?

Ladder (raising and lowering) operators shift the eigenvalue of an observable by a fixed step. They are commonly used for the quantum harmonic oscillator, where they move between energy levels, and for angular momentum, where they raise or lower the magnetic quantum number m.

How are ladder operators related to the tensor product?

For a composite system, the total ladder operators are built from the tensor-product structure: the total lowering operator is the first particle's lowering operator tensored with the identity, plus the identity tensored with the second particle's lowering operator (as in the thread above). This gives a compact representation of how the ladder operators act on the combined system.

What are some applications of the tensor product of operators and ladder operators?

The tensor product of operators and ladder operators has many applications in quantum mechanics, including the study of quantum entanglement, quantum information processing, and quantum computing. It is also used in the development of quantum algorithms and in the analysis of quantum systems with multiple degrees of freedom.
