Cyclic rotation of the cross product involving derivation

  • #1
Garlic
TL;DR Summary
How does the cyclic rotation of the cross product work, if one of the vectors is an operator acting on the other?
Dear PF,

so we know that the cross product of two vectors anticommutes: ## \vec{ \alpha } \times \vec{ \beta } = -\vec{ \beta } \times \vec{ \alpha } ##
But in a specific case, like ## \vec{p} \times \vec{A} = \frac{ \hbar }{ i } \vec{ \nabla } \times \vec{A} ##, the permutation of the cross product isn't that simple, because ## \vec{p} ## is an operator acting on ## \vec{A} ( \vec{r} ) ## and on the wave function ## \psi( \vec{r} ) ## (which isn't explicitly shown). A professor of mine writes
$$ \vec{p} \times \vec{A} = - \vec{A} \times \vec{p} + \frac{ \hbar }{ i } ( \vec{ \nabla } \times \vec{A} ) = - \vec{A} \times \vec{p} + \frac{ \hbar }{ i } \vec{B}
$$
and I've also seen him writing "## \vec{B} = \vec{ \nabla } \times \vec{A} ## , therefore ## \vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B}) ##" which leaves my poor soul utterly confused.

What is the general rule for the cyclic permutation of a cross product? Can one derive this rule knowing only the chain rule and the basic rules of the cross product?

Thank you for your time,
-Garlic
 
  • #2
Let's start with a simpler example. We have the differential operator ##\frac d {dx}##. And, any function ##g(x)## can be seen as an operator, where the action of the operator is the simple product: $$g(x)[f(x)] = g(x)f(x)$$ Now, we may consider the operator "product" (which is defined by composition) of these:
$$(\frac d {dx})(g(x)) \equiv (\frac d {dx} \circ g(x))$$ And the action of this product (or composition) of operators on a test function ##f(x)## is seen to be:
$$(\frac d {dx})(g(x))[f(x)] = \frac d {dx}[g(x)f(x)] = \frac {dg}{dx} f(x) + g(x)\frac{df}{dx} = (\frac {dg}{dx} + g(x)\frac d {dx})[f(x)]$$ And this gives us an operator identity:
$$ (\frac d {dx})(g(x)) = \frac {dg}{dx} + g(x)\frac d {dx}$$
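This operator identity is easy to sanity-check by machine. Here is a short SymPy sketch (my addition, not part of the thread; the choice ##g(x) = e^{-x^2}## is arbitrary) that applies both sides to a generic test function:

```python
# Verify the operator identity (d/dx) o g(x) = g'(x) + g(x) (d/dx)
# by applying both sides to an arbitrary test function f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)   # generic test function
g = sp.exp(-x**2)         # any concrete g(x) works; this choice is arbitrary

lhs = sp.diff(g * f, x)                       # (d/dx)[g f] -- operator composition
rhs = sp.diff(g, x) * f + g * sp.diff(f, x)   # (g' + g d/dx)[f]

assert sp.simplify(lhs - rhs) == 0
```

The assertion passing for an undetermined ##f## is just the product rule, which is exactly where the extra ##g'## term in the operator identity comes from.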
In your case, you need to establish the definition of the cross product of two vector operators. The natural definition would be as follows:
$$\vec A \times \vec B =
\begin{vmatrix}
\hat x & \hat y & \hat z\\
A_x&A_y&A_z\\
B_x&B_y&B_z\\
\end{vmatrix} = (A_yB_z - A_zB_y)\hat x + (A_zB_x - A_xB_z)\hat y + (A_xB_y - A_yB_x) \hat z
$$
Where the product of operators is defined as usual by operator composition. E.g.
$$(A_yB_z)[f] = A_y[B_z[f]]$$
Now, if you use that definition and do some algebra, then the vector operator identities pulled out of the hat by your professor should be provable.
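For instance, the professor's identity can be spot-checked componentwise. Acting on a test function ##f## and dividing out the common factor ##\hbar/i##, it reduces to the product rule ##\vec \nabla \times (\vec A f) = (\vec \nabla \times \vec A)f - \vec A \times \vec \nabla f##. A SymPy sketch (not from the thread; the component names `Ax`, `Ay`, `Az` are my own labels):

```python
# Check curl(A f) = (curl A) f - A x grad(f) componentwise, which is the
# operator identity p x A = -A x p + (hbar/i)(curl A) with hbar/i divided out.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)                       # test function
A = [sp.Function(n)(x, y, z) for n in ('Ax', 'Ay', 'Az')]
r = (x, y, z)

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1],
            U[2]*V[0] - U[0]*V[2],
            U[0]*V[1] - U[1]*V[0]]

lhs = curl([a * f for a in A])                      # (p x A)[f], up to hbar/i
grad_f = [sp.diff(f, v) for v in r]
rhs = [c * f - d for c, d in zip(curl(A), cross(A, grad_f))]

assert all(sp.simplify(l - m) == 0 for l, m in zip(lhs, rhs))
```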
 
  • #3
Garlic said:
and I've also seen him writing "## \vec{B} = \vec{ \nabla } \times \vec{A} ## , therefore ## \vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B}) ##" which leaves my poor soul utterly confused.
This is a neat trick where if we define $$\vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B}) $$, we can show that $$ \vec{ \nabla } \times \vec{A} = \frac 1 2 \vec{ \nabla } \times ( \vec{r} \times \vec{B}) = \vec B$$
Again, this can be relatively easily proved using the definition of curl and some vector algebra.

PS note that the solution is not unique. The curl of any gradient is zero, so if we define:
$$\vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B}) + \vec \nabla f$$ For any scalar function ##f##, then we also have$$ \vec{ \nabla } \times \vec{A} = \vec B$$
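The gauge-freedom remark rests on ##\vec \nabla \times \vec \nabla f = 0## holding identically, because mixed partial derivatives commute. A quick SymPy check (my addition, with a generic ##f##):

```python
# Verify curl(grad f) = 0 for a generic scalar function f(x, y, z):
# each component is a difference of mixed partials, which commute.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

grad_f = [sp.diff(f, v) for v in (x, y, z)]
curl_grad_f = [sp.diff(grad_f[2], y) - sp.diff(grad_f[1], z),
               sp.diff(grad_f[0], z) - sp.diff(grad_f[2], x),
               sp.diff(grad_f[1], x) - sp.diff(grad_f[0], y)]

assert all(sp.simplify(c) == 0 for c in curl_grad_f)
```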
 
  • #4
Thank you very much for your replies!

Although I think I understood what you have written, now I'm stuck proving a more complex expression:
## \vec{p} \cdot \vec{A} ##, where the identity ## \vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B} ) ## is inserted. I am trying to prove that:
$$
\vec{p} \cdot \vec{A} + \vec{A} \cdot \vec{p} = \frac{\hbar}{2 i} [ \nabla \cdot (\vec{r} \times \vec{B}) + (\vec{r} \times \vec{B}) \cdot \nabla ] = \vec{L} \cdot \vec{B}
$$

I tried to solve this with the Einstein summation convention:

$$\vec{\nabla} \cdot (\vec{r} \times \vec{B}) = \epsilon_{ijk} \, \partial_{i} \, r_{j} \, B_{k}$$
$$= \epsilon_{ijk} \, ( \partial_{i} \, r_{j} ) \, B_{k} + \epsilon_{ijk} \, r_{j} \, ( \partial_{i} \, B_{k} ) + \epsilon_{ijk} \, r_{j} \, B_{k} \, \partial_{i}$$
$$= \epsilon_{jki} \, ( \partial_{j} \, r_{k} ) \, B_{i} + \epsilon_{jik} \, r_{i} \, ( \partial_{j} \, B_{k} ) + \epsilon_{kij} \, r_{i} \, B_{j} \, \partial_{k}$$
$$= \epsilon_{ijk} \, ( \partial_{j} \, r_{k} ) \, B_{i} - \epsilon_{ijk} \, r_{i} \, ( \partial_{j} \, B_{k} ) + \epsilon_{ijk} \, r_{i} \, B_{j} \, \partial_{k}$$
$$= (\vec{\nabla} \times \vec{r}) \cdot \vec{B} - \vec{r} \cdot ( \vec{ \nabla} \times \vec{B}) + \vec{r} \cdot ( \vec{B} \times \vec{ \nabla} )$$
$$= \vec{L} \cdot \vec{B} - \vec{r} \cdot ( \vec{ \nabla} \times \vec{B}) + \vec{r} \cdot ( \vec{B} \times \vec{ \nabla} )$$

And:
$$( \vec{r} \times \vec{B} ) \cdot \vec{ \nabla } = \epsilon_{ijk} \, r_{j} \, B_{k} \, \partial_{i} = \epsilon_{kij} \, r_{i} \, B_{j} \, \partial_{k} = \epsilon_{ijk} \, r_{i} \, B_{j} \, \partial_{k} = \vec{r} \cdot ( \vec{B} \times \vec{ \nabla } )$$
Therefore I find:
$$\vec{p} \cdot \vec{A} + \vec{A} \cdot \vec{p} = \frac{\hbar}{2 i} [ \vec{L} \cdot \vec{B} - \vec{r} \cdot ( \vec{ \nabla} \times \vec{B} ) + 2 \vec{r} \cdot ( \vec{B} \times \vec{ \nabla} ) ]$$

Can you see what I'm doing wrong?
It's funny that although I started learning theoretical physics five years ago, apparently I still haven't understood the cross product correctly. Maybe I should relearn the cross product rules from the beginning.

Thank you for your time!
-Garlic
 
  • #5
There's a useful vector identity: $$\vec X \cdot (\vec Y \times \vec Z) = \vec Y \cdot (\vec Z \times \vec X) = \vec Z \cdot (\vec X \times \vec Y)$$You need to be careful how this transforms when you are using the differential operators like ##\vec p##. That might help.
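For ordinary (non-operator) vectors this cyclic identity is easy to spot-check numerically; a NumPy sketch (my addition, with an arbitrary random seed):

```python
# Spot-check the cyclic scalar-triple-product identity
# X.(Y x Z) = Y.(Z x X) = Z.(X x Y) on random vectors.
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 3))   # three random 3-vectors

t1 = np.dot(X, np.cross(Y, Z))
t2 = np.dot(Y, np.cross(Z, X))
t3 = np.dot(Z, np.cross(X, Y))

assert np.isclose(t1, t2) and np.isclose(t1, t3)
```

With differential operators in place of ordinary vectors the components no longer commute, which is exactly why the identity needs extra care in the operator case.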

In the meantime, I'll take a look at this.
 
  • #6
Try redoing your work using a test function. It's too easy to go wrong with operator identities otherwise.
 
  • #7
Garlic said:
A professor of mine writes
$$ \vec{p} \times \vec{A} = - \vec{A} \times \vec{p} + \frac{ \hbar }{ i } ( \vec{ \nabla } \times \vec{A} ) = - \vec{A} \times \vec{p} + \frac{ \hbar }{ i } \vec{B}
$$
Let's take a closer look at this. There's a subtlety here. Note that the equation $$\vec \nabla \times \vec A = \vec B$$ is a vector equation. It's not an operator equation. Whereas:
$$\vec p \times \vec A = \frac \hbar i \vec \nabla \times \vec A$$ is an operator equation. The symbol ##\vec \nabla## is different in each case. For that reason, I'll avoid using ##\vec \nabla## as a differential operator. To prove this identity we have:
$$(\vec p \times \vec A)[f] = \frac \hbar i((\partial_y A_z - \partial_z A_y)\hat x + (\partial_zA_x - \partial_x A_z)\hat y +(\partial_x A_y - \partial_y A_x)\hat z)[f]$$$$ = \frac \hbar i((A_z \frac{\partial f}{\partial y} - A_y\frac{\partial f}{\partial z})\hat x + (A_x \frac{\partial f}{\partial z} - A_z\frac{\partial f}{\partial x})\hat y +(A_y \frac{\partial f}{\partial x} - A_x \frac{\partial f}{\partial y})\hat z)$$$$+ \frac \hbar i((\frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z})\hat x + (\frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x})\hat y +(\frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y})\hat z)[f]$$$$= -(\vec A \times \vec p)[f] + \frac \hbar i(\vec \nabla \times \vec A)[f] = -(\vec A \times \vec p)[f] + \frac \hbar i(\vec B)[f]$$
And we see that: $$\vec p \times \vec A = -(\vec A \times \vec p) + \frac \hbar i \vec B$$is a valid operator equation. Whereas: $$\vec p \times \vec A = -(\vec A \times \vec p) + \frac \hbar i (\vec \nabla \times \vec A)$$ is not a valid operator equation (as we must interpret ##\vec \nabla## as simply a vector operation and not as a functional operator).

The other option, of course, is to put hats on operators in order to make the distinction.
 
  • #8
Garlic said:
I'm stuck proving a more complex expression:
## \vec{p} \cdot \vec{A} ## where the identity ## \vec{A} = \frac{1}{2} ( \vec{r} \times \vec{B} ) ## is inserted. I am trying to prove that :
$$
\vec{p} \cdot \vec{A} + \vec{A} \cdot \vec{p} = \frac{\hbar}{2 i} [ \nabla \cdot (\vec{r} \times \vec{B}) + (\vec{r} \times \vec{B}) \cdot \nabla ] = \vec{L} \cdot \vec{B}
$$
I don't get this to come out. I'm assuming that ##\vec L = \vec r \times \vec p##.
 
  • #9
@Garlic what I forgot is that ##\vec A = \frac 1 2(\vec r \times \vec B)## is only a solution for constant magnetic field ##\vec B##. The identity should drop out easily enough in that case. Although, perhaps not quite.

That said, your working in post #4 looks like it has a few mistakes.
 
  • #10
Again: The correct vector potential is (note the different sign!):
$$\vec{A}=\frac{1}{2}(\vec{B} \times \vec{r}).$$
Proof:
$$\vec{\nabla} \times \vec{A} = \vec{e}_j \epsilon_{jkl} \partial_k A_l =\frac{1}{2} \vec{e}_j \epsilon_{jkl} \partial_k (\epsilon_{lmn} B_m r_n)=\frac{1}{2} \vec{e}_j (\delta_{jm} \delta_{kn}-\delta_{jn} \delta_{km}) B_m \delta_{kn} = \frac{1}{2} \vec{e}_j (3 B_j-B_j)=\vec{B}.$$
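The same computation can be confirmed symbolically; a SymPy sketch (not from the thread), treating ##B_x, B_y, B_z## as constants:

```python
# Verify curl( (1/2) B x r ) = B for a constant field B.
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx, By, Bz = sp.symbols('B_x B_y B_z')   # constant field components
B, r = (Bx, By, Bz), (x, y, z)

# A = (1/2) B x r, written out componentwise
A = [sp.Rational(1, 2) * (B[1]*r[2] - B[2]*r[1]),
     sp.Rational(1, 2) * (B[2]*r[0] - B[0]*r[2]),
     sp.Rational(1, 2) * (B[0]*r[1] - B[1]*r[0])]

curl_A = [sp.diff(A[2], y) - sp.diff(A[1], z),
          sp.diff(A[0], z) - sp.diff(A[2], x),
          sp.diff(A[1], x) - sp.diff(A[0], y)]

assert [sp.simplify(c) for c in curl_A] == list(B)
```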
 
  • #11
After struggling with this problem I finally understood it.
My solution:
$$
\vec{ \nabla } \cdot ( \vec{ B } \times \vec{ r } ) = \partial_i \; \epsilon_{ijk} \; B_j \; r_k
$$

$$
= \epsilon_{ijk} \; ( \; ( \partial_i \; B_j \; ) r_k + B_j \; ( \partial_i \; r_k ) + B_j \; r_k \; \partial_i )
$$

$$
= \epsilon_{ijk} \; B_j \; r_k \; \partial_i
$$
Where the first term vanishes because we have a constant magnetic field ## \vec{B} ##
and the second term vanishes because ## \partial_i \, r_k = \delta_{ik} ## and ## \epsilon_{ijk} \, \delta_{ik} = \epsilon_{iji} = 0 ##.

Therefore
$$
\vec{p} \cdot \vec{A} + \vec{A} \cdot \vec{p} = \frac{ \hbar }{ 2 i } \cdot 2 \; \epsilon_{ijk} \; B_j \; r_k \; \partial_i
$$

$$
= \frac{ \hbar }{ 2 i } \cdot 2 \; \epsilon_{kij} \; B_i \; r_j \; \partial_k
$$

$$
= \frac{ \hbar }{ 2 i } \cdot 2 \; \epsilon_{ijk} \; B_i \; r_j \; \partial_k
$$

$$
= \frac{ \hbar }{ i } \; \vec{B} \cdot ( \; \vec{r} \times \vec{ \nabla } ) = \vec{B} \cdot \vec{L}
$$
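This final identity can also be machine-checked. Acting on a test function ##f## and dividing out the common factor ##\hbar/i##, the claim is ##\vec{\nabla} \cdot (\vec{A} f) + \vec{A} \cdot \vec{\nabla} f = \vec{B} \cdot (\vec{r} \times \vec{\nabla} f)## for ##\vec{A} = \frac{1}{2} \vec{B} \times \vec{r}## with constant ##\vec{B}##. A SymPy sketch (my own check, not from the thread):

```python
# Verify (p.A + A.p)[f] = (hbar/i) B.(r x grad f) with A = (1/2) B x r and
# constant B, after dividing out the common factor hbar/i.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)            # test function
Bx, By, Bz = sp.symbols('B_x B_y B_z')   # constant field components
B, r = (Bx, By, Bz), (x, y, z)

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1],
            U[2]*V[0] - U[0]*V[2],
            U[0]*V[1] - U[1]*V[0]]

A = [sp.Rational(1, 2) * c for c in cross(B, r)]
grad_f = [sp.diff(f, v) for v in r]

lhs = sum(sp.diff(a * f, v) for a, v in zip(A, r)) \
    + sum(a * g for a, g in zip(A, grad_f))            # div(A f) + A.grad f
rhs = sum(b * c for b, c in zip(B, cross(r, grad_f)))  # B.(r x grad f)

assert sp.simplify(lhs - rhs) == 0
```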

I tried to analyze the reasons why I had so much trouble proving this equation.
What I didn't think about in the beginning:
- Constant magnetic field: ## \vec{B} \rightarrow \partial_k \, B_m = 0 ##
- I had forgotten some basics of the Einstein summation convention, such as the fact that ## \delta_{nn} = 3 ##
- When I chose ## \vec{A} = -\frac{1}{2} \vec{r} \times \vec{B} ## I got a slightly different but physically equivalent expression, which needed to be transformed using ## \vec{r} \cdot ( \vec{B} \times \vec{ \nabla }) = -\vec{B} \cdot (\vec{r} \times \vec{ \nabla }) ## in order to get ## \vec{B} \cdot \vec{L} ##
- The lecture notes weren't mine, and the writer used ## \vec{L} \cdot \vec{B} ## instead of ## \vec{B} \cdot \vec{L} ##, which confused me a lot.

My conclusion:
I learned many things from this post and I hope this question will help others who also struggle with this problem. I would like to thank everyone for their help!
 
  • #12
For ##\nabla\times({\bf r\times B})## with ##\bf B## constant, use the 'bac minus cab' algebra,
but the order must be ##\bf B\nabla r##, so that ##\nabla## acts only on ##\bf r##.
Then ##\nabla\times({\bf r\times B})=({\bf B}\cdot\nabla){\bf r}-{\bf B}(\nabla\cdot {\bf r})##.
 
  • #13
Well, correct is
$$\vec{\nabla} \times (\vec{r} \times \vec{B})=(\vec{B} \cdot \vec{\nabla}) \vec{r}-\vec{B}(\vec{\nabla} \cdot \vec{r})=\vec{B}-3 \vec{B}=-2 \vec{B}.$$
The "nabla calculus" is shorter but imho much less clear than the Ricci calculus (see my posting #10).
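The ##-2\vec{B}## result is likewise easy to confirm symbolically; a SymPy sketch (not from the thread):

```python
# Verify curl(r x B) = -2 B for a constant field B, as in post #13.
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx, By, Bz = sp.symbols('B_x B_y B_z')   # constant field components
B, r = (Bx, By, Bz), (x, y, z)

# V = r x B, written out componentwise
V = [r[1]*B[2] - r[2]*B[1],
     r[2]*B[0] - r[0]*B[2],
     r[0]*B[1] - r[1]*B[0]]

curl_V = [sp.diff(V[2], y) - sp.diff(V[1], z),
          sp.diff(V[0], z) - sp.diff(V[2], x),
          sp.diff(V[1], x) - sp.diff(V[0], y)]

assert [sp.simplify(c) for c in curl_V] == [-2*Bx, -2*By, -2*Bz]
```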
 
