Proof of a vector identity in electromagnetism

  • #1
Ishika_96_sparkles
Homework Statement
Prove that ##\vec{\nabla} \big[\vec{M}\cdot\vec{\nabla} \big(\frac{1}{r}\big)\big]=\frac{3(\vec{M}\cdot\vec{r})\vec{r}}{r^5}-\frac{\vec{M}}{r^3}##
Relevant Equations
##\vec{\nabla} \big(\frac{1}{r}\big) =-\frac{\vec{r}}{r^3}##
During the calculations, I tried to solve the following
$$ \vec{\nabla} \big[\vec{M}\cdot\vec{\nabla} \big(\frac{1}{r}\big)\big] = -\big[\vec{\nabla}(\vec{M}\cdot \vec{r}) \frac{1}{r^3} + (\vec{M}\cdot \vec{r}) \big(\vec{\nabla} \frac{1}{r^3}\big) \big]$$

by evaluating the first term, i.e., ##\frac{1}{r^3} \vec{\nabla}(\vec{M}\cdot \vec{r})##, in the following two ways:
1) ##\frac{d (\vec{A}\cdot\vec{B})}{dt}=\frac{d \vec{A}}{dt} \cdot\vec{B} +\vec{A}\cdot \frac{d \vec{B}}{dt}## and
2) ##\frac{d (\vec{A}\cdot\vec{B})}{dt}=\frac{d (A_xB_x+A_yB_y+A_zB_z)}{dt}##.
We keep in mind that ##\vec{A}## (playing the role of ##\vec{M}##) is a constant vector here.

1) $$ \vec{\nabla}(\vec{M}\cdot \vec{r})= (\vec{\nabla}\vec{M}) \cdot \vec{r} +\vec{M}\cdot ( \vec{\nabla} \vec{r})$$
Now, since ##\vec{\nabla}\vec{A}## only seems to make sense if there is a dot product between them, i.e., the divergence ##\vec{\nabla}\cdot\vec{A}##, we proceed by assuming a dot product here to get
$$(\vec{\nabla} \cdot \vec{M}) \cdot \vec{r} +\vec{M} \cdot ( \vec{\nabla} \cdot \vec{r})= 0 +\vec{M}\cdot ( \vec{\nabla} \cdot \vec{r})$$
since ##\vec{M}## is a constant vector and ##\vec{r}=x\hat{i}+y\hat{j}+z\hat{k}##. Now, ##\vec{\nabla} \cdot \vec{r}=3##, and thus we get the answer ##3\vec{M}## and hence ##-\frac{3\vec{M}}{r^3}##.

However, by doing the method 2) we get
$$\vec{\nabla}(\vec{M}\cdot \vec{r})= \vec{\nabla}(M_1 x+M_2 y+M_3 z)$$
This is a gradient of a scalar and thus could be written as ##(M_1\vec{\nabla} x+M_2 \vec{\nabla}y+M_3 \vec{\nabla}z) =(M_1 \hat{i}+M_2 \hat{j}+M_3 \hat{k})=\vec{M}##

What am I doing wrong here?
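For reference, here is a minimal sympy sketch of method 2, with placeholder constant components ##M_1, M_2, M_3## (any names would do), computing ##\vec{\nabla}(\vec{M}\cdot\vec{r})## componentwise:
```python
import sympy as sp

# Coordinates and placeholder constant components of M. They are plain
# symbols independent of x, y, z, so sympy treats them as constants
# when differentiating.
x, y, z = sp.symbols('x y z', real=True)
M1, M2, M3 = sp.symbols('M1 M2 M3', real=True)

M = sp.Matrix([M1, M2, M3])
r = sp.Matrix([x, y, z])

scalar = M.dot(r)  # M . r = M1*x + M2*y + M3*z
grad = sp.Matrix([sp.diff(scalar, v) for v in (x, y, z)])

print(grad)        # Matrix([[M1], [M2], [M3]]), i.e. grad(M . r) = M
```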
 
  • #2
Ishika_96_sparkles said:
1) $$ \vec{\nabla}(\vec{M}\cdot \vec{r})= (\vec{\nabla}\vec{M}) \cdot \vec{r} +\vec{M}\cdot ( \vec{\nabla} \vec{r})$$
This is wrong. The LHS is a vector and the RHS doesn't make much sense. In any case:
$$\vec{\nabla}(\vec a \cdot \vec b) \ne (\vec \nabla \cdot \vec{a})\vec b + (\vec \nabla \cdot \vec{b})\vec a$$
Ishika_96_sparkles said:
However, by doing the method 2) we get
$$\vec{\nabla}(\vec{M}\cdot \vec{r})= \vec{\nabla}(M_1 x+M_2 y+M_3 z)$$
This is a gradient of a scalar and thus could be written as ##(M_1\vec{\nabla} x+M_2 \vec{\nabla}y+M_3 \vec{\nabla}z) =(M_1 \hat{i}+M_2 \hat{j}+M_3 \hat{k})=\vec{M}##
Method 2 is correct.
 
  • #3
To tidy this up, we have:
$$\vec \nabla (\frac 1 {r}) = -\frac{\vec r}{r^3}$$$$\vec{\nabla}(\vec M \cdot \vec r) = \vec M$$$$\vec \nabla (\frac 1 {r^3}) = -\frac{3\vec r}{r^5}$$Hence:
$$ \vec{\nabla} \big[\vec{M}\cdot\vec{\nabla} \big(\frac{1}{r}\big)\big] = - \vec{\nabla} \big[\frac{\vec{M}\cdot\vec r}{r^3}\big]$$$$-\big[\vec{\nabla}(\vec{M}\cdot \vec{r}) \frac{1}{r^3} + (\vec{M}\cdot \vec{r}) \big(\vec{\nabla} \frac{1}{r^3}\big) \big] = -\big[\frac{\vec M}{r^3} - (\vec{M}\cdot \vec{r})\frac{3\vec r}{r^5}\big ]$$$$= \frac{3(\vec{M}\cdot \vec{r})\vec r}{r^5} - \frac{\vec M}{r^3}$$
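As a quick symbolic check of this result, here is a minimal sympy sketch, again with placeholder constant components ##M_1, M_2, M_3##:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
M1, M2, M3 = sp.symbols('M1 M2 M3', real=True)

M = sp.Matrix([M1, M2, M3])
rvec = sp.Matrix([x, y, z])
r = sp.sqrt(x**2 + y**2 + z**2)

# Left-hand side: grad( M . grad(1/r) )
grad_inv_r = sp.Matrix([sp.diff(1/r, v) for v in (x, y, z)])  # equals -rvec/r**3
lhs = sp.Matrix([sp.diff(M.dot(grad_inv_r), v) for v in (x, y, z)])

# Right-hand side: 3 (M . r) r / r^5 - M / r^3
rhs = 3*M.dot(rvec)*rvec/r**5 - M/r**3

print((lhs - rhs).applyfunc(sp.simplify))  # zero vector
```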
 
  • #4
Thank you so much!!!
 
  • #5
PeroK said:
This is wrong. The LHS is a vector and the RHS doesn't make much sense.
Not so. The RHS will make sense if it is interpreted properly, which is by using the tensor formalism and realising that ##\vec \nabla \vec M## (ie, the gradient of ##\vec M##) should be interpreted as a rank 2 tensor. The application of this tensor on the position vector with the dot product is correct.

In index form
$$
\nabla(\vec M \cdot \vec r) = \vec e_i \partial_i (M_j x_j) = \vec e_i (\partial_i M_j) x_j + \vec e_i M_i
$$
where ##\partial_i M_j## would be the components of ##\nabla \vec M##.

Granted, this will often not be covered in introductory vector analysis and randomly moving the dot product around is certainly not the solution.

It should also be noted that the first term will always disappear as ##\vec M## is considered constant.
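To see the rank-2 term in action, here is a minimal sympy sketch with a deliberately non-constant placeholder field ##\vec{M}(x,y,z)##, checking ##\partial_i(M_j x_j) = (\partial_i M_j)x_j + M_i## componentwise:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
rvec = sp.Matrix([x, y, z])

# A deliberately NON-constant placeholder field M(x, y, z),
# so the rank-2 term (grad M)_{ij} = d_i M_j is non-trivial.
M = sp.Matrix([x*y, sp.sin(z), x + z**2])

gradM = sp.Matrix(3, 3, lambda i, j: sp.diff(M[j], coords[i]))  # d_i M_j

lhs = sp.Matrix([sp.diff(M.dot(rvec), v) for v in coords])  # grad(M . r)
rhs = gradM * rvec + M                                      # (d_i M_j) x_j + M_i

print((lhs - rhs).applyfunc(sp.expand))  # zero vector
```
When ##\vec M## is constant, gradM above is the zero matrix and only the ##M_i## term survives.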
 
  • #6
Orodruin said:
Not so. The RHS will make sense if it is interpreted properly, which is by using the tensor formalism and realising that ##\vec \nabla \vec M## (ie, the gradient of ##\vec M##) should be interpreted as a rank 2 tensor. ...
Back with a bang!
 
  • #7
Thank you @PeroK and @Orodruin for your interest in my query, your valuable insights, and your time.

@PeroK I came across the following identity (no idea who derived it or how it was constructed), but it solves my question straightaway and also raises another query. Consider the following identity
$$ \vec{\nabla}(\vec{A}\cdot\vec{B}) = \vec{A}\times (\vec{\nabla} \times \vec{B} ) + \vec{B} \times (\vec{\nabla} \times \vec{A} ) +(\vec{A}\cdot\vec{\nabla})\vec{B}+(\vec{B}\cdot\vec{\nabla}) \vec{A}$$
Where, if we use ##\vec{A}=\vec{M}## and ##\vec{B}=\vec{r}##, the cross-product terms vanish, and we are left with the two dot-product terms. Out of these terms only one is non-zero i.e., ##(\vec{M}\cdot\vec{\nabla}) \vec{r}## which gives ##\vec{M}## as the output.
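As a quick check of this identity, here is a minimal sympy sketch with two placeholder smooth fields ##\vec{A}## and ##\vec{B}##, chosen only for illustration:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in coords])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def conv(F, G):
    # (F . nabla) G: component j is sum_i F_i d_i G_j
    return sp.Matrix([sum(F[i]*sp.diff(G[j], coords[i]) for i in range(3))
                      for j in range(3)])

# Placeholder smooth fields, chosen only to exercise the identity.
A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([sp.sin(y), x**2, y + z])

lhs = grad(A.dot(B))
rhs = A.cross(curl(B)) + B.cross(curl(A)) + conv(A, B) + conv(B, A)

print((lhs - rhs).applyfunc(sp.simplify))  # zero vector
```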

Now, here is my question for both of you, in light of the following statement:
Orodruin said:
Granted, this will often not be covered in introductory vector analysis and randomly moving the dot product around is certainly not the solution.

Why did the gradient operator change into curls and dot products in this identity, while my assumption in the OP about a dot product in the terms ##\vec{\nabla}\vec{M}## and ##\vec{\nabla}\vec{r}## gave the wrong result?

@Orodruin Are these terms, i.e., ##\vec{\nabla}\vec{M}## and ##\vec{\nabla}\vec{r}##, what the older scientific literature calls dyads?
 
  • #8
Ishika_96_sparkles said:
$$ \vec{\nabla}(\vec{A}\cdot\vec{B}) = \vec{A}\times (\vec{\nabla} \times \vec{B} ) + \vec{B} \times (\vec{\nabla} \times \vec{A} ) +(\vec{A}\cdot\vec{\nabla})\vec{B}+(\vec{B}\cdot\vec{\nabla}) \vec{A}$$
It's not too hard to prove. Just a bit messy. It's a standard identity in vector calculus.
 
  • #9
Ishika_96_sparkles said:
Why did the gradient operator change into curls and dot products in this identity, while my assumption in the OP about a dot product in the terms ##\vec{\nabla}\vec{M}## and ##\vec{\nabla}\vec{r}## gave the wrong result?
Are you familiar with the BAC-CAB rule? This identity has the same vector structure with the addition that the derivative acts on a product so you get four terms due to the Leibniz rule rather than just two terms.
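For reference, the BAC-CAB rule ##\vec{a}\times(\vec{b}\times\vec{c}) = \vec{b}(\vec{a}\cdot\vec{c}) - \vec{c}(\vec{a}\cdot\vec{b})## can be checked with a minimal sympy sketch using three placeholder symbolic vectors:
```python
import sympy as sp

# Three placeholder symbolic vectors.
a = sp.Matrix(sp.symbols('a1:4'))
b = sp.Matrix(sp.symbols('b1:4'))
c = sp.Matrix(sp.symbols('c1:4'))

lhs = a.cross(b.cross(c))
rhs = b*a.dot(c) - c*a.dot(b)   # "BAC" minus "CAB"

print((lhs - rhs).applyfunc(sp.expand))  # zero vector
```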

Ishika_96_sparkles said:
@Orodruin Are these terms, i.e., ##\vec{\nabla}\vec{M}## and ##\vec{\nabla}\vec{r}##, what the older scientific literature calls dyads?
Yes. It is not a notation I am particularly fond of; I prefer index notation, which is usually much clearer.
 
  • #10
Orodruin said:
Are you familiar with the BAC-CAB rule? This identity has the same vector structure with the addition that the derivative acts on a product so you get four terms due to the Leibniz rule rather than just two terms.

Yes, I am aware of the rule and proved it as part of an exercise. So this identity uses two pieces of knowledge:
1) the BAC-CAB rule and 2) the Leibniz rule. I get it now that it's just a matter of notation or dressing up, but the structure is the same...just a nicer way of writing the identity.

Thank you very much for the reply.
 

Related to Proof of a vector identity in electromagnetism

What is a vector identity in electromagnetism?

A vector identity in electromagnetism is a mathematical expression that equates two different vector quantities or operations, often simplifying complex equations. These identities are used to manipulate and simplify equations in electromagnetism, such as Maxwell's equations, to facilitate easier analysis and solution.

Why are vector identities important in electromagnetism?

Vector identities are important in electromagnetism because they allow for the simplification of complex vector equations. This simplification is crucial for solving problems related to electric and magnetic fields, wave propagation, and other phenomena described by Maxwell's equations. By using vector identities, one can reduce the computational complexity and gain deeper insights into the physical meaning of the equations.

What are some common vector identities used in electromagnetism?

Some common vector identities used in electromagnetism include the divergence of a curl (which is always zero), the curl of a gradient (which is always zero), and the vector triple product identity. These identities help in transforming and simplifying the equations governing electromagnetic fields.

How do you prove the vector identity ∇ × (∇φ) = 0?

To prove the vector identity ##\vec{\nabla} \times (\vec{\nabla}\varphi) = 0##, where ##\varphi## is a scalar field, we use the definition of the curl and the properties of partial derivatives. The curl of a gradient is
$$\vec{\nabla} \times (\vec{\nabla}\varphi) = \Big(\frac{\partial}{\partial y}\frac{\partial \varphi}{\partial z} - \frac{\partial}{\partial z}\frac{\partial \varphi}{\partial y}\Big)\hat{i} + \Big(\frac{\partial}{\partial z}\frac{\partial \varphi}{\partial x} - \frac{\partial}{\partial x}\frac{\partial \varphi}{\partial z}\Big)\hat{j} + \Big(\frac{\partial}{\partial x}\frac{\partial \varphi}{\partial y} - \frac{\partial}{\partial y}\frac{\partial \varphi}{\partial x}\Big)\hat{k}.$$
Since mixed partial derivatives are equal (Clairaut's theorem), each term in parentheses is zero, proving that ##\vec{\nabla} \times (\vec{\nabla}\varphi) = 0##.
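A minimal sympy sketch that checks this for a placeholder scalar field:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# A placeholder smooth scalar field, chosen only for illustration.
phi = sp.sin(x*y) + x*sp.exp(z)

gphi = [sp.diff(phi, v) for v in (x, y, z)]   # gradient of phi
curl_grad = sp.Matrix([
    sp.diff(gphi[2], y) - sp.diff(gphi[1], z),
    sp.diff(gphi[0], z) - sp.diff(gphi[2], x),
    sp.diff(gphi[1], x) - sp.diff(gphi[0], y),
])

print(curl_grad)   # Matrix([[0], [0], [0]])
```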

Can vector identities be applied to non-Cartesian coordinate systems?

Yes, vector identities can be applied to non-Cartesian coordinate systems such as cylindrical and spherical coordinates. However, the expressions for divergence, gradient, and curl take different forms in these coordinate systems. The fundamental principles remain the same, but the specific forms of the vector identities need to be adapted to the coordinate system being used.
