The Gradient of a Vector: Understanding Second Order Derivatives

In summary, the gradient of a vector field is a second-order tensor whose entries are the partial derivatives of each component of the field with respect to each coordinate; it is useful, for example, in forming Taylor expansions of scalar functions. The curl of a vector field is a pseudovector in three dimensions and generalizes to a skew-symmetric tensor in N dimensions. The gradient of a dot product can be found using the "BAC-CAB" identity, provided the product rule is taken into account.
  • #1
Weather Freak
First off, this is not a homework problem, but rather an issue that I've had for a while now and haven't quite been able to reason out to my satisfaction on my own.

[itex]\mathbf u = u\mathbf i + v\mathbf j + w\mathbf k[/itex]
What is [itex]\boldsymbol{\nabla}\mathbf u[/itex]?

I know what the gradient of a function is, but this is the gradient of a vector. I know what the answer is, because we did it a kazillion times in class, and I know how to get it by memorizing, but what is the technique at work here? There must be a method to the madness somewhere. I've tried looking up the gradient of a vector, gradient of a tensor (thinking there might be a general formula for gradient of a tensor that would reduce to gradient of a vector), but it has all led to nothing but confusion.

Could someone open my eyes a bit?

Thanks!

Kyle
 
  • #3
granpa said:
Neither of these is the gradient of a vector field. The divergence of a vector field, [itex]\boldsymbol{\nabla}\cdot \mathbf F[/itex], is a scalar, while the curl of a vector field, [itex]\boldsymbol{\nabla}\times \mathbf F[/itex], is a vector. The gradient of a vector field is a second-order tensor:

[tex](\boldsymbol{\nabla}\mathbf F)_{ij} = \frac{\partial F_i(\boldsymbol x)}{\partial x_j}[/tex]

One way to look at this: The ith row of the gradient of a vector field [itex]\mathbf F(\mathbf x)[/itex] is the plain old vanilla gradient of the scalar function [itex]F_i(\mathbf x)[/itex].
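To make that row interpretation concrete, here is a small numerical sketch of my own (not from the thread): the gradient of a vector field built by central finite differences, with an arbitrary example field.

```python
# Numerically form the gradient of a vector field F: R^3 -> R^3.
# Row i of the result is the ordinary gradient of the scalar component F_i.

def vector_field(x):
    # Example field F(x, y, z) = (x*y, y*z, z*x); any smooth field works.
    return [x[0] * x[1], x[1] * x[2], x[2] * x[0]]

def gradient_of_vector(F, x, h=1e-6):
    """Return the matrix G with G[i][j] = dF_i/dx_j at the point x."""
    n = len(x)
    G = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            # Central difference in coordinate j for every component i
            G[i][j] = (Fp[i] - Fm[i]) / (2 * h)
    return G

G = gradient_of_vector(vector_field, [1.0, 2.0, 3.0])
# Row 0 is the gradient of F_0 = x*y, namely (y, x, 0) = (2, 1, 0) here
```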

One place where the concept is useful is in forming a Taylor expansion of a scalar function. To first order,

[tex]f(\mathbf x_0 + \Delta \mathbf x) \approx f(\mathbf x_0) +
\boldsymbol{\nabla} f(\mathbf x)|_{\mathbf x=\mathbf x_0}\cdot \Delta \mathbf x[/tex]

Higher order expansions require higher order derivatives. The second order expansion requires taking the gradient of the gradient (i.e., taking the gradient of a vector).

[tex]f(\mathbf x_0 + \Delta \mathbf x) \approx
f(\mathbf x_0) +
\sum_i
(\boldsymbol{\nabla} f)(\mathbf x_0)_i
\Delta x_i +
\frac{1}{2}\sum_{i,j}
\Delta x_i
(\boldsymbol{\nabla}(\boldsymbol{\nabla} f))(\mathbf x_0)_{ij}
\Delta x_j
[/tex]
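As a sanity check on the expansion (my own sketch, not from the original post), one can build the gradient and the gradient-of-the-gradient by nested finite differences and confirm the second-order expansion, with the conventional factor of 1/2 on the quadratic term. The example function is an arbitrary choice.

```python
import math

def f(x):
    # Arbitrary smooth scalar function of two variables
    return math.sin(x[0]) * math.exp(x[1])

def grad(f, x, h=1e-5):
    """Ordinary gradient of a scalar function by central differences."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def grad_of_grad(f, x, h=1e-4):
    """Gradient of the gradient: H[i][j] = d^2 f / dx_i dx_j."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        gp, gm = grad(f, xp), grad(f, xm)
        for i in range(n):
            H[i][j] = (gp[i] - gm[i]) / (2 * h)
    return H

x0 = [0.3, 0.2]
dx = [0.01, -0.02]
g = grad(f, x0)
H = grad_of_grad(f, x0)
taylor2 = (f(x0)
           + sum(g[i] * dx[i] for i in range(2))
           + 0.5 * sum(dx[i] * H[i][j] * dx[j]
                       for i in range(2) for j in range(2)))
err = abs(f([x0[k] + dx[k] for k in range(2)]) - taylor2)
# err is third order in |dx|, so it should be very small
```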

One application of this is computing the gravity gradient torque induced on a vehicle.
 
  • #4
I would think that the curl of a vector field is only a vector (technically a pseudovector, which is really a tensor) in three dimensions; in more than three it's a tensor.

Perhaps the tensor you are talking about is simply the true value of the curl. Otherwise I have no idea what you are talking about.

I didn't mention grad because he asked for the gradient of a vector, not a scalar.
 
  • #5
I am not talking about curl, which is a pseudovector in three dimensions and generalizes to an antisymmetric [itex]N\times N[/itex] tensor (with [itex]N(N-1)/2[/itex] independent components) in N dimensions. I am talking about the [itex]N\times N[/itex] tensor

[tex](\boldsymbol{\nabla}\mathbf F)_{ij} = \frac{\partial F_i(\boldsymbol x)}{\partial x_j}[/tex]

which I goofed up in my first post (now corrected).:redface:

If [itex]f(\mathbf x)[/itex] is a scalar function, then the gradient [itex](\boldsymbol{\nabla}f)_i = \partial f(\mathbf x)/\partial x_i[/itex] is a vector field. The "gradient" of this vector field is what I was talking about in the second part of my post.

Aside: Is there a name for the second-order spatial derivative [itex]\partial^2 f(\mathbf x)/\partial x_i \partial x_j[/itex]?
 
  • #6
D H said:
I am not talking about curl ... I am talking about the [itex]N\times N[/itex] tensor [itex](\boldsymbol{\nabla}\mathbf F)_{ij} = \partial F_i(\boldsymbol x)/\partial x_j[/itex] ... The "gradient" of this vector is what I was talking about in the second part of my post.

Thanks a lot, that definitely answers the question! The trick of each row being the gradient of [itex]F_i[/itex] really makes it easy to remember as well.
 
  • #7
I have a follow-up question to this thread. The gradient of a dot product is given by

[tex]
\nabla ( A \cdot B) = \underbrace{(B\cdot \nabla)A + (A\cdot \nabla)B}_{\mbox{gradients of vectors?}} + B \times (\nabla\times A) + A \times (\nabla \times B)
[/tex]

All the terms in the equation should be vectors, not second-order tensors, which is what gradients of vectors were explained to be earlier in this thread. How, then, should the first two terms on the right-hand side be interpreted?

Also, the hint I've seen for deriving the identity is to use the "BAC-CAB" identity

[tex]
A \times (B \times C) = B (A \cdot C) - C (A \cdot B),
[/tex]

which can be rewritten

[tex]
B(A \cdot C) = A \times (B \times C) + (A \cdot B) C.
[/tex]
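For ordinary constant vectors this rearrangement is perfectly valid, as a quick numerical check (my own sketch, with arbitrary example vectors, not part of the original post) confirms; the trouble only begins when one of the "vectors" is the operator [itex]\nabla[/itex].

```python
# Check B (A . C) = A x (B x C) + (A . B) C for plain constant vectors.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

# Arbitrary example vectors
A = (1.0, -2.0, 0.5)
B = (0.3, 4.0, -1.0)
C = (2.0, 1.0, 1.5)

lhs = tuple(B[i] * dot(A, C) for i in range(3))
rhs = tuple(cross(A, cross(B, C))[i] + dot(A, B) * C[i] for i in range(3))
# lhs and rhs agree componentwise (up to rounding)
```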

But using this to expand the gradient of the dot product of two vectors (letting [itex]B = \nabla[/itex], [itex]A = A[/itex], and [itex]C = B[/itex]) appears to yield

[tex]
\nabla (A \cdot B) = A \times (\nabla \times B) + (A \cdot \nabla) B,
[/tex]

which is not consistent with the expansion given in textbooks unless

[tex] (B \cdot \nabla) A + A \times (\nabla \times B) = 0 [/tex],

and by symmetry, I don't think that can be true (wouldn't the whole right hand side of the textbook grad of dot product expansion then be zero?).
Please help.

Thanks, Genya
 
  • #8
musemonkey said:
[tex]
B(A \cdot C) = A \times (B \times C) + (A \cdot B) C.
[/tex]

But using this to expand the gradient of the dot product of two vectors (letting [tex] B = \nabla, A = A, \mbox{ and } C = B [/tex]) appears to yield

[tex]
\nabla (A \cdot B) = A \times (\nabla \times B) + (A \cdot \nabla) B,
[/tex]


You can't do that. [itex]\nabla[/itex] is not a vector. It's a differential operator that we can sometimes treat like a vector, but this is not one of those cases, because that substitution fails to account for the product rule.

For example, for ordinary vectors [itex]\mathbf{A}[/itex] and [itex]\mathbf{B}[/itex] and a scalar function [itex]\psi[/itex], the following identity holds:

[tex]\mathbf{A} \cdot (\psi \mathbf{B}) = \psi \mathbf{A}\cdot\mathbf{B}[/tex]

If you were to just plug in [itex]\mathbf{A} "=" \nabla[/itex] now, you would arrive at

[tex]\nabla \cdot (\psi \mathbf{B}) = \psi \nabla \cdot \mathbf{B}[/tex]

which is simply not correct. The correct expression is

[tex]\nabla \cdot (\psi \mathbf{B}) = \psi \nabla \cdot \mathbf{B} + \mathbf{B}\cdot \nabla \psi[/tex]
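One can confirm the corrected identity numerically; this is a sketch of my own (not from the post), with an arbitrary smooth [itex]\psi[/itex] and [itex]\mathbf B[/itex] and central finite differences.

```python
import math

def psi(x, y, z):
    # Arbitrary scalar function
    return x * y + z * z

def B(x, y, z):
    # Arbitrary vector field
    return (math.sin(y), x * z, y)

p = (0.5, -0.3, 0.8)   # evaluation point
h = 1e-5

def partial(f, i, comp=None):
    """d f / dx_i at p; comp selects a component if f is vector-valued."""
    a = list(p); a[i] += h
    b = list(p); b[i] -= h
    fa, fb = f(*a), f(*b)
    if comp is not None:
        fa, fb = fa[comp], fb[comp]
    return (fa - fb) / (2 * h)

def psiB(x, y, z):
    s = psi(x, y, z)
    bx, by, bz = B(x, y, z)
    return (s * bx, s * by, s * bz)

lhs = sum(partial(psiB, i, comp=i) for i in range(3))       # div(psi B)
div_B = sum(partial(B, i, comp=i) for i in range(3))
grad_psi = [partial(psi, i) for i in range(3)]
rhs = psi(*p) * div_B + sum(B(*p)[i] * grad_psi[i] for i in range(3))
# lhs and rhs agree to finite-difference accuracy
```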
 
  • #9
I thought it was OK to substitute [itex]\nabla[/itex] into the BAC-CAB identity because the Feynman Lectures vol. II, sec. 2-7 contains the following:

[tex] A \times (B \times C) = B ( A \cdot C) - (A \cdot B) C [/tex]
[tex] \nabla \times (\nabla \times h) = \nabla (\nabla \cdot h) - (\nabla \cdot \nabla) h [/tex]

Thank you Mute for the speedy response, but I'm not sure what to make of it. It's still unclear what terms of the form [itex](B\cdot \nabla)A[/itex] mean, and how the derivation of the gradient of a dot product formula is supposed to be done using the BAC-CAB identity.
 
  • #10
musemonkey said:
[tex]
\nabla ( A \cdot B) = \underbrace{(B\cdot \nabla)A + (A\cdot \nabla)B}_{\mbox{gradients of vectors?}} + B \times (\nabla\times A) + A \times (\nabla \times B)
[/tex]

All the terms in the equation should be vectors, not second order tensors, which is what gradients of vectors were explained to be earlier in this thread. How then to interpret the first two terms of the right hand side?

The first term in the above equation in cartesian coordinates is

[tex](B\cdot \nabla)A = \begin{bmatrix}
b_x\frac{\partial a_x}{\partial x} +
b_y\frac{\partial a_x}{\partial y} +
b_z\frac{\partial a_x}{\partial z} \\
b_x\frac{\partial a_y}{\partial x} +
b_y\frac{\partial a_y}{\partial y} +
b_z\frac{\partial a_y}{\partial z} \\
b_x\frac{\partial a_z}{\partial x} +
b_y\frac{\partial a_z}{\partial y} +
b_z\frac{\partial a_z}{\partial z}
\end{bmatrix}[/tex]

One way to think of [itex]B\cdot\nabla[/itex] is as defining a new operator:

[tex]B\cdot\nabla \equiv \sum_j b_j\frac{\partial}{\partial x_j}[/tex]
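A minimal numerical sketch of this operator (my own illustration, not from the post; the field [itex]A[/itex], the vector [itex]B[/itex], and the point are arbitrary choices):

```python
def A(x, y, z):
    # Example vector field; any smooth field works.
    return (x * y, y * z, z * x)

def b_dot_grad(A, Bvec, p, h=1e-6):
    """(B . grad) A at p: component i is sum_j B_j * dA_i/dx_j."""
    out = []
    for i in range(3):
        total = 0.0
        for j in range(3):
            qp = list(p); qp[j] += h
            qm = list(p); qm[j] -= h
            total += Bvec[j] * (A(*qp)[i] - A(*qm)[i]) / (2 * h)
        out.append(total)
    return out

res = b_dot_grad(A, (1.0, 0.0, 0.0), (1.0, 2.0, 3.0))
# With B = (1, 0, 0) this is just dA/dx, which for this A is (y, 0, z)
```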


Also, the hint I've seen for deriving the identity is to use the "BAC-CAB" identity ...

The path you took is, as Mute noted, invalid. The "BAC-CAB" identity can be used if one uses Feynman's notation:

[tex]\nabla(A\cdot B) = \nabla_A(A\cdot B) + \nabla_B(A\cdot B)
= \nabla_A(B\cdot A) + \nabla_B(A\cdot B)[/tex]

where [itex]\nabla_A[/itex] only operates on A and [itex]\nabla_B[/itex] only operates on B. Then one can "safely" use the BAC-CAB identity as you did:

[tex]\begin{aligned}
\nabla_A(B\cdot A) &= B \times (\nabla_A \times A) + (B \cdot \nabla_A) A \\
&= B \times (\nabla \times A) + (B \cdot \nabla) A \\
\nabla_B(A\cdot B) &= A \times (\nabla_B \times B) + (A \cdot \nabla_B) B \\
&= A \times (\nabla \times B) + (A \cdot \nabla) B \\
\nabla(A\cdot B) &= \nabla_A(B\cdot A) + \nabla_B(A\cdot B) \\
&= (B \cdot \nabla) A + (A \cdot \nabla) B + B \times (\nabla \times A) + A \times (\nabla \times B)
\end{aligned}
[/tex]

This, however, is too much sleight-of-hand for me. It happens to work. Your approach happened not to work.
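Sleight-of-hand or not, the resulting identity is correct, and a direct numerical check (a sketch of my own, with arbitrary polynomial example fields, not part of the original post) confirms it:

```python
def A(x, y, z):
    # Arbitrary example field
    return (x * y, z, x + y)

def Bf(x, y, z):
    # Arbitrary example field
    return (y * z, x * x, z)

p = (0.4, -0.7, 1.1)   # evaluation point
h = 1e-5

def d(f, j, i):
    """dF_i/dx_j at p by central differences."""
    a = list(p); a[j] += h
    b = list(p); b[j] -= h
    return (f(*a)[i] - f(*b)[i]) / (2 * h)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def curl(f):
    return (d(f, 1, 2) - d(f, 2, 1),
            d(f, 2, 0) - d(f, 0, 2),
            d(f, 0, 1) - d(f, 1, 0))

Ap, Bp = A(*p), Bf(*p)

def dot_AB(x, y, z):
    a, b = A(x, y, z), Bf(x, y, z)
    return (sum(a[i] * b[i] for i in range(3)),)  # 1-tuple so d() can index it

lhs = [d(dot_AB, j, 0) for j in range(3)]                         # grad(A.B)
BgradA = [sum(Bp[j] * d(A, j, i) for j in range(3)) for i in range(3)]
AgradB = [sum(Ap[j] * d(Bf, j, i) for j in range(3)) for i in range(3)]
BxcurlA = cross(Bp, curl(A))
AxcurlB = cross(Ap, curl(Bf))
rhs = [BgradA[i] + AgradB[i] + BxcurlA[i] + AxcurlB[i] for i in range(3)]
# lhs and rhs agree component by component to finite-difference accuracy
```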
 
  • #11
Thank you DH and Mute! You answered everything, and I appreciate your taking the time to write everything out.
 
  • #12
Hello,

About the second-order spatial derivative [itex]\partial^2 f(\mathbf x)/\partial x_i \partial x_j[/itex] which DH wrote above: in tensor notation, can it be written as

[tex]\nabla(\nabla f(\mathbf x))[/tex]?
 

FAQ: The Gradient of a Vector: Understanding Second Order Derivatives

What is the definition of the gradient of a vector?

The gradient of a vector is a mathematical operation that measures the rate of change of each component of a vector field in each coordinate direction. It is written with the symbol ∇, known as the "del" or "nabla" operator, and for a vector field the result is a second-order tensor.

How is the gradient of a vector calculated?

The gradient of a vector is calculated by taking the partial derivative of each component of the vector field with respect to each coordinate. This results in a second-order tensor (a matrix) whose entry (i, j) is the rate of change of the ith component in the jth coordinate direction.

What is the significance of the gradient of a vector?

The gradient of a vector is significant because it captures how a field changes from point to point: for a scalar field it gives the direction and magnitude of steepest change, and for a vector field it collects the rate of change of every component in every direction. It is used in many fields such as physics, engineering, and economics to analyze and understand the behavior of vector fields.

Can the gradient of a vector be negative?

Yes, individual components of the gradient can be negative. A negative component indicates that the corresponding part of the field is decreasing in that direction, a positive component indicates an increase, and a zero component indicates that the field is constant in that direction.

How is the gradient of a vector used in real-world applications?

The gradient of a vector is used in various real-world applications such as calculating the flow of fluids, analyzing electric and magnetic fields, and optimizing functions in machine learning and data science. It is also used in navigation systems to determine the direction and rate of change of motion.
