Reciprocals of Derivatives: How Can They Simplify Calculus?

In summary, the reciprocal relation ∂x/∂y = 1/(∂y/∂x) only holds when the same variables are held constant on both sides of the equation, a subtlety that comes up when relating the divergence, gradient, and Laplacian in polar coordinates to their Cartesian forms.
  • #1
rsq_a
What is required for one to simply write,

[tex]\frac{\partial x}{\partial y} = \frac{1}{\left(\dfrac{\partial y}{\partial x}\right)}[/tex]

There are probably necessary conditions on the smoothness of the inverse map, but I'd like an easy way to know when I can just compute dx/dy by this method.
 
  • #2
The inverse function theorem addresses exactly this issue!

If f is continuously differentiable in a neighborhood of the point of interest, and the derivative at that point is not zero, then the inverse of the derivative is the derivative of the inverse.

Additionally, the inverse is continuously differentiable on a neighborhood of the image point, which takes care of the smoothness condition you brought up.

http://en.wikipedia.org/wiki/Inverse_function_theorem
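
For a quick sanity check, here is a small numerical sketch (added here, not part of the original post) of how the theorem plays out for f(x) = x^3 near x = 2, where f'(2) = 12 is nonzero:

[code]
# Finite-difference check of the inverse function theorem for f(x) = x**3.
# At x0 = 2 the derivative f'(x0) = 3*x0**2 = 12 is nonzero, so the theorem
# predicts (f^-1)'(f(x0)) = 1/12.

def f(x):
    return x ** 3

def f_inv(y):
    return y ** (1.0 / 3.0)  # the inverse, valid for y > 0

x0 = 2.0
y0 = f(x0)
h = 1e-6

df = (f(x0 + h) - f(x0 - h)) / (2 * h)              # ~ 12
df_inv = (f_inv(y0 + h) - f_inv(y0 - h)) / (2 * h)  # ~ 1/12

print(df_inv, 1.0 / df)  # both print ~ 0.08333, as predicted
[/code]

The two central differences agree to several decimal places, and the agreement breaks down at points where f'(x) = 0 (for instance x0 = 0), exactly as the theorem's hypothesis requires.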
 
  • #3
The fact that the original poster used partial derivatives makes me wonder if he's referring to a multivariable case, which is quite messy. :frown:
 
  • #4
Hurkyl said:
The fact that the original poster used partial derivatives makes me wonder if he's referring to a multivariable case, which is quite messy. :frown:

Not really!

If D(f(x)) denotes the best linear approximation to f at x (e.g., the Jacobian in finite dimensions, or the Fréchet derivative on a more general Banach space), then essentially the same result holds.

If f is a mapping between Banach spaces that is C^1 in a neighborhood of the point x, and if D(f(x)) is an isomorphism (e.g., in the real-to-real case, df/dx exists and is nonzero), then f is a diffeomorphism between neighborhoods of x and y = f(x), and [itex]D(f^{-1})(y) = \left(D(f(x))\right)^{-1}[/itex].
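
To make this concrete in two dimensions (a small illustration added here, not part of the original post), take the polar-to-Cartesian map [itex]f(r,\theta) = (r\cos\theta,\, r\sin\theta)[/itex]:

[tex]Df = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}, \qquad (Df)^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\dfrac{\sin\theta}{r} & \dfrac{\cos\theta}{r} \end{pmatrix}[/tex]

Since det Df = r, the Jacobian is an isomorphism whenever r ≠ 0, and the entries of (Df)^{-1} are the partial derivatives of (r, θ) with respect to (x, y). Notice that, in general, no individual entry of (Df)^{-1} is the reciprocal of the corresponding entry of Df; the inversion happens at the level of the whole matrix.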
 
  • #5
maze said:
Not really!

If D(f(x)) denotes the best linear approximation to f at x...
You can make anything simple if you change the problem. You're not talking about partial derivatives here...
 
  • #6
Let me say this better...


Things like the derivative of a function (as you described) and the exterior derivative are "intrinsic" properties of a function -- they depend on the function and nothing else.

Partial derivatives in Leibniz notation are more complicated, because they depend not only on the function and the variable you want to differentiate with respect to... but they also depend on what coordinate chart you've decided to use on the parameter space.

The net effect is that you have to jump through hoops to even figure out what the equation [itex]\partial y / \partial x = 1 / (\partial x / \partial y)[/itex] even means, let alone figure out whether or not it's a valid equation. Arildno described the type of thing you have to do.

And just to demonstrate (for everyone, particularly the opening poster) some of the bad things partial derivatives can do, consider the following:

You have three variables x, y, z related by x + y + z = 0. If we write z as a function of x and y, then [itex]\partial z / \partial x = -1[/itex]. If we write y as a function of x and z, then [itex]\partial y / \partial z = -1[/itex]. If we write x as a function of y and z, then [itex]\partial x / \partial y = -1[/itex]. Combining these three expressions:

[tex]\frac{\partial z}{\partial x} \frac{\partial y}{\partial z} \frac{\partial x}{\partial y} = -1[/tex]
:bugeye:
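
The -1 is not a typo. One way to see where it comes from (a sketch added here, not part of the original post): if F(x, y, z) = 0 defines each variable implicitly in terms of the other two, with all three partial derivatives of F nonzero, then implicit differentiation gives

[tex]\left(\frac{\partial z}{\partial x}\right)_y = -\frac{F_x}{F_z}, \qquad \left(\frac{\partial y}{\partial z}\right)_x = -\frac{F_z}{F_y}, \qquad \left(\frac{\partial x}{\partial y}\right)_z = -\frac{F_y}{F_x}[/tex]

so the product of the three is always -1, not the +1 that naive cancellation of the "differentials" would suggest. Taking F = x + y + z recovers the three derivatives above.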
 
  • #7
Unfortunately, my post was riddled with errors; I'll make a better one later.
 
  • #8
Hurkyl said:
Let me say this better... Things like the derivative of a function (as you described) and the exterior derivative are "intrinsic" properties of a function -- they depend on the function and nothing else.

Partial derivatives in Leibniz notation are more complicated, because they depend not only on the function and the variable you want to differentiate with respect to... but they also depend on what coordinate chart you've decided to use on the parameter space.

The net effect is that you have to jump through hoops to even figure out what the equation [itex]\partial y / \partial x = 1 / (\partial x / \partial y)[/itex] even means, let alone figure out whether or not it's a valid equation. Arildno described the type of thing you have to do.

And just to demonstrate (for everyone, particularly the opening poster) some of the bad things partial derivatives can do, consider the following:

You have three variables x, y, z related by x + y + z = 0. If we write z as a function of x and y, then [itex]\partial z / \partial x = -1[/itex]. If we write y as a function of x and z, then [itex]\partial y / \partial z = -1[/itex]. If we write x as a function of y and z, then [itex]\partial x / \partial y = -1[/itex]. Combining these three expressions:

[tex]\frac{\partial z}{\partial x} \frac{\partial y}{\partial z} \frac{\partial x}{\partial y} = -1[/tex]
:bugeye:

I seem to have forgotten a lot of my Calculus.

The problem which provoked the question was that I was trying to figure out the relationship between the divergence, gradient, Laplacian, etc. in polar coordinates and their Cartesian counterparts.

So for example, if [tex]x = r\cos\theta[/tex] and [tex]y = r\sin\theta[/tex], I immediately wrote down,

[tex]\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}} = -\frac{1}{r\sin\theta}[/tex]

However, if you write [tex]\theta = \text{atan}(y/x)[/tex] and differentiate, you get [tex]\frac{\partial \theta}{\partial x} = -\frac{\sin\theta}{r}[/tex], which seems correct.
 
  • #9
rsq_a said:
I seem to have forgotten a lot of my Calculus.

The problem which provoked the question was that I was trying to figure out the relationship between the divergence, gradient, Laplacian, etc. in polar coordinates and their Cartesian counterparts.

So for example, if [tex]x = r\cos\theta[/tex] and [tex]y = r\sin\theta[/tex], I immediately wrote down,

[tex]\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}} = -\frac{1}{r\sin\theta}[/tex]

However, if you write [tex]\theta = \text{atan}(y/x)[/tex] and differentiate, you get [tex]\frac{\partial \theta}{\partial x} = -\frac{\sin\theta}{r}[/tex], which seems correct.

The hitch, if I recall correctly, is that

[tex]\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}}[/tex]

is only true with a very important piece of information that you left off: namely, which variables are being held constant. The correct statement is that

[tex]\left(\frac{\partial \theta}{\partial x}\right)_{y} = \frac{1}{\left(\dfrac{\partial x}{\partial \theta}\right)_y}[/tex]

Note that on both sides of the equality, the variable y is being held constant - hence, you cannot just differentiate x with respect to theta while holding r fixed, because r depends on x! Here's the derivation:

[tex]x = r\cos\theta = \sqrt{x^2 + y^2}\cos\theta \Rightarrow \left(\frac{\partial x}{\partial \theta}\right)_y = \frac{x\left(\frac{\partial x}{\partial \theta}\right)_y}{\sqrt{x^2+y^2}}\cos\theta - r\sin\theta[/tex]

Solve for the derivative:

[tex]\left(\frac{\partial x}{\partial \theta}\right)_y(1 - \cos^2\theta) = -r\sin\theta \Rightarrow \left(\dfrac{\partial x}{\partial \theta}\right)_y = -\frac{r}{\sin\theta}[/tex]

Hence,

[tex]\left(\frac{\partial \theta}{\partial x}\right)_{y} = \frac{1}{\left(\dfrac{\partial x}{\partial \theta}\right)_y} = -\frac{\sin\theta}{r}[/tex]

In general, the rule is that a partial derivative equals the reciprocal of the partial derivative with the "numerator" and "denominator" variables interchanged, but you MUST hold the SAME variables constant on both sides. I'm not sure of a good, general way to write it down. Maybe

[tex]\left(\frac{\partial y_i(x_1,x_2,\ldots)}{\partial x_j}\right)_{x_k,\, k \neq j} = \frac{1}{\left(\dfrac{\partial x_j(x_1,\ldots,x_{j-1},x_{j+1},\ldots,y_i)}{\partial y_i}\right)_{x_k,\, k \neq j}}[/tex]

Note that on the RHS you have x_j written as a function of all the other x_k's and of y_i (it is typically only defined implicitly, by inverting the original relation).
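
One way to see why the same variables must be held fixed on both sides (a sketch added here, not from the original reply): once every x_k with k ≠ j is frozen, y_i and x_j are each functions of a single remaining variable, so the ordinary one-variable chain rule applies to the composition of x_j → y_i with its inverse:

[tex]\left(\frac{\partial y_i}{\partial x_j}\right)_{x_k,\,k\neq j}\left(\frac{\partial x_j}{\partial y_i}\right)_{x_k,\,k\neq j} = \frac{d y_i}{d y_i} = 1[/tex]

which is exactly the reciprocal relation, valid wherever the derivative in the denominator is nonzero. Holding a different set of variables fixed on one side changes which one-dimensional curve you are differentiating along, and the chain rule argument no longer applies.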
 

FAQ: Reciprocals of Derivatives: How Can They Simplify Calculus?

1. What are reciprocals of derivatives?

The reciprocal of a derivative is the quantity 1/(dy/dx). By the inverse function theorem, when y = f(x) is continuously differentiable near a point and f'(x) ≠ 0 there, this reciprocal equals dx/dy, the derivative of the inverse function. It is not the same thing as an antiderivative or integral.

2. Why are reciprocals of derivatives important?

They let you compute the derivative of an inverse function without solving for the inverse explicitly. This is useful whenever you change variables, for example when converting the gradient, divergence, or Laplacian between Cartesian and polar coordinates, as discussed in the thread above.

3. How do you find the reciprocal of a derivative?

Differentiate the function, check that the derivative is nonzero at the point of interest, and take its reciprocal: if y = f(x) with f'(x) ≠ 0, then dx/dy = 1/f'(x). In a multivariable setting you must also state which variables are held constant, and the same variables must be held constant on both sides of the relation.
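
A minimal worked example: if y = e^x, then

[tex]\frac{dy}{dx} = e^x \quad\Rightarrow\quad \frac{dx}{dy} = \frac{1}{e^x} = \frac{1}{y},[/tex]

which agrees with differentiating the inverse x = ln(y) directly.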

4. What is the relationship between derivatives and reciprocals of derivatives?

They are reciprocals of one another in the precise sense given by the inverse function theorem: if f is continuously differentiable near x and f'(x) ≠ 0, then f has a differentiable inverse near y = f(x) and (f^{-1})'(y) = 1/f'(x). Differentiating a function and then taking the reciprocal of the result does not recover the original function.

5. Can reciprocals of derivatives be used to solve optimization problems?

Not directly. Optimization relies on finding points where the derivative is zero, and at exactly those points the reciprocal of the derivative is undefined. Reciprocals of derivatives are useful instead when an inverse function or a change of variables appears in the problem, for example when rewriting a derivative with respect to a new variable.
