Definition of a derivative in several variables

In summary: the definition of a derivative in several variables parallels the one-variable definition but requires some linear algebra. The derivative of a function of two variables is the linear transformation that best approximates the function in a neighborhood of a point; the thread below derives the formula [tex]f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)[/tex] and works through [itex]f(x,y) = \sin(x+y)[/itex] at (1,1).
  • #1
Gramsci

Homework Statement


Hello, I'm trying to grasp the definition of a derivative in several variables, that is, how to tell whether a function is differentiable at a point.
My book tells me that a function of two variables is differentiable if:
[tex]f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
And if [tex] \rho[/tex] goes to zero as (h,k) --> 0. First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Apart from that, I'm trying to do show that a derivative exists at a point:
[tex]f(x,y) = sin(x+y) [/tex] at (1,1)

Homework Equations


-

The Attempt at a Solution


To show that the derivative exists:
[tex]f(1+h,1+k)-f(1,1) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
and:
[tex] \rho(h,k) = \frac{\sin(2+(h+k))-\sin(2)}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq 0 \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = 0 [/tex]
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
 
  • #2
Gramsci said:

Homework Statement


Hello, I'm trying to grasp the definition of a derivative in several variables, that is, how to tell whether a function is differentiable at a point.
My book tells me that a function of two variables is differentiable if:
[tex]f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
And if [tex] \rho[/tex] goes to zero as (h,k) --> 0. First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Just as the derivative in one variable of f(x) gives the slope of the tangent line to y= f(x), so the derivative in two variables gives the inclination of the tangent plane to the surface z= f(x,y). Why that exact formula? That requires a little linear algebra. In general, if [itex]f(x_1, x_2, ..., x_n)[/itex] is a function of n variables, its derivative at [itex](x_{01}, x_{02}, ..., x_{0n})[/itex] is not a number or a set of numbers but the linear function that best approximates f in some neighborhood of that point.

In one variable, any linear function is of the form y= mx+ b. Since b is given by the point, the new information is "m", and we think of that as being the derivative. If [itex]f(x_1, x_2, ..., x_n)[/itex] is a function from [itex]R^n[/itex] to R, then a linear function from [itex]R^n[/itex] to R is of the form [itex]y= a_1x_1+ a_2x_2+ ... + a_nx_n+ b[/itex], which we can write as [itex]y- b= <a_1, a_2, ..., a_n>\cdot <x_1, x_2, ..., x_n>[/itex], the product being the dot product of two vectors. In that way, we can think of the vector [itex]<a_1, a_2, ..., a_n>[/itex] as being the derivative.

A more precise definition is this: if [itex]f: R^n\to R^m[/itex], the derivative of f at [itex]x_0= (x_{01}, x_{02}, ..., x_{0n})[/itex] is the linear transformation, L, from [itex]R^n[/itex] to [itex]R^m[/itex] such that
[tex]f(x)= f(x_0)+ L(x- x_0)+ \epsilon(x)[/tex]
for some function [itex]\epsilon[/itex] from [itex]R^n[/itex] to [itex]R^m[/itex] such that
[tex]\lim_{x\to x_0} \frac{\epsilon(x)}{|x- x_0|}= 0[/tex]
where [itex]|x- x_0|[/itex] is the length of the vector [itex]x- x_0[/itex].

Of course, that requires that [itex]\epsilon(x)[/itex] go to 0 as x goes to [itex]x_0[/itex], so this is an approximation of f around [itex]x_0[/itex]. The requirement that [itex]\epsilon(x)/|x- x_0|[/itex] also go to 0 is essentially the requirement that this be the best linear approximation.
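A concrete instance may help (my own example, not from the reply above): for the map [itex]f: R^2\to R^2[/itex] given by [itex]f(x,y) = (x^2y,\ x+y)[/itex], the linear transformation L at [itex](x_0, y_0)[/itex] is multiplication by the matrix of partial derivatives,
[tex]L = \begin{pmatrix} 2x_0y_0 & x_0^2 \\ 1 & 1 \end{pmatrix}[/tex]
the Jacobian matrix: each row collects the partials of one component of f.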

In the case [itex]R^2\to R[/itex], a "real valued function of two variables", as I said, any linear transformation from [itex]R^2[/itex] to R can be represented as a dot product: [itex]<a, b>\cdot<x, y>= ax+ by[/itex], so the definition above becomes
[tex]f(x,y)= f(x_0, y_0)+ a(x- x_0)+ b(y- y_0)+ \epsilon(x,y)[/tex]
and
[tex]\lim_{(x,y)\to (x_0,y_0)}\frac{\epsilon(x,y)}{|(x-x_0, y- y_0)|}= 0[/tex]
Of course, in two dimensions that "length" is [itex]\sqrt{(x-x_0)^2+ (y- y_0)^2}[/itex]. If we take [itex]x- x_0= h[/itex] and [itex]y- y_0= k[/itex], that last condition says that
[tex]\frac{\epsilon}{\sqrt{h^2+ k^2}}[/tex]
goes to 0. If we set that ratio equal to [itex]\rho(h, k)[/itex], then [itex]\epsilon= \rho(h,k)\sqrt{h^2+ k^2}[/itex] and the result is your formula.
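To see that definition in action numerically, here is a quick sketch of mine (not from the thread; the function [itex]f(x,y)=x^2+y[/itex] and the point (1,2) are arbitrary illustrative choices). With the partials [itex]A_1 = 2[/itex] and [itex]A_2 = 1[/itex] plugged in, the ratio [itex]\rho(h,k)[/itex] visibly shrinks as (h,k) shrinks:

```python
import math

# rho(h,k) = eps / sqrt(h^2+k^2), where eps is what is left of
# f(a+h,b+k) - f(a,b) after subtracting the linear part A1*h + A2*k.
def rho(f, a, b, A1, A2, h, k):
    eps = f(a + h, b + k) - f(a, b) - A1 * h - A2 * k
    return eps / math.sqrt(h**2 + k**2)

# Sample function f(x,y) = x^2 + y at (a,b) = (1,2): A1 = 2a = 2, A2 = 1.
f = lambda x, y: x**2 + y
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(t, rho(f, 1.0, 2.0, 2.0, 1.0, t, t))  # tends to 0 with t
```

Here eps works out to exactly [itex]h k \cdot 0 + h^2 = t^2[/itex] along the diagonal h = k = t, so the printed ratio is [itex]t/\sqrt{2}[/itex], which goes to 0.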

Apart from that, I'm trying to do show that a derivative exists at a point:
[tex]f(x,y) = sin(x+y) [/tex] at (1,1)


Homework Equations


-


The Attempt at a Solution


To show that the derivative exists:
[tex]f(1+h,1+k)-f(1,1) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
How did you get "0*h" and "0*k" here? The "A1" and "A2" in your formula are the partial derivatives at the point which, here, are both cos(2), not 0.
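To check that hint numerically (my own sketch, not part of the original reply): with [itex]A_1 = A_2 = \cos(2)[/itex], the corresponding [itex]\rho[/itex] does go to 0 as (h,k) approaches (0,0):

```python
import math

# With A1 = A2 = cos(2), the remainder for f(x,y) = sin(x+y) at (1,1) is
# eps = sin(2+h+k) - sin(2) - cos(2)*(h+k), and rho = eps / sqrt(h^2+k^2).
def rho(h, k):
    eps = math.sin(2 + h + k) - math.sin(2) - math.cos(2) * (h + k)
    return eps / math.hypot(h, k)

for t in [1e-1, 1e-2, 1e-3]:
    print(t, rho(t, t))  # shrinks roughly linearly in t
```

Along the diagonal h = k = t, a Taylor expansion gives eps ≈ [itex]-\sin(2)\,(2t)^2/2[/itex], so [itex]\rho \approx -\sqrt{2}\,\sin(2)\,t \to 0[/itex], matching the printed values.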

and:
[tex] \rho(h,k) = \frac{\sin(2+(h+k))-\sin(2)}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq 0 \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = 0 [/tex]
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
 
  • #3
HallsofIvy:
I'll show you where I got the "0*h" from, using a previous example here. I probably misunderstood something, but just to show what I'm thinking.
Let's say we want to show that f(x,y) = xy is differentiable at (1,1).

[tex]f(1+h,1+k)-f(1,1) = (1+h)(1+k)-1 = h+k+hk = 1\cdot h+1\cdot k+\sqrt{h^2+k^2}\cdot\frac{hk}{\sqrt{h^2+k^2}}[/tex]
so [itex]A_1=A_2=1[/itex] and [itex]\rho(h,k) = hk/\sqrt{h^2+k^2}[/itex].
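One way to finish that example (my addition, not in the original post): since [itex]2|hk| \le h^2+k^2[/itex],
[tex]|\rho(h,k)| = \frac{|hk|}{\sqrt{h^2+k^2}} \le \frac{\sqrt{h^2+k^2}}{2} \to 0 \text{ as } (h,k)\to(0,0)[/tex]
so the limit condition holds and f(x,y) = xy is indeed differentiable at (1,1).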
 

FAQ: Definition of a derivative in several variables

What is the definition of a derivative in several variables?

The derivative in several variables is a mathematical concept that measures the rate of change of a function with respect to multiple variables. It represents the inclination of the tangent plane to a multi-dimensional surface at a specific point.

How is the derivative in several variables calculated?

The derivative in several variables is calculated using partial derivatives, which involve taking the derivative of a multi-variable function with respect to one variable at a time, holding the other variables constant. These partial derivatives are then combined to form the gradient vector, which represents the direction and magnitude of the steepest ascent of the function.
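The recipe above can be sketched numerically (an illustration of mine; the central-difference step size is an arbitrary choice, and the helper name is hypothetical):

```python
import math

# Approximate the gradient of f at (x, y) by central differences:
# one partial derivative per variable, holding the other variable fixed.
def gradient(f, x, y, step=1e-6):
    dfdx = (f(x + step, y) - f(x - step, y)) / (2 * step)
    dfdy = (f(x, y + step) - f(x, y - step)) / (2 * step)
    return (dfdx, dfdy)

# For f(x,y) = sin(x+y) at (1,1), both partials should approximate cos(2).
f = lambda x, y: math.sin(x + y)
print(gradient(f, 1.0, 1.0))
```

This matches the thread's example: the exact partials of sin(x+y) at (1,1) are both cos(2) ≈ -0.416.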

What is the relationship between the derivative in several variables and the derivative in one variable?

The derivative in several variables is an extension of the derivative in one variable. In one variable, the derivative represents the slope of a curve at a specific point. In several variables, the derivative represents the slope of a multi-dimensional surface at a specific point along a specific direction.

What are the applications of the derivative in several variables?

The derivative in several variables has various applications in fields such as physics, engineering, economics, and statistics. It is used to optimize functions, solve optimization problems, and analyze the behavior of complex systems.

Can the derivative in several variables be negative?

Yes. The components of the derivative in several variables (the partial derivatives) can be negative, which indicates that the function is decreasing in the corresponding coordinate direction. The sign gives the direction of change, while the magnitude gives the rate of change.
