Differentials in Multivariable Functions .... Kantorovitz: Example 4, page 66

In summary, the conversation discusses a limit arising in Kantorovitz's Example 4 on page 66 of "Several Real Variables". For the function \(\phi_0(h)\) defined there, it is shown that \(\phi_0(h) / \|h\|\) tends to 0 as \(h\) approaches 0. The conversation also includes a link to Kantorovitz's section on "The Differential" and a discussion of a typographical error in the book, which was deemed harmless and does not affect the validity of the argument.
  • #1
Math Amateur
Gold Member
MHB
3,998
48
I am reading the book "Several Real Variables" by Shmuel Kantorovitz ... ...

I am currently focused on Chapter 2: Derivation ... ...

I need help with an aspect of Kantorovitz's Example 4 on page 66 ...

Kantorovitz's Example 4 on page 66 reads as follows:

View attachment 7817

In the above example, Kantorovitz shows that

\(\displaystyle \phi_0 (h) = - \frac{ \| h \|^2 }{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 }\)

Kantorovitz then declares that \(\displaystyle \frac{ \phi_0 (h) }{ \| h \| } \rightarrow 0\) as \(\displaystyle h \rightarrow 0\) ... ...

Can someone please show me how to demonstrate rigorously that this limit is as stated, i.e. that \(\displaystyle \frac{ \phi_0 (h) }{ \| h \| } \rightarrow 0\) as \(\displaystyle h \rightarrow 0\)?
... ... Help will be much appreciated ...

Peter

============================================================================================

***NOTE***

Readers of the above post may be helped by having access to Kantorovitz' Section on "The Differential" ... so I am providing the same ... as follows:View attachment 7818
View attachment 7819
https://www.physicsforums.com/attachments/7820
 
  • #2
Peter said:
In the above example, Kantorovitz shows that
\(\displaystyle \phi_0 (h) = - \frac{ \| h \|^2 }{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 }\)
Kantorovitz then declares that \(\displaystyle \frac{ \phi_0 (h) }{ \| h \| } \rightarrow 0\) as \(\displaystyle h \rightarrow 0\) ... ...

Can someone please show me how to demonstrate rigorously that this limit is as stated, i.e. that \(\displaystyle \frac{ \phi_0 (h) }{ \| h \| } \rightarrow 0\) as \(\displaystyle h \rightarrow 0\)?

Well, I don't know about rigorous, but intuitively if you're talking about the limit as $h\to 0$, then $h\not=0$, which forces $\|h\|\not=0$ (in most spaces, at least). Then
$$\phi_0 (h) = - \frac{ \| h \|^2 }{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 } \; \implies \;
\frac{\phi_0 (h)}{\|h\|}=- \frac{ \| h \|}{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 }.$$
The denominator is always strictly greater than $4$ (in particular, it's bounded away from zero), and the numerator goes to zero.
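To make the squeeze fully explicit (spelling out the bound on the denominator):
$$\left| \frac{\phi_0 (h)}{\|h\|} \right| = \frac{ \| h \|}{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 } \le \frac{\|h\|}{4} \rightarrow 0 \quad \text{as } h \to 0,$$
since $\sqrt{1+\|h\|^2} \ge 1$ forces $\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 \ge 4$.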
 
  • #3
Peter said:
In the above example, Kantorovitz shows that \(\displaystyle \phi_0 (h) = - \frac{ \| h \|^2 }{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 }\)

It seems to me that in your book there is a factor $2$ missing in the denominator of $\phi_0(h)$. (The error occurs in the third equality in his example.) So, I think it should be
\[
\phi_0 (h) = - \frac{ \| h \|^2 }{2\left( 1 + \sqrt{ 1 + \| h \|^2}\right)^2 },
\]
but this is innocent: It does not invalidate Ackbach's argument.

Ackbach said:
Well, I don't know about rigorous, but intuitively if you're talking about the limit as $h\to 0$, then $h\not=0$, which forces $\|h\|\not=0$ (in most spaces, at least). Then
$$\phi_0 (h) = - \frac{ \| h \|^2 }{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 } \; \implies \;
\frac{\phi_0 (h)}{\|h\|}=- \frac{ \| h \|}{\left( 1 + \sqrt{ 1 + \| h \|^2 }\right)^2 }.$$
The denominator is always strictly greater than $4$ (in particular, it's bounded away from zero), and the numerator goes to zero.

In my opinion this is rigorous: I don't think the author of the book expects the reader to prove the limit from the $(\epsilon,\delta)$-definition; here that would not be hard, just time-consuming. Instead, the reader can appeal to the quotient rule for limits, exactly for the reasons you state.
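For anyone who wants a quick numerical sanity check as well, here is a small sketch. It uses the formula from the original post (without the factor $2$ I suggested; either version gives the same limit), treating $\phi_0$ as a function of $\|h\|$ alone:

```python
import math

def phi0(norm_h):
    # phi_0 as a function of ||h||, using the formula quoted in the thread
    # (without the factor of 2 -- the limit is 0 either way)
    return -norm_h**2 / (1 + math.sqrt(1 + norm_h**2))**2

# The ratio phi_0(h)/||h|| should shrink roughly like ||h||/4 as h -> 0.
for n in (1e-1, 1e-3, 1e-6):
    print(f"||h|| = {n:.0e}, phi0/||h|| = {phi0(n) / n:.3e}")
```

The printed ratios decrease in proportion to $\|h\|$, consistent with the bound $|\phi_0(h)|/\|h\| \le \|h\|/4$.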
 
  • #4
Krylov said:
It seems to me that in your book there is a factor $2$ missing in the denominator of $\phi_0(h)$. (The error occurs in the third equality in his example.) So, I think it should be
\[
\phi_0 (h) = - \frac{ \| h \|^2 }{2\left( 1 + \sqrt{ 1 + \| h \|^2}\right)^2 },
\]
but this is innocent: It does not invalidate Ackbach's argument.
In my opinion this is rigorous: I don't think the author of the book expects the reader to prove the limit from the $(\epsilon,\delta)$-definition, although here that is not hard, but it is just too time-consuming. Instead the reader can resort to the quotient rule for limits, exactly for the reasons you state.
I now understand the above limit ... thanks to Ackbach and Krylov ...

Thanks to you both ...

Peter
 

FAQ: Differentials in Multivariable Functions .... Kantorovitz: Example 4, page 66

What is a multivariable function?

A multivariable function is a function that has more than one independent variable. This means that the output of the function depends on multiple input values, as opposed to a single variable function where the output only depends on one input value.

What is a differential in multivariable functions?

A differential in multivariable functions refers to the change in the output of a function produced by small changes in the independent variables. It is represented by the symbol "d" and can be thought of as the best linear approximation to the function near a specific point, i.e. the change along the tangent plane rather than along the surface itself.

How do you find differentials in multivariable functions?

To find differentials in multivariable functions, you can use the partial derivatives of the function with respect to each independent variable. The differential can then be calculated using the formula: dF = (∂F/∂x)dx + (∂F/∂y)dy, where dF is the differential, ∂F/∂x and ∂F/∂y are the partial derivatives, and dx and dy are small changes in the independent variables.
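The formula above can be checked numerically. The sketch below uses a hypothetical example function F(x, y) = x² + 3xy (chosen here for illustration, not taken from the book), with its partial derivatives computed by hand:

```python
# Hypothetical example function: F(x, y) = x^2 + 3xy
def F(x, y):
    return x**2 + 3*x*y

def dF(x, y, dx, dy):
    # differential dF = (dF/dx) dx + (dF/dy) dy, partials computed by hand
    Fx = 2*x + 3*y   # partial derivative of F with respect to x
    Fy = 3*x         # partial derivative of F with respect to y
    return Fx*dx + Fy*dy

x, y, dx, dy = 1.0, 2.0, 0.01, 0.01
exact = F(x + dx, y + dy) - F(x, y)   # actual change in F
approx = dF(x, y, dx, dy)             # linear (differential) approximation
print(exact, approx)
```

The two values agree to second order in (dx, dy): the discrepancy shrinks quadratically as the increments shrink, which is exactly what makes the differential a good local approximation.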

Why are differentials important in multivariable functions?

Differentials are important in multivariable functions because they allow us to approximate the change in the output of a function with respect to small changes in the independent variables. This is useful in many real-world applications, such as optimization problems and approximating solutions to differential equations.

Can differentials be used to find the maximum or minimum of a multivariable function?

Yes, differentials can be used to find the maximum or minimum of a multivariable function. At an interior extremum, all partial derivatives vanish, so the differential is equal to 0 for every choice of dx and dy. By setting the partial derivatives equal to 0 and solving for the independent variables, we can find the critical points; further analysis (for example, of second derivatives) then determines whether each critical point is a maximum, a minimum, or a saddle point.
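As a small illustration of this idea, the sketch below finds the critical point of a hypothetical function F(x, y) = x² + y² − 2x − 4y (an invented example) by following the negative gradient until the partial derivatives are essentially zero:

```python
def grad(x, y):
    # hand-computed partial derivatives of F(x, y) = x^2 + y^2 - 2x - 4y
    return 2*x - 2, 2*y - 4

# Simple gradient descent: step against the gradient until it vanishes.
x, y = 0.0, 0.0
for _ in range(200):
    gx, gy = grad(x, y)
    x -= 0.1 * gx
    y -= 0.1 * gy

print(round(x, 4), round(y, 4))  # converges to the critical point (1, 2)
```

Setting the partials to zero by hand gives the same answer directly: 2x − 2 = 0 and 2y − 4 = 0, i.e. (x, y) = (1, 2), which is a minimum since F is a sum of upward-opening parabolas.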
