Differentiation on R^n: need / use of norms

In summary, the conversation concerns the use of norm signs in the statement and proof of Theorem 9.1.10 in Junghenn's book "A Course in Real Analysis". The question raised is whether the norm signs around the numerator of the limit at the end of the proof are actually needed. The conclusion is that they are not: a vector tends to zero exactly when its norm does, so including or omitting them does not affect the proof.
  • #1
Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on \(\mathbb{R}^n\)"

I need some help with an aspect of Theorem 9.1.10 ...

Theorem 9.1.10 reads as follows:
View attachment 7883

The proof of Theorem 9.1.10 relies on the definition of the derivative of a vector-valued function of several variables ... that is, Definition 9.1.6 ... so I am providing the same ... as follows:
View attachment 7884
In Junghenn's proof of Theorem 9.1.10 above, we read the following:

" ... ... and

\(\displaystyle \eta (h) = \frac{ f(a + h ) - f(a) - df_a (h) }{ \| h \| }\) if \(\displaystyle h \neq 0\)

... ... "Now there are no norm signs around this expression (with the exception of around \(\displaystyle h\) in the denominator ...) ... and indeed no norm signs around the expression \(\displaystyle \lim_{ h \rightarrow 0 } \eta(h) = 0\) ... nor indeed are there any norm signs in the limit shown in Definition 9.1.6 above (with the exception of around \(\displaystyle h\) in the denominator ...) ...

... BUT ...

... ... this lack of norm signs seems in contrast to the last few lines of the proof of Theorem 9.1.10 as follows ... where we read ...

" ... ... Conversely if (9.6) holds for some \(\displaystyle \eta\) and \(\displaystyle T\), then \(\displaystyle \lim_{ h \rightarrow 0 } \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| } = \lim_{ h \rightarrow 0 } \| \eta(h) \| = 0
\)

... ... "Here, in contrast to the case above, there are norm signs around the numerator and indeed around \(\displaystyle \eta(h)\) ... ...
Can someone please explain why norm signs are used in the numerator and around \(\displaystyle \eta(h)\) in one case ... yet not the other ...
Help will be appreciated ...

Peter
 
  • #2
Peter said:
Can someone please explain why norm signs are used in the numerator and around \(\displaystyle \eta(h)\) in one case ... yet not in the other ...
\(\displaystyle f(a + h ) - f(a) - df_a (h)\) is a vector in $\Bbb{R}^m$, and $h$ is a vector in $\Bbb{R}^n$. A vector can be multiplied (or divided) by a scalar, but not by another vector. So in the quotient \(\displaystyle \eta(h) = \frac{ f( a + h ) - f(a) - Th }{ \| h \| }\) it is essential to have norm signs in the denominator. There are no norm signs in the numerator, and so $\eta(h)$ is a vector in $\Bbb{R}^m$.

When it comes to taking the limit as $h\to0$, it is always the case that a vector goes to zero if and only if its norm goes to zero. Therefore the conditions $\eta(h) \to0$ and $\|\eta(h)\| \to0$ are equivalent.
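For instance, with the Euclidean norm on $\Bbb{R}^m$ this can be seen from the component-wise bounds

$$|\eta_i(h)| \;\le\; \|\eta(h)\| = \Big( \sum_{j=1}^m \eta_j(h)^2 \Big)^{1/2} \;\le\; \sum_{j=1}^m |\eta_j(h)|,$$

so every component of $\eta(h)$ tends to $0$ exactly when $\|\eta(h)\|$ does.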

Finally, it follows from the homogeneity axiom for a norm, $\|\lambda v\| = |\lambda|\,\|v\|$ for a scalar $\lambda$, applied here with $\lambda = 1/\|h\|$, that $$\|\eta(h)\| = \left\|\frac{ f( a + h ) - f(a) - Th }{ \| h \| } \right\| = \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| }.$$
 
  • #3
Opalg said:
\(\displaystyle f(a + h ) - f(a) - df_a (h)\) is a vector in $\Bbb{R}^m$, and $h$ is a vector in $\Bbb{R}^n$. A vector can be multiplied (or divided) by a scalar, but not by another vector. So in the quotient \(\displaystyle \eta(h) = \frac{ f( a + h ) - f(a) - Th }{ \| h \| }\) it is essential to have norm signs in the denominator. There are no norm signs in the numerator, and so $\eta(h)$ is a vector in $\Bbb{R}^m$.

When it comes to taking the limit as $h\to0$, it is always the case that a vector goes to zero if and only if its norm goes to zero. Therefore the conditions $\eta(h) \to0$ and $\|\eta(h)\| \to0$ are equivalent.

Finally, it follows from the homogeneity axiom for a norm, $\|\lambda v\| = |\lambda|\,\|v\|$ for a scalar $\lambda$, applied here with $\lambda = 1/\|h\|$, that $$\|\eta(h)\| = \left\|\frac{ f( a + h ) - f(a) - Th }{ \| h \| } \right\| = \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| }.$$
Thanks Opalg ... your post was very helpful ...

It was particularly helpful to me to be reminded that ... ... " ... ... When it comes to taking the limit as $h\to0$, it is always the case that a vector goes to zero if and only if its norm goes to zero. ... ... "

But ... given what you have said, I am still a bit perplexed as to why the author bothered to put norm signs around the numerator of ... ... \(\displaystyle \lim_{ h \rightarrow 0 } \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| } = \lim_{ h \rightarrow 0 } \| \eta(h) \| = 0 \)

... given what you said, surely he need not have bothered ... can you comment ...

Thank you again for your help ...

Peter
 
  • #4
Peter said:
I am still a bit perplexed as to why the author bothered to put norm signs around the numerator of

\(\displaystyle \lim_{ h \rightarrow 0 } \frac{ \| f( a + h ) - f(a) - Th \| }{ \| h \| } = \lim_{ h \rightarrow 0 } \| \eta(h) \| = 0 \)

... given what you said, surely he need not have bothered ...
I can't see any need for the norm signs. It makes no difference whether they are there or not.
 
  • #5
Opalg said:
I can't see any need for the norm signs. It makes no difference whether they are there or not.
Thanks Opalg ...

I understand ... but that is a very important point to me ...

Thanks again for clarifying the issue ...

Peter
 

FAQ: Differentiation on R^n: need / use of norms

1. What is differentiation on R^n?

Differentiation on R^n refers to the process of finding the derivative of a function with multiple variables. It is a fundamental tool in calculus and is used to determine the rate of change of a function in different directions.
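In the notation of the thread above, differentiability of \(\displaystyle f\) at a point \(\displaystyle a\) means that there is a linear map \(\displaystyle df_a\) such that

$$f(a + h) = f(a) + df_a(h) + \|h\|\,\eta(h), \qquad \text{where } \lim_{h \rightarrow 0} \eta(h) = 0,$$

so \(\displaystyle df_a(h)\) is the linear approximation to the change \(\displaystyle f(a + h) - f(a)\), with an error that is small compared to \(\displaystyle \|h\|\).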

2. Why is differentiation on R^n important?

Differentiation on R^n is important because it allows us to analyze the behavior of a function with multiple variables. It helps us to understand how the function changes in different directions and plays a crucial role in many real-world applications such as optimization, physics, and economics.

3. How is differentiation on R^n different from differentiation on R?

Differentiation on R^n is different from differentiation on R because it involves several variables instead of just one. The derivative at a point is then a linear map rather than a single number; for a function from R^n to R^m it is represented by the m × n Jacobian matrix of partial derivatives, and the rate of change of the function varies with direction. A concrete example is sketched below.
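As a concrete illustration (an example chosen here, not taken from Junghenn's book), for \(\displaystyle f : \mathbb{R}^2 \rightarrow \mathbb{R}^2\) given by \(\displaystyle f(x, y) = (x^2 y, \; x + \sin y)\), the derivative at \(\displaystyle (x, y)\) is the linear map represented by the Jacobian matrix

$$J_f(x, y) = \begin{pmatrix} 2xy & x^2 \\ 1 & \cos y \end{pmatrix},$$

so that \(\displaystyle df_{(x,y)}(h) = J_f(x, y)\, h\) for \(\displaystyle h \in \mathbb{R}^2\).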

4. What is the need for norms in differentiation on R^n?

Norms are important in differentiation on R^n because they measure the magnitude of a vector, and this is what makes the definition of the derivative work when the increment h and the error term are vectors rather than numbers. In the defining limit the error f(a + h) - f(a) - df_a(h) must be compared with the size of h, and since a vector cannot be divided by another vector, the denominator has to be the norm of h. Norms also play a crucial role in optimization problems where we need to minimize or maximize a function of several variables.
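In particular, the two ways of writing the defining condition that appear in the thread above are equivalent:

$$\lim_{h \rightarrow 0} \frac{ f(a + h) - f(a) - df_a(h) }{ \|h\| } = 0 \quad \Longleftrightarrow \quad \lim_{h \rightarrow 0} \frac{ \| f(a + h) - f(a) - df_a(h) \| }{ \|h\| } = 0,$$

because a vector tends to zero exactly when its norm does.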

5. How are norms used in differentiation on R^n?

One way norms are used in differentiation on R^n is to measure rates of change in particular directions. The directional derivative of a real-valued function in the direction of a unit vector u is the dot product of the gradient with u; it is largest when u points in the direction of the gradient, and that maximal rate of change equals the norm of the gradient. Norms also appear directly in the definition of differentiability, where the increment h in the denominator and the error term in the numerator are measured by their norms.
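For example (an illustration chosen here, not from the book), take \(\displaystyle f(x, y) = x^2 + y^2\), so that \(\displaystyle \nabla f(x, y) = (2x, 2y)\) and \(\displaystyle \nabla f(1, 2) = (2, 4)\). In the direction of the unit vector \(\displaystyle u = (1, 0)\) the directional derivative at \(\displaystyle (1, 2)\) is

$$D_u f(1, 2) = \nabla f(1, 2) \cdot u = 2,$$

while the direction of steepest increase is \(\displaystyle u = \nabla f(1, 2) / \| \nabla f(1, 2) \| = (2, 4)/(2\sqrt{5})\), with maximal rate of change \(\displaystyle \| \nabla f(1, 2) \| = \sqrt{2^2 + 4^2} = 2\sqrt{5}\).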
