Norm of a Linear Transformation .... Junghenn Propn 9.2.3 ....

In summary, the conversation discusses Proposition 9.2.3 in Chapter 9 of Hugo D. Junghenn's book "A Course in Real Analysis". The proof of Proposition 9.2.3 uses the homogeneity of the norm to show that, for a non-zero vector ##\mathbf{x}##, the quantity ##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \|## equals ##\| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \|##, where ##\| \mathbf{x} \|^{-1} \mathbf{x}## has norm ##1##. The conversation also touches on a possible rescaling or "without loss of generality" assumption needed for the inequality in the proof.
  • #1
Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on ##\mathbb{R}^n##"

I need some help with the proof of Proposition 9.2.3 ...

Proposition 9.2.3 and the preceding relevant Definition 9.2.2 read as follows:
[Image: Junghenn - 1 - Proposition 9.2.3 ... PART 1]

[Image: Junghenn - 2 - Proposition 9.2.3 ... PART 2]

In the above proof we read the following:

" ... ... If ##\mathbf{x} \neq \mathbf{0} \text{ then } \| \mathbf{x} \|^{-1} \mathbf{x}## has a norm ##1##, hence

##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \| = \| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \| \le 1## ... ... "
Now I know that ##T( c \mathbf{x} ) = c T( \mathbf{x} )##

... BUT ...

... how do we know that this works "under the norm sign" ...

... that is, how do we know ...##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \| = \| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \|##... and further ... how do we know that ...##\| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \| \le 1##

Help will be appreciated ...

Peter
 

  • #2
Math Amateur said:
Now I know that ##T( c \mathbf{x} ) = c T( \mathbf{x} )##

... BUT ...

... how do we know that this works "under the norm sign" ...

... that is, how do we know ...##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \| = \| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \|##

The idea is the homogeneity of the norm: the scalar can be pulled inside because of the exponent.

Look at the 2-norm of some vector ##\mathbf v## (it can be any vector, including perhaps ##\mathbf v := T \mathbf x##):

##\big \Vert \mathbf v \big \Vert_2 = \big(\sum_{i=1}^n v_i^2\big)^\frac{1}{2}##

Thus, in your case, with some ##c \gt 0##:

##c \big \Vert \mathbf v \big \Vert_2 = c \big(\sum_i v_i^2\big)^\frac{1}{2} = \big(\sum_i c^2 v_i^2\big)^\frac{1}{2} = \big(\sum_i (c v_i)^2\big)^\frac{1}{2} = \big \Vert c \mathbf v \big \Vert_2##

Keep an eye out for this sort of homogeneity -- it comes up all the time in inequalities.
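The identity ##c \Vert \mathbf v \Vert_2 = \Vert c \mathbf v \Vert_2## is easy to sanity-check numerically. Here is a minimal sketch in Python; the vector and scalar are arbitrary choices, not anything from the proof:

```python
import numpy as np

# Sanity check of the homogeneity identity c * ||v||_2 == ||c * v||_2.
# The vector v and scalar c > 0 are arbitrary illustrative values.
v = np.array([3.0, -4.0, 12.0])   # ||v||_2 = sqrt(9 + 16 + 144) = 13
c = 2.5

lhs = c * np.linalg.norm(v)       # c * ||v||_2
rhs = np.linalg.norm(c * v)       # ||c * v||_2

assert np.isclose(lhs, rhs)
print(lhs)  # 32.5
```

The same check works for any vector and any ##c \ge 0##, which is exactly the absolute homogeneity property of a norm.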
Math Amateur said:
... and further ... how do we know that ...

##\| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \| \le 1##

I either didn't read the question closely enough, or something is missing. My inference is that they rescaled so that ##T## has operator norm ##1## here -- but I didn't catch where that was done. If no such rescaling was done, the statement is not true in general. For example, consider a diagonal matrix ##T## with ##T_{1,1} = 5## and all other diagonal entries equal to one, and apply the argument with ##\mathbf x = \mathbf e_1##, the first standard basis vector: the result is ##5##, not ##1##. But again, I think there was a rescaling / without-loss-of-generality assumption that I missed somewhere.
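The diagonal counterexample above can be sketched in a few lines; the dimension and entries are just the values named in the example:

```python
import numpy as np

# Counterexample sketch: diagonal T with T[0,0] = 5 and all other diagonal
# entries 1. Applying the argument with x = e_1 gives
# ||T x|| / ||x|| = 5, so the bound "<= 1" cannot hold without some
# normalization of T elsewhere in the proof.
n = 3
T = np.eye(n)
T[0, 0] = 5.0

e1 = np.zeros(n)
e1[0] = 1.0                       # first standard basis vector

value = np.linalg.norm(T @ e1) / np.linalg.norm(e1)
print(value)  # 5.0
```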
 
  • #3
StoneTemplePython said:
...
Thanks StoneTemplePython ...

Appreciate your help ...

Peter
 
  • #4
Math Amateur said:
... that is, how do we know ...##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \| = \| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \|##...

Aren't we working with the set of ##\mathbf x## with ##\| \mathbf x \| = 1##? It seems that substituting this in would show the equality, unless I missed something obvious.
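For a non-zero ##\mathbf x##, the substitution amounts to the identity ##\| \mathbf{x} \|^{-1} \| T \mathbf{x} \| = \| T ( \| \mathbf{x} \|^{-1} \mathbf{x} ) \|##, which can be checked numerically. In this sketch the matrix and vector are arbitrary illustrative choices:

```python
import numpy as np

# Check the identity ||x||^{-1} * ||T x|| == ||T(x / ||x||)|| for a
# concrete linear map T (as a matrix) and a non-zero vector x.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, -2.0])

lhs = np.linalg.norm(T @ x) / np.linalg.norm(x)        # ||x||^{-1} ||T x||
rhs = np.linalg.norm(T @ (x / np.linalg.norm(x)))      # ||T(x / ||x||)||

assert np.isclose(lhs, rhs)
```

The equality holds because ##T## is linear (the scalar ##\| \mathbf x \|^{-1}## passes through ##T##) and the norm is homogeneous (the scalar then passes out of the norm).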
 

FAQ: Norm of a Linear Transformation .... Junghenn Propn 9.2.3 ....

What is the norm of a linear transformation?

The norm of a linear transformation is a measure of its size or magnitude, analogous to the absolute value of a number. It is the largest factor by which the transformation can stretch a vector, i.e. the maximum length of ##T(\mathbf x)## over all unit vectors ##\mathbf x##.

How is the norm of a linear transformation calculated?

The norm of a linear transformation can be calculated using the formula ##\| T \| = \sup_{\| \mathbf x \| = 1} \| T \mathbf x \|##, where ##T## is the transformation and ##\mathbf x## ranges over the unit vectors in the domain. On a finite-dimensional space the supremum is attained, so it is a maximum.
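As a rough numerical illustration (not how the norm is computed in practice), one can sample random unit vectors and compare the largest stretch found with the exact Euclidean operator norm, which for a matrix equals its largest singular value. The matrix below is an arbitrary example:

```python
import numpy as np

# Crude sampling check of ||T|| = sup over unit x of ||T x|| for the
# Euclidean norm. The exact value is the largest singular value of T,
# available as np.linalg.norm(T, 2).
rng = np.random.default_rng(0)
T = np.array([[1.0, 2.0],
              [0.0, 1.0]])

samples = rng.normal(size=(10000, 2))
units = samples / np.linalg.norm(samples, axis=1, keepdims=True)
sampled_max = max(np.linalg.norm(T @ u) for u in units)

operator_norm = np.linalg.norm(T, 2)   # exact: largest singular value
assert sampled_max <= operator_norm + 1e-9
```

The sampled maximum approaches but never exceeds the true operator norm, matching the supremum definition.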

What is the significance of the norm of a linear transformation?

The norm of a linear transformation can be used to determine important properties of the transformation, such as its invertibility and the rate of convergence of iterative methods. It also helps in understanding the relationship between the input and output vectors.

Can the norm of a linear transformation be negative?

No, the norm of a linear transformation cannot be negative. It is always non-negative, since it is defined as a supremum of vector lengths, which are themselves non-negative.

How does the norm of a linear transformation relate to the concept of a matrix norm?

The norm of a linear transformation is closely related to the concept of a matrix norm. In fact, the norm of a linear transformation equals the induced (operator) norm of its associated matrix. This means that the properties and calculations of induced matrix norms can also be applied to the norm of a linear transformation.
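For the Euclidean norm, the induced norm of a matrix is its largest singular value, which NumPy exposes directly. A diagonal matrix makes this transparent, since its singular values are just the absolute values of the diagonal entries; the matrix below is an arbitrary example:

```python
import numpy as np

# For a diagonal matrix the singular values are |3| and |4|, so the
# induced 2-norm (operator norm) should be 4.
A = np.array([[3.0, 0.0],
              [0.0, 4.0]])

induced_norm = np.linalg.norm(A, 2)                     # induced 2-norm
largest_sv = np.linalg.svd(A, compute_uv=False).max()   # largest singular value

assert np.isclose(induced_norm, largest_sv)
print(induced_norm)  # 4.0
```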
