Composition of Linear Transformation and Matrix Multiplication

In summary, the theorem and proof state that for a linear transformation T between finite-dimensional vector spaces with ordered bases B and C, the coordinate vector of T(u) relative to C is obtained by multiplying the matrix of T relative to B and C by the coordinate vector of u relative to B. The proof introduces auxiliary linear transformations from the scalar field into V and W, and uses linearity together with the fact that the matrix of a composition is the product of the individual matrices.
  • #1
jeff1evesque
I am reading Theorem 2.14 from a textbook, and I don't understand why g = Tf, nor the line of reasoning labeled (#1). The theorem and proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf. Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases, A,B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
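
One way to unpack the chain (#1) step by step, assuming the book's convention that the j-th column of a matrix such as [g]_A^C is the C-coordinate vector of g applied to the j-th vector of A:

- [T(u)]_C = [g(1)]_C, because g(1) = 1·T(u) = T(u).
- [g(1)]_C = [g]_A^C, because A = {1} has a single vector, so [g]_A^C is a one-column matrix whose only column is [g(1)]_C; identifying one-column matrices with column vectors gives the equality.
- [g]_A^C = [Tf]_A^C, because g = Tf.
- [Tf]_A^C = [T]_B^C [f]_A^B, by Theorem 2.11.
- [f]_A^B = [f(1)]_B = [u]_B, by the same one-column identification, since f(1) = 1·u = u.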
 
  • #2
jeff1evesque said:
I am reading Theorem 2.14 from a textbook, and I don't understand why g = Tf, nor the line of reasoning labeled (#1). The theorem and proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf.

The defining property of a linear transformation T is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u), so

g(a) = aT(u) = T(au) = T(f(a)) = (Tf)(a) for every a in F, which means g = Tf.


Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases, A,B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
 
  • #3
The defining property of a linear transformation T is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u), so

g(a) = aT(u) = T(au) = T(f(a)) = (Tf)(a) for every a in F, which means g = Tf.

I follow that g(a) = aT(u) = T(au) = T(f(a)) = (Tf)(a).
But why can we replace f(a) by f, that is, pass from an equality of values to the equality of functions g = Tf?
--------------------------------------------------------------

Lastly, how does this step work:
[T(u)]_C = [g(1)]_C = [g]_A^C

Thanks a lot,

JL
 
  • #4
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.

To understand what is going on, consider the following example carefully:
Let U be a 3D vector space with basis e1, e2, e3.
Let V be a 2D vector space with basis b1, b2.
Let T:U->V be linear.

(The same reasoning generalizes to different numbers of dimensions)

http://img187.imageshack.us/img187/8246/r3tor21.png

Question: For some u in U, can we write T(u) in a simpler form?

Answer: Yes! If you express u in the basis given for U, then u = u1 e1 + u2 e2 + u3 e3 for some scalars u1, u2, u3.

http://img155.imageshack.us/img155/9341/r3tor32.png

But then by linearity,
T(u) = u1 T(e1) + u2 T(e2) + u3 T(e3)

But we could pull the same trick on each of those T(ei)'s by writing them in the basis of V (remember, if x is in U, then T(x) is in V by definition).
http://img249.imageshack.us/img249/6382/r3tor23.png

Doing this gets us:
T(e1) = T11 b1 + T21 b2
T(e2) = T12 b1 + T22 b2
T(e3) = T13 b1 + T23 b2

for 6 different scalars T11, T21 etc. You may think of these scalars Tij as "defining" the transformation T for when it is expressed in these bases.

Now we can substitute this back into the expression from before to get a very simple form for T(u):
T(u) = u1 (T11 b1 + T21 b2) + u2 (T12 b1 + T22 b2) + u3 (T13 b1 + T23 b2)

= (u1 T11 + u2 T12 + u3 T13)b1 + (u1 T21 + u2 T22 + u3 T23)b2

= (T11 T12 T13)*(u1 u2 u3)^T b1 + (T21 T22 T23)*(u1 u2 u3)^T b2

In other words, if we write u in a given basis (here e1, e2, e3), and we want to know T(u) in a given basis (here b1, b2), it is equivalent to computing the matrix multiplication:

[tex]T(\textbf{u})_{b_1, b_2 \text{ basis}} = \left[\begin{matrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \left[\begin{matrix} u_1 \\ u_2 \\ u_3\end{matrix}\right] = \left[\begin{matrix}T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \textbf{u}_{e_1, e_2, e_3 \text{ basis}}[/tex]
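
To see the same computation numerically, here is a minimal Python sketch of the 3D-to-2D example above; the entries of the matrix and the coordinates of u are made-up illustration values, not anything from the thread:

[code]
import numpy as np

# [T]: the 2x3 matrix whose j-th column holds the b-coordinates of T(ej).
# (Values are arbitrary, chosen only for illustration.)
T = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])

# Coordinates u1, u2, u3 of u in the basis e1, e2, e3.
u = np.array([2.0, 1.0, -1.0])

# Linearity route: T(u) = u1 T(e1) + u2 T(e2) + u3 T(e3),
# computed from the columns of [T].
via_linearity = u[0] * T[:, 0] + u[1] * T[:, 1] + u[2] * T[:, 2]

# Theorem route: a single matrix-vector product.
via_matrix = T @ u

print(via_linearity)  # [4. 1.]
print(via_matrix)     # [4. 1.]
assert np.allclose(via_linearity, via_matrix)
[/code]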
 
  • #5
If we know g(a) = aT(u) for all a in F, then g = T(u).

How is this computation performed: by cancellation?
g(a) = aT(u) <==> g(aa^(-1)) = aa^(-1)T(u)

If we can get rid of a by cancellation, did I place a^(-1) in the correct position, or do I place it after aT(u)?

Thanks,


JL
 
  • #6
maze said:
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.


Yeah, what book is that? I've seen it presented much more clearly.
 
  • #7
Back to my last post: is my reasoning correct?

If we know g(a) = aT(u) for all a in F, then g = T(u).

How is this computation performed: by cancellation?
g(a) = aT(u) <==> g(aa^(-1)) = aa^(-1)T(u)

If we can get rid of a by cancellation, did I place a^(-1) in the correct position, or do I place it after aT(u)?

Thanks,


JL


Also, this book is intended to present the theory side of linear algebra (with limited actual application). The book is by Friedberg, Insel, Spence, “Linear Algebra” (Prentice Hall, 4th edition).
 
  • #8
I have a weird way of visualizing linear transformations involving a corn field as the basis, areas of corn turned into bushels as the e's, and taking them to market to get paid when multiplying the matrix.

I don't know if it would help or hurt anyone, or if it is even plausible as a way of looking at it, but it seems to help me. lol
 
  • #9
I am still not sure how [g(1)]_C = [g]_A^C occurs. I understand everything else. Thanks, JL
 
  • #10
Isometry

Hi, I have a question:
I am trying to find an isometry such that T(aU+bV)≠aT(U)+bT(V).
I have tried so many possibilities. I tried T(X) = MX, where M is a matrix that doesn't have an inverse, but I still can't find a matrix that makes the proposition work.
Help please
 

FAQ: Composition of Linear Transformation and Matrix Multiplication

What is the definition of a linear transformation?

A linear transformation is a function that maps one vector space to another in a way that preserves the operations of vector addition and scalar multiplication. It can also be thought of as a function that transforms a set of coordinates into a new set of coordinates while preserving the geometric structure of the original space.
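
As a concrete illustration (the matrix and vectors below are made-up example values), any map of the form T(x) = Ax for a fixed matrix A preserves both operations:

[code]
import numpy as np

# A fixed matrix defines a linear transformation T(x) = A x from R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])

def T(x):
    return A @ x

x = np.array([1.0, 0.0, 2.0])
y = np.array([-1.0, 5.0, 3.0])
c = 7.0

# Vector addition and scalar multiplication are preserved.
assert np.allclose(T(x + y), T(x) + T(y))
assert np.allclose(T(c * x), c * T(x))
[/code]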

What is the composition of linear transformations?

The composition of linear transformations is the process of applying one linear transformation after another. In matrix terms, it corresponds to multiplying the matrices of the individual transformations, with the matrix of the first transformation applied written on the right of the product. The resulting matrix represents the combined effect of both transformations.
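
A quick numeric check of this, with arbitrary example matrices: applying T and then U agrees with a single multiplication by the product of their matrices, first map on the right.

[code]
import numpy as np

# T: R^3 -> R^2 is applied first, then U: R^2 -> R^2.
MT = np.array([[1.0, 0.0, 2.0],
               [0.0, 1.0, -1.0]])
MU = np.array([[2.0, 1.0],
               [0.0, 3.0]])

x = np.array([1.0, 2.0, 3.0])

# U(T(x)) equals (MU MT) x: the composition is represented by
# the product MU @ MT, with the first transformation on the right.
assert np.allclose(MU @ (MT @ x), (MU @ MT) @ x)
[/code]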

How is matrix multiplication related to linear transformations?

Matrix multiplication is the mathematical operation used to perform the composition of linear transformations. By multiplying the matrix of one transformation by the matrix of another, we can calculate the resulting matrix that represents the composition of the two transformations.

What is the significance of the identity matrix in linear transformations?

The identity matrix is a special matrix that, when multiplied by any vector, results in the same vector. In linear transformations, the identity matrix serves as the neutral element, meaning that when it is multiplied by a transformation matrix, it does not change the original transformation.
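
For example, with an arbitrary 2x2 matrix A:

[code]
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)  # the 2x2 identity matrix

# Composing with the identity leaves the transformation unchanged.
assert np.allclose(I @ A, A)
assert np.allclose(A @ I, A)
[/code]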

How does the order of matrix multiplication affect the result of a composition of linear transformations?

The order of matrix multiplication is crucial in the composition of linear transformations. When multiplying matrices, the order matters, and switching the order of multiplication can result in a different transformation. Therefore, it is essential to follow the correct order when computing the composition of linear transformations.
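
A two-matrix example (again with made-up matrices) shows the point; the two products represent genuinely different compositions:

[code]
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])  # sends (x, y) to (y, 0)
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # projects (x, y) to (x, 0)

print(A @ B)  # [[0. 0.], [0. 0.]] -- "B first, then A" sends everything to 0
print(B @ A)  # [[0. 1.], [0. 0.]] -- "A first, then B" keeps y in the x slot
assert not np.allclose(A @ B, B @ A)
[/code]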
