How Do Linear Transformations Form a Basis in Finite Dimensional Spaces?

  • Thread starter WiFO215
  • Tags
    Theorem
In summary, the proof shows that the space of linear transformations L(V,W) from an n-dimensional vector space V to an m-dimensional vector space W is finite dimensional and has dimension mn. This is proven by constructing a set of mn linear transformations E(p,q) that form a basis for L(V,W), and showing their independence by proving that any linear combination of these transformations resulting in the zero transformation must have all coefficients equal to zero. This is done by considering the coordinates of a given linear transformation T with respect to the basis B' of W, and showing that T can be written as a linear combination of the E(p,q). The thread also addresses a question about linear independence in a simpler example, highlighting the fact that two linear transformations can take the same value on a particular vector and still be linearly independent.
  • #1
WiFO215
Theorem:
Let V be an n-dimensional vector space and W an m-dimensional vector space over the field F of complex/real numbers. Then the space of linear transformations L(V,W) is finite dimensional and has dimension mn.

Proof:
Let B = {[tex]\alpha_1, \alpha_2, \ldots, \alpha_n[/tex]} and B' = {[tex]\beta_1, \beta_2, \ldots, \beta_m[/tex]} be ordered bases for V and W respectively. For each pair of integers (p,q) with [tex]1 \leq p \leq m[/tex] and [tex]1 \leq q \leq n[/tex], we define a linear transformation E(p,q) from V into W by

[tex]E(p,q)(\alpha_i) = \begin{cases} 0, & i \neq q \\ \beta_p, & i = q \end{cases} \;=\; \delta(i,q)\,\beta_p.[/tex]

By an earlier theorem, there is a unique linear transformation from V into W satisfying these conditions. The claim is that the mn transformations E(p,q) form a basis for L(V,W).
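(A concrete illustration, not in the text: take n = m = 2. Relative to the bases B and B', each E(p,q) is represented by the 2 x 2 matrix unit with a single 1 in row p, column q:

[tex]E(1,1) \leftrightarrow \begin{pmatrix}1&0\\0&0\end{pmatrix}, \quad E(1,2) \leftrightarrow \begin{pmatrix}0&1\\0&0\end{pmatrix}, \quad E(2,1) \leftrightarrow \begin{pmatrix}0&0\\1&0\end{pmatrix}, \quad E(2,2) \leftrightarrow \begin{pmatrix}0&0\\0&1\end{pmatrix}.[/tex]

Spanning and independence of the E(p,q) are then the familiar fact that these matrix units form a basis of the 2 x 2 matrices.)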

Let T be a linear transformation from V into W. For each j, [tex]1 \leq j \leq n[/tex], let A(1,j), ..., A(m,j) be the coordinates of the vector [tex]T\alpha_j[/tex] in the ordered basis B', i.e.,

[tex]T\alpha_j = \sum^{m}_{p=1} A(p,j)\, \beta_p.[/tex]

We wish to show that

[tex]T = \sum^{m}_{p=1} \sum^{n}_{q=1} A(p,q)\, E(p,q). \qquad (1)[/tex]

Let U be the linear transformation in the right hand member of (1). Then for each j

[tex]U\alpha_j = \sum_{p} \sum_{q} A(p,q)\, E(p,q)(\alpha_j) = \sum_{p} \sum_{q} A(p,q)\, \delta(j,q)\, \beta_p = \sum^{m}_{p=1} A(p,j)\, \beta_p = T\alpha_j[/tex]

and consequently U = T. Now (1) shows that the E(p,q) span L(V,W); we must prove that they are independent [ THIS IS THE PART THAT I DON'T UNDERSTAND. I COULD FOLLOW UP TO HERE]. But this is clear from what we did above; for, if the linear transformation

[tex]U = \sum_{p} \sum_{q} A(p,q)\, E(p,q)[/tex]

is the zero transformation, then [tex]U\alpha_j = 0[/tex] for each j, so

[tex]\sum^{m}_{p=1} A(p,j)\, \beta_p = 0[/tex]

and the independence of the [tex]\beta_p[/tex] implies that A(p,j) = 0 for every p and j.

------END OF PROOF IN TEXT
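(Not part of the text, but a quick numerical sanity check one can run, identifying each E(p,q) with its matrix relative to B and B', namely the m x n matrix with a single 1 in row p and column q; the choice m = 3, n = 2 below is arbitrary.)

[code]
import itertools
import numpy as np

m, n = 3, 2  # small example dimensions, chosen arbitrarily

def E(p, q):
    """Matrix of E(p,q) relative to B and B': a single 1 in row p, column q."""
    M = np.zeros((m, n))
    M[p - 1, q - 1] = 1.0
    return M

# Flatten each E(p,q) into a length-mn vector and stack them as rows.
rows = [E(p, q).ravel()
        for p, q in itertools.product(range(1, m + 1), range(1, n + 1))]
stacked = np.stack(rows)

# The E(p,q) are linearly independent iff this mn x mn matrix has rank mn.
print(np.linalg.matrix_rank(stacked))  # prints 6 = mn
[/code]

The stacked matrix here is in fact the mn x mn identity, which is another way of seeing the independence.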

Now let me explain a little more clearly what I don't understand with a rather simple example.

Let S be the set of ordered pairs (a,1) with [tex]1 \leq a \leq n[/tex], where a is an integer, and let F be the set of real numbers.

Now let me define a function f(i,j), [tex]f(i,j): S \rightarrow F[/tex], such that

[tex]f(i,j)\,[(a,1)] = \delta(j,a)[/tex]

This could be represented as a space of n x 1 column matrices, each with a 1 in the jth position.

What I am trying to point out is that f(1,1) maps to the matrix [1 0 0 0... 0], but so does f(1,2). If both of them map to the same fellow, how the heck are the two linearly independent?
 
  • #2
Okay, wait. I see a flaw in my argument. It doesn't matter if two f(i,j) map to the same vector in F. That ought to be a good thing actually, since it just says we can even construct linear transformations which aren't 1:1. So that's that. But I still don't understand how the mn linear transformations are linearly independent. I fully understand how they span the space of linear transformations. I just can't connect the dots.
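(One way to connect the dots, using the example above: independence is a statement about the functions themselves, not about their values at any single point. If [tex]c_1 f(1,1) + c_2 f(1,2) + \ldots + c_n f(1,n)[/tex] is the zero function, it must vanish on every element of S; evaluating at (1,1) forces [tex]c_1 = 0[/tex], evaluating at (2,1) forces [tex]c_2 = 0[/tex], and so on. The same evaluation argument, applied to the basis vectors [tex]\alpha_j[/tex], is exactly the independence step in the proof.)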
 
  • #3
DONE. You can delete this thread now. I was able to prove it right after posting it here as usual.
 

FAQ: How Do Linear Transformations Form a Basis in Finite Dimensional Spaces?

What is the theorem from Hoffman and Kunze discussed in this thread?

It states that if V is an n-dimensional vector space and W is an m-dimensional vector space over the same field F of real or complex numbers, then the space L(V,W) of linear transformations from V into W is finite dimensional and has dimension mn.

Why is this theorem important?

It shows that L(V,W) is itself a finite-dimensional vector space and, once ordered bases for V and W are fixed, identifies each linear transformation with an m x n matrix of coordinates. This identification underlies the matrix representation of linear maps used throughout linear algebra.

What are the transformations E(p,q) used in the proof?

For each pair (p,q) with [tex]1 \leq p \leq m[/tex] and [tex]1 \leq q \leq n[/tex], E(p,q) is the unique linear transformation sending the basis vector [tex]\alpha_q[/tex] of V to the basis vector [tex]\beta_p[/tex] of W and every other [tex]\alpha_i[/tex] to 0. Relative to the chosen bases, E(p,q) is represented by the m x n matrix unit with a single 1 in row p, column q.

Why are the E(p,q) linearly independent?

If a combination [tex]\sum_{p}\sum_{q} A(p,q)\, E(p,q)[/tex] is the zero transformation, then applying it to each basis vector [tex]\alpha_j[/tex] gives [tex]\sum^{m}_{p=1} A(p,j)\, \beta_p = 0[/tex], and the linear independence of the [tex]\beta_p[/tex] forces every coefficient A(p,j) to be zero.

Does the theorem require V and W to have the same dimension?

No. V and W may have different dimensions n and m, and the elements of L(V,W) then correspond to generally rectangular m x n matrices.
