Theorem:
Let V be an n-dimensional vector space and W an m-dimensional vector space over the field F of real or complex numbers. Then the space of linear transformations L(V,W) is finite-dimensional and has dimension mn.
Proof:
Let B = {[tex]\alpha_1, \alpha_2, \ldots, \alpha_n[/tex]} and B' = {[tex]\beta_1, \beta_2, \ldots, \beta_m[/tex]} be ordered bases for V and W respectively. For each pair of integers (p,q) with 1 [tex]\leq[/tex] p [tex]\leq[/tex] m and 1 [tex]\leq[/tex] q [tex]\leq[/tex] n, we define a linear transformation E(p,q) from V into W by
E(p,q)([tex]\alpha_i[/tex]) = 0, if i [tex]\neq[/tex] q
= [tex]\beta_p[/tex], if i = q;
that is, E(p,q)([tex]\alpha_i[/tex]) = [tex]\delta(i,q)\,\beta_p[/tex].
According to an earlier theorem (a linear transformation is uniquely determined by its values on an ordered basis), there is a unique linear transformation from V into W satisfying these conditions. The claim is that the mn transformations E(p,q) form a basis for L(V,W).
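[To make the E(p,q) concrete for myself (my own addition, not part of the text): take m = n = 2, so B = {[tex]\alpha_1, \alpha_2[/tex]} and B' = {[tex]\beta_1, \beta_2[/tex]}. Relative to these bases, the four transformations are represented by the matrices
[tex]E(1,1) \leftrightarrow \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}, \quad E(1,2) \leftrightarrow \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}, \quad E(2,1) \leftrightarrow \begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}, \quad E(2,2) \leftrightarrow \begin{pmatrix}0 & 0\\ 0 & 1\end{pmatrix},[/tex]
i.e. E(p,q) corresponds to the matrix with a 1 in row p, column q and 0 everywhere else.]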
Let T be a linear transformation from V into W. For each j, 1 [tex]\leq[/tex] j [tex]\leq[/tex] n, let A(1,j), ..., A(m,j) be the coordinates of the vector T[tex]\alpha_j[/tex] in the ordered basis B', i.e.,
[tex]T\alpha_j = \sum^{m}_{p=1} A(p,j)\, \beta_p.[/tex]
We wish to show that
[tex]T = \sum^{m}_{p=1} \sum^{n}_{q=1} A(p,q)\, E(p,q)[/tex] ... (1)
Let U be the linear transformation in the right-hand member of (1). Then for each j
[tex]U\alpha_j = \sum_{p} \sum_{q} A(p,q)\, E(p,q)(\alpha_j)[/tex]
[tex]= \sum_{p} \sum_{q} A(p,q)\, \delta(j,q)\, \beta_p[/tex]
[tex]= \sum^{m}_{p=1} A(p,j)\, \beta_p[/tex]
[tex]= T\alpha_j,[/tex]
and consequently U = T. Now (1) shows that the E(p,q) span L(V,W); we must prove that they are independent [ THIS IS THE PART THAT I DON'T UNDERSTAND. I COULD FOLLOW UP TO HERE]. But this is clear from what we did above; for, if the linear transformation
[tex]U = \sum_{p} \sum_{q} A(p,q)\, E(p,q)[/tex]
is the zero transformation, then [tex]U\alpha_j = 0[/tex] for each j, so
[tex]\sum^{m}_{p=1} A(p,j)\, \beta_p = 0,[/tex]
and the independence of the [tex]\beta_p[/tex] implies that A(p,j) = 0 for every p and j.
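[Spelling out this last step in the same m = n = 2 example as above (again my own addition, not part of the text): the combination U is represented by
[tex]A(1,1)\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix} + A(1,2)\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix} + A(2,1)\begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix} + A(2,2)\begin{pmatrix}0 & 0\\ 0 & 1\end{pmatrix} = \begin{pmatrix}A(1,1) & A(1,2)\\ A(2,1) & A(2,2)\end{pmatrix},[/tex]
so the text is claiming that U can be the zero transformation only if every coefficient A(p,q) is 0.]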
------END OF PROOF IN TEXT
Now let me explain a little more clearly what I don't understand with a rather simple example.
Let S be the set of ordered pairs (a,1) with 1 [tex]\leq[/tex] a [tex]\leq[/tex] n, where a is an integer, and let F be the set of real numbers.
Now let me define, for each pair (i,j), a function f(i,j): S [tex]\rightarrow[/tex] F such that
f(i,j)[(a,1)] = [tex]\delta(j,a)[/tex].
This could be represented as a space of n[tex]\times[/tex]1 column matrices, each with a 1 in the jth position.
What I am trying to point out is that f(1,1) maps to the matrix [1 0 0 0... 0], but so does f(1,2). If both of them map to the same fellow, how the heck are the two linearly independent?