OK, I am working on proofs of the rank-nullity theorem (otherwise known in my class as the dimension theorem).
Here's a proof that my professor gave in class. I want to be sure I understand the reasoning, so I will lay out what he had in less-precise layman's wording, since that makes the proof easier for me to memorize.
So:
Let V and W be vector spaces.
T:V→W is linear
V is finite-dimensional
the function [itex]f \in \mathrm{Hom}_K(V,W)[/itex] (I believe [itex]f[/itex] here is the same map as T, just written in the other notation)
Let dim(V) = n for some n[itex]\in \mathbb N[/itex] and dim(ker([itex]f[/itex])) = r
dim(V) = nullity(T) + rank(T) = dim(ker([itex]f[/itex])) + dim(Im([itex]f[/itex]))
in some notations (like the one in our text) this would look like dim(V) = nullity(T) + rank(T) = dim(N(T)) + dim(R(T))
on to the proof:
[itex]ker(f) \subseteq V[/itex]. And it is a subspace.
Why a subspace? Because the kernel of a linear map is the set of vectors that go to zero, adding two of those vectors gives another vector that goes to zero, as does multiplying one of them by a scalar.
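If I write that check out in symbols (my own addition, not from the board):
[tex]u, v \in \ker(f) \;\Rightarrow\; f(u+v) = f(u) + f(v) = 0 + 0 = 0, \qquad f(\lambda u) = \lambda f(u) = \lambda \cdot 0 = 0[/tex]
so sums and scalar multiples of kernel vectors land back in [itex]\ker(f)[/itex] (and the zero vector is obviously in it).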
since we let dim(V) = n, all the bases of V will have n elements.
therefore [itex] \exists[/itex] a basis {x1, x2, ... , xr} of [itex]\ker(f)[/itex] where r≤n.
(The reason is that a subspace's dimension is at most the dimension of the space containing it, and ker(f) is a subspace of V.)
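A quick sanity-check example of my own (not from the board): take [itex]f:\mathbb R^3 \to \mathbb R[/itex] with f(x, y, z) = z. Then ker(f) is the xy-plane, a subspace of dimension 2 ≤ 3, and Im(f) = [itex]\mathbb R[/itex] has dimension 1, so 3 = 2 + 1, just as the theorem predicts.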
by the exchange lemma (which says that any linearly independent subset of V can be extended to a basis of V), [itex] \exists[/itex] {y1, y2, ... , ys} [itex]\subseteq V [/itex], with s = n − r, such that {y1, y2, ... , ys}[itex] \cap [/itex]{x1, x2, ... , xr} = [itex] \varnothing [/itex], and the next step says that {y1, y2, ... , ys}[itex] \cup [/itex]{x1, x2, ... , xr} is a basis of V.
Now, my question is: is that because the intersection of the two sets is the empty set and they are linearly independent?
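Spelling out what that has to mean (my own note, not from the board): every basis of V has exactly n elements, so if the union really is a basis of V, counting its elements forces
[tex]r + s = n[/tex]
and what the exchange lemma actually delivers is that the enlarged set is linearly independent and spans V, not merely that the two sets are disjoint.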
After that, we get to saying that {[itex]f(y_1), f(y_2), \dots , f(y_s)[/itex]} is a basis of Im([itex]f[/itex]).
But I am not sure why that is.
He then says we can start by supposing
[itex] λ_1 f(y_1) + λ_2 f(y_2)+ \dots +λ_s f(y_s)= 0 [/itex]
for some λ1, λ2, ..., λs [itex] \in [/itex] K (the goal being to show the λ's are forced to be zero, which gives linear independence).
so, taking
[tex]f \Big( \sum_{i=1}^s \lambda_i y_i \Big) = \sum_{i=1}^s \lambda_i f(y_i) = 0 [/tex]
we can make that into
[tex] \sum_{i=1}^s \lambda_i y_i \in \ker(f)[/tex]
That step I am a bit fuzzy on the reasoning. IIUC, it's just saying that, by linearity, the sum of the λ f(y) terms is the same as f applied to the sum of the λy terms, and since that value is zero, the vector [itex]\sum_{i=1}^s \lambda_i y_i[/itex] lies in the kernel of f. But I wanted to be sure.
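Writing out the linearity being used (my own expansion, to convince myself):
[tex]\sum_{i=1}^s \lambda_i f(y_i) = \sum_{i=1}^s f(\lambda_i y_i) = f\Big(\sum_{i=1}^s \lambda_i y_i\Big)[/tex]
using [itex]f(\lambda v) = \lambda f(v)[/itex] on each term and then [itex]f(u) + f(v) = f(u+v)[/itex] repeatedly.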
He then said that the above implies that there exists some set of scalars α1, α2, ... αr[itex]\in[/itex] K s.t. [itex]\sum_{i=1}^s \lambda_i y_i = \sum_{j=1}^r \alpha_j x_j[/itex] (since anything in ker(f) can be written in the basis {x1, ... , xr}), and that further implies
[tex]\sum_{j=1}^r \alpha_j x_j - \sum_{i=1}^s \lambda_i y_i = 0 [/tex] which implies αj = 0 and λi = 0 for all 1≤j≤r and 1≤i≤s, because {x1, ... , xr}[itex] \cup [/itex]{y1, ... , ys} is a basis of V and so is linearly independent.
and that further implies that the set {f(y1), f(y2), ... ,f(ys)} is linearly independent.
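The way I read that conclusion (my own gloss): we started from an arbitrary relation [itex]\sum_{i=1}^s \lambda_i f(y_i) = 0[/itex] and were forced to λi = 0 for every i, which is exactly the definition of {f(y1), f(y2), ... , f(ys)} being linearly independent.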
Then he says: for all z [itex]\in[/itex] Im(f) there exists x[itex]\in[/itex] V s.t. z = f(x) (this seems obvious at one level, but I felt like it was just sleight of hand; I gather it is literally the definition of the image).
then, writing x in the basis of V as [itex]x = \sum_{j=1}^r \alpha_j x_j + \sum_{i=1}^s \lambda_i y_i[/itex],
[tex]z = f \Big(\sum_{j=1}^r \alpha_j x_j + \sum_{i=1}^s \lambda_i y_i \Big) = \sum_{j=1}^r \alpha_j f(x_j) + \sum_{i=1}^s \lambda_i f(y_i)= 0 + \sum_{i=1}^s \lambda_i f(y_i)[/tex]
(the first sum vanishes because each xj is in ker(f)).
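If I read that right (my own note), it shows every z in Im(f) is a linear combination of f(y1), ... , f(ys), so together with the linear independence above, {f(y1), ... , f(ys)} is a basis of Im(f) and dim(Im(f)) = s.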
and then he says dim(V) = r + s = dim(ker(f)) + dim(Im(f))
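Counting it out, as I understand it: the basis {x1, ... , xr}[itex] \cup [/itex]{y1, ... , ys} of V has r + s elements, r = dim(ker(f)) by construction, and s = dim(Im(f)) from the previous step, so
[tex]\dim(V) = n = r + s = \dim(\ker(f)) + \dim(\mathrm{Im}(f))[/tex]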
It's the last few steps I can't seem to fully justify in my head (the spelled-out notes above are my own attempts). Any help would be appreciated (and also checking whether I copied this down wrong from the board).