Given specific v, dimension of subspace of L(V, W) where Tv=0?

  • #1
zenterix
Homework Statement
Suppose ##V## and ##W## are finite-dimensional vector spaces. Let ##v\in V##. Let

##E=\{T\in L(V,W): Tv=0\}##

(a) Show that ##E## is a subspace of ##L(V,W)##.

(b) Suppose ##v\neq 0##. What is the dimension of ##E##?
Relevant Equations
(a)

First of all, the transformation ##T## defined by ##Tw=0## for all ##w\in V## is in ##E##.

Let ##T_1, T_2\in E##.

Then, ##(a_1T_1+a_2T_2)v=0## so that ##a_1T_1+a_2T_2\in E## and since the closure axioms are satisfied for the set ##E## then we can infer it is a subspace.
I was stuck when I started writing this question. I think I solved the problem in the course of writing this post.

My solution is as follows:

Consider any basis ##B## of ##V## that includes ##v## (possible since ##v\neq 0##): ##(v, v_2, ..., v_n)##.

##L(V,W)##, where ##\dim{(V)}=n## and ##\dim{(W)}=m##, is isomorphic to ##F^{m,n}##, the vector space of ##m## by ##n## matrices with entries in the field ##F##.

In other words, each linear map from ##V## to ##W## relative to specific bases in ##V## and ##W## is represented by a unique ##m## by ##n## matrix.
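
To make the correspondence explicit (this is the standard matrix-of-a-linear-map convention; a basis ##(w_1,\dots,w_m)## of ##W##, not named in the post, is assumed), the entries of ##A=\mathcal{M}(T)\in F^{m,n}## are defined by
$$Tv_k=\sum_{j=1}^{m}A_{j,k}\,w_j,\qquad k=1,\dots,n,$$
so the ##k##-th column of ##A## holds the coordinates of the image of the ##k##-th basis vector; in particular, the first column holds the coordinates of ##Tv##.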

Now, using our basis ##B## and a specific basis of ##W##, the matrices representing linear maps ##E## have all elements in the first column equal to zero.

Thus it seems to me the dimension of the space of all such matrices is simply ##m\cdot (n-1)##.
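
For instance (an illustrative choice of ##n=3##, ##m=2##, not from the original post), relative to ##B=(v,v_2,v_3)## and any basis of ##W##, the maps in ##E## are exactly those whose matrices have the form
$$\mathcal{M}(T)=\begin{pmatrix}0 & a & b\\ 0 & c & d\end{pmatrix},\qquad a,b,c,d\in F,$$
which is an ##m\cdot(n-1)=4##-dimensional space of matrices.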

Another way to see this is the following:

Linear maps in ##E## always map ##v## to ##0\in W## and map vectors from ##\text{span}(v_2,...,v_n)## to vectors in ##W##.

Every vector in ##V## with a non-negative ##v## component gets mapped the same way as that vector without the ##v## component, which is in ##\text{span}(v_2,...,v_n)##.
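
In symbols (a restatement of the previous sentence, writing ##c## for the ##v##-coordinate of ##u##): if ##u=cv+u'## with ##u'\in\text{span}(v_2,...,v_n)##, then for any ##T\in E##,
$$Tu=cTv+Tu'=Tu'.$$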

Thus, the variation between the different linear maps in ##E## is the variation in how they map the vectors in ##\text{span}(v_2,...,v_n)##.

Thus, ##E## has the same dimension as ##L(\text{span}(v_2,...,v_n), W)##.

Now, ##\text{span}(v_2,...,v_n)## has dimension ##n-1## and there is a theorem that says that ##\dim L(V,W) = \dim(V)\cdot \dim(W)##.

Thus, we get a dimension for ##E## of ##(n-1)\cdot m##.
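
As a quick numerical sanity check (not part of the proof; an editorial illustration using numpy with the arbitrary choices ##n=5##, ##m=3##): the condition ##Tv=0## is linear in the ##mn## entries of ##T##, so ##\dim E## is the nullity of the evaluation map, which should come out to ##(n-1)m=12##.

    import numpy as np

    # Editorial sanity check: dim{T : Tv = 0} should be (n-1)*m for nonzero v.
    n, m = 5, 3
    rng = np.random.default_rng(0)
    v = rng.standard_normal(n)              # a generic nonzero vector in R^n

    # Flattening T row by row, the map T -> Tv is the m x (m*n) matrix
    # kron(I_m, v), so dim E is its nullity.
    A = np.kron(np.eye(m), v)
    nullity = m * n - np.linalg.matrix_rank(A)
    print(nullity, (n - 1) * m)             # both print 12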
 
  • #2
It looks correct, although it could do with being tidied up. If you state the closure axiom holds without any calculations, then perhaps you need the word "obviously" or "clearly".
 
  • #3
PeroK said:
It looks correct, although it could do with being tidied up. If you state the closure axiom holds without any calculations, then perhaps you need the word "obviously" or "clearly".
He just proved closure
Let ##T_1, T_2\in E##.

Then, ##(a_1T_1+a_2T_2)v=0## so that ##a_1T_1+a_2T_2\in E##
The rest of the sentence is a bit of an unfortunate formulation though. It should probably be put in a new sentence.
 
  • #4
Orodruin said:
He just proved closure
That's not a proof. That's a statement of what is to be proved without further calculation. Or, without an explicit indication that it's trivial.
 
  • #5
PeroK said:
That's not a proof. That's a statement of what is to be proved without further calculation. Or, without an explicit indication that it's trivial.
I disagree; no more calculation is required. It is quite clear that ##T_i v = 0## by definition.
 
  • #6
If ##\mathbb F## is a generic field, I don't believe you can talk generically about "negative" elements. For, e.g., the Reals, meaning vector spaces ##\mathbb R^n## over ##\mathbb R##, then yes. But over, say, the Complexes, there's no such thing as negative complex numbers.

Still, a slight rewrite to make it, at least to my taste, cleaner.

Still, use the fact that for any nonzero ##v## the dimension will be the same. Then, for ##L(V,W)##, with ##\dim(V)=n## and ##\dim(W)=m##, so that ##\dim L(V,W)= m \times n##, choose the basis of matrices that are ##1## in the ##(i,j)## entry, for ##1\leq i\leq m## and ##1\leq j\leq n##, and ##0## elsewhere.
Then, for convenience, let ##v\in V## be the vector ##(1,0,\dots,0)^{T}##.

Then it's clear, as pointed out, that the basis matrices ##M## with ##M(1,0,\dots,0)^T=0## are exactly those whose first column is all ##0##s. There are ##m\times n-m=m(n-1)## such matrices, out of the ##m\times n## basis matrices.
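
Spelled out (an editorial restatement of the count above): with ##E_{ij}## denoting the basis matrix with a ##1## in entry ##(i,j)## and ##0## elsewhere,
$$E_{ij}(1,0,\dots,0)^T=0\iff j\neq 1,$$
and there are ##m(n-1)## pairs ##(i,j)## with ##1\leq i\leq m## and ##2\leq j\leq n##.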
 
  • #7
WWGD said:
If ##\mathbb F## is a generic field, I don't believe you can talk generically about "negative" elements. For, e.g., the Reals, meaning vector spaces ##\mathbb R^n## over ##\mathbb R##, then yes. But over, say, the Complexes, there's no such thing as negative complex numbers.

Still, a slight rewrite to make it, at least to my taste, cleaner.
Good point. I actually don't see where it was necessary to say anything about positive or negative.
 
  • #8
FactChecker said:
I actually don't see where it was necessary to say anything about positive or negative.
I don't either. I had to take another look at the OP to see if there was any mention of negative elements, but didn't find any.
 
  • #9
Mark44 said:
I don't either. I had to take another look at the OP to see if there was any mention of negative elements, but didn't find any.
5th paragraph from the bottom:

" Every vector in v with a negative component...".
 
  • #10
WWGD said:
5th paragraph from the bottom:

" Every vector in v with a negative component...".
I said "every vector in ##V## with a non-negative ##v## component" but I meant "non-zero ##v## component".
 
  • #11
Mark44 said:
I don't either. I had to take another look at the OP to see if there was any mention of negative elements, but didn't find any.
I meant non-zero. I didn't mean to make the distinction between positive and negative.
 
  • #12
PeroK said:
It looks correct, although it could do with being tidied up. If you state the closure axiom holds without any calculations, then perhaps you need the word "obviously" or "clearly".
The closure axioms are

1) Closure under addition

For every pair of elements ##x## and ##y## in a set ##V##, there corresponds a unique element in ##V## called the sum of ##x## and ##y## and denoted ##x+y##.

2) Closure under multiplication by real numbers

For every element ##x## in a set ##V## and every real number ##a##, there corresponds a unique element in ##V## called the product of ##x## and ##a## and denoted ##ax##.

In the case of the set ##E##, let ##T_1## and ##T_2## be any elements in ##E##, and let ##a_1## and ##a_2## be any real numbers.

Consider the linear map ##a_1T_1##.

By the properties of a linear map, we have ##(a_1T_1)v=a_1(T_1v)=0##, so ##a_1T_1\in E##.

Also, ##(T_1+T_2)v=T_1v+T_2v=0+0=0## so ##(T_1+T_2)\in E##.

As for uniqueness, I think we show that these elements of ##E##, namely multiplication of an element of ##E## by a scalar and the sum of two elements of ##E##, are unique by definition of the underlying operations of addition and multiplication by a scalar. I am not sure about this part, however.

In my OP, I thought that by simply showing that ##a_1T_1+a_2T_2## is in ##E## I was showing both closure axioms simultaneously.
 
  • #13
In the original post, there is another correction to be made.

I wrote:

Now, using our basis ##B## and a specific basis of ##W##, the matrices representing linear maps ##E## have all elements in the first column equal to zero.

I should have written:

Now, using our basis ##B## and a specific basis of ##W##, the matrices representing linear maps in ##E## have all elements in the first column equal to zero.
 
  • #14
zenterix said:
In my OP, I thought that by simply showing that ##a_1T_1+a_2T_2## is in ##E## I was showing both closure axioms simultaneously.
My point was that you didn't justify why ##(a_1T_1+a_2T_2)v = 0##.

My response was that perhaps you ought to justify this, or at the very least indicate that you think it's too obvious to need justification. Even if you said "by linearity", that would be something.

In a pure maths exam you might get away with that, but given that this is all there is to part (a), you may be expected to justify it.

All you needed, in my opinion, was to expand that expression:
$$(a_1T_1 + a_2T_2)v = a_1T_1v + a_2T_2v = 0$$
Then there is no doubt you understand why that expression is zero.

Also, I notice now that you didn't explicitly say that you were showing that ##E## is non-empty. Again, we have to guess that you know what you're doing.

These are minor points, but on a pure mathematics course, these small imprecisions will eventually let you down.
 
  • #15
zenterix said:
I said "every vector in ##v## with a non-negative component" but I meant "non-zero component"
Yes. That makes sense. I thought so. Now I'm happy. :-) Thanks.
 
  • #16
Everything here looks correct to me (except for the generic statements about signs appended at the end), but I suggest looking at it a little more intrinsically.

I.e. given ##V,W## there is a bilinear pairing ##L(V,W)\times V\to W## taking ##\langle f,v\rangle## to ##f(v)##. Fixing ##f##, the set of all ##v## such that ##f(v) = 0## is the kernel of ##f##, a subspace of ##V##. Fixing ##v\neq 0##, the set of all ##f## such that ##f(v) = 0## is a subspace of ##L(V,W)##, the kernel of "evaluation at ##v##". (The generic remarks appended to this thread confuse the two kernels.)

Here one wants not the kernel of ##f##, but (the dimension of) the kernel of evaluation at ##v##. If ##\dim V = n## and ##\dim W = m##, then ##\dim L(V,W) = nm##, and since the map "eval at ##v##": ##L(V,W)\to W## is easily seen to be surjective, its kernel has dimension ##nm-m = (n-1)m##, as stated and proved above.
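
In formulas (a restatement via rank-nullity, with ##\mathrm{ev}_v(T)=Tv##): any ##w\in W## is hit, since extending ##v## to a basis of ##V## and sending ##v\mapsto w## and the other basis vectors to ##0## gives a ##T## with ##Tv=w##; hence
$$\dim\ker(\mathrm{ev}_v)=\dim L(V,W)-\dim\operatorname{im}(\mathrm{ev}_v)=nm-m=(n-1)m.$$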

Another perspective takes advantage of the important idea of a quotient space. If ##U## is a subspace of ##V##, then there is a natural map ##V\to V/U## to the quotient space, sending ##U## to zero, and a natural one-to-one correspondence between linear maps ##V\to W## that send ##U## to zero, and ##L(V/U, W)##.

I.e. given a linear map ##V/U\to W## we get, by composition, a linear map ##V\to V/U\to W## that sends each element of ##U## in ##V## to zero in ##W##; and every linear map ##V\to W## sending ##U## to zero factors through ##V/U## in this way.
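
In symbols (a restatement of the factorization just described, with ##\pi:V\to V/U## the quotient map): a map ##S\in L(V/U,W)## corresponds to ##T=S\circ\pi\in L(V,W)##, and
$$T(u)=S(\pi(u))=S(0)=0\quad\text{for all }u\in U.$$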

Hence there is a natural isomorphism between those maps ##V\to W## sending ##v## to zero, and ##L(V/\text{span}\{v\}, W)##. Since ##v\neq 0## implies ##\text{span}\{v\}## is one-dimensional, ##V/\text{span}\{v\}## is ##(n-1)##-dimensional, so ##L(V/\text{span}\{v\}, W)## is ##(n-1)m##-dimensional. QED. (This is the intrinsic way to state the second proof in post #1, since the equivalence classes of the vectors ##v_2,\dots,v_n## give a basis for the quotient space ##V/\text{span}\{v\}##.)
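
Putting the pieces together (an editorial summary of this paragraph):
$$E\cong L\big(V/\operatorname{span}\{v\},\,W\big),\qquad \dim E=\dim\big(V/\operatorname{span}\{v\}\big)\cdot\dim W=(n-1)\,m.$$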
 
