Bases, Subspaces, Orthogonal Complements and More to Come

  • #1
StopWatch

Homework Statement



Show that the set W consisting of all vectors in R4 that are orthogonal to both X and Y is a subspace of R4. Here X and Y are vectors such that X = (1,0,0,1) and Y = (1,0,1,0).

Part b) Find a basis for W.


The Attempt at a Solution



So I know that to be a subspace, W has to be closed under scalar multiplication and vector addition, and it has to contain the zero vector. How do I set up the conditions of the space though? Are all the vectors orthogonal to these two going to be those such that c1X + c2Y = 0 only has c1 = c2 = 0 as a solution?

The second part I know works off of the first. I was really just hoping someone could point me in the right direction. Thanks!
 
  • #2
Pick a point in W, say w=(a,b,c,d). Use the dot product. You need to have w.X=0 and w.Y=0. That's two equations in the four variables a, b, c and d. Solve them. What do the infinite number of solutions look like??
 
  • #3
So then w.X = a + d = 0 and w.Y = a + c = 0, so a = -c and a = -d ==> -c = -d or c = d = -a? I think I'm confusing myself. Do the infinite number of solutions look like the zero vector then? Such that a + -a = 0?
 
  • #4
StopWatch said:
So then w.X = a + d = 0 and w.Y = a + c = 0, so a = -c and a = -d ==> -c = -d or c = d = -a? I think I'm confusing myself. Do the infinite number of solutions look like the zero vector then? Such that a + -a = 0?

Yes, confusing yourself. But it looks ok so far. a can be anything, b can be anything, but then c=d=(-a). You can write the vector w in terms of a and b as (a,b,-a,-a). Is that a vector space? Can you write a basis?
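
As a quick sanity check (an editorial addition, not part of the original exchange), here is a minimal sympy sketch that solves the two dot-product equations symbolically:

    from sympy import symbols, solve

    a, b, c, d = symbols('a b c d')
    X = (1, 0, 0, 1)
    Y = (1, 0, 1, 0)
    w = (a, b, c, d)

    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    print(solve([dot(w, X), dot(w, Y)], [c, d]))
    # {c: -a, d: -a}  ->  w = (a, b, -a, -a), a two-parameter family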
 
  • #5
Is there anything more I should do in finding the basis other than realizing that (a,b) is it? The reasoning being that -a is a linear combination of a (and the other -a a linear combination of itself of course) so those are the only two left?

I'd also like to introduce three other questions if anyone has time, since I think that one is cleared up then if I'm right about that. Thanks Dick, by the way.

Prove for a transformation T: V --> W:

(a) If dim(V ) > dim(W), then T cannot be 1-1.
(b) If dim(V ) < dim(W), then T cannot be onto.
(c) If dim(V ) = dim(W), then T is 1-1 if and only if T is onto.

First, I'd like to make sure I have these terms straight in my head:

Kernel(T) = all the things the transformation maps to 0, including the 0 vector.
Dimension of a vector space or subspace = the number of vectors in its basis

And what are the rules exactly on mapping to a subspace versus mapping from a vector space to a vector space? Am I only mapping from a vector space to a vector space if I'm mapping ALL the elements from one to another? Is it otherwise subspace to subspace, vector space to subspace, or subspace to vector space, depending on whether SOME --> SOME, ALL --> SOME, or SOME --> ALL, respectively?

Is the preimage the same as the domain?

Sorry that got a bit messy, do those questions make sense?
 
  • #6
StopWatch said:
Is there anything more I should do in finding the basis other than realizing that (a,b) is it? The reasoning being that -a is a linear combination of a (and the other -a a linear combination of itself of course) so those are the only two left?

I'd also like to introduce three other questions if anyone has time, since I think that one is cleared up then if I'm right about that. Thanks Dick, by the way.

The first one isn't finished. A basis is a linearly independent set of vectors that span W. What two vectors do you multiply a and b by to get everything in W? That's the basis.
 
  • #7
Is it aX + bY = span(X,Y) = Basis?
 
  • #8
StopWatch said:
Is it aX + bY = span(X,Y) = Basis?

No, not if X and Y are your original two vectors. w=(a,b,-a,-a). If w=a*U+b*V then U and V are the vectors that span W. What are they?
 
  • #9
Are they the standard basis of R4? In this case (1,0,0,0) and (0,1,0,0) for a, b respectively?
 
  • #10
StopWatch said:
Are they the standard basis of R4? In this case (1,0,0,0) and (0,1,0,0) for a, b respectively?

a and b aren't vectors at all. They are just numbers. You want a*U+b*V=(a,b,-a,-a). U and V are the vectors. What are they? You can get this if you concentrate.
 
  • #11
(1,0,-1,-1) and (0,1,0,0)? For U and V respectively? I feel a bit more confident about this answer lol, am I wrong?
 
  • #12
StopWatch said:
(1,0,-1,-1) and (0,1,0,0)? For U and V respectively? I feel a bit more confident about this answer lol, am I wrong?

No, you are right! So the vectors U=(1,0,-1,-1) and V=(0,1,0,0) are a basis for W, yes? They span W. Are they linearly independent?
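
A quick numpy check (an editorial addition) that these two vectors do the job:

    import numpy as np

    X = np.array([1, 0, 0, 1])
    Y = np.array([1, 0, 1, 0])
    U = np.array([1, 0, -1, -1])
    V = np.array([0, 1, 0, 0])

    # both are orthogonal to X and Y, so both lie in W:
    print(U @ X, U @ Y, V @ X, V @ Y)                # 0 0 0 0
    # and they are linearly independent:
    print(np.linalg.matrix_rank(np.vstack([U, V])))  # 2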
 
  • #13
Yes, since no linear combination of one can make the other. Thanks for guiding me through that question! You don't happen to have time for those other ones up there do you? No worries if you don't. I'm going to post a try at a, b and c above either way!
 
  • #14
StopWatch said:
Yes, since no linear combination of one can make the other. Thanks for guiding me through that question! You don't happen to have time for those other ones up there do you? No worries if you don't. I'm going to post a try at a, b and c above either way!

It's kind of late here, so I won't be around much longer. But somebody else will jump in or I'll be around tomorrow. This will get easier as you go along.
 
  • #15
BTW opening separate threads for separate questions is a really good idea. People tend to ignore threads with a huge number of posts on them. I'm one of them.
 
  • #16
StopWatch said:
Is there anything more I should do in finding the basis other than realizing that (a,b) is it? The reasoning being that -a is a linear combination of a (and the other -a a linear combination of itself of course) so those are the only two left?

I'd also like to introduce three other questions if anyone has time, since I think that one is cleared up then if I'm right about that. Thanks Dick, by the way.

Prove for a transformation T: V --> W:

(a) If dim(V ) > dim(W), then T cannot be 1-1.
(b) If dim(V ) < dim(W), then T cannot be onto.
(c) If dim(V ) = dim(W), then T is 1-1 if and only if T is onto.

First, I'd like to make sure I have these terms straight in my head:

Kernel(T) = all the things the transformation maps to 0, including the 0 vector.
Dimension of a vector space or subspace = the number of vectors in its basis

And what are the rules exactly on mapping to a subspace versus mapping from a vector space to a vector space? Am I only mapping from a vector space to a vector space if I'm mapping ALL the elements from one to another? Is it otherwise subspace to subspace, vector space to subspace, or subspace to vector space, depending on whether SOME --> SOME, ALL --> SOME, or SOME --> ALL, respectively?

Is the preimage the same as the domain?

Sorry that got a bit messy, do those questions make sense?

first, some basics. a mapping is just a function. so if you have a mapping f:V→W where V and W are vector spaces, it has to be defined for every v in V, but it does not have to take on every value in W. for example f(v) = 0 is a perfectly good mapping. of course, we're usually only interested in LINEAR mappings, because those preserve "vector-space-ness".

one of the first things you usually show is that if T is a linear transformation T:V→W, that im(T) (or the image set T(V)) is a subspace of W. ker(T) on the other hand, is a subspace of V. these concepts are "dual" to each other: a smaller kernel means a bigger image. at the extremes:

T:V→W given by T(v) = 0, has ker(T) = V, and im(T) = {0}.
T:V→V given by T(v) = v, has ker(T) = {0}, and im(T) = V.

if a kernel is as small as possible, T is injective.
if an image is as large as possible, T is surjective. note: these are not quite "technical" definitions, they are just to give you an idea of what is going on.
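
an editorial aside: you can see both subspaces concretely by stacking the X and Y of post #1 as the rows of a matrix and letting sympy compute them:

    from sympy import Matrix

    A = Matrix([[1, 0, 0, 1],
                [1, 0, 1, 0]])  # T: R^4 -> R^2, rows are X and Y

    print(A.nullspace())    # 2 basis vectors -- exactly the W of post #1
    print(A.columnspace())  # 2 basis vectors -- im(T) is all of R^2
    # dim ker(T) + dim im(T) = 2 + 2 = 4 = dim R^4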

now, for your questions:

a) suppose dim(V) > dim(W). this means that there are more elements in any basis for V, than there are in any basis for W. pick a basis, any basis, for V, say:

{v1,v2,...,vn}

consider the set {T(v1),T(v2),...,T(vn)}. can this set be linearly independent? if not, then by the linearity of T...

b) use the same idea, except now the set of T(vj) has fewer elements than any basis for W. can it be a basis for W? is there any difference between im(T) and
span(T(v1),T(v2),...,T(vn))? if W has elements not in this span, then...

c) if you've answered (a) and (b), it should be clear how to prove this.
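
an editorial aside: all three parts can be organized around a single identity, the rank-nullity theorem, assuming it is available in your course:

    dim(V) = dim(ker(T)) + dim(im(T)).

for (a): if T were 1-1, then dim(ker(T)) = 0, so dim(im(T)) = dim(V) > dim(W), impossible for a subspace of W. for (b): dim(im(T)) <= dim(V) < dim(W), so im(T) cannot be all of W. for (c): when dim(V) = dim(W), dim(ker(T)) = 0 holds exactly when dim(im(T)) = dim(W), so 1-1 and onto occur together.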
 
  • #17
That's because the Kernel is the subspace of things that get mapped to zero by the transformation right? And injection requires that 0 be the only thing mapped to 0 (i.e. the transformation is linearly independent) and surjection requires that everything in W get mapped to (i.e. transformation spans W) and so together a bijection maps the basis of one space to the basis of the other? Is there something more nuanced to think about regarding that?

I'll read over your responses to a,b,c, and think about them before responding to that part. Thanks!
 
  • #18
StopWatch said:
That's because the Kernel is the subspace of things that get mapped to zero by the transformation right? And injection requires that 0 be the only thing mapped to 0 (i.e. the transformation is linearly independent) and surjection requires that everything in W get mapped to (i.e. transformation spans W) and so together a bijection maps the basis of one space to the basis of the other? Is there something more nuanced to think about regarding that?

I'll read over your responses to a,b,c, and think about them before responding to that part. Thanks!

that is correct, but we can say more. not only does injective --> ker(T) = {0}, but the reverse is true: ker(T) = {0} --> injective. a similar statement holds for surjectivity.
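
worked out, the reverse direction takes one line: suppose ker(T) = {0} and T(u) = T(v). by linearity,

    T(u - v) = T(u) - T(v) = 0,

so u - v lies in ker(T) = {0}, forcing u = v; hence T is injective.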

the concept of linear independence is intimately tied up with the idea of injectivity.

the concept of spanning is intimately tied up with surjectivity.

so the question of two vector spaces being isomorphic as vector spaces, reduces to the question of two bases being isomorphic...as SETS. we've reduced an algebraic structure question, to a problem of COUNTING. that's...powerful.
 
  • #19
a) suppose dim(V) > dim(W). this means that there are more elements in any basis for V, than there are in any basis for W. pick a basis, any basis, for V, say:

{v1,v2,...,vn}

consider the set {T(v1),T(v2),...,T(vn)}. can this set be linearly independent? if not, then by the linearity of T...

The set can't be linearly independent because at least two elements of V are going to have to map to only one element of W? Is there a more formal way of saying this? By the linearity of T...I'm not sure what you mean actually lol. I probably should.

b) use the same idea, except now the set of T(vj) has fewer elements than any basis for W. can it be a basis for W? is there any difference between im(T) and
span(T(v1),T(v2),...,T(vn))? if W has elements not in this span, then...

It can be a basis for W since W could have equal dimension to V, but then it wouldn't be surjective right? Because not everything would be mapped to? The im(T) will be smaller than the span?
 
  • #20
Also, that use of isomorphism really is cool.
 
  • #21
that's the general idea.

the specifics:

if the T(v_j) are not all linearly independent, then (relabeling if necessary so that T(v_n) has a nonzero coefficient in the dependence relation) we can write:

T(v_n) = c_1 T(v_1) + ... + c_{n-1} T(v_{n-1}), and by linearity this equals

T(c_1 v_1 + ... + c_{n-1} v_{n-1}).

but v_n and c_1 v_1 + ... + c_{n-1} v_{n-1} are two different vectors, by the linear independence of the v_j, and T maps them to the same element of W, so T is not one-to-one.
 
  • #22
I'm still not sure I understand all the details, but I may just need to think on it more.

One other quick question for anyone: -(1/4<u,u> - 1/4<u,v>) + (1/4<v,u> - 1/4<v,v>) apparently equals <u,v> (the inner product) - why? Any help would be appreciated, and again thanks to everyone that's helped so far.
 
  • #23
Unless there are special properties here, it does NOT equal <u, v>. For a vector space over the real numbers, <u, v> = <v, u>, so that

-((1/4)<u,u> - (1/4)<u,v>) + ((1/4)<v,u> - (1/4)<v,v>)
    = -(1/4)<u,u> + (1/2)<u,v> - (1/4)<v,v>
    = -(1/4)(<u,u> - 2<u,v> + <v,v>),

which you can then write as -(1/4)<u-v, u-v>.
 
  • #24
Ohhh I'm an idiot: Take exactly what I wrote, but with all positive coefficients (i.e. anywhere there is a negative put a positive) and add that to what I wrote. It's equal to <u,v> now isn't it? I think I figured out where I went wrong.
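
An editorial note, writing out what the last two posts combine to, for a real inner product space — this is the polarization identity:

    (1/4)<u+v, u+v> - (1/4)<u-v, u-v>
        = (1/4)(<u,u> + 2<u,v> + <v,v>) - (1/4)(<u,u> - 2<u,v> + <v,v>)
        = <u,v>.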
 

FAQ: Bases, Subspaces, Orthogonal Complements and More to Come

What is a basis in linear algebra?

A basis is a set of linearly independent vectors that can be used to represent any vector in a vector space through linear combinations. In other words, a basis spans the entire vector space while containing no redundant vectors.
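
Concretely, if {b1, ..., bn} is a basis of a vector space V, then every vector v in V has exactly one expansion

    v = c1*b1 + c2*b2 + ... + cn*bn

with scalar coefficients c1, ..., cn; existence comes from spanning, uniqueness from linear independence.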

How is a subspace defined?

A subspace is a subset of a vector space that satisfies three properties: closure under vector addition, closure under scalar multiplication, and contains the zero vector. This means that when any vectors in a subspace are added or multiplied by a scalar, the resulting vector also lies within the subspace.
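
As an illustration, here is a small sympy sketch (an editorial addition) checking the three properties for the subspace W = {(a, b, -a, -a)} found in the thread above:

    from sympy import symbols, simplify

    a1, b1, a2, b2, k = symbols('a1 b1 a2 b2 k')
    w = lambda a, b: (a, b, -a, -a)  # parametrization of W

    # contains the zero vector:
    print(w(0, 0))  # (0, 0, 0, 0)

    # closed under addition: w(a1, b1) + w(a2, b2) == w(a1 + a2, b1 + b2)
    s = tuple(x + y for x, y in zip(w(a1, b1), w(a2, b2)))
    print(all(simplify(x - y) == 0 for x, y in zip(s, w(a1 + a2, b1 + b2))))  # True

    # closed under scalar multiplication: k * w(a1, b1) == w(k*a1, k*b1)
    m = tuple(k * x for x in w(a1, b1))
    print(all(simplify(x - y) == 0 for x, y in zip(m, w(k * a1, k * b1))))    # True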

What is the orthogonal complement of a subspace?

The orthogonal complement of a subspace is the set of all vectors that are perpendicular to every vector in the subspace. In other words, the dot product of any vector in the orthogonal complement with any vector in the subspace is equal to zero.
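
In symbols, for a subspace W of an inner product space V:

    W⊥ = { v in V : <v, w> = 0 for every w in W }.

The W of the thread above is exactly the orthogonal complement of span(X, Y) in R4.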

How is the orthogonal projection of a vector onto a subspace calculated?

The orthogonal projection of a vector onto a subspace is the vector in the subspace that is closest to the original vector. It can be computed with the orthogonal projection formula: take the inner product of the vector with each basis vector of the subspace, scale by that basis vector's squared length, and sum the resulting multiples of the basis vectors (this is simplest when the basis is orthogonal).
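
For an orthogonal basis {u1, ..., uk} of the subspace W, the standard formula reads

    proj_W(v) = (<v,u1>/<u1,u1>)*u1 + ... + (<v,uk>/<uk,uk>)*uk,

and each term reduces to <v,ui>*ui when the basis is orthonormal.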

What are some applications of bases, subspaces, and orthogonal complements?

Bases, subspaces, and orthogonal complements are used in a variety of applications, such as computer graphics, data compression, and solving systems of linear equations. They are also essential in understanding and solving problems in quantum mechanics and other areas of physics that involve vector spaces.
