Linear Algebra: Help with Span Proofs

In summary: [itex]M_{2\times 2}(P_2')[/itex], the set of 2x2 matrices whose entries are polynomials of degree at most 2 with nonzero constant term, is not a subspace of the set of 2x2 matrices whose entries are polynomials of degree at most 2. It cannot be a subspace because it does not contain the null matrix, which any subspace must contain.
  • #1
elle
Span Proof help please!

Hi,

I'm working through a textbook on Linear Algebra and just need some info on some proofs.

The text gives the theorem but only provides a proof for one of the parts, and I'm really curious how to prove b) and also c) so that I can understand the theorem a bit better. It says the proofs are easy, but I've just started learning this topic so I'm not too clued up :redface: I've looked around on the internet and in other books for these proofs but can't find anything :frown:

Can anyone give me a slight briefing on the proofs?

Many many thanks!
 
  • #2
for b)

denote [itex]A=\left\{v_1,v_2,...,v_n \right\}[/itex], [itex]B=\left\{v_1,v_2,...,v_n,...,v_{n+m} \right\}[/itex]

Then u in sp(A) is of the form

[tex]u=\sum_{i=1}^n c_iv_i = \sum_{i=1}^{n+m}c_iv_i[/tex]

where [itex]c_i =0 \ \forall i\geq n+1[/itex]. This shows that u is in sp(B) also.
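A small concrete illustration of the same idea (my own example, not part of the original post): take n = 2 and m = 1, so [itex]A=\{v_1,v_2\}[/itex] and [itex]B=\{v_1,v_2,v_3\}[/itex]. A typical element of sp(A), say

[tex]u = 2v_1 + 3v_2 = 2v_1 + 3v_2 + 0\,v_3,[/tex]

can be rewritten with a zero coefficient on the extra vector, which exhibits it as a linear combination of the vectors of B, i.e. u is in sp(B).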
 
  • #3
for c)

Subspaces have the property of closure, i.e. if u, v are in A, then au+bv is in A also; in other words, any linear combination of vectors of A is in A. But that is all sp(A) is: the set of all linear combinations of vectors of A. So since the elements of sp(A) are all in A, [itex]sp(A)\subset A[/itex].

And according to the first part of the thm, [itex]A\subset sp(A)[/itex].

These two relations between the sets can only hold simultaneously if they are equal: A = sp(A).
 
  • #4
quasar987 said:
for b)

denote [itex]A=\left\{v_1,v_2,...,v_n \right\}[/itex], [itex]B=\left\{v_1,v_2,...,v_n,...,v_{n+m} \right\}[/itex]

Then u in sp(A) is of the form

[tex]u=\sum_{i=1}^n c_iv_i = \sum_{i=1}^{n+m}c_iv_i[/tex]

where [itex]c_i =0 \ \forall i\geq n+1[/itex]. This shows that u is in sp(B) also.

Thanks very much for the proofs! Very clear and easy to understand :) I'm still trying to get my head around the one for b), but I'm getting there :-p c) is very clear to me now, and it has confirmed the first half of my own attempted proof, so it does reassure me that I am taking some of this in :rolleyes:

I came across this interesting one and was wondering if you had the time to help me out on this one too?

http://i9.tinypic.com/4dfhoie.jpg

Your help very much appreciated :smile:
 
  • #5
In this one, V is a vector space and U,W are subspaces. The author asks "Is there a largest subspace of V contained in both U and W?".

If you just look at U and W as sets, then [itex]U\cap W[/itex] is the largest set contained in both U and W (of course it is; [itex]U\cap W[/itex] is defined as the set of all the elements common to both U and W). Now we ask: is [itex]U\cap W[/itex] a subspace? If it is, then it is the largest subspace of V contained in both U and W.

To show that it is a subspace, consider u, v in [itex]U\cap W[/itex] and two scalars a, b. Then u and v are in both U and W (by definition of intersection). But both U and W are subspaces, so they have the property of closure, meaning that au+bv is in both U and W. So au+bv is in [itex]U\cap W[/itex] (by definition of intersection).
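A quick concrete example of this (mine, not from the thread): in [itex]V=\mathbb{R}^3[/itex], take the two coordinate planes through the origin

[tex]U=\{(x,y,0) : x,y\in\mathbb{R}\}, \qquad W=\{(0,y,z) : y,z\in\mathbb{R}\}.[/tex]

Their intersection [itex]U\cap W=\{(0,y,0) : y\in\mathbb{R}\}[/itex] is the y-axis, which is again a subspace, and it is clearly the largest subspace of [itex]\mathbb{R}^3[/itex] contained in both planes.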
 
  • #6
Whoa! I'm so confused.

What is b) and c)?

quasar987 is one hell of a mind reader!
 
  • #7
lol, there was a file attached in the OP.
 
  • #8
JasonRox said:
Whoa! I'm so confused.

What is b) and c)?

quasar987 is one hell of a mind reader!

LOL! Sorry, I took the link out because the website only allows images to be uploaded for a certain period of time, and since my question has been answered I removed it :redface: sorry!

Okay I have just one more question, if you have the time to help me out. Thanks very much for being so patient with me lol! As you can see I am a beginner and I'm finding linear algebra very tedious :frown:

It is stated as an example, but the author hasn't gone into an explanation, so I'm kind of confused and curious at the same time. I don't really know how to start proving or showing that such-and-such is a subspace when it concerns matrices :confused:

EDITED: Thanks again!
 
  • #9
An easy test to check if a subset of a vector space is a subspace is to check whether the null element is in it. Suppose V is a vector space and A is a subset of V, but A does not contain 0. Then A is not a subspace because subspaces have the property of closure, meaning that for any v in A and constant c, cv is in A. In particular, for c=0, the closure property says that 0v=0 is in A. So you see that if 0 is not in A, A cannot have the closure property and thus cannot be a subspace.

Now consider [itex]P_2' = \{p\in P_2 : a\neq 0\}[/itex]. Since [itex]a\neq 0[/itex], the null polynomial (0t+0) is not in [itex]P_2'[/itex].

And so, with that in mind, if we consider [itex]M_{2\times 2}(P_2')[/itex], the set of 2x2 matrices whose entries are elements of [itex]P_2'[/itex], then the null matrix [tex]\left( \begin {array} {cc} 0 & 0 \\ 0 & 0 \end{array}\right)[/tex] is not in [itex]M_{2\times 2}(P_2')[/itex]. So [itex]M_{2\times 2}(P_2')[/itex] is not a subspace of [itex]M_{2\times 2}(P)[/itex].
 
  • #10
quasar987 said:
An easy test to check if a subset of a vector space is a subspace is to check whether the null element is in it. Suppose V is a vector space and A is a subset of V, but A does not contain 0. Then A is not a subspace because subspaces have the property of closure, meaning that for any v in A and constant c, cv is in A. In particular, for c=0, the closure property says that 0v=0 is in A. So you see that if 0 is not in A, A cannot have the closure property and thus cannot be a subspace.

Now consider [itex]P_2' = \{p\in P_2 : a\neq 0\}[/itex]. Since [itex]a\neq 0[/itex], the null polynomial (0t+0) is not in [itex]P_2'[/itex].

And so, with that in mind, if we consider [itex]M_{2\times 2}(P_2')[/itex], the set of 2x2 matrices whose entries are elements of [itex]P_2'[/itex], then the null matrix [tex]\left( \begin {array} {cc} 0 & 0 \\ 0 & 0 \end{array}\right)[/tex] is not in [itex]M_{2\times 2}(P_2')[/itex]. So [itex]M_{2\times 2}(P_2')[/itex] is not a subspace of [itex]M_{2\times 2}(P)[/itex].

Hmm okay, I think I'll have to think that over carefully... just out of curiosity, what would a matrix in [itex]M_{2\times 2}(P_2')[/itex] look like if I had to write it out? Like, what would the entries be, compared to a matrix in [itex]M_{2\times 2}(P)[/itex]? :confused:
 
  • #11
Well if I understood the book correctly, P_2 is the set of all first degree polynomials (why didn't he call it P_1?), so the elements of [itex]M_{2\times 2}(P_2)[/itex] are of the form

[tex]\left( \begin {array} {cc} a_1t+b_1 & a_2t+b_2 \\ a_3t+b_3 & a_4t+b_4 \end{array}\right)[/tex]
 
  • #12
Hmm, good question! But thanks! I think I'll stop with the questions for now and have a good read over the stuff again. Thanks again! :biggrin:
 
  • #13
quasar987 said:
An easy test to check if a subset of a vector space is a subspace is to check whether the null element is in it. Suppose V is a vector space and A is a subset of V, but A does not contain 0. Then A is not a subspace because subspaces have the property of closure, meaning that for any v in A and constant c, cv is in A. In particular, for c=0, the closure property says that 0v=0 is in A. So you see that if 0 is not in A, A cannot have the closure property and thus cannot be a subspace.

Now consider [itex]P_2' = \{p\in P_2 : a\neq 0\}[/itex]. Since [itex]a\neq 0[/itex], the null polynomial (0t+0) is not in [itex]P_2'[/itex].

And so, with that in mind, if we consider [itex]M_{2\times 2}(P_2')[/itex], the set of 2x2 matrices whose entries are elements of [itex]P_2'[/itex], then the null matrix [tex]\left( \begin {array} {cc} 0 & 0 \\ 0 & 0 \end{array}\right)[/tex] is not in [itex]M_{2\times 2}(P_2')[/itex]. So [itex]M_{2\times 2}(P_2')[/itex] is not a subspace of [itex]M_{2\times 2}(P)[/itex].

I would just check for closure of scalar multiplication. I never bother looking for the zero vector.
 
  • #14
quasar987 said:
Well if I understood the book correctly, P_2 is the set of all first degree polynomials (why didn't he call it P_1?), so the elements of [itex]M_{2\times 2}(P_2)[/itex] are of the form

[tex]\left( \begin {array} {cc} a_1t+b_1 & a_2t+b_2 \\ a_3t+b_3 & a_4t+b_4 \end{array}\right)[/tex]

I don't believe most books use P_2 for the set of all first degree polynomials. I'm not 100% sure, but for some reason I doubt it. Maybe the author was looking for an easy way to denote the vector space while making the dimension of the space obvious.

I remember once seeing something of the sort like...

The dimension of P_n is n+1.

Which is precisely what you think it should be.
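(For instance, under that convention [itex]P_2[/itex] would be [itex]\{a+bt+ct^2\}[/itex], with basis [itex]\{1, t, t^2\}[/itex] and hence dimension 3 = 2 + 1.)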
 
  • #15
yeah scalar multiplication is also a good way of checking...
 
  • #16
A scalar is just a number. The test for closure under scalar multiplication would be checking if for any real number c,

[tex]c\left( \begin {array} {cc} a_1t+b_1 & a_2t+b_2 \\ a_3t+b_3 & a_4t+b_4 \end{array}\right)\in M_{2\times 2}(P_2')[/tex]

But obviously, for c=0, the resulting matrix (the null matrix) is not in [itex]M_{2\times 2}(P_2')[/itex], because the null polynomial is not in [itex]P_2'[/itex]. However, for any other scalar c, the above matrix is in [itex]M_{2\times 2}(P_2')[/itex].
 
  • #17
quasar987 said:
A scalar is just a number. The test for closure under scalar multiplication would be checking if for any real number c,

[tex]c\left( \begin {array} {cc} a_1t+b_1 & a_2t+b_2 \\ a_3t+b_3 & a_4t+b_4 \end{array}\right)\in M_{2\times 2}(P_2')[/tex]

But obviously, for c=0, the resulting matrix (the null matrix) is not in P_2, because the null polynomial is not in P_2. However, for any other scalar c, the above matrix is in [itex]M_{2\times 2}(P_2')[/itex].

lol yeah i had to re-read JasonRox's post and realized it was scalar multiplication :redface:

There's an example here in the book where the author has taken two matrices, added them together, and then multiplied by a scalar, hence concluding that the set is a subspace of the space of 2x2 matrices... is it necessary to check the addition property as well? :confused:
 
  • #18
quasar987 said:
A scalar is just a number. The test for closure under scalar multiplication would be checking if for any real number c,

[tex]c\left( \begin {array} {cc} a_1t+b_1 & a_2t+b_2 \\ a_3t+b_3 & a_4t+b_4 \end{array}\right)\in M_{2\times 2}(P_2')[/tex]

But obviously, for c=0, the resulting matrix (the null matrix) is not in P_2, because the null polynomial is not in P_2. However, for any other scalar c, the above matrix is in [itex]M_{2\times 2}(P_2')[/itex].

It's not in P_2?

I always thought 0+0t was a polynomial of degree 1. I'm sure it is.
 
  • #19
elle said:
lol yeah i had to re-read JasonRox's post and realized it was scalar multiplication :redface:

There's an example here in the book where the author has taken two matrices, added them together, and then multiplied by a scalar, hence concluding that the set is a subspace of the space of 2x2 matrices... is it necessary to check the addition property as well? :confused:

Yes, you must check both closure by scalar multiplication and addition.

This should be a theorem in your text.
 
  • #20
JasonRox said:
Yes, you must check both closure by scalar multiplication and addition.

This should be a theorem in your text.

Ohh yeah I found it :biggrin:
 
  • #21
JasonRox said:
It's not in P_2?

I always thought 0+0t was a polynomial of degree 1. I'm sure it is.

I changed the P_2 for P_2' in my post. I apologize for the confusion.
 
  • #22
quasar987 said:
I changed the P_2 for P_2' in my post. I apologize for the confusion.

Oh, now I see what you did. :approve:
 
  • #23


I have a problem.
Suppose that [itex]\{u_1,u_2,\ldots,u_m\}[/itex] are vectors in [itex]\mathbb{R}^n[/itex]. Prove directly that [itex]\mathrm{span}\{u_1,u_2,\ldots,u_m\}[/itex] is a subspace of [itex]\mathbb{R}^n[/itex].
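
A sketch of how that proof usually goes (my own outline, not a reply from the thread): the span contains the zero vector (take every coefficient equal to 0), and if [itex]v=\sum_{i=1}^m a_iu_i[/itex] and [itex]w=\sum_{i=1}^m b_iu_i[/itex] are any two elements of the span and c, d are scalars, then

[tex]cv+dw=\sum_{i=1}^m (ca_i+db_i)\,u_i,[/tex]

which is again a linear combination of [itex]u_1,\ldots,u_m[/itex]. So the span is nonempty and closed under addition and scalar multiplication, and is therefore a subspace of [itex]\mathbb{R}^n[/itex].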
 

FAQ: Linear Algebra: Help with Span Proofs

What is the definition of span in linear algebra?

The span of a set of vectors is the set of all possible linear combinations of those vectors. In other words, it is the space that can be reached by scaling and adding the given vectors.
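
For example, in R^3 the span of (1, 0, 0) and (0, 1, 0) is the xy-plane {(x, y, 0)}: every linear combination of the two vectors lies in that plane, and every point of the plane can be written as x(1, 0, 0) + y(0, 1, 0).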

How do you prove that a vector is in the span of a given set of vectors?

To prove that a vector is in the span of a given set of vectors, you can use the definition of span and show that the vector can be written as a linear combination of the given vectors. This can be done by finding the coefficients that satisfy the equation and showing that they work.
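
As a quick numerical illustration (a minimal NumPy sketch with made-up example vectors, not part of the original answer), one can look for the coefficients by solving a least-squares problem and then checking that the candidate vector is reproduced exactly:

[code]
import numpy as np

# Columns of A are the spanning vectors v1 = (1,0,1) and v2 = (0,1,1)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Candidate vector: w = 2*v1 + 3*v2, so it should lie in span{v1, v2}
w = np.array([2.0, 3.0, 5.0])

# Best coefficients in the least-squares sense
c, *_ = np.linalg.lstsq(A, w, rcond=None)

# w is in the span exactly when A @ c reproduces w (up to rounding)
print(c, np.allclose(A @ c, w))   # approx [2. 3.] True
[/code]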

Can a set of vectors span a vector space without being linearly independent?

Yes. A set of vectors can span a vector space without being linearly independent. If the vectors are linearly dependent, then some of them can be written as linear combinations of the others, so those vectors are redundant and can be removed without changing the span. For example, {(1,0), (0,1), (1,1)} spans R^2 but is not linearly independent; a spanning set only needs to be linearly independent if it is to be a basis.

How do you prove that a set of vectors is linearly independent?

To prove that a set of vectors is linearly independent, you can show that the only solution to the equation where the vectors are set equal to the zero vector is when all the coefficients are zero. This means that none of the vectors can be written as a linear combination of the others.
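
A small numerical companion to this (again a NumPy sketch with made-up vectors): the columns of a matrix are linearly independent exactly when the matrix's rank equals the number of columns, which is the same as the homogeneous system having only the zero solution.

[code]
import numpy as np

# Columns are the vectors to test; the third is the sum of the first two
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

# rank < number of columns means some nonzero combination gives 0,
# i.e. the columns are linearly dependent
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1], rank == A.shape[1])   # 2 3 False -> dependent
[/code]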

What is the difference between the column space and the row space of a matrix?

The column space of a matrix is the span of its column vectors, while the row space is the span of its row vectors. The column space represents all possible linear combinations of the column vectors, while the row space represents all possible linear combinations of the row vectors. In general, the column space and row space are different, but they can be equal for special types of matrices, such as symmetric matrices.
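
A short SymPy sketch of that difference (my example, not from the FAQ): for a non-symmetric matrix the two spaces generally differ, even though they always have the same dimension.

[code]
from sympy import Matrix

# Non-symmetric example
A = Matrix([[1, 1],
            [0, 0]])

# Basis vectors for each space
print(A.columnspace())  # [Matrix([[1], [0]])]  -> the x-axis in R^2
print(A.rowspace())     # [Matrix([[1, 1]])]    -> the line y = x in R^2
[/code]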
