I am still confused how to prove that a set is a basis other than

In summary, proving that a set is a basis involves showing that it is linearly independent and that it spans the vector space. This can be done by forming a matrix with the set's vectors as rows and reducing it to row echelon form. If the matrix is square and reduces to the identity, then the set is linearly independent and spans the entire space. To compare a set's span with another set's span, or with a matrix's row or column space, the same process of forming a matrix and row reducing can be applied. If comparing with a system of homogeneous linear equations, the set's span must be compared to the null space of the matrix formed from the coefficients of the equations. All of these procedures rest on the fundamental definition of a basis as a linearly independent spanning set.
  • #1
elabed haidar
I am still confused how to prove that a set is a basis other than proving it is linearly independent and a generating set. What does that have to do with matrices? Please help.
 
  • #2


It is merely a matter of parsing the definition of a basis. The vectors must be linearly independent and must span the space for which they are presumed to be a basis.

That second part, if you want to be rigorous, is a matter of expressing a given arbitrary vector in the space as a linear combination of the basis vectors. It is sufficient to show that each of the standard basis vectors of the space can be expressed in terms of the vectors in the set; that will involve inverting the matrix you form by using the set of vectors as columns (using their coefficients in the standard basis).

For example:
Given the set of vectors in 2-space:
[tex] \vec{u}= \hat{i}, \vec{v} = \hat{i}+2\hat{j}[/tex]
Their columns of coefficients are:
[tex] \vec{u} = \left(\begin{array}{c}1\\0\end{array}\right), \vec{v} = \left(\begin{array}{c}1\\2\end{array}\right).[/tex]

Then the matrix:
[tex]M = \left(\begin{array}{cc}1 & 1\\ 0 & 2\end{array}\right)[/tex]
Gives, via right multiplication, the transformation from coefficients in the basis {u,v} to coefficients in the basis {i,j}:
[tex] a\vec{u} + b\vec{v}=x\hat{i} + y\hat{j}[/tex]
where
[tex]\left(\begin{array}{c}x\\y\end{array}\right)= \left(\begin{array}{cc}1 & 1\\ 0 & 2\end{array}\right)\left(\begin{array}{c}a\\b\end{array}\right)[/tex]

Since the matrix is invertible, its inverse will give you back the coefficients (a,b) in terms of the (x,y) coefficients of a vector in the {i,j} basis:
[tex] \left(\begin{array}{c}a\\b\end{array}\right)=\left(\begin{array}{cc}1 & 1\\ 0 & 2\end{array}\right)^{-1}\left(\begin{array}{c}x\\y\end{array}\right)[/tex]
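As a quick sanity check, the inversion above can be verified numerically. Here is a small sketch (using SymPy, which is my own choice of tool here; any CAS would do) with the same matrix M and some made-up coefficients (a, b) = (3, 1):

```python
import sympy as sp

# The change-of-basis matrix from above: columns are u and v
# expressed in the standard basis {i, j}.
M = sp.Matrix([[1, 1],
               [0, 2]])

# Nonzero determinant => M is invertible => {u, v} is a basis.
print(M.det())        # 2

# Coefficients (a, b) = (3, 1) in the {u, v} basis ...
ab = sp.Matrix([3, 1])
xy = M * ab           # ... give (x, y) in the {i, j} basis
print(xy.T)           # (4, 2)

# ... and the inverse takes us back to (a, b):
print((M.inv() * xy).T)  # (3, 1)
```

The same check works in any dimension: if the square matrix of column vectors is invertible, the set is a basis.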

Now if you're working with a subspace (e.g. two vectors as a basis of a subspace in 3 or more dimensions) you won't have square matrices, but you can still carry out row operations to see if a specific vector is a linear combination of the proposed basis set, and also whether the vectors are linearly independent.
 
  • #3


Hi elabed haidar :smile:

I'm quite confused about what you want to hear from us here. The standard way of proving that something is a basis is to prove that it is linearly independent and that it spans the vector space.

Of course, sometimes there are shortcuts. Specifically, if you already know the dimension of your vector space, and if it happens to be finite, then it becomes a tad easier. For example, if the dimension of the space is n, and if you have exactly n vectors, then it suffices to show that the vectors are linearly independent OR that the vectors span the space. Thus it suffices to show only one of these two conditions.
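For instance, with three vectors in 3-space, a single rank computation settles both conditions at once. A small SymPy sketch (the vectors are a made-up example):

```python
import sympy as sp

# Three candidate vectors in 3-space, written as rows.
# Since the count matches dim = 3, checking linear independence
# (rank = 3) alone is enough to conclude they form a basis.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 1],
               [1, 1, 0]])
print(A.rank())  # 3 => linearly independent => a basis
```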

But other than that, I see no other way of proving that something is a basis...
 
  • #4


Thank you both, but what I am asking about is the last two sentences jambaugh said, about subspaces. I am still confused on how to prove that a system is a basis.
 
  • #5


elabed haidar said:
Thank you both, but what I am asking about is the last two sentences jambaugh said, about subspaces. I am still confused on how to prove that a system is a basis.

Again, you know how to determine linear independence... to review, use the vectors' components in some basis as the rows of a matrix (since we'll use row operations; if you want, you can use columns and column operations instead, which is just the transposed case).
What we're doing here is forming a matrix whose row space is, by definition, the span of our set of vectors.

Then row reduce until you are in row echelon form.

Row operations on a matrix of row vectors replace rows with linear combinations of rows, and thus each row remains within the span of the original set. The row space of the matrix is unchanged.

If any row becomes all zeros, then you know your set was not linearly independent: you essentially subtracted a linear combination of the other rows from that row, so a linear combination of the other vectors equaled that vector.

Note that if you have as many vectors as the dimension of the space, and they are all linearly independent, then you will (a) have a square matrix of row vectors, and (b) get the identity matrix when you reduce to reduced row echelon form. This means the span of your original set of vectors equals the span of the standard basis, i.e. the whole space.
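That procedure might look like this in SymPy (a sketch; the first two rows are the u = (1,0) and v = (1,2) from post #2, and the appended third row is their made-up sum):

```python
import sympy as sp

# Rows are the candidate vectors.
A = sp.Matrix([[1, 0],
               [1, 2]])
print(A.rref()[0])   # identity => the rows span all of 2-space

# Append the dependent vector u + v = (2, 2): a zero row appears
# in the RREF, flagging the set as not linearly independent.
B = sp.Matrix([[1, 0],
               [1, 2],
               [2, 2]])
print(B.rref()[0])   # last row is all zeros
```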

Now, the answer to the last part of your question will depend on how you are specifying the subspace for which you wish to check a set of vectors for basis status.

If you wish to compare one set's span to another's, you simply form the two matrices (of row vectors); their spans are equal if and only if they have the same reduced row echelon form, modulo any extra rows of all zeros.
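A small sketch of that comparison in SymPy (both sets are made-up examples that happen to span the same plane in 3-space):

```python
import sympy as sp

# Two candidate sets, each written as the rows of a matrix.
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1]])
B = sp.Matrix([[1,  1, 2],
               [1, -1, 0]])

# Equal spans <=> identical RREFs (after discarding zero rows).
print(A.rref()[0] == B.rref()[0])  # True => same subspace
```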

If you wish to compare a set's span with the row space of a matrix, that is just the above case with the first step done for you: the matrix is already the matrix of row vectors for another set, and the process is the same as above.

If you wish to compare a set's span with the column space of a matrix, then you just take the matrix's transpose; the column space of M is the row space of transpose(M).
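In SymPy terms (a sketch with a made-up matrix and candidate set):

```python
import sympy as sp

M = sp.Matrix([[1, 2],
               [0, 0],
               [1, 2]])
# Column space of M = row space of M.T, so reduce the transpose.
print(M.T.rref()[0])   # nonzero rows form a basis of the column space

# Candidate set {(1, 0, 1)}: same RREF (ignoring zero rows)
# means its span equals the column space of M.
S = sp.Matrix([[1, 0, 1]])
print(S.rref()[0])
```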

If you wish to compare a set's span with a set of homogeneous linear constraints, i.e. a system of homogeneous linear equations (linear combinations of coordinates set equal to zero), then you are comparing the set's span to the null space (or kernel) of the matrix formed from the coefficients of the homogeneous linear equations.

You must then get a basis for the kernel (see: http://en.wikipedia.org/wiki/Kernel_(matrix) ) and follow the first procedure.
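A sketch of this last case using SymPy's nullspace (the constraint x + y + z = 0 and the candidate set are made-up examples):

```python
import sympy as sp

# The homogeneous system x + y + z = 0 as a coefficient matrix.
C = sp.Matrix([[1, 1, 1]])
kernel = C.nullspace()   # a basis for the null space, as column vectors

# Candidate spanning set (rows) we want to compare against the kernel.
S = sp.Matrix([[1, -1,  0],
               [0,  1, -1]])

# Stack the kernel basis as rows and apply the first procedure:
# equal RREFs (ignoring zero rows) <=> equal spans.
K = sp.Matrix([v.T for v in kernel])
print(S.rref()[0] == K.rref()[0])  # True => S spans the solution space
```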

That's about every case I can think of at the moment without getting into more abstract spaces such as function spaces. Make sure you understand these procedures in terms of the fundamental definition... a basis is a linearly independent spanning set. All of these procedures directly apply this definition at their core.

This should all be outlined in your textbook, and is available on Wikipedia. Just google it.

Regards,
James Baugh
 
  • #6


thank you sir very much
 

FAQ: I am still confused how to prove that a set is a basis other than

How do I prove that a set is a basis for a vector space?

To prove that a set is a basis for a vector space, you need to show that the set is linearly independent and spans the entire vector space. This means that none of the vectors in the set can be written as a linear combination of the other vectors, and that every vector in the space can be written as a linear combination of the vectors in the set.

Can I use any set of vectors to form a basis?

No, not every set of vectors can form a basis for a vector space. The set must be linearly independent and span the entire space in order to be considered a basis.

What is the difference between a basis and a spanning set?

A basis is a set of vectors that is both linearly independent and spans the entire vector space, while a spanning set is a set of vectors that only spans the space but may not be linearly independent.

How do I know if a set of vectors is linearly independent?

A set of vectors is linearly independent if none of the vectors can be written as a linear combination of the other vectors. This means that the only solution to the equation c1v1 + c2v2 + ... + cnvn = 0 is c1 = c2 = ... = cn = 0.
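That defining equation can be checked directly. A small SymPy sketch using the vectors u = (1,0) and v = (1,2) from post #2:

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')
v1 = sp.Matrix([1, 0])
v2 = sp.Matrix([1, 2])

# Solve c1*v1 + c2*v2 = 0 componentwise.
sol = sp.solve(list(c1*v1 + c2*v2), [c1, c2])
print(sol)  # only the trivial solution => linearly independent
```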

Can a vector space have more than one basis?

Yes, a vector space can have multiple bases. As long as the set of vectors is linearly independent and spans the entire space, it can be considered a basis.
