mrxtothaz
I'm doing a bit of review and have a few brief questions.
1) Say you have 3 polynomials that generate a 3-dimensional space. Let this basis be {x^2 + 3x - 2, 2x^2 + 5x - 3, -x^2 - 4x + 4}. To prove that these vectors span the space, I want to show that any vector in this space can be expressed using these basis elements. I did this at the beginning of the year, before learning about matrices; I have since attempted to do it using matrices and for some reason I can't figure it out.
My approach has basically been to put the coefficients of each degree-2 term in a row and solve for a, and then to do the same thing for the terms of the other degrees (in terms of b and c). What am I doing wrong?
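To make concrete what I have been attempting, here is a small NumPy sketch (the matrix layout and the target polynomial are just my own illustrative choices): the columns hold the coefficients of the three polynomials, and solving for the weights a, b, c of an arbitrary target polynomial is exactly the spanning question.

```python
import numpy as np

# Columns hold the (x^2, x, constant) coefficients of the three polynomials:
# x^2 + 3x - 2,   2x^2 + 5x - 3,   -x^2 - 4x + 4
A = np.array([[ 1,  2, -1],
              [ 3,  5, -4],
              [-2, -3,  4]], dtype=float)

# An arbitrary target polynomial p x^2 + q x + r; here x^2 + 5 as an example
target = np.array([1.0, 0.0, 5.0])

# If A is invertible, every target has unique weights a, b, c,
# which is exactly the statement that the three polynomials span the space.
print("det(A) =", np.linalg.det(A))        # nonzero, so the columns span R^3
print("a, b, c =", np.linalg.solve(A, target))
```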
2) To find the equation of a line passing through two points, say (3, -2, 4) and (-5, 7, 1), why is it that this is done by subtracting them to get some parameter t? The equation ends up being v = (3, -2, 4) + t(-5, 7, 1), where I presume t = (-5, 7, 1) - (3, -2, 4) = (-8, 9, -3). But I don't understand the geometric reason behind this.
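Here is a quick numerical sketch of the construction as I currently understand it (writing the parameter as a scalar s and the subtracted vector as the direction, since that seems to be how the formula is meant to be read):

```python
import numpy as np

P0 = np.array([3.0, -2.0, 4.0])
P1 = np.array([-5.0, 7.0, 1.0])

direction = P1 - P0          # (-8, 9, -3): the vector pointing from P0 to P1

def line(s):
    """Point on the line at scalar parameter s."""
    return P0 + s * direction

print(line(0.0))   # gives back P0
print(line(1.0))   # gives back P1
print(line(0.5))   # the midpoint between the two points
```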
3) Is there such a thing as an invertible m×n transformation? Or does this concept apply only to square matrices? In my reading, I have come across the definition that an isomorphism between vector spaces exists when you have invertible maps between them (say S and T) such that both ST = I and TS = I. Is this only the case for square matrices?
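For concreteness, here is the kind of non-square situation I am asking about (the matrices are just illustrative): a 2×3 matrix B and a 3×2 matrix A where BA is the 2×2 identity, even though AB cannot be the 3×3 identity.

```python
import numpy as np

# A maps R^2 into R^3, B maps R^3 onto R^2 (illustrative choices)
A = np.array([[1, 0],
              [0, 1],
              [0, 0]])
B = np.array([[1, 0, 0],
              [0, 1, 0]])

print(B @ A)   # the 2x2 identity: B is a left inverse of A
print(A @ B)   # a 3x3 matrix with a zero row, so not the identity
```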
4) Before being introduced to linear transformations being represented by matrices, I used to think of vectors as being row vectors. Then I learned it was the other way around. Something I have recently read has brought a new question to mind. Basically, what I read was that the correspondence of a vector x with [x]_B (the coordinate vector of x relative to a basis B) provides a linear transformation from V to F^n. So am I correct to interpret this as saying that you can consider the elements of some vector space as row vectors, but with respect to some basis (putting them with respect to a basis is a transformation), they end up becoming column vectors?
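As a concrete version of the map x ↦ [x]_B that I am asking about, here is a small sketch (the basis of P_2 and the polynomial are just illustrative choices): the coordinate vector is whatever column c satisfies Bc = p, where the columns of B hold the coefficients of the basis polynomials.

```python
import numpy as np

# Illustrative basis B = {1, 1 + x, 1 + x + x^2} of P_2; each column below
# lists a basis polynomial's (constant, x, x^2) coefficients.
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# p(x) = 2 + 3x + x^2, written as the coefficient vector (constant, x, x^2)
p = np.array([2.0, 3.0, 1.0])

# The coordinate vector [p]_B is the column c with B @ c = p
coords = np.linalg.solve(B, p)
print(coords)   # [-1.  2.  1.], the column vector of p relative to B
```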
5) By using row reduction, you can find out the rank and nullity of a given transformation. I have read that you can tell this by looking either at the columns or at the rows (the maximum number of linearly independent rows or columns). Do both of these techniques work with row reduction?
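Here is the sort of computation I have in mind (SymPy, with a purely illustrative matrix): row reduce, count the pivot columns for the rank, get the nullity from rank-nullity, and compare with the rank of the transpose.

```python
from sympy import Matrix

# An illustrative 3x4 matrix
A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [3, 6, 1, 10]])

rref_form, pivot_cols = A.rref()
rank = len(pivot_cols)       # number of pivot columns = rank
nullity = A.cols - rank      # rank-nullity: nullity = number of columns - rank

print(rref_form)
print("rank =", rank, " nullity =", nullity)
print("rank of transpose =", A.T.rank())   # row rank equals column rank
```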
I would appreciate any help. Thanks in advance.