Consider the subset U ⊂ R3[x] defined as U = {p(x) = a3x^3 + a2x^2 + a1x + a0 such that p(0) = 0 and p(−1) = 0}

  • Thread starter Karl Porter
In summary: the sum of two polynomials ##(a_3x^3 + a_2x^2 + a_1x + a_0)## and ##(b_3x^3 + b_2x^2 + b_1x + b_0)## is again a degree three (or lower) polynomial with roots at 0 and -1.
  • #36
Karl Porter said:
right so the basis is the 1,1,1?
A basis is a set of vectors. Is "1" a vector? That is, is it a member of your vector space?
 
  • #37
jbriggs444 said:
A basis is a set of vectors. Is "1" a vector?
no?
 
  • #38
Karl Porter said:
no?
Correct. Although technically, the constant function f(x) = 1 is a degree zero polynomial, it was pretty clear that you did not intend your "1" to denote that polynomial.

If you want to find a basis for the set of polynomials of degree 3 or less, there is an obvious one to choose:

{ x^3, x^2, x, 1 }

But that is not the only possible basis for the set of polynomials of degree 3 or less.
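One concrete way to check a proposed basis (a Python sketch, not from the thread; it assumes we encode each polynomial by its coefficient vector (a3, a2, a1, a0)):

```python
# Sketch: encode each degree-<=3 polynomial as its coefficient vector
# (a3, a2, a1, a0); a set of 4 polynomials is a basis iff the matrix of
# their coefficient vectors has rank 4.
import numpy as np

standard_basis = np.array([
    [1, 0, 0, 0],  # x^3
    [0, 1, 0, 0],  # x^2
    [0, 0, 1, 0],  # x
    [0, 0, 0, 1],  # 1
])

print(np.linalg.matrix_rank(standard_basis))  # 4: independent and spanning
```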
 
  • #39
jbriggs444 said:
Correct. If you want to find a basis for the set of polynomials of degree 3 or less, there is an obvious one to choose:

{ x^3, x^2, x, 1 }

But that is not the only possible basis for the set of polynomials of degree 3 or less.
what would other possible basis look like?
x^2,x,1?
 
  • #40
Karl Porter said:
what would other possible basis look like?
For a simple example, you could use:

{ x^3, x^2, x, x+1 }

The four members are still linearly independent. You cannot form any one of them as a linear combination of the others. But they still span the entire space of degree 3 or fewer polynomials.
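The same coefficient-vector check works here (a hedged Python sketch; the encoding is an illustration, not part of the thread):

```python
# Sketch: the alternate set { x^3, x^2, x, x+1 } as coefficient vectors
# (a3, a2, a1, a0); full rank means independent, hence a basis.
import numpy as np

alt_basis = np.array([
    [1, 0, 0, 0],  # x^3
    [0, 1, 0, 0],  # x^2
    [0, 0, 1, 0],  # x
    [0, 0, 1, 1],  # x + 1
])

print(np.linalg.matrix_rank(alt_basis))  # 4, so this set is also a basis
```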
 
  • #41
Karl Porter said:
what would other possible basis look like?
x^2,x,1?
The proposed basis { x^2, x, 1 } does not work. How can you express ##x^3## as a linear combination of ##x^2##, ##x## and ##1##?
 
  • #42
jbriggs444 said:
For a simple example, you could use:

{ x^3, x^2, x, x+1 }

The four members are still linearly independent. You cannot form any one of them as a linear combination of the others. But they still span the entire space of degree 3 or fewer polynomials.
right so if I wanted to calculate the basis of U, which is the vector set, and the basis of the subset is { x^3, x^2, x, 1 }, what does that show? The vector space U can have a higher degree than the subset, right?
 
  • #43
jbriggs444 said:
The proposed basis { x^2, x, 1 } does not work. How can you express ##x^3## as a linear combination of ##x^2##, ##x## and ##1##?
so you can't times x^2 by x, only by a scalar quantity? or add a certain amount?
 
  • #44
Karl Porter said:
by linear combination, are those under +, -, ×, ÷ and sqrt?
Karl Porter said:
so you can't times x^2 by x, only by a scalar quantity?
No. The only operations allowed in a linear combination are
  • multiplication of a vector (or function) by a scalar such as 2, or -3, or 1.5;
  • addition of vectors (or functions) of the basis.
Your textbook should have definitions of the terms you're asking about, together with examples.
 
  • #45
Karl Porter said:
right so if I wanted to calculate the basis of U, which is the vector set, and the basis of the subset is { x^3, x^2, x, 1 }, what does that show? The vector space U can have a higher degree than the subset, right?
You cannot calculate the basis of a vector space. No possible way. It cannot be done. There is no such thing.

You can find a basis of a vector space. It will not be unique.

The dimension of a vector space is the number of elements in a basis for that space.
 
  • #46
If we are clear on what a "basis" is and how it determines the "dimension" of a vector space, we can move on to try to find a basis for the sub-space of third degree polynomials with roots at 0 and -1.

Recall that I had suggested using interpolating polynomials.

Are you ready to proceed? Or would you like to firm up your understanding of the idea of a "basis" and of the "dimension" of a vector space?
 
  • #47
jbriggs444 said:
Yes. But you have to make sure that the zero vector is a member of the subspace...

... oh right. Since we've already proved closure under multiplication by a scalar, that one we get for free.
This isn't quite correct. Closure of scalar multiplication says if ##p(x) \in U##, then ##0\cdot p(x) \in U##. You still need to show that there is in fact a ##p(x)## in ##U##.
 
  • #48
jbriggs444 said:
If we are clear on what a "basis" is and how it determines the "dimension" of a vector space, we can move on to try to find a basis for the sub-space of third degree polynomials with roots at 0 and -1.

Recall that I had suggested using interpolating polynomials.

Are you ready to proceed? Or would you like to firm up your understanding of the idea of a "basis" and of the "dimension" of a vector space?
we can have different bases but the dimension will be the same?
but yes we can move on to interpolating.
 
  • #49
Karl Porter said:
we can have different bases but the dimension will be the same?
Yes. It will turn out that every basis will have the same number of members.

[This stuff is pretty much all reasoned out from my own intuition and stuff picked up over the years. I've never taken a formal course in linear algebra]

Karl Porter said:
but yes we can move on to interpolating.
So here is the idea.

If we have plotted four distinct points on a graph, we can always find a degree 3 (or fewer) polynomial that matches those points.

This is the Lagrange interpolating polynomial for those points. It will be unique.

[My one and only published paper deals with these polynomials. So I have some affinity]

Suppose that we select two x coordinates in addition to -1 and 0. If we assign function values at those points, we can find a polynomial that fits those values. Can you use this idea to come up with two distinct polynomials in our sub-space?

You do not have to write those polynomials down. All you need for now is a proof that they exist and that they are linearly independent.
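For a concrete feel of the construction (a Python sketch using NumPy's polyfit as a stand-in for the Lagrange formula; the values at x=1 and x=2 are made up, as the post suggests):

```python
# Sketch: fit the unique degree-<=3 polynomial through four points,
# two of which force roots at x = -1 and x = 0.
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 1.0, 2.0])  # values at x=1, x=2 are arbitrary choices

coeffs = np.polyfit(xs, ys, 3)  # highest-degree coefficient first

# The fitted polynomial really does vanish at -1 and 0:
print(np.allclose(np.polyval(coeffs, [-1.0, 0.0]), 0.0))  # True
```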
 
  • #50
jbriggs444 said:
Yes. It will turn out that every basis will have the same number of members.
[...]
Suppose that we select two x coordinates in addition to -1 and 0. If we assign function values at those points, we can find a polynomial that fits those values. Can you use this idea to come up with two distinct polynomials in our sub-space?
yeah, I lost you there. I was reading through the wiki page; do I need a range, or am I making the range?
let's say I pick the points x=1 and x=2. How would I know the function values from these? Am I substituting into p(x)?
 
  • #51
Karl Porter said:
yeah, I lost you there. I was reading through the wiki page; do I need a range, or am I making the range?
let's say I pick the points x=1 and x=2. How would I know the function values from these? Am I substituting into p(x)?
OK, Let's pick the points x=1 and x=2.

You do not need to know the function values at x=1 and x=2. Make some up.

Make up two function values for your first proposed basis member to use.
Make up two function values for your second proposed basis member to use.
 
  • #52
jbriggs444 said:
OK, Let's pick the points x=1 and x=2.

You do not need to know the function values at x=1 and x=2. Make some up.

Make up two function values for your first proposed basis member to use.
Make up two function values for your second proposed basis member to use.
x^3 x^2 x 1
2x^3 2x^2 2x 1
?
 
  • #53
Karl Porter said:
x^3
Yep. I did lose you there, it seems. Let me try to restore some context and make sure we are straight on the immediate goals.

We want to find a basis for our vector sub-space of degree 3 polynomials with roots at 0 and -1.

We know that the full space of degree 3 polynomials has ##\{x^3, x^2, x, 1\}## as a basis. Four vectors in the basis, so the dimension of the full space is 4.

There are many other possible sets of 4 polynomials that also qualify to be a basis for the full space. We do not particularly care. As long as we have one set of linearly independent vectors that spans the space, we know we have a basis and we know that the dimension of the space is the number of vectors in that basis.

But now we want to know the dimension of the sub-space. That basis that we had in hand (##\{x^3, x^2, x, 1\}##) will not help us out. That set of vectors is not a basis for the sub-space because none of the vectors in that supposed basis are elements of the sub-space.

We want to find some polynomials that are elements of the subspace.

The suggestion is to use Lagrange Interpolating Polynomials.

In order to come up with a Lagrange Interpolating Polynomial for degree three polynomials, we need four x coordinates and four corresponding values for p(x).

If we want the resulting polynomials to have roots at 0 and at -1, two of the four x coordinates are already chosen for us:

x = 0 p(x) = 0
x = -1 p(x) = 0 [whoopsie, had that value as 1 initially]

Now we have to find two more points and values. You've suggested x=1 and x=2. So we have two more table entries to fill.

x = 1 p(x) = ?
x = 2 p(x) = ?

We need two different polynomials. So you'll need to fill in those two table entries twice. We need four numbers in total.
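One possible way to fill the table twice (a Python sketch; the value choices 1, 0 and 0, 1 are just one convenient option):

```python
# Sketch: build two interpolating polynomials sharing roots at -1 and 0
# but with different made-up values at x = 1 and x = 2, then confirm
# they are linearly independent.
import numpy as np

xs = [-1.0, 0.0, 1.0, 2.0]
p1 = np.polyfit(xs, [0.0, 0.0, 1.0, 0.0], 3)  # first fill-in: p(1)=1, p(2)=0
p2 = np.polyfit(xs, [0.0, 0.0, 0.0, 1.0], 3)  # second fill-in: p(1)=0, p(2)=1

# Rank 2 means neither is a scalar multiple of the other.
print(np.linalg.matrix_rank(np.vstack([p1, p2])))  # 2
```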
 
  • #54
jbriggs444 said:
Yep. I did lose you there, it seems. Let me try to restore some context and make sure we are straight on the immediate goals.
[...]
We need two different polynomials. So you'll need to fill in those two table entries twice. We need four numbers in total.
and those numbers can be random? p(x)=3, p(x)=4?
 
  • #55
Karl Porter said:
and those numbers can be random? p(x)=3, p(x)=4?
They can be random. However, if it were me, I'd try to choose some simple values.

Remember, you need to pick four numbers.

One number for the value of the first polynomial evaluated at x=1.
One number for the value of the first polynomial evaluated at x=2.

One number for the value of the second polynomial evaluated at x=1.
One number for the value of the second polynomial evaluated at x=2.
 
  • #56
IMO, the thread has gone astray from the original intent in post #1, which I have copied below.
Karl Porter said:
Homework Statement:: Show that U is a subspace of R3[x]
Relevant Equations:: U = {p(x) = a3x^3 + a2x^2 + a1x + a0 such that p(0) = 0 and p(−1) = 0}

so to show it's a subspace (from the definition) I need to prove it's closed under addition and multiplication, contains 0, and for every w there is a -w? Has it already been proven to contain 0, as p(-1)=0?
also I did sub in -1 and ended up with the equation a1+a3=a2+a0, but I don't know if that is relevant to solving this question?
Actually, you get two equations from the facts that p(0) = 0 and p(-1) = 0, but more about that later.
jbriggs444 said:
Perhaps you want to work in the ring of formal polynomials or in the ring of polynomial functions over the reals.
I doubt very much that rings of polynomials or polynomial functions are germane to this question, something that you (@jbriggs444) suspected in a post that followed this quote.

Karl Porter said:
the next part asked for the dimension of U
the lack of constants is throwing me off but I put in the values of -1 and 0 and I am left with a matrix
-1 1 -1 1 │ 0
0 0 0 0 │0
so wouldn't the dimension just be 1x4?
Two points here:
1) the first row above is correct, but the second is not.
From the equation p(0) = 0, you get ##a_3 \cdot 0 + a_2 \cdot 0 + a_1 \cdot 0 + a_0 = 0##, or more simply, ##1a_0 = 0##. This means that the second row of your matrix needs to be 0 0 0 1 | 0.
2) The rank of the corrected matrix is 2. If you row-reduce that matrix, you get a basis for your function space.

jbriggs444 said:
Personally, I would be thinking in terms of interpolating polynomials as a way to come up with an alternate basis for the vector space and for the sub-space.
This might be an approach, but there is one that is much simpler, based on the matrix derived from the equations p(-1) = 0 and p(0) = 0.

The corrected matrix is
##\begin{bmatrix} -1 & 1 & -1 & 1 & | & 0 \\ 0 & 0 & 0 & 1 & | & 0\end{bmatrix}##
We can simplify this a bit by multiplying the top row by -1, to get:
##\begin{bmatrix} 1 & -1 & 1 & -1 & | & 0 \\ 0 & 0 & 0 & 1 & | & 0\end{bmatrix}##
By adding the 2nd row to the first, we get the matrix in reduced row-echelon form.
##\begin{bmatrix} 1 & -1 & 1 & 0 & | & 0 \\ 0 & 0 & 0 & 1 & | & 0\end{bmatrix}##

The first and second rows represent these equations:
##a_3 - a_2 + a_1 = 0##
##a_0 = 0##
So ##a_3## depends on ##a_2## and ##a_1##, which means that ##a_2## and ##a_1## can be chosen arbitrarily. ##a_0## depends on no other parameter, and is zero.

So
##a_3 = 1a_2 - 1a_1##
##a_2 = 1a_2 + 0a_1## (It's arbitrary.)
##a_1 = 0a_2 +1a_1## (Also arbitrary.)
##a_0 = 0a_2 + 0a_1##
If you squint your eyes a bit, you might be able to spot the coordinates of two functions that form a basis for U.
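The squinting can also be delegated to SymPy (a sketch; the constraint matrix rows follow the equations above, acting on the coefficient vector (a3, a2, a1, a0)):

```python
# Sketch: the coefficient vector (a3, a2, a1, a0) of a member of U must
# satisfy p(-1) = 0 and p(0) = 0; a basis of U is a basis of the nullspace.
from sympy import Matrix

constraints = Matrix([
    [-1, 1, -1, 1],  # p(-1) = -a3 + a2 - a1 + a0 = 0
    [ 0, 0,  0, 1],  # p(0)  =  a0 = 0
])

basis = constraints.nullspace()
print(len(basis))  # 2, so dim U = 2
# Every basis vector satisfies both constraints (e.g. (1, 1, 0, 0) ~ x^3 + x^2):
print(all(all(x == 0 for x in constraints * v) for v in basis))  # True
```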

Karl Porter said:
but yes we can move onto interpolating.
I don't think this will be helpful in what you're trying to do
 
  • #57
You can write a polynomial of degree at most 3 with roots at 0 and -1 as [itex]x(1+x)p(x)[/itex] where [itex]p[/itex] is a polynomial of degree at most 1. Closure under vector addition and scalar multiplication then follows straightforwardly: [tex]x(1+x)p(x) + \alpha x(1+x)q(x) = x(1+x)(p(x) + \alpha q(x)).[/tex]
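This identity is easy to confirm symbolically (a SymPy sketch with made-up degree-1 coefficients for p and q):

```python
# Sketch: verify x(1+x)p(x) + a*x(1+x)q(x) == x(1+x)(p(x) + a*q(x))
# for sample degree-<=1 polynomials p and q.
from sympy import symbols, expand

x, a = symbols('x a')
p = 2*x + 3   # made-up coefficients
q = -x + 5

lhs = x*(1 + x)*p + a*(x*(1 + x)*q)
rhs = x*(1 + x)*(p + a*q)
print(expand(lhs - rhs) == 0)  # True: the subspace is closed in factored form
```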
 
  • #58
ok so I went through a few more examples and ended up with dim 2.

at x=0 I ended up with the equation a0 = 0
at x=-1 I get a2 = a1 + a3
so all values of p(x) in U can be represented by two parameters:
(a3 - a1)x^3 + a2x^2 + a1x
 
  • #59
Karl Porter said:
ok so I went through a few more examples and ended up with dim 2.

at x=0 I ended up with the equation a0 = 0
at x=-1 I get a2 = a1 + a3
so all values of p(x) in U can be represented by two parameters:
(a3 - a1)x^3 + a2x^2 + a1x
and then for the basis i sub these a2 and a1 in forms of a3 into p(x)
 
  • #60
Karl Porter said:
ok so I went through a few more examples and ended up with dim 2.

at x=0 I ended up with the equation a0 = 0
at x=-1 I get a2 = a1 + a3
The part above is OK.
Karl Porter said:
so all values of p(x) in U can be represented by two parameters:
(a3 - a1)x^3 + a2x^2 + a1x
No. Take a closer look at post #56. I've laid it all out for you, including how to get a set of basis functions.
Karl Porter said:
and then for the basis i sub these a2 and a1 in forms of a3 into p(x)
I have no idea what you're trying to say here.
 
  • #61
Mark44 said:
The part above is OK.
No. Take a closer look at post #56. I've laid it all out for you, including how to get a set of basis functions.
I have no idea what you're trying to say here.
Basis would be {x^3+x^2,x^3-x}
 
  • #62
Karl Porter said:
Basis would be {x^3+x^2,x^3-x}
Yes.
Let's call these ##p_1(x)## and ##p_2(x)##, respectively.
As a check that these functions are in U, it would be good to verify that ##p_1(0) = 0, p_1(-1) = 0## and that ##p_2(0) = 0, p_2(-1) = 0##.
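That check takes only a couple of lines (a minimal Python sketch):

```python
# Sketch: confirm both proposed basis members vanish at x = 0 and x = -1.
def p1(x):
    return x**3 + x**2

def p2(x):
    return x**3 - x

print([p1(0), p1(-1), p2(0), p2(-1)])  # [0, 0, 0, 0]
```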
 
