Is the set of all continuous functions on the interval [0,1] a vector space?

In summary: the conversation applies the vector space axioms to the set of continuous functions on the interval [0,1], then moves on to the set of non-negative functions, polynomials of degree exactly n, and symmetric n × n matrices, focusing on whether each set is closed under addition and scalar multiplication. The original poster, who dropped out of high school and missed much of the usual math background, is determined to teach himself linear algebra and works through the arguments with help from other members.
  • #1
Saladsamurai
I am trying to teach myself Linear Algebra and it is really slow going. As much as I hate to admit a weakness, I really suck at abstract thinking. So some really basic ideas are tripping me up. Here is a question from the first exercise in the book I am using:


Homework Statement

Which of the following sets (with natural addition and multiplication by a
scalar) are vector spaces? Justify your answer.
a) The set of all continuous functions on the interval [0, 1];
b) The set of all non-negative functions on the interval [0, 1];
c) The set of all polynomials of degree exactly n;
d) The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]

Homework Equations


Definition of a vector space

A vector space V is a collection of objects, called vectors (denoted in this
book by lowercase bold letters, like v), along with two operations, addition
of vectors and multiplication by a number (scalar), such that the following
8 properties (the so-called axioms of a vector space) hold:
The first 4 properties deal with the addition:
1. Commutativity: v + w = w + v for all v, w ∈ V;

2. Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V;

3. Zero vector: there exists a special vector, denoted by 0, such that
v + 0 = v for all v ∈ V;

4. Additive inverse: for every vector v ∈ V there exists a vector w ∈ V
such that v + w = 0. Such an additive inverse is usually denoted −v;

5. Multiplicative identity: 1v = v for all v ∈ V;

6. Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all
scalars α, β;

7. α(u + v) = αu + αv for all u, v ∈ V and all scalars α;

8. (α + β)v = αv + βv for all v ∈ V and all scalars α, β.
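
(A prototype to keep in mind, not from the book's excerpt: [itex]\mathbb{R}^n[/itex] with componentwise operations,

[tex](x_1,\dots,x_n)+(y_1,\dots,y_n)=(x_1+y_1,\dots,x_n+y_n), \qquad \alpha(x_1,\dots,x_n)=(\alpha x_1,\dots,\alpha x_n),[/tex]

satisfies all eight axioms.)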


The Attempt at a Solution

Let's just start with (a) the set of all continuous functions on the interval [0, 1]

This is probably really easy, but I am having trouble figuring out how to answer this one.

I guess I start by seeing if all continuous functions adhere to the eight criteria above, right?

Well, it appears that 1 and 2 hold, as continuous functions add commutatively and associatively, right?

3 (the existence of a zero vector such that v+0=v) seems true enough

4 and 5 should hold (out of curiosity, when does 1*v not equal v?)

6,7,8 also seem obvious enough, but I don't know how to prove any of this.

So I am concluding that the set of all continuous functions on the interval [0,1] IS a vector space.


What is the proper approach to these kinds of problems? And why did they choose the interval [0,1]? Why not all reals?


Sorry for so many questions! Any input towards ANY of them is greatly appreciated!
 
  • #2
1*v=v is basically just a definition of 1. The point of 5) is really just to say 1 exists. 6,7 and 8 hardly even need proving. When you multiply functions by constants and add them then you are really just multiplying and adding real numbers. So 6,7 and 8 aren't very mysterious. The thing to prove is that a constant times a continuous function is continuous and that the sum of two continuous functions is continuous. If you want to get down into the dirt and prove them, then use epsilons and deltas. But I think you've probably proved that before, right? Otherwise, they are just properties of real numbers.
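
For reference, here is a sketch of the epsilon-delta argument for the sum (the scalar-multiple case is similar). Given [itex]\varepsilon > 0[/itex], pick [itex]\delta > 0[/itex] small enough that [itex]|f(x)-f(x_0)| < \varepsilon/2[/itex] and [itex]|g(x)-g(x_0)| < \varepsilon/2[/itex] whenever [itex]|x-x_0| < \delta[/itex]; then by the triangle inequality

[tex]|(f+g)(x)-(f+g)(x_0)| \le |f(x)-f(x_0)| + |g(x)-g(x_0)| < \varepsilon.[/tex]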
 
  • #3
Dick said:
1*v=v is basically just a definition of 1. The point of 5) is really just to say 1 exists. 6,7 and 8 hardly even need proving. When you multiply functions by constants and add them then you are really just multiplying and adding real numbers. So 6,7 and 8 aren't very mysterious. The thing to prove is that a constant times a continuous function is continuous and that the sum of two continuous functions is continuous. If you want to get down into the dirt and prove them, then use epsilons and deltas. But I think you've probably proved that before, right? Otherwise, they are just properties of real numbers.

Actually, somehow I got the one calculus professor who did not find it necessary to do epsilons and deltas (with limits). I am afraid of them. :redface:
 
  • #4
I didn't say you HAD to prove them. If you know they are true, just cite the relevant theorem. The real content here is just that if you apply vector space type operations to continuous functions, they remain continuous.
 
  • #5
Before you ask, the fact that it's on [0,1] instead of (-infinity,infinity) doesn't matter at all: the operations are defined pointwise, so the particular domain never enters the argument.
 
  • #6
Okay then! Let's move on to part (b). :smile: Now it seems similar to part (a), except that now it includes only non-negative functions, and they are not necessarily continuous.

What is the difference here? I am sorry, I am losing focus here... which of these properties (1-8) would not hold for noncontinuous functions?
 
  • #7
That's easy. If V is the set of nonnegative functions and v is in V and I do a vector space type operation like (-1)*v, is the result necessarily a nonnegative function? These aren't so hard, are they?
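
To make that concrete with one example: [itex]v(x) = x[/itex] is nonnegative on [0,1], but

[tex](-1)\cdot v(x) = -x < 0 \quad \text{for every } x \in (0,1],[/tex]

so the result of the operation has left the set, and axiom 4 (additive inverses) has no chance of holding inside it.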
 
  • #8
Saladsamurai said:
Okay then! Let's move on to part (b). :smile: Now it seems similar to part (a), except that now it includes only non-negative functions, and they are not necessarily continuous.

What is the difference here? I am sorry, I am losing focus here... which of these properties (1-8) would not hold for noncontinuous functions?

They all hold for functions that aren't necessarily continuous as well.
 
  • #9
Am I stupid? Okay, don't answer that... I dropped out of high school, so I did not have the luxury of high-school math. So when I started out at community college, I began with "Intermediate Algebra," and now that I am transferring out of that school, I have completed many math courses. However, I feel like I missed MANY of the BASICS!

Crap like the definition of a set and stuff like that...

I know that the terms aren't difficult, but I have to think about them instead of them just being in there (my head!) and knowing them well...
 
  • #10
Saladsamurai said:
Am I stupid? Okay, don't answer that... I dropped out of high school, so I did not have the luxury of high-school math. So when I started out at community college, I began with "Intermediate Algebra," and now that I am transferring out of that school, I have completed many math courses. However, I feel like I missed MANY of the BASICS!

Crap like the definition of a set and stuff like that...

I know that the terms aren't difficult, but I have to think about them instead of them just being in there (my head!) and knowing them well...

I was NOT implying you were stupid. It takes practice to focus on what's important in a list of 8 axioms. I was trying to encourage you.
 
  • #11
Dick said:
That's easy. If V is the set of nonnegative functions and v is in V and I do a vector space type operation like (-1)*v, is the result necessarily a nonnegative function? These aren't so hard, are they?

Okay. I think I might be with you now. The approach to these is something like this:

I have some collection (set) of objects (elements); now I should ask myself: if I apply a vector space type operation (i.e. 1-8) to one of these elements, do I, as a result, get one of those elements?

If the answer is yes, it IS a vector space. If the answer is NO, it is not.

Is this right?
 
  • #12
Dick said:
I was NOT implying you were stupid. I was trying to encourage you.

I didn't mean you were! I was just sort of "talking out loud"... sorry, I do it a lot! Sometimes if I type out what I am thinking, it helps me to sort out the nonsense going on inside my head. :smile:

I had originally planned on asking what class most people started learning stuff like that in... but I got lost somewhere along the line!
 
  • #13
Saladsamurai said:
Okay. I think I might be with you now. The approach to these is something like this:

I have some collection (set) of objects (elements); now I should ask myself: if I apply a vector space type operation (i.e. 1-8) to one of these elements, do I, as a result, get one of those elements?

If the answer is yes, it IS a vector space. If the answer is NO, it is not.

Is this right?

Yes, that's REALLY right. What about c) and d)?
 
  • #14
Dick said:
Yes, that's REALLY right.

Sweet-Jesus!:smile:
 
  • #15
Saladsamurai said:
I had originally planned on asking what class most people started learning stuff like that in... but I got lost somewhere along the line!

Likely, they learned it in the same class you are taking. Sorry, NOT taking.
 
  • #16
So for part (c) I would say that the set of all polynomials of degree exactly n IS a vector space, since 1-8 hold and all said operations yield a polynomial of degree exactly n.
 
  • #17
Saladsamurai said:
So for part (c) I would say that the set of all polynomials of degree exactly n IS a vector space, since 1-8 hold and all said operations yield a polynomial of degree exactly n.

Agreed. Now finish it with d) and agree with me that it's not that hard and you aren't stupid.
 
  • #18
Umm. They are just symmetric matrices. I'm trying not to let a bit of index oddness in what you wrote throw me off. a_{ij}=a_{ji}, right?
 
  • #19
Dick said:
Agreed. Now finish it with d) and agree with me that it's not that hard and you aren't stupid.
:smile: Okee-dokee!

This will be the harder of the four since I have to really think about what it is saying.

I am a little confused by the definition of symmetric matrices:

The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]

How can a matrix be equal to its transpose unless all of the entries of that matrix are the SAME entry? Say you have some 2 x 2 matrix called A. You take the first row of A and make it the first column of some other matrix B, and then you take the 2nd row of A and make it the 2nd column of B. B is now the transpose of A. Isn't the only way that A = B if all the entries in A are the same entry?
 
  • #20
Dick said:
Umm. They are just symmetric matrices. I'm trying not to let a bit of index oddness in what you wrote throw me off. a_{ij}=a_{ji}, right?

I thought I copied their definition right... but yes, it said symmetric matrices. :smile:

EDIT: Here's a screenshot of the text:

[screenshot of the textbook's definition of symmetric matrices]
 
  • #21
Nooooo. A=[[1,2],[2,1]]. A^T=A. All of the entries in A aren't the same.
 
  • #22
Saladsamurai said:
I thought I copied their definition right... but yes, it said symmetric matrices. :smile:

EDIT: Here's a screenshot of the text:

[screenshot of the textbook's definition of symmetric matrices]

This may show you that there are people even stupider than you are. I have no idea what that means. What does the superscript 'n' mean? What does j,k=1 mean? I do know what a symmetric matrix is, and that notation does nothing to convey the meaning.
 
  • #23
Dick said:
Nooooo. A=[[1,2],[2,1]]. A^T=A. All of the entries in A aren't the same.

:rolleyes: Sorry, but I am confused by that matrix... wait... is that comma in between sets of brackets to denote a new row? ... Seems it must be.
 
  • #24
Oh, you know what a symmetric matrix is, right? A^T=A. Ignore that gibberish. I got to go. See ya.
 
  • #25
Saladsamurai said:
:rolleyes: Sorry, but I am confused by that matrix... wait... is that comma in between sets of brackets to denote a new row? ... Seems it must be.

Yes, new row. Ones along the diagonal, twos along the opposite diagonal. Symmetric, I'm pretty sure.
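
Written out in the usual matrix form, that's

[tex]A=\left[\begin{array}{cc}1&2 \\ 2&1\end{array}\right], \qquad A^T=\left[\begin{array}{cc}1&2 \\ 2&1\end{array}\right]=A.[/tex]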
 
  • #26
Okay. So by my logic from post #11 and applying axiom 1, not only does some pair of symmetric matrices need to add commutatively, but the sum needs to be a symmetric matrix too. I am trying to think of a way to prove that that is or is not true. I'll mull it over and post tomorrow, as it is 3 am my time!

Thanks, Dick! Your responses are always helpful to me and get me thinking. :smile:
 
  • #27
Saladsamurai said:
I thought I copied their definition right... but yes, it said symmetric matrices. :smile:

EDIT: Here's a screenshot of the text:

[screenshot of the textbook's definition of symmetric matrices]

That little bit of indexy stuff just means that A is a two-index object with indices running from 1 to n, i.e. that it's an n×n matrix. Must have been tired last night.
 
  • #28
Saladsamurai said:
Okay. So by my logic from post #11 and applying axiom 1, not only does some pair of symmetric matrices need to add commutatively, but the sum needs to be a symmetric matrix too. I am trying to think of a way to prove that that is or is not true.

I have had very limited exposure to proofs in general, let alone those involving matrices. I am having trouble figuring out how to go about showing that the sum of two symmetric matrices is or is not ALSO a symmetric matrix.

A hint to get me going here would be great:redface::smile:
 
  • #29
You just have to write this one out to see why it's the case. The sum of two matrices is just the sum of their corresponding entries.
 
  • #30
Defennder said:
You just have to write this one out to see why it's the case. The sum of two matrices is just the sum of their corresponding entries.

I know how to add two matrices if I have numbers, but what about the general case? How do you go about adding two general matrices? I tried this:

[tex]\left[\begin{array}{cc}a_{11}&a_{12} \\ a_{21}&a_{22}\end{array}\right]+\left[\begin{array}{cc}b_{11}&b_{12} \\ b_{21}&b_{22}\end{array}\right]=
\left[\begin{array}{cc}a_{11}+b_{11}&a_{12}+b_{12} \\ a_{21}+b_{21}&a_{22}+b_{22}\end{array}\right][/tex]

But that does not tell me much. Is there a better way to approach this? That is to say, I am not sure what to write out. :smile:
 
  • #31
Ok, I can't tell whether you can't see why the sum of two symmetric matrices is itself symmetric, or whether you can see that it is so but can't think of a formal or acceptable way to prove it. Consider this, then:

A matrix A is symmetric if for all its entries [itex]a_{ij}=a_{ji}[/itex]. Suppose there's another symmetric matrix B with the same property.

The sum of the two matrices is C, and a typical entry of C is [itex]c_{ij} = a_{ij} + b_{ij}[/itex]. Now can you show that [itex]c_{ij} = c_{ji}[/itex]?
 
  • #32
Defennder said:
Ok, I can't tell whether you can't see why the sum of two symmetric matrices is itself symmetric, or whether you can see that it is so but can't think of a formal or acceptable way to prove it. Consider this, then:

A matrix A is symmetric if for all its entries [itex]a_{ij}=a_{ji}[/itex]. Suppose there's another symmetric matrix B with the same property.

The sum of the two matrices is C, and a typical entry of C is [itex]c_{ij} = a_{ij} + b_{ij}[/itex]. Now can you show that [itex]c_{ij} = c_{ji}[/itex]?

How about this:

[itex]c_{ij} = a_{ij} + b_{ij}[/itex]

[itex]c_{ji} = a_{ji} + b_{ji}[/itex]

But since [itex]a_{ij}=a_{ji}[/itex] and [itex]b_{ij}=b_{ji}[/itex],

then [itex]c_{ij} = a_{ij} + b_{ij} = a_{ji} + b_{ji} = c_{ji}[/itex].

Does that work? I think it does, if I got my indices right :redface:
 
  • #33
Yep, that should do it. I don't know how formal you need it to be, though. I've never been a fan of mathematical formalism.
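
For what it's worth, the same fact drops out in one line in transpose notation, since the transpose distributes over sums:

[tex](A+B)^T = A^T + B^T = A + B.[/tex]

And if you ever want a quick numerical sanity check (a sketch using numpy, assuming you have it around; no substitute for the index proof, of course):

[code]
import numpy as np

rng = np.random.default_rng(0)
# Build two random symmetric matrices: M + M^T is symmetric by construction.
A = rng.random((4, 4)); A = A + A.T
B = rng.random((4, 4)); B = B + B.T
C = A + B
print(np.allclose(C, C.T))  # prints True: the sum is symmetric
[/code]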
 
  • #34
Thanks!:smile:
 
  • #35
Hold on a second, guys. I don't like to hijack what looks completed, but I am a little befuddled by part (c). The polynomial 0 is not a polynomial of degree n, so how can we say a zero element exists? Furthermore, the set is not closed under addition: for example, with a = (x^2 + 1) and b = -x^2... a + b is not a polynomial of degree 2. According to Wikipedia, some sources choose to include the axioms of closure as additional vector space axioms.
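
Spelling the counterexample out:

[tex](x^2+1)+(-x^2)=1,[/tex]

a polynomial of degree 0, not 2, so the set is not closed under addition; and the zero polynomial required by axiom 3 does not have degree exactly n either. So it looks like (c) is not a vector space after all.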
 