Subspaces of Functions - definition

  • #1
sonnichs
TL;DR Summary
What is an example where (f+g)(x) ≠ f(x) + g(x)?
Assume S is a set and F^S denotes the set of functions from S --> F, where F is a field such as R, C, etc.
One requirement for F^S to be a vector space of these functions is closure, i.e. that sums of these functions are in the space:
For f, g in F^S the sum f+g must be in F^S, hence: (f+g)(x) = f(x) + g(x).

I am trying to think of an example where this relation is not true. In fact, I thought this was the definition of the relation.
Am I missing something here?
 
  • #2
##(f+g)(x)=f(x)+g(x)## is indeed a definition for ##f+g.## It requires that we can add function values, i.e. that the codomain allows an addition. Fields, algebras, rings, vector spaces, and even additive groups all have this property. Without it, the right-hand side wouldn't make sense, and thus the left-hand side wouldn't either.

We do not need to require separately that functions can be added; the defining equation enforces it.
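If it helps, here is the defining equation as a small Python sketch (only an illustration, the names are arbitrary): functions are ordinary callables, and ##f+g## is built pointwise from their values.

[CODE]
# Toy sketch: pointwise addition of functions S -> R,
# defined exactly by (f + g)(x) = f(x) + g(x).

def add_functions(f, g):
    """Return the function f + g, defined pointwise."""
    return lambda x: f(x) + g(x)   # needs '+' on the codomain, not on functions

f = lambda x: x ** 2
g = lambda x: 3 * x + 1

h = add_functions(f, g)
print(h(2.0))            # 4.0 + 7.0 = 11.0
print(f(2.0) + g(2.0))   # same value, by definition
[/CODE]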
 
  • #3
Thank you for your reply. So I think we can safely state that for functions, the condition of additive closure is always met, by definition.
fritz
 
  • #4
sonnichs said:
Thank you for your reply. So I think we can safely state that for functions, the condition of additive closure is always met, by definition.
fritz
Yes. Subtraction and multiplication are analogous, and if the function values form a vector space, so is scalar multiplication: ##(\lambda \cdot f)(x)=\lambda\cdot f(x).## Division can only be defined that way if the denominator function has no zeros.
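Continuing the toy Python sketch from above (again only an illustration), scalar multiplication and pointwise division look like this; the quotient is only defined where the denominator is nonzero, which the sketch checks at the evaluated point.

[CODE]
# Toy sketch: scalar multiplication and pointwise division of real-valued functions.

def scale(lam, f):
    """Return lam*f, defined pointwise: (lam*f)(x) = lam * f(x)."""
    return lambda x: lam * f(x)

def divide(f, g):
    """Return f/g, defined pointwise; only valid where g has no zeros."""
    def quotient(x):
        gx = g(x)
        if gx == 0:
            raise ZeroDivisionError(f"g vanishes at x = {x}, so f/g is undefined there")
        return f(x) / gx
    return quotient

f = lambda x: x + 1.0
g = lambda x: x ** 2 + 1.0   # never zero on the reals

print(scale(2.5, f)(3.0))    # 2.5 * 4.0 = 10.0
print(divide(f, g)(3.0))     # 4.0 / 10.0 = 0.4
[/CODE]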
 
  • #5
Matters only become interesting if we impose additional constraints on our functions, such as that they take a particular value at a given point or are continuous with respect to some specified topology. Then we do need to check closure.

For example, if we require [itex]f(x_0) = 1[/itex], then for any two such functions [itex](f_1+ f_2)(x_0) = 1 + 1 = 2 \neq 1[/itex], and we do not have a subspace.
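A quick numerical restatement of that failure, just as an illustration (the point [itex]x_0 = 0[/itex] and the two functions are arbitrary choices):

[CODE]
# Toy sketch: the constraint f(x0) = 1 is not preserved by addition.

x0 = 0.0

f1 = lambda x: 1.0 + x          # f1(x0) = 1
f2 = lambda x: 1.0 + x ** 2     # f2(x0) = 1

s = lambda x: f1(x) + f2(x)     # the pointwise sum

print(f1(x0), f2(x0))  # 1.0 1.0  (both satisfy the constraint)
print(s(x0))           # 2.0      (the sum does not, so this set is no subspace)
[/CODE]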
 
  • #6
Thank you.
I probably have a half dozen books lying about titled "Linear Algebra ...". Sadly, none seem inclined to say much about function spaces, even though Hilbert spaces of functions are of great interest in quantum mechanics.
I found a good hint of what you mention in "Linear Algebra Done Right", 4th ed., Axler. It was buried in the (unanswered) problems. In the text the author states the criteria for a function space but gives few examples and no real discussion (that said, I think his coverage of intermediate theory is decent). So:
Problem 1C.9 is a curious problem which could have been very instructive:

A function f : R → R is called periodic if there exists a positive number p such that f(x) = f(x + p) for all x ∈ R.
Is the set of periodic functions from R to R a subspace of R^R?

Here I see an example of what you are referring to (the "additional constraint" being periodicity).
I think the key is that although every function in the set is periodic, the added constraint leaves "holes": sums of members can land outside the set, where the functions are not periodic. In fact, if we choose two periodic functions
f(x) = cos(2pi/sqrt(2) x) and g(x) = cos(2pi x),
then the requirement "f(x) + g(x) must be periodic to be a member of the space" fails. The periods sqrt(2) and 1 are incommensurable, so f + g cannot be periodic.
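Just to convince myself, here is a quick numerical check in Python (a finite check on a sample grid, not a proof): the candidate periods sqrt(2) and 1 of the two summands both fail for f + g.

[CODE]
import math

# f has period sqrt(2), g has period 1; their sum passes neither periodicity
# test, in line with the incommensurability argument above.

f = lambda x: math.cos(2 * math.pi / math.sqrt(2) * x)   # period sqrt(2)
g = lambda x: math.cos(2 * math.pi * x)                  # period 1
s = lambda x: f(x) + g(x)

def looks_periodic(h, p, samples=1000, tol=1e-9):
    """Check h(x + p) == h(x) on a grid of sample points (not a proof)."""
    return all(abs(h(x + p) - h(x)) < tol
               for x in (k * 0.01 for k in range(samples)))

print(looks_periodic(f, math.sqrt(2)))  # True
print(looks_periodic(g, 1.0))           # True
print(looks_periodic(s, 1.0))           # False
print(looks_periodic(s, math.sqrt(2)))  # False
[/CODE]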

fritz
 
  • #7
sonnichs said:
The periods sqrt(2) and 1 are incommensurable, so f + g cannot be periodic.
Yes. Wikipedia uses ##\sin(x)+\sin(\pi x)## as an example.

Another important property of function spaces is their dimension. They are usually infinite-dimensional, so many tools of linear algebra, like determinants or matrices, break down. Not every tool, but many. For instance, finding an orthonormal system of basis vectors by the Gram-Schmidt algorithm is still possible in Hilbert spaces because it only uses the inner product to define lengths and angles. The theory of Hilbert spaces is thus a part of functional analysis (the theory of linear operators) rather than a part of linear algebra; it forms its own branch of mathematics. If you (or other readers) are interested in function spaces, here are some articles about the subject:

https://www.physicsforums.com/insights/hilbert-spaces-relatives/
https://www.physicsforums.com/insights/hilbert-spaces-relatives-part-ii/
https://www.physicsforums.com/insights/tell-operations-operators-functionals-representations-apart/

They show that function spaces are closer to analysis than they are to linear algebra. Whatever that means.
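To make the Gram-Schmidt remark above concrete, here is a small Python sketch (only an illustration: functions are sampled on a grid and the ##L^2## inner product is approximated by a Riemann sum). Orthonormalizing ##1, x, x^2## on ##[-1,1]## reproduces, up to scaling, the first Legendre polynomials.

[CODE]
import numpy as np

# Toy sketch: Gram-Schmidt in a function space. Functions on [-1, 1] are
# represented by their values on a fine grid, and the L^2 inner product
# <f, g> = integral of f*g over [-1, 1] is approximated by a Riemann sum.

xs, dx = np.linspace(-1.0, 1.0, 20001, retstep=True)

def inner(f, g):
    """Approximate L^2 inner product of two sampled functions on [-1, 1]."""
    return float(np.sum(f * g) * dx)

def gram_schmidt(funcs):
    """Orthonormalize a list of sampled functions with respect to inner()."""
    basis = []
    for f in funcs:
        r = f.copy()
        for e in basis:                          # subtract projections onto earlier vectors
            r -= inner(f, e) * e
        basis.append(r / np.sqrt(inner(r, r)))   # normalize
    return basis

# Starting from 1, x, x^2; the results are proportional to the first Legendre polynomials.
basis = gram_schmidt([np.ones_like(xs), xs, xs ** 2])

print([[round(inner(e1, e2), 6) for e2 in basis] for e1 in basis])  # ~ identity matrix
[/CODE]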
 
  • #8
Thank you for the links. I finally read through them. I understand some but not all of what you indicated, as I am somewhat limited in the area of analysis.
As you said in your post, there is quite an intertwining between functional analysis, linear algebra, and analysis. Even the names don't seem to represent their purposes to me.
=================
There seems to be some inconsistency over the use of "Field" in describing spaces.
For a vector space, this is usually stated more precisely as the "Vector Space over a Field".

I think the favoured symbolism for a vector space (Hoffman and Kunze) is F^N. Here F designates the field and N the number of entries in each tuple. Values are selected from F and placed into tuples (x1, x2, ...) to yield vectors in the space.
Taking 3D Euclidean space as an example, N = 3. Possibly the most common fields are R and C. So for R^3 we generate the set of all triples whose three entries are selected from R; each triple is a vector.
Moving on to function spaces, here is my confusion (we will use single-valued functions).
We see this definition (Axler): If S is a set then F^S denotes the set of functions from S to F.
Apparently the F in this case does not mean an input; it means an output. The functions can have output values in R, C, etc. S could be anything, but is often R or C (or a subset such as [0,1]) as well. Thus for the input set [0,1] the symbolism would be C^[0,1]: inputs are selected from the reals between 0 and 1 and the output would be a complex number. This is quite different from the first definition above.
Fritz
 
  • #9
A relation ##\sim## between two sets ##S## and ##R## relates elements of ##S## to elements of ##R.## This means a relation can be seen as a set of pairs: ##\{(s,r)\,|\,s\sim r\}.## As such, a relation is a subset of the Cartesian product ##\sim \;\subseteq S\times R.##

A function is a relation: it pairs the elements of the input set ##S## with the elements of the output set ##R## by defining
$$ s\sim_f r \Longleftrightarrow f(s)=r \Longleftrightarrow f=\{(s,r)\,|\,s\sim_f r\}=\{(s,r)\,|\,f(s)=r\}\subseteq S\times R$$
with the additional requirement that one ##s## cannot be related to two different ##r.## This is allowed for general relations (Susan is a friend of Jane and is a friend of Doris, too), but not for functions. An input value can have only one output value.

Nevertheless, since functions can be seen as specific relations, and relations can be seen as subsets of the Cartesian product of ##S## and ##R##, it follows that a function can be viewed as a subset of the Cartesian product of the input set ##S## and the output set ##R##. One possible way to express this is to identify the function with the tuple of its values,
$$
f=\underbrace{(r_1,r_2,\ldots)}_{S \text{ many entries}}\quad\text{with } f(s_i)=r_i,
$$
which is an element of the set of all such tuples, written ##R^S.##
You see, this tuple picture has a flaw since we usually cannot number all input values and their outputs. That's why ##R^S,## the set of all functions from ##S## to ##R,## is the better notation. The notation ##F^S## is even sloppier since it uses the same letter for the functions and for the output set.

This little detour explains why you have found the notation ##F^S.##
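For a finite input set, all of this can be made completely explicit. A small Python sketch (the toy sets are arbitrary): one function ##S\to R## as a dict, the same function as a set of pairs inside ##S\times R,## and the count ##|R|^{|S|}## that motivates the exponent notation.

[CODE]
from itertools import product

# Toy sketch: for finite sets, a function S -> R is a set of pairs (a special
# relation), and the set of ALL such functions has |R|^|S| elements,
# which is where the notation R^S comes from.

S = ('a', 'b', 'c')        # input set
R = (0, 1)                 # output set

f = {'a': 0, 'b': 1, 'c': 1}                 # one function, written as a dict
f_as_relation = {(s, f[s]) for s in S}       # the same function, as pairs in S x R
print(f_as_relation)

# All functions S -> R: one output value chosen independently for each input.
all_functions = [dict(zip(S, values)) for values in product(R, repeat=len(S))]
print(len(all_functions), len(R) ** len(S))  # 8 8, i.e. |R|^|S|
[/CODE]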
 