Simple question about simple functions

  • #1
Fredrik
I feel like this should be really easy, but for some reason I don't see how to finish it. I'm probably missing something obvious.

The integral of an integrable simple function ##f=\sum_{k=1}^n a_k\chi_{E_k}## is
$$\int f\,\mathrm{d}\mu=\sum_{k=1}^n a_k\mu(E_k),$$
where the right-hand side is interpreted using the convention ##0\cdot\infty=0##. If we want to use something like this as the definition of the integral, we need to make sure that there's no ambiguity in it. (The problem is that there are many ways to write a given integrable simple function as a linear combination of characteristic functions of measurable sets.) One way to do that is to specify the ##a_k## and ##E_k##. We define the ##a_k## by writing the range of f as ##f(X)=\{a_1,\dots,a_n\}##, and the ##E_k## by ##E_k=f^{-1}(a_k)##. Now there's no ambiguity in the formula above, so it's safe to use it as a definition.
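
For readers who like to see this concretely, here's a small Python sketch (not part of the original post; the names X, weight, mu and integral_canonical are all invented for illustration) that computes the integral of a simple function on a toy finite measure space using exactly this canonical representation: group the points by their value and weight each level set by its measure.

[code]
from collections import defaultdict

# Toy finite measure space: X is a finite set, each point has a nonnegative weight,
# and the measure of a set is the sum of the weights of its points.
# All names here (X, weight, mu, integral_canonical) are invented for illustration.
X = {'p', 'q', 'r', 's'}
weight = {'p': 1.0, 'q': 2.0, 'r': 0.5, 's': 3.0}

def mu(E):
    """Measure of a subset E of X."""
    return sum(weight[x] for x in E)

def integral_canonical(f):
    """Integrate a simple function f: X -> R via the canonical representation:
    group the points of X by their value a_k, set E_k = f^{-1}(a_k),
    and return sum_k a_k * mu(E_k)."""
    level_sets = defaultdict(set)
    for x in X:
        level_sets[f(x)].add(x)                  # E_k = f^{-1}(a_k)
    return sum(a * mu(E) for a, E in level_sets.items())

# Example: f = 2*chi_{p,q} + 5*chi_{s}
f = lambda x: {'p': 2, 'q': 2, 'r': 0, 's': 5}[x]
print(integral_canonical(f))                     # 2*(1.0+2.0) + 0*0.5 + 5*3.0 = 21.0
[/code]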

This raises the question of whether ##f=\sum_{k=1}^n a_k\chi_{E_k}=\sum_{i=1}^m b_i\chi_{F_i}## implies that
$$
\sum_{k=1}^n a_k\mu(E_k)=\sum_{i=1}^m b_i\mu(F_i).
$$ If the ##E_k## are defined as above, it's easy to see that ##\{E_k\}_{k=1}^n## is a partition of X (the underlying set of the measure space). If ##\{F_i\}_{i=1}^m## is another partition of X, then I find it easy to prove the equality above. However, it holds even when ##\{F_i\}_{i=1}^m## is not a partition of X, right? I first thought that it doesn't, but I failed to find a counterexample, so now I think it does. My problem is that I don't see how to do the proof in the general case.
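
As an aside, the failed counterexample search can be automated on a toy finite measure space. The Python sketch below is illustrative only (every name in it is invented): it builds random families ##\{F_i\}## that may overlap and need not cover X, and compares ##\sum_i b_i\mu(F_i)## with the value obtained from the canonical representation of ##f=\sum_i b_i\chi_{F_i}##. It never finds a discrepancy, which of course is not a proof.

[code]
import random
from collections import defaultdict

# Brute-force counterexample search on a toy finite measure space (names invented):
# compare sum_i b_i*mu(F_i) for a random, possibly overlapping, non-covering
# family {F_i} with the canonical value sum_k a_k*mu(E_k) of f = sum_i b_i*chi_{F_i}.
X = list(range(6))
weight = {x: random.uniform(0.1, 2.0) for x in X}
mu = lambda A: sum(weight[x] for x in A)

for trial in range(1000):
    m = random.randint(1, 4)
    F = [set(random.sample(X, random.randint(0, len(X)))) for _ in range(m)]
    b = [random.choice([-2, -1, 1, 2, 3]) for _ in range(m)]
    f = lambda x: sum(bi for bi, Fi in zip(b, F) if x in Fi)

    lhs = sum(bi * mu(Fi) for bi, Fi in zip(b, F))       # representation sum
    level_sets = defaultdict(set)
    for x in X:
        level_sets[f(x)].add(x)                          # canonical E_k = f^{-1}(a_k)
    rhs = sum(a * mu(E) for a, E in level_sets.items())  # canonical sum
    assert abs(lhs - rhs) < 1e-9, (b, F)

print("no counterexample found in 1000 random trials")
[/code]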

If ##\{E_k\}_{k=1}^n## and ##\{F_i\}_{i=1}^m## are both partitions of X, then we just write
$$
\begin{align}
& \sum_{k=1}^n a_k\mu(E_k) =\sum_{k=1}^n a_k\mu\bigg(\bigcup_{i=1}^m E_k\cap F_i\bigg) =\sum_{k=1}^n \sum_{i=1}^m a_k \mu\big(E_k\cap F_i\big),\\
& \sum_{i=1}^m b_i\mu(F_i) =\sum_{i=1}^m b_i\mu\bigg(\bigcup_{k=1}^n E_k\cap F_i\bigg)
=\sum_{k=1}^n \sum_{i=1}^m b_i \mu\big(E_k\cap F_i\big),
\end{align}
$$ and then we can easily prove that the left-hand sides are equal by showing that the right-hand sides are equal term for term. Let k,i be arbitrary. If ##E_k\cap F_i=\emptyset##, then ##a_k\mu(E_k\cap F_i)=0=b_i\mu(E_k\cap F_i)##. If ##E_k\cap F_i\neq\emptyset##, then let ##x\in E_k\cap F_i## be arbitrary. We have ##a_k=f(x)=b_i##, and this obviously implies ##a_k\mu(E_k\cap F_i)=b_i\mu(E_k\cap F_i)##.

So...anyone see how to prove or disprove the general case?
 
  • #2
micromass
You can always force the [itex]F_i[/itex] to form a partition:

If [itex]F_i\cap F_j\neq \emptyset[/itex], then you can write

[tex]b_i\chi_{F_i}+b_j\chi_{F_j}=b_i\chi_{F_i\setminus F_j}+b_j\chi_{F_j\setminus F_i}+(b_i+b_j)\chi_{F_i\cap F_j}[/tex]

And if [itex]F=\bigcup F_j[/itex], then you just need to add a term [itex]0\chi_{X\setminus F}[/itex].
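
A quick sanity check of this rewrite on a toy finite measure space (illustrative Python only; all names are invented): the two sides agree pointwise, and since the three new sets are pairwise disjoint, the weighted measures on both sides agree as well.

[code]
# Pointwise check of the rewrite on a toy finite measure space (names invented):
# b_i*chi_{F_i} + b_j*chi_{F_j}
#   = b_i*chi_{F_i \ F_j} + b_j*chi_{F_j \ F_i} + (b_i + b_j)*chi_{F_i n F_j}
X = set(range(8))
weight = {x: 1.0 + 0.25 * x for x in X}
mu = lambda A: sum(weight[x] for x in A)
chi = lambda E: (lambda x: 1 if x in E else 0)

Fi, Fj = {0, 1, 2, 3}, {2, 3, 4, 5}          # overlapping sets
bi, bj = 2.0, -1.0

old = lambda x: bi * chi(Fi)(x) + bj * chi(Fj)(x)
new = lambda x: (bi * chi(Fi - Fj)(x) + bj * chi(Fj - Fi)(x)
                 + (bi + bj) * chi(Fi & Fj)(x))
assert all(old(x) == new(x) for x in X)      # same simple function

# The three new sets are pairwise disjoint, so the weighted sums also agree:
assert abs((bi * mu(Fi) + bj * mu(Fj))
           - (bi * mu(Fi - Fj) + bj * mu(Fj - Fi) + (bi + bj) * mu(Fi & Fj))) < 1e-9
[/code]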
 
  • #3
jgens
Fredrik said:
So...anyone see how to prove or disprove the general case?

One way of proving this is to split the [itex]\{F_i\}_{i = 1}^m[/itex] into disjoint pieces. This gives you some refinement [itex]\{F_i'\}_{i = 1}^{k}[/itex] of the [itex]\{F_i\}_{i=1}^m[/itex]. Now write out your simple function in terms of the original [itex]b_1,\dots,b_m[/itex] and the [itex]\{F_i'\}_{i=1}^{k}[/itex]. It is fairly simple to check that these have the same integral, and that completes the proof.
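
Here is one way this could look on a toy finite measure space (an illustrative Python sketch, not part of the original post; all names invented): take one disjoint piece for each nonempty index set S, namely ##\bigcap_{i\in S}F_i\setminus\bigcup_{i\notin S}F_i##, give it the coefficient ##\sum_{i\in S}b_i##, and check that both the function and the weighted sum are unchanged.

[code]
from itertools import combinations

# Sketch of this idea on a toy finite measure space (all names invented):
# one disjoint piece per nonempty index set S,
#   F'_S = (intersection of F_i for i in S) \ (union of F_i for i not in S),
# with coefficient sum_{i in S} b_i.
X = set(range(10))
weight = {x: 1.0 for x in X}
mu = lambda A: sum(weight[x] for x in A)

F = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]
b = [1.0, 2.0, -3.0]
m = len(F)

pieces = {}                                   # maps S to the disjoint piece F'_S
for r in range(1, m + 1):
    for S in combinations(range(m), r):
        inside = set.intersection(*(F[i] for i in S))
        outside = set().union(*(F[i] for i in range(m) if i not in S))
        pieces[S] = inside - outside

# The rewritten function and its weighted sum agree with the original ones:
f = lambda x: sum(bi for bi, Fi in zip(b, F) if x in Fi)
g = lambda x: sum(sum(b[i] for i in S) for S, P in pieces.items() if x in P)
assert all(f(x) == g(x) for x in X)
assert abs(sum(bi * mu(Fi) for bi, Fi in zip(b, F))
           - sum(sum(b[i] for i in S) * mu(P) for S, P in pieces.items())) < 1e-9
[/code]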
 
  • #4
Thanks guys. Forgive me for being perhaps slower than usual today, but I still don't see how to turn these ideas into a calculation that looks like this:
$$\sum_{i=1}^m b_i\mu(F_i)=\cdots=\sum_{k=1}^n a_k\mu(E_k).$$
I don't even see an easy way to define the refinement in jgens's post. I've been thinking things like this:

Define ##I=\{1,\dots,m\}##. For each x in F, define ##I_x=\{i\in I\mid x\in F_i\}## and ##G_x=\bigcap_{i\in I_x}F_i##. Now ##\{G_x\mid x\in F\}\cup \{F^c\}## (where F was defined in micromass' post) should be finite, and be a partition. But I still need to prove that (probably not too hard), and then find a way to use it in the calculation.
 
  • #5
I think I figured this out. It turned out to be really hard, actually, which makes me wonder if I'm making it more complicated than it needs to be. These are the essentials of my proof:

Define ##I=\{1,\dots,m\}## and ##F=\bigcup_{i\in I} F_i##. For each x in F, define ##I_x=\{i\in I\mid x\in F_i\}## and ##G_x=\bigcap_{i\in I_x}F_i\setminus\bigcup_{i\in I\setminus I_x}F_i##. For all x in F, ##x\in G_x##, so ##\{G_x\mid x\in F\}## covers F. If ##z\in G_x##, then z belongs to ##F_i## exactly when ##i\in I_x##, so ##I_z=I_x## and hence ##G_z=G_x##. A corollary of this is that for all x,y in F, either ##G_x\cap G_y=\emptyset## or ##G_x=G_y## (pick z in the intersection; then ##G_x=G_z=G_y##). So the collection ##\{G_x\mid x\in F\}## is mutually disjoint, and it's also finite, because
$$
\big|\{G_x\}\big|\leq\big|\{I_x\}\big| \leq\big|\mathcal P(I)\big|=2^m.
$$ Since it's finite, there's a bijection ##H:\{1,\dots,p\}\to\{G_x|x\in F\}##. I'll write ##H_j## instead of H(j). Define ##H_0=F^c##. Now ##\{H_j\}_{j=0}^p## is a partition of X, and we have
$$
\begin{align}
\sum_{k=1}^n a_k\mu(E_k) &=\sum_{j=0}^p\sum_{k=1}^n a_k \mu(E_k\cap H_j)\\
\sum_{i=1}^m b_i\mu(F_i) &=\sum_{j=0}^p\sum_{i=1}^m b_i \mu(F_i\cap H_j).
\end{align}
$$ To prove that the left-hand sides are equal, we prove that the sums over j on the right-hand sides are equal term for term. For j=0, both terms are zero: every ##F_i## is a subset of F, so ##F_i\cap H_0=\emptyset##, and if ##E_k\cap H_0\neq\emptyset##, then ##a_k=f(x)=0## for any x in that intersection, so ##a_k\mu(E_k\cap H_0)=0## by the convention. So let ##j\geq 1## be arbitrary, and let x be a member of F such that ##H_j=G_x##. Since f is constant on ##H_j## (equal to ##\sum_{i\in I_x}b_i##) and the ##E_k## are the level sets of f, there is a ##k_0## such that ##H_j\subset E_{k_0}##, and ##a_{k_0}=\sum_{i\in I_x}b_i##. Note that ##H_j\subset F_i## when ##i\in I_x## and ##F_i\cap H_j=\emptyset## when ##i\notin I_x##. These things imply that
$$
\sum_{k=1}^n a_k\mu(E_k\cap H_j) =a_{k_0}\mu(H_j)=\sum_{i\in I_x} b_i\mu(H_j)=
\sum_{i=1}^m b_i\mu(F_i\cap H_j).
$$
Hm, now that I'm looking at this, I'm thinking that there probably isn't an easier way.
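
For what it's worth, the construction is easy to check numerically on a toy finite measure space. The Python sketch below is illustrative only (all names invented): it builds the sets ##G_x##, assembles the partition ##\{H_j\}_{j=0}^p##, and verifies the term-by-term equality for every j.

[code]
from collections import defaultdict

# Toy check of this construction (all names invented): build the sets
#   G_x = (intersection of F_i for i in I_x) \ (union of F_i for i not in I_x),
# collect them (plus H_0 = X \ F) into the partition {H_j}, and verify
#   sum_k a_k*mu(E_k n H_j) = sum_i b_i*mu(F_i n H_j)   for every j.
X = set(range(10))
weight = {x: 0.5 + x for x in X}
mu = lambda A: sum(weight[x] for x in A)

F = [{0, 1, 2, 3}, {2, 3, 4}, {4, 5, 6}]
b = [1.0, 2.0, -1.0]
f = lambda x: sum(bi for bi, Fi in zip(b, F) if x in Fi)

E = defaultdict(set)                          # canonical representation: E_k = f^{-1}(a_k)
for x in X:
    E[f(x)].add(x)

bigF = set().union(*F)
Hs = {frozenset(X - bigF)}                    # H_0 = F^c
for x in bigF:
    Ix = {i for i, Fi in enumerate(F) if x in Fi}
    Gx = (set.intersection(*(F[i] for i in Ix))
          - set().union(*(F[i] for i in range(len(F)) if i not in Ix)))
    Hs.add(frozenset(Gx))

# The H_j cover X and their measures add up to mu(X), so with strictly
# positive point weights they really do partition X.
assert abs(sum(mu(H) for H in Hs) - mu(X)) < 1e-9
for H in Hs:
    lhs = sum(a * mu(Ek & H) for a, Ek in E.items())
    rhs = sum(bi * mu(Fi & H) for bi, Fi in zip(b, F))
    assert abs(lhs - rhs) < 1e-9
[/code]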
 

FAQ: Simple question about simple functions

What is a simple function?

In measure theory, a simple function is a measurable function that takes only finitely many values. Equivalently, it is a finite linear combination ##\sum_{k=1}^n a_k\chi_{E_k}## of characteristic functions of measurable sets ##E_k##, which is the kind of function discussed in this thread.

What are some examples of simple functions?

Characteristic (indicator) functions of measurable sets, step functions on the real line, and more generally any measurable function whose range is a finite set.

How are simple functions different from general measurable functions?

A general measurable function may take infinitely many values, while a simple function takes only finitely many. Every nonnegative measurable function is, however, the pointwise limit of an increasing sequence of simple functions, which is what makes simple functions so useful.

Why are simple functions important?

They are the building blocks of the Lebesgue integral: the integral is defined first for simple functions, as in this thread, and then extended to more general measurable functions by taking limits.

Is the integral of a simple function well defined?

Yes. As the thread shows, any two representations ##\sum_{k=1}^n a_k\chi_{E_k}=\sum_{i=1}^m b_i\chi_{F_i}## of the same simple function give the same value of the sum ##\sum_k a_k\mu(E_k)##, so the definition does not depend on the representation chosen.
