Proof related to the expected value of cardinality

In summary: it is not always true that the expected value for set $A$ will be greater than or equal to the expected value for set $B$. The answer to the question is not a definite "YES" or "NO"; it depends on the specific values of the probabilities $r_i$ and on the sets $A$ and $B$, and either set can have the higher expected value.
  • #1
baiyang11
Consider [tex]N[/tex] random variables [tex]X_{n}[/tex], each following a Bernoulli distribution [tex]B(r_{n})[/tex] with [tex]1 \geq r_{1} \geq r_{2} \geq ... \geq r_{N} \geq 0[/tex]. If we make the following assumptions about sets [tex]A[/tex] and [tex]B[/tex]:

(1) [tex]A \subset I [/tex] and [tex]B \subset I[/tex] with [tex]I=\{1,2,3,...,N\}[/tex]

(2) [tex]|A \cap I_{1}| \geq |B \cap I_{1}|[/tex] with [tex]I_{1}=\{1,2,3,...,n\}, n<N[/tex]

(3) [tex]|A|=|B|=n[/tex]

Do we have [tex]\mathbb{E}(\sum_{a\in A} X_{a}) \geq \mathbb{E}(\sum_{b\in B} X_{b})[/tex]?

To avoid confusion, [tex]\mathbb{E}[/tex] denotes the expected value. Thanks!
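To make the comparison concrete, here is a small sketch with made-up probabilities; by linearity of expectation, each side of the inequality is just the sum of the corresponding $r_i$:

```python
# N = 5 Bernoulli variables with made-up nonincreasing probabilities.
# By linearity of expectation, E(sum_{a in A} X_a) = sum_{a in A} r_a,
# so the question reduces to comparing sums of the r_i over A and B.
r = [0.9, 0.7, 0.5, 0.3, 0.1]  # hypothetical r_1, ..., r_5

def expected_sum(indices, r):
    """E(sum_{i in indices} X_i); indices are 1-based as in the post."""
    return sum(r[i - 1] for i in indices)

A = {1, 2, 4}
B = {2, 3, 5}
assert abs(expected_sum(A, r) - 1.9) < 1e-9  # 0.9 + 0.7 + 0.3
assert abs(expected_sum(B, r) - 1.3) < 1e-9  # 0.7 + 0.5 + 0.1
```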
 
  • #2
This question looks really similar to your previous one, so I hope I can be of more help this time. Here are my thoughts about it.

We have given $N$ random variables with a Bernoulli distribution. That means $\mathbb{P}[X_n = 1] = r_n$ and $\mathbb{P}[X_n = 0]=1-r_n$. Furthermore, $\mathbb{E}[X_i]=r_i$ for $i=1,\ldots,N$.

By condition $(2)$, $|A \cap I_1| \geq |B \cap I_1|$, we know that $A$ has at least as many elements in common with $I_1$ as $B$ does. Since the random variables $X_1,\ldots,X_n$ have expected values at least as large as those of the random variables $X_j$ with $j \in I \setminus I_1$, the statement
$$\mathbb{E}\left(\sum_{a \in A} X_a \right) \geq \mathbb{E}\left(\sum_{b \in B} X_b \right)$$
probably holds. Of course this is not a formal proof but it should give some intuition.

The only thing we know about $n$ and $N$ is $n<N$, so it could be the case that $N>2n$. In that case we can have $A \cap I_1 = \emptyset$ and $B \cap I_1 = \emptyset$, and it is then possible to choose $A$ and $B$ such that the statement no longer holds.

To make my point clearer, suppose $I = \{1,2,\ldots,7\}$ and $I_1 = \{1,2,3\}$, thus $n=3$ and $N = 7$. Set $A = \{5,6,7\}$ and $B=\{4,5,6\}$. Then we have
$$\mathbb{E}\left(\sum_{a \in A} X_a \right) = \mathbb{E}[X_5]+\mathbb{E}[X_6]+\mathbb{E}[X_7] = r_5+r_6+r_7$$
$$\mathbb{E}\left(\sum_{b \in B} X_b \right) = r_4+r_5+r_6.$$

If we choose the probabilities so that $r_4 > r_7$ (which the ordering allows), the statement does not hold in this case.
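A quick numerical check of this example (the probabilities are made up; any strictly decreasing choice works):

```python
# Siron's example: I = {1,...,7}, I_1 = {1,2,3}, n = 3, N = 7,
# A = {5,6,7}, B = {4,5,6}.  Any strictly decreasing probabilities
# give r_4 > r_7 and hence E over A strictly below E over B.
r = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]  # made-up r_1, ..., r_7

E_A = sum(r[i - 1] for i in {5, 6, 7})  # r_5 + r_6 + r_7 = 1.2
E_B = sum(r[i - 1] for i in {4, 5, 6})  # r_4 + r_5 + r_6 = 1.5
assert E_A < E_B  # the claimed inequality fails for these sets
```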

I hope I made my point clear even though I did not give a formal proof. I wanted to share my thoughts because, in my opinion, this is an interesting question.
 
  • #3
Thank you for the reply.
You give a clear example in which $A$ and $B$ have nothing in common with $I_{1}$ at all, but my original intention was to require $|A \cap I_{1}| \geq |B \cap I_{1}|>0$. I admit it was my mistake not to make this clear.
So if $|A \cap I_{1}| \geq |B \cap I_{1}|>0$, is the answer "YES" or "NO"?

It may still be "NO". Someone else gave me the following example.
Let $n \geq 2$, $N=n+1, A=I-\{1\}, B=I-\{2\}$ and $r_{1}>r_{2}$. Then
$$\mathbb{E}\left(\sum_{a \in A} X_a \right)=\sum_{i} r_{i}-r_{1} $$
$$\mathbb{E}\left(\sum_{b \in B} X_b \right)=\sum_{i} r_{i}-r_{2} $$
With $r_{1}>r_{2}$, we get $\mathbb{E}\left(\sum_{a \in A} X_a \right)<\mathbb{E}\left(\sum_{b \in B} X_b \right)$, so the answer is "NO".
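A concrete numerical instance of this counterexample, with $n=3$, $N=4$ and made-up probabilities:

```python
# The counterexample with n = 3, N = 4: I = {1,2,3,4}, I_1 = {1,2,3},
# A = I - {1} = {2,3,4}, B = I - {2} = {1,3,4}.  Here
# |A ∩ I_1| = |B ∩ I_1| = 2 > 0, yet B is favored whenever r_1 > r_2.
r = [0.9, 0.6, 0.4, 0.2]  # made-up, nonincreasing, r_1 > r_2

E_A = sum(r[i - 1] for i in {2, 3, 4})  # (sum of all r_i) - r_1 = 1.2
E_B = sum(r[i - 1] for i in {1, 3, 4})  # (sum of all r_i) - r_2 = 1.5
assert E_A < E_B  # so the answer is "NO"
```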

In fact, by asking my original question, I want to abstract the following statement, which seems intuitively true.

If I have better knowledge of which events have the top probabilities of happening (i.e. I know set $A$ rather than set $B$), can that knowledge be trusted ($A$ is still favored) when those events are realized (in terms of expected value)?

Now, inspired by the counterexample above, it seems that we can always manipulate the $r_{i}$ to make $B$ favored while keeping $|A|=|B|$.

Since under the original assumptions $\mathbb{E}\left(\sum_{a \in A} X_a \right) \geq \mathbb{E}\left(\sum_{b \in B} X_b \right)$ does not always hold, I am trying to see under what assumptions we can always favor $A$, i.e. have $\mathbb{E}\left(\sum_{a \in A} X_a \right) \geq \mathbb{E}\left(\sum_{b \in B} X_b \right)$. But I am a bit lost.
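One strengthened assumption I am thinking of testing (my own guess, not something established): require $|A \cap \{1,\ldots,k\}| \geq |B \cap \{1,\ldots,k\}|$ for every $k=1,\ldots,N$, not only for $k=n$. An Abel-summation argument suggests this should force the inequality when the $r_i$ are nonincreasing, and a random spot-check agrees:

```python
import random

# Conjecture to test: if |A ∩ {1,...,k}| >= |B ∩ {1,...,k}| for EVERY
# k = 1..N (not only k = n), and r_1 >= ... >= r_N, then
# E(sum over A) >= E(sum over B).  Spot-check with random instances.
def prefix_dominates(A, B, N):
    return all(len(A & set(range(1, k + 1))) >= len(B & set(range(1, k + 1)))
               for k in range(1, N + 1))

random.seed(0)
for _ in range(1000):
    N = random.randint(2, 8)
    n = random.randint(1, N - 1)
    r = sorted((random.random() for _ in range(N)), reverse=True)
    A = set(random.sample(range(1, N + 1), n))
    B = set(random.sample(range(1, N + 1), n))
    if prefix_dominates(A, B, N):
        E_A = sum(r[i - 1] for i in A)
        E_B = sum(r[i - 1] for i in B)
        assert E_A >= E_B - 1e-12  # no counterexample found
```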
 
  • #4

Nice counterexample! I did not see it that way. Your question is interesting, but not something I can answer right away. I'll think about it; in the meantime I hope some other members can help you out.

PS: where does this problem come from? I have taken some probability theory courses, but I have never discussed problems like this before.
 
  • #5

I appreciate your continued attention to this problem, and I also hope for attention from other members, perhaps from various areas, because the problem seems to be interdisciplinary between set theory and probability theory.

This abstract problem comes from a concrete problem in my major, which is electrical engineering (EE). We came up with a concrete method for an EE problem we are currently researching. I then realized that the essence of the method is an attempt to answer the italicized question in my previous post. Looking again at the EE problem and at what our method does, I arrived at this mathematical problem, which I found interesting as well.
 

FAQ: Proof related to the expected value of cardinality

What is the definition of expected value?

The expected value, also known as the mean or average, is a measure of the central tendency of a probability distribution. It represents the theoretical long-term average outcome of a random variable.

How is expected value related to cardinality?

The expected value of a random variable is calculated by multiplying each possible outcome by its probability and summing these products. In the case of cardinality, the random variable represents the size of a set, and the expected value represents the average size of the set.

What is the formula for calculating the expected value of cardinality?

The formula for calculating the expected value of cardinality is: E[X] = Σx*p(x), where X is the random variable representing the size of the set, x is each possible outcome (i.e. size of the set), and p(x) is the probability of that outcome occurring.
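For instance, if a random set $S$ contains element $i$ independently with probability $p_i$, then $|S|$ is a sum of Bernoulli indicators and $E[|S|] = \sum_i p_i$ by linearity. A minimal simulation sketch with hypothetical probabilities:

```python
import random

# Expected cardinality of a random set S that contains element i
# independently with probability p[i]:  E[|S|] = sum(p) by linearity,
# since |S| is a sum of Bernoulli indicator variables.
p = [0.9, 0.5, 0.1]  # hypothetical inclusion probabilities
exact = sum(p)       # E[|S|] = 1.5

# A Monte Carlo estimate should approach the exact value.
random.seed(42)
trials = 100_000
total = sum(sum(random.random() < pi for pi in p) for _ in range(trials))
estimate = total / trials
assert abs(estimate - exact) < 0.02
```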

How is proof related to the expected value of cardinality?

Proof is used to show that the formula for calculating the expected value of cardinality is mathematically valid. It involves using mathematical principles and logic to demonstrate that the expected value of cardinality is a well-defined and meaningful concept.

What is the significance of the expected value of cardinality in probability and statistics?

The expected value of cardinality is a fundamental concept in probability and statistics. It is used to make predictions about the average size of a set and to understand the behavior of random variables. It also serves as the basis for other important concepts, such as variance and standard deviation.
