Is this a typo or misunderstanding?

In summary, the problem is: if $P(A_i)=1$ for all $i \ge 1$, then $P(\bigcap_{i=1}^{\infty}A_i)=1$. My initial reading was that this is only possible if $A_1=A_2=\dots=A_n$, because the sum of their probabilities (keeping inclusion-exclusion in mind, of course) cannot be larger than 1. In that case the probability of their intersection is obviously 1, but that is not what troubles me. Thus, the problem may be a typo.
  • #1
Jameson
Problem: Show that if $P(A_i)=1$ for all $i \ge 1$ then $P(\bigcap_{i=1}^{\infty}A_i)=1$.

What is strange about this question is the first part, $P(A_i)=1$ for all $i \ge 1$. If I'm understanding this correctly, that's saying that $P(A_1)=1$, $P(A_2)=1$, ..., $P(A_n)=1$ for $n \ge i \ge 1$. This is only true if $A_1=A_2=\dots=A_n$ because the sum of their probabilities (keeping inclusion-exclusion in mind, of course) cannot be larger than 1. It's obvious that in that case the probability of their intersection is 1 as well, but that's not the part that troubles me.

So is this a typo or am I misunderstanding the problem do you think?
 
  • #2
I agree both with your interpretation and with the fact that the result is seemingly obvious. I think I would state:

$P(\bigcap_{i=1}^{n}A_i)=n-(n-1)=1$
 
  • #3
I'm not familiar with that identity. Does it have a name or can you briefly explain where it comes from?
 
  • #4
I don't know what it's called, and perhaps it is an over-simplification, I was basically using:

$\displaystyle \sum_{i=1}^{n}P(A_i)=n$

and:

$\displaystyle \sum_{i=1}^{n-1}P(A_i \cap A_{i+1})=n-1$

This only pairs consecutive events in the sequence, so it is probably invalid for that reason.

I think a better method would be to use the method for counting intersections outlined here:

Inclusion–exclusion principle
 
  • #5
Jameson said:
Problem: Show that if $P(A_i)=1$ for all $i \ge 1$ then $P(\bigcap_{i=1}^{\infty}A_i)=1$.

What is strange about this question is the first part, $P(A_i)=1$ for all $i \ge 1$. If I'm understanding this correctly, that's saying that $P(A_1)=1$, $P(A_2)=1$, ..., $P(A_n)=1$ for $n \ge i \ge 1$. This is only true if $A_1=A_2=\dots=A_n$
Let $\mathbb{R}\supset A_i=[0,1]\setminus\{1/i\}$ and let $P(A)$ be the measure (length) of $A$. Then $P(A_i)=1$ but $A_i\ne A_j$ for $i\ne j$.
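A quick Monte Carlo illustration of this counterexample (a sketch in Python, not from the thread; the sets $A_i=[0,1]\setminus\{1/i\}$ are the ones defined above): a uniform sample from $[0,1)$ avoids every excluded point almost surely, so it lands in all of the $A_i$ at once.

```python
import random

random.seed(0)

def in_A(i, x):
    """Membership in A_i = [0,1] \\ {1/i}: the unit interval minus a single point."""
    return 0.0 <= x <= 1.0 and x != 1.0 / i

# Estimate P(A_1 ∩ ... ∩ A_50) by sampling x uniformly from [0, 1).
trials = 100_000
hits = 0
for _ in range(trials):
    x = random.random()
    if all(in_A(i, x) for i in range(1, 51)):
        hits += 1

print(hits / trials)  # ≈ 1.0: removing countably many points leaves measure 1
```

Each $A_i$ differs from the others as a set, yet the sampler essentially never distinguishes them: excluding single points does not change the measure.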
 
  • #6
Jameson said:
Problem: Show that if $P(A_i)=1$ for all $i \ge 1$ then $P(\bigcap_{i=1}^{\infty}A_i)=1$.

What is strange about this question is the first part, $P(A_i)=1$ for all $i \ge 1$. If I'm understanding this correctly, that's saying that $P(A_1)=1$, $P(A_2)=1$, ..., $P(A_n)=1$ for $n \ge i \ge 1$. This is only true if $A_1=A_2=\dots=A_n$ because the sum of their probabilities (keeping inclusion-exclusion in mind, of course) cannot be larger than 1. It's obvious that in that case the probability of their intersection is 1 as well, but that's not the part that troubles me.

So is this a typo or am I misunderstanding the problem do you think?

Hi Jameson!

A proof should be based on the axioms and propositions of probability theory.
See wiki.

Let $B_n=\displaystyle\bigcap_{i=1}^{n}A_i$.

Then $P(\displaystyle \bigcap_{i=1}^{\infty}A_i)=\displaystyle \lim_{n \to \infty}P(B_n)$.

According to the sum rule, we have:
$P(B_n \cup A_{n+1})=P(B_n) + P(A_{n+1}) - P(B_n \cap A_{n+1})$​

According to the monotonicity rule and the numeric bound rule we also have:
$1 = P(A_{n+1}) \le P(B_n \cup A_{n+1}) \le 1$​

It follows that:
$1=P(B_n) + 1 - P(B_n \cap A_{n+1})$

$P(B_{n+1}) = P(B_n \cap A_{n+1}) = P(B_n)$​

With induction it follows that $P(B_n)=P(B_1)=P(A_1)=1$ for all $n$, and therefore $P(\displaystyle \bigcap_{i=1}^{\infty}A_i)=\lim_{n \to \infty}P(B_n)=1$. $\qquad \blacksquare$
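The key step $P(B_{n+1}) = P(B_n)$ can be sanity-checked on a small discrete space (a sketch, assuming Python; the weighted sample space and the particular events below are made up purely for illustration):

```python
from fractions import Fraction

# Toy discrete space: outcomes 1..6, where outcome 6 carries probability 0.
prob = {w: Fraction(1, 5) for w in range(1, 6)}
prob[6] = Fraction(0)

def P(event):
    """Probability of a set of outcomes."""
    return sum(prob[w] for w in event)

# Two distinct events, each with probability 1 (an illustrative choice).
A1 = frozenset({1, 2, 3, 4, 5})
A2 = frozenset({1, 2, 3, 4, 5, 6})
events = [A1, A2, A1]

B = events[0]                    # B_1 = A_1
for A in events[1:]:
    # Sum rule: P(B ∪ A) = P(B) + P(A) - P(B ∩ A)
    assert P(B | A) == P(B) + P(A) - P(B & A)
    # Since P(B ∪ A) = P(A) = 1, the sum rule forces P(B ∩ A) = P(B).
    assert P(B & A) == P(B)
    B = B & A                    # B_{n+1} = B_n ∩ A_{n+1}

assert P(B) == 1                 # P(∩ A_i) = P(A_1) = 1
```

Using `Fraction` keeps the arithmetic exact, so the equalities in the sum rule hold without floating-point slack.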
 
  • #7
Thank you for your comments Evgeny.Makarov and ILikeSerena! :)

I talked with my professor today about this problem and I realized a couple things:

1) I really need a class on measure theory and probably set theory as well to understand this problem and similar ones on a deeper level. I'm trying to build something with improper tools.

2) My conclusion that this problem implies that \(\displaystyle A_1=A_2=\dots=A_n\) is incorrect, although I am still processing the details of why.

@ILikeSerena - That is the idea that my professor was hinting towards, although this isn't a proof based class so he doesn't expect that kind of rigor. However, I am trying to attempt formal proofs where possible so I will review yours and post back if I don't follow something. From a short glance at it though, I think I follow each step.

Thanks again to all who have replied! :)
 
  • #8
Jameson said:
@ILikeSerena - That is the idea that my professor was hinting towards, although this isn't a proof based class so he doesn't expect that kind of rigor. However, I am trying to attempt formal proofs where possible so I will review yours and post back if I don't follow something. From a short glance at it though, I think I follow each step.

I did compress the proof a little and skipped a couple of steps, since I mostly wanted to highlight what was probably intended.
Let me know if you need any explanation.
 
  • #9
Btw, your "mistake" was the assumption that P(A)=1 implies that A is the set of all possible outcomes.
As Evgeny.Makarov showed, this is not necessarily the case.
The converse is true: if A is the set of all possible outcomes, then P(A)=1.
 
  • #10
I just thunked up a more specific and perhaps intuitive example.

Suppose we roll a die with 6 sides, but take the sample space to be all positive integers, where outcomes above 6 have probability 0.
Let $A_i$ be the event that the result is less than $6+i$.
That is, $A_i$ is the set of outcomes $\{1,2,3,\dots,5+i\}$.
Then the events are not identical, but the probability of each of them is still 1.
Furthermore, the probability of an outcome in their intersection is also 1.
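A direct check of this die example (a sketch, assuming Python; the sample space is padded with probability-zero outcomes $7,\dots,10$ so that the events $A_i=\{1,\dots,5+i\}$ really differ as sets):

```python
from fractions import Fraction

# Fair six-sided die; pad the sample space with impossible outcomes 7..10
# (probability 0) so the events A_i = {1, ..., 5+i} are genuinely different sets.
prob = {w: (Fraction(1, 6) if w <= 6 else Fraction(0)) for w in range(1, 11)}

def P(event):
    """Probability of a set of outcomes."""
    return sum(prob[w] for w in event)

# A_i = "result less than 6+i" = {1, ..., 5+i}, for i = 1..4.
A = {i: frozenset(range(1, 6 + i)) for i in range(1, 5)}

assert all(P(A[i]) == 1 for i in A)      # each event has probability 1
assert len({A[i] for i in A}) == 4       # yet no two events are equal
intersection = frozenset.intersection(*A.values())
assert P(intersection) == 1              # and the intersection has probability 1 too
```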
 
  • #11
Does this look okay?

$P(\cap{A_{i}})<1\Rightarrow 1-P(\cap{A_{i}})=P((\cap{A_i})^{C})=P(\cup{A_{i}^{C}})>0.$

$P(\cup{A_{i}^{C}})>0\Rightarrow\exists j\in{\mathbb{N}}$ with $P(A_j^{C})>0$. (Consider that $P(\cup{A_{i}^{C}})\leq\sum{P(A_{i}^{C})}$)

Then $P(A_{j})=1-P(A_{j}^{C})<1$, contradicting our premise.

I'd like to find a way to show mutual independence without using induction.
 
  • #12
rashtastic said:
Does this look okay?

$P(\cap{A_{i}})<1\Rightarrow 1-P(\cap{A_{i}})=P((\cap{A_i})^{C})=P(\cup{A_{i}^{C}})>0.$

$P(\cup{A_{i}^{C}})>0\Rightarrow\exists j\in{\mathbb{N}}$ with $P(A_j^{C})>0$. (Consider that $P(\cup{A_{i}^{C}})\leq\sum{P(A_{i}^{C})}$)

Then $P(A_{j})=1-P(A_{j}^{C})<1$, contradicting our premise.

I'd like to find a way to show mutual independence without using induction.

Looks fine to me. (Smile)

One small addition: you should start your proof with:
Suppose $P(\cap{A_{i}}) \ne 1$, then:​

That way, you give a proper introduction to a proof by contradiction, since you'd be specifying the premise you're contradicting.
 
  • #13
ILikeSerena said:
Looks fine to me. (Smile)

One small addition: you should start your proof with:
Suppose $P(\cap{A_{i}}) \ne 1$, then:​

That way, you give a proper introduction to a proof by contradiction, since you'd be specifying the premise you're contradicting.

Thanks, ILikeSerena. Here's another attempt...

Take any subset $\Lambda$ from $\{A_{1}^{C}, A_{2}^{C}, A_{3}^{C}, ...\}$. We can label this subset $\Lambda=\{A_{j_{1}}^{C}, A_{j_{2}}^{C}, A_{j_{3}}^{C}...\}.$ Consider $P(\bigcap_{\lambda\in\Lambda}\lambda)$.

$P(\bigcap_{\lambda\in\Lambda}\lambda)=P(A_{j_{1}}^{C}\cap A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...)=P(A_{j_{1}}^{C})P(A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...|A_{j_{1}}^{C})$

$=(1-P(A_{j_{1}}))P(A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...|A_{j_{1}}^{C})=0=\Pi_{\lambda\in\Lambda}{P(\lambda)}.$

Because our choice of $\Lambda$ is arbitrary, the elements of $\{A_{1}^{C}, A_{2}^{C}, A_{3}^{C}, ...\}$ are mutually independent. This implies that (*needs citation!) $A_{1}, A_{2}, A_{3}, ...$ are also mutually independent, so that $P(\cap{A_{i}})=\prod{P(A_{i})}=1.$
 
  • #14
rashtastic said:
Thanks, ILikeSerena. Here's another attempt...

Take any subset $\Lambda$ from $\{A_{1}^{C}, A_{2}^{C}, A_{3}^{C}, ...\}$. We can label this subset $\Lambda=\{A_{j_{1}}^{C}, A_{j_{2}}^{C}, A_{j_{3}}^{C}...\}.$ Consider $P(\bigcap_{\lambda\in\Lambda}\lambda)$.

$P(\bigcap_{\lambda\in\Lambda}\lambda)=P(A_{j_{1}}^{C}\cap A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...)=P(A_{j_{1}}^{C})P(A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...|A_{j_{1}}^{C})$

$=(1-P(A_{j_{1}}))P(A_{j_{2}}^{C}\cap A_{j_{3}}^{C}\cap...|A_{j_{1}}^{C})=0=\Pi_{\lambda\in\Lambda}{P(\lambda)}.$

Because our choice of $\Lambda$ is arbitrary, the elements of $\{A_{1}^{C}, A_{2}^{C}, A_{3}^{C}, ...\}$ are mutually independent. This implies that (*needs citation!) $A_{1}, A_{2}, A_{3}, ...$ are also mutually independent, so that $P(\cap{A_{i}})=\prod{P(A_{i})}=1.$

Ah well, I've given up on trying to verify if it is correct.
Since we already have 2 proofs... any reason to introduce a new one that is more obscure and that makes leaps that really take too much time to verify properly?

I am more the type of guy that prefers proofs that leap to the mind instantly, being obvious in their simplicity. (Wasntme)
 
  • #15
ILikeSerena said:
Ah well, I've given up on trying to verify if it is correct.
Since we already have 2 proofs... any reason to introduce a new one that is more obscure and that makes leaps that really take too much time to verify properly?

I am more the type of guy that prefers proofs that leap to the mind instantly, being obvious in their simplicity. (Wasntme)

The goal for me is not to verify the problem as many times as possible but to learn some probability from it. Mutual independence is a completely different argument, a stronger result, and, I think, a more direct method, but I don't know if the argument works. Therefore I have something to learn, even if that thing is "This is a terrible proof."
 
  • #16
Here is a proof that is perhaps more directly grounded in the axioms of a probability space. (See Probability space - Wikipedia, the free encyclopedia)

$\Pr[\cap_{i=1}^{\infty} A_i]$

$= 1 - \Pr[(\cap_{i=1}^{\infty} A_i)^c]$

$= 1 - \Pr[\cup_{i=1}^{\infty} A_i^c]$

$\ge 1 - \sum_{i=1}^{\infty} \Pr[A_i^c]$

$= 1 - \sum_{i=1}^{\infty} 0$

$= 1-0$

$= 1$
 

